Lab is fun
AeroLab simplifies the process of installing Aerospike clusters and clients for development, testing, and lab use on either Docker or in AWS. A rich feature set simulates network faults, quickly inserts and updates test data, partitions disks, and more.
We will set up the following stack:
- Cluster - six nodes across two racks in two availability zones
- Monitoring - Prometheus Exporter on the cluster nodes and an Aerospike Monitoring Stack instance
- Clients - Aerospike tools instances where I will run asbench, a benchmark tool
- Custom client - a Visual Studio Code instance where I can develop my own code to interact with the database
Install AeroLab
- Download AeroLab from here. Then we set the backend type to AWS and pick our region:
aerolab config backend -t aws -r us-west-1
Install the AWS CLI with the instructions in this manual. (Skip this step if you already have it installed.)
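AeroLab's AWS backend relies on the same credentials as the AWS CLI (typically ~/.aws/credentials or the usual environment variables). If you have not configured them yet, the standard aws configure flow sets them up, and a quick identity check confirms the CLI can reach your account:
aws configure
aws sts get-caller-identity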
Because we are using the Enterprise Edition of Aerospike, we configure AeroLab to always use the feature key file for all relevant commands in the future:
aerolab config defaults -k '*FeaturesFilePath' -v /path/to/features.conf
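If you are unsure which flags aerolab config defaults accepts, the help parameter described in the Summary below can be appended here as well:
aerolab config defaults help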
Create security groups
For safety, let's instruct AeroLab to create security group firewall rules and lock them down so that only our IP address can access the instances we create.
aerolab config aws create-security-groups
aerolab config aws lock-security-groups
Deploy the cluster
- We deploy a 3-node cluster, preconfigured so that the nodes form a cluster correctly, using r5ad.4xlarge instances with NVMe disks in us-west-1a. We do not want Aerospike to start on the nodes at this point (hence -s n).
aerolab cluster create -c 3 -n mycluster -I r5ad.4xlarge -U us-west-1a -s n
- Add three more nodes, this time in another availability zone.
aerolab cluster grow -c 3 -n mycluster -I r5ad.4xlarge -U us-west-1b -s n
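- Before changing any configuration, it is worth confirming that all six instances are up; the same listing command we use later for the seed IP shows them along with their IP addresses:
aerolab cluster list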
- Instruct AeroLab to set the rack configuration on the nodes: nodes 1-3 go in rack 1 and nodes 4-6 in rack 2.
aerolab conf rackid -n mycluster -l 1-3 --id=1 -e
aerolab conf rackid -n mycluster -l 4-6 --id=2 -e
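- As a sanity check that the rack IDs landed in the node configuration, we can grep the generated aerospike.conf on a node (this assumes the default config path and should attach to the first node by default):
aerolab attach shell -n mycluster -- grep rack-id /etc/aerospike/aerospike.conf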
- We want to use the test namespace, backed by all the NVMe disks, with four partitions created on each NVMe. Time to prepare the disks and configure Aerospike on all nodes:
aerolab cluster partition create -n mycluster --filter-type=nvme -p 25,25,25,25
aerolab cluster partition conf -n mycluster --namespace=test --filter-type=nvme --configure=device
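- Recent AeroLab versions can list how the disks were carved up; if the subcommand differs in your version, appending help (see the Summary) shows what is available:
aerolab cluster partition list -n mycluster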
- It is time to start Aerospike on all the nodes and check the logs on one of the nodes, specifically looking for a successful startup and the CLUSTER-SIZE correctly showing six nodes.
aerolab aerospike start -n mycluster
aerolab logs show -n mycluster --journal --follow --node=6
To verify, we query for basic cluster information using the asadm command on one of the nodes:
aerolab attach shell -n mycluster -- asadm -e info
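Since rack awareness is the point of this layout, we can also ask the cluster directly how the racks were registered. asinfo normally ships alongside asadm on the nodes, and its racks: info command prints the per-namespace rack assignment:
aerolab attach shell -n mycluster -- asinfo -v 'racks:'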
Install the monitoring stack
- Add the Prometheus Exporter to all the nodes to query Aerospike and gather the metrics.
aerolab cluster add exporter -n mycluster
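- To check that the exporter is actually serving metrics, curl its endpoint on a node; the Aerospike Prometheus Exporter listens on port 9145 by default (assuming curl is present on the node image):
aerolab attach shell -n mycluster -- curl -s http://localhost:9145/metrics | head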
- Deploy a special client machine with Prometheus database and Grafana preinstalled for Aerospike monitoring:
aerolab client create ams -n mymonitor -s mycluster -I r5a.xlarge -U us-west-1a
- To access the Grafana dashboards on the monitoring stack, list the client machines:
aerolab client list
- Navigate in your browser to the node's IP on port 3000. For example:
http://127.0.0.1:3000
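- If the dashboard does not load, it is often a network or security-group issue rather than Grafana itself; a quick reachability check from your machine against the AMS node's IP (the 127.0.0.1 above is only an example) narrows it down:
curl -sI http://127.0.0.1:3000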
Clients and client benchmark monitoring
- Start some client machines with Aerospike Tools packages preinstalled.
aerolab client create tools -n myclients -c 5 -I r5a.xlarge -U us-west-1a
- Add asbenchmark monitoring to the monitoring stack to populate the client graphs.
aerolab client configure tools -l all -n myclients --ams mymonitor
Generating the load
- Run the following asbench command to insert data.
asbench -h ${NODEIP}:3000 -U superman -Pkrypton -n test -s \$(hostname) --latency -b testbin -K 0 -k 1000000 -z 16 -t 0 -o I1 -w I --socket-timeout 200 --timeout 1000 -B allowReplica --max-retries 2
- Use the following command to run a read-update load.
asbench -h ${NODEIP}:3000 -U superman -Pkrypton -n test -s \$(hostname) --latency -b testbin -K 0 -k 1000000 -z 16 -t 86400 -g 1000 -o I1 -w RU,80 --socket-timeout 200 --timeout 1000 -B allowReplica --max-retries 2
- Instruct AeroLab to run all the commands on all the client tools instances. First, get a seed node IP address.
NODEIP=$(aerolab cluster list -j |grep -A7 mycluster |grep IpAddress |head -1 |egrep -o '([0-9]{1,3}\.){3}[0-9]{1,3}')
echo "Seed: ${NODEIP}"
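- Optionally, run a short smoke test from a single tools instance first to confirm connectivity and credentials. This reuses the same flags as the insert command above with a much smaller keyspace and thread count; the smoketest set name is just an example, and -l 1 assumes the flag accepts a single node number the way it does for other commands:
aerolab client attach -n myclients -l 1 -- /bin/bash -c "run_asbench -h ${NODEIP}:3000 -U superman -Pkrypton -n test -s smoketest --latency -b testbin -K 0 -k 1000 -z 4 -t 0 -o I1 -w I --socket-timeout 200 --timeout 1000"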
- Use the following commands to run asbench.
aerolab client attach -n myclients -l all --detach -- /bin/bash -c "run_asbench -h ${NODEIP}:3000 -U superman -Pkrypton -n test -s \$(hostname) --latency -b testbin -K 0 -k 1000000 -z 16 -t 0 -o I1 -w I --socket-timeout 200 --timeout 1000 -B allowReplica --max-retries 2"
aerolab client attach -n myclients -l all --detach -- /bin/bash -c "run_asbench -h ${NODEIP}:3000 -U superman -Pkrypton -n test -s \$(hostname) --latency -b testbin -K 0 -k 1000000 -z 16 -t 86400 -g 1000 -o I1 -w RU,80 --socket-timeout 200 --timeout 1000 -B allowReplica --max-retries 2"
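- With the load started in detached mode, a quick way to confirm the benchmark processes are alive on every client is pgrep (assuming pgrep is available on the client image; the pattern also matches the run_asbench wrapper):
aerolab client attach -n myclients -l all -- pgrep -a asbench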
Custom client
Use the following command to create an instance with VS Code preinstalled and C#, Java, Python, and Go libraries already configured for development.
aerolab client create vscode -n vscode -I r5a.medium
Use aerolab client list to get the client list and display the IP addresses. To connect to the full IDE, open the vscode client's IP on port 8080 in your browser. For example: http://1.2.3.4:8080.
Summary
AeroLab simplifies the deployment of a custom-configured Aerospike cluster with test clients, a test load, and a monitoring stack. Many more features exist to ease further configuration and installation. To view a list of commands, AeroLab has a useful help parameter that can be appended anywhere, for example:
aerolab help
aerolab cluster help
aerolab cluster create help
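When the lab is no longer needed, remember to destroy the AWS resources so they stop accruing cost. The cluster and each client group have their own destroy commands; depending on your AeroLab version you may need to stop the nodes first or add a force flag, which the help output will confirm:
aerolab cluster destroy -n mycluster
aerolab client destroy -n myclients
aerolab client destroy -n mymonitor
aerolab client destroy -n vscode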