Previously, I covered Load Testing SQL Databases with k6. For your information, from k6 version 0.29.0 onwards, you can write a k6 Go extension and build your own k6 binaries. This comes in handy because you can use a single framework for load testing different protocols, such as ZeroMQ, SQL, Avro, MLLP, etc.
In this series on k6 extensions, let's now benchmark Redis. According to redis.io, Redis is an in-memory data structure store that can be used as a database, cache, and message broker.
You might want to evaluate the performance or scalability of a Redis instance on given hardware, which gives you better insight into the throughput the Redis service can handle.
This tutorial covers Redis performance testing via two different approaches on a Linux machine:
- redis-benchmark
- xk6-redis
redis-benchmark
By default, Redis comes with its own benchmark utility called redis-benchmark. It is similar to Apache's ab utility and can simulate a number of clients sending a total number of queries simultaneously.
Options
Make sure that you have Redis installed on your system. If you have not done so, head over to the official Redis download page and install it according to the instructions given.
Once that is done, you should be able to run the following command:
redis-benchmark --help
You should see the following output:
Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]
-h <hostname> Server hostname (default 127.0.0.1)
-p <port> Server port (default 6379)
-s <socket> Server socket (overrides host and port)
-a <password> Password for Redis Auth
-c <clients> Number of parallel connections (default 50)
-n <requests> Total number of requests (default 100000)
-d <size> Data size of SET/GET value in bytes (default 3)
--dbnum <db> SELECT the specified db number (default 0)
-k <boolean> 1=keep alive 0=reconnect (default 1)
-r <keyspacelen> Use random keys for SET/GET/INCR, random values for SADD
Using this option the benchmark will expand the string __rand_int__
inside an argument with a 12 digits number in the specified range
from 0 to keyspacelen-1. The substitution changes every time a command
is executed. Default tests use this to hit random keys in the
specified range.
-P <numreq> Pipeline <numreq> requests. Default 1 (no pipeline).
-e If server replies with errors, show them on stdout.
(no more than 1 error per second is displayed)
-q Quiet. Just show query/sec values
--csv Output in CSV format
-l Loop. Run the tests forever
-t <tests> Only run the comma separated list of tests. The test
names are the same as the ones produced as output.
-I Idle mode. Just open N idle connections and wait.
Examples
Depending on your needs, a typical example is to just run the benchmark with the default configuration:
redis-benchmark
It is a good idea to use the -q option. Here is an example that runs 100k requests in quiet mode:
redis-benchmark -q -n 100000
In addition, you can run parallel clients via the -c option. The following example uses 20 parallel clients for a total of 100k requests:
redis-benchmark -q -n 100000 -c 20
You can restrict the test to run only a subset of the commands. For example, you can use the following command to test only set and get commands:
redis-benchmark -q -t set,get -n 100000
In fact, you can benchmark specific commands directly, as in the following example:
redis-benchmark -q -n 100000 script load "redis.call('set','key','value')"
If your Redis server is running on a different hostname and port, you can benchmark the server as follows:
redis-benchmark -h 192.168.1.1 -p 6379 -n 100000 -c 20
You should get the following output, indicating the requests per second for each of the tests conducted:
PING_INLINE: 43478.26 requests per second
PING_BULK: 41666.67 requests per second
SET: 43478.26 requests per second
GET: 43478.26 requests per second
INCR: 40000.00 requests per second
LPUSH: 43478.26 requests per second
RPUSH: 37037.04 requests per second
LPOP: 45454.55 requests per second
RPOP: 34482.76 requests per second
SADD: 43478.26 requests per second
HSET: 45454.55 requests per second
SPOP: 45454.55 requests per second
LPUSH (needed to benchmark LRANGE): 40000.00 requests per second
LRANGE_100 (first 100 elements): 45454.55 requests per second
LRANGE_300 (first 300 elements): 43478.26 requests per second
LRANGE_500 (first 450 elements): 47619.05 requests per second
LRANGE_600 (first 600 elements): 38461.54 requests per second
MSET (10 keys): 41666.67 requests per second
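As a side note on the --csv option listed earlier: redis-benchmark --csv prints each result as a quoted "name","rate" pair, which is easy to post-process. Here is a minimal sketch of parsing that output in JavaScript; the sample string below merely mirrors the shape of the CSV output, not a real run:

```javascript
// Hypothetical: parse redis-benchmark --csv output (captured as a string)
// into [testName, requestsPerSecond] pairs.
const csv = '"SET","43478.26"\n"GET","43478.26"';
const rows = csv
  .trim()
  .split('\n')
  .map((line) => line.replace(/"/g, '').split(','))
  .map(([name, rate]) => [name, parseFloat(rate)]);
console.log(rows); // [ [ 'SET', 43478.26 ], [ 'GET', 43478.26 ] ]
```

In a real pipeline you would capture the output with something like `redis-benchmark -q --csv -t set,get > results.csv` and read the file instead of an inline string.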
Latency
Sometimes, you might prefer to analyze the latency instead. There are two types of latency measurement provided by redis-cli:
- latency
- intrinsic latency
In this context, latency is measured as the time between sending a request to Redis and receiving a response. On the other hand, intrinsic latency refers to the system latency, which is highly dependent on external factors such as the operating system kernel or virtualization. Since Redis 2.8.7, you can measure the intrinsic latency independently.
Please note that intrinsic latency can only be measured on the machine that hosts the Redis server, unlike redis-benchmark, which can run on a client machine. Besides that, this mode does not connect to a Redis server at all: the measurement is based on the largest span of time in which the kernel does not provide CPU time to the redis-cli process itself. As a result, it is not an actual measurement of the latency between client and Redis server.
Having said that, it does provide a quick way to check whether something is wrong with the machine that hosts the Redis server.
Run the following command to get the overall latency of your Redis server:
redis-cli --latency
You should see the number of samples and the average latency update as time goes by:
min: 0, max: 5, avg: 0.22 (2406 samples)
Use Ctrl+C to stop it, as the process runs indefinitely otherwise.
For intrinsic latency, you should use the following command instead:
redis-cli --intrinsic-latency 10
You can pass an integer representing the duration of the test in seconds. In this case, the test will run for 10 seconds. The output is as follows:
Max latency so far: 1 microseconds.
Max latency so far: 15 microseconds.
Max latency so far: 16 microseconds.
Max latency so far: 17 microseconds.
Max latency so far: 18 microseconds.
Max latency so far: 20 microseconds.
Max latency so far: 21 microseconds.
Max latency so far: 24 microseconds.
Max latency so far: 25 microseconds.
Max latency so far: 50 microseconds.
Max latency so far: 74 microseconds.
Max latency so far: 87 microseconds.
Max latency so far: 150 microseconds.
Max latency so far: 1089 microseconds.
Max latency so far: 1715 microseconds.
Max latency so far: 2344 microseconds.
Max latency so far: 7438 microseconds.
Max latency so far: 8002 microseconds.
158645097 total runs (avg latency: 0.0630 microseconds / 63.03 nanoseconds per run).
Worst run took 126948x longer than the average latency.
The average latency is about 0.22 milliseconds while the intrinsic latency is 0.063 microseconds.
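To put those two numbers on the same scale, here is a rough comparison using the figures from the sample runs above:

```javascript
// Compare the measured client-to-server latency with the intrinsic latency,
// both expressed in microseconds (values taken from the sample runs above).
const roundTripUs = 0.22 * 1000; // avg latency: 0.22 ms -> ~220 µs
const intrinsicUs = 0.063;       // intrinsic latency: 0.063 µs
console.log(Math.round(roundTripUs / intrinsicUs)); // ~3492x difference
```

In other words, on this machine the intrinsic latency accounts for only a tiny fraction of the observed round-trip latency; most of the 0.22 ms is spent on networking and command processing.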
Let’s proceed to the next section and start exploring another testing approach using k6.
xk6-redis
k6 provides the capability to do performance testing with a scripting language. This is a big plus for developers and QA testers, as you get better control over the entire workflow of the test. For example, you can ramp the requests up or down at specific intervals of the test, which is not achievable with redis-benchmark.
Fortunately, k6 provides the xk6-redis extension as part of its ecosystem. You can use it to build your own custom k6 binary for testing a Redis server.
This extension comes with the following API:
API | Usage |
---|---|
Client(options) | The Client constructor. Returns a new Redis client object. |
client.set(key, value, expiration) | Sets the given key to the given value with the given expiration time. |
client.get(key) | Returns the value for the given key. |
Building k6 with the redis extension
Before that, make sure you have the following installed on your machine:
- Go
- Git
Once you have completed the installation, run the following command to install the xk6 module:
go install github.com/k6io/xk6/cmd/xk6@latest
With xk6 installed in your Go binary path, you can build your Redis-enabled k6 binary by running:
xk6 build --with github.com/k6io/xk6-redis
You should get a k6 executable in your current working directory.
Alternatively, you can download the pre-compiled binaries from the xk6 GitHub repository. The latest version at the time of this writing is v0.4.1. If you have trouble identifying the architecture of your Linux machine, simply run the following command:
dpkg --print-architecture
Let’s say that the command returns the following:
amd64
You should download the xk6_0.4.1_linux_amd64.tar.gz asset and extract it as follows:
tar -xvf xk6_0.4.1_linux_amd64.tar.gz
You should get the following files in your working directory:
- README.md
- LICENSE
- xk6
Then, run the following command to build k6 for Redis:
./xk6 build --with github.com/k6io/xk6-redis
You should now have a new k6 binary in your working directory.
k6 Script
Next, let’s create a new JavaScript file called test_script.js in the same directory as your k6 executable. Append the following import statement at the top of the file:
import redis from 'k6/x/redis';
Continue by adding the following code, which connects to your Redis server:
const client = new redis.Client({
addr: 'localhost:6379',
password: '',
db: 0,
});
It accepts an object with the following fields:
- addr: hostname and port of your Redis server denoted as hostname:port.
- password: password of your Redis server.
- db: the db number ranging from 0 to 15.
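For instance, a client pointed at a remote, password-protected server might look like the sketch below. All values here are illustrative, not taken from the tutorial setup:

```javascript
// Illustrative configuration only — replace the values with your own.
const client = new redis.Client({
  addr: '192.168.1.1:6379', // hostname:port of your Redis server
  password: 's3cret',       // the server's requirepass value, if any
  db: 1,                    // any db number from 0 to 15
});
```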
To keep it simple and short, the test case is going to be as follows:
- Set a new key:value pair at the start of the test.
- Run parallel VUs that repeatedly get the same key.
The k6 setup function runs only once at the start of the test, independently of the test load and duration. Let's set the key:value pair as follows:
export function setup() {
client.set('key', 'value', 0);
}
The set function accepts three input parameters:
- key
- value
- expiration time
Then, define the default function which will be called repeatedly by each VU during the entire test:
export default function () {
client.get('key');
}
The complete code is as follows:
import redis from 'k6/x/redis';
const client = new redis.Client({
addr: 'localhost:6379',
password: '',
db: 0,
});
export function setup() {
client.set('key', 'value', 0);
}
export default function () {
client.get('key');
}
Running the test
Save the test script and run the following command to test your Redis server for 5 seconds:
./k6 run test_script.js --duration 5s
By default, it uses one Virtual User (VU), but you can modify that with the --vus flag. You should see the following output:
/\ |‾‾| /‾‾/ /‾‾/
/\ / \ | |/ / / /
/ \/ \ | ( / ‾‾\
/ \ | |\ \ | (‾) |
/ __________ \ |__| \__\ \_____/ .io
execution: local
script: test_script.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 35s max duration (incl. graceful stop):
* default: 1 looping VUs for 5s (gracefulStop: 30s)
running (05.0s), 0/1 VUs, 42037 complete and 0 interrupted iterations
default ✓ [======================================] 1 VUs 5s
█ setup
data_received........: 0 B 0 B/s
data_sent............: 0 B 0 B/s
iteration_duration...: avg=104.45µs min=53.7µs med=88.6µs max=9.32ms p(90)=115.4µs p(95)=129.5µs
iterations...........: 42037 8401.691798/s
vus..................: 1 min=1 max=1
vus_max..............: 1 min=1 max=1
This test reports that the Redis server handled about 8,401 iterations per second. Because each iteration is one execution of the default function, and the default function makes a single GET call, the server was handling about 8,401 GET requests per second in this test.
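The iterations rate is simply completed iterations divided by elapsed time. A quick sanity check against the summary above (the reported 8401.69/s is slightly lower than this naive figure because the actual elapsed time was a bit over 5 seconds):

```javascript
// Sanity check: 42037 iterations over a nominal 5-second run.
const iterations = 42037;
const nominalSeconds = 5;
console.log(iterations / nominalSeconds); // 8407.4 iterations/s
```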
Scale the load
Let’s increase the load gradually until we encounter an error. To start, set the VUs to 100 as follows:
./k6 run test_script.js --duration 5s --vus 100
The output is as follows:
running (05.0s), 000/100 VUs, 111939 complete and 0 interrupted iterations
default ↓ [======================================] 100 VUs 5s
█ setup
data_received........: 0 B 0 B/s
data_sent............: 0 B 0 B/s
iteration_duration...: avg=4.39ms min=46.8µs med=3.32ms max=87.24ms p(90)=9.5ms p(95)=12.51ms
iterations...........: 111939 22304.954101/s
vus..................: 100 min=100 max=100
vus_max..............: 100 min=100 max=100
This indicates that your Redis server can sustain about 22,304 iterations per second with 100 concurrent users.
Continue the test and set the VUs to 1000 this time:
./k6 run test_script.js --duration 5s --vus 1000
Depending on the configuration of your Redis, you might encounter the following error:
ERRO[0003] ERR max number of clients reached
running at go.k6.io/k6/js/common.Bind.func1 (native)
default at file:///home/wfng/test_script.js:14:14(4) executor=constant-vus scenario=default source=stacktrace
It indicates that you have reached the maximum number of clients allowed. You can check the number of active connections by running the following command inside redis-cli:
info clients
It returns the following output:
# Clients
connected_clients:7
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
To get the max limit, use the following instead:
config get maxclients
The output is as follows:
1) "maxclients"
2) "500"
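If you hit this limit, you can raise it, either at runtime with redis-cli (config set maxclients 10000) or persistently in redis.conf as sketched below. The value here is illustrative; the effective maximum is also capped by the operating system's file-descriptor limit:

```
# redis.conf (illustrative value; restart Redis after editing)
maxclients 10000
```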
Latency
Now, let’s have a look at how to get the latency via k6. At the time of this writing, the xk6-redis extension does not report latency as part of its metrics. However, you can easily extend the code in your script and implement your own custom metrics.
Have a look at the following workaround to measure latency. First, let’s add the following import statement at the top of your k6 script:
import { Trend } from 'k6/metrics';
Then, initialize a Trend instance as follows:
let RedisLatencyMetric = new Trend('redis_latency', true);
It accepts two input arguments:
- name: the name of the custom metric.
- isTime: a boolean indicating whether the values added to the metric are time values or just untyped values.
Add the final touch by modifying the default function as follows:
export default function () {
const start = Date.now();
client.get('key');
const latency = Date.now() - start;
RedisLatencyMetric.add(latency);
}
Have a look at the following complete code, which initializes the options directly inside the script:
import { Trend } from 'k6/metrics';
import redis from 'k6/x/redis';
let RedisLatencyMetric = new Trend('redis_latency', true);
export let options = {
vus: 40,
duration: '10s',
}
const client = new redis.Client({
addr: 'localhost:6379',
password: '',
db: 0,
});
export function setup() {
client.set('key', 'value', 0);
}
export default function () {
const start = Date.now();
client.get('key');
const latency = Date.now() - start;
RedisLatencyMetric.add(latency);
}
You should be able to see the redis_latency metric once the test has completed.
iteration_duration...: avg=782.57µs min=67.35µs med=732.92µs max=15.86ms p(90)=1.1ms p(95)=1.3ms
iterations...........: 506755 50660.636169/s
redis_latency........: avg=764.8µs min=0s med=1ms max=16ms p(90)=1ms p(95)=1ms
⚠️ Please note that this workaround of measuring the latency is only indicative, as the JavaScript implementation adds an overhead that might skew the reported latency, especially when the latency is in the sub-microsecond range.
It would be great if the xk6-redis extension provided its own built-in Redis latency metrics similar to the HTTP request metrics. Measuring Redis latency in Go directly would be much more accurate and avoid the unnecessary RedisLatencyMetric script code.
Conclusion
All in all, redis-benchmark is a good tool that provides you with a quick glimpse of the performance of your Redis server. On the other hand, k6 is scriptable in JavaScript and can provide you with better control over the execution and workflow of your test. A scripting language is more flexible for testing various ways to connect and query your Redis server.
In fact, you can utilize both of the tools to get the best out of them. For example, you can run redis-benchmark when you install it on your machine for the first time, to get a rough idea of the performance. Subsequently, use k6 for more advanced cases like integrating your test with your existing toolbox or automating your testing.