Daniel Fitzpatrick
Using EC2 to enhance k6 load tests

k6 is the best load-testing tool available, but k6 Cloud is pricey. I wanted to run an unlimited number of VUs (virtual users) without paying a lot of money, and I found a neat way to do it by spawning many short-lived EC2 instances!

Too many templates

I used a simple user-data script built from heredocs. Make sure you have an AMI with Docker and the AWS CLI installed. You will also need S3 access.

Start by setting up AWS credentials.



# load_test.user-data
echo "setting up aws-cli"
mkdir -p /home/ec2-user/.aws
cat <<EOF > /home/ec2-user/.aws/config
[default]
output=json
region=us-west-1
EOF

cat <<EOF > /home/ec2-user/.aws/credentials
[default]
aws_access_key_id={{ aws-creds.access-key-id }}
aws_secret_access_key={{ aws-creds.secret-access-key }}
EOF



Next, write the load test out to a directory that the ec2-user can read and write:



# load_test.user-data
echo "writing load test: {{load-test}}"
mkdir -p /home/ec2-user/load_test
chmod a+rw /home/ec2-user/load_test
# quote the heredoc delimiter so the shell does not expand anything inside the test script
cat <<'EOF' > /home/ec2-user/load_test/{{load-test}}
{{load-test-content|safe}}
EOF



We pull the k6 Docker image and run the test with a single command.



# load_test.user-data
echo "running load test"
sudo -u ec2-user docker run --rm \
  --ulimit nofile=65536:65536 \
  --network host \
  -v /home/ec2-user/load_test/:/mnt/load_test:rw \
  -i loadimpact/k6 \
  run --summary-export=/mnt/load_test/out.json \
  /mnt/load_test/{{load-test}} > /dev/null



Finally, upload the test results to S3.



# load_test.user-data
echo "uploading results"
sudo -u ec2-user aws s3api put-object \
  --bucket load-test-results \
  --key {{uuid}}/{{instance-name}} \
  --body /home/ec2-user/load_test/out.json > /dev/null



IMPORTANT: Set a trap in the user-data script that calls systemctl halt so each instance shuts itself down when the script finishes (or fails). Leaving EC2 instances running would be counter-productive to our goal of saving money.
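A minimal sketch of what that could look like near the top of load_test.user-data (the instances are launched below with InstanceInitiatedShutdownBehavior set to terminate, so halting also terminates them):

# load_test.user-data
# Halt the instance however the script exits, success or failure.
trap 'systemctl halt' EXIT
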

You may have noticed that I have used templates in the user-data. We can use Selmer to supply all of the important stuff.



(defn build-user-data
  [aws-creds uuid instance-name load-test load-test-template]
  (selmer/render-file
   "load_test.user-data"
   {:aws-creds aws-creds
    :uuid uuid
    :instance-name instance-name
    :load-test (.getName (io/file (io/resource load-test)))
    :load-test-content (selmer/render-file load-test load-test-template)}))



aws-creds should be a map with keys :access-key-id and :secret-access-key. These are your S3 credentials.
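
For example (the values here are obviously placeholders):

(def aws-creds
  {:access-key-id     "AKIA..."     ;; placeholder
   :secret-access-key "wJalr..."})  ;; placeholder
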

You may have noticed that load-test-content is itself a template. That is so we can point the load test at an arbitrary address, wherever the web app happens to be hosted. Here is an example load test.



import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
    stages: [
        { duration: '30s', target: 5000 }, // ramp up
        { duration: '30s', target: 5000 }, // sustained load
        { duration: '30s', target: 0 }, // cool down
    ],
};

export default function () {
    let res = http.get('{{url-prefix}}/ping')

    check(res, {'status is 200': (r) => r && r.status === 200});

    sleep(1);
}



load-test-template should be a map with a single key, :url-prefix. You may find more exciting uses for templates in your load tests. 😉
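
Rendering the user-data for a single instance then looks something like this; the resource path and URL are placeholder values of my own, not from the code above:

;; Sketch: render the full user-data script for one instance.
(build-user-data aws-creds
                 (str (java.util.UUID/randomUUID))  ;; shared load-test id
                 "load-test-0"                      ;; instance name
                 "load_tests/ping.js"               ;; hypothetical classpath resource
                 {:url-prefix "https://staging.example.com"})
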

Orchestration

We can use the excellent Cognitect AWS library to kick off our tests.



(defn launch-load-test-instance
  [ec2 aws-creds uuid instance-name load-test load-test-template]
  (aws/invoke ec2
              {:op :RunInstances
               :request {:ImageId "my-ami"
                         :InstanceType "my-size"
                         :MinCount 1 :MaxCount 1
                         :InstanceInitiatedShutdownBehavior "terminate"
                         :UserData (-> (build-user-data aws-creds uuid instance-name load-test load-test-template)
                                       byte-streams/to-byte-array
                                       base64/encode
                                       byte-streams/to-string)
                         :TagSpecifications [{:ResourceType "instance"
                                              :Tags [{:Key "Name" :Value instance-name}
                                                     {:Key "load-test-id" :Value uuid}]}]
                         :SecurityGroupIds [...]
                         :KeyName "my-company"}}))

(defn launch-load-test-instances
  [ec2 aws-creds uuid instance-prefix load-test load-test-template num-instances]
  (doall
   (map
    #(launch-load-test-instance ec2 aws-creds uuid
                      (str instance-prefix %)
                      load-test load-test-template)
    (range num-instances))))



There are some gaps in the above code that you will need to fill.
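
As a usage sketch, launching twenty instances might look like the following; the client construction, resource path, and URL are assumptions on my part:

(require '[cognitect.aws.client.api :as aws])

;; One EC2 client and one shared load-test id for the whole run.
(def ec2 (aws/client {:api :ec2}))
(def uuid (str (java.util.UUID/randomUUID)))

(launch-load-test-instances ec2 aws-creds uuid "load-test-"
                            "load_tests/ping.js"                        ;; hypothetical load test
                            {:url-prefix "https://staging.example.com"} ;; hypothetical target
                            20)
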

I feel like a mad scientist after writing that code. Wreak havoc, my minions!

(Image: Gene Wilder as Young Frankenstein, doing his best mad scientist impression.)

Gathering test results

In the previous code, you may have noticed that we supplied a uuid to each EC2 instance and used it in the user-data script as a shared S3 key prefix for all of our k6 results. I hope you kept track of it. 😈



(def s3-bucket "load-test-results")

(defn get-k6-summary [s3 key]
  (->> {:op :GetObject
        :request {:Bucket s3-bucket
                  :Key key}}
       (aws/invoke s3)
       :Body
       io/reader
       json/decode-stream))

(defn get-k6-summaries [s3 uuid]
  (map
   (comp (partial get-k6-summary s3) :Key)
   (->> {:op :ListObjects
         :request {:Bucket s3-bucket
                   :Prefix uuid}}
        (aws/invoke s3)
        :Contents)))



Calling get-k6-summaries will return a sequence of all the generated results. Those individual results may be helpful for some analysis, but you should aggregate as many metrics as can be combined.



(defn metric-type [m]
  (condp = (set (keys m))
    #{"count" "rate"} :counter
    #{"min" "max" "p(90)" "p(95)" "med" "avg"} :trend
    #{"fails" "value" "passes"} :rate
    #{"min" "max" "value"} :gauge
    :unknown))

(defn merge-summaries
  "summaries is a vector of k6 output summaries converted from JSON to EDN.
   get-k6-summaries returns this structure precisely. This function works with the various types
   of k6 metrics (https://k6.io/docs/using-k6/metrics) and does the following:
    - min values are the minimum in all the maps
    - max values are the maximum in all the maps
    - avg values are the average in all the maps
    - count is the sum in all the maps
    - rate is the average in all the maps
    - fails is the sum in all the maps
    - passes is the sum in all the maps
   p, med, & 'value' values are not included in the merged summary, but they can still be viewed
   for individual machines by inspecting the output from get-k6-summary"
  [summaries]
  (letfn [(merge-metrics [v1 v2]
            (condp = (metric-type v2)
              :counter {"count" (+ (get v1 "count") (get v2 "count"))
                        "rate" (double (/ (+ (get v1 "rate") (get v2 "rate")) 2))}
              :trend {"min" (min (get v1 "min") (get v2 "min"))
                      "max" (max (get v1 "max") (get v2 "max"))
                      "avg" (double (/ (+ (get v1 "avg") (get v2 "avg")) 2))}
              :rate {"fails" (+ (get v1 "fails") (get v2 "fails"))
                     "passes" (+ (get v1 "passes") (get v2 "passes"))}
              :gauge {"min" (min (get v1 "min") (get v2 "min"))
                      "max" (max (get v1 "max") (get v2 "max"))}
              {}))
          (merge-checks [v1 v2]
            {"passes" (+ (get v1 "passes") (get v2 "passes"))
             "fails" (+ (get v1 "fails") (get v2 "fails"))})
          (max-vus [m]
            (get-in m ["metrics" "vus_max" "max"]))]
    {:metrics (apply
               merge-with
               merge-metrics
               (map #(get % "metrics")
                    summaries))
     :checks (apply
              merge-with
              merge-checks
              (map
               #(get-in % ["root_group" "checks"])
               summaries))
     :max-vus (->> summaries
                   (map max-vus)
                   (apply +))}))



Call merge-summaries to get aggregate results.
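
For example, with the same uuid you used when launching the instances:

;; Sketch: pull every instance's out.json from S3 and aggregate it.
(def s3 (aws/client {:api :s3}))

(merge-summaries (get-k6-summaries s3 uuid))
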

Tradeoffs

A paid k6 plan will give you a friendlier interface and richer result sets. But is that worth $25k/year? I don't know. That's up to each user.

If you're willing to manage some EC2 instances yourself and work with less aggregate information, then perhaps this approach is better.

One problem I didn't go into in any detail is managing failed EC2 instances. Certain failure states require an external TerminateInstances call, so managing 'dangling' EC2 instances is now your problem.
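
Since every instance is tagged with the load-test-id, one mitigation is to sweep for stragglers after the run. A rough sketch (terminate-dangling-instances is my own name, not part of the code above):

(defn terminate-dangling-instances
  [ec2 uuid]
  (let [ids (->> (aws/invoke ec2 {:op :DescribeInstances
                                  :request {:Filters [{:Name "tag:load-test-id"
                                                       :Values [uuid]}]}})
                 :Reservations
                 (mapcat :Instances)
                 (map :InstanceId))]
    (when (seq ids)
      (aws/invoke ec2 {:op :TerminateInstances
                       :request {:InstanceIds (vec ids)}}))))
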

k6 Cloud will also distribute traffic across regions, and I think it can simulate clients with slower bandwidth. Could you build all of that yourself if you needed it?

I believe you could get close to k6's paid feature-set, but only with a lot of effort. Hopefully, I have given you a starting point.

Don't forget to have fun!
