Grzegorz Piechnik

Crucial update for k6 results analysis

k6 recently released an update that has a big impact on how we write performance test scenarios and analyze their results. A few weeks ago we wrote about analyzing aggregated results in .json format. That approach was heavy-handed: it required familiarity with third-party tools, and filtering the data was time-consuming and inefficient. The k6 developers have now introduced a change that gets around this.

InfluxDB & Grafana

k6 is developed by Grafana Labs. This has a big impact on the direction of the tool, as a lot of emphasis is placed on integrating it with other products. One example is the built-in ability to send data to an InfluxDB server, which is commonly used to collect data for observing application behavior.

stateDiagram-v2
    K6 --> InfluxDB: Sends data
    Grafana --> InfluxDB: Queries data and displays it to the user

Data from k6 is uploaded to the InfluxDB server while the tests are running, and Grafana then queries it at a configured interval. The processed data is displayed, for example, as a table or a graph. A sample view might look as follows.

[screenshot: sample Grafana view]
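
Sending results to InfluxDB does not require any changes to the script itself - it is enabled with an output flag when starting the test. Assuming the built-in InfluxDB (v1) output, a local instance and a database named k6 (example values only), an invocation might look like this:

k6 run --out influxdb=http://localhost:8086/k6 script.js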

Tags

Tags in k6 play a special role. They have many uses, and one of them is creating datasets that we can later filter by tag in third-party tools. This makes it easier to analyze requests and application errors.
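
As a minimal sketch of how request tags work (using the standard http module rather than Httpx, and a made-up custom tag), tagging a request might look like this:

import http from 'k6/http';
import { check } from 'k6';

export default function () {
  // "name" groups dynamic URLs under a single series; "endpoint_group" is a
  // hypothetical custom tag we could later filter on in Grafana.
  const res = http.get('http://httpbin.test.k6.io/status/200', {
    tags: { name: '/status/<status>', endpoint_group: 'status' },
  });
  check(res, { 'status is 200': (r) => r.status === 200 });
}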

One of the stated premises of k6 is integration with larger application observability ecosystems. This means that instead of building analysis features inside the performance tool itself (the cloud offering aside), it relies on solutions that already exist, such as Grafana + InfluxDB, Datadog or Prometheus. Tags are ultimately meant to expand what the transmitted data can tell us.

Interestingly, using external services to analyze the results solves many problems that were previously hard to work around, such as the clunkiness of filtering data in the console and the difficulty of analyzing it there.

Example Script

However, for tags to stay efficient and keep their overhead low, they should be used judiciously. This means that we should not aggregate all the data we get in tests; there is no need to, since some of it will never be used for analysis anyway.

Below is an example script that uses tags. Let's analyze step by step what happens in it.

import { check, sleep, group } from "k6";
import { Counter } from 'k6/metrics';
import { Httpx } from 'https://jslib.k6.io/httpx/0.0.3/index.js';


const errors = new Counter('errors');

const session = new Httpx({
  baseURL: 'http://httpbin.test.k6.io',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded'
  },
  timeout: 20000
});

// only aggregate request details when the check did not pass
function aggregate(response, checkResult, name) {
  if (!checkResult) {
    // couldn't make point from sample: max key length exceeded: 519029 > 65535 - InfluxDB validation
    const responseBody = JSON.stringify(response.body).slice(0, 5000)
    const requestBody = JSON.stringify(response.request.body).slice(0, 5000)
    errors.add(true, {
      name: name,
      error_code: response.error_code,
      request_headers: JSON.stringify(response.request.headers),
      request_cookies: JSON.stringify(response.request.cookies),
      request_method: response.request.method,
      request_body: requestBody,
      response_headers: JSON.stringify(response.headers),
      response_cookies: JSON.stringify(response.cookies),
      response_status: response.status,
      response_body: responseBody

    })
  }
}

export default function () {
  let name
  let response
  let status

  group('get 407 status', function () {
    name = '/status/<status>'
    response = session.get("/status/407", null, {
      tags: { name: name }
    });
    // capture the check result so that aggregate() knows whether it passed
    status = check(response, {
      'status is 407': (r) => r.status === 407
    })
    aggregate(response, status, name)
  })
};

In the initial part we define a counter, which is responsible for holding data about failed requests (via tags) and for showing the number of detected errors in the summary. A lot of data is sent because we care about accurate analysis - but more on that later.

Next, in the main function, we predefine three variables used for each request: the name (the equivalent of the Sampler Name in JMeter), the response object and the result of the check function. Based on the last two we can determine whether the request was successful. The key here is aggregating the results in the aggregate function - if check were a typical hard assertion that aborts the iteration, this would not be possible, because we would simply never reach the aggregate call.

Just running the above script won't do us much good, because we have neither properly configured dashboards nor InfluxDB and Grafana running locally.

Docker

Docker is a tool that allows you to create, run and manage containers. Containers are a type of lightweight virtualization that allows applications to be isolated from the rest of the operating system and easily moved between different systems.

Using Docker to install applications has several advantages:

  1. Isolation: Docker containers allow you to isolate your applications from the rest of the system, preventing conflicts with other applications and ensuring that they work as they should.
  2. Ease of installation: Docker allows you to create container images that contain everything you need to run an application. This makes application installation simple and fast, as all you need to do is run the container image.
  3. Share applications with their configuration: Docker container images can be shared with others, making it easy to distribute projects and install them on other systems.
  4. Compatibility with different systems: Docker containers run on a wide variety of operating systems, allowing you to run scripts on different platforms without having to customize them for each individual system.

Docker Compose, in turn, is a tool for managing applications that consist of multiple Docker containers. It lets you define all the containers needed to run an application in a single file, making it easy to launch and manage multiple containers at once.
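
As an illustration only (a minimal sketch, not the contents of the repository mentioned below), a docker-compose file for the two services could look roughly like this:

version: "3"
services:
  influxdb:
    image: influxdb:1.8
    ports:
      - "8086:8086"
    environment:
      - INFLUXDB_DB=k6                     # database that k6 will write to
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true     # skip the login screen for local use
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Admin
    depends_on:
      - influxdb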

To simplify installing and configuring InfluxDB and Grafana locally, we have created a repository with a docker-compose setup, so that the preconfigured applications can be run locally together with the k6 script shown above. Sample test results look like the following attachment.

[screenshot: sample test results in Grafana]

Thanks to tag configurations in the script, it is possible to display summaries of all requests and their errors (views familiar from JMeter's Dashboard).

In addition, using Grafana's functionality we created one more view that imitates JMeter's View Results Tree (which in JMeter tends to cause memory exhaustion problems). It makes it easier to analyze requests that failed for some reason (based on the result of the check function).

The above solution has one serious drawback - InfluxDB can "block" us when it receives a very large amount of data at once. On the other hand, for that to happen the number of errors would have to be drastically high, which in itself means the tests should be stopped, because either:

  1. they were written incorrectly, or
  2. the system could not cope with the load
