Part 2 - Adding an API to control Gatling
I am going to preface this by saying that I know rather little about Gatling overall, so this was a best effort just to get it running. I could not find any other tutorials on running Gatling in Azure at all.
I will also assume if you are reading this you have at least some knowledge of Azure and the Azure CLI.
Setting Up
If you are unaware of what Azure Container Instances are, I would suggest having a quick read through this page. Essentially they provide short-lived containers at a very low price.
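The commands below assume a resource group called gatling in uksouth already exists. If yours doesn't, creating one first is a single command:

az group create --name gatling --location uksouth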
Creating the Azure File Shares
The first thing you're going to need is an Azure Storage Account, which can be created either through the portal or with a small script.
az storage account create \
--resource-group gatling \
--name stgatling \
--location uksouth \
--sku Standard_LRS
You will then need three file shares: conf, user-files and results.
az storage share create --name conf --account-name stgatling
az storage share create --name user-files --account-name stgatling
az storage share create --name results --account-name stgatling
And finally you're going to want to grab the account key.
az storage account keys list --resource-group gatling --account-name stgatling --query "[0].value" --output tsv
Of course you can also grab this very simply through the portal.
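If you're scripting this, you could also stash the key in a shell variable so the later commands can reuse it (purely a convenience, copy-pasting works just as well):

STORAGE_KEY=$(az storage account keys list \
  --resource-group gatling \
  --account-name stgatling \
  --query "[0].value" \
  --output tsv)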
Creating the Container Instance
Now you're going to want to create a YAML file to deploy the Gatling container. To do this we'll be using the container groups template but just deploying the one container. Below is an example that should work, you will just need to insert your storage key into the volumes section.
apiVersion: '2018-10-01'
location: uksouth
name: gatling-aci
properties:
  containers:
  - name: gatling
    properties:
      environmentVariables: []
      image: denvazh/gatling
      ports:
      - port: 80
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 1.5
      volumeMounts:
      - mountPath: /opt/gatling/conf
        name: conf
      - mountPath: /opt/gatling/user-files
        name: user-files
      - mountPath: /opt/gatling/results
        name: results
  osType: Linux
  restartPolicy: Never
  ipAddress:
    type: Public
    ports:
    - port: 80
    dnsNameLabel: gatling-test
  volumes:
  - name: conf
    azureFile:
      sharename: conf
      storageAccountName: stgatling
      storageAccountKey: <insert-storage-key-here>
  - name: user-files
    azureFile:
      sharename: user-files
      storageAccountName: stgatling
      storageAccountKey: <insert-storage-key-here>
  - name: results
    azureFile:
      sharename: results
      storageAccountName: stgatling
      storageAccountKey: <insert-storage-key-here>
tags: {}
type: Microsoft.ContainerInstance/containerGroups
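To deploy the group, point az container create at the YAML file (I'm assuming here that you saved it as gatling-aci.yaml):

az container create \
  --resource-group gatling \
  --file gatling-aci.yaml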
This will create and start your container group. What should happen at this point is Gatling will run but then stop very quickly as there are no simulations to run.
Uploading the configuration
So we need to go ahead and upload our simulations to the user-files file share. There are a number of ways you can go about this depending on what OS you are using, starting with mounting the Azure File Shares we made earlier:
Windows - https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-windows
Linux - https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-linux
macOS - https://docs.microsoft.com/en-us/azure/storage/files/storage-how-to-use-files-mac
Alternatively you could upload the files through Azure Storage Explorer or through the portal.
Our user-files share had a simulations folder and a resources folder directly under it (ignore the bad naming).
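If you would rather script the upload than mount the share, az storage file upload-batch can push a whole local folder up in one go. This assumes your simulations and resources folders sit under a local directory called user-files:

az storage file upload-batch \
  --destination user-files \
  --source ./user-files \
  --account-name stgatling \
  --account-key <insert-storage-key-here>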
Finally, we're going to want to take the default gatling.conf and edit two lines.
The first one to edit is runDescription; set it to anything that isn't a blank string, whatever your run is going to be! Secondly, edit simulationClass to be the fully qualified class name of the simulation you wish to run. For us this was apimsimulations.ApiMSimulation (we were testing out an API Management integration).
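As a rough sketch (the run description here is just a placeholder), those two settings live under the gatling.core block of gatling.conf and end up looking something like this:

gatling {
  core {
    # Anything that isn't a blank string
    runDescription = "APIM load test"
    # Fully qualified class name of the simulation to run
    simulationClass = "apimsimulations.ApiMSimulation"
  }
}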
Now upload that into the conf file share.
Running the simulation
You're now all set to run the simulation.
az container start \
--resource-group gatling \
--name gatling-aci
Container start-up should be fairly quick, and it will compile your simulation and run it. The easiest way to see what it is doing is to check the logs through the portal, or you can get them from the CLI.
az container logs \
--resource-group gatling \
--name gatling-aci
At the end it should say that it has generated the report(s):
Reports generated in 2s.
Please open the following file: /opt/gatling/results/apimsimulation-20190912115211503/index.html
Global: count of failed requests is 0.0 : true
Global: mean of response time is less than 500.0 : true
And then you can retrieve those from the file share using the same method as you did to upload the files.
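Or, sticking with the CLI, download-batch mirrors the earlier upload (pulling the results share down into a local ./results folder here is just an example):

az storage file download-batch \
  --destination ./results \
  --source results \
  --account-name stgatling \
  --account-key <insert-storage-key-here>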
TODO
There are quite a few things I'd like to work out how to do, some of which I think would be fairly simple:
- Running Gatling in non-interactive mode. This should just be editing the startup command (entrypoint) for the container instance to include the simulation you wish to run (there's a rough sketch of this after the list). This could allow for a level of automation...
- Working out how to get the container to stop once it has finished running.
- Automating this whole process. You could use an Azure Function to create a container on demand, mounting onto the file shares. It then would just need to specify a simulation to run in non-interactive mode, wait for completion and stop / delete the container.
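For the first item, here is a rough, untested sketch. ACI lets you override the container's command line in the YAML, and Gatling's launcher takes the simulation class and run description as flags (-s and -rd), which should be enough to skip the interactive prompts. The path to gatling.sh is an assumption based on the mount paths used above. It would sit under the gatling container's properties in the YAML:

containers:
- name: gatling
  properties:
    image: denvazh/gatling
    command: ['/opt/gatling/bin/gatling.sh', '-s', 'apimsimulations.ApiMSimulation', '-rd', 'load-test']
    # ...rest of the container properties as before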
For a first attempt though, I'm pretty happy with how far I managed to get. It's a little cheaper and somewhat easier than spinning up a whole VM to do this!