I recently wrote the article Handling API validation with OpenAPI (Swagger) documents in NodeJS, which went into how to pass on the work of input validation to the OpenAPI spec. This follows on, showing how to lighten the testing load and ensure your API is producing exactly the output you’ve painstakingly documented.
Faster testing by relying on the OpenAPI spec as a single source of truth.
There is nothing, nothing, more predictable than API documentation being wrong.
It’s hard to keep that document up to date with all the other pressures of having to, you know, maintain the API. It’s simpler to push the fix or feature and then update the doc. Eventually.
I would be lying if I said I was looking for a solution to this exact problem; I discovered one as a by-product of two other things:
- Using the API spec document for validation. We covered this in https://medium.com/@Scampiuk/handling-api-validation-with-openapi-swagger-documents-in-nodejs-1f09c133d4d2
- Using the API spec document for testing. ( This guide )
This little duo means the API spec has to be up to date, or you can’t pass any of your tests. Nice, huh?
We’re going to start with as simple a test application as possible:
npm install express
Let’s run it…
chris@chris-laptop:~/Projects/openapi-testing$ curl localhost:3000
{"version":"1.0.0"}
Ok so that’s simple and working, let’s create a spec that defines this, rather limited, API. Using OpenAPI 3 spec, we’ll be quite verbose in the way we build the objects up so we can re-use them in the future:
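Here’s a sketch of how that spec might look. I’ve assumed it lives in the project root as openapi.yml, and pulled the two reusable schemas into components so they can be shared by future endpoints:

```yaml
# openapi.yml - the file name and exact layout here are assumptions
openapi: 3.0.0
info:
  title: openapi-testing
  version: 1.0.0
paths:
  /:
    get:
      summary: Return the API version
      responses:
        "200":
          description: The current version of the API
          headers:
            X-Request-Id:
              required: true
              schema:
                $ref: "#/components/schemas/RequestId"
          content:
            application/json:
              schema:
                type: object
                required:
                  - version
                properties:
                  version:
                    $ref: "#/components/schemas/Version"
components:
  schemas:
    Version:
      type: string
      pattern: ^\d.\d.\d$
    RequestId:
      type: string
      format: uuid
```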
We can see that our GET / endpoint needs to return an object with a property called version which has a pattern of ^\d.\d.\d$ , and that the response requires a header called X-Request-Id which is a UUID.
But our current endpoint doesn’t fulfill these criteria! We’ve created the thing we hate, the thing worse than no API documentation: bad API documentation. The solution? Tests.
npm install supertest chai mocha --save-dev
Once we have that installed, let’s create a nice simple test
Then in package.json , under the scripts block, add
"test": "./node\_modules/.bin/mocha --exit --timeout 10000"
This will run the test we’ve just created, exit once it’s done, and set a sane timeout.
We’ve expended some effort to test this endpoint, but the passing test is a false positive: we know the spec requires the X-Request-Id header, and our test doesn’t cover that.
We’re going to look at the same tooling as we used in the previous guide, express-openapi-validate . This thing is going to ingest our spec file, and in the same way we used it previously to validate the input to an API, we’re going to use it to validate the output of the API.
npm install express-openapi-validate js-yaml app-root-path --save-dev
And now we’re going to change the index.spec.js around a bit, taking out the explicit definition of what we expect in the endpoint, and adding in the OpenApiValidator…
and run the test again…
There! It failed this time, and told us why it failed: response.headers should have required property "x-request-id"
Note we’ve not had to define any of that in the test; in fact, we’ve removed the code that checked the shape of the response. The validator has taken the spec and worked out what’s required for a GET / request. Let’s fix the endpoint.
npm install faker
(if you’ve not looked at faker before, I strongly recommend it; I’m abusing it slightly here, but it’s a fantastic fake-data generator)
We changed the response to set the X-Request-Id header with a UUID, and now the tests pass.
What happens if we break the format of version? We’ll change the response to send x1.0.0 instead, which doesn’t match the pattern for Version …
The tests fail, because we’re now sending the wrong value.
This is crazy powerful. Because you’ve defined things correctly in your spec file, you can re-use patterns across your API and ensure the spec is fulfilled by your tests, and updating the spec file updates all your tests. You write fewer lines in your tests and focus your effort on the spec file (because that’s now driving your tests), and things become simpler.
In conclusion
Using the OpenAPI spec to control how data gets into your API, and using it to build your tests around, means that it becomes the single source of truth about your API. Sure, there are ways of cheating this and not documenting all the objects, or not testing endpoints, but why do that?
By combining these two approaches, we’ve found that workflow on the API now starts with the OpenAPI specification, then building tests, then implementing the endpoints. TDD becomes almost the de facto way of approaching development. While before API development may have started by firing up Postman and thrashing through some ideas, now it’s all tested by this near-magic combination of supertest, mocha, chai, and OpenApiValidator.
There are a couple of things missing from this set-up which I’m still working on:
- I would like code coverage reports via nyc to ensure that all the endpoints and response codes defined in the OpenAPI spec document are implemented
- I would like the test validation to error if there are objects or properties in the API responses that are not documented — I just can’t work that one out at the moment.
I would love to hear how you use this in your projects! Get me on https://twitter.com/Scampiuk
Top comments (3)
Hi, for point 2 of what you’re working on, is this error in operation?
You can simply validate your output against your schema to ensure the output meets expectations, and set it to throw on anything foreign.
I have a working example somewhere I can get for you, that you should be able to integrate into your existing process.
Really nice, have been thinking about the same thing recently, specifically in the context of consumer driven contract testing.
There's something you've not covered though: all the write operations, i.e. POST, PUT, PATCH, and maybe even DELETE.
Valid point. I may amend this, or follow it up by covering that and different response codes as well.