Ever wondered how far we can or should take testing? What about scaling tests to millions of cases? I know I have.
I've been using, contributing to, and creating libraries related to testing for the past 2 years and would like to share some insights.
Please note that in this post I won't go into much detail about the code, as I would like to extend this topic over multiple posts.
What should we test?
Everything. Test it all. We need confidence in 100% of the software, so we need 100% coverage for any code that the user directly or indirectly interacts with.
Your users and customers don't want to deal with bugs, and proper testing can minimize those chances.
Overview
I'll be using the following tools throughout these sessions:
| Name & Link | Purpose |
| --- | --- |
| typescript | Writing code and tests with types |
| jest | Test runner, assertion library |
| fp-ts-quickcheck | Property Based Testing |
| fast-check | Property + Model Based Testing |
| wdio | End to end testing framework |
Types of testing
All automated testing I've seen is covered by a combination of the following characteristics:
- Level:
- Unit
- Integration
- End to end
- Bases:
- Static
- Property Based
- Model Based
The further down each list you go, the more effort is required and the more confidence is gained; the maximum is a model based end to end test.
These two characteristics are mutually inclusive: a <level> test can always be a type of <base> test.
Levels
Unit Testing
The smallest thing to test. This is usually a single function: we check whether the specified inputs produce the expected output.
We can test side effects and contracts between modules by mocking their interfaces, but this should be used as a last resort.
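For illustration (the module and values here are hypothetical, not from a real project), a minimal jest unit test of a pure function could look like this:

```typescript
// add.ts (hypothetical module under test)
export const add = (a: number, b: number): number => a + b

// add.test.ts
import { add } from './add'

test('adds two numbers', () => {
  // Specified inputs should produce the expected output.
  expect(add(2, 3)).toBe(5)
})
```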
Integration Testing
A black box: we only control the input and observe what comes out.
At a module level, we would exercise the functions provided by the module and assert that the behaviour is as expected.
A lot probably happens inside those functions, but it doesn't matter what, because the internals are covered by unit tests.
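As a sketch (the module, its internal helper and the values are hypothetical), an integration test only touches the module's public surface:

```typescript
// slug.ts (hypothetical module; its internal helper is deliberately not exported)
const normalise = (input: string): string => input.trim().toLowerCase()
export const toSlug = (title: string): string =>
  normalise(title).split(/\s+/).join('-')

// slug.integration.test.ts
import { toSlug } from './slug'

test('turns a title into a slug', () => {
  // Only the observable behaviour is asserted; how toSlug gets there is irrelevant here.
  expect(toSlug('  Hello World  ')).toBe('hello-world')
})
```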
End to End testing
The entry point is at the highest possible level in the system. For an application, it's testing that interacting with the software in its natural environment produces the expected output.
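As a sketch, assuming the WebdriverIO (wdio) testrunner globals (browser, $ and its bundled expect) and a hypothetical page, an end to end test could look like this:

```typescript
// These globals are provided by the wdio testrunner.
import { browser, $, expect } from '@wdio/globals'

describe('home page', () => {
  it('shows the main heading after loading', async () => {
    // Drive a real browser against the running application (URL is hypothetical).
    await browser.url('https://example.com')

    // Assert on what the user actually sees.
    await expect($('h1')).toHaveText('Example Domain')
  })
})
```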
Bases
This is my favourite and most interesting part of testing, and it's where most of the weight of scaling and confidence comes from.
Static
Nothing is generated for the test.
This style of testing uses hard coded data as the input to the test. It's super useful when creating a baseline for your tests, or when there is a particular use case you want to pin down.
This is the testing we all know and have probably used.
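For example (reusing the hypothetical toSlug module from the integration example), a static test simply hard codes its data:

```typescript
import { toSlug } from './slug' // hypothetical module from the integration example

test('handles a title that is already partly kebab case', () => {
  // Nothing is generated: both the input and the expected output are fixed.
  expect(toSlug('Already-Kebab Title')).toBe('already-kebab-title')
})
```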
Property Based Testing
The information is generated for the test.
The example below generates kebab-case words and runs the test against many different variations. It stops when a case fails, and fast-check will even try to shrink it down to the smallest failing case. (The isKebabCase implementation shown is an assumed one, filled in with a simple regex for illustration.)
import * as fc from 'fast-check'

// Assumed implementation for illustration: lowercase words joined by single hyphens.
const isKebabCase = (string: string): boolean =>
  /^[a-z]+(-[a-z]+)*$/.test(string)

// Arbitrary for a single lowercase word.
const lowercaseWord = fc
  .array(fc.constantFrom(...'abcdefghijklmnopqrstuvwxyz'.split('')), { minLength: 1 })
  .map(chars => chars.join(''))

// Arbitrary for kebab-case strings: one or more words joined with "-".
const kebabCase = fc.array(lowercaseWord, { minLength: 1 }).map(words => words.join('-'))

test('should return true when the input is kebab case', () => {
  fc.assert(fc.property(kebabCase, (kebab: string) => {
    const result = isKebabCase(kebab)
    expect(result).toBeTruthy()
  }))
})
Model based testing
Both the information and the usage of the tool are generated for the test.
For example if you have a class with 5 methods, where:
- 1 method is a constructor
- 3 methods are combinators
- 1 method is a destructor
We would expect a consumer to start with the constructor, use any number of combinators any number of times, then finish with the destructor.
This is expected behaviour from the consumer, and we should be able to test that too.
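As a sketch of how fast-check's model based testing can express this, here is a hypothetical counter (not from the post) checked against a simple model, where fast-check generates which methods to call, how often and in which order:

```typescript
import * as fc from 'fast-check'

// Hypothetical system under test: constructed with new Counter(), combined via
// increment/decrement, and finally read with value().
class Counter {
  private count = 0
  increment(): void { this.count += 1 }
  decrement(): void { this.count -= 1 }
  value(): number { return this.count }
}

// The model is a simplified description of what the real system should do.
type Model = { count: number }

class IncrementCommand implements fc.Command<Model, Counter> {
  check = (): boolean => true
  run(model: Model, real: Counter): void {
    real.increment()
    model.count += 1
    expect(real.value()).toBe(model.count)
  }
  toString = (): string => 'increment'
}

class DecrementCommand implements fc.Command<Model, Counter> {
  check = (): boolean => true
  run(model: Model, real: Counter): void {
    real.decrement()
    model.count -= 1
    expect(real.value()).toBe(model.count)
  }
  toString = (): string => 'decrement'
}

test('counter matches its model for any sequence of calls', () => {
  const allCommands = [
    fc.constant(new IncrementCommand()),
    fc.constant(new DecrementCommand()),
  ]
  fc.assert(fc.property(fc.commands(allCommands), cmds => {
    // fast-check decides which commands run, how many times and in what order.
    fc.modelRun(() => ({ model: { count: 0 }, real: new Counter() }), cmds)
  }))
})
```

If a generated sequence of calls breaks the invariant, fast-check will try to shrink it down to a short failing sequence, just as it does for plain property based tests.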
In Summary
Let's put the possible back in testing all possible use cases.
It may seem tedious and take a long time, but it does not have to be that way. With the right tools and techniques, we can create thorough and purposeful coverage.