This is going to be a mini post, and I am curious about your experiences.
When I started coding professionally in 2010, there was a swarm of manual testers, test leads, and managers overseeing all sorts of testing, but mostly regression testing. I was tasked with automating some of these tedious manual tasks so releases could be faster.
The world has mostly moved on from that: engineers were now expected to write unit and end-to-end tests, and companies employed fewer and fewer manual testers.
I changed jobs seven years later, and my next employer had exactly zero manual testers. We were required to write a solid amount of tests, we had monitoring in place in case something went wrong, and we had a proper CI/CD setup that enabled quick rollbacks the moment something went bananas.
And yet I saw a number of problems. When I started as a test automation engineer, I dedicated 100% of my time to how to test effectively, what to test, and when to avoid writing expensive, brittle test suites. I also learned testing theory (👆 it does exist!).
The moment we pushed the responsibility for testing from a separate, dedicated team onto the same developers who write the feature/production code, we created an epidemic of the testing Dunning-Kruger effect: people think they did a great job because they don't know how many layers they are missing.
After this strong statement, let me list a few things anyone with a testing background will recognize:
- test blindness: the closer the tester is to the author of the code, the less likely they are to find a bug. The developer of the code performs worst, because they work from muscle memory, while an outside, independent team might even catch usability problems (fresh pair of eyes)
- a lot of the wrong tests: in practice, the shape of the test pyramid is based on what is easy or what the developer knows how to write. You only learned about unit tests? You will only write unit tests! You installed an end-to-end test framework? Now you have unit and end-to-end tests, but no integration or contract tests.
- reality 101 fails: all the tests are green, the backend gets deployed, and the frontend is broken; but it is not an issue, it was just a "human mistake" caused by "bad communication"
- unnecessary testing: tests that just check whether mocks were called, tests that assert things TypeScript already guarantees out of the box, etc.
- ghosting responsibilities: team A has test suites for team A's code, team B has tests for team B's code; team A changes something, team B's dependency breaks, but hey, all tests were green! In other words, teams are not going to write tests for other teams out of the box.
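To make the "unnecessary testing" point concrete, here is a minimal sketch of such a test. The function, types, and hand-rolled mock are my own illustration, not from any real codebase: the assertions only restate the mock's wiring and a guarantee the compiler already gives, so the test can never catch a real regression.

```typescript
type User = { id: number; name: string };

// The function under test just delegates to a repository.
function loadUser(repo: { find: (id: number) => User }, id: number): User {
  return repo.find(id);
}

// Hand-rolled mock, no framework needed for the sketch.
let called = false;
const mockRepo = {
  find: (id: number): User => {
    called = true;
    return { id, name: "stub" };
  },
};

const user = loadUser(mockRepo, 42);

// "Assertion" 1: only verifies the mock was called — it tests the
// mock's wiring, not any behavior of the production code.
if (!called) throw new Error("repo.find was not called");

// "Assertion" 2: checks something the TypeScript compiler already
// guarantees through the User type.
if (typeof user.name !== "string") throw new Error("name is not a string");
```

Both checks pass trivially and keep passing no matter how broken the real repository becomes, which is exactly why such suites create false confidence.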
Discussion
In my experience, raising these concerns as just another developer never worked: managers and team leads heard the words, but they did not have the relevant experience to understand the problem. They checked the backlog, mentally checked whether the CTO would be grumpy, and then the concern was quietly ignored. Who needs more tasks, right?
My claim: you cannot single-handedly save a company from testing failures while you are also fighting on the front lines of feature development.
Let's discuss two options here:
Educating the developers
One thing that can be done is to educate developers about the test pyramid, about the right layer of test for the right task, and so on. But honestly, I think this is more of a people issue.
There is the mental load: when you write your own code, you have a responsibility to:
- write clean, easy-to-understand code
- write a solution that is not hard to change when change inevitably comes
- think of all the ways it can break
- and then spend time sitting down and evaluating which testing layer you should use, how it will affect other teams, etc.
Obviously, knowing a lot about testing decreases the mental load and the rework; however, the more you put on the devs, the less productive they become from the continuous context switching.
Or should we bring back test managers?
Instead of having a swarm of underpaid manual testers, what if we hired highly skilled testing engineers/managers who verify the test suites, provide best practices, set requirements for inter-team testing responsibilities, and so on?
In application security, it is well known that you cannot depend on developer education alone. You need security experts in the company; sometimes you even hire them externally to find cracks in your current system, and then you dedicate time and effort to fix those mistakes.
Top comments (2)
A manual tester will find 100x more bugs than a developer could even imagine; it's just insane what these folks can find sometimes. A manual tester is a must-have for a product when quality matters more than development speed. Quite often, for new projects, speed is the main priority. It depends on the project.
Many developers would be happy if you took tests away from them, but I would be sad: it's one of my favorite parts, and I can't express how much it helps me when working on features.
Writing tests is a mental unload:
Writing tests is not always simple, but it is often much simpler than maintaining complex code, and when you deal with complexity, you are resting while writing tests; you rest mentally when you see them pass.
The mental load you mentioned is what it looks like when we don't write tests.
No confidence, the mind is blown. If you were lucky and didn't add any bugs, then not writing tests saves time, so it's sometimes acceptable when speed is the highest priority, the project is not too complex, and there are no strong quality requirements.
Thank you for your well-written comment!
I want to emphasize that I do not mean test folks will write tests instead of devs! Developers still write the tests, but there is a test manager who, a bit like a coach, comes and checks whether the right kinds of tests were written for the right problems, makes sure best practices are shared, and ensures not every team has to figure out from scratch how to cover the layers.
Regarding the claim that tests automatically result in code quality: I would be careful with that. Somebody more eloquent than me has covered this topic: "Testing numbs us to loss of intellectual control" by George Fairbanks.
I believe in the right test at the right time. For some tasks, like transforming data, I love writing TDD-style tests, since the input and the output are clear, and I can play with my code to find the most expressive version, knowing that no matter how many times I change my mind, the input-output contract stays the same. This is a true mental unload.
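As a minimal sketch of what I mean (the `revenue` function, the `Order` type, and the numbers are my own illustration): the assertions pin down the input-output contract first, and then the body can be refactored as many times as I like while the checks stay green.

```typescript
type Order = { id: string; total: number; cancelled: boolean };

// Sum the totals of non-cancelled orders, rounded to 2 decimals.
// The internals are free to change; only the contract below matters.
function revenue(orders: Order[]): number {
  const sum = orders
    .filter((o) => !o.cancelled)
    .reduce((acc, o) => acc + o.total, 0);
  return Math.round(sum * 100) / 100;
}

// The "test first" contract: these stay green across any refactor.
if (revenue([]) !== 0) throw new Error("empty input should yield 0");
if (
  revenue([
    { id: "a", total: 10.5, cancelled: false },
    { id: "b", total: 4.25, cancelled: true },
    { id: "c", total: 0.3, cancelled: false },
  ]) !== 10.8
) {
  throw new Error("cancelled orders should be excluded");
}
```

Whether I later rewrite this as a plain loop or a single reduce, the fixed examples keep telling me the behavior is unchanged.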
On the other hand, for a UI prototype that by a law of nature will change the moment the first product manager/beta customer touches it, I find writing an exhaustive test suite counterproductive. Once the requirements have been tested by reality, you should cover them with tests.