Today I came across this tweet:
which reads
Does anyone have any written articles or anything telling what strategies they have used to introduce their colleagues to the wonderful art of testing? In cases where they say "I don't agree with tests."
this part:
"I don't agree with tests."
Oh, my dear, this isn't about an active stance. Considering the code-generation tools we have today, I'm afraid...
It's a skill issue
I've been there.
- You know the theory: tests are good for the codebase.
- They act as documentation for devs.
- They offer coverage.
- They give peace of mind.
But here's the real problem. It's not about a "stance." It's just that you don't know how to write the test.
And you've tried with the official docs, which look something like this:
function foo() { return 42; }
test('foo returns 42', () => {
  expect(foo()).toBe(42);
});
You already know how to test a simple "input-output" function.
But in the real world, things are more complicated...
- You want to test a React component, but it depends on 4 contexts, page location, and a router?
- You want to test a Node.js object, but it depends on 8 external libraries, has a messy internal state, and needs some obscure pre-fetching from Dracula's castle?
- You want to test a Python class, but it's tied to RabbitMQ, and getting the system ready is a nightmare?
The first step is admitting: it's a skill issue.
And that's OK.
Testing is a skill. And it's different from regular coding.
Start small, break down the dependencies, and isolate the parts you can control.
Control is the key:
The moment you can test your "component" is the moment you fully control your code.
*"component" as an abstract term, your class, your module, your system...
You know when I finally understood (like, really understood) React contexts? When I was able to test them!
Testing a React Context Provider
Manuel Artero Anguita 🎨 · Feb 20 '22
Another related post:
Testing a React Custom Hook
Manuel Artero Anguita 🎨 · Feb 13 '23
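The core idea from those posts, roughly: render a tiny consumer inside the Provider and assert on what comes out the other side. A sketch with React Testing Library, assuming a hypothetical ThemeContext, the usual Jest + Babel/JSX setup, and the @testing-library/jest-dom matchers:

import { createContext, useContext } from 'react';
import { render, screen } from '@testing-library/react';

// Hypothetical context under test.
const ThemeContext = createContext('light');
const ThemeProvider = ({ children }) => (
  <ThemeContext.Provider value="dark">{children}</ThemeContext.Provider>
);

// Throwaway consumer that surfaces the context value in the DOM.
function ShowTheme() {
  const theme = useContext(ThemeContext);
  return <span>current theme: {theme}</span>;
}

test('ThemeProvider supplies the theme to its children', () => {
  render(
    <ThemeProvider>
      <ShowTheme />
    </ThemeProvider>
  );
  expect(screen.getByText('current theme: dark')).toBeInTheDocument();
});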
Mocking libraries exist for a reason. Master the tools available, whether it's Jest, Playwright, Pytest...
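For instance, Jest can replace an entire module, so the "obscure pre-fetching" never runs inside the test. A sketch assuming a hypothetical getUser helper that wraps axios:

// user.js (hypothetical)
import axios from 'axios';
export async function getUser(id) {
  const res = await axios.get(`/api/users/${id}`);
  return res.data.name;
}

// user.test.js -- jest.mock('axios') swaps the real library for an
// auto-mock whose behaviour the test fully controls.
import axios from 'axios';
import { getUser } from './user';

jest.mock('axios');

test('getUser unwraps the name from the API response', async () => {
  axios.get.mockResolvedValue({ data: { name: 'Ada' } });
  await expect(getUser(7)).resolves.toBe('Ada');
  expect(axios.get).toHaveBeenCalledWith('/api/users/7');
});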
Remember, it's a skill. And like any skill, it can be learned.
Top comments (81)
I've been a professional developer for almost 30 years and have VERY rarely used any form of automated testing.
I've always thought (and seen plenty of evidence) that automated testing very much encourages a siloisation of code knowledge, and really isn't good overall for teams working on a project. To my mind, if you are working on some code - you should UNDERSTAND that code and the ramifications of modifying it. Automated testing encourages a hands off approach and a reliance on the assumption that the tests are 'correct' and up to date - keeping the code being tested very much as a black box, with no real understanding being gained.
Knowledge siloization is bad, but I think the amount of time required to test a software product without automated testing is also bad.
Without any automated tests, people have to ensure an app works. I've seen different people have different thresholds for how much manual testing they consider to be easy. Some people see no issue with complicated and convoluted manual testing setups, like creating 500 line JSON payloads by hand to submit to REST endpoints. In fact, this also can lead to siloization because the knowledge for the test needs to be passed to the next person. If it's not written down and the person leaves, the knowledge gets lost.
One of the benefits of automated tests is the ability to focus on very small pieces of code. I think that at a certain point, the amount of time spent on regression testing can get to be completely unreasonable when everything is done by manual testing.
In fact, I've seen entire teams dedicated to manual testing struggle with keeping track of their tests. Eventually they got to the point where they HAD to come up with an automated solution because they were wasting so much time testing the same things over and over for regression.
Perhaps you can get by without automated testing on some projects, but for massive projects with hundreds of contributors and mission critical operations it's just not possible. At least, that has been my experience.
@code42cate commented about how failing to understand code before changing it is a cultural problem. I agree: regardless of manual or automated testing, if people are blindly changing things, that's a project-management failure rooted in a dysfunctional culture.
Good take
Do you expect every developer, no matter the maturity, to know every single line of code in a project with hundreds of thousands or millions of lines of code?
No, but it's a great idea to have them actually understand the particular code they're working with, and its relationship to the project. Automated testing works against this
I don't see how that works against it, to be honest. If you are blindly changing code just because tests pass, that's not the tests' fault but an engineering-culture fault, imo.
I agree. You change the code knowing full well how it works and the implications. Unit testing is in place for regression-testing the things you may not have considered. Often it's something small and overlooked, but you don't need a full-time team to catch the issue after you've merged.
Who ever said "blindly"?
Nobody.
Huh? Not understanding code = blind. Am I missing something here?
I have a counter argument.
In my current company we have a complex price calculator written by a couple of people who have left the company long ago. There is about 1 developer left who understands the price calculation to some degree, and that's it. It's a huge, hot mess and nobody dares touching it because nobody knows what potential ramifications it could have. I dubbed it Frankenstein's Monster.
The learning I took from this is:
And this isn't the only piece of code without tests. Automated testing does make sense in many scenarios. I agree with one thing, though: you should not have tests just for the sake of having tests; that is nonsense. But I would love to finally put this monster to rest.
I wasn't expecting this post to get this much attention. I agree. Having tests just for the sake of having tests isn't the point.
I agree. One possible definition of Legacy Code is code that is not tested and is therefore hard to change; see the book by Michael C. Feathers. You just can't be confident that any change has no undesired side effects, whereas with tests you can at least guarantee that a certain contract still holds.
You are exactly the type of programmer this post is talking about. Having tests doesn't mean that you trust them blindly. It means that you have a way to verify that the code does what you think it does, including edge cases and errors, even if you refactor it, and they should fail if you change the behavior of your code. You have to understand the code AND the tests, and maintain them accordingly. If you don't see value in the tests, I agree with the title of the post: it's a skill problem.
I just refactored some code I wrote a couple of years ago and I love that I codified all the edge cases I wanted it to support so that I can be reasonably confident that even with a redesign, the outputs only changed in the way I expected them to.
This.
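Codifying edge cases like that can be as cheap as a table of inputs and expected outputs, e.g. with Jest's test.each (the slugify function here is hypothetical):

// Hypothetical function whose edge cases we pin down before a redesign.
function slugify(title) {
  return title.trim().toLowerCase().replace(/\s+/g, '-');
}

test.each([
  ['Hello World', 'hello-world'],
  ['  padded  ', 'padded'],
  ['Multiple   spaces', 'multiple-spaces'],
  ['already-a-slug', 'already-a-slug'],
])('slugify(%j) -> %j', (input, expected) => {
  expect(slugify(input)).toBe(expected);
});

If a redesign changes any row, the failure points at exactly which contract moved.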
Building tests increases the confidence for new joiners to get their hands dirty by having proper documentation and guardrails - and a lot of people learn faster by doing instead of reading.
Having this big hurdle of understanding everything before being able to contribute just silos the knowledge with the people who created it.
There used to be a great article, "Don't be a Rick", with which I concur: make the barrier to entry as low as possible and get more minds to work on the problem.
Totally backwards, sorry. By testing code you are providing documented, digestible examples of expected use context and correct system behaviour to people who DO NOT already have that knowledge internally.
If tests need to be updated, they should be. That is equivalent to updating documentation, with the benefit that tests demonstrate where they are wrong by failing. Documentation doesn't do that.
I may not have '30 years' experience, but most of the people you are working with won't either. If you're not testing because of fears of siloization, that's basically equivalent to being the silo, but not knowing it.
I experience almost the exact opposite. When people make changes that break tests, they must then dive in to figure out why their changes broke the tests, and learn more about the system. Tests help them detect and understand how their changes affect the wider system, including parts that they may have incorrectly felt isolated from while working towards their own goal.
But I think part of your point is that tests can make you feel overly secure if you make a change and all tests pass, which I would agree with. No number/quality of tests can free us from doing our due diligence.
Ultimately though I find a solid test suite is much more effective at ensuring quality, compared to projects without tests where developers have to just "stay on their toes", so to speak. The codebases I work in that have automated tests are much more robust, well-organized, and easier to onboard into than the ones that don't. Devs feel much less release anxiety too, because these projects are empirically less likely to break in prod (though it still does happen of course).
I really struggle to understand your argument that automated testing leads to developers' lack of understanding of the codebase.
The tests aren't supposed to pass if someone made a mistake, and in order to fix them, one needs to understand what's wrong.
I see where you are coming from, though. Taking units in isolation can lead to ignoring adjacent units. However, that's not an excuse! Proper unit testing is supposed to document the code in a very detailed way.
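For example, descriptive describe/it names let a test file read like a spec for the unit; a sketch with a hypothetical parseDuration helper:

// Hypothetical unit whose behaviour the tests document in prose.
function parseDuration(text) {
  const match = /^(\d+)(ms|s|m)$/.exec(text.trim());
  if (!match) throw new Error(`unparseable duration: ${text}`);
  const factor = { ms: 1, s: 1000, m: 60_000 }[match[2]];
  return Number(match[1]) * factor;
}

describe('parseDuration', () => {
  it('converts seconds and minutes to milliseconds', () => {
    expect(parseDuration('2s')).toBe(2000);
    expect(parseDuration('3m')).toBe(180000);
  });

  it('tolerates surrounding whitespace', () => {
    expect(parseDuration(' 5ms ')).toBe(5);
  });

  it('rejects units it does not know', () => {
    expect(() => parseDuration('1h')).toThrow('unparseable duration');
  });
});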
Modularity and parallelism are not siloisation. Thirty years of developing and you figure you have everything you need, while the entire industry has shifted under your feet.
Modern design patterns are mandated by volume, and enhanced with portability. You need to reduce the dependency footprint to achieve that.
By automated testing, do you mean any of unit, integration, or e2e tests? If so, isn't automated testing's main value that it replaces manual testing, which takes hours of manpower and probably doesn't cover all scenarios, such as regression testing? How can you make an argument against automating manual testing? Unless you're saying that manual testing is also optional.
That's probably the worst take I've read for not implementing automated testing. You try coming into a large, new project and understanding every part of it before making changes.
Automated testing -- presuming you have enough of it -- allows a developer to make changes to implementation with increased confidence that the change won't break anything. If you have coverage data as well (which you should, it's 2024 after all) then you can prove that your code was run as part of the tests.
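For reference, collecting that coverage data and holding the line on it is a small config change in Jest (the 80% thresholds below are an arbitrary example, not a recommendation):

// jest.config.js -- collect coverage on every run and fail the run
// if it drops below the thresholds.
module.exports = {
  collectCoverage: true,
  coverageThreshold: {
    global: { branches: 80, functions: 80, lines: 80, statements: 80 },
  },
};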
Is automated testing perfect? No: you have to have enough of it, not every edge case will be thought of, and 100% coverage with all tests passing doesn't automatically mean the software is bug-free. But it certainly helps and honestly -- it's really not that difficult to implement.
These days if I see a project without automated testing I assume it either isn't a very important project or it's a skill issue.
IME the junior devs are going to roll in blindly wrecking everything in their path, regardless of whether you wrote tests or not, spilling their spaghetti everywhere. Tests, almost all of which were written by me, are about the only thing that protects the company from their endless flood of bugs and regressions.
So the difference is whether you want a test suite to find the bugs in 10 minutes or a QA guy to find the bugs in a week or a customer to be the one to discover the bugs at release time.
Interesting point. In a decent-sized codebase, what is your approach to tackling refactoring, or changes in logic that span multiple modules, while ensuring that everything still works as intended? Also, how do you define "works as intended" for code written by someone else?
In most classes I attend, there's a highly experienced, well-spoken guy who will make a counterargument against every best practice. Don't be that guy.
One of the best ways to UNDERSTAND the code, if not the best, is to write or read a test. You can of course save yourself some minutes of test writing/reading by executing the code in your head, again and again (much like saving minutes of documentation reading at the cost of hour-long debugging sessions).
So your argument is that you've sucked for 30 years, so you're going to continue to?
It's often an issue of priorities and technical debt. I remember middle managers actively prohibiting refactoring and anything connected to it for the sake of quickly shipping features, and I still repeat this pattern in my self-employed business, depending on the customers' priorities and budget, or knowingly increase technical debt to push some urgent quick fixes.
Sometimes it also seems like a vicious circle: before modifying legacy code safely, we should add tests, but before we can add tests, we need to modify the code to make it testable, at least when it comes to unit tests. So we have to approach everything with end-to-end-tests from the outside in many iterations, none of which will be free of risk. And that's a very special skill of testing and handling legacy applications.
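The classic way out of that circle is what Feathers calls a seam: a tiny, low-risk change that makes one dependency overridable, after which unit tests become possible. A sketch with a hypothetical legacy PriceCalculator hard-wired to a database:

// Before, `new Database()` was buried inside the constructor. Moving it to a
// default parameter keeps production behaviour identical while opening a seam
// (Database itself is the hypothetical legacy dependency, not shown).
class PriceCalculator {
  constructor(db = new Database()) {
    this.db = db;
  }
  async priceFor(sku) {
    const item = await this.db.findItem(sku);
    return item.basePrice * (1 + item.taxRate);
  }
}

// The seam lets a test substitute a stub -- no real database needed.
test('applies the tax rate to the base price', async () => {
  const stubDb = { findItem: async () => ({ basePrice: 100, taxRate: 0.25 }) };
  const calc = new PriceCalculator(stubDb);
  await expect(calc.priceFor('SKU-1')).resolves.toBe(125);
});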
You wrote "unit" tests in your headline and then you talk about complex e2e testing.
Second - it is not only a matter of skill, it is also a matter of time. If you come across a large legacy codebase without tests, on a team not used to writing them, you will very likely never establish them, because the client will keep firing new requirements and you will never have a chance to stop and fix the tech debt.
Disagree. I used to think this way (lack of time), but with current code-generation tools… you can set the thing up blazing fast.
About the definition of "unit": do not stick to the preconception that unit = class; think of a unit of logic. Let's redefine "unit" as a "logical piece that works together" and there you go.
We may debate how doable it is to introduce automated testing into a project without it, but the definitions of "unit" (independent, isolated functions) and "e2e" (interactions between different parts of the system) are pretty well established, and the two should be distinguished.
I'm challenging you to think about the difference between unit and integration.
We all agree on the definition of e2e.
Well...
In my previous post I roughly used it as an equivalent. If you want to be precise, integration means "between two (or a few) parts of the system" and e2e should mean "the whole system".
But I am quite happy with the definition from the Vue.js docs - they recognize unit, component (which only makes sense in component-based frameworks), and e2e. We can discuss a more subtle division or even different categories, but the more I think about it, the more it comes down to "tests without dependencies" (unit) and "tests with (mocked) dependencies" (e2e/integration/whatever).
No. Stick to the accepted terms and donβt go off and try to reinvent things. Do not overload the field.
If you're writing new code in a legacy code base then you can at least write tests to cover your new changes. At that point it becomes easier to extend coverage to the rest.
This is a great take and very succinctly put! The tradeoff between writing tests and productivity is not as important as making sure your devs know how to test. Furthermore, knowing how to test will actually improve how you design your components for better maintainability. Your three examples of dependencies making it hard to test are a great example of this. Lastly, what to test for is probably the most important question and has been discussed a lot.
Thanks! Really appreciated
My thoughts exactly! I would advise said developer to leave their employer for one that does not allow a merge until tests have passed, if they care about advancing their career in finite time, that is.
Agreed! More often than not, it's a lack of skills but also a lack of tooling, or caused by the development culture and leadership of an organization.
Good point. In my experience, unit testing applies cleanly to OO-centric languages. One of the reasons I'm not a fan of Python is that much of the code I see is not testable.
The problem is that a lot of this testing, with the mocks and such it may require, is harder than the development itself, so when you're done implementing the functionality you're not even half done, which is kind of crazy.
Agree.
The succinct issue here is to tame the mocking library / test runner / testing framework you're using, so that it's not that hard.
But it is hard! It is.
I literally just had this exact thought. Like, word for word (how dare you steal my thoughts!). In all seriousness, testing is absolutely a skill like almost everything else in software development. It just takes time to learn.
Not writing tests in a new codebase may be only a knowledge/skill issue (e.g., the team doesn't know how to write tests or doesn't understand why they should write them). Not writing tests in a legacy codebase, which has none and was not designed to be testable, may also involve more/deeper issues (e.g., culture, priorities, time to refactor, etc.).
Personally, for me, the 3 key aspects of writing tests are:
Asserting the design of the code (e.g., the API is easy to use, the code is loosely coupled, etc.)
Asserting the behaviour of the code (i.e., the code does what it is/was intended to do)
Acting as documentation for others (be it other devs, or QAs, or POs, etc.)
Quoting "The Pragmatic Programmer" by David Thomas and Andrew Hunt: βTesting is not about finding bugs, it's about getting feedback on your code: aspects of design, the API, coupling, and so on. That means that the major benefits of testing happen when you think about and write the tests, not just when you run them.β
As a full stack developer I agree, BUT the better approach in my opinion is to move the responsibility of creating Unit Tests to the testers as they have the Test Cases and that is their primary responsibility. This ensures the tests are more accurate and not built to have a passing result.
Completely disagree. A developer's primary responsibility is to write code that solves the problem given to them. What's an easy way to prove that it solves the problem? Tests.
The QA engineers are another layer of proving that the code is doing what it's supposed to do but not the only layer
This.
I disagree. If your developers can't take a TDD approach, then they don't know what their expected acceptance criteria are. Integration and end-to-end testing is the responsibility of the testers, to ensure regression issues outside of the changed code are identified.
I am a QA Engineer, and there are a few factors like time, resources, culture, skill and maturity to consider if you are going to expect unit testing or xDD from a developer, team or company. I advocate for all forms of testing.
We are rebuilding a huge legacy system using the strangler fig pattern. Unit testing leads to testable code and fewer traps. Coupled with a good QA Engineer and Analyst doing different layers of testing, you can become quite confident in a system.
It is about team cohesion, understanding and skill sharing. You can have a dev, QAE and QAA all testing with no idea what the others are testing and how; but if they all come together and become a sort of hybrid mega testing mind, then they will probably gain a wonderful understanding of their system, be confident in it, and understand how it can be modified safely. And let's not forget performance, load, chaos, etc. forms of testing that can further build confidence in and understanding of a system.