DEV Community


Why Testing After Is a Bad Practice

Matti Bar-Zeev on January 28, 2022

In this post I will try to give you my two cents on why writing tests after you have so-called "working" code is a bad practice, and why you should...
Charlie J Smotherman

TDD is like any other tool we use in development: whether it helps depends on your use case.

On one hand I can see it being a huge waste of time and on the other I can see how it can be a life saver. For me it depends on the project whether or not to use TDD.

Happy coding

M. Gr.

Whenever I developed with TDD, I refactored more and thought more deeply about corner cases. I also had a reliable automated test suite ready at the end of development.
But I agree that TDD requires discipline.
I would, however, not agree that TDD is a waste of time. As Uncle Bob says, the developer chooses the tools of development and is responsible for good automated tests. If your manager decides that you should not do TDD, then you should question his competence.

Raí B. Toffoletto

Totally agree! As someone who works for a very small company where the practice is not to write any kind of tests, I have fallen into writing my unit tests whenever I have a bit of free time, well after the code is written. I can feel I'm biased towards the code already written and have to make twice the effort to imagine edge cases or scenarios I hadn't thought about before. I hope this will change in the future.
Nice article 🎉

Stephen Dicks

I think it's a bit sad that you say 'you have a practice of not writing tests' in a small company. You could change that with your next PR - try writing the tests first. You might just find that the job is quicker (yes really!) because you get fast feedback on whether your code solves the problem or not, rather than testing everything manually.

Raí B. Toffoletto

It's VERY sad... They basically told me not to 'lose time' with it... I'm trying, bit by bit, to show them other ways...

Matti Bar-Zeev • Edited

And if that does not work out, I think you should reconsider continuing your professional career path with them. If you consider testing and TDD a must-have tool for doing your work well, you should look for an employer who acknowledges and respects that.

Raí B. Toffoletto

Yes, that's the plan. 😉

Matti Bar-Zeev

Thanks!

Jan Küster

TDD is a must when requirements are clear and distinct, but a waste of time when they are fuzzy. First get all the requirements together, then do the TDD - that's at least my premise.

Matti Bar-Zeev

For sure. At least you get a clear indication, very early in the process, that the spec you got is incomplete.

Jan Küster

That's the case nearly every time; even with true domain experts we often face vague or contradictory requirements.

Ashley Sheridan

I think there are a few things conflated here. Writing tests after the code doesn't necessarily mean they're an afterthought, or considered a separate task (e.g. via a separate ticket). Sometimes that can be the case, but the majority of the time when I've written tests, or seen other developers write them, they're part of the task, just written after the code is at a working point.

Tests should be written as cleanly as any other code, although obviously allowing for any nuances of the test framework being used. I've often written what I thought was clean code for the non-test part, only to find it could be improved because my tests couldn't be written well. I'd suspect the same could be found if tests were written first.

I also don't believe that just because tests are written after, they're immediately written lazily, or begrudgingly, and aren't testing the happy and unhappy paths in the code. Where that does happen, I'd heavily suspect that the developer is just as likely to be writing the rest of the code lazily too, not only the tests.

Finally, I think if you're in a team that would consider functioning code without tests to be ready for production, then writing tests beforehand (which would presumably be taking up the same overall amount of time) would probably have said team questioning why a task was taking so long, and likely prompt them to ask that the tests be left until the end anyway. However, that's just generally a bit of a red flag anyway, and if you're a developer who cares about testing in a team that doesn't, you need to find a better team.

I think it's also worth mentioning that a lot of testing frameworks out there do have code coverage capabilities. I wouldn't rely on only these though, as it's not too difficult to trick them into the appearance of coverage when it's not the case, and this can lead to abuse by those developers who would take the lazy route.
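Ashley's point about tricking a coverage tool can be made concrete. A minimal sketch (the function and test names here are made up for illustration, not from the article): one test executes every line and so looks covered, yet asserts nothing.

```python
def apply_discount(price, percent):
    """Return the price after deducting a percentage, rounded to cents."""
    return round(price * (1 - percent / 100), 2)

def test_gamed_coverage():
    # Executes every line of apply_discount, so a coverage tool reports
    # 100% - but with no assertion, a wrong formula would still "pass".
    apply_discount(100.0, 20)

def test_real():
    # The assertion is what actually pins the behavior down.
    assert apply_discount(100.0, 20) == 80.0

test_gamed_coverage()
test_real()
```

This is why a raw coverage percentage alone says little about how much the tests actually verify.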

Matti Bar-Zeev • Edited

Writing code without having a clear functional spec of what this code should do is hard. Very hard. I need to know what I want the code to do before I jump in and write it. I've found that the best way of creating this functional spec is by writing tests before the code.
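The test-first flow described here can be sketched in miniature; the `slugify` function below is an illustrative example, not something from the article. The test is written from the spec while the implementation does not yet exist.

```python
import re

# Step 1: the spec, expressed as a test, before any implementation.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  out  ") == "spaced-out"

# Step 2: the simplest implementation that makes the test pass.
def slugify(title):
    # Keep runs of letters/digits, lowercase them, join with hyphens.
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

test_slugify()  # red has turned green
```

Writing `test_slugify` first forces the edge cases (punctuation, repeated spaces) to be decided before a line of production code exists.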

Ashley Sheridan

How do you begin writing a test for anything if the functional spec of that thing isn't clear?

Personally, I find it easiest to break the business requirements into the smallest unit of work that effectively results in a complete item. Then it becomes pretty simple to break that down across the various code layers as required to ensure an approach that follows clean, SOLID principles. Tests are more easily written for such code.

Matti Bar-Zeev

How do you begin writing a test for anything if the functional spec of that thing isn't clear?

I don't. Or at least, I strive not to.
I'm saying the same thing, but the flow is test first 🤓

Fred

Awesome article, I have seen the exact situations you describe.
For me TDD is a mindset. It takes a lot of practice to own it, but I would never want to go back.
I dream that one day we will just write the tests and the computer will implement the code :)

Matti Bar-Zeev

Thanks mate! I think such tools are emerging as we speak, more in the sense of writing your requirements and watching the code get generated.

Russell Standish

TDD works well when the behaviour is specified up front. The problem is that not a lot of my code is like that. The process of writing the code involves discovering what is required (which is very agile - write the code, get it into the hands of the stakeholder ASAP, revise the code according to feedback).

As much as possible, I do testing at the same time as coding. You need to test the code anyway to verify your solution is doing what it is supposed to do, so why not make a little extra effort and automate those tests? So yeah - having a separate ticket item "Write tests for ..." is a bad move.

I do agree that this process tends to miss edge cases. Hence, I also try to factor in a period of what I call "coverage testing" at some point during an iteration. Using a code coverage tool (e.g. gcov), take a close look at the lines of code not covered by the regression suite. Then think about what is involved in exercising those lines, and create tests where relevant. It's a great way of catching bugs. Not all lines of code need to be exercised by a test - the "dummy tests" mentioned above - so focusing on a coverage percentage is not really worthwhile. Focus on the lines not covered, and decide then whether testing is appropriate.
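The "coverage testing" pass above can be sketched without a real tool like gcov by logging which branches the existing suite actually hits (the function and branch names below are illustrative, not from the thread):

```python
# Manual stand-in for a coverage tool: each branch records that it ran.
hits = set()

def shipping_cost(weight_kg):
    if weight_kg <= 0:
        hits.add("invalid")
        raise ValueError("weight must be positive")
    if weight_kg < 1:
        hits.add("light")
        return 2.5
    hits.add("heavy")
    return 2.5 + 1.0 * weight_kg

# The existing regression suite only covers the common case:
assert shipping_cost(3) == 5.5

# The coverage-testing pass: look at what was never exercised and
# decide, branch by branch, whether a test is worth writing.
missed = {"invalid", "light", "heavy"} - hits
```

Here `missed` ends up containing the invalid-weight and light-parcel branches, which is exactly the list Russell suggests reviewing by hand rather than chasing a percentage.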

Matti Bar-Zeev

I agree with most of what you wrote here. As for the "coverage testing" period, this sort of thing does not apply when practicing TDD, for the obvious reason that there is not a line of code which did not start with a test.
What you mentioned about checking the coverage and writing tests accordingly is also my way when I need to write tests after. How else would you do it, right? ;)

GPWebDeveloper

Thanks for sharing.
I understand and agree with all the arguments listed. However, I have a hard time imagining and writing a test for code that doesn't yet exist. I imagine the task specification has to be very detailed.

Matti Bar-Zeev

Indeed it should. Truth is that starting the actual coding without a detailed specification can result in a great waste of time.

Juan Labrada

Actually, you cannot test your app until it is finished. What you mean is designing the tests based on the app's expectations, and that's why many agile practitioners consider TDD a design technique rather than a testing technique.
It has additional emotional benefits, for example reducing anxiety, since you quickly get feedback about the code you are writing - in contrast to testing at the end, when you anxiously hope your job is done and the deadline met, only to realize the code is full of bugs.

Jacek Andrzejewski

I disagree that practicing more TDD will help avoid many of these issues. Far more issues arise because people use TDD the wrong way, like writing tests that give nothing in return - for example, testing what happens if you pass an incorrect type to a function with typed arguments, or tests that are "tautologies" (use a setter to set a value, get the value, and compare).
Too much mocking happens when your code is overcomplicated, and it can be overcomplicated because you TDD everything, write too many tests, and try to make things easily testable and extendable where they don't need to be. An example is creating an interface for one class where there is a really low chance you will ever need a different class implementing that interface.
Another problem happens because TDD asks you to test units, and every single person defines "unit" differently in their head. I usually don't write a ton of unit tests, mostly because I usually create APIs, so I just write tests for the API and any mistake in a unit will come up there too.

TL;DR
Most problems happen because people don't understand what they read or don't think while writing. Using TDD and other practices won't help with those two problems and may make things worse.
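The "tautology" tests mentioned above can be illustrated with a minimal sketch (the `Cart` class and its methods are made up for the example):

```python
class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, qty):
        self.items.append((price, qty))

    def total(self):
        return sum(price * qty for price, qty in self.items)

# Tautological: set a value, read it straight back. It can never catch
# a bug in any logic worth testing.
cart = Cart()
cart.add(10.0, 1)
assert cart.items == [(10.0, 1)]

# Behavioral: pins down the arithmetic a caller actually relies on.
cart2 = Cart()
cart2.add(10.0, 2)
cart2.add(5.0, 1)
assert cart2.total() == 25.0
```

The first assertion only restates the setter; the second would fail if `total` ever multiplied or summed incorrectly.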

Matt Trachsel

Nothing better than fully testing a piece of code that is likely to need refactoring after a PR review, so that the 5x as much code in the tests you wrote needs redoing as well. Then, assuming that step only happens once, you ship the code with a bunch of low-value tech-debt tests on a piece of code that will very likely change or have a short lifespan. Great practice we're encouraging devs to do.

Ravi Suresh Mashru

I totally get what you're saying, and I also faced the same problem when starting out with TDD. However, it took me a few years to understand that good tests are tied to behavior. As a result, if something has to be refactored because of a PR comment, it doesn't break a test - because refactoring is all about changing implementation details; it doesn't change the behavior you expect from a class/function.
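The point about behavior-tied tests surviving a refactor can be sketched as one behavioral check that two different implementations both pass (all names here are illustrative):

```python
def unique_sorted_v1(values):
    # First implementation: manual dedup loop.
    seen, out = set(), []
    for v in values:
        if v not in seen:
            seen.add(v)
            out.append(v)
    return sorted(out)

def unique_sorted_v2(values):
    # Refactored after a PR comment: same behavior, simpler internals.
    return sorted(set(values))

def check(impl):
    assert impl([3, 1, 3, 2]) == [1, 2, 3]
    assert impl([]) == []

# The same behavioral test passes before and after the refactor.
check(unique_sorted_v1)
check(unique_sorted_v2)
```

Because `check` only states inputs and expected outputs, the rewrite from v1 to v2 requires no test changes at all.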

nrcaz

I think you should have emphasized this more in your post; I personally believe it is one of the most important aspects of TDD and how it becomes an agile practice. Writing tests after will tie them to your implementation 100% of the time - just try it. Even if you know TDD, you will have a hard time writing a test not tied to your implementation after coding it; you are biased, and it will be a really hard mental exercise.
Writing the test first challenges the design and the user needs. If you can't write the test because there are too many aspects you don't understand, review the design; don't start an implementation that will probably fail to answer the need. "Never implement without a test" really means never implement when your design is not thought out.
As you mentioned, since tests should be tied not to your implementation but to the users' needs, you can change your implementation any time you want, and your tests will even help you do it faster without breaking anything for your users.

Prafful Lachhwani

Completely agree. How do you overcome the urge to skip tests for now because you have lots of extra things to do?

At the start of a project we have enthusiasm, but when you have to meet deadlines you feel like skipping the unit tests for now.

Michael Powe

I'm bemused by the implication that people who don't TDD start out writing code without having thought out the design. Is that really a thing? "By the time I learned to talk, I forgot what to say."