DEV Community

Writing 'Testable' Code Feels Wrong

Jon Randy ๐ŸŽ–๏ธ on July 10, 2020

So, as weird as this may seem, I'm currently in the process of adding tests to a project I'm building at work. Why weird? Because I've never done t...
Aleksei Berezkin

Not all code needs to be unit-tested IMO. A good test is one where the code is complex but the test is easy. Perfect candidates: algorithms, utils, libs, etc. Bad candidates: getters and setters, DTOs, and boilerplate business logic.
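A minimal sketch of the kind of "perfect candidate" described above, using a hypothetical `chunk` utility (not from the article): the logic is easy to get wrong, while the test is almost trivial to write.

```python
# Hypothetical example: a small utility is a good unit-test candidate
# because the logic is non-trivial but the test is easy.
def chunk(items, size):
    """Split a list into consecutive chunks of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def test_chunk():
    # The whole contract fits in a couple of assertions.
    assert chunk([1, 2, 3, 4, 5], 2) == [[1, 2], [3, 4], [5]]
    assert chunk([], 3) == []
```

By contrast, writing a test like this for a plain getter would only restate the getter.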

Automated integration tests are a completely different story, but there is also room for frustration there. For example, Selenium tests usually locate controls by tags or classes, and when the UI changes, tests may break. However, not having tests is also dangerous.

So, it seems there can't be a strict rule, unfortunately. There must be a "reasonable" amount of tests, and the measure of "reason" is very vague.

Alex Rampp

Hi Jon,

thank you for sharing this thought. Personally, I'm a big advocate of developing test-first, since it helps me get clear about the expectations I have of the unit I'm working on. And since I mentioned the term 'unit': unit testing denotes the activity of verifying that this unit does the right thing.

In my opinion, one very interesting question on unit testing is: what is 'a unit'? Is it a function, a class, a module (whatever that is in your architecture and/or technology), or sometimes a whole (micro-)service? I think there is no general answer - it highly depends on the technology, the architecture, and the environment you're working in.

When it feels wrong to add additional abstractions just for the sake of testability, then perhaps the 'unit' was chosen too fine-grained. I had an interesting discussion on GitHub a few years ago where I refactored a parser module to make it more testable at the price of making it more complicated. In the end, the original author convinced me to see the parser as a black box and just test that a given input produces the correct data structure. In this example, I had a wrong definition of 'the unit'.
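A sketch of that black-box approach, using a hypothetical `parse_pairs` stand-in rather than the actual parser from that discussion: the test never touches internal helpers, only the input and the resulting data structure.

```python
# Hypothetical stand-in parser, tested purely as a black box.
def parse_pairs(text):
    """Parse 'key=value' lines into a dict, skipping blank lines."""
    result = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        key, _, value = line.partition("=")
        result[key.strip()] = value.strip()
    return result

def test_parse_pairs():
    # Only input -> output is asserted; internals are free to change.
    assert parse_pairs("a=1\n\nb = 2") == {"a": "1", "b": "2"}
```

Because the test only pins down the input/output contract, the parser's internals can be refactored freely without breaking it.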

But sometimes there are also cases where tests reveal a missing abstraction. Dependencies on external systems, legacy components, or network calls are good candidates for this. While coding, it's easy to just open a network socket and write some data to it. But it's a good idea to abstract these details into some small, low-level components. Sometimes tests lead to such refactorings, since code depending directly on a network socket is very hard to test. This is the core idea of Test-Driven Design.
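A minimal sketch of that refactoring idea, with hypothetical names: instead of opening a socket inline, the code depends on a small injected "writer", so a test can substitute an in-memory fake.

```python
# Hypothetical low-level abstraction: a fake writer a test can inject
# in place of a real socket-backed one.
class FakeWriter:
    def __init__(self):
        self.sent = []

    def write(self, data):
        self.sent.append(data)

def send_report(writer, values):
    """Business logic depends only on the injected writer interface."""
    writer.write(",".join(str(v) for v in values))

def test_send_report():
    fake = FakeWriter()
    send_report(fake, [1, 2, 3])
    assert fake.sent == ["1,2,3"]
```

In production, the same `send_report` would receive a writer that actually owns the socket; the test never needs a network.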

GrahamTheDev

As with anything in life (and as Thanos would say if he had been a developer), it is all about balance.

Writing 10 tests for a 1-line random number generator... you probably drank too much of the TDD Kool-Aid!

Writing a financial transaction application that powers a bank's infrastructure, with a team of senior devs, junior devs, subcontractors, and that guy Steve who just seems to love refactoring code despite being a graphic designer? Probably best to have a few tests, just to check the basics at least.

I think it is number of devs / average experience * size of project * how critical it is that dictates when to throw tests in (I am sure someone smarter than me could turn that into an actual formula to follow 😜). Essentially, "how many people can cock this up, how likely are they to cock it up, and if they cock it up how much of a problem is it" should be the key to how heavy your tests should be!

MxL Devs • Edited

Usually I just treat it as a black box. As long as it does what it's supposed to do, and doesn't do what it's not supposed to do, it should be fine?

If the purpose of testing is to make sure that new changes or new integrations don't break anything, by running a set of tests to confirm everything is still good, then a black-box test can probably serve that purpose.

Rob Seaver (He/him) • Edited

At first, it did feel awkward. I had been programming for fifteen years and had never written a test. When I was introduced to testing and -- more specifically -- TDD, it was jarring. Now, five years later, I've found that testing does help me think better about design and SOLID principles, and if I don't have tests I feel like I'm driving without a seatbelt. It took me about a year to get comfortable with TDD, but after repetition and practice, it feels much more natural now. I empathize with where you're at right now, though. I think over time you'll come to appreciate it, and I wish you good luck on your journey!

*Edit: I just realized that this post is several months old. Man, do I hate being late to the party!

GrahamTheDev

Don't worry, I am even later than you. I would just call us "fashionably late" 😄

Marcin Wosinek

I'm maybe from the 'TDD Kool-Aid' camp. When I started writing tests, I often had to organize my code in a special way to make it testable. Now, after probably 8 years of doing mostly TDD, I prefer code that has many layers of abstraction that make it testable.

I like the idea of automated testing: delegating part of our work to the machine, and providing working code & executable documentation for our colleagues or future selves. I agree that testing impacts code design, and I agree that 'testability' isn't the best metric for evaluating code design quality. But I think that code quality is more an art than a science, and in the end we are all following personal preference.

I'm open to the idea that making code testable can make it worse; but in the cases I deal with on a daily basis (business web apps), I think having unit tests is worth the price.

pentacular

I think that to have any meaningful kind of discussion you're going to need to talk about what you mean by "testable", since that's what's driving the changes that you find awkward.

Jon Randy 🎖️

Testable using automated test frameworks

pentacular

That's not very meaningful. :)

A test is just a small application that does something and sees if it got what it expected.

Parts of your code that have no API will be hard to test -- you'll need to simulate a human user.
(But is this useful to you? Hard to know, since you won't talk about your testing requirements)

Parts of your code that have an API will be straightforward to test -- you can just call them.

But sometimes code operates in a larger environment.
So testing writing to a database will require setting up a database to write to.

The usual approach here is to set up a virtual database of some kind -- perhaps a mock, fake, or lightweight implementation.

If your API doesn't allow the caller to supply the database they want to operate on, then you'll find those parts difficult to test.

(And for database, substitute any other kind of significant external resource).
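A small sketch of the idea above, with an illustrative in-memory stand-in for the database (all names hypothetical): because the caller supplies the database, the test can pass a fake instead of setting up a real server.

```python
# Hypothetical fake: a dict-backed "database" satisfying the same
# put/get interface the real one would.
class InMemoryDb:
    def __init__(self):
        self.rows = {}

    def put(self, key, value):
        self.rows[key] = value

    def get(self, key):
        return self.rows.get(key)

def register_user(db, user_id, name):
    """Business logic that only depends on the injected db interface."""
    db.put(user_id, {"name": name})
    return db.get(user_id)

def test_register_user():
    db = InMemoryDb()
    assert register_user(db, "u1", "Ada") == {"name": "Ada"}
```

The same `register_user` runs unchanged against the real database in production; only the object supplied by the caller differs.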

And that's pretty much all there is to it.

If your code has APIs, and a way to supply external dependencies, it should be straightforward to test.

But, again, you need to think about what kinds of tests are actually useful for your use-case.
Unit tests? Regression tests? Integration tests? End-to-end tests? QA tests? Monitoring?

Tests are a cost, so you should concentrate on tests with high utility, and minimize tests with low utility.

So you must actually think about what your testing requirements are, and what it will mean for your code to be testable.

Diana

Thank you, interesting article. I'll save it.