100% Code Coverage is a Lie šŸŽÆ

Leonardo Montini on February 09, 2023

On a project I finally hit 100% Code Coverage šŸŽÆ what could go wrong now? I tested ALL lines of my code, there are no bugs! Well... not really. If ...

Vincent A. Cicirello

I certainly agree with the essence of your post. High coverage isn't sufficient. Tests must include edge cases. High coverage without the right assertions tells us only that lines were executed, not whether they are correct. And so on.

There's one thing I always see in posts like this about test coverage, and that is your example of getters, setters, and empty constructors:

"Imagine adding tests for simple getters and setters or an empty constructor with no logic. Do they increase the coverage? Yes. Do they add value? Nope."

If one assumes that tests for other things will cover these, but the report shows they were untouched by tests, then do you need those getters, setters, etc. at all? That is a question that should be asked whenever you have methods you don't feel should be tested.

If you need them, then you need to test them. Does that empty constructor initialize the object correctly with the specified default behavior? Does that object behave as specified if initialized with no parameters? Does that setter actually set (e.g., maybe someone forgot to implement it and it has an empty body)? Does that setter set correctly (e.g., maybe it must compute something first)? Will it continue to set correctly if the class changes in future releases (e.g., now it is simply this.x = x, but later someone has reason to change the class fields to eliminate x and define a y instead, thus requiring that setter to become y = f(x))? If you are testing that setter to begin with, you can detect a regression if one occurs during such a refactoring. The same potential issues apply to untested getters.
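
To make that concrete, here is a minimal sketch of the y = f(x) scenario. Everything in it is invented for illustration (a Thermostat class, Jest-style test and expect); the point is that the "trivial" setter and constructor tests survive the refactoring and catch the regression:

```typescript
// Hypothetical class illustrating the y = f(x) case: the field was changed
// from fahrenheit to celsius in a refactor, so the setter must now convert.
class Thermostat {
  private celsius = 0;

  setFahrenheit(f: number): void {
    this.celsius = ((f - 32) * 5) / 9; // forgetting this conversion is the regression
  }

  getFahrenheit(): number {
    return (this.celsius * 9) / 5 + 32;
  }
}

// The "trivial" setter test survives the refactoring and catches the regression.
test("setFahrenheit stores the value it was given", () => {
  const t = new Thermostat();
  t.setFahrenheit(212);
  expect(t.getFahrenheit()).toBeCloseTo(212);
});

// The "trivial" empty-constructor test pins down the documented default.
test("default-constructed Thermostat reads 32 °F (0 °C)", () => {
  expect(new Thermostat().getFahrenheit()).toBeCloseTo(32);
});
```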

Sergio Rodrigo

Agree with this. However, it usually turns into an excuse to write untested code, either by bypassing TDD or just out of laziness. Testing 100% of behaviours (as opposed to just lines of code) should be the goal. You don't need to test getters (they'll be tested indirectly when testing other code anyway), but pushing untested code branches is not cool. I'd say 95% of the time, this argument turns into an excuse not to be professional, even if the underlying principle is true.

Alex Lohr

Code coverage only tells you that code has run during tests. It's mostly useful for finding out which branches of your code are untouched by tests, so that you can consider whether they are worth testing.

It could very well be that a part of the code runs but there are no assertions to cover the result, which means you get 100% coverage and 0% confidence. What you actually want is to get the most confidence from the fewest tests possible, so don't test what is a) already known to work (e.g. that an event triggers a handler; you already know that) or b) irrelevant to the outcome of your use case.
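
A minimal sketch of the 100%-coverage-0%-confidence trap, with an invented applyDiscount function and Jest-style assertions:

```typescript
// Hypothetical function under "test".
function applyDiscount(price: number, percent: number): number {
  return price - price * (percent / 100);
}

// Every line of applyDiscount executes, so the coverage report shows 100% --
// but with no assertion, a broken implementation would still pass.
test("applyDiscount runs", () => {
  applyDiscount(100, 20);
});

// The same coverage with an assertion is what actually buys confidence.
test("applyDiscount takes 20% off", () => {
  expect(applyDiscount(100, 20)).toBe(80);
});
```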

Alex (The Engineering Bolt) āš”

Test coverage on its own is not a good tool or metric. Using it as part of TDD helps an engineer write better code and forces them to think about edge cases. What matters is the process, not the goal of covering everything with tests.

When adding tests, you should balance unit, functional, and integration tests to make sure that end-to-end app behaviours are captured.

Eljay-Adobe

"Code Coverage is a tool, not a goal."

That is a quotable quote!

I worked on a big project that was at 73% code coverage with unit tests. We devs were very happy with that. There were some folks (not devs) who were using the code coverage as a metric and wanted it to be higher.

That made no sense to us devs.

The value of doing test-driven development is that the unit tests are a forcing function that makes the code follow most of the SOLID principles. It makes the code avoid hidden dependencies and instead use dependency injection or parameter passing. It makes the code more robust and more malleable, reduces accidental complexity (one kind of technical debt), and yields higher cohesion and lower coupling. Highly coupled code is not unit-testable code.
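
As a rough illustration of the hidden-dependency point (the Clock interface and ReportService are invented, Jest-style assertions assumed):

```typescript
// With a hidden dependency, the class would call new Date() directly,
// and no test could pin down the output. Injecting the collaborator
// lets the test control it.
interface Clock {
  now(): Date;
}

class ReportService {
  constructor(private readonly clock: Clock) {} // passed in, not hidden

  generate(): string {
    return `Report generated at ${this.clock.now().toISOString()}`;
  }
}

test("generate stamps the report with the injected time", () => {
  const fixedClock: Clock = { now: () => new Date("2023-02-09T00:00:00Z") };
  const service = new ReportService(fixedClock);
  expect(service.generate()).toBe("Report generated at 2023-02-09T00:00:00.000Z");
});
```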

In my opinion, the value of SOLID is that OOP has some deficiencies. Over time, those deficiencies were noted, and SOLID was devised as a set of countermeasures to shore up where OOP was lacking.

The primary value of TDD is that it forces SOLID.

The secondary value of TDD is that it allows aggressive refactoring, with confidence.

The tertiary value of TDD is that, as an artifact, there is a test suite that should pass 100% of the time reliably and run very quickly (a few seconds), which ensures basic correctness of the code. And if it doesn't pass, there is either a regression, a non-regression bug in the code, or some tests in the suite that no longer jibe with the code (a bug in the tests).

Ravavyr

fully agreed, but then again writing tests is so nice...
And then you see a form and just hit the submit button without entering anything and watch it either

  • submit empty data
  • error out without a nice human friendly message
  • do nothing, no response, nada

When devs can't be bothered to even write basic error responses and basic form validation because "HTML5 has required!"... psh, tests are pointless.
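
For what it's worth, the empty-submit case above is cheap to pin down. A tiny sketch, with a hypothetical validateForm and Jest-style assertions:

```typescript
// Hypothetical validator: return human-friendly messages instead of
// submitting empty data or failing silently.
function validateForm(fields: Record<string, string>): string[] {
  return Object.entries(fields)
    .filter(([, value]) => value.trim() === "")
    .map(([name]) => `Please fill in the "${name}" field.`);
}

test("submitting an empty form yields friendly errors, not silence", () => {
  expect(validateForm({ email: "", message: "" })).toEqual([
    'Please fill in the "email" field.',
    'Please fill in the "message" field.',
  ]);
});
```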

Anthony Fung

Tests are good, but they should cover only what is needed. I've seen some people bend the code completely out of shape just for the sake of saying it was built with TDD. It made the code more difficult to follow, and it didn't actually test the scenario properly.

Another downside of too many tests is that the code becomes very difficult to modify if requirements change.

liam-jones-lucout

Agreed that 100% coverage is not 100% confidence; however, it's a good place to start, and better than 90% coverage. I consult, and every single place I've been that has a quality gate below 100% coverage magically manages to leave the most complicated code untested.

For stuff not worth testing I usually mandate comment labels to turn off coverage checks for those lines. That way the untested lines are explicit, and the coverage gate can stay at 100%, forcing developers to either write tests, which can be reviewed for efficacy, or declare a line not worth testing, which can be questioned.
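
As a concrete sketch of such a label, assuming an Istanbul-based coverage tool (the instrumenter Jest uses by default; other coverage tools have equivalent pragmas), with an invented loadConfig function:

```typescript
function loadConfig(json: string): Record<string, unknown> {
  /* istanbul ignore next */
  if (typeof json !== "string") {
    // Explicitly excluded from coverage: unreachable when callers respect
    // the type signature. The pragma makes the exclusion visible in review,
    // so the 100% gate stays in place.
    throw new TypeError("loadConfig expects a string");
  }
  return JSON.parse(json);
}
```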

Spotting something that isn't there on a PR without running it is usually very difficult, especially if a branch isn't tested or something like that.

KinsonDigital

I think it depends. If the goal is to get that extra green, then no, don't do it. But I don't do it for that. I do it to test data, as well as to make sure nothing has changed with the getters and setters.

Richard Guay

I'm a programmer with an ASIC design background as well. I once designed an ASIC that had 100% test vector coverage (meaning every electrical route in the chip was tested). But chips would still fail due to other problems (mostly chip mounting issues).

The same is true in software. Even with every line of code covered, there are ways to break almost any software, because it would be impossible to test every way the code could be used. As the author said, test coverage is a tool, but it should never be the goal.

I've never seen an article or book about building test coverage for user-found errors in the code. Those would be the best kinds of tests to write: actual areas of past failure, to ensure they don't come back in future versions. Spending more time on these kinds of tests would be of more value.
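
That style is usually called regression testing. A minimal sketch, with an invented bug number, function, and Jest-style assertion:

```typescript
// Fix for a (hypothetical) user-reported bug: prices like 1999.9999 cents
// were rendered as "19.999999" before rounding was added.
function formatPrice(cents: number): string {
  return (Math.round(cents) / 100).toFixed(2);
}

// Named after the report, so the history of the failure stays visible.
test("regression #482: floating-point cents no longer leak into the price", () => {
  expect(formatPrice(1999.9999)).toBe("20.00");
});
```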

Thomas Hansen

We have 100% on (some of) our projects. If you're writing library code, it's arguably a must ...

But I see your point ...

DamianReloaded

Tests are useful for detecting when changes to the code break dependencies that don't necessarily break the build. The clearest example would be database interaction, when changes to the code and to the database are made separately.
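
One way to sketch that idea is a contract test; the table, columns, and stubbed schema query below are all hypothetical:

```typescript
// Columns the application code reads; kept next to the data-access layer.
const EXPECTED_USER_COLUMNS = ["id", "email", "created_at"];

// Stand-in for a real query against information_schema.columns.
async function fetchUserTableColumns(): Promise<string[]> {
  return ["id", "email", "created_at"];
}

// Fails when a migration renames or drops a column the code still depends on,
// even though such a change would never break the build.
test("users table still exposes every column the code reads", async () => {
  const actual = await fetchUserTableColumns();
  for (const column of EXPECTED_USER_COLUMNS) {
    expect(actual).toContain(column);
  }
});
```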

Richard Forshaw

I agree with this. The key sentence is:

"The goal of tests is to ensure that the code works as expected, not to increase the coverage."

Everyone should live by this. In fact, philosophies such as TDD and BDD enforce it by first defining what the functionality is and then writing a test that verifies the code meets that expectation. Simply adding a test to increase physical code coverage does not usually map to a code function (although sometimes it will).

However I don't 100% agree with this:

"If you're not testing the business logic, you're not testing the code."

You can test code without testing business logic. Programming languages have limitations, just as the functionality you are trying to implement comes with constraints. This is especially true if you are implementing a stand-alone API that does not control what it receives as inputs.

E.g., if the database has a maximum commit size, then the stakeholders should be informed of how this impacts the user, and there may need to be a limit on what the user can do in one go.
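
A rough sketch of how such a constraint can still be tested without touching business logic (the 1,000-row limit and helper are invented for illustration):

```typescript
// Hypothetical limit imposed by the database; surfaced as an explicit
// constant so stakeholders and tests can both see it.
const MAX_COMMIT_SIZE = 1000;

function chunkForCommit<T>(rows: T[]): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < rows.length; i += MAX_COMMIT_SIZE) {
    chunks.push(rows.slice(i, i + MAX_COMMIT_SIZE));
  }
  return chunks;
}

// Tests the platform constraint, not any business logic.
test("bulk saves never exceed the database's maximum commit size", () => {
  const chunks = chunkForCommit(new Array(2500).fill(null));
  expect(chunks).toHaveLength(3);
  for (const chunk of chunks) {
    expect(chunk.length).toBeLessThanOrEqual(MAX_COMMIT_SIZE);
  }
});
```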