Stephen Leyva (He/Him)

How much code coverage is enough?

I have always thought that code coverage can be looked at in a similar way as availability. According to the SRE handbook, a system should be as available as it needs to be; anything more is wasted effort for the value returned. In a similar way, it may be possible to test 100% of your code, but does it really return value for the effort it takes to implement? Even Google aims for 85%. It is also worth noting that language plays a big part in how practical and easy it is to write tests. So I leave it to you: how do you decide what coverage is appropriate for a given project?
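Whichever target you pick, it's worth enforcing mechanically rather than by eyeballing reports. Here is a minimal sketch, assuming a Python project using the coverage.py library (its config-file equivalent is the `fail_under` setting); the 85% threshold is purely illustrative:

```python
import coverage

cov = coverage.Coverage()
cov.start()
# ... run your test suite or otherwise exercise the code here ...
cov.stop()
cov.save()

total = cov.report()  # returns the total coverage percentage as a float
TARGET = 85.0         # illustrative threshold, not a recommendation
if total < TARGET:
    raise SystemExit(f"coverage {total:.1f}% is below the {TARGET:.0f}% target")
```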

Top comments (13)

Chris James

I would look at it from a different angle.

Are you getting lots of defects?

Then your testing strategy probably isn't good enough.

Do you feel confident to refactor and change your system?

If not, then your testing strategy probably isn't good enough.

Trying to aim for a number is not going to help you. Tests are a means to an end, measure those ends instead.

JustinTervala

I like this idea. Aiming for high coverage is good, but it's equally if not more important that your tests are flexible and assert the right conditions. Simply because your test has hit a line of code doesn't mean it has meaningfully tested that line. That being said, there is no chance a line of code is tested if it isn't covered, so any changes to it could result in defects or problems when refactoring.
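To make that concrete, here is a hypothetical Python sketch (the function and pytest-style tests are invented for illustration): both tests execute every line of `apply_discount`, so a coverage report scores them identically, but only the second would catch a broken calculation.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return price * (1 - percent / 100)


def test_covers_but_asserts_nothing():
    # Executes the line, so it counts toward coverage,
    # but it would still pass if the formula were wrong.
    apply_discount(100.0, 20.0)


def test_asserts_the_right_condition():
    # Fails if the calculation ever changes, which is the point.
    assert apply_discount(100.0, 20.0) == 80.0
```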

Stephen Leyva (He/Him)

I like this idea. Many people get caught up in the numbers game and may even lose sight of testing the correct thing. Measuring the ends is what's important.

edA-qa mort-ora-y

Coverage is not a good metric if followed strictly. It ignores the significance of the code being covered, giving equal value to places that are complex and those that are trivial. It creates a bogus prioritization of what to test.

In the general sense of the word "coverage" we do want "full" coverage. You should test all your code to some degree.

Here are some practical tips:

  • Ensure common use-case paths are covered. If it's written as a high-level feature, it should somehow be tested.
  • Avoid testing unexpected error conditions throughout the code. That is, if you have error-detection code (and you do, because you're coding defensively), it's a very low-value proposition to ensure those checks work (see the sketch after this list).
  • Avoid testing trivial code directly, assume transitive testing of classes will cover them, or even just rely on code review to catch obvious mistakes.
  • The extent of coverage relates to the complexity of the module. An algorithm that took a week to get right requires enough tests to verify its correctness. Some unavoidable boilerplate code may not need much, if any at all.
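On the second tip, one way to keep defensive checks from skewing the metric is to exclude them explicitly. A rough sketch in Python, using coverage.py's `# pragma: no cover` marker (the config-file parser here is invented for illustration):

```python
def load_config(path: str) -> dict:
    """Parse a tiny key=value config file (hypothetical example)."""
    entries = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            if "=" not in line:  # pragma: no cover
                # Defensive check: low value to test directly, so it is
                # excluded from the coverage metric rather than "covered"
                # with a throwaway test.
                raise ValueError(f"malformed line: {line!r}")
            key, value = line.split("=", 1)
            entries[key.strip()] = value.strip()
    return entries
```

coverage.py then reports the file as fully covered even though the defensive branch is deliberately left untested.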
Ben Halpern

You're going to get diminishing returns after a point, so 85-95% strikes me as a good range, but quality can't really be measured purely by a number. It helps you get there but can't be the be-all and end-all.

Mark Otway

Don't concentrate on coverage numbers. If your most complex, fragile, and/or critical code path is covered with multiple good-quality tests, that's way better than if you've spent days eking out some extra coverage for all your getters, setters, and generated code.

What you're looking for with test coverage isn't a number, or completeness; it's the confidence to change stuff and know that if you break something, your tests will flag it up for you. So test stuff that matters and that is likely to be impacted by future changes.

I've got projects with 50% coverage that are tested awesomely and give huge confidence and change agility. I've also seen projects with > 90% coverage that break every release because most of the tests are worthless.

Dian Fay

It's got to be a little al dente but still cooked through.

Hila Berger

I think that high coverage is always a good thing, but since time is always lacking, the most important thing is that the main methods are covered.
For example, if your code is only 60% covered but what's left uncovered is just the "small" methods, that's OK, since the chance of bugs in those methods is lower, and even if a bug occurs, it's easier to find. Do you agree?

Robert Myers

You also want to be careful of exactly what 100% means.

You can get 100% coverage by automatically generating tests. That doesn't tell you anything about the correctness of those tests, just that the current code passes.
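As a hypothetical Python illustration: a generated "characterization" test locks in whatever the code currently returns, so it can reach 100% coverage while happily blessing a bug.

```python
def days_in_february(year: int) -> int:
    # Buggy: ignores the century rule (1900 was not a leap year).
    return 29 if year % 4 == 0 else 28


# A tool that generates tests from current behavior would record this:
def test_days_in_february_generated():
    assert days_in_february(2024) == 29
    assert days_in_february(2023) == 28
    assert days_in_february(1900) == 29  # passes, yet the right answer is 28
```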

Adam Bullmer

I've actually realized a benefit to 100% coverage that you can't get with anything less: if you maintain 100%, you will never introduce untested code. Otherwise you might selectively add code, maintain the overall coverage percentage, and no one will notice that unexercised code was introduced.

Of course, quality and fragility of tests are also important factors in successful testing. Meaningful assertions that utilize this coverage are also necessary, or else you aren't fully reaping the benefits of it. 100% coverage is generally coaxed out one way or another, be it in your automated unit/integration/e2e tests or through manual QA.

It definitely takes a lot of work to get there, and it isn't recommended for time-constrained projects. I do agree with the other comments here that it isn't always necessary, but I felt compelled to highlight the other side of the argument.
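The arithmetic behind the 100% argument is worth spelling out: against a large covered codebase, a wholly untested addition barely moves the overall number, so only a 100% bar makes it visible. A quick illustration with made-up numbers:

```python
# Hypothetical numbers: why an overall threshold below 100% can hide
# brand-new, completely untested code.
covered, total = 9_000, 10_000            # existing project: 90.0% coverage
new_untested_lines = 200                  # new feature shipped with no tests

overall = covered / (total + new_untested_lines) * 100
print(f"overall coverage after the change: {overall:.1f}%")  # 88.2%
# An 85% gate still passes; only a 100% gate flags the new code.
```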

Tim Preston

The way I look at it, with our huge amount of legacy code: Something > Nothing.

Joshua Johnson

Also be careful because you can have 100% coverage with bad tests. This is not helpful either.