ca55idy

Is it possible to get 100% test coverage?

Top comments (3)

Austin S. Hemmelgarn (ahferroin7)

Yes, it's technically possible.

The better question though is: Is it worth the effort?

The problem with going for 100% test coverage is that it quite often ends up warping the design of the code you're trying to test, or making the tests confusing in some way.

As a quick example, consider a simple replace-by-rename routine for writing out whole files atomically (in simplified Python 3 here, but the language doesn't really matter):

import os

def atomic_write_file(path, data):
    # Write to a hidden temporary file in the same directory, then rename it
    # over the destination so readers never see a partially written file.
    path = os.path.abspath(path)
    tmp = os.path.join(os.path.dirname(path), "." + os.path.basename(path) + ".tmp")

    try:
        with open(tmp, mode="w") as file:
            file.write(data)
    except (IOError, OSError):
        print("Failed to write to temporary file!")
        return False

    try:
        os.rename(tmp, path)
    except OSError:
        print("Failed to rename temporary file to final destination!")
        return False

    return True

This seems simple to test, except there are a lot of possible conditions you need to be testing if you want 100% coverage:

  • Succeeds if the destination and temporary file don't exist.
  • Succeeds if the destination exists, but the temporary file doesn't.
  • Succeeds if the destination doesn't exist, but the temporary file does.
  • Succeeds if both the destination and temporary file exist.
  • Fails if it can't write to the temporary file due to an IO error.
  • Fails if it can't write to the temporary file due to an OS error.
  • Fails if the temporary file doesn't exist, but the rename fails.
  • Fails if the temporary file does exist and the rename fails.

All of the success conditions are easy to check, but you have to remember to check them. The failure conditions, though, are trickier: you either have to override both the open() and rename() functions in some way for the test so that they throw the appropriate errors, or you have to do creative things in the test case that rely on filesystem semantics and the exact behavior of the language the code is written in.
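
For instance, with pytest you could cover one success case with the tmp_path fixture and simulate a rename failure with monkeypatch. This is just a rough sketch; the module name mymodule and the test names are placeholders, not anything from the code above:

import os

from mymodule import atomic_write_file  # placeholder: wherever the routine actually lives


def test_succeeds_when_nothing_exists(tmp_path):
    target = tmp_path / "out.txt"
    assert atomic_write_file(str(target), "hello") is True
    assert target.read_text() == "hello"


def test_fails_when_rename_raises(tmp_path, monkeypatch):
    # Force os.rename to fail so the rename error path runs.
    def fake_rename(src, dst):
        raise OSError("simulated rename failure")
    monkeypatch.setattr(os, "rename", fake_rename)
    assert atomic_write_file(str(tmp_path / "out.txt"), "hello") is False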

Realistically though, you probably don't need to care about differentiating those last two failure cases if you're properly checking all the success cases (because you'll already be testing write paths for the temporary file in the success cases). Additionally, if your app is going to bail anyway any time this routine fails, you may not even need to test the failure cases at all, and it may just be better to remove the try/except clauses and have the caller handle the exceptions (which then makes it easy to test the caller in the failure cases).
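
For illustration, that stripped-down variant might look something like this (just a sketch of the idea, not something from the original routine):

import os

def atomic_write_file(path, data):
    # Any IOError/OSError from open(), write(), or os.rename() now propagates
    # to the caller, which decides how to handle (and test) the failure.
    path = os.path.abspath(path)
    tmp = os.path.join(os.path.dirname(path), "." + os.path.basename(path) + ".tmp")

    with open(tmp, mode="w") as file:
        file.write(data)

    os.rename(tmp, path)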


Something else to consider, though: a lot of code coverage tools I've seen focus on either making sure every function is tested or making sure every statement is run during tests. Both of those are OK, but they're also not what you should generally be checking. What matters if you're looking at code coverage is which paths through your code are being tested (so, edges in the control flow graph). That will inherently cover every statement, and it will provide coverage info in a much more useful way than looking at functions or statements.
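
If you're using coverage.py, for example, you can ask it to measure branch (edge) coverage instead of plain statement coverage; something along these lines should work:

coverage run --branch -m pytest
coverage report -m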

Carson Sturtevant (sturdy_dev)

Possible? Yes. Something to focus on? Meh. There's most likely plenty of code in your projects that simply doesn't warrant automated testing. I would instead focus on having well-written tests for the most crucial and/or delicate parts of your app.

Nguyen Kim Son (sonnk)

Yes, it's possible, but getting 100% coverage should not be the only goal. In a big project, it might make more sense to identify the most important user scenarios and add tests for those scenarios, instead of covering all possible cases.

Btw, there are different types of coverage (function, statement, edge, etc.).