Yuri Costa

#8 Testing Strategies - Tips from The Clean Coder

This is the eighth article from the series "Tips from The Clean Coder". Here we gathered and summarized the main tips from the eighth chapter.

Professional developers test their code. But testing is not simply a matter of writing a few unit tests or a few acceptance tests. Writing these tests is a good thing, but it is far from sufficient. What every professional development team needs is a good testing strategy.

QA should find nothing

I've said this before, and I'll say it again. Despite the fact that your company may have a separate QA group to test the software, it should be the goal of the development group that QA finds nothing wrong.

Of course, it's not likely that this goal will be constantly achieved. Still, every time QA finds something the development team should react in horror. They should ask themselves how it happened and take steps to prevent it in the future.

QA is part of the team

The previous section might have made it seem that QA and Development are at odds with each other, that their relationship is adversarial. This is not the intent. Rather, QA and Development should be working together to ensure the quality of the system. The best role for the QA part of the team is to act as specifiers and characterizers.

QA as specifiers

It should be QA's role to work with business to create the automated acceptance tests that become the true specification and requirements document for the system.
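
To make that concrete, an automated acceptance test can read almost like the requirement itself. The sketch below is not from the book; the `PricingService` class and the discount rule are invented purely to illustrate the idea, here written with JUnit:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical business rule and class names, invented for illustration only.
class OrderDiscountAcceptanceTest {

    // Minimal production stand-in so the example is self-contained.
    static class PricingService {
        double priceWithDiscount(double orderTotal) {
            return orderTotal > 100.00 ? orderTotal * 0.90 : orderTotal;
        }
    }

    @Test
    void ordersOverOneHundredDollarsReceiveATenPercentDiscount() {
        PricingService pricing = new PricingService();

        double total = pricing.priceWithDiscount(120.00);

        assertEquals(108.00, total, 0.001); // 10% off 120.00
    }
}
```

The test name states the business rule, so the suite doubles as a readable specification.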

QA as characterizers

The other role for QA is to use the discipline of exploratory testing to characterize the true behavior of the running system and report that behavior back to development and business.

The Test Automation Pyramid

As good as it is to have a suite of unit and acceptance tests, we also need higher-level tests to ensure that QA finds nothing. The figure below shows the Test Automation Pyramid, a graphical depiction of the kinds of tests that a professional development organization needs.

[Figure: The Test Automation Pyramid]

Unit tests

These tests are written by programmers, for programmers, in the programming language(s) of the system. Their intent is to specify the system at the lowest level. They are written before the production code, as a way for programmers to specify what they are about to write.

They provide as close to 100% coverage as is practical. Generally, this number should be somewhere in the 90s. And it should be true coverage as opposed to false tests that execute code without asserting its behavior.
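
As a rough illustration (the `BoundedStack` class is invented for this example, not taken from the book), here is what a pair of JUnit unit tests with real assertions might look like:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// The BoundedStack class is invented for illustration; in real TDD the tests
// below would exist before this production code did.
class BoundedStackTest {

    static class BoundedStack {
        private final int[] items;
        private int size;

        BoundedStack(int capacity) { items = new int[capacity]; }

        void push(int value) {
            if (size == items.length) throw new IllegalStateException("stack is full");
            items[size++] = value;
        }

        int pop() {
            if (size == 0) throw new IllegalStateException("stack is empty");
            return items[--size];
        }
    }

    @Test
    void popReturnsTheLastPushedValue() {
        BoundedStack stack = new BoundedStack(2);
        stack.push(7);
        stack.push(9);

        assertEquals(9, stack.pop()); // a real assertion, not just "the code ran"
    }

    @Test
    void poppingAnEmptyStackFails() {
        BoundedStack stack = new BoundedStack(2);

        assertThrows(IllegalStateException.class, stack::pop);
    }
}
```

Each test asserts an observable behavior, which is what keeps the coverage number honest.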

Component tests

Generally, they are written against individual components of the system. These components encapsulate the business rules, so the tests for those components are the acceptance tests for those business rules.

A component test wraps a component. It passes input data into the component and gathers output data from it. It tests that the output is what we expect for the given input. Any other components are decoupled from the test using appropriate mocking and test-doubling techniques.

Component tests are written by QA and Business with assistance from Development. They are directed more towards happy-path situations and very obvious corner, boundary, and alternate-path cases. The vast majority of unhappy-path cases are covered by unit tests and are meaningless at the level of component tests.
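
A minimal sketch of such a test, assuming invented names (`CreditBureau`, `LoanApprover`) and a hand-rolled test double in place of the real collaborator:

```java
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Hypothetical component and collaborator names, invented for illustration.
class LoanApprovalComponentTest {

    // The collaborator the component normally talks to at runtime...
    interface CreditBureau {
        int scoreFor(String customerId);
    }

    // ...and the business-rule component under test.
    static class LoanApprover {
        private final CreditBureau bureau;

        LoanApprover(CreditBureau bureau) { this.bureau = bureau; }

        boolean approve(String customerId, double amount) {
            return bureau.scoreFor(customerId) >= 700 && amount <= 50_000;
        }
    }

    @Test
    void customersWithGoodCreditAreApprovedForModerateLoans() {
        // A hand-rolled test double decouples the component from the real bureau.
        CreditBureau stubBureau = customerId -> 720;

        LoanApprover approver = new LoanApprover(stubBureau);

        assertTrue(approver.approve("customer-42", 10_000)); // happy path
    }
}
```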

Integration tests

These tests only have meaning for larger systems that have many components. These tests assemble groups of components and test how they communicate with each other.

Integration tests are typically written by the system architects or lead designers. These tests ensure that the architectural structure of the system is sound.

They are typically not executed as part of the CI suite, because they often have longer runtimes. Instead, these tests are run periodically (nightly, weekly, etc.) as deemed necessary by their authors.
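
A sketch of what such a test might look like, with both components invented for illustration. Note that the collaborators are real objects rather than mocks, and the test is tagged so a build can run it outside the fast CI suite:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;

// Both components are invented for illustration; the point is that real
// components are wired together here, with nothing mocked out. The @Tag
// lets the build run these apart from the fast CI suite.
@Tag("integration")
class OrderingAndInventoryIntegrationTest {

    static class Inventory {
        private int unitsInStock = 5;

        boolean reserve(int units) {
            if (units > unitsInStock) return false;
            unitsInStock -= units;
            return true;
        }

        int unitsInStock() { return unitsInStock; }
    }

    static class OrderingComponent {
        private final Inventory inventory;

        OrderingComponent(Inventory inventory) { this.inventory = inventory; }

        String placeOrder(int units) {
            return inventory.reserve(units) ? "CONFIRMED" : "REJECTED";
        }
    }

    @Test
    void placingAnOrderReservesStockThroughTheRealInventoryComponent() {
        Inventory inventory = new Inventory();
        OrderingComponent ordering = new OrderingComponent(inventory);

        assertEquals("CONFIRMED", ordering.placeOrder(3));
        assertEquals(2, inventory.unitsInStock());
    }
}
```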

System tests

These are automated tests that execute against the entire integrated system. They are the ultimate integration tests. They do not test business rules directly. Rather, they test that the system has been wired together correctly and its parts interoperate according to plan.

They're written by the system architects and technical leads. We'd expect to see throughput and performance tests in this suite.

Their intent is not to ensure correct system behavior, but correct system construction.
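
As an illustration only (the /health endpoint, the localhost address, and the timing budget are assumptions, not anything prescribed by the book), a system-level smoke test that checks wiring and rough response time might look like this:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

// Assumes the whole system is already deployed and reachable at the URL
// below; the endpoint and the timing budget are invented for illustration.
@Tag("system")
class SystemSmokeTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void healthEndpointRespondsQuicklyWhenTheSystemIsWiredTogether() throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://localhost:8080/health")) // assumed address
                .timeout(Duration.ofSeconds(5))
                .build();

        long start = System.nanoTime();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMillis = (System.nanoTime() - start) / 1_000_000;

        assertEquals(200, response.statusCode());  // the parts interoperate
        assertTrue(elapsedMillis < 2_000,          // crude construction/performance check
                "health check took " + elapsedMillis + " ms");
    }
}
```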

Manual exploration tests

This is where humans put their hands on the keyboard and their eyes on the screens. These tests are not automated, nor are they scripted. The intent of these tests is to explore the system for unexpected behavior while confirming the expected ones.

Toward that end we need human brains, with human creativity, working to investigate and explore the system. Creating a written test plan for this kind of testing defeats the purpose.

Some teams will have specialists do this work. Other teams will simply declare a day or two of "bug hunting" in which as many people as possible take part. The goal is to ensure that the system behaves well under human operation and to creatively find as many "peculiarities" as possible.

Conclusion

TDD is a powerful discipline, and Acceptance Tests are valuable ways to express and enforce requirements. But they are only part of a total testing strategy.

Development teams need to work hand in hand with QA to create a hierarchy of unit, component, integration, system, and exploratory tests. These tests should be run as frequently as possible to provide maximum feedback and to ensure that the system remains continuously clean.

Next article: #9 Time Management (part 1)
Previous article: #7 Acceptance Testing
