Nikolaos Gkogktzilas
Automated Software Testing Strategy

As a software developer, I have worked at different companies with different software testing processes. In most cases there was no specific or documented way of testing... so the what and how of the process was left up to the individual developer. As usually happens when there are no enforced, or at least documented, policies, things start to derail after a while.

In bad test stacks you would see things like:

  • Duplicate or semi-duplicate tests. Mindlessly adding test cases here and there without a plan can result in duplicate tests. You could argue that this is at least better than no tests, but we can surely do better.
  • Different types of tests crammed into the same TestClass. Testing both unit and integration cases, or both system and unit cases. Yes, all these tests exercise the same class, but at different levels.
  • Tests took too long because the whole suite (even the known slow tests) ran all together. This can make local development very slow and boring. A small separation between fast and slow tests can go a long way, but you need to have these types of tests separated in order to run them separately (hint to the previous point!!)
  • Tests had to be run in a specific order. Adding a new test was like playing inverted Jenga: if the test was not carefully added at a place where it fit with the rest, the whole thing would crash and you would have to debug the tests to find the correct order.
  • Not feeling confident in our test suite, or that passing tests actually mean anything. This is the worst feeling as a software developer. It brings in stress and anxiety when it should be the other way around. Your tests should give you confidence and minimise the risk and fear of deployment, thus resulting in a more continuous deployment setup. You have probably heard something similar to "no deployments on Fridays"!!!
  • People get discouraged and stop writing quality tests, or any tests at all, due to all the chaos of the previous points. This is when the shit hits the fan: there is no willingness to improve the testing phase and the whole thing just keeps going aimlessly until all the devs get burned out and abandon ship.

So how should we test then? 

DISCLAIMER NOTES !!!!

  • This example testing strategy is mainly focused, based on my previous experiences, on your average SaaS-like business software. Any more niche or special product will most probably deviate from this, but you can extrapolate and build your own strategy from the basic idea.
  • Sadly, I haven't yet managed to put this strategy into an actual production product, so the discussion will be on a theoretical basis for me. Hopefully, at some point, I might be lucky enough to put it into practice and then I can let you know how it went :P
  • Obviously, the idea of splitting your types of tests is not some new groundbreaking idea that you have never heard before. I am just trying to simplify the idea, so that hopefully it becomes easier to explain (or sell) to non-technical people why this is something we need for a successful product. It should also help us as developers actually implement such a testing stack with a fair success rate, as it is very easy to get carried away by pressure and deadlines and fall back to bad testing practices.

Introduction

The whole idea of the strategy is to split your tests based on:

  • How low or high level they are. Examples:
    • Unit tests are low level tests
    • System tests are high level tests
  • How technical or business related they are.
    • Integration tests are technical
    • BDD tests are business tests

In this way, we test our software at different levels and against different value outcomes. At the same time, these value outcomes could even have different stakeholders, who will be in charge of (or care more about) improving the test results of that stage. Examples:

  • Programmers will probably care more about unit and system tests
  • Product people will care more about BDD tests
  • Infrastructure people will care more about performance tests

Once more, this is just an example. Depending on your case, you might have 3 devs overseeing the whole test stack, or you might have a team for each stage. The following test stack is an example of a stack that I would like to see in some of the projects that I have worked on in the past.
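To make this split actionable, each test needs to be tagged with its level so that every step of the stack can run on its own. Below is a minimal sketch of how this could look with pytest markers; the marker names and the sample tests are hypothetical, not from any specific project:

```python
import pytest

# Hypothetical naming: one marker per step of the stack. Markers would be
# registered in pytest.ini so that typos fail loudly:
#
#   [pytest]
#   markers =
#       unit: fast, isolated tests
#       system: slower black-box tests through the public API

@pytest.mark.unit
def test_price_formatting():
    # A trivial, fully isolated check: milliseconds to run.
    assert f"{19.9:.2f} EUR" == "19.90 EUR"

@pytest.mark.system
def test_whole_checkout_flow():
    ...  # placeholder for a slower, end-to-end check

# Run only the fast level locally:       pytest -m unit
# Run everything except the slow level:  pytest -m "not system"
```

This is exactly the separation that fixes the "whole suite runs all together" problem from the list above: each stage of the staircase below gets its own marker and its own place in the pipeline.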

The test stack

As you can see from the staircase graph, you go from low level tests to higher level tests, and from technical to more business related types of tests.

Testing Staircase Graph

Let's see what each type/step stands for:

Unit Tests

Unit Testing is a type of software testing where individual units or components of a software are tested. The purpose is to validate that each unit of the software code performs as expected. Tests isolate a section of code and verify its correctness. A unit may be an individual function, method, procedure, module, or object.

USAGE:

  • Developers receive fast feedback on code quality through regular execution of unit tests.
  • Unit tests force developers to work on the code instead of just writing it. In other words, the developer must constantly rethink their own methodology and optimize the written code after receiving feedback from the unit test.
  • Runs each test case in an isolated manner, with "stubs" or "mocks" used to simulate external dependencies. This ensures a unit test only considers the functionality of the current unit under test.
  • Unit tests enable high test coverage.
  • Runs frequently and early in the development lifecycle.
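As a rough illustration of the isolation point above, here is a minimal sketch of a unit test where the external dependency is mocked away (the `apply_discount` function and its rate service are hypothetical examples):

```python
from unittest.mock import Mock

# Hypothetical unit under test: applies a discount fetched from an
# external rate service to an order total.
def apply_discount(total: float, rate_service) -> float:
    rate = rate_service.get_rate()  # external dependency
    return round(total * (1 - rate), 2)

def test_apply_discount_uses_rate_from_service():
    # The dependency is replaced with a Mock, so only this unit's
    # logic is exercised -- no network, no real service.
    rate_service = Mock()
    rate_service.get_rate.return_value = 0.10

    assert apply_discount(100.0, rate_service) == 90.0
    rate_service.get_rate.assert_called_once()
```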

Integration Tests

Integration Testing is defined as a type of testing where software modules are logically integrated and tested as a group. A typical software project consists of multiple software modules, coded by different programmers. The purpose of this level of testing is to expose defects in the interaction between these software modules when they are integrated. Additionally, interactions with dependencies are also tested (databases, files, APIs etc.).

USAGE:

  • Ensures that every integrated module/dependency functions correctly
  • Usage of "real-like" data when integrating (data fixtures, test files, mocked or dev APIs etc.)
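A minimal sketch of what such a test could look like, using an in-memory SQLite database as the "real-like" dependency; the `save_user` function and the table schema are hypothetical:

```python
import sqlite3

# Hypothetical code under test: persists a user and returns its id.
def save_user(conn, name: str) -> int:
    cur = conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()
    return cur.lastrowid

def test_save_and_load_user_roundtrip():
    # A real (in-memory) database stands in for the production one;
    # the row below is the "real-like" fixture data mentioned above.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

    user_id = save_user(conn, "Ada Lovelace")
    row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()

    assert row == ("Ada Lovelace",)
```

Unlike the unit test earlier, nothing is mocked here: the point is precisely to exercise the interaction with the dependency.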

Smoke Tests

Smoke tests are a subset of system test cases that cover the most important functionality of a component or system, used to assess whether the main functions of the software appear to work correctly. I call them an "optimisation" as they are also used as a fail-safe: if these tests fail, we don't proceed to run the full system tests, which will probably be a slower process. As this stage tests the most important functionality, I have also seen these tests run against production environments, either after each deployment or on a cron schedule. When used this way, they might also be called "Live Tests".

USAGE:

  • To determine if a computer program should be subjected to further, more fine-grained testing, like system/BDD/performance tests, which are generally more time-consuming.
  • To determine if a computer program's state in production is still as expected, by running these tests at an interval (live tests).
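A sketch of what a couple of smoke tests might look like against a deployed environment; the base URL and endpoints are hypothetical placeholders:

```python
import requests

BASE_URL = "https://staging.example.com"  # hypothetical environment URL

# A handful of cheap checks on the most critical endpoints. If any of
# these fail, the slower system/BDD/performance suites are not run.
def test_service_is_up():
    assert requests.get(f"{BASE_URL}/health", timeout=5).status_code == 200

def test_login_endpoint_responds():
    resp = requests.post(
        f"{BASE_URL}/login",
        json={"user": "smoke", "password": "smoke"},
        timeout=5,
    )
    # Wrong credentials are fine here; we only care that the endpoint answers.
    assert resp.status_code in (200, 401)
```

Pointed at production and run on a schedule, the exact same tests become the "live tests" mentioned above.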

System Tests

System testing tests the entire system, checking if the system works in harmony with all the integrated modules and components. In my "book", you perform these tests in a "black-box" way, where you test from the outside looking in without using knowledge of how the internals work. This stage lies in the technical tests area, which means that you should try to stick to technical testing (e.g. authentication, validation, CRUD requests, error handling etc.)

note: System Tests can mean many different things to different teams. As mentioned in the disclaimer earlier, I mainly focus on your average SaaS-like business software, so in this scenario these tests would mean testing functionality through API interactions.

USAGE:

  • Thoroughly test every input to the application and check for the desired outputs.
  • Usage of "real-like" data like in integration tests (data fixtures, test files, mocked or dev APIs etc.)
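For example, a black-box system test drives the public API only, never the internals. A sketch, assuming a hypothetical orders API and the `requests` library:

```python
import requests

BASE_URL = "https://api.example.com"  # hypothetical system under test

def test_create_then_fetch_order():
    # Pure black box: create via the public API, then read it back.
    created = requests.post(
        f"{BASE_URL}/orders", json={"item": "book", "qty": 2}, timeout=10
    )
    assert created.status_code == 201
    order_id = created.json()["id"]

    fetched = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["qty"] == 2

def test_validation_rejects_bad_input():
    # Error handling, also tested from the outside looking in.
    resp = requests.post(f"{BASE_URL}/orders", json={"qty": -1}, timeout=10)
    assert resp.status_code == 422
```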

BDD Tests

BDD uses human-readable descriptions of software user requirements as the basis for software tests. Each test is based on a user story. It sits one step higher than system tests: in these BDD tests we test the same way as in System Tests (API calls), but in the format of user stories (scenarios), to assert that certain stories/flows work as expected.

USAGE:

  • A team using BDD should be able to provide a significant portion of “functional documentation” in the form of User Stories augmented with executable scenarios or examples.
  • It helps non-technical people understand the automated tests easily (human-readable descriptions).
  • Like Domain Driven Design (DDD), an early step in BDD is the definition of a shared vocabulary between stakeholders, domain experts, and engineers.
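A minimal sketch of how such a scenario could be wired up, assuming a recent version of pytest-bdd; the checkout story, the feature file path and the step logic are all hypothetical:

```python
from pytest_bdd import scenario, given, when, then, parsers

# Assumed contents of features/checkout.feature (the human-readable side):
#
#   Feature: Checkout
#     Scenario: Customer pays for a filled cart
#       Given a cart with 2 items
#       When the customer checks out
#       Then the order is confirmed

@scenario("features/checkout.feature", "Customer pays for a filled cart")
def test_checkout():
    pass

@given(parsers.parse("a cart with {count:d} items"), target_fixture="cart")
def cart(count):
    return {"items": count}

@when("the customer checks out", target_fixture="result")
def checkout(cart):
    # In a real suite this would be the same API call as a system test.
    return {"status": "confirmed"} if cart["items"] > 0 else {"status": "rejected"}

@then("the order is confirmed")
def order_confirmed(result):
    assert result["status"] == "confirmed"
```

The feature file is the shared vocabulary artefact: product people read and review the Gherkin, while developers maintain the step definitions underneath.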

Performance Tests

So far, we have tested that everything runs correctly tech-wise and feature-wise. Things are looking great and we are happy !!! But are these correct responses returned... in a timely manner?! What if there were 5 more users using our service... or 10? How many concurrent users or requests can we handle before we start noticing delays in response time? This is what we try to cover with these tests: even though we return correct results, we also return them within the expected response time.

USAGE:

  • Some of the metrics to test for are:
    • Response Time: Time taken for the API to return a response
    • Throughput: Number of requests processed per unit time.
    • Concurrency: Number of simultaneous users or requests the API can handle.
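As an example, here is a tiny load-test sketch using Locust, a common choice for this kind of test; the host, endpoints and task weights are hypothetical:

```python
from locust import HttpUser, task, between

# Run with:
#   locust -f locustfile.py --host https://staging.example.com
class ApiUser(HttpUser):
    wait_time = between(1, 3)  # each simulated user pauses 1-3s between requests

    @task(3)  # weighted: listing happens 3x more often than creating
    def list_orders(self):
        self.client.get("/orders")

    @task
    def create_order(self):
        self.client.post("/orders", json={"item": "book", "qty": 1})
```

Locust's report then maps directly onto the three metrics above: response time percentiles, requests per second (throughput), and behaviour as the number of simulated users (concurrency) grows.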

Conclusion

What we covered in this short text is an example strategy/pipeline/process (call it what you like) for testing software at different levels (low - high) and for different value outcomes (tech - business), along with some examples of how things end up when there is no good test stack process defined. Obviously, this is a 10k feet view of a test stack! Each individual step will require a lot more work, documentation and planning to actually be implemented correctly in your test stack. However, having an initial plan in mind will probably bring you far better results than just winging it !!!
