Manuel Rivero

Originally published at codesai.com

Listening to the tests to improve a design

Introduction.

Recently, in the B2B team at LIFULL Connect, we improved the validation of the clicks our API receives by using a service that detects whether a click was made by a bot or by a human being.

We used TDD to add this new validation to the previously existing one, which checked whether the click contained all the mandatory information. This was the resulting code:

<?php
namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
  private $botClickDetector;
  private $domainLogger;
  private $paramsValidator;

  public function __construct(
    BotClickDetector $botClickDetector,
    DomainLogger $domainLogger,
    ClickParamsValidator $paramsValidator
  )
  {
    $this->botClickDetector = $botClickDetector;
    $this->domainLogger = $domainLogger;
    $this->paramsValidator = $paramsValidator;
  }

  public function isValid(array $click)
  {
    if (!$this->paramsValidator->isValid($click)) {
      return false;
    }

    $isBotClick = $this->botClickDetector->isBot($click['userIp']);
    if ($isBotClick) {
      $this->domainLogger->logBotClick($click);
      return false;
    }
    return true;
  }
}

and these were its tests:

<?php
namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Prophecy\Argument;
use Trovit\B2B\AdClick\Domain\BotClickDetector;
use Trovit\B2B\AdClick\Domain\ClickParamsValidator;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\DomainLogger;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

class ClickValidationTest extends TestCase
{
  private $click;
  private $botDetector;
  private $domainLogger;
  private $clickValidation;
  private $paramsValidator;

  protected function setUp()
  {
    $this->click = ClickRawDataBuilder::clickMandatoryRawData()->build();
    $this->domainLogger = $this->prophesize(DomainLogger::class);
    $this->botDetector = $this->prophesize(BotClickDetector::class);
    $this->paramsValidator = $this->prophesize(ClickParamsValidator::class);
    $this->clickValidation = new ClickValidation(
      $this->botDetector->reveal(),
      $this->domainLogger->reveal(),
      $this->paramsValidator->reveal()
    );
  }

  /** @test */
  public function a_click_by_a_bot_is_not_valid_and_is_logged()
  {
    $this->paramsValidator->isValid(Argument::any())->willReturn(true);
    $this->botDetector->isBot($this->click['userIp'])->willReturn(true);

    $isValid = $this->clickValidation->isValid($this->click);

    $this->assertFalse($isValid);
    $this->domainLogger->logBotClick($this->click)->shouldHaveBeenCalledOnce();
  }

  /** @test */
  public function a_click_by_human_is_valid()
  {
    $this->paramsValidator->isValid(Argument::any())->willReturn(true);
    $this->botDetector->isBot($this->click['userIp'])->willReturn(false);

    $isValid = $this->clickValidation->isValid($this->click);

    $this->assertTrue($isValid);
    $this->domainLogger->logBotClick($this->click)->shouldNotHaveBeenCalled();
  }

  /** @test */
  public function a_click_missing_a_mandatory_param_is_not_valid_but_is_not_logged()
  {
    $this->paramsValidator->isValid(Argument::any())->willReturn(false);
    $isValid = $this->clickValidation->isValid($this->click);

    $this->assertFalse($isValid);
    $this->domainLogger->logBotClick($this->click)->shouldNotHaveBeenCalled();
  }
}

The problem with these tests is that they know too much. They are coupled to many implementation details: they know not only the concrete validations we apply to a click and the order in which they are applied, but also what gets logged when a concrete validation fails. There are multiple axes of change that will make these tests break. The tests are fragile along those axes and, as such, they might become a maintenance burden if changes along them are ever required.

So what might we do about that fragility when any of those changes come?

Improving the design to have less fragile tests.

As we said before, the test fragility was hinting at a design problem in the ClickValidation code. The problem is that it concentrates too much knowledge because it's written in a procedural style: it queries every concrete validation to know whether the click is ok, combines the results of all those validations, and knows when to log validation failures. Those are too many responsibilities for ClickValidation, and they are the cause of the fragility in the tests.

We can revert this situation by changing to a more object-oriented implementation in which responsibilities are better distributed. Let's see how that design might look:

1. Removing knowledge about logging.

After this change, ClickValidation will know nothing about logging. We can use the same technique to avoid knowing about any similar side effects that concrete validations might produce.

First we create an interface, ClickValidator, that any object that validates clicks should implement:

<?php
namespace Trovit\B2B\AdClick\Domain;

interface ClickValidator
{
  public function isValid(array $click);
}

Next we create a new class, NoBotClickValidator, that wraps BotClickDetector and adapts[1] it to implement the ClickValidator interface. This wrapper also enriches BotClickDetector's behavior by taking charge of the logging when the click is not valid.

<?php
namespace Trovit\B2B\AdClick\Domain;

class NoBotClickValidator implements ClickValidator
{
  private $botClickDetector;
  private $domainLogger;

  public function __construct(
    BotClickDetector $botClickDetector,
    DomainLogger $domainLogger
  )
  {
    $this->botClickDetector = $botClickDetector;
    $this->domainLogger = $domainLogger;
  }

  public function isValid(array $click)
  {
    $clickMadeByBot = $this->botClickDetector->isBot($click['userIp']);
    if ($clickMadeByBot) {
      $this->domainLogger->logBotClick($click);
      return false;
    }
    return true;
  }
}

These are the tests of NoBotClickValidator, the class that takes care of the delegation to BotClickDetector and of the logging:

<?php
namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\BotClickDetector;
use Trovit\B2B\AdClick\Domain\NoBotClickValidator;
use Trovit\B2B\AdClick\Domain\DomainLogger;

class NoBotClickValidatorTest extends TestCase
{
  private $click;
  private $botDetector;
  private $domainLogger;
  private $noBotClickValidator;

  protected function setUp()
  {
    $this->click = ['userIp' => 'someUserIp'];
    $this->domainLogger = $this->prophesize(DomainLogger::class);
    $this->botDetector = $this->prophesize(BotClickDetector::class);
    $this->noBotClickValidator = new NoBotClickValidator(
      $this->botDetector->reveal(),
      $this->domainLogger->reveal()
    );
  }

  /** @test */
  public function when_bot_is_detected_click_is_invalid_and_problem_is_logged()
  {
    $this->botDetector->isBot($this->click['userIp'])->willReturn(true);

    $isValid = $this->noBotClickValidator->isValid($this->click);

    $this->assertFalse($isValid);
    $this->domainLogger->logBotClick($this->click)->shouldHaveBeenCalledOnce();
  }

  /** @test */
  public function when_no_bot_is_detected_click_is_valid_and_nothing_is_logged()
  {
    $this->botDetector->isBot($this->click['userIp'])->willReturn(false);

    $isValid = $this->noBotClickValidator->isValid($this->click);

    $this->assertTrue($isValid);
    $this->domainLogger->logBotClick($this->click)->shouldNotHaveBeenCalled();
  }
}

If we used NoBotClickValidator in ClickValidation, we'd remove all knowledge about logging from ClickValidation.

<?php
namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
  private $noBotClickValidator;
  private $paramsValidator;

  public function __construct(
    ClickValidator $noBotClickValidator,
    ClickValidator $paramsValidator
  )
  {
    $this->noBotClickValidator = $noBotClickValidator;
    $this->paramsValidator = $paramsValidator;
  }

  public function isValid(array $click)
  {
    return $this->paramsValidator->isValid($click) &&
      $this->noBotClickValidator->isValid($click);
  }
}

Of course, that knowledge would also disappear from its tests. By using the ClickValidator interface for all concrete validations and wrapping validations with side effects like logging, we'd make the ClickValidation tests robust to some of the axes of change that were making them fragile:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.

2. Another improvement: don't use test doubles when it's not worth it[2].

There's another way to make ClickValidation tests less fragile.

If we have a look at ClickParamsValidator and BotClickDetector (I can't show their code here for security reasons), they have very different natures. ClickParamsValidator has no collaborators, no state and a very simple logic, whereas BotClickDetector has several collaborators, state and a complicated validation logic.

Stubbing ClickParamsValidator in the ClickValidation tests gives us no benefit over using it directly, and it produces coupling between the tests and the code.
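
To make that contrast concrete, here is a minimal, purely hypothetical sketch of what such a simple, stateless validator implementing ClickValidator could look like (the real ClickParamsValidator is not shown in the post, so the list of mandatory parameters below is an assumption):

<?php
namespace Trovit\B2B\AdClick\Domain;

// Hypothetical sketch, not the real ClickParamsValidator: the mandatory
// parameter names are assumed just for illustration.
class ClickParamsValidator implements ClickValidator
{
  public function isValid(array $click)
  {
    foreach (['userIp', 'country'] as $mandatoryParam) {
      if (empty($click[$mandatoryParam])) {
        return false;
      }
    }
    return true;
  }
}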

On the contrary, stubbing NoBotClickValidator (which wraps BotClickDetector) is really worth it, because, even though it also produces coupling, it makes ClickValidation tests much simpler.

Using a test double when you'd be better off using the real collaborator is a weakness in the design of the test, rather than in the code to be tested.

These would be the tests for the ClickValidation code with no logging knowledge, after applying this idea of not using test doubles for everything:

<?php
namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\ClickParamsValidator;
use Trovit\B2B\AdClick\Domain\NoBotClickValidator;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

class ClickValidationTest extends TestCase
{
  private $noBotClickValidator;
  private $clickValidation;

  protected function setUp()
  {
    $this->noBotClickValidator = $this->prophesize(NoBotClickValidator::class);
    $this->clickValidation = new ClickValidation(
      $this->noBotClickValidator->reveal(),
      new ClickParamsValidator()
    );
  }

  /** @test */
  public function a_click_lacking_any_mandatory_parameter_is_not_valid()
  {
    $clickLackingParameters = $this->clickWithAllParameters()
      ->withoutCountry()->build();
    $this->noBotClickValidator->isValid($clickLackingParameters)
      ->willReturn(true);

    $isValid = $this->clickValidation->isValid($clickLackingParameters);

    $this->assertFalse($isValid);
  }

  /** @test */
  public function a_click_made_by_a_bot_is_not_valid()
  {
    $clickWithAllParameters = $this->clickWithAllParameters()->build();
    $this->noBotClickValidator->isValid($clickWithAllParameters)
      ->willReturn(false);

    $isValid = $this->clickValidation->isValid($clickWithAllParameters);

    $this->assertFalse($isValid);
  }

  /** @test */
  public function a_click_with_all_parameters_and_made_by_a_human_is_valid()
  {
    $clickWithAllParameters = $this->clickWithAllParameters()->build();
    $this->noBotClickValidator->isValid($clickWithAllParameters)
      ->willReturn(true);

    $isValid = $this->clickValidation->isValid($clickWithAllParameters);

    $this->assertTrue($isValid);
  }

  private function clickWithAllParameters() {
    return ClickRawDataBuilder::clickMandatoryRawData();
  }
}

Notice how the tests now use the real ClickParamsValidator, and how that reduces the coupling with the production code and makes the setup simpler.

3. Removing knowledge about the concrete sequence of validations.

After this change, the new design will compose validations in a way that will result in ClickValidation being only in charge of combining the result of a given sequence of validations.

First, we refactor ClickValidation so that the validation is done by composing several validations:

<?php
namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
  private $validators;

  public function __construct(array $validators)
  {
    $this->validators = $validators;
  }

  public function isValid(array $click)
  {
    foreach ($this->validators as $validator) {
      if (!$validator->isValid($click)) {
        return false;
      }
    }
    return true;
  }
}

The new validation code has several advantages over the previous one:

  • It does not depend on concrete validations any more.
  • It does not depend on the order in which the validations are applied.

It has only one responsibility: it applies several validations in sequence. If all of them pass, it accepts the click; if any validation fails, it rejects the click and stops applying the remaining validations. If you think about it, it behaves like an and operator.
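
As a rough, hedged sketch of how this composition might be wired up (reusing the collaborators from the previous sections; the construction of $botClickDetector, $domainLogger and $click is omitted, and the variable names are just for illustration), the code that creates ClickValidation would now hold the sequence of validations:

<?php
// Hypothetical composition code, assuming the same collaborators used in the
// previous sections; their construction is omitted.
$clickValidation = new ClickValidation([
  new ClickParamsValidator(),
  new NoBotClickValidator($botClickDetector, $domainLogger),
]);

// The validators run in order; the first one returning false short-circuits
// the rest, just like the && operator.
$clickValidation->isValid($click);

Notice that the knowledge of which validations run, and in which order, now lives in this creating code.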

We may write these tests for this new version of the click validation:

<?php
namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Prophecy\Argument;

use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\ClickValidator;

class ClickValidationTest extends TestCase
{
  private $click;
  private $validator1;
  private $validator2;
  private $clickValidation;

  protected function setUp()
  {
    $this->click = [];
    $this->validator1 = $this->prophesize(ClickValidator::class);
    $this->validator2 = $this->prophesize(ClickValidator::class);
    $this->clickValidation = new ClickValidation(
      [$this->validator1->reveal(), $this->validator2->reveal()]
    );
  }

  /** @test */
  public function when_any_validator_returns_false_the_click_is_not_valid()
  {
    $this->validator1->isValid($this->click)->willReturn(false);
    $this->validator2->isValid($this->click)->willReturn(true);

    $isValid = $this->clickValidation->isValid($this->click);

    $this->assertFalse($isValid);
  }

  /** @test */
  public function when_all_validators_return_true_the_click_is_valid()
  {
    $this->validator1->isValid($this->click)->willReturn(true);
    $this->validator2->isValid($this->click)->willReturn(true);

    $isValid = $this->clickValidation->isValid($this->click);

    $this->assertTrue($isValid);
  }
}

These tests are robust to the changes, described in the introduction, that made the initial version of the tests fragile:

  1. Changing the interface of any of the individual validations.
  2. Adding side-effects to any of the validations.
  3. Adding more validations.
  4. Changing the order of the validations.

However, this version of ClickValidationTest is so general and flexible that, using it, our tests would stop knowing which validations are applied to the clicks, and in which order[3]. That sequence of validations is a business rule and, as such, we should protect it. We might keep this version of ClickValidationTest only if we had some outer test protecting the desired sequence of validations.

This other version of the tests, on the other hand, keeps protecting the business rule:

<?php

namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\ClickParamsValidator;
use Trovit\B2B\AdClick\Domain\NoBotClickValidator;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

class ClickValidationTest extends TestCase
{
  private $noBotClickValidator;
  private $clickValidation;

  protected function setUp()
  {
    $this->noBotClickValidator = $this->prophesize(NoBotClickValidator::class);
    $this->clickValidation = new ClickValidation(
      [new ClickParamsValidator(), $this->noBotClickValidator->reveal()]
    );
  }

  /** @test */
  public function a_click_lacking_any_mandatory_parameter_is_not_valid()
  {
    $clickLackingParameters = $this->clickWithAllParameters()
      ->withoutCountry()->build();
    $this->noBotClickValidator->isValid($clickLackingParameters)
      ->willReturn(true);

    $isValid = $this->clickValidation->isValid($clickLackingParameters);

    $this->assertFalse($isValid);
  }

  /** @test */
  public function a_click_by_a_bot_is_not_valid()
  {
    $clickWithAllParameters = $this->clickWithAllParameters()->build();
    $this->noBotClickValidator->isValid($clickWithAllParameters)
      ->willReturn(false);

    $isValid = $this->clickValidation->isValid($clickWithAllParameters);

    $this->assertFalse($isValid);
  }

  /** @test */
  public function a_click_by_a_human_with_all_parameters_is_valid()
  {
    $clickWithAllParameters = $this->clickWithAllParameters()->build();
    $this->noBotClickValidator->isValid($clickWithAllParameters)
      ->willReturn(true);

    $isValid = $this->clickValidation->isValid($clickWithAllParameters);

    $this->assertTrue($isValid);
  }

  private function clickWithAllParameters() {
    return ClickRawDataBuilder::clickMandatoryRawData();
  }
}

Notice how this version of the tests keeps in its setup the knowledge of which sequence of validations should be used, and how it only uses test doubles for NoBotClickValidator.

4. Avoid exposing internals.

The fact that we're injecting into ClickValidation an object, ClickParamsValidator, that we realized we didn't need to double is a smell: it points to the possibility that ClickParamsValidator is an internal detail of ClickValidation rather than one of its peers. By injecting it, we're coupling ClickValidation's users, or at least the code that creates it, to an internal detail of ClickValidation: ClickParamsValidator.

A better version of this code would hide ClickParamsValidator by instantiating it inside ClickValidation's constructor:

<?php
namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
  private $validators;

  public function __construct(ClickValidator $noBotClickValidator)
  {
    $this->validators = [new ClickParamsValidator(), $noBotClickValidator];
  }

  public function isValid(array $click)
  {
    foreach ($this->validators as $validator) {
      if (!$validator->isValid($click)) {
        return false;
      }
    }
    return true;
  }
}

With this change, ClickValidation recovers the knowledge of the sequence of validations, which in the previous section was located in the code that created ClickValidation.
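
As a rough sketch, the creating code would go from spelling out the whole sequence to only providing the wrapped bot validator (the $noBotClickValidator variable stands for whatever instance the previous version was injecting):

<?php
// Before: the creating code knew the full sequence of validations.
$clickValidation = new ClickValidation(
  [new ClickParamsValidator(), $noBotClickValidator]
);

// After: ClickParamsValidator is an internal detail of ClickValidation,
// so the creating code only passes the wrapped bot validator.
$clickValidation = new ClickValidation($noBotClickValidator);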

There are some stereotypes that can help us identify real collaborators (peers)[4]:

  1. Dependencies: services that the object needs from its environment so that it can fulfill its responsibilities.
  2. Notifications: other parts of the system that need to know when the object changes state or performs an action.
  3. Adjustments or Policies: objects that tweak or adapt the object's behaviour to the needs of the system.

Following these stereotypes, we could argue that NoBotClickValidator is also an internal detail of ClickValidation and shouldn't be exposed to the tests by injecting it. Hiding it, we'd arrive at this other version of ClickValidation:

<?php
namespace Trovit\B2B\AdClick\Domain;

class ClickValidation
{
  private $validators;

  public function __construct(
    DomainLogger $domainLogger, 
    Clock $clock,
    ClickPersistence $clickPersistence)
  {
    $this->validators = [
      new ClickParamsValidator(), 
      NoBotClickValidator::create($clock, $clickPersistence, $domainLogger)];
  }

  public function isValid(array $click)
  {
    foreach ($this->validators as $validator) {
      if (!$validator->isValid($click)) {
        return false;
      }
    }
    return true;
  }
}

in which we have to inject the real dependencies of the validation, and no internal details are exposed to the client code. This version is very similar to the one we'd have got by using test doubles only for infrastructure.

The advantage of this version would be that its tests would know the least possible about ClickValidation. They'd know only ClickValidation's boundaries, marked by the ports injected through its constructor, and ClickValidation's public API. That would reduce the coupling between tests and production code, and facilitate refactorings of the validation logic.
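
As a hedged illustration of what such tests might look like, the missing-parameter case from before could be written by doubling only the injected ports. The bot and human cases would require knowing BotClickDetector's behaviour, which isn't shown in the post, so this sketch covers only the former; it also assumes that Clock and ClickPersistence live in the Domain namespace and that NoBotClickValidator::create merely stores its collaborators:

<?php
namespace Trovit\B2B\AdClick\Tests\unit\Domain;

use PHPUnit\Framework\TestCase;
use Trovit\B2B\AdClick\Domain\ClickPersistence;
use Trovit\B2B\AdClick\Domain\ClickValidation;
use Trovit\B2B\AdClick\Domain\Clock;
use Trovit\B2B\AdClick\Domain\DomainLogger;
use Trovit\B2B\AdClick\Tests\helper\ClickRawDataBuilder;

// Hypothetical sketch: the ports are doubled with unconfigured dummies,
// which is enough here because the params validation rejects the click
// before the bot validation ever touches them.
class ClickValidationTest extends TestCase
{
  /** @test */
  public function a_click_lacking_any_mandatory_parameter_is_not_valid()
  {
    $clickValidation = new ClickValidation(
      $this->prophesize(DomainLogger::class)->reveal(),
      $this->prophesize(Clock::class)->reveal(),
      $this->prophesize(ClickPersistence::class)->reveal()
    );
    $clickLackingParameters = ClickRawDataBuilder::clickMandatoryRawData()
      ->withoutCountry()->build();

    $this->assertFalse($clickValidation->isValid($clickLackingParameters));
  }
}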

The drawback is that the number of test case combinations in ClickValidationTest would grow, and many of those test cases would talk about situations happening at the validation boundaries that might be far removed from ClickValidation's callers. This might make the tests hard to understand, especially if some of the validations have complex logic. When this problem gets severe, we may reduce it by injecting the very complex validators and using test doubles for them. This is a trade-off in which we accept some coupling with the internals of ClickValidation in order to improve the understandability of its tests. In our case, the bot detection was one of those complex components, so we decided to test it separately and inject it into ClickValidation so that we could double it in ClickValidation's tests. That is why we kept the penultimate version of ClickValidation, in which we were injecting the click-not-made-by-a-bot validation.

Conclusion.

In this post, we played with an example to show how, by listening to the tests[5], we can detect possible design problems, and how we can use that feedback to improve both the design of our code and its tests when changes that expose those design problems are required.

In this case, the initial tests were fragile because the production code was procedural and had too many responsibilities. The tests were also fragile because they were using test doubles for some collaborators when it wasn't worth doing so.

Then we showed how refactoring the original code to be more object-oriented, with better-separated responsibilities, could remove some of the fragility of the tests. We also showed how restricting the use of test doubles to only those collaborators that really need to be substituted can improve the tests and reduce their fragility. Finally, we showed how we can go too far in trying to make the tests flexible and robust, accidentally stop protecting a business rule, and how a less flexible version of the tests can fix that.

When faced with fragility caused by coupling between tests and the code under test due to the use of test doubles, it's easy and very common to "blame the mocks". We believe, though, that it's more productive to listen to the tests and notice which improvements to our design they are suggesting. If we act on the feedback that test doubles give us about our design, we can use test doubles to our advantage, as powerful feedback tools[6] that help us improve our designs, instead of just suffering and blaming them.

Acknowledgements.

Many thanks to my Codesai colleagues Alfredo Casado, Fran Reyes, Antonio de la Torre and Manuel Tordesillas, and to my Aprendices colleagues Paulo Clavijo, Álvaro García and Fermin Saez for their feedback on the post, and to my colleagues at LIFULL Connect for all the mobs we enjoy together.

Footnotes:

[1] See the Adapter or wrapper pattern.

[2] See Test Smell: Everything is mocked by Steve Freeman, where he talks about things you shouldn't be substituting with test doubles.

[3] Thanks Alfredo Casado for detecting that problem in the first version of the post.

[4] From Growing Object-Oriented Software, Guided by Tests > Chapter 6, Object-Oriented Style > Object Peer Stereotypes, page 52. You can also read about these stereotypes in a post by Steve Freeman: Object Collaboration Stereotypes.

[5] Difficulties in testing might be a hint of design problems. Have a look at this interesting series of posts about listening to the tests by Steve Freeman.

[6] According to Nat Pryce, mocks were designed as a feedback tool for designing OO code following the 'Tell, Don't Ask' principle: "In my opinion it's better to focus on the benefits of different design styles in different contexts (there are usually many in the same system) and what that implies for modularisation and inter-module interfaces. Different design styles have different techniques that are most applicable for test-driving code written in those styles, and there are different tools that help you with those techniques. Those tools should give useful feedback about the external and internal quality of the system so that programmers can 'listen to the tests'. That's what we -- with the help of many vocal users over many years -- designed jMock to do for 'Tell, Don't Ask' object-oriented design." (from a conversation in the Growing Object-Oriented Software Google Group).

I think that if your design follows a different OO style, it might be preferable to stick to a classical TDD style, which limits the use of test doubles nearly exclusively to infrastructure and undesirable side effects.
