Anton Sizikov
You are mocking it wrong.

Well, probably you are not, but let me grumble a little bit anyway.

Mockingbird knows how to mock.

I've been working with various code bases throughout my career, and there is one pattern I see rather often. As you may have guessed, it's unit tests and mocking I'm going to talk about here. To give it a nice catchy start, I'd claim that mocks should be used when you have to, not when you can.

I'll give a few examples of somewhat useless and even harmful tests.

All the examples are going to be made up, but I hope you'll get the point.

So, let's start with the typical Greeter example. This is the "Hello, world" of unit testing. In one way or another, this sample gets repeated in all the articles, posts, and books dedicated to unit tests and mocking frameworks.

public interface IGreeter {
   string Greet(string name, string title);
}

public class Greeter : IGreeter {
  public string Greet(string name, string title) {
    return "Hello, " + title + " " + name;
  }
}

public class Client {
  private readonly IGreeter _greeter;
  public Client(IGreeter greeter) {
    _greeter = greeter;
  }

  public string FormatPageHeader(string name, string title) {
    return "<h>" + _greeter.Greet(name, title) + "</h>";
  }
}

public class ClientTests {
    public void Test() {
        var mock = new Mock<IGreeter>();
        mock.Setup(greeter => greeter.Greet("John", "Mr.")).Returns("Hello, Mr. John");
        var result = new Client(mock.Object).FormatPageHeader("John", "Mr.");
        Assert.AreEqual("<h>Hello, Mr. John</h>", result); // All good here, but what does it test, though?
    }
}

So far so good. Tests are green. The Greeter interface isn't perfect, though. Two string parameters? So easy to mix up, isn't it? Most probably you have a method like that in your project.

OK then, imagine that you decide to make this method less error-prone. You don't have much time for a proper refactoring because you have a feature to work on, so let's just swap the parameters. It's much more natural for a human to put the title before the name. That depends on the culture, I know.

("John", "Mr.") is more awkward compared to ("Mr.", "John") and ("Dr.", "Smith").

So, we'll end up with the greeter like that:

// Version 2
public class Greeter2 : IGreeter {
  public string Greet(string title, string name) {
    return "Hello, " + title + " " + name;
  }
}

public interface IGreeter {
   string Greet(string title, string name);
}

Add, commit, push. Our beloved build server will pick the changes up, run the tests, and fail, of course. We forgot to update the test for the IGreeter implementation. Once it's fixed, we're good, aren't we?

Not really. Remember the Client class? It's now incorrect: we still pass the title and the name in the old order, even though our test claims that everything is fine.

This is fine

Here is the paradox: we introduced mocking so that we could test our class in isolation, but we had to write some code to make that mock return something. That's still logic, isn't it? Now IGreeter has at least two implementations. Most probably, in your codebase you have tens of implementations for the mocked class, just because it's a heavily reused dependency and you have to mock it over and over again.

We can improve that test and tune it, but the only way to make it fail is to repeat the Greeter logic in our mock setup. But wait a minute: if we have the same implementation, why don't we reuse the existing code?
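
For comparison, here is a sketch of the same test written against the concrete Greeter (version 1 from above). There is no mock setup to keep in sync, and when the parameter order changes, this test fails exactly when it should:

public class ClientTests {
  public void Test() {
    // The real Greeter does the work; no canned answer duplicates its logic.
    var client = new Client(new Greeter());

    var result = client.FormatPageHeader("John", "Mr.");

    Assert.AreEqual("<h>Hello, Mr. John</h>", result);
  }
}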

The more complex your mocked object is, the more complex your mock setup becomes. If that's not the case, most probably you're testing just the interaction with your dumb mock, i.e. you're not testing anything. Just like the ClientTests above. The only positive outcome is the extra 20 cents your company pays to Amazon for the CPU time you wasted on the build server.

It's not rare to have more than one dependency. I can imagine that our Client could use some IHtmlRenderer which would consume the IGreeter result. You would mock that one too, right?

Sweet as. You now have a test which verifies how two mocks are integrated with each other. How does that prove your code's correctness, though? I don't know.
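
To make that concrete, here is a sketch of such a test (the two-dependency Client constructor, IHtmlRenderer, and its RenderHeader method are all hypothetical):

var greeter = new Mock<IGreeter>();
greeter.Setup(g => g.Greet("Mr.", "John")).Returns("Hello, Mr. John");

var renderer = new Mock<IHtmlRenderer>();
renderer.Setup(r => r.RenderHeader("Hello, Mr. John")).Returns("<h>Hello, Mr. John</h>");

// Two canned answers chained together: the assertion can only ever
// confirm what the setups above already dictate.
var result = new Client(greeter.Object, renderer.Object).FormatPageHeader("John", "Mr.");
Assert.AreEqual("<h>Hello, Mr. John</h>", result);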

I'm going to step back now and look at this from another angle. What does a typical system look like? I assume it's not a bunch of isolated classes; it's more like a spiderweb of dependencies. If you look at a type-dependency diagram, you'll see a group of clusters, where each cluster is a somewhat tightly coupled set of classes. This is how we manage complexity: we break one large class down into small pieces, but those pieces will never be used in isolation. Each of them is responsible for a little piece of work. And yet we write a separate test suite for each of them and mock out all the dependencies (which all belong to the same cluster).

How is that different from trying to test a private method?

That little convenience wrapper for the standard library class is nothing more than an implementation detail.

But integration tests are slow...

That's the expected objection. Let's come back to our ClientTests: which performs better, the test with the mock, or the test with the concrete Greeter implementation? I guess the answer is obvious. And we shouldn't forget that the mocked test doesn't prove anything: it is slower, it allocates more, and it's wrong. I'd say it's harmful.

So you're going to send out those emails every time you test?

OK, that is a good question. Remember I said that we should mock when we have to, not when we can? That's exactly the right situation for test isolation.

You don't want to make HTTP calls while running your unit tests suite, you don't want to send out emails, neither do I.
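
That boundary is where a mock earns its keep. A minimal sketch with Moq (IEmailSender and SignUpService are made up for illustration):

public interface IEmailSender {
  void Send(string to, string subject, string body);
}

public class SignUpServiceTests {
  public void SignUp_SendsWelcomeEmail() {
    var emailSender = new Mock<IEmailSender>();
    var service = new SignUpService(emailSender.Object); // hypothetical system under test

    service.SignUp("john@example.com");

    // No real email leaves the test run; we only verify the call at the boundary.
    emailSender.Verify(
      s => s.Send("john@example.com", It.IsAny<string>(), It.IsAny<string>()),
      Times.Once());
  }
}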

Hey, I tried not to mock, and I got sick of setting up all the dependencies

This is where it gets painful. I've seen systems like that; they're a nightmare to maintain. Manually setting up the dependency of a dependency is really not the way to go. It's hard, and everyone will avoid writing a new test. But it's an easy problem to solve.

How do we build a dependency tree at runtime? I hope you're using a DI framework which wires interfaces and implementations together; it knows how to create a new dependency, and it knows when and how to rebuild the dependency tree. If you're building a web service, you should reset the state between requests, so keep as many dependencies request-scoped as possible.

The same pattern is applicable to your test suite. Everything is request-scoped and stateless already, so treat each test as a 'request' and let the DI container build the system under test for you.

Of course, you would have to set up the DI container differently.

A module-level (or cluster-level) test would build the dependency tree for that module but mock out the rest of the world: no HTTP calls, no DB, no emails, you get the point.
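
As a sketch, here is what that container setup could look like with Microsoft.Extensions.DependencyInjection (my choice of container; nothing here prescribes one). The module's real classes are registered, and only the boundary is faked (IEmailSender and FakeEmailSender are made up here):

var services = new ServiceCollection();
services.AddTransient<IGreeter, Greeter2>();            // real module implementation
services.AddTransient<Client>();                        // system under test
services.AddSingleton<IEmailSender, FakeEmailSender>(); // boundary faked: no real emails

using var provider = services.BuildServiceProvider();

// Each test resolves a fresh system under test from the container.
var client = provider.GetRequiredService<Client>();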

At this point, you'll realize that you don't need the test which verifies that your utility class can split a string, but you will be sure that your MortgageCalculator does the job. After all, that is your business rule, and that is the feature you're building. Unless you are a low-level framework developer, of course. Most of us are not, though.

Once you've got all the clusters well tested and all the interfaces established, you may try to break things down a little bit further. Or you may want to leave it as is; it's up to you.

Imagine you want to extract some logic into a separate class. Now your system has a new dependency, but that logic has already been tested. You don't need to set up a new mock, you don't need to repeat the logic of the extracted class in your test setup, and you don't need to update the test to instantiate the tested class. It's just there, and your safety net still works. If you extract and modify the logic, you will break the test. Do you see the beauty? Extracting a class and introducing a new dependency becomes as easy as moving the logic into a private method.
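
As a sketch of that move, imagine the salutation logic of the version-2 Greeter being pulled out into its own class (Salutation is made up here). The container gains one registration, and the cluster-level tests keep passing untouched:

public class Salutation {
  public string For(string title, string name) {
    return title + " " + name;
  }
}

public class Greeter2 : IGreeter {
  private readonly Salutation _salutation;
  public Greeter2(Salutation salutation) {
    _salutation = salutation;
  }

  public string Greet(string title, string name) {
    return "Hello, " + _salutation.For(title, name);
  }
}

// The only test-side change is one extra registration:
// services.AddTransient<Salutation>();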

Summary

The approach above would give you the following benefits:

  • Fewer meaningless, useless, and harmful tests
  • No need to maintain duplicated implementations (mock setups)
  • Feature-driven tests: tests that verify that the cluster does the job
  • An easy-to-refactor code base
  • Better module-level access control (no need to make a class public just for testing)

In my experience, tests based on that cluster approach are much more reliable. When they break, they actually mean that the system is dysfunctional; when they pass, you can be sure that you did not break the logic.

We still have to run integration tests to be sure that the third-party integrations are working and that the system's runtime configuration is valid.

I am more than open to any criticism, feel free to tear that post apart. :)

Crossposting from my personal blog

Top comments (13)

scottshipp

I usually find a fundamental misunderstanding of mocking. Use of mocks isn't a problem. Poor use of mocks is a problem. Mocks aren't there so you can test. Mocks are there so you can test only what you want to test. They're there so you can isolate the thing being tested from the thing(s) not being tested.

So what do you want to test?

That changes with the context. Unit tests? I want to test a "unit" of code. What is "unit"? That varies. Integration tests? I want to test the integration of two or more things. System test? I want to test the system! End-to-end, I might be testing more than the system.

Back to the important point. It doesn't matter what size test I have, I still need to isolate the thing I'm testing. Mocks (or stubs or dummies or fakes or etc.) are one way to do it.

The key is: any time a mock isn't used to isolate the subject under test it's being used wrong. Every article I've ever seen about mocks gets this wrong. They always show an example of a mock completely isolating away the subject under test (the "doesn't test anything" ClientTests shown above), isolating the wrong things, or isolating in the wrong way.

Check out Gerard Meszaros' xUnit Test Patterns book. It will change your life.

Anton Sizikov

Yup, totally agree. Mocking is a great tool. It's misused in one way or another in almost every codebase I've ever worked with.

Idan Arye

You don't want to make HTTP calls while running your unit tests suite, you don't want to send out emails, neither do I.

Why wouldn't you though? I mean, obviously you wouldn't want to make HTTP calls to a production server and you wouldn't want to send emails to the actual clients - but what's wrong with sending mails to some testing account or to your own dummy mail server like FakeSMTP, and verifying that you receive these mails?

Anton Sizikov

I actually have a set of end-to-end tests where we send emails and verify their content. There is a problem: the emails go through MailChimp, and in the test environment that may take quite some time.

Anyway, that's not the point of the article :) I see the value in sending the email out, just as I see the value in a quick test run. We have to be practical and try to find the balance.

Mocking out the MailChimp client/queue manager/database/HTTP call/domain boundary is something we have to do if we want to run tests often and be able to execute the test suite offline (I work on a plane sometimes :) ). I just want people to stop overusing mocks.

Idan Arye

I agree - my philosophy is that when you need to mock you should consider making it an integration test.

My point is that even when you absolutely can't do the same thing the actual production program will do (e.g. sending emails), you don't always have to use traditional mock objects. Maybe going through an actual mail service makes the cycle too long, but if you set up your own local "mail service" (with FakeSMTP or something similar) you can send and verify the emails quickly enough, and it should be much easier than using a mock email client - and also a better test, since it tests against a real API.

Another example: instead of mocking your model classes, set up a small database server. If your ORM is opaque enough, you may even use an in-process database like SQLite or H2. It should have orders of magnitude fewer entries than a real production database (or even a testing database!), so it should run fast enough and fit within the memory limits of whatever machine you use for testing.
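
For illustration, a minimal sketch of that in-process idea, assuming Microsoft.Data.Sqlite (the users table is made up):

using Microsoft.Data.Sqlite;

// An in-memory SQLite database lives exactly as long as this connection stays open.
using var connection = new SqliteConnection("Data Source=:memory:");
connection.Open();

using var create = connection.CreateCommand();
create.CommandText = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)";
create.ExecuteNonQuery();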

Anton Sizikov

While an SQLite DB is a great choice for an in-memory test database (I have about 7k tests like that in my current project and love it so much, but that's a different story), replacing the mail service is not always a good solution. When you test your code against a service which is very different from the one you use in production, what do you really test?

It's the same trade-off: do we verify an integration with the test service, or with the mock? If the test service is close enough to the system you have in prod, and it is fast enough, then I would say go for it.

Franco Traversaro

Have you REALLY written a unit test that sends mail and then calls assertPopServerHasMail()? :-/

Jilles van Gurp

In my view, overly complicated test code (with or without mocks) is a design smell: the underlying system isn't very testable. There are a few common patterns here, but it always boils down to violations of the SOLID principles. If you need a lot of mocks to test a method, maybe it has too many side effects? Maybe the class has poor cohesion and tight coupling? Fix the underlying problems by refactoring until the test becomes the one- or two-liner it should have been.

In the case of the email service, you'll almost certainly want to add some abstractions between where you are sending stuff and how you are sending it. What you want to test here (in a unit test) is that "an email got sent".

Another problem is mixing unit and integration tests. Mocks are best used for unit tests exclusively. Integration tests are slow anyway, so you don't save a lot of time by mocking things, and you reduce their limited coverage by making them less realistic. Make the most of your integration tests by maximizing coverage and realism, so you find all those issues that happen in production systems before you ship. Use unit tests for spotting logic bugs.

Think realistic user scenarios when doing integration tests. Coming back to the email service, you'll want to test that, e.g., a user signs up via some API, clicks a link received via the email that was sent, and then successfully activates. That's a scenario, and it will only work if everything lines up perfectly. That's why it is called an integration test. Now, if you fake everything and grab the activation code by poking around in the database or from some mock, that was technically a waste of time. That will never happen in production, and the obvious failure scenarios usually revolve around email issues.

A good unit test requires mocks, because fundamentally you should not even want to test stuff outside the scope of the unit under test. Only if the whole point of the unit is the side effect on a dependency should you begin to consider things like parameter inspection. Integration tests are the opposite: you are testing system behavior. If you change the system, what are you really testing?

If writing good tests is hard, that's your real problem: write testable code and life gets a lot easier when writing tests.

Anton Sizikov

Thanks for the comment.

Mocks and SOLID are not synonyms. Not using mocks in tests does not mean the system is badly designed, and lots of mocks are not a symptom of good design either.

Being able to replace a dependency with a mock does not mean the test became a good one. It just became a test which verifies the interaction with the mock. If the mock's behavior does not affect the class's logic or results, why do you even need that dependency?

Imagine that you have a class with no dependencies. It does some calculations. You have a test which verifies that. I believe that's called a unit test.

The next day you decide to extract the actual calculation into a separate class. You know, like a one-shortcut action in your IDE. And boom, your test is now a bad and slow integration test. Now you have to mock that dependency. It's OK, you will write a new unit test for the new class and rewrite all the existing tests for the old one.

The day after, you realize that wasn't the right decision. Will it be a simple task to bring everything back? (Yes, you can revert the commit, but that's not always a good option.)

My point is that excessive use of mocks makes your system very resistant to refactoring. It becomes just way too expensive to constantly improve and modify your code. And that leads to a bad design. And that makes unit tests with mocks very expensive.

Bohdan Stupak

Thank you for a great read, which I mostly agree with. But I would argue even further: in this pretty straightforward example, you might discard the notion of IGreeter altogether and interact directly with the concrete implementation. IGreeter itself looks like a case of test-induced damage. Also, you might want to check out this question, and specifically the answer by Mark Seeman, who mentions using coarse-grained dependency injection.
Regarding the subject itself, for sure we're all aware that unit tests are quite fragile; on the other hand, they're fast and allow us to test business logic without relying on volatile context such as DBs, SMTP servers, etc. But for sure, integration tests allow us to test all the moving parts. That's why the gods of enterprise (he-he :D) gave us the test pyramid.

Anton Sizikov

Thanks for the links.

I kind of see a problem with that pyramid. We've been told so many times that integration tests are slow that we take it for granted.

The Greeter example is made up, but it shows that such a test with the production dependency is at least no slower than the test with the mocked dependency. Mocking frameworks do quite some heavy lifting behind the scenes, and the Greeter implementation is dumb and simple. Much simpler than the mocked version.

There is also a terminology confusion which I encounter very often. When there is a mock involved, it's called a unit test, but when there is a runtime dependency injected, it's an integration test. I'm trying to question that. I see them both as integration tests: the first one tests the integration with the mocked dependency, the other one tests the integration with the actual implementation. When the non-mocked version performs in a similar way (or better), why would we spend time and resources on building and maintaining all the mocks?

Not every dependency makes HTTP calls.
Obviously, when the class depends on a heavy IO operation, you have to isolate it in order to improve the performance of your test suite. At least that's the part we all agree on :)

bschatz

There is nothing wrong with your example.

With the test, you make sure that the greeter is used by the client, no more, no less. (Unit tests can't replace integration tests.)
But to emphasize this in the test, I usually introduce a constant that doesn't hold text a real implementation would produce, e.g.:


public class ClientTests 
{
    public void Test()
    {
        const string TEXT_FROM_GREETER = "Some nice formatted text from Greeter";

        var mock = new Mock<IGreeter>();
        mock.Setup(greeter => greeter.Greet("John", "Mr.")).Returns(TEXT_FROM_GREETER);
        var result = new Client(mock.Object).FormatPageHeader("John", "Mr.");
        Assert.AreEqual("<h>" + TEXT_FROM_GREETER + "</h>", result); // Client wraps the greeting in <h> tags
    } 
}
Iyed Bennour

I think the mock example you gave for the IGreeter interface is quite misleading, as it's not what I consider a mock: it's a full-fledged implementation of the interface, implemented with the help of a mock. Mocks are useful to test how the System Under Test behaves WITH RESPECT TO an interface, NOT an implementation of it. In other words, how your SUT reacts to the behavior of its dependencies. Mocks help you simulate (or mock) your dependencies at the interface level (read: API level): returned values, errors, or exceptions thrown, irrespective of the input. The problem(s) in the example you gave is not the use of the mock; it's the code itself, which could easily be fixed to be more testable.