I'm preparing a workshop about unit testing, and I need your help so that I can focus on what matters most.
What are your biggest challenges when it comes to unit testing?
Thanks for your help!
A slightly different angle. Non-technical. Explaining the value of unit testing to clients who have a budget is probably the most difficult part for me. It's hard enough convincing people that testing at all is important, let alone unit testing!
Your testing should be included in your development effort. If there is a client that outright asks you to quote them without unit/integration/acceptance testing, you should refuse them a support contract.
And probably just refuse their business outright.
When I hired a contractor to do my front stairs, the contract he provided did not specifically include the time he would take to do the necessary measurements. It is always assumed, whenever we are building anything, that we must take the time to measure everything first.
So there is no need to explicitly state in your agreement/contract that you need extra time for writing unit tests (i.e. for measuring before you cut). Anyone who complains about a craftsman spending some time measuring is quite clueless, i.e. as you say, we should refuse to do business with such foolish 'customers'.
I suppose you could refuse their business or refuse to support them, but I'm a bit more interested in educating and sharing the value of testing.
It's a concept that doesn't have a lot of direct parallels in other fields. If your plumber suggested installing a test unit in your toilet to make sure they did the job right you'd probably be a little dubious. 😁
Most don't outright ask for a quote without those things, they just don't understand what they are or why they're paying for them.
I'd challenge the idea that other fields don't test. That toilet was meticulously tested in development. The flange the plumber is installing was tested against multiple toilet models. When the plumber finishes their installation, they're going to turn the water back on and flush the toilet a few times.
All different kinds of testing: unit, integration, and acceptance or e2e.
Interestingly, I had this exact same conversation with someone else after this discussion. We came to a similar conclusion.
Unit tests are like machines that test other machines. It's like that Ikea drawer they claim you can open and close over 1 million times. How do they know you can open and close it over 1 million times? They have a machine that opens and closes the drawer a million times.
Correct. Perhaps it's the development community which is behind on the notion of folding testing into its day-to-day work. Seemingly everyone else does it on the regular.
I agree with you. Whenever someone says let's add a task in the user story for unit testing, I ask not to. It should be part of that other task called development.
This is generally what we do, but it's not uncommon for a potential client to ask why we quote much higher than someone else who just didn't include testing or unit tests in their quote.
Being able to explain the importance and the direct value they get from it is an important part of helping them make an informed decision.
One issue with unit testing is the tendency to tie the test to the result of the code, not the code itself. In other words, testing that a markdown filter produces the actual content in your filesystem as intended. It's less intuitive, but more durable, to test the code against a mocked markdown file in that case. Content may change at any moment, and doing so shouldn't break the test, because the code has not changed.
This doesn't make any sense. If you are unit testing code by supplying input (domain) values and then checking/verifying/asserting the output (range) values, and if, when tests run repeatedly, you get different output values for identical input values, you know that something is terribly wrong with your design. You should never see that scenario, ever!
I think you are responding to a different comment, since you are not responding to what I wrote. My point is exactly when input changes, it shouldn't break the test.
Sorry, I probably misread your comment. Still not sure I understand how it is possible that a change in the input value breaks the test? Please explain.
If a test's expectation is linked to the exact desired output, then changing the input, for example, the wording in the hypothetical markdown file, will break the test. In my experience, this is something not to do, because it adds unnecessary fragility to the unit test. That's what I was saying.
Have a great weekend!
Agree. That's a problem in general with unit testing infinitely variable textual values. They're almost never a good candidate for unit testing.
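To make the sub-thread concrete, here is a minimal sketch in Python (render_markdown is a hypothetical filter, not anyone's actual code): the test owns a small fixed input, so the real content files can change freely without ever breaking it.

```python
def render_markdown(source: str) -> str:
    # Stand-in for the real filter; imagine a full markdown library call here.
    html = []
    for line in source.splitlines():
        if line.startswith("# "):
            html.append(f"<h1>{line[2:]}</h1>")
        else:
            html.append(f"<p>{line}</p>")
    return "\n".join(html)

def test_renders_headings():
    # Controlled input owned by the test, not by the content folder.
    assert render_markdown("# Title") == "<h1>Title</h1>"
```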
etc.
"How does dependency injection works?"
You want to avoid tight coupling, so instead of touching the implemented class in your code, you only work with the interface. An interface is at a higher level of abstraction, and therefore less tightly coupled with your code. In addition to that, you never want to instantiate the dependency in your code (that would result in tight coupling). Instead, you expect the instantiated dependency to be 'injected' into your code in the form of an interface. That way, you shield your code from dependencies.
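A minimal Python sketch of that shape (Notifier, OrderService, and EmailNotifier are made-up names for illustration): the service depends only on the interface, and the concrete implementation is injected from outside.

```python
from typing import Protocol

class Notifier(Protocol):          # the interface your code depends on
    def send(self, message: str) -> None: ...

class OrderService:
    def __init__(self, notifier: Notifier) -> None:
        # The dependency is injected; OrderService never instantiates it.
        self._notifier = notifier

    def place_order(self, item: str) -> None:
        self._notifier.send(f"order placed: {item}")

class EmailNotifier:               # one concrete implementation among many
    def send(self, message: str) -> None:
        print(f"emailing: {message}")

service = OrderService(EmailNotifier())   # wiring happens at the edges
service.place_order("book")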
"How can it be used for mocking, (example: mocking a file read/write or a network request to avoid using real I/O)"
If you use the class that is responsible for doing the I/O processing, you have tightly coupled your logic with that class. Instead of manipulating the instantiated class in your code, send messages to the interface. For example, instead of directly invoking methods on your ORM class, use the IRepo interface. That way, in your unit tests you can replace the concrete class that is actually accessing the I/O with a fake class that is an impostor, or a stand-in for the real class.
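Something like this, in Python (IRepo is borrowed from the comment above; the other names are made up for illustration):

```python
from typing import Protocol

class IRepo(Protocol):                      # the interface the logic depends on
    def load(self, key: str) -> str: ...

class RealRepo:
    def load(self, key: str) -> str:
        # Imagine real disk or database I/O here.
        with open(f"/data/{key}.txt") as f:
            return f.read()

class FakeRepo:                             # the impostor / stand-in
    def __init__(self, data: dict[str, str]) -> None:
        self._data = data
    def load(self, key: str) -> str:
        return self._data[key]

def greet(repo: IRepo, key: str) -> str:    # logic sees only the interface
    return f"Hello, {repo.load(key)}!"

def test_greet():
    # The fake keeps the test in memory; no real I/O ever happens.
    assert greet(FakeRepo({"u1": "Ada"}), "u1") == "Hello, Ada!"
```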
"What to test? Only public methods, also private methods?"
Do not couple your tests with the code structure. Only couple your tests with the expected code behaviour. One of the biggest mistakes in software engineering is tightly coupling one's tests to the code structure. It is a very costly mistake that results in poor code quality and amasses a lot of technical debt.
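For instance, a sketch of behaviour-coupled testing (hypothetical names): the private helper can be renamed, inlined, or split without the test ever noticing.

```python
class PriceCalculator:
    def total(self, prices: list[float]) -> float:   # public behaviour
        return round(self._apply_vat(sum(prices)), 2)

    def _apply_vat(self, amount: float) -> float:    # private structural detail
        return amount * 1.2

def test_total_includes_vat():
    # Assert only on observable behaviour, never on the internals.
    assert PriceCalculator().total([10.0, 5.0]) == 18.0
```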
"Should you use defensive programming? Make some function check for all possible wrong inputs, meaning you should also unit test all these code paths?"
No. Never do that. You only test the expected outputs. And those expectations must be documented in the acceptance criteria of the user story you're working on.
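As a sketch, assuming a hypothetical user story whose acceptance criteria say "shipping is free from 50.00 upwards, otherwise it costs 4.99", each test case maps one documented expectation and nothing more:

```python
import pytest

def shipping_cost(total: float) -> float:
    return 0.0 if total >= 50.0 else 4.99

# Each case maps one line of the (hypothetical) acceptance criteria.
@pytest.mark.parametrize("order_total, expected", [
    (49.99, 4.99),   # just below the threshold
    (50.00, 0.00),   # the documented boundary
])
def test_shipping_matches_acceptance_criteria(order_total, expected):
    assert shipping_cost(order_total) == expected
```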
"What is code coverage? should you aim for some specific number of line or branches coverage?"
Code coverage conveys very little useful information. Instead of using the code coverage metric, switch to using the percentage of killed mutants in your code. Use mutation testing -- it is your best friend.
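To illustrate what "killing a mutant" means (a toy example; real tools such as mutmut for Python or PIT for Java generate the mutants automatically):

```python
def is_adult(age: int) -> bool:
    return age >= 18   # a mutation tool would mutate this to `age > 18`

def test_is_adult_boundary():
    # This boundary assertion "kills" the `>` mutant: under the mutated
    # code, is_adult(18) returns False and the test fails, proving the
    # suite actually guards this line rather than merely executing it.
    assert is_adult(18) is True
    assert is_adult(17) is False
```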
"Where is the frontier between unit test / integration test?"
The frontier between unit testing and integration testing is computer memory. Unit tests must only run in memory. When it comes to testing delivery channels or any other underlying computing infrastructure (disk I/O, databases, screens, browsers, or any other commodities), use integration testing.
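A sketch of that frontier with made-up functions: the pure logic gets a unit test, while anything that touches the disk belongs to an integration test.

```python
import os
import tempfile

def format_report(rows: list[int]) -> str:
    return f"total={sum(rows)}"          # pure, in-memory logic

def save_report(path: str, text: str) -> None:
    with open(path, "w") as f:           # real disk I/O
        f.write(text)

def test_format_report():                # unit test: runs entirely in memory
    assert format_report([1, 2, 3]) == "total=6"

def test_save_report():                  # integration test: crosses the I/O frontier
    path = os.path.join(tempfile.mkdtemp(), "report.txt")
    save_report(path, "total=6")
    with open(path) as f:
        assert f.read() == "total=6"
```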
"Should you use TDD? Always code the tests first?"
Yes. Do not test code, code to the test. That is the best way to go. Skipping TDD is like doing open heart surgery without washing your hands because, hey, the patient needs urgent intervention. Skipping writing unit tests first is a lame excuse.
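In practice the rhythm looks something like this (hypothetical slugify example):

```python
# Step 1 (red): write the test first. `slugify` doesn't exist yet,
# so this fails; that failing test is the specification.
def test_slugify_replaces_spaces():
    assert slugify("Hello World") == "hello-world"

# Step 2 (green): write just enough code to satisfy the expectation.
def slugify(title: str) -> str:
    return title.lower().replace(" ", "-")

# Step 3 (refactor): clean up freely; the test keeps you honest.
```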
"How to test mathematical functions where you can have a wide range of parameters and edge cases?"
Whenever you write any block of code, you are doing it because you have previously formulated some kind of an expectation. You expect that some event will be handled by your code and that event will include some values, and those values will get transformed somehow. You cannot start coding without having such expectations. The expectations may be only in your head, or maybe written down in some formal spec. Whatever it is, you must first write the unit test that describes the expectations. Once you do that, you write the code to satisfy those expectations. No need to do anything else (observe and respect the YAGNI principle).
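For the wide-range/edge-case part of the question specifically, one option worth mentioning is property-based testing, e.g. with the Hypothesis library: you write the expectation down once as a property and let the tool search the input space for counterexamples. A sketch with a made-up clamp function:

```python
from hypothesis import given, strategies as st

def clamp(value: int, low: int, high: int) -> int:
    return max(low, min(high, value))

# State the property that must hold for *any* input instead of
# enumerating edge cases by hand.
@given(value=st.integers(), low=st.integers(), high=st.integers())
def test_clamp_stays_in_range(value, low, high):
    if low <= high:                       # precondition from the spec
        assert low <= clamp(value, low, high) <= high
```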
"How to handle flaky tests (tests that don't always pass when you restart them, probably because of some environment/context dependency)?"
Flaky tests are an indication of tightly coupled code. You need to refactor relentlessly.
"How to properly manage environment setup/teardown to garantee test independence (tests should be run in any random order and still pass)"
Unit tests must never be dependent on the environment. They only run in memory, so they must always produce the exact same results, regardless of the environment they run in.
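For the setup/teardown part, a common pattern is a fixture that rebuilds the state for every test (pytest shown here as one example):

```python
import pytest

@pytest.fixture
def cart():
    # Fresh state is built for every single test, so nothing leaks
    # between them and they pass in any order.
    return []

def test_add_item(cart):
    cart.append("book")
    assert cart == ["book"]

def test_starts_empty(cart):
    assert cart == []   # unaffected by whatever test_add_item did
```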
My biggest issue is that it feels like I'm being redundant and that it takes twice as long to code and write tests; tests which, as someone mentioned, the client will most likely not see the value in and will want to pass on.
Unit testing in software engineering is the equivalent of the double-entry practice in accounting. If you hire an accountant and ask them to itemize the invoice they send you, and in the invoice you notice that they are doing double-entry bookkeeping, you are not going to deny the value of entering details twice and refuse to pay them for their work. That is a proven practice that works, is practised all over the world, and ensures the quality of the accounting. Why would building software all of a sudden be exempt from the requirements for quality?
See, I'm not familiar with that practice either. But I think once more experience is gained, the value of testing becomes more apparent. I think it's hard to remember what it's like to not see value in something once you do. Thanks for your response!
In simpler terms, writing unit tests first is the equivalent of measuring before you cut. If you hire a contractor and they start measuring before they begin to cut (whatever the task at hand might be), you would look foolish if you started complaining that they are wasting their time on measuring.
The same applies to software development. No one has a valid argument that writing a unit test (measuring before you cut) is wasted effort. Those arguments simply do not hold water under any circumstances.
Makes sense.
One of the current pains we have in a project is the number of mocks some classes need in order to actually be tested.
Is it desirable to skip mocks and use the actual class implementations? Then the tests aren't in isolation, though.
Too many mocks can be a code smell. It could be your classes have too many dependencies or too many coupled pieces.
I also don't mock databases and will use a test one for each test run instead.
It could also be that you need more utility methods for testing.
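A sketch of the throwaway-database-per-test idea (sqlite3 used purely as a stand-in for whatever engine a project really uses):

```python
import sqlite3
import pytest

@pytest.fixture
def db():
    # A throwaway in-memory database per test; created fresh,
    # torn down afterwards, never shared between tests.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    yield conn
    conn.close()

def test_insert_user(db):
    db.execute("INSERT INTO users VALUES ('Ada')")
    rows = db.execute("SELECT name FROM users").fetchall()
    assert rows == [("Ada",)]
```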
This is where I think unit testing's biggest strength is. If tests are hard to write, it indicates problems in your code; in this case, overly complex classes and possibly a violation of single responsibility.
After 30 years of programming (COBOL, Clipper, VB, C#, Python), writing the test first is very difficult for me; the old habit is hard to break. Almost impossible. But I will try again and again. In Portuguese we have a song: "we know the lesson, we just need to learn it".
Unit tests often test incidental, as opposed to essential, qualities of code. The more fine-grained a test is, the more likely it tests incidentals. "Mockist" code seems to fall victim to this, except when used to stub out an external dependency.
Valuable tests are super hard to deal with because they tend to include a lot of dependencies.
Tests that communicate the "what" of what they are testing, but not the "why", make it really hard to know when a certain test can be changed or is no longer needed.
Can you define what you mean by "refactoring"? Because the point of unit tests is to prove that your refactoring was completed successfully.
Given that the test passes initially, when you refactor the implementation, the test should still pass. A refactoring doesn't change the behaviour; it only improves the implementation.
I get what he means. If you change something you believe to be inconsequential, your tests will confirm that. However, if you update a function and change its output but forget to fix a dependent piece of code, the tests will make your mistake evident.
Most probably he means that the tests are strictly coupled with the production code implementation. The tests exercise the implementation at a given moment, not the API or the behaviour. Seems like the problem of test contra-variance.
I know what unit testing is and why I should do it. This has been harped on a billion times, so don't start with that. Also don't tie it in with TDD or CI/CD. Unit testing is more broadly applicable than these, and I don't need a new way of thinking about programming.
I want a scientific method of programming.
I want to know where to start. Which of the 3 different frameworks for <language of choice> should I use? How do I construct a test? What's mocking, and why do we mock rather than test actual data? Can I use unit testing to guard against regressions? What's a good strategy?
I recommend you start here: opensource.com/article/19/10/test-...
Then work your way backwards by clicking on the link to the previous article. The series covers unit testing, TDD, and mutation testing pretty thoroughly, and adds a lot of scientific explanation to the proceedings.
My biggest challenge has been testing code that was written without tests and has most of its methods private. I've had countless discussions on how to approach this, none of them conclusive.
Depending on your language you can change them from private to protected and create subclasses for testing.
Or create tests that will invoke the private methods through the public ones. But it could also be other code smells; hard to say.
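For example (hypothetical class): driving the public method also covers the private helper, so there is no need to call it directly.

```python
class ReportBuilder:
    def build(self, rows: list[str]) -> str:    # public entry point
        return "\n".join(self._format(r) for r in rows)

    def _format(self, row: str) -> str:         # private helper
        return f"- {row}"

def test_build_exercises_the_private_helper():
    # The public method drives _format through every path we care about.
    assert ReportBuilder().build(["a", "b"]) == "- a\n- b"
```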