TL;DR: I don't need it, and you probably don't either. I'll explain below.
As many of us know, Go ships with a built-in unit testing framework in its standard library and toolchain, as described in the testing package documentation. Assuming you have some code to test:
```go
package add

func Add(x, y int) int {
	return x + y
}
```
The documentation shows a test like:
```go
package add

import "testing"

func TestAdd(t *testing.T) {
	got := Add(1, 2)
	if got != 4 {
		t.Errorf("Add(1, 2) = %d; want 4", got)
	}
}
```
And if the test fails you get an error like: `Add(1, 2) = 3; want 4`.
Of course, as soon as people saw this, the third-party assertion helper libraries started appearing. The most popular one seems to be testify (although I've never used it). Personally, I thought that the explicit check would be good enough for me, but it's true that after writing a bunch of tests, the boilerplate does seem unnecessarily verbose.
But do we really need a third-party library to abstract it? Pre-Go generics, I might have said yes. But post-generics, I think it's pretty simple to write a helper function directly:
```go
package assert

import "testing"

func Equal[V comparable](t *testing.T, got, expected V) {
	t.Helper()
	if expected != got {
		t.Errorf(`assert.Equal(
	t,
	got:
		%v
	,
	expected:
		%v
)`, got, expected)
	}
}
```
As you can see, generics are the key here, especially the fact that we specify that types must be comparable. Before generics, we would have had to use things like reflection, or simply not use a helper function at all. To my eyes, many pre-generics Go code patterns seem to be about avoiding things that would have required generics to express safely. E.g., above we used the comparison operator directly (`got != 4`) instead of encapsulating the comparison in a function call, which would have needed generics.
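As an aside, here is roughly what such a helper might have looked like before generics. This is a sketch, not the article's code: it uses `interface{}` plus `reflect.DeepEqual`, and the `failer` interface and `fakeT` type are illustrative stand-ins so the sketch can run outside the test framework.

```go
package main

import (
	"fmt"
	"reflect"
)

// failer is the small subset of *testing.T the helper needs; an
// interface here lets the sketch be exercised without the go test runner.
type failer interface {
	Helper()
	Errorf(format string, args ...interface{})
}

// Equal, pre-generics style: it accepts interface{} values and falls
// back to reflect.DeepEqual, so a type mismatch such as
// Equal(t, 1, "1") compiles fine and is only caught at runtime.
func Equal(t failer, got, expected interface{}) {
	t.Helper()
	if !reflect.DeepEqual(expected, got) {
		t.Errorf("assert.Equal(t, got: %v, expected: %v)", got, expected)
	}
}

// fakeT records whether an assertion failed, standing in for *testing.T.
type fakeT struct{ failed bool }

func (f *fakeT) Helper() {}
func (f *fakeT) Errorf(format string, args ...interface{}) {
	f.failed = true
	fmt.Printf(format+"\n", args...)
}

func main() {
	t := &fakeT{}
	Equal(t, 1+2, 4)                 // mismatch: reports a failure
	fmt.Println("failed:", t.failed) // failed: true
}
```

Compared to the generic version, nothing stops a caller from comparing values of different types, which is exactly the safety that `comparable` restores.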
So, what does this look like in practice? Here's the above test case rewritten to use the helper:
```go
func TestAdd(t *testing.T) {
	assert.Equal(t, Add(1, 2), 4)
}
```
Much nicer, I think, even though we lose the ability to format a custom error message. Speaking of error messages, let's look at one:
```
$ go test gt/add
--- FAIL: TestAdd (0.00s)
    add_test.go:9: assert.Equal(
        	t,
        	got:
        		3
        	,
        	expected:
        		4
        )
FAIL
FAIL	gt/add	0.119s
FAIL
```
Sure, we don't have a custom error message telling us what operation was performed here, but we do have the exact line number in the test, so we can just look for ourselves. And also, VS Code (and, I assume, other good editors) can show test results inline:
With a good editor showing inline results, it's pretty obvious what the code under test did and what result it expected.
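As a side note, if a custom message ever does matter, a variadic variant is a small extension. The sketch below is hypothetical, not the article's helper; it returns the failure report as a string (empty meaning pass) so it can be demonstrated outside the test framework.

```go
package main

import "fmt"

// equalMsg is a hypothetical Equal variant that restores custom
// messages: any extra arguments are appended to the failure report.
// It returns the report instead of calling t.Errorf so this sketch
// runs standalone; an empty string means the assertion passed.
func equalMsg[V comparable](got, expected V, msgAndArgs ...interface{}) string {
	if got == expected {
		return ""
	}
	report := fmt.Sprintf("got: %v, expected: %v", got, expected)
	if len(msgAndArgs) > 0 {
		report += " (" + fmt.Sprint(msgAndArgs...) + ")"
	}
	return report
}

func main() {
	fmt.Println(equalMsg(3, 4, "adding 1 and 2"))
	// got: 3, expected: 4 (adding 1 and 2)
}
```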
API design considerations
Deciding on the right API and the right output is a little tricky, but it's worth taking the time to do it right. The function signature is important:
```go
func Equal[V comparable](t *testing.T, got, expected V)
```
Notice that we pass in the actual (i.e. 'got') value first, and the expected value second. The reason for this ties into my testing philosophy: one test function should try (as hard as possible) to assert one thing. Often, we need to test more complex data. In these cases, I believe the best approach is to snapshot the data as a string and assert on the string. E.g.:
```go
assert.Equal(
	t,
	dataframe1.Join(dataframe2).String(),
	`---------
| a | b |
---------
| 1 | 2 |`,
)
```
This allows us to elegantly capture the entire 'behaviour' of the code under test, and update it quickly in the future if needed. If one day the dataframe result changes, the test failure output will show the new value and we can update it with a simple copy-paste. Think of it as a proto-snapshot testing style. Maybe in the future editor support tools will even be able to offer a one-click way to update the expected value.
Finally, note the layout of the failure message: `assert.Equal(t, got: x, expected: y)`. This is deliberately chosen to teach users how to call this helper even if they don't start by reading its documentation. Just by looking at the error, they learn that the actual value is the second argument and the expected value is the third. This is informative while also being fairly succinct.
Conclusion
As we see here, it doesn't take many lines of code to write a very useful test assertion helper directly on top of the standard library, thanks to Go generics. In my opinion this covers 99% of Go unit testing needs. The remaining 1% is left as an exercise for the reader!
Top comments (17)
My philosophy is to not introduce any external dependency until I absolutely need it. Creating a small assertion helper function that you have total control over is better than pulling in a dependency that you have no control over. Until the project absolutely needs all the extra features that an external dependency provides, it's more maintainable to roll out simple functionality yourself.
I wrote my own assertions too, until I moved to another project and had to copy-paste them all. Testify is great; it's worth it.
Why would you care about adding an external dependency for your tests? They won't even be included in your runtime binaries.
The point is that your own implementation will cover a lot of test cases (99%) but when you get to that 1%, then you have to add the external dependency anyway so all the work you did before goes in the trash.
It's not just about how heavy the runtime binary is, it's also about how much third-party code I am depending on. By introducing a new dependency, I am now relying on a third party's unpaid efforts. If I can avoid that with less than 15 lines of code, why shouldn't I?
Sure, I might end up needing more advanced assertion libraries. But why borrow problems from the future? If I need something in the future, I'll use it. And nothing is going to go 'in the trash' because I'll just keep using my own assertion whenever possible anyway.
You say "By introducing a new dependency, I am now relying on a third party's unpaid efforts."
Which effort? What facts do you have to back that up? Maybe I am wrong, but you make strong claims and they need proof.
What facts and proof are you looking for? When you use an OSS library maintained by someone, that person has to spend effort to maintain the project: writing code, triaging issues, responding to queries, doing code reviews, tagging releases. The OSS doesn't magically maintain itself out of thin air.
So you don’t rely on OSS at all?
Of course I rely on OSS, as almost every developer does. But I want to be able to pick and choose which OSS I rely on and I want to reduce my dependency surface area as much as possible. If I can get 90% of the benefit of a third-party library with less than 15 lines of hand-written code that I know like the back of my hand, why shouldn't I just do that?
So, you don't use a third-party assertion library just because you wrote a new one yourself?
I thought the article was about using `fmt.Println` + `// want: whatever`.
But no 🤔
It is not true that assert.Equal covers 99% of Go unit testing needs! Compare it with, for example, assert.Panic: only 1% of test cases use assert.Panic...
However, 99% of projects need assert.Panic, even for just 1 or 2 calls (similar to other asserts: assert.True, assert.Nil, assert.Empty...).
When your co-worker comes to your project and needs assert.Panic, he/she will conveniently use testify/assert and throw away your "home made" assert. As 99% of Go projects need assert.Panic, assert.True, etc., there's a 99% probability that your assert will be thrown away...
Your point about reducing the dependency surface is reasonable, but choosing an assert library to showcase the point is just a bad example.
Dunno what to tell you. The Go standard library's testing package doesn't provide an assertion for 'panic', and I've never needed one. Or in other words, I've never needed to trigger or test for a panic. It's a crash condition; testing for it is imho missing the point. But I have found that almost every useful assertion can be expressed in terms of comparing two strings, which covers my 99% of cases.
If your application never calls the panic() function, then it is fine: you don't need assert.Panic() in your tests.
However, if your app calls panic() somewhere and you don't have a test covering that call, then you are missing a test. It shows that your tests cover only the happy path, and you didn't write tests for the edge cases that make your application call the panic function and crash.
Most of my applications call panic somewhere, and I have tests to cover those cases. The tests make sure that when the situation happens, the application panics instead of running in an unpredictable condition.
No, I don't directly call panic() in my applications, and I don't think it's good practice to do so in application code either. Panic should immediately crash the app; the fact that some libraries catch panics is a design decision they made, and it is their responsibility to ensure they are using it safely. I have no control over that, and I don't test code I don't control. If I made a library that caught panics, then I would test it as such.
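For what it's worth, asserting a panic doesn't require a library either; a defer/recover-based helper in the same hand-rolled spirit is a short sketch (`Panics` is my naming here, not a standard-library function):

```go
package main

import "fmt"

// Panics reports whether fn panics when called, using only defer and
// recover; a hand-rolled stand-in for what assertion libraries offer.
// (One known gap: it cannot observe panic(nil).)
func Panics(fn func()) (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	fn()
	return false
}

func main() {
	fmt.Println(Panics(func() { panic("boom") })) // true
	fmt.Println(Panics(func() {}))                // false
}
```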
It seems that you wrote a tiny testify by yourself. Why?
By writing your own thing instead of using well-known, quasi-standard way of testing, you also hurt uniformity and ease of collaboration since people have to learn your API instead.
So just to clarify, are you making both arguments at the same time?