Introduction
Testing is a crucial aspect of software development that helps ensure the quality and reliability of your code. In Go, testing is built into the language through the testing package, which makes it easy to write and execute tests for your programs.
Tests are essential because they:
- Validate the correctness of your code, ensuring that it behaves as expected under various conditions.
- Provide a safety net for future modifications, allowing you to make changes with confidence that you won't inadvertently break existing functionality.
- Complement the documentation for other developers by demonstrating how the code is supposed to work and serving as a guide for using and modifying it.
- Facilitate better code design, as writing tests often encourages developers to create modular and reusable components.
For Go developers, incorporating tests into the development process can lead to more reliable and maintainable code, ultimately improving the overall quality of your applications.
In this article, you'll learn how to write and run table-driven tests and benchmarks in Go. If you're new to testing or feel uncertain about the topic, read the Introduction to Testing in Go topic on Hyperskill, which covers the basics of tests in Go.
Table-Driven Tests 📋🧪
Table-driven tests are a typical pattern in Go for testing multiple input and output cases using a single test function. Instead of writing separate test functions for each case, you can define a table (a slice of structs) that includes the input values, expected output, and an optional description for each test case. You can then loop through the table and execute the test function for each case.
Compared to individual unit tests, table-driven tests offer several advantages:
- They reduce code duplication and make your test suite more maintainable.
- They make it easy to add new test cases, as you simply need to extend the table.
- They provide a clear overview of the various input-output combinations being tested.
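Before building a full example, here is the general shape of the pattern. The sketch below uses the standard library's strings.ToUpper as a stand-in for the function under test, purely to illustrate the structure:

// pattern_test.go
package main

import (
    "strings"
    "testing"
)

func TestToUpper(t *testing.T) {
    testCases := []struct {
        input    string
        expected string
        desc     string
    }{
        {"go", "GO", "lowercase word"},
        {"Go", "GO", "mixed case"},
        {"", "", "empty string"},
    }

    // Loop over the table and run each case as a named subtest.
    for _, tc := range testCases {
        t.Run(tc.desc, func(t *testing.T) {
            if got := strings.ToUpper(tc.input); got != tc.expected {
                t.Errorf("ToUpper(%q) = %q; want %q", tc.input, got, tc.expected)
            }
        })
    }
}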
Let's get started! First, create a new Go project named example and initialize Go modules via the following commands:
mkdir example && cd example
go mod init example
Then create a new file main.go and within it, let's write the code of the DiscountedPrice() function that calculates the discounted price of a product:
// main.go
package main
import "fmt"
// DiscountedPrice calculates the discounted price of a product,
// given its original price and discount percentage.
func DiscountedPrice(price, discountPercent float64) (float64, error) {
    switch {
    case discountPercent == 0:
        return price, nil
    case discountPercent < 0:
        return 0, fmt.Errorf(
            "invalid negative discount percentage: %.2f",
            discountPercent)
    case discountPercent > 100:
        return 0, fmt.Errorf(
            "invalid discount percentage greater than 100: %.2f",
            discountPercent)
    default:
        discount := price * (discountPercent / 100)
        return price - discount, nil
    }
}
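If you'd like to try the function by hand before writing any tests, you could temporarily add a small main function along these lines (illustrative only, not required for the tests that follow) and execute it with go run .:

func main() {
    price, err := DiscountedPrice(100.0, 25.0)
    if err != nil {
        fmt.Println("error:", err)
        return
    }
    fmt.Printf("discounted price: %.2f\n", price) // prints: discounted price: 75.00
}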
The next step is to create a new file main_test.go, and within it, write a table-driven test for the DiscountedPrice() function:
// main_test.go
package main
import "testing"
func TestDiscountedPrice(t *testing.T) {
    testCases := []struct {
        price           float64
        discountPercent float64
        expected        float64
        expectError     bool
        desc            string
    }{
        {100.0, 0.0, 100.0, false, "no discount"},
        {100.0, 50.0, 50.0, false, "50% discount"},
        {100.0, 100.0, 0.0, false, "100% discount"},
        {100.0, 110.0, 0.0, true, "discount greater than 100%"},
    }

    for _, tc := range testCases {
        t.Run(tc.desc, func(t *testing.T) {
            result, err := DiscountedPrice(tc.price, tc.discountPercent)
            if tc.expectError && err == nil {
                t.Errorf(
                    "DiscountedPrice(%.2f, %.2f) should return an error",
                    tc.price, tc.discountPercent)
            }
            if !tc.expectError && err != nil {
                t.Errorf(
                    "DiscountedPrice(%.2f, %.2f) returned an error: %v",
                    tc.price, tc.discountPercent, err)
            }
            if !tc.expectError && result != tc.expected {
                t.Errorf(
                    "DiscountedPrice(%.2f, %.2f) = %.2f; want %.2f",
                    tc.price, tc.discountPercent, result, tc.expected)
            }
        })
    }
}
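A handy side effect of naming each case via t.Run is that you can target a single subtest with the -run flag. Subtests are addressed as Parent/Subtest, with spaces in the subtest name replaced by underscores:

go test -run "TestDiscountedPrice/50%_discount"

This is convenient when debugging one failing case without rerunning the whole table.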
Running tests 👨‍🔬🖥️👩‍🔬
To run the tests, you can use the go test command, which executes all test functions in the package and reports the results. By default, go test runs all tests for the current package. However, you can also provide a package name, a directory, or a list of packages to test multiple packages simultaneously, as shown below.
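For example, any of the following invocations works from the module root (the ./... pattern matches every package under the current directory):

go test            # test the package in the current directory
go test example    # test the package with the import path "example"
go test ./...      # test all packages in the module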
After executing the go test command, you should see the following output:
> go test
PASS
ok example 0.236s
PASS indicates all the tests have passed, ok confirms that the test execution was successful, example is the name of the Go module being tested, and 0.236s is the duration of the test execution in seconds.
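If you'd like to see each subtest reported individually instead of a single PASS line, add the -v flag. The output will look roughly like this, with a === RUN and --- PASS line per test case:

> go test -v
=== RUN   TestDiscountedPrice
=== RUN   TestDiscountedPrice/no_discount
...
--- PASS: TestDiscountedPrice (0.00s)
PASS
ok example 0.236s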
Even though all tests passed, how can you be sure there were no untested code paths left? The answer is simple: you need to check the test coverage.
Running tests with coverage 📄🔍
Test coverage is a metric that measures the proportion of your code that is exercised by your test suite. High test coverage indicates that your tests are comprehensive, while low test coverage suggests that some parts of your code may not be adequately tested. By tracking test coverage, you can identify untested portions of your code and write additional tests to improve the reliability of your application.
To run the tests with coverage, use the go test -cover command. It will execute all test functions in the package, report the results, and provide a coverage percentage indicating the proportion of your code exercised by your tests.
> go test -cover
PASS
coverage: 85.7% of statements
ok example 0.253s
After examining the output, you can see that the coverage is only 85.7%, which indicates that one of the code paths wasn't tested. Upon closer inspection, you might notice the absence of a test case for the "negative discount" scenario.
To improve test coverage, you can add the "negative discount" test case below to the testCases slice:
{100.0, -10.0, 0.0, true, "negative discount"},
Then run go test -cover once again, and you should get 100% coverage:
> go test -cover
PASS
coverage: 100.0% of statements
ok example 0.250s
It's important to remember that while achieving high test coverage is a good practice, the primary focus should be on writing meaningful tests that cover a wide range of inputs and edge cases rather than just aiming for 100% coverage.
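While the -cover flag only prints a percentage, you can also see exactly which statements are and aren't covered by writing the coverage data to a profile and opening it as an HTML report in your browser:

go test -coverprofile=coverage.out
go tool cover -html=coverage.out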
Benchmarking 📊
Now that you're familiar with testing, let's move on to benchmarking. Benchmarking is a valuable technique to measure the performance of your code, helping you identify bottlenecks and optimize your functions for better efficiency.
Benchmarks can be beneficial for developers in Go when comparing the performance of different implementations, optimizing code for specific use cases, or determining the impact of a change in the codebase.
For instance, suppose you want to compare the efficiency of various approaches to a common problem, like string concatenation, to optimize your code's performance and minimize memory allocations. By benchmarking different methods, you can make more informed decisions about the best way to implement a particular feature or operation in your application.
Let's write benchmarks to compare the performance of three different string concatenation methods:
- Using strings.Builder and its WriteString() method.
- Using the += operator for string concatenation.
- Using fmt.Sprintf() for string concatenation.
First, create a new file called benchmarks_test.go and add the following code to it:
// benchmarks_test.go
package main
import (
    "fmt"
    "strings"
    "testing"
)

func BenchmarkStringBuilderConcatenation(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var sb strings.Builder
        for j := 0; j < 1000; j++ {
            sb.WriteString("h")
        }
        _ = sb.String()
    }
}

func BenchmarkStringConcatenation(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var s string
        for j := 0; j < 1000; j++ {
            s += "h"
        }
    }
}

func BenchmarkFmtSprintfConcatenation(b *testing.B) {
    for i := 0; i < b.N; i++ {
        var s string
        for j := 0; j < 1000; j++ {
            s = fmt.Sprintf("%s%s", s, "h")
        }
    }
}
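As a side note, if a benchmark needs expensive setup that shouldn't be measured, the testing package lets you exclude it by calling b.ResetTimer() after the setup; b.ReportAllocs() makes a single benchmark always report allocation statistics, even without the -benchmem flag. Here's a minimal sketch, where buildInput() and process() are hypothetical placeholders:

func BenchmarkWithSetup(b *testing.B) {
    input := buildInput() // hypothetical expensive setup
    b.ReportAllocs()      // always report allocation statistics for this benchmark
    b.ResetTimer()        // exclude the setup above from the timing
    for i := 0; i < b.N; i++ {
        process(input) // hypothetical function under test
    }
}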
Running benchmarks and comparing results 🚀📈
You can use the go test -bench . command to run benchmarks. This command executes every benchmark function in the package once. However, keep in mind that benchmark results can vary from run to run.
To get accurate and dependable results, run your benchmarks repeatedly and compare the outcomes. This lets you spot outliers and better understand your code's performance characteristics.
You can run benchmarks multiple times using the -count flag. Additionally, you can include the -benchmem flag to gather memory allocation statistics for each benchmark.
Let's go ahead and run the benchmarks with -count 10 and -count 20 to compare the performance:
go test -bench . -benchmem -count 10 > 10_runs_bench.txt
go test -bench . -benchmem -count 20 > 20_runs_bench.txt
Now you have the results from two sets of benchmarks with 10 and 20 runs each. To compare these results, you can use benchstat, a command-line tool that helps analyze and compare benchmark results:
go install golang.org/x/perf/cmd/benchstat@latest
After installing benchstat, run the following command to compare the results:
> benchstat 10_runs_bench.txt 20_runs_bench.txt
goos: windows
goarch: amd64
pkg: example
│ 10_runs_bench.txt │ 20_runs_bench.txt │
│ sec/op │ sec/op vs base │
StringBuilderConcatenation-12 2.385µ ± 2% 2.315µ ± 1% -2.96% (p=0.000 n=10+20)
StringConcatenation-12 171.2µ ± 2% 163.6µ ± 3% -4.46% (p=0.000 n=10+20)
FmtSprintfConcatenation-12 241.2µ ± 5% 240.5µ ± 2% ~ (p=0.619 n=10+20)
geomean 46.18µ 44.99µ -2.58%
│ 10_runs_bench.txt │ 20_runs_bench.txt │
│ B/op │ B/op vs base │
StringBuilderConcatenation-12 3.242Ki ± 0% 3.242Ki ± 0% ~ (p=1.000 n=10+20) ¹
StringConcatenation-12 517.8Ki ± 0% 517.8Ki ± 0% ~ (p=1.000 n=10+20)
FmtSprintfConcatenation-12 533.8Ki ± 0% 533.8Ki ± 0% +0.00% (p=0.034 n=10+20)
geomean 96.42Ki 96.42Ki +0.00%
¹ all samples are equal
│ 10_runs_bench.txt │ 20_runs_bench.txt │
│ allocs/op │ allocs/op vs base │
StringBuilderConcatenation-12 9.000 ± 0% 9.000 ± 0% ~ (p=1.000 n=10+20) ¹
StringConcatenation-12 999.0 ± 0% 999.0 ± 0% ~ (p=1.000 n=10+20) ¹
FmtSprintfConcatenation-12 1.998k ± 0% 1.998k ± 0% ~ (p=1.000 n=10+20) ¹
geomean 261.9 261.9 +0.00%
¹ all samples are equal
When comparing benchmark results with the benchstat command, you'll see one table per metric: time per operation, bytes allocated per operation, and allocations per operation. Let's break down what each part means:
- Function name: The name of the benchmark function, such as StringBuilderConcatenation-12 (the -12 suffix is the GOMAXPROCS value the benchmark ran with).
- Time per operation: The average time one operation took in each benchmark file (e.g., 2.385µ ± 2% for 10_runs_bench.txt and 2.315µ ± 1% for 20_runs_bench.txt). The percentage indicates the variation in execution time across runs.
- Time difference: The percentage difference in average execution time between the two benchmark files (e.g., -2.96%). The p-value (e.g., p=0.000) helps determine whether the difference is statistically significant; a p-value of 0.05 or lower typically indicates significance, and a negative percentage means the second benchmark file has faster execution times.
- Bytes allocated per operation: The average number of bytes allocated per operation in each benchmark file (e.g., 3.242Ki ± 0% for both files), with the percentage again indicating the variation.
- Memory difference: The percentage difference in mean memory allocation between the two benchmark files. A ~ (e.g., ~ (p=1.000 n=10+20)) means benchstat found no statistically significant difference.
- Allocations per operation: The mean number of memory allocations per operation in each benchmark file (e.g., 9.000 ± 0% for both files).
- Allocation difference: The percentage difference in the average number of memory allocations between the two benchmark files; here, too, a ~ means no statistically significant difference.
- Geomean: The geomean row at the bottom of each table shows the geometric mean of that table's values, providing an overall summary of the benchmark results.
Finally, based on the benchmark results, we can determine that:
- Using a string builder with the WriteString() method is the best choice for performance and memory efficiency, especially when dealing with numerous concatenations or resource-limited environments.
- Using string concatenation with the += operator and fmt.Sprintf() is slower and less memory-efficient than using a string builder. They may be suitable for more straightforward tasks where performance is not a top priority or readability and ease of use are more important.
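As a final performance note, strings.Builder can do even better when you know the final length in advance: its Grow method preallocates the internal buffer, so a loop like the one from the benchmark performs a single buffer allocation instead of growing repeatedly. A small sketch:

var sb strings.Builder
sb.Grow(1000) // preallocate room for 1000 bytes up front
for j := 0; j < 1000; j++ {
    sb.WriteString("h")
}
result := sb.String() // built without intermediate reallocations
_ = result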
Wrapping up 📝
To sum up, testing and benchmarking are essential aspects of Go development, ensuring code reliability, efficiency, and quality.
If you want to keep learning about Go software quality, you can take a look at some other topics with great theory and practical problems on Hyperskill.
Soon, you'll also be able to learn more about software quality, testing, and benchmarking in the upcoming Go for Developers track that is in the works right now, so keep an eye on the Hyperskill blog for future announcements!
And if you want to learn the fundamentals of the Go programming language along with essential Computer Science concepts 💻, you can start your learning journey today with the Introduction to Go track on Hyperskill!
Let us know in the comments below if you have any questions or feedback regarding this blog. You can also follow us on social media to stay up-to-date with our latest articles and projects. We are on Reddit, LinkedIn, and Facebook.
Thank you for reading, and keep on coding!