Let's start by understanding what testing is. There is a testing pyramid (unit tests, integration tests, and end-to-end tests).
The horizontal axis is the number of tests; the vertical axis is the cost of maintaining them.
Unit tests cover something like an a+b function, where we describe the positive and negative scenarios. There is a good video about unit tests. This topic is not the focus of this article.
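Just for illustration, a minimal table-driven unit test in Go might look like this (the Add function and its package are invented for this example):
package calc

import "testing"

// Add is a trivial function used only to illustrate a unit test.
func Add(a, b int) int { return a + b }

func TestAdd(t *testing.T) {
	tests := []struct {
		name string
		a, b int
		want int
	}{
		{name: "positive", a: 2, b: 3, want: 5},
		{name: "negative", a: -2, b: -3, want: -5},
	}
	for _, tc := range tests {
		t.Run(tc.name, func(t *testing.T) {
			if got := Add(tc.a, tc.b); got != tc.want {
				t.Errorf("Add(%d, %d) = %d, want %d", tc.a, tc.b, got, tc.want)
			}
		})
	}
}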
E2E tests are usually written by testers; they are needed to test the full flow within a user story. For example, we send a request to a running server, and the server processes it (calls other services, goes to the database, to Redis, etc.). For us, the service is a black box: we test the responses to our requests.
The most effective approach in practice, however, remains the following scheme (the Testing Trophy):
Integration tests check the integration of the tested service with other services: we fully check the flow of a particular service.
Integration tests become the foundation of development testing. They are usually written by the developers themselves. The purpose of this article is to show how easy and efficient writing integration tests can be.
We will look at integration tests using the example of an HTTP server that interacts with a database and calls other services.
First, let's describe the service itself (the service is on GitHub). It uses gorilla/mux and Postgres, implements a clean architecture, and has the following file structure:
❯ tree user_service
user_service
├── api
│   └── user.go
├── billing
│   ├── api.go
│   └── client.go
├── cmd
│   └── main.go
├── docker-compose.yml
├── domain
│   └── user.go
├── handler
│   └── handler.go
├── migrate
│   ├── migrate.go
│   └── migrations
│       └── 20220612163022_create_users.sql
├── server
│   └── server.go
├── storage
│   └── storage.go
└── use_case
    └── use_case.go
10 directories, 12 files
Step 1. Testing createUser
The method that creates a new user record in the database keeps its main logic in the repository layer (we simply pass the payload from the handler through use_case to the repository) and looks like this:
func (s *storage) CreateUser(ctx context.Context, name string) (domain.User, error) {
	query := `INSERT INTO users (name) VALUES ($1) RETURNING id, name, balance, created_at, updated_at`

	res, err := s.db.QueryxContext(ctx, query, name)
	if err != nil {
		return domain.User{}, err
	}
	defer res.Close()

	if !res.Next() {
		return domain.User{}, IncorrectQueryResponse
	}

	var resUser domain.User
	if err := res.StructScan(&resUser); err != nil {
		return domain.User{}, err
	}

	return resUser, nil
}
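For reference, the domain.User that StructScan fills presumably carries db tags matching the returned columns. The struct below is a hedged reconstruction, not the repo's actual code (the Balance type is a guess based on the Balance.String() calls in the tests later):
package domain

import (
	"time"

	"github.com/shopspring/decimal"
)

// User is a hypothetical reconstruction of the entity; the real struct may differ.
type User struct {
	ID        int             `db:"id" json:"id"`
	Name      string          `db:"name" json:"name"`
	Balance   decimal.Decimal `db:"balance" json:"balance"`
	CreatedAt time.Time       `db:"created_at" json:"created_at"`
	UpdatedAt time.Time       `db:"updated_at" json:"updated_at"`
}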
Now let's write a simple test (this test won't work yet):
func TestCreateUser(t *testing.T) {
	// copy from main
	repo, err := storage.New(dbDsn)
	require.NoError(t, err)
	useCase := use_case.New(repo, nil)
	h := handler.New(useCase)
	///

	requestBody := `{"name": "test_name"}`

	// use httptest
	srv := httptest.NewServer(server.New("", h).Router)
	_, err = srv.Client().Post(srv.URL+"/users", "", bytes.NewBufferString(requestBody))
	require.NoError(t, err)
}
The test starts the service itself and calls the request handler (using httptest).
We run the test and see that it fails, since there is no connection to the database. So we need to create one. Moreover, the database should be brought up automatically when the test is launched, with no manual steps. This is where the popular Testcontainers tool and its Go implementation can help.
Step 2. Create testcontainer with Postgres
Let's describe what the request to launch the database in a container looks like:
req := testcontainers.ContainerRequest{
	Env: map[string]string{
		"POSTGRES_USER":     "user",
		"POSTGRES_PASSWORD": "password",
		"POSTGRES_DB":       "postgres",
	},
	ExposedPorts: []string{"5432/tcp"},
	Image:        "postgres:14.3",
	WaitingFor: wait.ForExec([]string{"pg_isready"}).
		WithPollInterval(2 * time.Second).
		WithExitCodeMatcher(func(exitCode int) bool {
			return exitCode == 0
		}),
}
testcontainers.ContainerRequest describes what our Docker container will look like.
Let's take a look at our docker-compose.yml:
version: '3.8'

services:
  db:
    image: postgres:14.3
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=postgres
    healthcheck:
      test: [ "CMD-SHELL", "pg_isready" ]
      interval: 2s
We can see that ContainerRequest almost completely repeats the description in the docker-compose file:
- Env - environment variables, in this case the user, password, and database name. Same as in docker-compose.
- ExposedPorts - an analogue of docker run -p <port>: dockerd will map the selected port <port> inside the container to a randomly chosen available port on the host.
- Image - the Docker image and its tag.
- WaitingFor - the startup waiting strategy, similar to a healthcheck. With it, we check that the container is up and running.
container, err := testcontainers.GenericContainer(ctx,
	testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	},
)
testcontainers.GenericContainer creates the container. Started: true means that testcontainers will wait until the container has actually started. If you remove this parameter or set it to false, testcontainers won't wait for the condition described in the WaitingFor field of the ContainerRequest, and the test will fail.
Finally, let's describe the structure of the Postgres container itself:
type PostgreSQLContainer struct {
	testcontainers.Container

	MappedPort string
	Host       string
}
Besides testcontainers.Container, our structure will store the external host and port of the Docker container with Postgres. We can get them like this:
mappedPort, err := container.MappedPort(ctx, "5432")
host, err := container.Host(ctx)
Let's also write a helper function that returns the real DSN for connecting to Postgres:
// GetDSN returns DB connection URL.
func (c PostgreSQLContainer) GetDSN() string {
	return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", "user", "password", c.Host, c.MappedPort, "postgres")
}
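A test can later use this DSN to open a plain database/sql connection; a minimal sketch (variable names are illustrative, and the "postgres" driver name assumes lib/pq is imported):
// psqlContainer is the *PostgreSQLContainer created above.
db, err := sql.Open("postgres", psqlContainer.GetDSN())
if err != nil {
	log.Fatal(err)
}
defer db.Close()

// Sanity check that the containerized Postgres is reachable.
if err := db.Ping(); err != nil {
	log.Fatal(err)
}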
That is all it takes to configure and run our test container. The whole code looks like this:
package step_2

import (
	"context"
	"fmt"
	"time"

	_ "github.com/jackc/pgx/v4/stdlib"
	_ "github.com/lib/pq"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

type PostgreSQLContainer struct {
	testcontainers.Container

	MappedPort string
	Host       string
}

func (c PostgreSQLContainer) GetDSN() string {
	return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", "user", "password", c.Host, c.MappedPort, "postgres_test")
}

func NewPostgreSQLContainer(ctx context.Context) (*PostgreSQLContainer, error) {
	req := testcontainers.ContainerRequest{
		Env: map[string]string{
			"POSTGRES_USER":     "user",
			"POSTGRES_PASSWORD": "password",
			"POSTGRES_DB":       "postgres_test",
		},
		ExposedPorts: []string{"5432/tcp"},
		Image:        "postgres:14.3",
		WaitingFor: wait.ForExec([]string{"pg_isready"}).
			WithPollInterval(1 * time.Second).
			WithExitCodeMatcher(func(exitCode int) bool {
				return exitCode == 0
			}),
	}

	container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		return nil, err
	}

	host, err := container.Host(ctx)
	if err != nil {
		return nil, err
	}

	mappedPort, err := container.MappedPort(ctx, "5432")
	if err != nil {
		return nil, err
	}

	return &PostgreSQLContainer{
		Container:  container,
		MappedPort: mappedPort.Port(),
		Host:       host,
	}, nil
}
Step 2.1. Refactoring
To avoid hardcoding the user and password and to make the configuration changeable, let's add a config structure and use the options pattern.
package step_2_1_improved_psql_container

import (
	"context"
	"fmt"

	"github.com/docker/go-connections/nat"
	_ "github.com/jackc/pgx/v4/stdlib"
	_ "github.com/lib/pq"
	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

type (
	PostgreSQLContainer struct {
		testcontainers.Container
		// add Config
		Config PostgreSQLContainerConfig
	}

	// also add options pattern method
	PostgreSQLContainerOption func(c *PostgreSQLContainerConfig)

	PostgreSQLContainerConfig struct {
		ImageTag   string
		User       string
		Password   string
		MappedPort string
		Database   string
		Host       string
	}
)

func (c PostgreSQLContainer) GetDSN() string {
	return fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable", c.Config.User, c.Config.Password, c.Config.Host, c.Config.MappedPort, c.Config.Database)
}

func NewPostgreSQLContainer(ctx context.Context, opts ...PostgreSQLContainerOption) (*PostgreSQLContainer, error) {
	const (
		psqlImage = "postgres"
		psqlPort  = "5432"
	)

	config := PostgreSQLContainerConfig{
		ImageTag: "11.5",
		User:     "user",
		Password: "password",
		Database: "db_test",
	}
	// handle possible options
	for _, opt := range opts {
		opt(&config)
	}

	containerPort := psqlPort + "/tcp"

	req := testcontainers.GenericContainerRequest{
		ContainerRequest: testcontainers.ContainerRequest{
			Env: map[string]string{
				"POSTGRES_USER":     config.User,
				"POSTGRES_PASSWORD": config.Password,
				"POSTGRES_DB":       config.Database,
			},
			ExposedPorts: []string{
				containerPort,
			},
			Image:      fmt.Sprintf("%s:%s", psqlImage, config.ImageTag),
			WaitingFor: wait.ForListeningPort(nat.Port(containerPort)),
		},
		Started: true,
	}

	container, err := testcontainers.GenericContainer(ctx, req)
	if err != nil {
		return nil, fmt.Errorf("getting request provider: %w", err)
	}

	host, err := container.Host(ctx)
	if err != nil {
		return nil, fmt.Errorf("getting host for: %w", err)
	}

	mappedPort, err := container.MappedPort(ctx, nat.Port(containerPort))
	if err != nil {
		return nil, fmt.Errorf("getting mapped port for (%s): %w", containerPort, err)
	}
	config.MappedPort = mappedPort.Port()
	config.Host = host

	fmt.Println("Host:", config.Host, config.MappedPort)

	return &PostgreSQLContainer{
		Container: container,
		Config:    config,
	}, nil
}
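The package may also ship option constructors; as an illustration, a hypothetical WithImageTag option and its usage could look like this (not taken from the repo):
// WithImageTag is a hypothetical option constructor that overrides the
// default Postgres image tag used by NewPostgreSQLContainer.
func WithImageTag(tag string) PostgreSQLContainerOption {
	return func(c *PostgreSQLContainerConfig) {
		c.ImageTag = tag
	}
}

// Usage in a test:
//   psqlContainer, err := NewPostgreSQLContainer(ctx, WithImageTag("14.3"))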
Step 3. Migrations
Let's run the test once again. We will see another error: the DB is up and running, but the insert fails since there is no schema. To solve this, let's run the migration script; we will do it on each test run. Just copy the code from cmd/main.go into our test.
// run migrations
err = migrate.Migrate(psqlContainer.GetDSN(), migrate.Migrations)
require.NoError(t, err)
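The article does not show the migrate package itself, and we don't know which migration library the repo uses; as a hedged sketch, a Migrate function built on pressly/goose with an embedded migrations directory could look roughly like this:
package migrate

import (
	"database/sql"
	"embed"

	_ "github.com/lib/pq"
	"github.com/pressly/goose/v3"
)

//go:embed migrations
var Migrations embed.FS

// Migrate applies all SQL migrations from the embedded FS to the database at dsn.
func Migrate(dsn string, migrations embed.FS) error {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return err
	}
	defer db.Close()

	goose.SetBaseFS(migrations)
	if err := goose.SetDialect("postgres"); err != nil {
		return err
	}

	return goose.Up(db, "migrations")
}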
Run the test once again and see that it finally passes.
Step 4. Test getUser
The test for getUser will look like this:
func TestGetUser(t *testing.T) {
	// ---------------- common part for all tests
	ctx, ctxCancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer ctxCancel()

	psqlContainer, err := step2.NewPostgreSQLContainer(ctx)
	require.NoError(t, err)
	defer psqlContainer.Terminate(context.Background())

	err = migrate.Migrate(psqlContainer.GetDSN(), migrate.Migrations)
	require.NoError(t, err)

	repo, err := storage.New(psqlContainer.GetDSN())
	require.NoError(t, err)
	useCase := use_case.New(repo, nil)
	h := handler.New(useCase)

	srv := httptest.NewServer(server.New("", h).Router)
	// ------------------------------------------------

	// test body below ----------------------------
	res, err := srv.Client().Get(srv.URL + "/users/1")
	require.NoError(t, err)
	defer res.Body.Close()

	require.Equal(t, http.StatusOK, res.StatusCode)

	// check response
	response := api.GetUserResponse{}
	err = json.NewDecoder(res.Body).Decode(&response)
	require.NoError(t, err)

	// id may be anything,
	// so we will check each field separately
	assert.Equal(t, 1, response.ID)
	assert.Equal(t, "test_name", response.Name)
	assert.Equal(t, "0", response.Balance.String())
}
The problem with testing getUser is that we need a record for that user in our DB. Of course, we could solve this by simply running getUser right after createUser sequentially. But this approach is an anti-pattern, since each test should work in isolation and test only the requested functionality.
Step 5. Testfixtures
To solve the problem we will use testfixtures. Create the folders fixtures and fixtures/storage and put a users.yaml file inside:
- id: 1
  name: test_name
  balance: 0
Now install the library (go get github.com/go-testfixtures/testfixtures/v3) and add this code after the common part, before we make the GET call.
db, err := sql.Open("postgres", psqlContainer.GetDSN())
require.NoError(t, err)
fixtures, err := testfixtures.New(
	testfixtures.Database(db),
	testfixtures.Dialect("postgres"),
	testfixtures.Directory("fixtures/storage"),
)
require.NoError(t, err)
require.NoError(t, fixtures.Load())
Run the test once again and see that it passes.
Step 6. Testsuite
As you may notice, each test has a common part. To keep our code tidy and avoid duplication, we will use test suites from the testify library. This tool lets us describe the actions that need to run before and after our tests.
Let's create a structure for our TestSuite:
type TestSuite struct {
	suite.Suite

	psqlContainer *step2.PostgreSQLContainer
	server        *httptest.Server
}
Now let's describe the special method SetupSuite(), which runs once before the tests of this TestSuite are launched. Move the common part inside it:
func (s *TestSuite) SetupSuite() {
	// create db container
	ctx, ctxCancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer ctxCancel()

	psqlContainer, err := step2.NewPostgreSQLContainer(ctx)
	s.Require().NoError(err)
	s.psqlContainer = psqlContainer
	//

	// run migrations
	err = migrate.Migrate(psqlContainer.GetDSN(), migrate.Migrations)
	s.Require().NoError(err)
	//

	// copy from main
	repo, err := storage.New(psqlContainer.GetDSN())
	s.Require().NoError(err)
	useCase := use_case.New(repo, nil)
	h := handler.New(useCase)
	///

	// use httptest
	s.server = httptest.NewServer(server.New("", h).Router)
	//
}
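Note that SetupSuite runs once per suite. If you prefer every test to start from a freshly loaded dataset, testify also offers a per-test hook; a hedged sketch that moves fixture loading there (the fixtures directory path is illustrative):
func (s *TestSuite) SetupTest() {
	db, err := sql.Open("postgres", s.psqlContainer.GetDSN())
	s.Require().NoError(err)
	defer db.Close()

	fixtures, err := testfixtures.New(
		testfixtures.Database(db),
		testfixtures.Dialect("postgres"),
		testfixtures.Directory("fixtures/storage"), // illustrative path
	)
	s.Require().NoError(err)
	s.Require().NoError(fixtures.Load())
}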
Let's also describe the TearDownSuite() method, which is executed after all tests from the TestSuite are done. To avoid leaking resources, let's terminate our container there:
func (s *TestSuite) TearDownSuite() {
	ctx, ctxCancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer ctxCancel()

	s.Require().NoError(s.psqlContainer.Terminate(ctx))

	s.server.Close()
}
The last thing we need for our TestSuite is a test function that takes the t *testing.T argument and injects it into the TestSuite:
func TestSuite_Run(t *testing.T) {
	suite.Run(t, new(TestSuite))
}
Awesome. The TestSuite is described and ready to work. Now all our tests contain only the tested calls, checks of the return values, and fixtures where needed:
func (s *TestSuite) TestCreateUser() {
	requestBody := `{"name": "test_name"}`

	res, err := s.server.Client().Post(s.server.URL+"/users", "", bytes.NewBufferString(requestBody))
	s.Require().NoError(err)
	defer res.Body.Close()

	s.Require().Equal(http.StatusOK, res.StatusCode)

	// check response
	response := api.CreateUserResponse{}
	err = json.NewDecoder(res.Body).Decode(&response)
	s.Require().NoError(err)

	// id may be anything,
	// so we will check each field separately
	s.Assert().Equal("test_name", response.Name)
	s.Assert().Equal("0", response.Balance.String())
}
func (s *TestSuite) TestGetUser() {
	// create fixtures
	db, err := sql.Open("postgres", s.psqlContainer.GetDSN())
	s.Require().NoError(err)

	fixtures, err := testfixtures.New(
		testfixtures.Database(db),
		testfixtures.Dialect("postgres"),
		testfixtures.Directory("../step_5_add_testfixtures/fixtures/storage"),
	)
	s.Require().NoError(err)
	s.Require().NoError(fixtures.Load())
	//

	res, err := s.server.Client().Get(s.server.URL + "/users/1")
	s.Require().NoError(err)
	defer res.Body.Close()

	s.Require().Equal(http.StatusOK, res.StatusCode)

	// check response
	response := api.GetUserResponse{}
	err = json.NewDecoder(res.Body).Decode(&response)
	s.Require().NoError(err)

	// so we will check each field separately
	s.Assert().Equal(1, response.ID)
	s.Assert().Equal("test_name", response.Name)
	s.Assert().Equal("0", response.Balance.String())
}
Step 7. UpdateUserBalance test
The updateUserBalance method first calls the external Billing service, requests some information, and updates the balance based on it. Let's write a test:
func (s *TestSuite) TestDepositBalance() {
	// create fixtures
	db, err := sql.Open("postgres", s.psqlContainer.GetDSN())
	s.Require().NoError(err)

	fixtures, err := testfixtures.New(
		testfixtures.Database(db),
		testfixtures.Dialect("postgres"),
		testfixtures.Directory("../step_5/fixtures/storage"),
	)
	s.Require().NoError(err)
	s.Require().NoError(fixtures.Load())
	//

	requestBody := `{"id": 1, "amount": "100"}`

	res, err := s.server.Client().Post(s.server.URL+"/users/deposit", "", bytes.NewBufferString(requestBody))
	s.Require().NoError(err)
	defer res.Body.Close()

	s.Require().Equal(http.StatusOK, res.StatusCode)

	// check response
	response := api.GetUserResponse{}
	err = json.NewDecoder(res.Body).Decode(&response)
	s.Require().NoError(err)

	s.Assert().Equal(1, response.ID)
	s.Assert().Equal("test_name", response.Name)
	s.Assert().Equal("100", response.Balance.String())
}
And this test won't work either :) The problem now is that we need to call an external server. Since we are writing a test for a particular handler of our User service, we don't need to test the external service itself. The only thing we need is to verify our integration with its API. In other words, we need to mock the call to the external service and provide a response.
Step 8. httpmock
We will use httpmock for this purpose. Inside SetupSuite(), where we created the useCase and passed nil as the billing client, we will now pass a mocked HTTP client:
func (s *TestSuite) SetupSuite() {
	//...
	mockClient := &http.Client{}
	httpmock.ActivateNonDefault(mockClient)

	billingClient := billing.New(mockClient, billingAddr)
	useCase := use_case.New(repo, billingClient)
	//...
}
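billingAddr is not shown in the article; it is assumed to be a constant in the test package, and since httpmock intercepts the client, any syntactically valid base URL works, for example:
// Assumed constant; the real value in the repo may differ.
const billingAddr = "http://billing-service"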
At the end of TearDownSuite(), let's deactivate the mocked HTTP client:
func (s *TestSuite) TearDownSuite() {
	//...
	httpmock.DeactivateAndReset()
}
Now let's mock a call to the external service:
httpmock.RegisterResponder(
	http.MethodPost,
	billingAddr+"/deposit",
	httpmock.NewStringResponder(http.StatusOK, ""),
)
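If the test also needs the billing client to parse a response body, or needs to assert that the mocked endpoint was actually hit, httpmock supports both; a hedged sketch (the JSON shape is invented for illustration):
// Return a JSON body instead of an empty string.
httpmock.RegisterResponder(
	http.MethodPost,
	billingAddr+"/deposit",
	httpmock.NewStringResponder(http.StatusOK, `{"status": "ok"}`),
)

// ...and after the request under test has been made:
info := httpmock.GetCallCountInfo()
s.Require().Equal(1, info["POST "+billingAddr+"/deposit"])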
And now the test finally passes.
Step 9. API fixtures
As we can see, all our tests come down to filling requests and checking responses. We can also optimize this by moving the request and response structures into separate files.
In the fixtures directory, create an api subdirectory and a fixtures.go file:
package fixtures
import (
	"embed"
)
//go:embed fixtures
var Fixtures embed.FS
We will use go:embed; you can read more about it here or here. In a nutshell, it lets you reference files relative to the source file that declares the embedding, without having to write the full path to the file, which is often problematic.
Also let's add a structure to this file:
type FixtureLoader struct {
	t           *testing.T
	currentPath fs.FS
}

func NewFixtureLoader(t *testing.T, fixturePath fs.FS) *FixtureLoader {
	return &FixtureLoader{
		t:           t,
		currentPath: fixturePath,
	}
}
Let's also add two methods to it. The first reads a file and returns its contents as a string:
func (l *FixtureLoader) LoadString(path string) string {
	file, err := l.currentPath.Open(path)
	require.NoError(l.t, err)
	defer file.Close()

	data, err := io.ReadAll(file)
	require.NoError(l.t, err)

	return string(data)
}
The second method uses the first one to get the string and then parses the template using the standard library html/template:
func (l *FixtureLoader) LoadTemplate(path string, data any) string {
	tempData := l.LoadString(path)

	temp, err := template.New(path).Parse(tempData)
	require.NoError(l.t, err)

	buf := bytes.Buffer{}
	err = temp.Execute(&buf, data)
	require.NoError(l.t, err)

	return buf.String()
}
We will also create two files inside the api folder:
create_user_request.json
{
  "name": "test_name"
}
create_user_response.json.temp
{
  "id": {{.id}},
  "name": "test_name",
  "balance": "0"
}
In the test itself, we will load these files and compare the actual result with the expected one. To do this, we will create two helper functions that compare two sets of JSON data:
func JSONEq(t *testing.T, expected, actual any) bool {
	return assert.JSONEq(t, jsonMarshal(t, expected), jsonMarshal(t, actual))
}

func jsonMarshal(t *testing.T, data any) string {
	switch v := data.(type) {
	case string:
		return v
	case []byte:
		return string(v)
	case io.Reader:
		data, err := io.ReadAll(v)
		require.NoError(t, err)
		return string(data)
	default:
		res, err := json.Marshal(v)
		require.NoError(t, err)
		return string(res)
	}
}
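The test below also uses an s.loader field on the suite. A hedged sketch of how it might be declared and initialized in SetupSuite, using the fixtures package we created above (the field name follows the test code; the rest is an assumption):
type TestSuite struct {
	suite.Suite

	psqlContainer *step2.PostgreSQLContainer
	server        *httptest.Server
	loader        *fixtures.FixtureLoader
}

func (s *TestSuite) SetupSuite() {
	// ...
	s.loader = fixtures.NewFixtureLoader(s.T(), fixtures.Fixtures)
}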
And the test for createUser:
func (s *TestSuite) TestCreateUser() {
	requestBody := s.loader.LoadString("fixtures/api/create_user_request.json")

	res, err := s.server.Client().Post(s.server.URL+"/users", "", bytes.NewBufferString(requestBody))
	s.Require().NoError(err)
	defer res.Body.Close()

	s.Require().Equal(http.StatusOK, res.StatusCode)

	response := api.CreateUserResponse{}
	err = json.NewDecoder(res.Body).Decode(&response)
	s.Require().NoError(err)

	expected := s.loader.LoadTemplate("fixtures/api/create_user_response.json.temp", map[string]interface{}{
		"id": response.ID,
	})

	JSONEq(s.T(), expected, response)
}
As a result, the content of our tests has been greatly reduced.
To write new tests, we will only need to mock calls to external services, put data into the database if necessary and describe the request and response structures in separate files, while minimally changing the code of the tests themselves.
Conclusion
As a result, we see that writing integration tests has become quite simple and comparable in effort to writing unit tests. But they bring much more practical benefit, since we test our functionality completely: not only individual calls, but also the conversion of entities, the work with the database, and the interaction with the external API.
Co-author: Andrey Lukin
All code samples can be found here
Top comments (4)
Rare article covering deep testing, thanks!
Thanks also for using httpmock, which I maintain. In the same vein, I encourage you to try go-testdeep instead of testify. It offers:
- tdsuite - almost no difference between a normal Test function and a test suite method, contrary to testify;
- tdhttp;
- the td package and its JSON operators JSON, SuperJSONOf, SubJSONOf and JSONPointer, avoiding the need to create your own boilerplate code in each repo.
Regards,
Max.
I just realized I already commented on your last post :)
Thanks Max! Will do in the next article!
Great article. Thanks.