Our team is a couple of months into developing a new application, and our suite of 240 unit tests takes 46 seconds to run. That duration is not excessive yet, but it’s increasing in proportion to the number of tests. In a couple of months, it’ll take a couple of minutes to run our tests.
We were surprised by this, as Jest is known for its fast performance. However, while Jest reported that each test only took 40ms, the overall run time for each test was closer to 6 seconds.
The integration tests for one of our legacy applications fare even worse, taking around 35 seconds for a single test. That puts them well past the point where the mind starts to wander, making it hard to focus on developing the tests. With each actual test taking only about a second, where is all the extra time going?
Over the past couple of weeks, I’ve fallen down a bit of a rabbit hole trying to figure out why our test suite is so slow. Unfortunately, there are a lot of ideas out there to sort through, and few of them had any impact. Further, there doesn’t even seem to be much of a consensus on how fast our tests should be.
The outcome of this investigation was a reduction of the duration of our unit tests from 46 to 13 seconds. Our integration tests saw a similar improvement, with their duration falling from 35 to 15 seconds. Our pipelines saw even more significant improvements, which I cover in this separate article.
In this article, I want to share the improvements that made the biggest differences, and look at some of the possible misconfigurations and misuses of Jest that undermine its performance.
While the following example looks simple and like it should run almost instantly, it hides a surprisingly common configuration problem that will delay our tests significantly.
// TestComponent.tsx
import { Button } from "@mui/material";

export const TestComponent = () => {
  return <Button>Hello World!</Button>;
};
// TestComponent.test.tsx
import React from 'react';
import { render, screen } from '@testing-library/react';
import { TestComponent } from "./TestComponent";
test('TestComponent', () => {
  render(<TestComponent />);
  expect(screen.getByText("Hello World!")).toBeInTheDocument();
});
And when we run the test, we get the following result:
PASS src/components/testComponent/TestComponent.test.tsx
√ TestComponent (34 ms)
Test Suites: 1 passed, 1 total
Tests: 1 passed, 1 total
Time: 3.497 s
Before we can start improving the runtime, we need to understand where Jest is spending its time. 34ms to run the test is reasonable, but it’s unclear where the other 3.463 seconds are going. Without understanding what Jest is doing, we risk wasting time trying to optimize the wrong thing. For example, a common suggestion is to improve TypeScript compilation time by switching out ts-jest or babel-jest for a faster compiler. However, because Jest makes heavy use of caching, the impact of TypeScript compilation after the first run is minimal.
1. Jest startup time
When we start a test run, Jest needs to load itself and our test environment (typically jest-environment-jsdom). It builds a map of the dependencies between files, makes some decisions about test ordering, loads plugins, and spins up additional threads. All of this work takes about a second, but it’s entirely up to Jest and largely independent of our application, so there’s little we can do about it. Further, this setup happens once per thread, so it doesn’t scale up as the number of tests and test files increases.
For anyone curious about what Jest is doing when it starts up, there is a detailed video on the topic.
2. Populating the cache
The first time we run tests in our application, Jest will need to take a bit longer as it can’t take advantage of cached data. Jest spends the majority of the first time it runs transpiling TypeScript. After that initial run, there might be a handful of TypeScript files that need retranspiling, but otherwise, Jest primarily uses the cached values. The uncached scenario occurs infrequently and is not a significant factor in optimizing performance.
3. Loading the test file
Before Jest can run a test file, it needs to load or mock all of the dependencies referenced by the test file and setupTests.ts. This step can add substantial overhead to the test runtime and is where we can make significant gains in test performance.
4. Performance of the actual test
Here, our test took only 34ms, and there are few gains to be made in optimizing this further.
Fortunately, we don’t need to guess how much time Jest is spending on each of the above. We can use Chrome’s DevTools to profile our test run and discover what each run is doing.
First, open DevTools by navigating to chrome://inspect in our browser and clicking “Open dedicated DevTools for Node.”
Then, inside the terminal, run:

node --inspect-brk ./node_modules/jest/bin/jest.js src/components/testComponent/TestComponent.test.tsx --runInBand

Once Chrome hits the default breakpoint in DevTools, navigate to the profiler tab and start recording. After the test completes, stop the profiler, view the recording, and select the “chart” view.
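As an alternative sketch, if we’d rather not drive DevTools interactively, Node’s standard --cpu-prof flag can write a .cpuprofile file to disk, which we can then load into the same DevTools profiler view:

node --cpu-prof ./node_modules/jest/bin/jest.js src/components/testComponent/TestComponent.test.tsx --runInBand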
A couple of words of caution when interpreting these charts:
- The presence of the profiler will decrease the performance of the test by about 30%. However, it still gives a good indication of where the time is going proportionally.
- The first file to hit a dependency will always perform the worst because Jest will cache that dependency for all other tests on the same thread in the same run (though notably, not between separate runs). If we were to include a second test file that included TestComponent, it would take about half of the time to load its dependencies. However, that’s still time that we could reduce. And, of course, first-time performance matters a lot for the common scenario where we’re only running one file during development.
Barrel files
Now that we have the inspector hooked up, we can immediately see the problem — almost all of our time loading the test file is spent loading the @mui/material library. Instead of loading only the Button component we need, Jest is processing the entire library.
To understand why this is a problem, we need to understand a bit more about barrel files — an approach where a bunch of exports are rolled up into a single file, usually called index.ts. We use barrel files to control the external interface to a component and save the consumer from worrying about a module’s internal structure and implementation. Most libraries have a barrel file at their root directory containing everything they export.
// @mui/material/index.ts
export * from './Accordion';
export * from './Alert';
export * from './AppBar';
...
The problem is that Jest has no idea where the component we’re importing is located. The barrel file has intentionally obfuscated that fact. So when Jest hits a barrel file, it must load every export referenced inside it. This behavior quickly gets out of hand for large libraries like @mui/material. We’re looking for a single button and end up loading hundreds of additional files.
Fortunately, we can easily fix this problem by updating the structure of our imports to tell Jest exactly where to find the Button component.
// before
import { Button } from '@mui/material';
// after
import Button from '@mui/material/Button';
Using ESLint, we can add the following rule to our config to stop more of these imports from being added in the future.
rules: {
  "no-restricted-imports": [
    "error",
    {
      "name": "@mui/material",
      "message": "Please use \"import foo from '@mui/material/foo'\" instead."
    }
  ]
}
I’m picking on @mui/material here, since it’s a popular and large library. Still, it was far from the only library we were importing in a suboptimal fashion. I also had to go through and fix imports from @mui/icons-material, lodash-es, and @mui/x-date-pickers, alongside some imports from our internal libraries. Combined, the impact of updating all of these imports added up to around a 50% saving in test duration.
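The same ESLint rule also accepts a list of paths, so we can cover several offending libraries in one go. Here’s a sketch (the exact message wording is just a suggestion):

rules: {
  "no-restricted-imports": [
    "error",
    {
      "paths": [
        {
          "name": "@mui/material",
          "message": "Please use \"import foo from '@mui/material/foo'\" instead."
        },
        {
          "name": "@mui/icons-material",
          "message": "Please use \"import FooIcon from '@mui/icons-material/Foo'\" instead."
        },
        {
          "name": "lodash-es",
          "message": "Please use \"import foo from 'lodash-es/foo'\" instead."
        }
      ]
    }
  ]
}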
Checking setupTests.ts
There’s a temptation for the file configured against setupFilesAfterEnv in our jest.config.js file to become a dumping ground. It tends to inherit all sorts of one-offs and edge cases people don’t want in all their test files.
I suspect this comes from a misconception that the file runs once before all the tests. However, so that Jest can properly isolate each test file, its contents are actually run before every test file.
We can see the impact of the setupTests.ts file by looking at the flame charts from the previous step. They might reveal some expensive behavior in setupTests.ts that can be moved back into the relevant test files.
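As a hypothetical sketch of the problem (the polyfill and the ./mocks/server module below are illustrative, not from our actual config), everything in a setupTests.ts like this is paid for by every test file on the thread, even the suites that never use any of it:

// setupTests.ts - runs before every test file, not once per run
import '@testing-library/jest-dom';

// A heavyweight polyfill that only a couple of suites rely on
import 'core-js/stable';

// A mock API server that most component tests never call
import { server } from './mocks/server';

beforeAll(() => server.listen());
afterEach(() => server.resetHandlers());
afterAll(() => server.close());

Moving the polyfill and the mock server into the handful of test files that actually need them saves every other file from paying their cost.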
Remove type-checking from the test runs
If we’re using ts-jest to compile TypeScript for testing, its default behavior is for the test run to also run the TypeScript compiler’s type-checks. This is redundant, as the TypeScript compiler will already be doing that as part of the build, and the additional check adds a lot more time to the test run, particularly when Jest doesn’t otherwise need to fire up the TypeScript compiler.

To disable this behavior, we can set the isolatedModules property in our jest.config.js file. The property is described in ts-jest’s documentation.
module.exports = {
  transform: {
    "^.+\\.(ts|tsx|js|jsx)$": [
      'ts-jest', {
        tsconfig: 'tsconfig.json',
        isolatedModules: true
      },
    ]
  },
};
My experience with isolatedModules has been mixed. Updating this setting has doubled performance in some legacy applications, while in some smaller create-react-app applications, it hasn’t made a difference. Again, the flame charts let us see the impact of this additional work.
Checking for misconfigurations
Performance improvements don’t have to come only from the codebase; some of the responsibility lies in how developers use the tooling. Scripts in package.json can help save typing, hide complexity, and share the best possible CLI configurations across everyone on the project. But they come with a severe downside: over time, the team forgets how to use the CLIs of their common tools and puts too much trust in the idea that the existing scripts are already in their most optimal configuration. In most projects I have joined, the scripts in package.json have had a couple of significant misconfigurations, wasting a lot of time unnecessarily.

People confuse scripts originally intended for their continuous integration pipelines with scripts appropriate for their local development environment. Perhaps the scripts weren’t updated with new features and changes in the tools, or maybe they’ve just always been wrong.
With Jest, there are a couple of flags to avoid for tests running locally:
- --maxWorkers=2 — limits Jest to running in two threads. This is useful on a constrained CI build agent, but not on our powerful development machines, which could be running Jest in 5 or 6 different threads.
- --runInBand — similarly, this prevents Jest from using threading at all. While there are some situations where we don’t need threading, such as when we’re only running a single test file, Jest is smart enough to figure this out for itself.
- --no-cache, --cache=false, --clearCache — prevents Jest from caching data between runs. Per Jest’s docs, on average, disabling the cache makes Jest at least two times slower.
- --coverage — most local test runs don’t need to generate code coverage reports. We can save ourselves a couple of seconds by skipping this step when we don’t need it.
Jest has a lot of settings, but the defaults should serve us well most of the time. It is crucial to understand the purpose behind any additional flags in the scripts in our package.json file.
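One way to keep the two worlds apart is to give each its own script. This is a sketch with hypothetical script names; the flags on test:ci are the ones we want to keep off our development machines:

"scripts": {
  "test": "jest --watch",
  "test:changed": "jest --onlyChanged",
  "test:ci": "jest --maxWorkers=2 --coverage"
}

Locally, test keeps full threading and the cache and runs in watch mode, while test:ci carries the worker limit and coverage reporting that only make sense on the build agent.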
Default to using watch mode
While we’re all used to watch mode for running our application locally, it isn’t as popular for running tests. This tendency is unfortunate because, like our builds, running our tests in watch mode saves our tooling from having to recompute a lot of data. Most of Jest’s perceived slowness is in its startup time rather than the test execution, which watch mode lets us skip.
I suspect developers often fail to take advantage of watch mode because their IDE’s interface inadvertently encourages them not to. When we’re working on a test file, we’re used to clicking the little green “Run test” arrows next to each test case to start a test run. They’re convenient and quicker than running all the tests or trying to remember the syntax for running a subset of tests in the CLI. Further, they display the results of the tests within our IDE’s test result panel, which is more useful than logs dumped into the console.
With WebStorm, we can update the run configuration used by the “Run test” shortcut, letting us use them to launch the test in watch mode. We can even update Jest’s run template to default all “Run test” shortcuts to use watch mode.
We don’t need to run all of the tests
I’ve noticed that, unless they’re working on a single test file, developers tend to default to running all of the tests. This behavior is usually redundant, as Jest can figure out the subset of tests it needs to run based on the files that have changed. As our test suite grows, running the entire suite becomes unnecessarily time-consuming, though I hope the advice in this article will help limit how out of hand it gets.
Rather than calling jest directly, it’s a good idea to use jest --onlyChanged or jest --changedSince. These might not be 100% reliable, but unless we’re committing straight to master, our continuous integration pipelines will catch the rare situations where Jest misses a test.
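For example, assuming our main branch is called master:

# run only the tests related to files changed in the working tree
jest --onlyChanged

# run every test affected by changes since branching from master
jest --changedSince=master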
Test suites are rarely static; they grow in size along with our applications, and slow test suites are only going to get slower. Fortunately, with a small amount of work, we can more than halve the duration of each test run. Not only does this save us time now, but it changes the entire trajectory of our test suite’s duration and quality.