Have you ever wondered how different browsers stack up when running @playwright/test
tests? Today, we're going to dive into that question.
The Test Setup
To make our comparison fair, we'll run the same test both on a local machine and on a remote Virtual Machine (VM).
Here's what my local machine looks like:
OS: macOS 13.2.1
CPU: (10) arm64 Apple M1 Max
Memory: 64.00 GB
And here are the VM specs:
OS: Linux 5.19 Ubuntu 22.04.2 LTS (Jammy Jellyfish)
CPU: (8) x64 Intel(R) Xeon(R) Platinum 8481C CPU @ 2.70GHz
Memory: 15.62 GB
We're going to put Chromium, Firefox, WebKit, Google Chrome, and Microsoft Edge to the test and see how they perform.
Here's the TypeScript configuration we're using to set up these browsers for the test:
```typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    {
      name: 'firefox',
      use: { ...devices['Desktop Firefox'] },
    },
    {
      name: 'webkit',
      use: { ...devices['Desktop Safari'] },
    },
    {
      // `channel` runs the branded browser installed on the machine instead of the bundled Chromium.
      name: 'google-chrome',
      use: { ...devices['Desktop Chrome'], channel: 'chrome' },
    },
    {
      name: 'microsoft-edge',
      use: { ...devices['Desktop Edge'], channel: 'msedge' },
    },
  ],
  testDir: 'tests',
});
```
Our contestants are:
- Chromium 115.0.5790.24 (playwright build v1067)
- Firefox 113.0 (playwright build v1408)
- Google Chrome 114.0.5735.106
- Microsoft Edge 114.0.1823.43
The Test Case
Our test case? It's a simple one:
```typescript
import { test, expect } from '@playwright/test';

test('converts JSON to YAML', async ({ page }) => {
  // Read more about blocking Google Analytics in Playwright tests here:
  // https://ray.run/blog/blocking-google-analytics-in-playwright-tests
  await page.route('https://www.google-analytics.com/g/collect*', async (route) => {
    await route.fulfill({
      status: 204,
      body: '',
    });
  });

  await page.goto('https://ray.run/tools');

  // Testing the CodeMirror editor is not an easy task.
  // We are using a hidden setting to switch to a plain textarea editor.
  await page.evaluate(() => {
    localStorage.setItem('RAYRUN_PREFER_TEXTAREA_EDITOR', 'true');
  });

  await page
    .getByRole('link', {
      name: /JSON to YAML Converter/,
    })
    .first()
    .click();

  // This is not strictly necessary, but I wanted to add variety to the test.
  await page.waitForLoadState('networkidle');

  // clear() and fill() both return promises, so they can't be chained off one another.
  const input = page.getByRole('textbox', { name: 'Input' });
  await input.clear();
  await input.fill('{ "hello": "world" }');

  await expect(page.getByRole('textbox', { name: 'Output' })).toContainText('hello: world');
});
```
We're:
- Going to https://ray.run/tools
- Finding a link to the "JSON to YAML Converter" tool
- Navigating to the tool page
- Filling in the input
- Checking if the output is YAML
Note: We're intentionally running tests against a remote server instead of a local one to ensure the CPU usage of the web server doesn't skew our test results. Any network overhead should be consistent across all browsers.
Results and Insights
Running the test once might not give us a fair measure due to various factors, so we're going to run each test 100 times using 4 workers:
```bash
$ /usr/bin/time -p playwright test --project chromium --repeat-each 100 --workers 4
$ /usr/bin/time -p playwright test --project firefox --repeat-each 100 --workers 4
$ /usr/bin/time -p playwright test --project webkit --repeat-each 100 --workers 4
$ /usr/bin/time -p playwright test --project google-chrome --repeat-each 100 --workers 4
$ /usr/bin/time -p playwright test --project microsoft-edge --repeat-each 100 --workers 4
```
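If you'd rather not time and average the runs by hand, a small wrapper script can do it. Here's a minimal sketch; the file name, argument handling, and default run count are my own additions, not part of the original setup:

```typescript
// benchmark.ts — repeat one Playwright project run several times and report the average wall time.
// Assumes the project names match playwright.config.ts and that Playwright is installed locally.
import { execSync } from 'node:child_process';

const project = process.argv[2] ?? 'chromium';
const runs = Number(process.argv[3] ?? 5);
const durations: number[] = [];

for (let i = 0; i < runs; i += 1) {
  const start = Date.now();
  // Mirrors the manual commands above: 100 repeats per test, 4 parallel workers.
  execSync(`npx playwright test --project ${project} --repeat-each 100 --workers 4`, {
    stdio: 'inherit',
  });
  durations.push((Date.now() - start) / 1000);
}

const average = durations.reduce((sum, d) => sum + d, 0) / durations.length;
console.log(`${project}: ${average.toFixed(1)}s average over ${runs} runs`);
```

You could run it with, for example, `npx tsx benchmark.ts firefox 5` (or any other TypeScript runner you have set up).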
We ran each command 5 times and reported the average runtime. Here are the results for both local and VM:
| Environment | Project | Total Time | Relative Time Difference (%) |
| --- | --- | --- | --- |
| Local | chromium | 76s | - |
| Local | firefox | 102s | 34.2% |
| Local | webkit | 86s | 13.2% |
| Local | google-chrome | 91s | 19.7% |
| Local | microsoft-edge | 79s | 3.9% |
Firefox took the longest, around 34.2% more time than the quickest browser (Chromium). WebKit, Google Chrome, and Microsoft Edge took 13.2%, 19.7%, and 3.9% more time, respectively.
| Environment | Project | Total Time | Relative Time Difference (%) |
| --- | --- | --- | --- |
| VM | chromium | 89s | - |
| VM | firefox | 152s | 70.8% |
| VM | webkit | 142s | 59.6% |
| VM | google-chrome | 92s | 3.4% |
| VM | microsoft-edge | 90s | 1.1% |
Again Firefox took the longest, around 70.8% more time than the quickest browser (Chromium). WebKit, Google Chrome, and Microsoft Edge took 59.6%, 3.4%, and 1.1% more time, respectively.
On both the local machine and the VM, Firefox ran significantly slower than the Chromium-based browsers; on the VM, WebKit lagged considerably as well.
Headed vs Headless
A common belief is that running tests in headless mode speeds them up. That may hold true on a local machine, but what about on a remote VM without a GPU?
As in the previous scenario, we ran the test 100 times per suite, repeated each suite 5 times, and averaged the total execution time.
On a VM there is no display server, so headed browsers have to run inside a virtual X server via xvfb-run:

```bash
xvfb-run --auto-servernum --server-num=1 --server-args='-screen 0 1920x1080x24' npx playwright test --project google-chrome --repeat-each 100 --headed
```
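If you prefer not to pass `--headed` on the command line, headed mode can also be pinned per project in the config. Here's a minimal sketch; the `google-chrome-headed` project name is mine, not part of the setup above:

```typescript
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  projects: [
    {
      // Hypothetical project that always runs the branded Chrome with a visible window.
      name: 'google-chrome-headed',
      use: { ...devices['Desktop Chrome'], channel: 'chrome', headless: false },
    },
  ],
  testDir: 'tests',
});
```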
| Environment | Project | Mode | Total Time | Relative Time Difference (%) |
| --- | --- | --- | --- | --- |
| VM | google-chrome | Headless | 86s | - |
| VM | google-chrome | Headed | 99s | 15.1% |
Headed mode on the VM took approximately 15.1% more time than headless mode on the same machine.
| Environment | Project | Mode | Total Time | Relative Time Difference (%) |
| --- | --- | --- | --- | --- |
| Local | google-chrome | Headless | 140s | - |
| Local | google-chrome | Headed | 181s | 29.3% |
Headed mode locally took approximately 29.3% more time than headless mode on the same machine.
It turns out that running in headless mode does indeed speed up test execution locally. The gap shrinks when executing tests on a VM, but headed mode is still roughly 15% slower there.
Conclusion
So, there you have it! Chromium and the Chromium-based branded browsers were the fastest in both environments, Firefox was consistently the slowest, and headless mode gives a meaningful speed-up locally but a smaller one on a VM. Happy testing!