Meursyphus
Let's Measure Performance with Playwright Feat: Chrome Trace

Introduction

Hello there. I recently wrote an article on optimizing the performance of my library, Flitter. Every time I measured the library's performance, I had to open Chrome DevTools by hand and press the record button. In this post, we'll look at how to record and track performance automatically for sustainable optimization.

Library Operation

Before explaining the measurement process, let's discuss how the library we're measuring operates.

measuring delay when painting diagrams

Similar to how React divides its process into the “Render Phase” and “Commit Phase,” Flitter operates in two distinct actions: build (mount) and draw.

  • Build (Mount):

Flitter starts by creating a virtual DOM corresponding to the SVG whenever a state change occurs. On the initial build, a mount action is invoked instead of the rebuild method.

  • Draw:

    Unlike React, the draw phase in Flitter includes calculating the layout. Once it determines where each SVG element should be positioned, it calls the paint method to reflect the changes in the actual DOM.

Chrome DevTools measurements
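To make the two phases concrete, here is a minimal, illustrative sketch of this flow. It is not Flitter's real API, just a toy model of the build/draw split described above:

```typescript
// Illustrative model of a two-phase renderer (not Flitter's actual API).
type Phase = 'mount' | 'rebuild';

class TwoPhaseRenderer {
    private mounted = false;
    log: string[] = [];

    // Phase 1 (build): mount on the first run, rebuild on later state changes.
    build(): Phase {
        const phase: Phase = this.mounted ? 'rebuild' : 'mount';
        this.mounted = true;
        this.log.push(phase);
        return phase;
    }

    // Phase 2 (draw): compute layout first, then paint into the actual DOM.
    draw(): void {
        this.log.push('layout');
        this.log.push('paint');
    }
}

const renderer = new TwoPhaseRenderer();
renderer.build(); // first state change → 'mount'
renderer.draw();
renderer.build(); // subsequent change → 'rebuild'
renderer.draw();
console.log(renderer.log); // ['mount', 'layout', 'paint', 'rebuild', 'layout', 'paint']
```

These phase names (`mount`, `rebuild`, `layout`, `paint`) are exactly the function names we will later look up in the trace.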

Let’s Measure with Playwright

Playwright is a Node.js library designed for browser automation. Developed by Microsoft, it supports major browsers such as Chrome (Chromium), Firefox, and Safari (WebKit). Though primarily aimed at automating web application tests, Playwright can also be used for web scraping and web UI automation tasks.



    test('Capture performance traces and save a JSON file when the diagram is rendered', async ({
        page,
        browser
    }) => {
        // formatDate is a small helper that turns the date into a file-safe name
        await browser.startTracing(page, {
            path: `./performance-history/${formatDate(new Date())}.json`
        });
        await page.goto('http://localhost:4173/performance/diagram');

        await page.evaluate(() => window.performance.mark('Perf:Started'));
        await page.click('button');
        await page.waitForSelector('svg');
        await page.evaluate(() => window.performance.mark('Perf:Ended'));
        await page.evaluate(() => window.performance.measure('overall', 'Perf:Started', 'Perf:Ended'));

        await browser.stopTracing();
    });




Using Playwright, we can easily extract a Chrome trace report. You will see the performance reports accumulate at the specified path.

performance records are stored

Analyzing the Trace Report

A Chrome trace report displays all traceEvents that occurred during a specified interval. Importing this into Chrome DevTools allows us to see the execution times of functions. The Chrome trace viewer interprets trace events to show the start and end times of function call stacks. A traceEvent includes the following items:



type TraceEvent = {
    args: object; // An object containing additional information related to the event
    cat: string; // A string representing the event category
    name: string; // The name of the event
    ph: string; // A string indicating the type of event (e.g., 'M' stands for a Meta event)
    pid: number; // Process ID
    tid: number; // Thread ID
    ts: number; // Timestamp (in microseconds)
    dur?: number; // Duration of the event (in microseconds, optional)
    tdur?: number; // Duration of the event based on thread time (in microseconds, optional)
    tts?: number; // Timestamp based on thread time (in microseconds, optional)
};


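Before looking at individual event types, it helps to see what a report contains in aggregate. The sketch below counts events per category; the three sample events are invented for illustration, but the category strings match what Chrome emits:

```typescript
// Count trace events per category (toy events stand in for a real report).
type TraceEventLite = { cat: string; name: string; ph: string; ts: number };

function countByCategory(events: TraceEventLite[]): Map<string, number> {
    const counts = new Map<string, number>();
    for (const e of events) {
        counts.set(e.cat, (counts.get(e.cat) ?? 0) + 1);
    }
    return counts;
}

const events: TraceEventLite[] = [
    { cat: 'blink.user_timing', name: 'Perf:Started', ph: 'R', ts: 1 },
    { cat: 'disabled-by-default-v8.cpu_profiler', name: 'ProfileChunk', ph: 'P', ts: 2 },
    { cat: 'disabled-by-default-v8.cpu_profiler', name: 'ProfileChunk', ph: 'P', ts: 3 }
];
console.log(countByCategory(events).get('disabled-by-default-v8.cpu_profiler')); // → 2
```

Grouping by `cat` like this makes it easy to find the profiler events we care about next.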

CpuProfile Event

We need events where the category (cat) is disabled-by-default-v8.cpu_profiler. These events show how long functions occupy the CPU. Although we cannot determine the event's duration directly, combining the information from samples and timeDeltas allows us to deduce the execution time for each function.



interface ProfileChunk {
  args: {
    data: {
      cpuProfile: {
        nodes: Node[]; // Array of profile nodes (call frames) added in this chunk
        samples: number[]; // Array of node IDs for each sample
      };
      lines: number[]; // Array of source code line numbers for each sample
      timeDeltas: number[]; // Array of time differences between each sample (in microseconds)
    };
  };
  cat: string; // Event category
  id: string; // Unique identifier for the profile chunk
  name: string; // Event name
  ph: string; // Event type (P: ProfileChunk)
  pid: number; // Process ID
  tid: number; // Thread ID
  ts: number; // Timestamp (in microseconds)
  tts?: number; // Time since thread start (in microseconds, optional)
};

interface Node {
  callFrame: {
    codeType: string; // Code type (JS: JavaScript)
    columnNumber: number; // Column number in the source code
    functionName: string; // Function name
    lineNumber: number; // Line number in the source code
    scriptId: number; // Script ID
    url: string; // Script URL
  };
  id: number; // Unique identifier of the node
  parent?: number; // ID of the parent node (if not a root node)
};



disabled-by-default-v8.cpu_profiler events occur at regular intervals, storing the current call stack location (node id) in samples and the time spent in each node in timeDeltas. We can infer the call stack traces through the node id and parent.

To calculate a specific function's execution time:

  1. Traverse samples and timeDeltas, calculating the total timeDeltas for each node id.
  2. Connect the nodes' parent-child relationships, adding each child's duration to its parent's duration.

Here is the implementation:



class ChromeTraceAnalyzer {
    nodes;
    constructor(trace) {
        this.setConfig(trace);
    }

    // Looks up a function's execution time by name. The unit is milliseconds (ms).
    getDurationMs(name) {
        if (this.nodes == null) throw new Error('nodes is not initialized');
        const result = this.nodes.find((node) => node.callFrame.functionName === name);
        if (result == null) throw new Error(`function not found in trace: ${name}`);
        return result.duration / 1000;
    }

    // Builds the call stack tree (nodes) from the Chrome trace report.
    setConfig(trace) {
        const { traceEvents } = trace;

        const profileChunks = traceEvents
            .filter((entry) => entry.name === 'ProfileChunk');

        const nodes = profileChunks
            .map((entry) => entry.args.data.cpuProfile.nodes ?? [])
            .flat();

        const sampleTimes = {};

        profileChunks.forEach((chunk) => {
            const {
                cpuProfile: { samples },
                timeDeltas
            } = chunk.args.data;

            samples.forEach((id, index) => {
                const delta = timeDeltas[index];
                const time = sampleTimes[id] || 0;
                sampleTimes[id] = time + delta;
            });
        });

        this.nodes = nodes.map((node) => ({
            id: node.id,
            parent: node.parent,
            callFrame: node.callFrame,
            children: [],
            duration: sampleTimes[node.id] || 0
        }));

        const nodesMap = new Map();

        this.nodes.forEach((node) => {
            nodesMap.set(node.id, node);
        });

        // Process deeper nodes (larger ids) first so child durations roll up into parents.
        this.nodes
            .sort((a, b) => b.id - a.id)
            .forEach((node) => {
                if (node.parent == null) return;
                const parentNode = nodesMap.get(node.parent);
                if (parentNode) {
                    parentNode.children.push(node);
                    parentNode.duration += node.duration;
                }
            });
    }
}



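As a sanity check, the two aggregation steps can be run on a toy profile. The node IDs and deltas below are invented; the deltas are in microseconds, matching the trace format:

```typescript
// Step 1: total self time per node id from samples/timeDeltas.
function selfTimes(samples: number[], timeDeltas: number[]): Map<number, number> {
    const totals = new Map<number, number>();
    samples.forEach((id, i) => {
        totals.set(id, (totals.get(id) ?? 0) + timeDeltas[i]);
    });
    return totals;
}

// Step 2: roll child durations up into their parents (deepest ids first).
type ToyNode = { id: number; parent?: number };
function totalTimes(nodes: ToyNode[], self: Map<number, number>): Map<number, number> {
    const totals = new Map(nodes.map((n) => [n.id, self.get(n.id) ?? 0]));
    [...nodes]
        .sort((a, b) => b.id - a.id)
        .forEach((n) => {
            if (n.parent == null) return;
            totals.set(n.parent, (totals.get(n.parent) ?? 0) + (totals.get(n.id) ?? 0));
        });
    return totals;
}

// Toy data: node 1 (root) → node 2 (child). Each sample is one tick of the profiler.
const self = selfTimes([1, 2, 2], [100, 50, 50]);
const total = totalTimes([{ id: 1 }, { id: 2, parent: 1 }], self);
console.log(total.get(1)); // → 200 (the root's total includes the child's 100µs)
```

This mirrors what `setConfig` does over real `ProfileChunk` events, just without the trace-parsing plumbing.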

Let’s Analyze and Record with Playwright

Now, let’s analyze the trace with Playwright and record the results. The main functions are runApp, mount, draw, layout, and paint.

  • runApp: The parent function of mount and draw. It encompasses all the work involved in drawing the diagram.
  • draw: The parent function of layout and paint. It includes the tasks that modify the actual DOM.


import { test, chromium } from '@playwright/test';
import fs from 'node:fs';
import path from 'node:path';
import { fileURLToPath } from 'node:url';

test('Capture analyzed trace when diagram is rendered', async () => {
    const COUNT = 10;

    const duration = {
        timestamp: Date.now(),
        runApp: 0,
        mount: 0,
        draw: 0,
        layout: 0,
        paint: 0,
        note: ''
    };
    for (let i = 0; i < COUNT; i++) {
        const browser = await chromium.launch({ headless: true });
        const context = await browser.newContext();
        const page = await context.newPage();
        await page.goto('http://localhost:4173/performance/diagram');
        await browser.startTracing(page, {});
        await page.evaluate(() => window.performance.mark('Perf:Started'));
        await page.click('button');
        await page.waitForSelector('svg');
        await page.evaluate(() => window.performance.mark('Perf:Ended'));
        await page.evaluate(() =>
            window.performance.measure('overall', 'Perf:Started', 'Perf:Ended')
        );

        const buffer = await browser.stopTracing();
        const jsonString = buffer.toString('utf8'); // convert the buffer to a UTF-8 string
        const trace = JSON.parse(jsonString); // parse the string into a JSON object
        const analyzer = new ChromeTraceAnalyzer(trace);
        duration.runApp += analyzer.getDurationMs('runApp') / COUNT;
        duration.mount += analyzer.getDurationMs('mount') / COUNT;
        duration.draw += analyzer.getDurationMs('draw') / COUNT;
        duration.layout += analyzer.getDurationMs('layout') / COUNT;
        duration.paint += analyzer.getDurationMs('paint') / COUNT;
        await browser.close();
    }

    console.log('****Execution Time****');
    console.log(`runApp: ${duration.runApp}ms`);
    console.log(`mount: ${duration.mount}ms`);
    console.log(`draw: ${duration.draw}ms`);
    console.log(`layout: ${duration.layout}ms`);
    console.log(`paint: ${duration.paint}ms`);
    console.log('********************');

    const __dirname = path.dirname(fileURLToPath(import.meta.url));
    const filePath = path.join(__dirname, '../performance-history/duration.ts');

    let fileContent = fs.readFileSync(filePath, { encoding: 'utf8' });
    fileContent += `histories.push(${JSON.stringify(duration)});\n`;
    fs.writeFileSync(filePath, fileContent);
});



We will not directly save the trace report but rather the analyzed results. To ensure a uniform testing environment, we create a new browser instance for each performance report. If you repeatedly delete and redraw diagrams without resetting the browser, you can observe runApp's execution time decreasing due to browser optimization behaviors.

Here are the results:

measure function call duration

Conclusion

chart on performance records

trace history chart

For sustainable optimization efforts, it's crucial to record performance like this. By setting up CI/CD to fail tests if function execution times exceed certain thresholds, we can catch unexpected performance regressions when the code changes. Although the measurements are currently taken on my local machine, I plan to standardize the machine environment using Docker.
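As a sketch of such a CI gate, a budget check could look like the following. The threshold values are placeholders to tune against your own baseline machine, not numbers from the article:

```typescript
// Hypothetical per-function time budgets in milliseconds; tune to your baseline.
const THRESHOLDS: Record<string, number> = { runApp: 500, draw: 300, layout: 200, paint: 100 };

// Returns a human-readable message for every measurement over its budget.
function findBudgetViolations(duration: Record<string, number>): string[] {
    return Object.entries(THRESHOLDS)
        .filter(([name, limit]) => (duration[name] ?? 0) > limit)
        .map(([name, limit]) => `${name} exceeded budget: ${duration[name]}ms > ${limit}ms`);
}

// Example: runApp is over its 500ms budget, the rest are within theirs.
const violations = findBudgetViolations({ runApp: 620, draw: 120, layout: 80, paint: 40 });
console.log(violations); // one violation, for runApp
```

In a test, you would simply fail when `violations.length > 0`, so any regression shows up as a red build rather than a surprise in production.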

The scarcity of libraries for analyzing Chrome traces meant I had to develop one myself. While working on the Flitter library and analyzing other metrics like memory allocation, I gained enough knowledge to consider creating a chrome-trace-viewer library. (The existing libraries on npm either do not work well or were released 5–6 years ago.)

You can check the actual code here:

GitHub: https://github.com/meursyphus/flitter/blob/dev/packages/test/tests/tracking-performance.test.ts

That's all for now.

