Corina: Web for Everyone

Testing for SC 2.5.3 Label in Name with Playwright

What is Success Criterion 2.5.3?

It’s a WCAG (Web Content Accessibility Guidelines) rule that requires careful naming of interactive elements that have a visible text label. For these elements, the browser computes an accessible name, which may differ from the visible text or label of that element. The 2.5.3 criterion requires that the accessible name contain the visible text. In fact, as a best practice, the start of the accessible name should match the visible text.

For example, given a button with visible text “Submit”, an accessible name of “Submit form” will guarantee compliance with the rule, but “Form submission” will not.
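To make that concrete in Playwright terms, here is a minimal sketch against hypothetical markup (not part of the test suite below). Assistive technology resolves a spoken command like “Click Submit” against the accessible name, which is also what Playwright’s getByRole name option matches.

import { test, expect } from '@playwright/test';

test('a control is reachable by speaking its visible text', async ({ page }) => {
    // Hypothetical markup, just for this demo:
    await page.setContent('<button aria-label="Submit form">Submit</button>');

    // getByRole matches on the computed accessible name (case-insensitive
    // substring by default), roughly what speech software does with "Click Submit".
    await expect(page.getByRole('button', { name: 'Submit' })).toHaveCount(1);

    // With aria-label="Form submission" the same query would find nothing,
    // because the accessible name no longer contains the visible text "Submit".
});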

This rule helps two groups of users:

  1. Speech input users, who rely on the visible text to activate an interactive element. Since voice software identifies an element only by its accessible name, a user’s spoken command needs to match that name.

  2. Sighted users who rely on screen readers. With this rule in place, they will hear the same text as the one they see on the screen.


Testing conformance with 2.5.3

I used to rely on Google’s Lighthouse scans.

Note: Google's Lighthouse scans use Deque's axe-core library.

Several months ago my code unexpectedly failed this rule a couple of times, so I decided to develop my own test, mostly to explore the criteria such an evaluation might use. Since the failures came from elements with aria-label, I designed the test around them.

This is certainly not to say that aria-label is the recommended way to name an interactive element. Quite the contrary. I just didn't know any better at the time. My advice: use it when semantic HTML can’t take you any further, and even then you should first consider better ARIA solutions (like aria-labelledby) or CSS solutions (like visually-hidden text).
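As a quick aside, here is a tiny sketch against throwaway markup (not my site) showing why aria-labelledby deserves the attention: when both attributes are present, it wins the accessible name computation. The toHaveAccessibleName assertion used here is fairly recent (around Playwright 1.44, if I remember correctly).

import { test, expect } from '@playwright/test';

test('aria-labelledby takes precedence over aria-label', async ({ page }) => {
    // Throwaway markup, just for this demo
    await page.setContent(`
        <span id="cta-text">Learn accessibility</span>
        <a href="/learn" aria-label="Some other name" aria-labelledby="cta-text">Learn accessibility</a>
    `);

    // The computed accessible name comes from the element referenced by
    // aria-labelledby, not from aria-label.
    await expect(page.getByRole('link')).toHaveAccessibleName('Learn accessibility');
});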


First approach: collect all elements with aria-label

This approach has one big advantage: it’s automatic. Once all the elements with aria-label are collected, the test takes each element’s visible text and accessible name (the aria-label value) and compares their first few words.

import { test } from '@playwright/test';

test.beforeEach(async ({ page }) => {
    await page.goto('https://webforeveryone.us/');
});

test('Controls conform with SC 2.5.3 Label in Name', async ({ page }) => {
    const elements = page.getByRole('main').locator('[aria-label]');
    const elementsCount = await elements.count();
    const errors: string[] = [];
    for (let i = 0; i < elementsCount; i++) {
        const element = elements.nth(i);
        // Compare the visible text with the accessible name given by aria-label
        const visibleText = await element.innerText() ?? "";
        const accName = await element.getAttribute('aria-label') ?? "";

        if (visibleText && accName) {
            const textToCompareFromVisible = firstNWords(visibleText, 3).toLowerCase();
            const textToCompareFromAccName = firstNWords(accName, 3).toLowerCase();
            const match = textToCompareFromAccName === textToCompareFromVisible;
            if (!match) {
                errors.push(`Acc name "${accName}" should start with "${visibleText}"`);
            }
        }
    }  
    if (errors.length > 0) {
        throw new Error(`Label in Name test failed:\n${errors.join("\n")}`);
    }
});

function firstNWords(text: string, n: number) {
    return text.split(" ").slice(0, n).join(" ");
}

Why this first approach is far from perfect

  1. It excludes any elements named with aria-labelledby, the other ARIA attribute used to compute the accessible name. This attribute takes precedence over every other naming source, including aria-label.

  2. It excludes elements whose accessible name is based on visually-hidden text or attributes like the alt text for images.

  3. It does not try to match the entire string of the visible text against the start of the accessible name (a stricter check is sketched after the note below).

Note: When testing with Windows’ speech software, I can successfully activate a control using the first two words of the visible text or label. Of course, one testing scenario should not speak for all possible contexts, but for the purpose of this test I went with the assumption that the speech software can recognize an element based on the first few words of its accessible name. This assumption takes care of the speech input users that SC 2.5.3 is concerned with, but it ignores the sighted screen reader users. For them, a scenario in which the visible text and the accessible name diverge significantly after the first few words is not a great experience.
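For that third drawback, a stricter comparison could require the entire normalized visible text to be a prefix of the accessible name. A minimal sketch of that idea (my own variation, not a drop-in replacement for the test above):

function accNameStartsWithVisibleText(visibleText: string, accName: string): boolean {
    // Collapse whitespace and lowercase both strings before comparing
    const normalize = (text: string) => text.replace(/\s+/g, ' ').trim().toLowerCase();
    // The accessible name must start with the *entire* visible text,
    // not just share its first few words.
    return normalize(accName).startsWith(normalize(visibleText));
}

// accNameStartsWithVisibleText('Submit', 'Submit form')     -> true
// accNameStartsWithVisibleText('Submit', 'Form submission') -> false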

The move to a different approach was driven not only by these drawbacks but also by necessity: Lighthouse has removed the 2.5.3 rule from its tests, and it’s now only available with the axe-core Pro account. Below is my second, still very raw attempt. I’m hoping to improve on it as I keep tinkering with it.


Second approach: manually select all target elements

This approach is more labor-intensive. Playwright’s Codegen feature helped with collecting the information about each clicked element:

await page.getByRole('link', { name: 'Learn accessibility' }).click();

and then I used it to set up the Locator object:

const buttonLearnA11y = page.getByRole('link', { name: 'Learn accessibility' });


The ideal solution would involve grabbing a Locator using page.getByRole('link', { name: 'Some name' }) or page.getByLabel('Some name') and then reading the accessible name directly from the Locator object itself. As far as I can tell, this is not possible, so I had to manually pass the name value to the checkLabelInName function:

const labelCheck = await checkLabelInName(buttonLearnA11y, 'Learn accessibility');

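That said, if your Playwright version is recent enough, the toHaveAccessibleName assertion (added around 1.44, if I remember correctly, and accepting a RegExp as far as I can tell) can read the computed name for you. A sketch of that route:

import { test, expect } from '@playwright/test';

test('Learn accessibility link name starts with its visible text', async ({ page }) => {
    await page.goto('https://webforeveryone.us/');
    const buttonLearnA11y = page.getByRole('link', { name: 'Learn accessibility' });

    // Build a "starts with the visible text" pattern from the rendered text
    const visibleText = (await buttonLearnA11y.innerText()).trim();
    const escaped = visibleText.replace(/[.*+?^${}()|[\]\\]/g, '\\$&');

    // Assert SC 2.5.3 directly against the computed accessible name
    await expect(buttonLearnA11y).toHaveAccessibleName(new RegExp('^' + escaped, 'i'));
});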



Here’s the entire code, including the helper functions:

import { test, expect, type Locator } from '@playwright/test';

export async function checkLabelInName(element: Locator, accName: string) {
    const visibleText = await element.innerText();

    // If there is no visible text or no accessible name, the rule does not apply (see drawback 3 below)
    if (!visibleText.trim() || !accName.trim()) {
        return true;
    }

    const normalizedVisibleText = normalizeText(visibleText);
    const normalizedAccName = normalizeText(accName);

    const numWordsToCompare = Math.min(countWords(normalizedVisibleText), countWords(normalizedAccName));

    const textToCompareFromVisible = getWords(normalizedVisibleText, numWordsToCompare).toLowerCase();
    const textToCompareFromAccName = getWords(normalizedAccName, numWordsToCompare).toLowerCase();

    return textToCompareFromVisible === textToCompareFromAccName;
}

function normalizeText(text: string) {
    return text
        .replace(/[,.!?;:"'()]/g, '')  // strip common punctuation
        .replace(/\s+/g, ' ')          // collapse runs of whitespace
        .trim();                       // drop leading/trailing spaces
}

function countWords(text: string) {
    return text.split(' ').length;
}

function getWords(text: string, count: number) {
    return text.split(' ').slice(0, count).join(' ');
}

test.beforeEach(async ({ page }) => {
    await page.goto("https://webforeveryone.us/");
});

test.describe("home-main controls conform to 2.5.3 Label in Name", () => {
    test("Learn accessibility BUTTON conforms to 2.5.3 Label in Name", async ({ page }) => {
        const buttonLearnA11y = page.getByRole('link', { name: 'Learn accessibility' });
        const labelCheck = await checkLabelInName(buttonLearnA11y, 'Learn accessibility');
        expect(labelCheck).toBeTruthy();
    });

    // more 2.5.3 Label in Name tests
});

Why the second approach is not perfect

  1. It’s quite labor-intensive (or maybe Playwright has been spoiling me too much with its Codegen voodoo?! Is this even an excuse when judging the quality of a test?!)

  2. It relies on the tester’s accuracy and knowledge of which elements need to be tested (I know, I know. That's the tester's job ...)

  3. The test passes in scenarios where the rule simply does not apply. As test defects go, this is not an outrageous one, but it's still misleading (a possible tweak is sketched below).
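For that last point, one option might be a helper that reports “inapplicable” explicitly instead of quietly returning true. A rough sketch (a hypothetical variation that also uses a simple prefix check rather than the word-by-word comparison above):

import type { Locator } from '@playwright/test';

type LabelInNameResult = 'pass' | 'fail' | 'inapplicable';

export async function checkLabelInNameVerbose(element: Locator, accName: string): Promise<LabelInNameResult> {
    const normalize = (text: string) => text.replace(/\s+/g, ' ').trim().toLowerCase();
    const visibleText = normalize(await element.innerText());
    const name = normalize(accName);

    // SC 2.5.3 only applies when there is both visible text and an accessible name
    if (!visibleText || !name) {
        return 'inapplicable';
    }

    return name.startsWith(visibleText) ? 'pass' : 'fail';
}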


This is it! Right now, I'm out of ideas. As I'm adding this test to all the pages of my site, I will (hopefully) come across instances of false positives and other inconsistencies that will help with the next iteration.

Thank you for reading! I'd appreciate your thoughts or hints on how to improve my approach.


Image credit: Photo by Alex Kondratiev

Image description: Glass test tubes containing green, orange, and blue liquids.
