There are a few popular tools available for accessibility testing, such as Lighthouse, axe, and various browser plugins.
Developers and testers often expect these tools to magically cover 100% of accessibility issues. In reality, no tool can catch every issue, but they can report a good share of them and tell you how to fix what they find.
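For instance, here is a minimal sketch of wiring an axe scan into an automated test, assuming the @playwright/test and @axe-core/playwright packages are installed; the URL is a placeholder:

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('page has no detectable accessibility violations', async ({ page }) => {
  await page.goto('https://example.com'); // placeholder URL

  // Run an axe scan against the rendered page.
  const results = await new AxeBuilder({ page }).analyze();

  // Each violation carries the failing rule, its impact, and a help URL
  // explaining how to fix it.
  expect(results.violations).toEqual([]);
});
```

A clean run here means axe found nothing, not that the page is fully accessible. That gap is exactly what the table below describes.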
Today we are going to look at which issues automated tools can and cannot report:
Common Tools
| | Can | Can't |
|---|---|---|
| Alt text | Can identify if alt attributes are missing | Can't identify if the alt text is meaningful |
| Labels | Can identify if labels are missing or the order is incorrect | - |
| Color contrast | Can identify if color contrast is failing | Can't check contrast over images and gradients |
| Focus | - | Can't identify if the focus order is correct |
| Heading order | Can identify if the H1-H6 order is correct | - |
| ARIA | Can identify if an ARIA attribute is missing | Can't identify if the usage is correct |
| Roles & landmarks | Can identify if ARIA roles and landmarks are missing | Can't identify if the usage is correct |
| Semantics | - | Can't identify whether semantic tags are used |
| Responsiveness | - | Can't identify if the app is not responsive |
| Experience | - | Can't identify if the experience with assistive technologies matches the experience without them |
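To make the alt text row concrete, here is a small sketch using jest-axe (assumed to be set up in a jsdom test environment); the markup and filename are invented. The automated rule only verifies that alt text exists:

```typescript
import { axe, toHaveNoViolations } from 'jest-axe';

expect.extend(toHaveNoViolations);

test('meaningless alt text still passes the automated check', async () => {
  // axe's image-alt rule only checks that alt text exists and is non-empty;
  // it cannot tell that "DSC_4032" says nothing useful about the image.
  const results = await axe('<img src="sales-chart.png" alt="DSC_4032" />');

  expect(results).toHaveNoViolations(); // passes; judging the text needs a human
});
```

The test passes even though "DSC_4032" tells the user nothing about the chart, which is the "Can't" column in action.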
The points above are important for understanding the scope of automated testing. When using these tools, be aware of what they actually test, and cover everything they cannot test with manual checks. Some of those manual checks, such as focus order, can still be scripted by hand, as sketched below.
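For example, here is a minimal sketch of a hand-written focus-order check with Playwright, something no rule engine will assert for you. The URL and element ids are hypothetical:

```typescript
import { test, expect } from '@playwright/test';

test('tab order matches the visual order of the form', async ({ page }) => {
  await page.goto('https://example.com/signup'); // placeholder URL

  // Hypothetical ids, listed in the order a sighted user would read the form.
  const expectedOrder = ['#name', '#email', '#password', '#submit'];

  for (const selector of expectedOrder) {
    await page.keyboard.press('Tab');
    // Assert that focus landed on the element we expect next.
    await expect(page.locator(selector)).toBeFocused();
  }
});
```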
Happy Learning!!