Add automated accessibility tests
Increasing Access
Automated accessibility testing would allow contributors to easily check and update their contributions to match WCAG guidelines. It would also allow contributors to easily flag areas of the website that are not yet fully accessible and were missed during manual testing.
Most appropriate sub-area of p5.js?
Home
Feature request details
I believe there's currently no automated accessibility testing on p5.js-website. The tool I'm most familiar with is Axe-Core, which can be set up to run with an E2E or component testing framework (Cypress, Playwright or Storybook), and test pages or components against specified WCAG standards.
The work I'm proposing is an initial Axe-Core setup for automated testing, with the aim of eventual adoption across all pages of the website (a rough sketch of what this could look like follows the list):
- Choose an E2E or component testing framework & set this up
- Set up Axe-Core to run on one page locally as a proof-of-concept
- Get community agreement on the version of WCAG to use, and rules to add or omit
- Create a new GHA workflow to run automated Axe-Core tests on new PRs
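To make the proof-of-concept step concrete, here is a minimal sketch of what a single-page scan could look like if Playwright were the chosen framework. The file path, local URL/port, and WCAG tags below are placeholders rather than decisions:

```ts
// tests/a11y/home.spec.ts (hypothetical path)
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable WCAG violations', async ({ page }) => {
  // Assumes the site is served locally, e.g. by an Astro dev/preview server on port 4321.
  await page.goto('http://localhost:4321/');

  const results = await new AxeBuilder({ page })
    // Placeholder tag set; the WCAG version/level would come out of step 3 above.
    .withTags(['wcag2a', 'wcag2aa'])
    // .disableRules([...]) could list any rules the community agrees to omit.
    .analyze();

  // Fail the test (and eventually the CI job) if axe reports any violations.
  expect(results.violations).toEqual([]);
});
```

The same spec could later be run headlessly from a GitHub Actions workflow (step 4) via `npx playwright test`, so the local proof-of-concept and the CI check can share one test file.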
Thanks for this idea @clairep94 ! This will have to be discussed a bit more, but anyone who is interested in potentially supporting this is welcome to add their thoughts below.
It might be useful to find recent accessibility bugs (e.g. keyboard traps on code snippets: https://github.com/processing/p5.js-website/pull/667) or discussions, and use those to suggest what to prioritize during implementation? I believe there were also some color contrast bugs recently.
Hi @clairep94 ! Just following up on this briefly: the p5.js-website accessibility stewards are working on identifying different web accessibility issues, such as https://github.com/processing/p5.js-website/issues/866. You can use the labels "Accessibility: High Severity," "Accessibility: Low Severity," and "Accessibility: Best Practice" to monitor the issues they will add in the future. These issues and their relative ranking could be useful for validating any particular tool (arguably, automation ought to catch high-severity issues with few false negatives, and low-severity or best-practice issues with few false positives). What do you think? Are you still interested in working on this?
I recommend implementing axe + Playwright as the automated accessibility testing solution for this project, for the following reasons:
- **Astro project characteristics**: Astro is content-oriented and often outputs static sites. Playwright is well suited to this scenario, as it can run E2E tests across Chromium, Firefox, and WebKit browsers, making it very friendly for public sites and static outputs. It also performs reliably and quickly in CI/CD environments, making it ideal for running in GitHub Actions or other CI platforms.
- **Limitations of Cypress**: Cypress officially supports Chromium and Firefox, while WebKit/Safari support is still experimental and not as stable or feature-complete as in Playwright. It is better suited for scenarios requiring an interactive GUI for debugging, but in terms of reliable cross-browser coverage and CI efficiency, it has certain limitations compared to Playwright.
- **Role of Storybook**: Storybook focuses on component-level testing rather than E2E. Its advantage lies in checking individual components for accessibility issues during development. However, it cannot verify overall page interactions and behaviors. I believe it's best to start with E2E-level automated scans, since accessibility is ultimately about the full user experience: catching full-page issues first, then progressively adding component-level checks, will be more effective.
- **Ease of integration with axe**: The axe team provides the `@axe-core/playwright` package, which can be called directly within Playwright tests to quickly perform accessibility scans and check compliance with WCAG standards. Integration is straightforward, and it can output violation reports directly in CI.
Therefore, I suggest starting with Playwright + @axe-core/playwright for E2E-level automated accessibility testing, and introducing Storybook for component-level testing as needed.
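If this direction is adopted, the cross-browser and CI points above would mostly live in the Playwright config rather than in the tests themselves. Here is a rough sketch, assuming a typical Astro setup; the test directory, port, and npm script are assumptions, not details confirmed for this repo:

```ts
// playwright.config.ts (illustrative only)
import { defineConfig, devices } from '@playwright/test';

export default defineConfig({
  testDir: './tests/a11y',
  // Run the same accessibility specs against all three engines Playwright ships with.
  projects: [
    { name: 'chromium', use: { ...devices['Desktop Chrome'] } },
    { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],
  // Start a local preview of the built site before the tests run; the exact
  // command depends on the project's npm scripts.
  webServer: {
    command: 'npm run preview',
    url: 'http://localhost:4321',
    reuseExistingServer: !process.env.CI,
  },
});
```

In CI, `npx playwright test` would then run every accessibility spec in all three engines against a freshly built preview of the site, with `@axe-core/playwright` reporting violations per page and per browser.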
Thank you for this detailed and helpful breakdown @coseeian! I see you've self-assigned. Will you work on this next, and can this task be broken down into smaller steps for contributors once the initial setup is complete? It would also be great to update the documentation.
OK. I will complete the initial setup and create documentation outlining how to use the tool and key points contributors should be aware of.
Hi @coseeian, I noticed the initial Playwright + Axe-Core setup has been completed, but it seems a few things from the proposal are still not done. Can I work on this issue? Please let me know!
I'm currently working on this issue. Since its scope is quite broad, I'll continue until I reach a good stopping point. Once that's done, we can check together whether it makes sense to split or reassign parts of it.
Sure @coseeian, sounds good! Please let me know once you reach a stopping point; I'd love to help out. I have experience writing E2E tests (including with Playwright), so I'm happy to contribute wherever needed.