software-engineering-quality-framework
Update quality-checks.md
Added BrowserStack and suggested config for endorsed accessibility test tools
Hi - can we clarify the conditions under which we expect tests to pass or fail? (Not the contents of your specific tests, but the things you're testing against.)
For example, do we expect tests to fail if certain UI elements don't exist under some browsers, or if buttons don't take you to the correct destination, and so on?
I think this is what teams are missing: a clear steer on what "good" really means for cross-browser testing.
Thanks!
Hi Andy, in NHSUK we use Chrome as the default browser when running our tests. We expect the same test to pass on almost any BrowserStack combination of device, OS and browser, because we use the NHSUK front-end library and the UI presented should not differ between devices/browsers other than in screen size/scale. The same applies to the elements that appear on the page and to end-to-end journeys.
Our automation tests still run via our framework; BrowserStack just gives us the ability to run them against different browsers/devices. BrowserStack itself isn't actually running any tests; we just use its interface to run ours.
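For context, the BrowserStack side is just configuration. A minimal sketch, assuming WebdriverIO with the @wdio/browserstack-service (the browser/OS combinations below are illustrative, not a recommended list):

```ts
// wdio.conf.ts - minimal sketch assuming WebdriverIO + @wdio/browserstack-service.
// Credentials come from the environment; capabilities use BrowserStack's
// W3C "bstack:options" format.
import type { Options } from '@wdio/types'

export const config: Options.Testrunner = {
  user: process.env.BROWSERSTACK_USERNAME,
  key: process.env.BROWSERSTACK_ACCESS_KEY,
  services: ['browserstack'],
  specs: ['./test/specs/**/*.ts'],
  // The same specs run unchanged against every combination listed here,
  // which is why a failure on any of them is treated as a real failure.
  capabilities: [
    { browserName: 'Chrome', 'bstack:options': { os: 'Windows', osVersion: '11' } },
    { browserName: 'Edge',   'bstack:options': { os: 'Windows', osVersion: '11' } },
    { browserName: 'Safari', 'bstack:options': { os: 'OS X', osVersion: 'Sonoma' } },
  ],
}
```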
There are probably a couple of things bundled in this PR, both good I think. They are as follows:
- Accessibility testing using BrowserStack
- Recommended browser compatibility testing
With the above in mind:
- Do we have an opinion on the tooling that can be integrated with the BrowserStack subscription to establish clear pass/fail quality gates, e.g. Axe? If so, it sounds like a good candidate for a recipe book (a sketch of what that might look like follows this list).
- Do we have a list of browsers that we could name explicitly and would like/need to test against, e.g. with further reference to NHSD policy?
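On the first question, axe-core can be injected into the same BrowserStack-driven sessions to give a hard pass/fail gate. A minimal sketch, assuming WebdriverIO with Deque's @axe-core/webdriverio package; the WCAG tags and the zero-violations threshold are illustrative choices, not settled policy:

```ts
// Minimal sketch of an accessibility quality gate, assuming WebdriverIO
// with @axe-core/webdriverio. This runs inside the same session that the
// BrowserStack service provides, so it covers every configured browser.
import { AxeBuilder } from '@axe-core/webdriverio'
import { browser, expect } from '@wdio/globals'

describe('accessibility quality gate', () => {
  it('reports no WCAG 2.1 A/AA violations on the page under test', async () => {
    await browser.url('/') // illustrative: the page or journey under test

    const results = await new AxeBuilder({ client: browser })
      // Pin the scan to an agreed rule set so pass/fail is unambiguous.
      .withTags(['wcag2a', 'wcag2aa', 'wcag21a', 'wcag21aa'])
      .analyze()

    // The gate itself: any violation fails the run.
    expect(results.violations).toHaveLength(0)
  })
})
```

Gating on an empty violations list keeps the check objective: the build fails on any violation rather than on a reviewer's judgement.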