Accessibility for totally blind people via screen readers
I wonder if the Snap team has any plans to implement accessibility for totally blind people via screen readers. I realize this is a challenge because Snap's GUI is canvas-based. But Quorum has demonstrated that it's feasible to have a block-based editor that's also accessible to blind people.
We do want to support blind programmers and are looking into adding screen reader support. There are some mid- to long-term developments for this, but also a lot of roadblocks stemming from the horrible architecture around screen readers, especially their unwillingness to expose an API instead of relying on a (shadow) DOM.
What would it take to make the shadow DOM solution acceptable to you? Requiring the user to press a button to enable accessibility is tolerable; I understand that unconditionally creating the shadow DOM would compromise performance. If someone else contributed a shadow DOM-based accessibility implementation, would you accept it?
I'm not really concerned about performance. If I were able to call a screen reader API with whatever strings Snap would expose, that would be wonderful and really quick to do. The problem is that screen readers don't work that way, but instead expect to plug into standardized UI components, which makes them unusable for novel GUIs (as well as for many games that have their own way of generating what to show). As I've mentioned, we are looking into the "shadow DOM best practice", but it's rather frustrating work, because we could do so much better if screen reader manufacturers were actually interested in helping blind people.
C’mon Jens - it’s not quite like that. That’s far too reductive. If we must blame someone it’s the browser vendors, but putting blame here doesn’t help. But saying folks aren’t interested in helping blind users is false.
The challenges are a direct result of canvas, not necessarily the specific web APIs, and of the fact that no one ever imagined an accessibility story for canvas. To be precise, it's not a "shadow DOM" that we need — shadow DOM is a different HTML API. But any regular old invisible DOM would work. Basic HTML DOM elements do actually provide a reasonable screen reader experience out of the box.
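To make the invisible-DOM idea concrete: a canvas app can keep an off-screen mirror of its UI in ordinary HTML that screen readers can traverse. Everything below — the ids, labels, and tree structure — is a hypothetical sketch, not Snap!'s actual markup:

```html
<!-- Visually hidden, but still exposed to the accessibility tree.
     (display:none or aria-hidden="true" would hide it from screen
     readers too, so the mirror is moved off-screen instead.) -->
<div id="a11y-mirror"
     style="position:absolute; left:-10000px; width:1px; height:1px; overflow:hidden;">
  <ul role="tree" aria-label="Scripts">
    <li role="treeitem" tabindex="0">when green flag clicked</li>
    <li role="treeitem" tabindex="-1">move 10 steps</li>
    <li role="treeitem" tabindex="-1">turn 15 degrees</li>
  </ul>
</div>
<canvas id="world" width="800" height="600"></canvas>
```

The app's only job is to keep the mirror in sync with what the canvas renders; the plain elements with roles and labels then give the "out of the box" screen reader experience mentioned above.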
At the same time there are lower-level APIs currently being developed that would be good to explore, namely the Accessibility Object Model, which may allow direct manipulation of the accessibility tree that a browser exposes to a screen reader. See the "Accessibility Object Model, Phase 3" spec at wicg.github.io for a start.
I also really really want to point out that there is a huge spectrum of folks between totally blind and perfect vision, many of whom also use screen readers and other features — like keyboard navigation for all elements. I am actually meeting with a student very soon about some potential options but this is a pretty massive undertaking.
> If I were able to call a screen reader API with whatever strings Snap would expose, that would be wonderful and really quick to do
There are ARIA live regions, which can emit hand-crafted announcements. A Snap! project to test this feature works for me with Chrome 121 @ Win10 and an OS built-in screen reader. BTW: Win + Ctrl + N shows a config dialog; Win + Ctrl + Enter starts a screen reader if installed.
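For reference, the live-region technique boils down to something like this minimal sketch (the element id and the `announce` helper are illustrative, not part of Snap!):

```html
<!-- An empty, visually hidden live region. Screen readers announce
     any text inserted into it without moving keyboard focus. -->
<div id="announcer" role="status" aria-live="polite"
     style="position:absolute; left:-10000px;"></div>
<script>
  // Hypothetical helper Snap! could call with whatever string
  // it wants spoken.
  function announce(message) {
    var region = document.getElementById("announcer");
    region.textContent = "";  // clear first so a repeated message re-announces
    setTimeout(function () { region.textContent = message; }, 50);
  }
  announce("script started");
</script>
```

`aria-live="polite"` waits for the screen reader to finish its current utterance; `aria-live="assertive"` would interrupt it, which is usually reserved for errors.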
I am actually visually impaired — I can't see out of my right eye — so I have met many blind people and have looked into screen readers. The two most popular screen readers that blind people use (on Windows) are JAWS and NVDA. It would be good to test this out on those as well as the OS screen readers. Now, I'm not actually sure if testing it out with JAWS is very doable, as that is paid software, but at least NVDA is free and open source (on GitHub, in fact).
I still maintain that the text-to-speech and labeling is not at all the hard nor interesting part to discuss, with some exceptions. Snap! needs a persistent, keyboard-navigable cursor for the entire IDE. We do have a kind of cursor for entering a stack of blocks, but it's still fiddly to use. Some folks with motor impairments use screen readers, and as Batman mentioned, there are many low-vision and partially sighted folks too. (Myself included! Though I, too, rarely rely on a screen reader.)

I think a full Snap! test plan would need to include:
* Windows Narrator
* NVDA, which might also work in Ubuntu now (?)
* VoiceOver on macOS
* VoiceOver on iOS
* JAWS on Windows

But anyway, we are a long way away from devising a testing plan.

-- Michael Ball