Enhanced Accessibility Signals with Advanced User and Modality-Specific Delays
fixes #204257
This pull request enhances the accessibility signals feature in VS Code to make it more responsive and user-centric. The implementation manages the different signal modalities with distinct delays, which are further fine-tuned based on user navigation (skimming through lines vs. reading the content within a line). This dual-level customization significantly improves the flexibility and adaptability of signal handling.
Key developments in this feature include:
- Modality Support: The SignalModality enum type was introduced, enabling the specification of desired modalities per call. This foundational change allows for the separation and individual handling of different modalities based on various conditions like triggers or settings.
- Modality-Specific Delays: Each signal modality can now have its own distinct delay. This enhancement allows for the nuanced handling of different modalities, catering to their unique requirements.
- Feature-Specific Delays: A `BaseLineFeature` class was introduced to support feature-specific delays. This allows unique delays for different types of features, providing a more tailored user experience.
- User-Action Dependent Delays: The system now adjusts delays based on user navigation behavior. When a user navigates within the same line, longer delays are applied to minimize interference with screen reader output. Conversely, shorter delays are implemented for line changes, providing timely feedback for users who are skimming through the program.
These changes collectively provide a more responsive and user-centric accessibility signals feature in VS Code, enhancing the overall user experience.
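To make the dual-level delay logic concrete, here is a minimal, hypothetical sketch: `SignalModality` is the enum introduced by this PR, but the navigation kinds, delay values, and `getDelay` helper below are illustrative assumptions, not the actual implementation.

```typescript
// Hypothetical sketch of the dual-level delay selection; only
// SignalModality corresponds to an identifier from this PR.
enum SignalModality {
    Sound = 1 << 0,
    Announcement = 1 << 1,
}

enum NavigationKind {
    LineChange,      // skimming: the cursor moved to a different line
    InlineMovement,  // reading: the cursor moved within the same line
}

// Per-modality base delays (illustrative values, in milliseconds).
const baseDelayMs: Record<SignalModality, number> = {
    [SignalModality.Sound]: 100,
    [SignalModality.Announcement]: 300,
};

function getDelay(modality: SignalModality, nav: NavigationKind): number {
    const base = baseDelayMs[modality];
    // Within-line movement gets a longer delay so the signal does not
    // talk over the screen reader's character/word echo; line changes
    // get prompt feedback for users skimming through the file.
    return nav === NavigationKind.InlineMovement ? base * 3 : base;
}
```

With these example values, a sound signal would fire after 100 ms on a line change but wait 300 ms while the user reads within a line.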
Testing the Feature:
- Open a test file containing code.
- Enhance the test environment by adding breakpoints, folding code blocks, and intentionally introducing bugs to generate errors and/or warnings in the editor.
- Navigate through the code in two distinct ways: a) Skim through the program by moving quickly between lines. b) Read the content of lines with features using the left/right arrow keys to navigate character by character (or word by word).
- Currently, the delays are predefined in the code and are not user-configurable. However, you can manually adjust these values in the code to observe their effects.
Future work:
- Deprecate the `accessibility.signals.debouncePositionChanges` setting and eliminate its related code.
- Introduce a categorization of the feature types according to their relevance; tentative proposal: critical (errors/warnings), info (folded areas, breakpoints, bookmarks...).
- Add user-configurable settings for the new delays, probably grouping them by modality and/or feature category.
@microsoft-github-policy-service agree
cc: @rperez030
I have made two additional commits since the original pull request. These updates introduce user-configurable settings for the delays and remove the previously used debouncing code, which had become redundant and could in fact interfere with the new delays, extending them beyond their intended duration. With these updates, I consider the new feature complete.
I have decided to retain the accessibility.signals.debouncePositionChanges setting for the time being. Although it is not utilized in the current version, it could potentially serve as a global toggle for all new delays in future updates. If this approach is adopted, a renaming of the setting would be appropriate to reflect its broader role.
Please review these changes at your earliest convenience. I look forward to your feedback and suggestions.
Thanks for the PR! Unfortunately, I'm not able to understand the PR description.
This foundational change allows for the separation and individual handling of different modalities based on various conditions like triggers or settings.
What do you mean with that?
These changes collectively provide a more responsive and user-centric accessibility signals feature in VS Code, enhancing the overall user experience.
This statement needs some convincing arguments...
The code change might be good and justified, but I just don't understand the reasoning behind it.
@hediet, thanks for taking the time to review this PR. I've updated the description with a detailed explanation of the issues in accessibility signals that this PR intends to solve. I'm unsure whether I need to follow a specific template for the description; I noticed a new issue that might suggest this, but when navigating the pull request page with my screen reader, this point isn't clear to me.
Will look into this next week, this week we have testing!
Layering issue:
[valid-layers-check ] [build/lib/layersChecker.ts]: Reference to symbol 'NodeJS' from '@types/node' violates layer '**/vs/**/browser/**' (/mnt/vss/_work/1/s/src/vs/workbench/contrib/accessibilitySignals/browser/accessibilitySignalLineFeatureContribution.ts (46,59) Learn more about our source code organization at https://github.com/microsoft/vscode/wiki/Source-Code-Organization.
Thanks for working on this. It works well based on my testing 👏🏼
Layering issue:
[valid-layers-check ] [build/lib/layersChecker.ts]: Reference to symbol 'NodeJS' from '@types/node' violates layer '**/vs/**/browser/**' (/mnt/vss/_work/1/s/src/vs/workbench/contrib/accessibilitySignals/browser/accessibilitySignalLineFeatureContribution.ts (46,59) Learn more about our source code organization at https://github.com/microsoft/vscode/wiki/Source-Code-Organization.
I have changed the `NodeJS.Timeout` type to `any`. I tried to change it to `number`, but the linter complains... Maybe there is a better option in the context of the VS Code repo, but I haven't found it; I did a quick search in the repo for other `setTimeout` uses but could not see a type definition.
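For what it's worth, one pattern that avoids both `any` and a direct `@types/node` reference (an assumption on my part, not necessarily the repo's convention) is to derive the handle type from `setTimeout` itself:

```typescript
// ReturnType<typeof setTimeout> resolves to number in the browser and
// NodeJS.Timeout under Node, so the code type-checks in either layer
// without referencing @types/node symbols directly.
// DelayedSignal is an illustrative name, not a class from this PR.
class DelayedSignal {
    private handle: ReturnType<typeof setTimeout> | undefined;

    schedule(callback: () => void, delayMs: number): void {
        this.cancel(); // restart the delay on every call
        this.handle = setTimeout(callback, delayMs);
    }

    cancel(): void {
        if (this.handle !== undefined) {
            clearTimeout(this.handle);
            this.handle = undefined;
        }
    }
}
```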
My apologies, I closed the PR by mistake 😳
Thanks for working on this. It works well based on my testing 👏🏼
Thanks for your feedback and for taking the time to test my work, I really appreciate it 👏🏼. If there are any specific areas you think could be improved or require further clarification, please let me know. I look forward to any additional comments you may have ☺
@ramoncorominas I cannot thank you enough for all the work and time spent on this. You definitely took my ramblings about delays to the next level. I really like the modalities approach. @meganrogge this works well on Windows as far as I can tell. I don't have my environment properly set up to test on macOS, so I may wait until it can be merged to test over there.
Thanks for testing on Windows @rperez030. I tested it on macOS 👍🏼
I had a more detailed review now.
Over the years, the SignalLineFeatureContribution has changed in many (unintended) ways and is no longer in good shape.
Ideally, a `LineFeature` is stateless, but `_lineChanged` works against that.
Before changing the code further, I'd suggest describing this entire feature in detail first (please without ChatGPT). Also, there are two things that can be debounced: cursor changes and feature changes (meaning markers, breakpoints, etc.). For example, a warning might appear for some milliseconds and then disappear again.
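To illustrate the second kind of debouncing mentioned above (a sketch under my own assumptions, not VS Code's actual code): feature changes can be run through a debounce wrapper so that a marker which appears and disappears within the window never produces a signal.

```typescript
// Generic trailing-edge debounce: only the last call within waitMs is
// executed, so a warning marker that appears and vanishes quickly
// results in a single invocation reflecting the final state.
function debounce<T extends unknown[]>(
    fn: (...args: T) => void,
    waitMs: number,
): (...args: T) => void {
    let handle: ReturnType<typeof setTimeout> | undefined;
    return (...args: T) => {
        if (handle !== undefined) {
            clearTimeout(handle); // drop the superseded call
        }
        handle = setTimeout(() => fn(...args), waitMs);
    };
}
```

Running the feature-change handler through such a wrapper would suppress a transient warning like the one described above, at the cost of delaying every signal by `waitMs`.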
I also don't fully understand the Voice modality. How is it different from an aria alert?
I also don't fully understand the `Voice` modality. How is it different from an aria alert?
An aria alert role tells a screen reader that changes in a particular container are relevant so that they can be announced via Braille or speech, what we today call announcements. I understand the voice modality to be specific about TTS. For example, someone could build an extension to support magnification users who are not screen reader users that makes voice announcements in certain situations depending on these settings. Someone who uses a screen reader may benefit from hearing certain events announced by a different voice in a process that is separate from the screen reader. Someone may build an extension that listens for haptic signals to transmit haptic feedback to a user via a wristband.
Also, @hediet, I'm not saying this is the case, but some folks who have English as a second language, or who do not speak English at all but want to contribute, may use Copilot or a translation tool to help with their writing, or may use the Copilot feature to generate commit messages, for example. I suggest that we focus on the actual content, the accuracy of the explanation, the potential benefit of the contribution, and the quality of the code, rather than the tools contributors use to get things done.
Someone who uses a screen reader may benefit from hearing certain events announced by a different voice in a process that is separate from the screen reader.
That sounds interesting. How would that work technically? Currently, signals are not forwarded to extensions, so support for this would have to be implemented in VS Code core.
I'm just concerned about adding an enum variant (Voice) for something that is not implemented yet (or even worse, not implementable), as it is just dead code.
So far it looks like the Modality enum just reflects the existing audio-cue/aria-alert settings and enables feature-category- and modality-specific delays.
A document that describes how these range/line-based "features" should work would be very helpful.
Also, @hediet, I'm not saying this is the case, but some folks who have English as their second language, or who do not speak English at all but want to contribute may use Copilot or any translation tool to help with their writing, or actually use the Copilot feature to generate commit messages, for example.
I agree and respect that, but the text below "Enhanced Solution Strategy" in the initial PR description is extremely difficult to read and make sense of, and this is very typical of ChatGPT-generated text. The text above it, however, is very clear and precise (thanks for adding it!).
Superseded by #210679. Given changed requirements of the feature and new insights, we decided to rewrite the line-feature based signals (now called text property signals). This cleans up the code significantly and gives a lot of flexibility with regards to delay times.
Thanks for the idea of per-modality-delays!