Poor osascript performance on Sonoma?
Issue
Setup:
- Fresh setup running the "Real World Example".
Details
I've noticed that using Guidepup is very slow on macOS Sonoma.
In particular, it seems to take a long time for the AppleScripts to execute when you shell out to osascript. When following the "Real World Example", I observed a 5-6 second gap between each command to find the next heading. I'm wondering if you've been able to reproduce this?
It's not immediately clear whether it's a VoiceOver issue or an AppleScript issue, but I've been assuming it's AppleScript. When I dug through the source, I noticed you're shelling out to osascript. I'm curious whether you've tried bridging to OSAKit directly instead of shelling out. There's some evidence online suggesting this would be faster.
FWIW, I did manage to get the objc library talking to OSAKit for an unrelated project. If someone else is willing to repro and confirm it's AppleScript, I can work on a PR to move to the objc bindings.
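The bridge I had in mind looks roughly like this (an untested sketch assuming the `objc` npm package and its colon-to-underscore selector convention; I haven't verified the error out-param handling here, so treat the details as provisional):

```javascript
// Rough, untested sketch of running AppleScript in-process via OSAKit
// instead of spawning osascript per command. Assumes the `objc` npm
// package, which maps Objective-C selectors by replacing colons with
// underscores (e.g. -initWithSource: becomes initWithSource_).
let executeAppleScript = null;

try {
  const objc = require("objc");
  objc.import("OSAKit");
  const { OSAScript } = objc;

  executeAppleScript = function (source) {
    // Compile and execute without leaving the process.
    const script = OSAScript.alloc().initWithSource_(source);
    // Selector -executeAndReturnError:; ignoring the error out-param
    // here for brevity.
    return script.executeAndReturnError_(null);
  };
} catch {
  // `objc` unavailable (non-macOS or not installed): leave as null.
}

// Usage (macOS only):
// executeAppleScript('return name of application "Finder"');
```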
Hi @masegraye 👋
Anything that could improve performance is certainly welcome!
In this instance I just want to check whether the slowness you've encountered is due to the shelling out, or a consequence of the polling mechanism of the VO wrapper.
By default, Guidepup waits (via polling) until all spoken text has been emitted from VO. To get that additional information you are currently bottlenecked by the speaking rate of VO, which unfortunately makes things slow.
If you don’t need the hints then you can use the capture option of "initial" when calling vo.start() to greatly speed things up (not sure if I’ve got around to exposing this option in the Playwright package yet…)
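For example (from memory, so double-check the option name against the README before relying on it):

```javascript
// From memory -- verify against the Guidepup docs. Capturing only the
// initial announcement avoids polling until VoiceOver has finished
// speaking hints, which is what makes the default slow.
const startOptions = { capture: "initial" };

async function run() {
  const { voiceOver } = require("@guidepup/guidepup");
  await voiceOver.start(startOptions);
  // ... run your commands ...
  await voiceOver.stop();
}
```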
Ah yes, see also https://github.com/guidepup/guidepup-playwright?tab=readme-ov-file#providing-screen-reader-start-options
There are options to explore for improving this, and I’d also be interested to see benchmarks showing how much could be gained by not shelling out, so I’m happy to explore this further with you!
@cmorten Interesting - thanks for that background.
I came across guidepup while building out a proof-of-concept. I think I'll be productionizing it, so that will give me more time to go deeper on this.
Happy to circle back with more data and benchmarks.
See also #62 and #82