AtlasEngine does not respect bellStyle for git bash
Windows Terminal version
1.16.10262.0
Windows build number
10.0.22621.1265
Other Software
git bash (git for windows version 2.39.2.windows.1)
Steps to reproduce
- Open git bash in Windows terminal
- Press backspace at an empty prompt
Expected Behavior
Respect the user's choice of "bellStyle"; in my case "none", so nothing should happen.
Actual Behavior
The terminal flashes (as with "bellStyle": "window").
That doesn't sound like something that the Atlas engine would affect... That sounds more like the "visual bell" that uses the screen-reverse sequence. Could you double-check your /etc/inputrc file? There should be a line like `set bell-style none` (or, if my hypothesis is correct, `set bell-style visible`).
You're right about my inputrc file.
```shell
$ cat /etc/inputrc | grep bell
set bell-style visible
```
What led me to file this bug report is the inconsistency: it only happens when AtlasEngine is enabled. I used Windows Terminal's bellStyle setting to disable the bell globally so that I don't need to set it multiple times, e.g. in git bash, every WSL distro, PowerShell, etc.
> it only happens when AtlasEngine is enabled
Oh no, did we break visual bell for the DX renderer?
> Oh no, did we break visual bell for the DX renderer?
No. If you do something like `tput flash`, you can see it works just fine. The difference is that the ncurses flash is implemented as a toggle of DECSCNM with a delay, while the bash visual bell has no delay at all. So unless you happen to get a repaint at exactly the right moment, you're not going to see anything.
This was a known issue that we discussed back when DECSCNM was first implemented (see https://github.com/microsoft/terminal/issues/72#issuecomment-504910694). The only thing surprising here is that the Atlas renderer actually does show bash's visual bell sometimes. On my system it's not consistent - sometimes I see it, and sometimes I don't - but I never see anything with the DX renderer.
Yup! We should flush a frame when we get a DECSCNM.
I have started working on this issue. I'm about halfway done writing code that can block the caller until a frame has been rendered. This would ensure that AtlasEngine (or any other renderer) has drawn the visual bell for at least 1 frame. This would also allow us to implement synchronized update sequences #8331.
Unfortunately, I won't be able to land this in 1.19 because to implement this correctly I need to rewrite all of RenderThread (@DHowett's words, not mine 😄).
FYI, the way I implemented the synchronized update sequence was with a simple paused flag in the Renderer class that would make Renderer::PaintFrame return immediately when set. To prevent it from being paused indefinitely, it also checked whether the pause had been active for more than 200ms, in which case it would reset the flag and allow PaintFrame to proceed.
It felt like a bit of a hack, but it seemed to work OK, and didn't require rewriting all of RenderThread. 😛
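A minimal sketch of that paused-flag idea (the class shape and member names here are illustrative, not the actual Terminal code; only the 200ms timeout is taken from the description above):

```cpp
#include <atomic>
#include <chrono>

class Renderer {
public:
    // Called when the "begin synchronized update" sequence arrives.
    void PauseForSynchronizedUpdate() {
        _pauseStart = std::chrono::steady_clock::now();
        _paused = true;
    }

    // Called when the "end synchronized update" sequence arrives.
    void Resume() { _paused = false; }

    // Returns true if a frame was painted, false if it was skipped.
    bool PaintFrame() {
        if (_paused) {
            const auto elapsed = std::chrono::steady_clock::now() - _pauseStart;
            if (elapsed < std::chrono::milliseconds(200)) {
                return false; // skip this frame while the update is in progress
            }
            _paused = false; // safety valve: don't stay paused indefinitely
        }
        // ... actual painting would happen here ...
        return true;
    }

private:
    std::atomic<bool> _paused{false};
    std::chrono::steady_clock::time_point _pauseStart{};
};
```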
Also, I'm not entirely sure what you mean by "block the caller until a frame has been rendered", but for synchronized updates, the VT parser needs to carry on parsing output while the renderer is blocked, which seems like it might be the opposite of what you're proposing.
> Also, I'm not entirely sure what you mean by "block the caller until a frame has been rendered", but for synchronized updates, the VT parser needs to carry on parsing output while the renderer is blocked, which seems like it might be the opposite of what you're proposing.
Ah whoops, I was a bit too terse. Basically, I'd like to block the VT thread when the synchronized update ends (not when it begins). From what I could tell, that's what other implementations do, so I wanted to adopt it. The "at which point any changes made during the update should be applied atomically" part of the spec seems a bit ambiguous.
It would've fit this issue, since it requires flushing a frame as well. The current RenderThread implementation has multiple race conditions, among them one where a flush can be randomly missed: _hPaintCompletedEvent is only signaled when the thread is not in the PaintFrame() call, which is incorrect, because the thread can also be outside that call but in between the two SetEvent / ResetEvent calls, and an arbitrarily long time can pass between the _hPaintEnabledEvent and _hPaintCompletedEvent handling. Additionally, there's a race condition between the call to NotifyPaint() and a hypothetical WaitForPaintCompletion(), because that can also take an arbitrarily long time and so it might wait on the wrong frame to draw. (There are more.)
I prototyped a "better" RenderThread here: https://github.com/microsoft/terminal/blob/dev/lhecker/14897-flush-DECSCNM/src/renderer/base/thread.cpp#L82-L108
While writing that code I realized that it suffers from the last problem I mentioned above, so I decided to put it on hold. I realized that for all meaningful operations on RenderThread you're holding the console lock anyway (like when calling Renderer::TriggerRedrawAll()), so we can mostly drop the use of atomics and Win32 events and use the console lock for the render thread as well. This allows us to properly synchronize and signal when a frame has finished.
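The "block the caller until a frame has been rendered" part could be sketched with a frame counter and a condition variable guarded by a single lock (standing in for the console lock mentioned above; the class and method names here are hypothetical, not from the Terminal codebase):

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

class RenderThreadSync {
public:
    // Called by the render thread after each completed PaintFrame().
    void NotifyFramePainted() {
        {
            std::lock_guard lock{_mutex};
            ++_framesPainted;
        }
        _cv.notify_all();
    }

    // Called by the VT thread. Returns once at least one new frame has
    // been painted after this call, so that e.g. a DECSCNM toggle is
    // guaranteed to have been visible for at least one frame.
    void WaitForNextFrame() {
        std::unique_lock lock{_mutex};
        const auto target = _framesPainted + 1;
        _cv.wait(lock, [&] { return _framesPainted >= target; });
    }

    uint64_t FramesPainted() {
        std::lock_guard lock{_mutex};
        return _framesPainted;
    }

private:
    std::mutex _mutex;
    std::condition_variable _cv;
    uint64_t _framesPainted = 0;
};
```

Because the target frame number is captured under the lock before waiting, a notification can't be missed between the caller deciding to wait and actually waiting, which is exactly the kind of race the current event-based design allows.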
I should mention that I don't mean to imply that I strictly want to build it that way. The only thing I feel strongly about is that the current RenderThread implementation has multiple race conditions that should be fixed at some point to make the system more robust and reliable. I do somewhat feel that this issue would be a good time to do that. Everything else is even less than "somewhat".
> I'd like to block the VT thread when the synchronized update ends (not when it begins). From what I could tell, that's what other implementations do so I wanted to adopt it. The "at which point any changes made during the update should be applied atomically" part of the spec seems a bit ambiguous.
OK, that makes sense. I didn't particularly care about triggering the refresh exactly at the point of the end sequence, because it's not something apps can rely on anyway (because of the need for a timeout), and it seemed to work well enough for my use case. I certainly wouldn't be opposed to a better solution, though. I just lost interest in the idea once I realised I could achieve what I needed with the standard VT paging operations.