snowcap: add mouse_area widget
This PR adds the MouseArea widget to snowcap.
This widget is a native Iced widget that allows finer control than Button (e.g. knowing which mouse button was pressed).
Hello @Ottatop
Fix on_double_click -- this isn't working because the widget_ids aren't stable in my tests. Needs further investigation.
The widget_id issue happens because every call to the program:view() method recreates the tree, meaning widgets get assigned new widget_ids.
The issue with double_click is that two events are raised with the same target: on_press & on_double_click
- on_press is handled first, which triggers a call to prog:update()
- prog:view() then rebuilds the widget tree, followed by re-arming the callbacks
- on_double_click is then handled. It still carries the previous widget_id, so it's no longer in the callback list.
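For illustration, here is a minimal sketch of that failure mode with made-up names (nothing here is the actual snowcap code): callbacks are keyed by the auto-generated widget_id, every view() pass hands out fresh ids, and the second event of the pair looks up an id that no longer exists.

```rust
use std::collections::HashMap;

type WidgetId = u32;
type Callback = Box<dyn Fn()>;

struct Callbacks {
    next_id: WidgetId,
    map: HashMap<WidgetId, Callback>,
}

impl Callbacks {
    // Every view() pass re-registers the callbacks under brand-new ids.
    fn rearm(&mut self, on_event: Callback) -> WidgetId {
        self.map.clear();
        self.next_id += 1;
        let id = self.next_id;
        self.map.insert(id, on_event);
        id
    }

    // Returns false when the id is stale and the event gets dropped.
    fn dispatch(&self, id: WidgetId) -> bool {
        self.map.get(&id).map(|callback| callback()).is_some()
    }
}

fn main() {
    let mut callbacks = Callbacks { next_id: 0, map: HashMap::new() };

    // First view(): the mouse_area's callback gets id 1.
    let id_at_press = callbacks.rearm(Box::new(|| println!("on_press")));
    assert!(callbacks.dispatch(id_at_press)); // on_press is handled

    // on_press triggered prog:update() -> prog:view() -> callbacks re-armed as id 2.
    callbacks.rearm(Box::new(|| println!("on_press")));

    // on_double_click still carries the old id 1: the lookup misses, event dropped.
    assert!(!callbacks.dispatch(id_at_press));
}
```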
An easy fix is to have the user define a unique identifier for the mouse_area ~~, and tie it to its widget_id (meaning if two areas have the same unique_id, they'll have the same widget_id, but that's a small price to pay), so I'm going with that for the time being.~~
~~That unique_id is currently an arbitrary string, which I'm using as a key to memoize the widget_id inside the snowcap API.
I've yet to decide whether this should be optional or required (both on the rust & lua side, but lua will be built to be resilient if it's missing).~~
I'm also wondering if this should/could go into the protobuf definition, as a new Key message:
```protobuf
message Key {
  oneof key {
    // autogenerated, unstable
    uint32 widget_id = 1;
    // user defined, stable. Use if more than a single message can be generated at the same time
    string unique_id = 2;
  }
}
```
Shifting it to the protobuf side means that both lua and rust can rely on a Key type being available in the event - for any event-aware widget - without needing to store anything or apply special treatment in the callback (a widget_id will still have to be generated, but the handling will be the same for all widgets).
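For illustration, a rough Rust-side sketch (hypothetical types, not generated prost code) of what keying callbacks on such a Key could look like; auto-generated and user-defined ids then go through the same lookup path:

```rust
use std::collections::HashMap;

// Mirrors the proposed `oneof key` (hand-written here, not generated code).
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
enum Key {
    // Auto-generated, unstable across view() generations.
    WidgetId(u32),
    // User-defined, stable across view() generations.
    UniqueId(String),
}

struct EventRouter {
    callbacks: HashMap<Key, Box<dyn FnMut()>>,
}

impl EventRouter {
    fn register(&mut self, key: Key, callback: Box<dyn FnMut()>) {
        self.callbacks.insert(key, callback);
    }

    // Same handling for every event-aware widget, whichever id variant it uses.
    fn handle(&mut self, key: &Key) {
        if let Some(callback) = self.callbacks.get_mut(key) {
            callback();
        }
    }
}

fn main() {
    let mut router = EventRouter { callbacks: HashMap::new() };
    router.register(Key::WidgetId(42), Box::new(|| println!("button pressed")));
    router.register(
        Key::UniqueId("taglist.tag-3".into()),
        Box::new(|| println!("tag 3 clicked")),
    );
    router.handle(&Key::UniqueId("taglist.tag-3".into()));
}
```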
Side note about the unique_id: it might be useful even for widgets without callbacks.
One of the reasons I need the mouse area is to properly handle a clickable taglist for my bar. I'll likely use that value as an identifier for the state in the bar itself (I wrapped widgets so they have a view & update function), and that may prove useful for other stateful widgets, even without events driven by snowcap (e.g. a textbox).
So the trick I was using in lua (memoizing a value in a map/dictionary) doesn't feel rust-y enough for my taste.
I'm adding the unique_id to the GetWidgetEventsResponse type for the time being, but I really think it would be best to have it inside a message Key {} type (I just don't want to make API-breaking changes without permission :) )
On the second click, MouseArea will generate both click and double click events within a single update. I think it's better to have GetWidgetEventsResponse send a vec of events so both can be sent at the same time, which would allow the client to apply update twice without having to call view in between.
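A rough sketch of the client-side handling such batching would enable (hypothetical types, not the actual protocol): run every callback in the batch against the current callback table, then rebuild the view once afterwards.

```rust
// Hypothetical shapes, just to illustrate the batching idea.
struct WidgetEvent {
    widget_id: u32,
    // ...event payload elided
}

struct GetWidgetEventsResponse {
    // A whole batch instead of a single event.
    events: Vec<WidgetEvent>,
}

fn on_response(response: GetWidgetEventsResponse) {
    // Both the press and the double-click callbacks run against the same
    // generation of widget ids...
    for event in &response.events {
        run_callback(event.widget_id);
    }
    // ...and the view (with its fresh ids) is only rebuilt afterwards.
    rebuild_view();
}

fn run_callback(widget_id: u32) {
    println!("dispatching event for widget {widget_id}");
}

fn rebuild_view() {
    println!("view() called once per batch");
}

fn main() {
    on_response(GetWidgetEventsResponse {
        events: vec![
            WidgetEvent { widget_id: 7 }, // on_press
            WidgetEvent { widget_id: 7 }, // on_double_click
        ],
    });
}
```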
I'm not a fan of exposing a user-facing ID method, and I think the current system of auto-generating IDs is good because we can hide backend implementation details like how widget IDs work. Exposing a unique_id method and requiring people to provide a unique ID is not ideal because it a) pushes implementation details into the user-facing API, which limits the ability to make breaking changes, and b) is error-prone. The current MouseArea API needs unique_id to be called when both on_press and on_double_click are set, but nothing actually enforces that. Someone could just forget to add it and wonder why the double click doesn't work, which is a sharp corner in the API that I don't want.
I'd do away with the unique ID and implement my suggestion above instead. This means we won't need the user to call a separate method to make things work.
> I just don't want to make API-breaking changes without permission
I believe widget events were added after the most recent release so you don't have to worry about breaking changes in the development window since then.
> On the second click, MouseArea will generate both click and double click events within a single update. I think it's better to have GetWidgetEventsResponse send a vec of events so both can be sent at the same time, which would allow the client to apply update twice without having to call view in between.
> I think the current system of auto-generating IDs is good because we can hide backend implementation details like how widget IDs work.
While I agree it's best to hide implementation details, this is causing some events to be dropped, even when batching events in GetWidgetEventsResponse (although that does fix the double click). The issue is that surface::update can be called more than once for a given view generation, but the client will always regenerate a view if at least one event (or event batch) is received.
Added some logging in snowcap_api::layer.rs to test this:
Move events are sometimes dropped:
```
2025-09-28T09:11:33.598709Z INFO pinnacle::config: UPDATE Move(
2025-09-28T09:11:33.598757Z INFO pinnacle::config: Point {
2025-09-28T09:11:33.598783Z INFO pinnacle::config: x: 176.21484,
2025-09-28T09:11:33.598806Z INFO pinnacle::config: y: 87.203125,
2025-09-28T09:11:33.598830Z INFO pinnacle::config: },
2025-09-28T09:11:33.598852Z INFO pinnacle::config: )
2025-09-28T09:11:33.598880Z INFO pinnacle::config: view called
2025-09-28T09:11:33.694117Z INFO pinnacle::config: UPDATE Move(
2025-09-28T09:11:33.694331Z INFO pinnacle::config: Point {
2025-09-28T09:11:33.694464Z INFO pinnacle::config: x: 176.89063,
2025-09-28T09:11:33.694585Z INFO pinnacle::config: y: 87.203125,
2025-09-28T09:11:33.694695Z INFO pinnacle::config: },
2025-09-28T09:11:33.694821Z INFO pinnacle::config: )
2025-09-28T09:11:33.694941Z INFO pinnacle::config: view called
2025-09-28T09:06:40.326876Z INFO pinnacle::config: NOMSG WidgetId(
2025-09-28T09:06:40.326943Z INFO pinnacle::config: 113,
2025-09-28T09:06:40.326997Z INFO pinnacle::config: ) => MouseArea(
2025-09-28T09:06:40.327048Z INFO pinnacle::config: Event {
2025-09-28T09:06:40.327108Z INFO pinnacle::config: event_type: EventExit,
2025-09-28T09:06:40.327174Z INFO pinnacle::config: data: None,
2025-09-28T09:06:40.327231Z INFO pinnacle::config: },
2025-09-28T09:06:40.327320Z INFO pinnacle::config: )
2025-09-28T09:06:40.327415Z INFO pinnacle::config: avail key => [WidgetId(114)]
```
Scroll events are particularly bad:
```
2025-09-28T09:49:23.228364Z INFO pinnacle::config: UPDATE Scroll(
2025-09-28T09:49:23.228395Z INFO pinnacle::config: Lines {
2025-09-28T09:49:23.228408Z INFO pinnacle::config: x: 0.0,
2025-09-28T09:49:23.228421Z INFO pinnacle::config: y: -1.0,
2025-09-28T09:49:23.228435Z INFO pinnacle::config: },
2025-09-28T09:49:23.228447Z INFO pinnacle::config: )
2025-09-28T09:49:23.228461Z INFO pinnacle::config: view called
2025-09-28T09:49:23.230134Z INFO pinnacle::config: dropped event WidgetId(98)
2025-09-28T09:49:23.230166Z INFO pinnacle::config: avail id: "[WidgetId(99)]"
2025-09-28T09:49:23.230181Z INFO pinnacle::config: dropped event WidgetId(98)
2025-09-28T09:49:23.230198Z INFO pinnacle::config: avail id: "[WidgetId(99)]"
2025-09-28T09:49:23.230214Z INFO pinnacle::config: dropped event WidgetId(98)
2025-09-28T09:49:23.230232Z INFO pinnacle::config: avail id: "[WidgetId(99)]"
2025-09-28T09:49:23.273369Z INFO pinnacle::config: UPDATE Scroll(
2025-09-28T09:49:23.273435Z INFO pinnacle::config: Lines {
2025-09-28T09:49:23.273468Z INFO pinnacle::config: x: 0.0,
2025-09-28T09:49:23.273498Z INFO pinnacle::config: y: -1.0,
2025-09-28T09:49:23.273530Z INFO pinnacle::config: },
2025-09-28T09:49:23.273560Z INFO pinnacle::config: )
2025-09-28T09:49:23.273596Z INFO pinnacle::config: view called
2025-09-28T09:49:23.277340Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277395Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277422Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277447Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277472Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277498Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277525Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277552Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277582Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277622Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277652Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277684Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277715Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277746Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277775Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277806Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.277835Z INFO pinnacle::config: dropped event WidgetId(99)
2025-09-28T09:49:23.277869Z INFO pinnacle::config: avail id: "[WidgetId(100)]"
2025-09-28T09:49:23.288885Z INFO pinnacle::config: UPDATE Scroll(
2025-09-28T09:49:23.288985Z INFO pinnacle::config: Lines {
2025-09-28T09:49:23.289076Z INFO pinnacle::config: x: 0.0,
2025-09-28T09:49:23.289197Z INFO pinnacle::config: y: -1.0,
2025-09-28T09:49:23.289283Z INFO pinnacle::config: },
2025-09-28T09:49:23.289356Z INFO pinnacle::config: )
2025-09-28T09:49:23.289435Z INFO pinnacle::config: view called
2025-09-28T09:49:23.291875Z INFO pinnacle::config: dropped event WidgetId(100)
2025-09-28T09:49:23.291968Z INFO pinnacle::config: avail id: "[WidgetId(101)]"
2025-09-28T09:49:23.292041Z INFO pinnacle::config: dropped event WidgetId(100)
2025-09-28T09:49:23.292108Z INFO pinnacle::config: avail id: "[WidgetId(101)]"
2025-09-28T09:49:23.292195Z INFO pinnacle::config: dropped event WidgetId(100)
2025-09-28T09:49:23.292266Z INFO pinnacle::config: avail id: "[WidgetId(101)]"
2025-09-28T09:49:23.299020Z INFO pinnacle::config: UPDATE Scroll(
2025-09-28T09:49:23.299061Z INFO pinnacle::config: Lines {
2025-09-28T09:49:23.299084Z INFO pinnacle::config: x: 0.0,
2025-09-28T09:49:23.299101Z INFO pinnacle::config: y: -1.0,
2025-09-28T09:49:23.299125Z INFO pinnacle::config: },
2025-09-28T09:49:23.299143Z INFO pinnacle::config: )
2025-09-28T09:49:23.299179Z INFO pinnacle::config: UPDATE Scroll(
2025-09-28T09:49:23.299201Z INFO pinnacle::config: Lines {
2025-09-28T09:49:23.299224Z INFO pinnacle::config: x: 0.0,
2025-09-28T09:49:23.299247Z INFO pinnacle::config: y: -1.0,
2025-09-28T09:49:23.299270Z INFO pinnacle::config: },
2025-09-28T09:49:23.299299Z INFO pinnacle::config: )
2025-09-28T09:49:23.299332Z INFO pinnacle::config: view called
```
Here is the specific code I use for testing: testing.patch
Apply the patch, then run:
```sh
just run -c ./api/rust/examples/default_config
```
This is ultimately your call, but the way I see it, the options are:
1. We decide that some event drops are acceptable.
2. We expose a way for the user to have either a unique key across views or an autogenerated id, and document that some widgets may drop events if the autogenerated id is used (i.e. it's there for convenience/ease of use, and if reliability is needed, the unique key should be used).
3. We have both an auto-generated id and a user-defined one, but it's up to the widget to decide which is used.
4. We associate the widget_id per layer and reset it before each view generation => this should ensure that unless the order/number of widgets changes, the widget_ids remain stable across generations.
5. We generate some kind of hash as a widget_id (this requires Msg to implement Hash, and I'm not sure that can be done with closures, so... meh).
I'm not a huge fan of #1.
#3 hides the internals, but I don't know how to be 100% sure the issue cannot appear, such that relying on a widget_id alone would be fine.
#4 could cause issues if the order & number of widgets changes but the widget types stay the same. That being said, this is a very niche issue and might never appear in the wild :)
Regardless, I think the event batching is a good idea :D
I'm of course open to suggestions if you have a better idea.
EDIT: it came to me that the issue might be solved by keeping the previous set of handlers around, but that may consume more RAM than simple strings.
I'd also like to point out that iced has an opaque Id type which can be either unique or user-defined, for widgets which might need to be referred to outside the view function.
Agree that 1 is bad; I think any solution is required to process all events.
With 2, 3, 4, and 5, while we stabilize the ids somewhat, we still run into the problem of sending widget events for stale views:
When a text input widget is added, this is going to cause problems with spammed input. For example, with a text input initialized with text "123", typing "4" immediately followed by "5" will have widget event 1 send "1234" while widget event 2 sends "1235", which is no bueno.
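To make the conflict concrete, here's a toy illustration (not the real text input code) of applying two keystrokes against the same stale snapshot versus applying them sequentially:

```rust
// Applying each keystroke to the stale snapshot produces conflicting values;
// applying them sequentially to the evolving state produces "12345".
fn apply_key(state: &str, key: char) -> String {
    let mut next = state.to_owned();
    next.push(key);
    next
}

fn main() {
    let stale = "123";

    // Two widget events computed against the same stale view:
    let event_1 = apply_key(stale, '4'); // "1234"
    let event_2 = apply_key(stale, '5'); // "1235", the "4" is lost

    // What we actually want: apply them in order against the updated state.
    let sequential = apply_key(&apply_key(stale, '4'), '5'); // "12345"

    println!("{event_1} / {event_2} vs {sequential}");
}
```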
For now the only way I've come up with to fix this is to buffer all incoming input events while we're waiting for an update, then once it comes in we send all buffered inputs immediately and batch send the resulting widget events:
One problem I see with this approach is that all input during an update isn't tied to the current view on the screen but to the upcoming one, which isn't very intuitive. You could click a button that hasn't appeared yet, for example. Though I don't know how impactful this is in practice.
If you have any ideas I'd love to hear them because all the current solutions have some downside or another. Or maybe the whole architecture is just flawed and we need to come up with something else.
> When a text input widget is added, this is going to cause problems with spammed input. For example, with a text input initialized with text "123", typing "4" immediately followed by "5" will have widget event 1 send "1234" while widget event 2 sends "1235", which is no bueno.
Ugh, I overlooked that...
> For now the only way I've come up with to fix this is to buffer all incoming input events while we're waiting for an update, then once it comes in we send all buffered inputs immediately and batch send the resulting widget events:
I feel this would be the correct way to fix the issue with the current architecture. I must admit I'm not sure exactly how to implement it, though.
> One problem I see with this approach is that all input during an update isn't tied to the current view on the screen but to the upcoming one, which isn't very intuitive. You could click a button that hasn't appeared yet, for example. Though I don't know how impactful this is in practice.
My 2 cents is that we're getting into edge-case-of-an-edge-case territory. While it wouldn't be intuitive, I think the chances of triggering that bug are quite low. To have events land on a future view would require knowledge of what that view will look like. This is more likely to happen with things like events from an unbounded scroll wheel, but those are always unreliable because you don't know the input buffering done by the client when you use them (e.g. when scrolling a huge file in any text processor, the scrolling usually continues after you've stopped the wheel).
> If you have any ideas I'd love to hear them because all the current solutions have some downside or another. Or maybe the whole architecture is just flawed and we need to come up with something else.
I kinda like the idea of 'buffering inputs' (with the caveat I mentioned).
As for an architecture change, the only alternative I currently see would be a more static view, where we don't implicitly call Program::view() on the client side for each event.
With this model, the client sends a view only once and receives events as they happen, but the server is tasked with maintaining the program state until the client explicitly invalidates the view (in which case it's ok to drop events).
For your text input example, this means the client would receive 1234, the server would refresh the text input content and rebuild the view with the same widget_id, then the client would receive 12345, etc.
The issue with this model is that we lose flexibility, and both the server side and the client side become more complex, because the server now has to handle the state logic until the view is dropped, and the client has to explicitly invalidate the view.
EDIT: Thinking back on this, I'm not even sure it would solve the issue. You could still get a 'rollback' as the server would continue handling events while the new view is in flight. I think as long as we have iced-rs on the server side, the best we can do is keep the current model, with the input buffering you proposed. While it has its downsides, I think it's the best option, since it keeps the client and server sides relatively simple, and we are guaranteed to get a new view after each event batch is sent (so it's easier to reason about).
> I feel this would be the correct way to fix the issue with the current architecture. I must admit I'm not sure exactly how to implement it, though.
I did a quick & dirty implementation (I set a boolean when messages are sent to the client so that we wait for the new view before processing any additional events). I'm not dropping events anymore as far as I can tell.
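For reference, a stripped-down sketch of that flag-plus-buffer approach, with made-up names (not the actual snowcap code):

```rust
use std::collections::VecDeque;

// Placeholders for the real input / widget event types.
type InputEvent = String;
type WidgetEvent = String;

struct Surface {
    // Set once widget events have been sent; cleared when the next view arrives.
    awaiting_view: bool,
    // Unbounded buffer of inputs that arrive while we're waiting.
    pending_inputs: VecDeque<InputEvent>,
}

impl Surface {
    fn on_input(&mut self, input: InputEvent) {
        if self.awaiting_view {
            // Don't run widget logic against a view that's about to be replaced.
            self.pending_inputs.push_back(input);
        } else {
            self.run(vec![input]);
        }
    }

    fn on_new_view(&mut self) {
        self.awaiting_view = false;
        // Flush everything that accumulated while the update was in flight.
        let buffered: Vec<InputEvent> = self.pending_inputs.drain(..).collect();
        if !buffered.is_empty() {
            self.run(buffered);
        }
    }

    fn run(&mut self, inputs: Vec<InputEvent>) {
        // Pretend each input produced exactly one widget event.
        let events: Vec<WidgetEvent> = inputs;
        if !events.is_empty() {
            println!("batch-sending {} widget event(s) to the client", events.len());
            self.awaiting_view = true;
        }
    }
}

fn main() {
    let mut surface = Surface { awaiting_view: false, pending_inputs: VecDeque::new() };
    surface.on_input("press".into());        // sent right away, now awaiting the view
    surface.on_input("double click".into()); // buffered
    surface.on_input("scroll".into());       // buffered
    surface.on_new_view();                   // both buffered inputs flushed as one batch
}
```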
I'll see if I can find any adverse side effects (and fix the lua side, too), but I think we can roll with it unless we find something major requiring a re-architecture.
I've re-added the lua side. As far as I can tell, input buffering works, and I don't see any bugs.
I'll note that this is technically a leaky implementation if the config goes into an infinite loop in the Program's update function, because the event buffer is currently unbounded. It shouldn't be an issue as long as the config answers, because events are still batched together, so the whole event queue is flushed.
We could switch the event buffer to a circular buffer, but I don't think it's actually needed (in the sense that if the config is stuck in an infinite loop, the user has bigger issues, so detecting that might be more valuable than using a circular buffer here). For this reason, I've not done it for the time being. Let me know if I should change the buffer to a circular buffer in this PR (this can also be added at a later point if we do encounter issues).
@Ottatop let me know if the current implementation works for you, and I'll rebase everything to clean up the history before marking the PR as ready.
-- testing patch with the lua & rust side: testing_lua.patch
Ye we can roll with this impl. The event buffer being unbounded is fine, we can change it if it ever becomes an issue.
Uh, I've found a slight issue that only affects winit.
Mouse motion events can get very chatty and can overwhelm iced_futures subscription channels (which have a size of 100 events AFAICT). If this happens, any further events in the batch are dropped, which is less than ideal (if anything, I'd rather drop mouse events than other, potentially more useful, ones).
A secondary issue is that the spammy nature of mouse motion events can lead to a bit of lag (that being said, I've compiled in debug and I've added logging; removing the logs leads to fewer warnings from iced, but the config taking too much time to handle events and send a view could cause events to accumulate).
This is tied to the monitor/window framerate: my main monitor refreshes @ 240 fps and has no issue whatsoever (I barely hit ~50 events per update), while the secondary monitor is @ 60 fps and can reach ~150-200 events if I wiggle the mouse.
I see a few possible mitigations:
- Throttle mouse-move events. The raw events contain a 'time' field (in milliseconds as far as I can tell), so we could throttle based on that (a rough sketch follows this list).
- Discard mouse_move events before we broadcast events to subscriptions but after the widgets handle them. This means we can't have subscriptions on mouse move (we don't have any at the moment), but the widgets themselves get all events. This doesn't mitigate the potential lag issue.
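Here's a rough sketch of what time-based throttling of motion events could look like, assuming millisecond timestamps on the raw events (the interval and names are made up):

```rust
/// Drops mouse-motion events that arrive within `min_interval_ms` of the last
/// one that was let through; everything else passes untouched.
struct MotionThrottle {
    min_interval_ms: u64,
    last_motion_ms: Option<u64>,
}

enum RawEvent {
    MouseMotion { time_ms: u64 },
    Other,
}

impl MotionThrottle {
    fn should_forward(&mut self, event: &RawEvent) -> bool {
        match event {
            RawEvent::MouseMotion { time_ms } => {
                let pass = self
                    .last_motion_ms
                    .map_or(true, |last| time_ms.saturating_sub(last) >= self.min_interval_ms);
                if pass {
                    self.last_motion_ms = Some(*time_ms);
                }
                pass
            }
            // Never throttle non-motion events.
            RawEvent::Other => true,
        }
    }
}

fn main() {
    // Made-up interval: at most one motion event every 8 ms (~125 per second).
    let mut throttle = MotionThrottle { min_interval_ms: 8, last_motion_ms: None };
    let events = [
        RawEvent::MouseMotion { time_ms: 0 },
        RawEvent::MouseMotion { time_ms: 3 }, // dropped
        RawEvent::MouseMotion { time_ms: 10 },
        RawEvent::Other,
    ];
    for event in &events {
        println!("forward: {}", throttle.should_forward(event));
    }
}
```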
In my opinion, if this is to be addressed, throttling motion events should be the way to go. That being said, the issue only occurs on the winit backend (the max I've seen on udev is 14 events, and this is consistent regardless of the monitor), so I'm not even sure it's worth mitigating.
> That being said, the issue only occurs on the winit backend (the max I've seen on udev is 14 events, and this is consistent regardless of the monitor), so I'm not even sure it's worth mitigating.
I just want to clarify this. After further testing, I'm still unable to reproduce it when pinnacle is started from a tty (regardless of compilation options). With the winit backend, I get much less lag and fewer warnings if I remove all logging.
I do think this can & should be mitigated by throttling mouse-move events (or any kind of chatty/repetitive hw events). However, since this is tied to winit, I'd rather open an issue and take some time to think about a mitigation scheme that wouldn't impact the udev backend. Instead of 'not worth mitigating it', I meant 'not worth mitigating it right now, in this PR'.
Ideally it would be some dynamic adjustment that prevents hw events from overwhelming iced's channels while still allowing most events to go through. As an example of such a scheme: if the event queue is regularly filled above 75% or has reached the threshold at least once, we increase the throttling delay. This could also be achieved by increasing the channel size a smidge in iced, but AFAIK they don't expose any way to do that at the moment.
Yea let's push off figuring out throttling/other handling for spammy signals. This problem is probably going to affect other things like gesture binds in the future so I'd like to come up with a more general solution.
> Yea let's push off figuring out throttling/other handling for spammy signals. This problem is probably going to affect other things like gesture binds in the future so I'd like to come up with a more general solution.
Alright, this should be good to go then
I'll look at this more in depth soonTM but the current objective is to bugfix and stabilize 0.2.
Does that mean this PR won't make the cut for 0.2?
Just to clarify, I don't necessarily mind the MouseArea not making the cut, but this PR fundamentally changes the way we handle events (they are batched, and the surface buffers events until after we receive the next view).
Since this is quite the (protobuf api) breaking change, I was wondering if it was possible to cherry-pick these two commits for 0.2 and have a 'clean slate' to add widgets in 0.3.
Let me know if you want me to open a PR with just the event handling change.
FYI, I've rebased everything so the first two commits now set up the event handling, meaning splitting the branch will be straightforward.
Yes, a PR for the event handling changes would be good.