MouseInput should have a position
A common case is to get the location on screen a user clicks. The way to do that currently is to create a variable that stores the cursor location, which is updated whenever CursorMoved is received. Then, when a MouseInput event is received, you use whatever that variable is currently set to. I think it would be useful for winit to do this internally, so that the MouseInput event could store a position, making it easier for the user. winit should also update the variable whenever Window::set_cursor_position() is called.
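The pattern described above can be sketched in a few lines. The types below are simplified stand-ins for winit's event variants (the real `CursorMoved` and `MouseInput` live in `winit::event` and carry more fields, such as a device id and modifiers), so this is an illustration of the tracking pattern rather than winit's actual API:

```rust
// Simplified stand-ins for winit's WindowEvent variants.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Event {
    CursorMoved { x: f64, y: f64 },
    MouseInput { pressed: bool },
}

/// Returns the cursor position observed at the most recent click, if any.
fn position_of_last_click(events: &[Event]) -> Option<(f64, f64)> {
    // The user-side state: last known cursor position.
    let mut cursor_pos = (0.0, 0.0);
    let mut last_click = None;
    for event in events {
        match event {
            // Keep the variable up to date on every move...
            Event::CursorMoved { x, y } => cursor_pos = (*x, *y),
            // ...and read it when a button-press event arrives.
            Event::MouseInput { pressed: true } => last_click = Some(cursor_pos),
            Event::MouseInput { .. } => {}
        }
    }
    last_click
}

fn main() {
    let events = [
        Event::CursorMoved { x: 10.0, y: 20.0 },
        Event::CursorMoved { x: 15.0, y: 25.0 },
        Event::MouseInput { pressed: true },
    ];
    println!("{:?}", position_of_last_click(&events)); // Some((15.0, 25.0))
}
```

This is the "store the cursor location in a variable" approach the issue asks winit to internalize.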
It may be important to keep in mind that there are literally edge cases to take into account: when the cursor reaches the border of the monitor, MouseInput events can keep being generated while the actual location of the cursor does not change.
More generally, there is the question of the behavior of such a field when the cursor is outside of the window, as there will be platform dependent quirks.
Good points. Another problem I thought of: since it uses the location from the previous event poll, if your program is lagging badly, then the mouse will have moved a big distance relative to the previous poll, so it will seem like your input is lagging even more. Not sure if that's how it works, but it seems logical.
This would convolute the MouseInput event.
There is already WindowEvent::CursorMoved (in-window only and possibly accelerated) and DeviceEvent::MouseMotion (tracks raw movement even when the cursor leaves the window and should not be accelerated). The former is for emulating a cursor and the latter is better suited for camera control.
You would need to pick one of these to attach to MouseInput, and either option would be wrong because there are use cases for both. The mouse position should remain separate from the button input.
This takes like 25 lines of code for the user to implement and doesn't fall within the scope of things that winit sets out to do. This feels opinionated; it's really up to the user to decide if they want to use the location data when the mouse is clicked. The API, as it is now, reflects that.
I do not believe it convolutes MouseInput; in my opinion it is a fairly basic task, and it would help for winit to at least show the user how to track MouseInput locations. If that's in the form of an example instead of a MouseInput field, that's alright, but I think winit would benefit from adding a field. The fact that there are edge cases that need to be accounted for just gives weight to including it in winit and preventing the user from messing it up.
In terms of technical feasibility and API choices:
- I admit I don't know the details of MouseInput that well. Can MouseInput events be generated from clicks outside the window?
- CursorMoved vs MouseMotion: if MouseInput events are only generated inside the window, then it makes sense to only use CursorMoved.
- If MouseInput events can be generated outside the window, then it would be up for discussion. I don't know the difference between accelerated vs. non-accelerated that well.
@vberger what do you mean by "MouseInput events can keep being generated while the actual location of the cursor does not change."? I assume MouseInput events are generated by clicks, in which case if the position doesn't change, that doesn't matter; you just have many clicks in the same place.
Sorry, I was not formulating this correctly, forget about this part.
The thing is, currently winit strives to have a scope small enough to be reasonably maintainable. As such, anything that can be easily implemented outside of winit will, in general, not be implemented in winit unless it can be justified that the implementation would be significantly simpler if it lived in winit.
This really does not seem to be the case here. However I agree it'd be worth adding an example showcasing the proper way to track cursor location.
I agree with the original argument that it is quite rare to want to get a mouse input event without having the position associated with it. Same with TouchpadPressure, really. Given that it takes 25 lines of code to do, I'd prefer that we do the 25 lines once, in winit, rather than everyone having their own. And given that we already have WindowEvent vs. DeviceEvent splits, each with their own motion and button types, the answer to the question of "which type to do where" seems obvious.
I'll make a demo PR for discussion if people want.
@icefoxen Eh not really, take most games, they just want the delta without the position. Should they pay the calculation cost every time? (I'd probably honestly say yes but I can see a reason for no)
I've yet to see a game that wanted the delta on click, not the position. Plenty want the delta on motion, but you can perfectly calculate delta from position and vice versa, and the performance difference is negligible -- a single vector subtraction.
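The position/delta equivalence claimed here really is a single vector subtraction in each direction. A minimal sketch, using plain tuples as a stand-in for winit's `PhysicalPosition`:

```rust
/// Delta between two consecutive absolute positions: one vector subtraction.
fn delta(prev: (f64, f64), next: (f64, f64)) -> (f64, f64) {
    (next.0 - prev.0, next.1 - prev.1)
}

/// And the inverse: accumulate a delta back onto a position.
fn apply(pos: (f64, f64), d: (f64, f64)) -> (f64, f64) {
    (pos.0 + d.0, pos.1 + d.1)
}

fn main() {
    let (a, b) = ((100.0, 50.0), (103.0, 47.0));
    let d = delta(a, b);
    println!("{:?}", d);           // (3.0, -3.0)
    println!("{:?}", apply(a, d)); // (103.0, 47.0), i.e. back to b
}
```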
For the sake of argument I've implemented this for linux: https://github.com/icefoxen/winit/commit/0ff0b8e67c67beb7b266f0b8e7eddec24265c543 . It is indeed somewhat narsty, I won't lie. Would be better if there were a layer that could be inserted between EventLoop and platform_impl::EventLoop to keep track of this for you, but there isn't one. The argument that this is the job of a higher-level library is not without merit but is also kinda a pain in the ass.
Even the delta on click, take an FPS, it wants the location to 'finish' moving before applying the click (shot) itself.
But yeah, devil's advocate, I'm for including the calculated position anyway if it's not an exclusive mouse. ^.^
If it is exclusive, then there is no absolute position and that should never be sent.
Given that it takes 25 lines of code to do
Where did the 25 lines number come from? Isn't this just adding a match on the Event and keeping track of the last position? Yes, this requires the user to track the state, but this isn't very complicated. The API enables the user to choose between WindowEvent::CursorMoved or DeviceEvent::MouseMotion.
At what point do you draw the line? The API as it exists provides everything needed to do this right now.
It came from cedric-h:
This takes like 25 lines of code for the user to implement...
Besides, why not put it in? Isn't this just adding a match on the Event and keeping track of the last position? Yes, this requires winit to track the state, but this isn't very complicated. And given that the primary purpose of clicking a mouse is to click on something with a location associated with it, it seems like a reasonable request.
winit's API as it exists provides everything needed for me to either write the same code over and over with each game I want to make, or write my own higher-level abstraction on top of it. Neither is terribly satisfying. Given that the API as it exists provides everything needed to do this, can we ponder what we want the API to do?
I think the main issue with this idea is that it forces users to deal with multiple sources of truth.
If we are already tracking the mouse position, and then we get a mouse input event... Which position do we trust? Is it a guarantee that the position in the mouse input event will be the same as the one in the last cursor moved event? Should this even be something we should be worrying about?
The worst parts are that it introduces a guarantee that the API has to satisfy at runtime, makes things more stateful, and opens a door for bugs that aren't possible with the current design.
I think we're making this too complicated. Looking at the various platform impls, the only one that doesn't already include a position along with a mouse button event is Wayland, and Wayland is deliberately pretty heckin' low level. It also defines explicitly how to get the position of the button event. So if the goal is to expose the platform API in a portable manner with no fluff, I'd say that that's what we should do.
@icefoxen Does that include the mouse cursor in exclusive mode? Having an absolute position doesn't make sense for exclusive mode. Many games, for example, will absolutely require an exclusive mouse lock, which doesn't have absolute coordinates: the program only gets deltas, and absolute doesn't make sense (since it of course may not be constrained to the screen; it could be rotating a 3D camera, for example). I'm now leaning toward absolute should not 'always' exist.
@OvermindDL1 It looks like the way you get the mouse's motion delta is via DeviceEvent::MouseMotion; DeviceEvent doesn't have any absolute position data at all. Meanwhile I'm talking about WindowEvent::MouseInput. Per the docs the two event types solve different problems, with DeviceEvent being lower level, so as far as I'm concerned DeviceEvent is fine the way it is.
I think the main issue with this idea is that it forces users to deal with multiple sources of truth.
I agree with this sentiment.
In my mind, there are two ways Winit can expose device input state:
- Expose everything through events, and require the user to derive the device state through those events.
- Have Winit keep track of device state, and allow the user to query the current device state at any time.
Either of those options is acceptable. However, not committing to either option in particular and internally keeping track of some device state for some events is far, far worse than going all-in on either option, since that makes the API harder to predict. It forces users to reference the documentation to check and see if a value is exposed in an event, rather than assuming that a particular event will only store information related to that event. As such, I'd prefer not to expose mouse position through MouseInput.
(On that note, that's also the main reason I think we should remove the modifiers fields on events once ModifiersChanged is implemented everywhere. That's a discussion for #1124, though.)
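For concreteness, the second option listed above (query-style device state) can be sketched entirely in user code as a wrapper that consumes events and answers queries. The types here are simplified stand-ins, not winit's real API; the `Button` enum and `InputState` struct are hypothetical names:

```rust
use std::collections::HashSet;

#[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
enum Button { Left, Right }

enum Event {
    CursorMoved { x: f64, y: f64 },
    MouseInput { button: Button, pressed: bool },
}

/// Query-style state: feed every event in, then ask for the
/// current cursor position or held buttons whenever convenient.
#[derive(Default)]
struct InputState {
    cursor: (f64, f64),
    held: HashSet<Button>,
}

impl InputState {
    fn handle(&mut self, event: &Event) {
        match event {
            Event::CursorMoved { x, y } => self.cursor = (*x, *y),
            Event::MouseInput { button, pressed: true } => { self.held.insert(*button); }
            Event::MouseInput { button, pressed: false } => { self.held.remove(button); }
        }
    }
    fn cursor(&self) -> (f64, f64) { self.cursor }
    fn is_held(&self, b: Button) -> bool { self.held.contains(&b) }
}

fn main() {
    let mut state = InputState::default();
    for e in [
        Event::CursorMoved { x: 4.0, y: 8.0 },
        Event::MouseInput { button: Button::Left, pressed: true },
    ] {
        state.handle(&e);
    }
    println!("{:?} left_held={}", state.cursor(), state.is_held(Button::Left));
}
```

Whether such state tracking should live inside winit or in user code is exactly the question being debated.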
I agree that we want a single source of truth and that exposing things through events is nicer. I also agree that we want to do it once, right. But in this case SOMEONE is going to need to keep track of this state no matter what, and in almost all backends the OS already does it for you. So why not expose the information they give you instead of forcing the user to re-implement their own state?
There's also the fact that, you know, if you ever want to actually USE winit, you're going to want a position with almost every single WindowEvent::Mouse* you get. When architectural purity runs counter to actual usefulness, you're making art, not engineering.
@icefoxen wouldn't any actual user of Winit already be keeping track of the mouse position externally to Winit anyway? How would adding the mouse position to MouseInput be useful for those users?
But in this case SOMEONE is going to need to keep track of this state no matter what, and in almost all backends the OS already does it for you. So why not expose the information they give you instead of forcing the user to re-implement their own state?
That's a better argument for a general-purpose input state querying API rather than exposing mouse position in MouseInput. I'm totally up for having a discussion about input state querying, and I'm not convinced either way about whether or not we should have it, but exposing mouse position through MouseInput simply isn't going to be enough for people to not have to keep track of mouse position in their own code.
...the OS tracks the state for you, and gives it to you in the form of a position for a button event. So you don't NEED to store it anywhere, because you already have it when you need it. Please, please just actually look at the PR I made as an example?
This takes like 25 lines of code for the user to implement...
I'm somewhat curious about this, since it's something that came up in discussion. If I'm not mistaken, the deltas reported by MouseInput are the same ones reported from the device through raw input, correct? If so, I don't see how this could possibly be an easy implementation, considering that the operating system does its own (platform- and configuration-dependent) processing on these events to derive an actual mouse position. What is the current recommendation for going about this?
If I'm not mistaken, the deltas reported by MouseInput are the same ones reported from the device through raw input, correct?
Deltas aren't reported by MouseInput. It came up a couple of times in this issue, but the suggestion remains adding position to mouse input, which I'm in favor of.
If we want more events to include delta information, I think we should discuss that in a different issue.
I see this is four years old now.
What happens with DeviceId when there are two input devices? Suppose you have a laptop with a touchpad or touchscreen, and a mouse plugged in.
- Do you get MouseInput events with two different DeviceId values, depending on which device had a button push?
- Do you get CursorMoved events from both devices? What DeviceId do you get? Whichever moved last? The one for the screen?
- Can there ever be more than one device referenced in CursorMoved events? What does that mean?
I'm not sure if this is an argument for separating the two issues or combining them.
Not accounting for any backend specific weirdness:
- Do you get MouseInput events with two different DeviceId values, depending on which device had a button push?
That is what should happen.
- Do you get CursorMoved events from both devices? What DeviceId do you get? Whichever moved last? The one for the screen?
You get a CursorMoved event for both devices separately, with their corresponding DeviceId.
- Can there ever be more than one device referenced in CursorMoved events? What does that mean?
No.
I'm interested in why there would be any confusion around this.
It seems very intuitive to me to assume that every device would generate its own events with its own DeviceId.
And how does any of this relate to this issue? Apologies if I missed anything.
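The per-device behavior described in the answers above suggests that any user-side tracking should be keyed by device id. A sketch, with a plain integer standing in for winit's opaque `DeviceId` and tuples standing in for CursorMoved events:

```rust
use std::collections::HashMap;

type DeviceId = u32; // stand-in for winit's opaque DeviceId

/// Fold a stream of (device, x, y) cursor-move reports into one
/// last-known position per device.
fn track(moves: &[(DeviceId, f64, f64)]) -> HashMap<DeviceId, (f64, f64)> {
    let mut positions = HashMap::new();
    for &(device, x, y) in moves {
        // Each device reports its own id, so entries never collide.
        positions.insert(device, (x, y));
    }
    positions
}

fn main() {
    let positions = track(&[(2, 1.0, 1.0), (13, 9.0, 9.0), (2, 2.0, 3.0)]);
    println!("{:?}", positions.get(&2));  // Some((2.0, 3.0))
    println!("{:?}", positions.get(&13)); // Some((9.0, 9.0))
}
```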
OK. This makes sense only if each input device has its own cursor. If there are two input devices, are there two cursors?
If there are two input devices, are there two cursors?
I see the issue now! I know nothing about multi-cursor behavior, but I know that there are sometimes multiple physical input devices for the same cursor.
I'm unsure though how this would exactly relate to Winit, I'm only familiar in detail with the Web backend, but I would assume that all backends just faithfully report what their windowing system tells them. So it really entirely depends on the OS (or windowing system in the case of Linux).
Linux supports both multiple devices for the same cursor and multiple on-screen cursors are possible. That's not the default, though. The default is multiple devices feeding into one cursor. Here's what multiple mice feeding one cursor looks like.
```
$ xinput
⎡ Virtual core pointer                        id=2   [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer              id=4   [slave  pointer  (2)]
⎜   ↳ Logitech USB Optical Mouse              id=13  [slave  pointer  (2)]
⎜   ↳ Logitech USB Laser Mouse                id=15  [slave  pointer  (2)]
```
The XTEST device is for playback of recorded sessions. This is most likely to come up when there's both a touchscreen and a mouse attached.
Plugging in a USB joystick did not result in a mouse-type device appearing. At least not by default.
Windows supports touchscreen and mouse input on the same machine, but the last one used apparently becomes primary and gets cursor control. So that's different than UNIX. There are third-party multi-cursor programs for Windows, but that's considered unusual.
So input devices and cursors are different. (Do they have different DeviceId values?) Not returning cursor position on mouse events is correct. Programs should probably check if more than one cursor is being reported and complain. Otherwise, the last cursor position applies to all pointing devices.
So, not asking for any change to Winit. Some of this might be explained in the documentation, though.
So input devices and cursors are different. (Do they have different DeviceId values?) Not returning cursor position on mouse events is correct. Programs should probably check if more than one cursor is being reported and complain. Otherwise, the last cursor position applies to all pointing devices.
So, not asking for any change to Winit. Some of this might be explained in the documentation, though.
I see now why this was relevant to the issue here. Unfortunately I have no clue even how this exactly plays out in the Web backend.
I believe any discussion around this should live in a separate issue, at least until we figure out all the details. The outcome might be that this just needs better documentation, which is a very valid request as well.
See https://github.com/rust-windowing/winit/issues/795