Gigantic Refactor
Fully decouples picking from both input and the picking backend. Adds the ability to place event listeners on entities, plus easy event forwarding to entities on interaction.
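To illustrate the idea of per-entity event listeners and event forwarding described above, here is a minimal, engine-free sketch in plain Rust. All names here (`Listeners`, `PointerEvent`, `forward`) are hypothetical and do not come from the crate's actual API:

```rust
use std::collections::HashMap;

// Stand-in for an ECS entity id.
type Entity = u32;

#[derive(Debug, Clone, Copy, PartialEq)]
enum PointerEvent {
    Over(Entity),
    Click(Entity),
}

#[derive(Default)]
struct Listeners {
    // Each entity may register a callback that fires on events targeting it.
    callbacks: HashMap<Entity, Box<dyn Fn(&PointerEvent)>>,
}

impl Listeners {
    fn listen(&mut self, entity: Entity, f: impl Fn(&PointerEvent) + 'static) {
        self.callbacks.insert(entity, Box::new(f));
    }

    // "Forwarding": route an event to the listener on its target entity.
    // Returns whether any listener handled it.
    fn forward(&self, event: &PointerEvent) -> bool {
        let target = match event {
            PointerEvent::Over(e) | PointerEvent::Click(e) => *e,
        };
        match self.callbacks.get(&target) {
            Some(cb) => {
                cb(event);
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mut listeners = Listeners::default();
    listeners.listen(7, |ev| println!("entity 7 received {ev:?}"));
    assert!(listeners.forward(&PointerEvent::Click(7)));
    assert!(!listeners.forward(&PointerEvent::Over(8))); // no listener registered
}
```

In the real crate this routing would be driven by the ECS rather than a hash map of boxed closures, but the shape of the interaction is the same: interaction events target an entity, and listeners attached to that entity react.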
Overall, from a system-architecture perspective, as a next step I would try to remove the need for hardcoded key codes and provide a configurable way for users to override keybindings without having to reimplement the same systems. But I suspect most of the heavy lifting should be done on the `bevy_input` side of things to enable use cases like this.
> Overall, from a system-architecture perspective, as a next step I would try to remove the need for hardcoded key codes and provide a configurable way for users to override keybindings without having to reimplement the same systems. But I suspect most of the heavy lifting should be done on the `bevy_input` side of things to enable use cases like this.
I thought this separation was clear, but it sounds like that is not the case. I put all default inputs into separate systems so they could be easily replaced. Perhaps it would help to consolidate all inputs into a single place in the root crate?
@Weibye thank you for your feedback!
> Are there plans to upstream all or parts of this to Bevy?
Yes, that is the goal!
> Would it be relatively easy to extend this in the future to support VR raycast interactions?
Can you expand on this? There's no reason you can't create arbitrary pointers with arbitrary backends.
> What about gamepad support?
Yes, that is the intent behind making all inputs removable, and one of the motivations for adding custom cursors not tied to any hardware. You can definitely control a pointer with a gamepad. I'll probably make a rebinding example that shows this.
> Most if not all of `crates/bevy_picking_core/src/pointer.rs` seems like it makes sense as part of Bevy's input system (given that it were improved accordingly).
Yup, that's what I'm hoping to do here. I need to make a picking backend for `bevy_ui` before that's even possible, though.
> I thought this separation was clear, but it sounds like that is not the case. I put all default inputs into separate systems so they could be easily replaced. Perhaps it would help to consolidate all inputs into a single place in the root crate?
It could just be that my gut reaction is: unless it is the system responsible for receiving input into the game/app, there should be no direct mentions of actual key codes. Instead, something should define one or more actions that can be triggered, and it is then up to the input manager (or something in between) to bind buttons to actions.
(I don't think bevy is quite ready to support that yet)
My background here is Unity, and for quite some time it was idiomatic to reference buttons directly in gameplay code. While easy, it caused a maintenance nightmare. Then, a while ago, the new input manager came along and provided a separation between physical input and in-game actions, which was heavenly and made everything much easier to maintain. (The actual system is poorly made and cumbersome to work with, but the concept is great.)
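The separation described above can be sketched in a few lines of plain Rust. This is a hypothetical illustration of the concept, not `bevy_input` or Unity's API; all type names (`KeyCode`, `Action`, `ActionMap`) are made up for the example:

```rust
use std::collections::HashMap;

// Physical input: the only place key codes appear.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum KeyCode {
    W,
    Space,
}

// Semantic actions: what gameplay code talks about.
#[derive(Clone, Copy, PartialEq, Eq, Hash, Debug)]
enum Action {
    MoveForward,
    Jump,
}

struct ActionMap {
    bindings: HashMap<KeyCode, Action>,
}

impl ActionMap {
    // Gameplay systems query actions, never raw keys.
    fn action_for(&self, key: KeyCode) -> Option<Action> {
        self.bindings.get(&key).copied()
    }

    // Rebinding touches only this layer; gameplay systems are unchanged.
    fn rebind(&mut self, key: KeyCode, action: Action) {
        self.bindings.insert(key, action);
    }
}

fn main() {
    let mut map = ActionMap {
        bindings: HashMap::from([(KeyCode::W, Action::MoveForward)]),
    };
    assert_eq!(map.action_for(KeyCode::W), Some(Action::MoveForward));

    // A user-facing settings screen would call rebind; no system reimplementation needed.
    map.rebind(KeyCode::Space, Action::Jump);
    assert_eq!(map.action_for(KeyCode::Space), Some(Action::Jump));
}
```

The point of the indirection is that the default input systems could ship a default `ActionMap`, and replacing keybindings becomes a data change rather than replacing whole systems.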
Also, make sure the feedback makes sense to you before you act on it, I'm just a person on the internet 😃
> Would it be relatively easy to extend this in the future to support VR raycast interactions?
> Can you expand on this? There's no reason you can't create arbitrary pointers with arbitrary backends.
I think this is the answer then :) A VR input ray (to interact with UI panels from a distance, like a laser pen) would simply be another pointer input.
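The "a VR ray is just another pointer" idea can be sketched concretely: any input source that can produce a ray can drive a pointer, and the picking backend never needs to know which hardware it came from. This is an illustrative std-only sketch with invented names (`PointerSource`, `to_ray`), not the crate's real abstraction:

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
struct Ray {
    origin: [f32; 3],
    direction: [f32; 3],
}

// Two very different input devices, one common currency: a ray.
enum PointerSource {
    // Screen-space cursor; a real backend would unproject through the camera.
    Mouse { x: f32, y: f32 },
    // A VR controller pose already gives a world-space ray ("laser pen").
    VrController { origin: [f32; 3], direction: [f32; 3] },
}

impl PointerSource {
    fn to_ray(&self) -> Ray {
        match *self {
            // Toy unprojection: pretend the camera sits at the origin looking down -Z.
            PointerSource::Mouse { x, y } => Ray {
                origin: [0.0, 0.0, 0.0],
                direction: [x, y, -1.0],
            },
            PointerSource::VrController { origin, direction } => Ray { origin, direction },
        }
    }
}

fn main() {
    let vr = PointerSource::VrController {
        origin: [0.0, 1.6, 0.0],
        direction: [0.0, 0.0, -1.0],
    };
    // The picking backend would raycast with this, same as for a mouse pointer.
    assert_eq!(vr.to_ray().origin, [0.0, 1.6, 0.0]);
}
```

Adding VR support in this model means adding a new source that emits pointer rays; the hit-testing and event-listener machinery downstream is untouched.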
I tested out all of the examples on this branch. They behaved as expected. :tada:
@bonsairobo any thoughts on what would make for some good examples? I'd like to add examples that demonstrate how you can make use of the high level pointer events and listeners to do complex things easily. Maybe a drag and drop example?
> Any thoughts on what would make for some good examples? I'd like to add examples that demonstrate how you can make use of the high level pointer events and listeners to do complex things easily. Maybe a drag and drop example?
@aevyrie Definitely drag and drop. I noticed that was missing.
Let me tell you about one time I built a UI with drag-and-drop (using Amethyst) :). I basically had a little table of square cells where each row held the timeline of actions for a character. Then you could create a plan by placing token entities (each representing a type of action) into the cells.
This was interesting as a UI because it involved drag/hover/drop events, and it also had to enforce limits on what could be placed in a cell, which required noticing the problem and then animating the token entity back into a valid spot. You could also notice when a token was being dragged over a cell and change the cell's highlight to indicate valid/invalid placement. I think this example would require use of the high-level pointer events, event forwarding, and listeners.
I guess I just described the UI feature set of pretty much any tabletop game.
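The placement rule at the heart of that example is small enough to sketch. This is a hypothetical illustration (names like `TokenKind`, `Cell`, `try_place` are invented), showing how the same validity check can serve both the hover highlight and the drop decision:

```rust
// A token representing one type of action in the plan.
#[derive(Clone, Copy, PartialEq, Debug)]
enum TokenKind {
    Move,
    Attack,
}

// One square in a character's timeline row.
#[derive(Default, Clone)]
struct Cell {
    occupant: Option<TokenKind>,
}

// Used while dragging: drives the valid/invalid highlight on hover.
fn hover_valid(cell: &Cell) -> bool {
    // Example limit: one token per cell. A real game might also
    // restrict which token kinds a given cell accepts.
    cell.occupant.is_none()
}

// Used on drop: if placement is invalid, the caller animates the
// token back to its last valid spot instead.
fn try_place(cell: &mut Cell, token: TokenKind) -> bool {
    if hover_valid(cell) {
        cell.occupant = Some(token);
        true
    } else {
        false
    }
}

fn main() {
    let mut cell = Cell::default();
    assert!(hover_valid(&cell)); // highlight: valid target
    assert!(try_place(&mut cell, TokenKind::Move)); // drop succeeds
    assert!(!try_place(&mut cell, TokenKind::Attack)); // cell full: animate back
}
```

Wired up with the high-level pointer events, `hover_valid` would run in a listener for drag-over events on each cell entity, and `try_place` in a listener for the drop event.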