glTF
KHR_node_selectability Draft Proposal
As discussed in the Interactivity DTSG.
But there are some ways of deriving one from the other...
Sure. Two points seem to be the most atomic and fundamental values in this interconnected system. Various math nodes could be used to trivially derive everything else.
a single operation could cause multiple elements to be selected
Yes if they have event "listeners", i.e., event/onSelect nodes with the corresponding nodeIndex configuration values.
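A minimal sketch of that dispatch behavior, assuming a simplified representation of the interactivity graph (the `event/onSelect` type and `nodeIndex` configuration name come from the draft; the listener list and dispatch function are illustrative, not the spec's API):

```python
# Hypothetical, simplified behavior graph: each event/onSelect node carries a
# nodeIndex configuration value, and one selection operation fires every
# listener whose nodeIndex matches the selected node.
listeners = [
    {"type": "event/onSelect", "configuration": {"nodeIndex": 3}, "fired": False},
    {"type": "event/onSelect", "configuration": {"nodeIndex": 3}, "fired": False},
    {"type": "event/onSelect", "configuration": {"nodeIndex": 7}, "fired": False},
]

def dispatch_selection(selected_node_index):
    """Fire every onSelect listener configured for the selected node."""
    fired = []
    for node in listeners:
        if node["configuration"]["nodeIndex"] == selected_node_index:
            node["fired"] = True
            fired.append(node)
    return fired

# A single selection of node 3 triggers both of its listeners.
assert len(dispatch_selection(3)) == 2
```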
Just a general question here from someone who hasn't been involved in the interactivity discussions: Do we need to assume a "selection ray" in this extension? I can easily imagine VR/XR scenarios where I could tap a "selection point" in space, or drag out a selection box of some kind, or use some not-as-yet-invented controller to indicate selection of a node by more futuristic means. Must it always be a ray?
tap a "selection point" in space, or drag out a selection box of some kind
This sounds like new, more advanced events.
We could probably allow returning NaN if the exact coordinates cannot be provided for some reason. @dwrodger WDYT?
Yes, I think that NaN in the case that the ray can't be defined sounds fine. It may also be appropriate to update the language where it defines selection and what it means for an object to be invisible to selection. That language could just say that implementations that use systems other than ray-based selection are free to interpret "invisible to selection" in whatever way makes sense for their selection mechanics.
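The NaN fallback being discussed could look like the following sketch. The output names `selectionPoint` and `selectionRayOrigin` are assumptions for illustration (only the selection ray itself is confirmed in this thread), and the packing function is hypothetical, not the extension's defined behavior:

```python
import math

# Placeholder emitted when a selection mechanism (e.g. a box select) has no
# well-defined ray or hit point.
NAN3 = (math.nan, math.nan, math.nan)

def on_select_outputs(hit_point=None, ray_origin=None):
    """Hypothetical output packing: if ray-based coordinates cannot be
    provided, emit NaN components instead of inventing values."""
    return {
        "selectionPoint": hit_point if hit_point is not None else NAN3,
        "selectionRayOrigin": ray_origin if ray_origin is not None else NAN3,
    }

def has_coordinates(value):
    """Consumers can test for NaN to detect 'no ray available'."""
    return not any(math.isnan(c) for c in value)

out = on_select_outputs()  # e.g. a box select with no defining ray
assert not has_coordinates(out["selectionPoint"])
```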
Maybe as additional input here, in WebXR each controller has a "ray pose" and a "grip pose" and both are very much needed for different use cases.
For example, visuals for controller models are usually aligned with the "grip pose", and subsequently the action of "dragging an object" often is based on the "grip pose". The "ray pose" or "aim pose" is needed for clicking on things.
These two so far have survived a number of novel interaction mechanisms, including things like Apple Vision Pro where suddenly pointers are transient (they exist only temporarily while the user is interacting) and have very different "ray pose" (ray from the eye to where the user is looking) and "grip pose" (point and orientation for where in space the user has started the hand gesture for the selection).
Reading a bit more in the spec, I wonder about this section:
In the case of multiple-controller systems, the controllerIndex output value MUST be set to the index of the controller that generated the event; in single-controller systems, this output value MUST be set to zero.
Maybe it should read "the unique ID of the controller that has generated the event" instead? The index of a controller can change; for example, in touch-based systems usually each new touch has a new pointer ID (see https://developer.mozilla.org/en-US/docs/Web/API/PointerEvent/pointerId). Or in WebXR, where users can connect and disconnect new controllers arbitrarily (e.g. switch from controllers to hands and back) or have transient pointers as well.
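The instability argument can be shown in a few lines: an index into a live controller list changes as soon as an earlier controller disconnects, while a never-reused unique ID keeps identifying the same device. (The list-of-dicts model here is illustrative, not any particular runtime's API.)

```python
# Why a controller *index* is unstable while a unique ID is not.
controllers = [{"id": 101}, {"id": 202}]  # ids assigned once, never reused

assert controllers.index({"id": 202}) == 1
controllers.pop(0)                          # first controller disconnects
assert controllers.index({"id": 202}) == 0  # same controller, new index
assert controllers[0]["id"] == 202          # the unique ID still identifies it
```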
(I think it depends a bit on what expected usage for the returned controller index is – are there nodes that can get more data from a controller?)
In other words, a node is “selectable” if and only if no node at or above it in the hierarchy has a selectable property with value false.
Are nodes selectable by default, even if the node doesn't have the extension? This seems ambiguous in the spec.
From a performance point of view, it would be great if this were opt-in only, defaulting to false.
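The ancestor rule can be sketched as a walk up the hierarchy. The `default` parameter reflects the open question about nodes without the extension; the parent-pointer representation is a simplification (glTF actually stores parent-to-child indices), so this is illustrative rather than normative:

```python
def is_selectable(node, default=True):
    """A node is selectable iff neither it nor any ancestor sets
    KHR_node_selectability.selectable to false. `default` is the assumed
    value for nodes that don't carry the extension (debated above)."""
    current = node
    while current is not None:
        ext = current.get("extensions", {}).get("KHR_node_selectability", {})
        if ext.get("selectable", default) is False:
            return False
        current = current.get("parent")
    return True

root = {"extensions": {"KHR_node_selectability": {"selectable": False}}}
child = {"parent": root}  # no extension of its own
assert not is_selectable(child)         # inherits non-selectability from root
assert is_selectable({"parent": None})  # default-selectable without the extension
```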