model-viewer
Separate scaling from rotating on WebXR
Reference Issue
> // Rotating, translating and scaling are mutually exclusive operations; only
> // one can happen at a time, but we can switch during a gesture.
With two-finger interaction, scaling and rotating are not really mutually exclusive.
Description
I've created a threshold to detect whether a two-finger gesture is a real rotation or just a small movement from pulling the fingers apart. `ROTATION_THRESHOLD` was chosen based on some local experimentation, but I'd like to open this value up for discussion.
If the threshold is crossed, rotation becomes active; if not, then in the first frame we set the new `firstRatio` value for scaling.
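To make the gating concrete, here's a rough sketch of the idea. Only `ROTATION_THRESHOLD` and `firstRatio` are names from this PR; the rest (the `TwoFingerGesture` state and `applyTwoFingerMove`) is made up for illustration and isn't the actual diff:

```ts
// Illustrative sketch only, not model-viewer's actual code: treat a two-finger
// gesture as a rotation only after the angle between the fingers has changed
// by more than ROTATION_THRESHOLD; otherwise keep scaling against a
// firstRatio baseline.

const ROTATION_THRESHOLD = 0.05;  // radians; the exact value is open for discussion

interface TwoFingerGesture {
  startAngle: number;       // angle between the fingers when the gesture began
  firstRatio: number|null;  // finger separation used as the scaling baseline
  rotating: boolean;        // latched once the threshold is crossed
}

function applyTwoFingerMove(
    gesture: TwoFingerGesture, angle: number, separation: number,
    baseScale: number): {scale: number, deltaYaw: number} {
  const deltaAngle = angle - gesture.startAngle;

  if (!gesture.rotating && Math.abs(deltaAngle) > ROTATION_THRESHOLD) {
    // The fingers really rotated: enable rotation for the rest of the gesture.
    gesture.rotating = true;
  } else if (gesture.firstRatio == null) {
    // First frame of a small (non-rotating) movement: record the new
    // firstRatio so scaling is measured from here rather than from the very
    // start of the gesture.
    gesture.firstRatio = separation;
  }

  return {
    scale: gesture.firstRatio == null ?
        baseScale :
        baseScale * separation / gesture.firstRatio,
    deltaYaw: gesture.rotating ? deltaAngle : 0,
  };
}
```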
@elalish , I've also changed the function name I pinpointed yesterday 😅
Thanks for your pull request. It looks like this may be your first contribution to a Google open source project (if not, look below for help). Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).
:memo: Please visit https://cla.developers.google.com/ to sign.
Once you've signed (or fixed any issues), please reply here with `@googlebot I signed it!` and we'll verify it.
What to do if you already signed the CLA
Individual signers
- It's possible we don't have your GitHub username or you're using a different email address on your commit. Check your existing CLA data and verify that your email is set on your git commits.
Corporate signers
- Your company has a Point of Contact who decides which employees are authorized to participate. Ask your POC to be added to the group of authorized contributors. If you don't know who your Point of Contact is, direct the Google project maintainer to go/cla#troubleshoot (Public version).
- The email used to register you as an authorized contributor must be the email used for the Git commit. Check your existing CLA data and verify that your email is set on your git commits.
- The email used to register you as an authorized contributor must also be attached to your GitHub account.
ℹ️ Googlers: Go here for more info.
@googlebot I signed it!
@lucadalli I'd be interested to get your opinion on this. It only affects WebXR mode, which means just Android for now, so at least you can test it.
I don't use the WebXR viewer, nor have I ever worked with WebXR input events, so I'm assuming they have the same capabilities as touch events.
I can agree with the sentiment to make rotation and scaling distinct. In fact, I would be inclined to have 1 finger for rotation and 2 fingers exclusively for scaling, similarly to `camera-controls`.
However, when interacting with the AR model, I find myself hovering my fingers directly over the placement box and performing all interactions there. With mutually exclusive interactions, since a single touch on the placement box moves the model, to rotate the model I would be forced to put my finger outside the placement box. I find this to be awkward not only because of the uncomfortable finger position but also because of the additional cognitive load associated with recognizing where the placement box is so that I can press outside of it.
This brings me to the changes proposed in this PR. They make rotation and scaling somewhat distinct, without compromising the ability to use 2 fingers within the placement box to rotate the model. I do consider it an improved UX. Alternative/supplementary behavior comes to mind, which I'd like to share.
To allow for scale adjustments without impacting rotation we can make the interactions which start outside the placement box mutually exclusive. The user is already introduced to the pattern that the same interaction outside and inside the placement box produces different behaviors. A single touch in the placement box moves the model, while a single touch outside rotates it.
Similarly, 2 fingers inside the placement box both rotate and scale, while 2 fingers outside would only scale.
On top of all this, I think we should also introduce the ability to alternate between the 1-finger interaction and the 2-finger interaction without having to reset by lifting all fingers, like we just did with `camera-controls` (merged but unreleased, observable by interacting with any of the model-viewer elements in the docs).
In other words, for an interaction which started outside the placement box, lifting one finger after scaling the model would resume rotation and adding a finger during rotation would initiate scaling.
For an interaction which started inside the placement box, lifting one finger after scaling/rotating would resume model placement and adding a finger during placement would initiate scaling/rotating.
This leaves us with 4 possible behaviors. Behaviors in the same column can be alternated between by adding/removing the second finger. (For simplicity's sake, inside or outside of the placement box would be determined by the first touch. It is not recalculated until all fingers are lifted and a new interaction is started.) To help the user distinguish between the two sets of behaviors, we could highlight the placement box a different color (blue?) when the touches are inside the placement box.
| Touches | Inside Placement Box | Outside Placement Box |
| --- | --- | --- |
| 1 | move | rotate |
| 2 | rotate/scale (possibly with threshold) | scale |
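A rough sketch of how that dispatch could look; every name here is invented for illustration and is not part of model-viewer:

```ts
// Illustrative only: resolve which behavior a gesture gets, based on the
// current finger count and on where the *first* touch landed.
type Gesture = 'move'|'rotate'|'scale'|'rotate-scale';

function resolveGesture(
    fingerCount: 1|2, startedInsidePlacementBox: boolean): Gesture {
  if (startedInsidePlacementBox) {
    // Inside the box: one finger moves the model, two fingers rotate/scale.
    return fingerCount === 1 ? 'move' : 'rotate-scale';
  }
  // Outside the box: one finger rotates, two fingers only scale.
  return fingerCount === 1 ? 'rotate' : 'scale';
}

// Since inside/outside is fixed by the first touch, adding or lifting a finger
// mid-interaction only changes fingerCount, which gives the column-wise
// alternation described above: move <-> rotate/scale inside the box,
// rotate <-> scale outside it.
```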
I am not strongly against or in favor of this PR, nor do I feel strongly about the alternative I suggested. I'm sharing my thoughts to further the discussion.
I'm not sure whether this is intended behavior acting erratically, but it seems like a bug to me.
When testing the demo linked above by @FelipeR2U, I can sometimes double-tap diagonally opposite corners of the box and make the model do a 180. I can also reproduce this on `master`, but it's far less common there.
@lucadalli - about your first comment:
Thanks for the input, I really appreciate all the study you put into this and I mostly agree! I'm just concerned about how difficult it may be for a user to put both fingers inside the placement box, since it's not the whole model bounding box but only an extension of its base. Also, keeping in mind that the box's purpose is to detect an `XRTransientInputHitTestResult` (does the finger touch hit the model's base?) and that scale/rotate keep the model in place, maybe those actions should not be attached to it.
If the proposed table is in fact accepted, I'm a little divided about which action should occur inside/outside the box for the 2-touch case.
Finally, I'd also like to highlight the current UX of Scene Viewer: once the 2-touch action is decided, we can only switch to the other by starting another interaction, which in my opinion is a better experience for fine-tuning the model's rotation/size. I didn't make the PR like this because of the same comment:
> // Rotating, translating and scaling are mutually exclusive operations; only
> // one can happen at a time, but we can switch during a gesture.
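To make the contrast concrete, here's a rough sketch of that latching behavior; none of these names come from model-viewer or Scene Viewer, it's just an illustration:

```ts
// Illustrative sketch of the Scene Viewer-style UX described above: once a
// two-finger gesture resolves to 'rotate' or 'scale', it stays locked until
// every finger is lifted and a new interaction begins.
const ROTATE_LOCK_THRESHOLD = 0.05;  // radians, hypothetical value
const SCALE_LOCK_THRESHOLD = 10;     // change in separation (px), hypothetical

let lockedAction: 'rotate'|'scale'|null = null;

function onTwoFingerFrame(deltaAngle: number, deltaSeparation: number) {
  if (lockedAction == null) {
    // Decide once per interaction, based on which threshold is crossed first.
    if (Math.abs(deltaAngle) > ROTATE_LOCK_THRESHOLD) {
      lockedAction = 'rotate';
    } else if (Math.abs(deltaSeparation) > SCALE_LOCK_THRESHOLD) {
      lockedAction = 'scale';
    }
  }
  return lockedAction;  // apply only this action until all fingers are lifted
}

function onAllFingersUp() {
  lockedAction = null;  // the next interaction decides afresh
}
```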
Sorry I forgot about this one! I'm in the midst of switching to pointer events, and still open to this once that lands.
Sorry, I was organizing my GH account and closed some of my PRs, I'm gonna reopen it for further discussion