[Enhancement] - Trackpad gesture events for native pan/zoom on OSX.
Hi!
Background. On OSX, users are used to zooming and panning with two-finger trackpad gestures in "canvas"-style apps like Blender, Affinity Designer, Photoshop, etc. Having to hold down the mouse button to pan is a pain.
The good news: these can be captured in JavaScript using GestureEvent DOM events, which are supported by Chrome, Safari, and (I think) Firefox on that platform. I've implemented support for this in my own three.js-based DOOM map editor, so I'm somewhat familiar with it and happy to work on this if you like.
Implementation. One way to implement this would be to refactor the camera handling into multiple modes. Upon seeing the first GestureEvent, the mode would be permanently switched to trackpad mode. Platforms that don't support gesture events would stay in the default mode (unchanged behavior).
- Default mode: Drag events pan the camera unless a lasso/box select is active. Vertical mousewheel events zoom the camera. Horizontal mousewheel events are ignored.
- Trackpad mode: Mousewheel events pan the camera vertically or horizontally. Native gesture pan/zoom events will either pan or zoom. Probably ignore rotation events. Drag events are ignored unless the lasso/box select is active.
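The one-way mode switch described above could be sketched as a tiny state machine. This is purely illustrative (the names `CameraMode` and `nextMode` are hypothetical, not part of regl-scatterplot); in a real handler it would be driven from a `gesturestart` listener:

```typescript
// Hypothetical sketch of the proposed one-way mode switch.
type CameraMode = "default" | "trackpad";

// Once any GestureEvent is seen, latch into trackpad mode permanently;
// platforms that never fire gesture events stay in the default mode.
function nextMode(current: CameraMode, eventType: string): CameraMode {
  if (current === "trackpad") return "trackpad";
  return eventType.startsWith("gesture") ? "trackpad" : current;
}
```

The wheel and drag handlers would then branch on the current mode, e.g. `element.addEventListener("gesturestart", (e) => { mode = nextMode(mode, e.type); })`.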
I'm happy to do the work -- would you consider patches that implement this if the behavior doesn't affect other platforms?
Caveats.
- Mobile. I'm not sure how this would affect mobile platforms. I think Safari on iOS sends vertical/horizontal wheel events on one-finger tap-and-drag and GestureEvents on zooming, but I'd need to test. Does regl-scatterplot care about mobile?
- Lasso. I don't see this affecting the lasso or selection tools — am I wrong...?
- Changed defaults. I'm proposing changing the default to be platform dependent: Windows and Linux users would have to click-and-drag, but Mac users wouldn't. The docs would get a bit more complex.
- Downstream libraries. Do you have a sense for how this would affect library users? (I dearly want this interaction for jupyter-scatter, for example, which is why I'm starting here upstream.)
Please give me some direction and then I'll start prototyping if my executive function doesn't crap itself :-)
Related: #204
A different, more complex approach would be to instead make all behavior configurable downstream, via some general interface like (just spitballing):
```ts
enum ModifierKey { CTRL, SHIFT, META, ALT }
enum Action { PAN, ZOOM, ROTATE }

type MouseDrag = { kind: "mousedrag"; left: boolean; right: boolean; middle: boolean; modifiers?: ModifierKey[] };
type MouseWheel = { kind: "mousewheel"; vertical: boolean; horizontal: boolean; modifiers?: ModifierKey[] };
type GestureScale = { kind: "gesturescale"; speed?: number };
type GestureRotate = { kind: "gesturerotate" };
type InputEvent = MouseDrag | MouseWheel | GestureScale | GestureRotate;

type Binding = {
  event: InputEvent;
  action: Action;
};
type CameraBindings = Binding[];

const DefaultBindings: CameraBindings = [
  { event: { kind: "mousedrag", left: true, right: false, middle: false }, action: Action.PAN },
  { event: { kind: "mousewheel", vertical: true, horizontal: false }, action: Action.ZOOM },
];

const TrackpadBindings: CameraBindings = [
  { event: { kind: "mousewheel", vertical: true, horizontal: true }, action: Action.PAN },
  { event: { kind: "gesturescale" }, action: Action.ZOOM },
  { event: { kind: "gesturerotate" }, action: Action.ROTATE },
];

// and then some way of transitioning between them based on platform ...
```
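To make the spitballed bindings concrete, a resolver could map an incoming event to its bound action. This self-contained sketch re-declares simplified versions of the types above (everything here is hypothetical, not an existing regl-scatterplot API):

```typescript
// Hypothetical binding resolver: find the first binding whose event
// pattern matches the incoming input event, field by field.
enum Action { PAN, ZOOM, ROTATE }
type InputEvent = { kind: string; [flag: string]: unknown };
type Binding = { event: InputEvent; action: Action };

function resolveAction(bindings: Binding[], ev: InputEvent): Action | undefined {
  const match = bindings.find(
    (b) =>
      b.event.kind === ev.kind &&
      // Every flag the binding specifies must match the incoming event.
      Object.entries(b.event).every(([k, v]) => k === "kind" || ev[k] === v)
  );
  return match?.action;
}

const trackpadBindings: Binding[] = [
  { event: { kind: "mousewheel", vertical: true, horizontal: true }, action: Action.PAN },
  { event: { kind: "gesturescale" }, action: Action.ZOOM },
];
```

An unbound event (e.g. a plain drag in trackpad mode) resolves to `undefined` and would simply be ignored.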
The advantage is that users could make regl-scatterplot behave in, say, Fusion360 mode (pan w/ right-click or with shift held or ...) or Blender mode or whatever.
The disadvantage is that there's more complex input handling that's harder to test across platforms.
I'm not in favor of it, but am happy to prototype an implementation if we want that level of customizability.
Thanks for the detailed ticket! I'm happy to add gesture support, but we need to check carefully where this functionality should go. The current camera behavior is implemented in https://github.com/flekschas/dom-2d-camera, so I believe if you want to add a pinch-zoom gesture, that's where it should go.
One issue I'd love to explore is whether there's a standard approach to GestureEvent, which seems to be a non-standard WebKit-only event type according to https://developer.mozilla.org/en-US/docs/Web/API/GestureEvent. I'm not entirely against non-standard approaches as long as they are the de facto standard, but if it's WebKit only, then I'm not convinced we should go down that route. Without having done any in-depth research, it appears as if most gesture libraries use the standard TouchEvent (https://developer.mozilla.org/en-US/docs/Web/API/Touch_events). So I would favor a well-maintained, well-tested gesture library (that's ideally small) over a non-standard API.
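For the TouchEvent route, the core of pinch-zoom is just geometry on the positions of `touches[0]`/`touches[1]` across `touchmove` frames. A minimal sketch (hypothetical names, no DOM wiring):

```typescript
// Pure geometry for a TouchEvent-based pinch-zoom: compare the distance
// between two touch points in consecutive frames.
type Point = { x: number; y: number };

function touchDistance(a: Point, b: Point): number {
  return Math.hypot(a.x - b.x, a.y - b.y);
}

// Returns a scale factor: > 1 means pinch-out (zoom in),
// < 1 means pinch-in (zoom out).
function pinchScale(prev: [Point, Point], curr: [Point, Point]): number {
  return touchDistance(curr[0], curr[1]) / touchDistance(prev[0], prev[1]);
}
```

In a real handler, the points would come from `Touch.clientX`/`Touch.clientY` in a `touchmove` listener; a gesture library would add the bookkeeping (touch tracking, thresholds, inertia) around this.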
I like the idea of a fully configurable camera interaction binding, but I agree it seems hard to test.
Changed defaults
I'm on macOS and like how the library works today :) So I'm not in favor of changing the defaults. I would argue that the default interaction mode should remain scrolling/wheeling to zoom but if an application developer likes, they can activate trackpad/touch mode for their application.
Mobile
On mobile we could think about turning touch mode on by default. Since regl-scatterplot was built for large-scale data exploration, mobile isn't really the main target.
Downstream libraries
We can surely expose the interaction mode setting in jupyter-scatter. In fact, all options of regl-scatterplot are automatically exposed in jupyter-scatter, so there isn't really anything you need to do other than maybe exposing the setting in a more prominent way. :)
Lasso
I actually haven't tested the lasso on a touch device (only with the trackpad, where it works as intended). The main issue I could foresee is that the long-press indicator is too small and hidden by the finger. Other than that, the lasso should just work as is.