Module for WebAudio integration
Currently you need to do a bunch of per-frame work to update `AudioListener` and `PannerNode` positions (see the examples in https://github.com/immersive-web/webxr/pull/930).
Frameworks like three.js paper over this, so many end users never have to deal with it, but doing it in vanilla WebXR/WebAudio code is tricky. Furthermore, there's additional latency introduced by having to ferry this information from the XR frame update to the WebAudio rendering thread.
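For concreteness, here is a sketch of the per-frame bookkeeping involved today (plain JS; `poseToListenerCoords` and the wiring shown in comments are illustrative, not from any spec): each animation frame you read the viewer pose and re-derive the listener's position, forward, and up vectors from the transform's orientation quaternion, then write them into the `AudioListener`'s `AudioParam`s.

```javascript
// Rotate vector v = [x, y, z] by unit quaternion q = {x, y, z, w}.
function rotateVector(q, v) {
  const [vx, vy, vz] = v;
  // t = 2 * cross(q.xyz, v)
  const tx = 2 * (q.y * vz - q.z * vy);
  const ty = 2 * (q.z * vx - q.x * vz);
  const tz = 2 * (q.x * vy - q.y * vx);
  // v' = v + w * t + cross(q.xyz, t)
  return [
    vx + q.w * tx + (q.y * tz - q.z * ty),
    vy + q.w * ty + (q.z * tx - q.x * tz),
    vz + q.w * tz + (q.x * ty - q.y * tx),
  ];
}

// Derive AudioListener coordinates from an XRRigidTransform-like pose.
// The viewer looks down -Z in its local space; up is +Y.
function poseToListenerCoords(transform) {
  const { position: p, orientation: q } = transform;
  return {
    position: [p.x, p.y, p.z],
    forward: rotateVector(q, [0, 0, -1]),
    up: rotateVector(q, [0, 1, 0]),
  };
}

// Per-frame wiring (illustrative, main thread, inside the rAF callback):
//   const pose = frame.getViewerPose(refSpace);
//   const { position, forward, up } = poseToListenerCoords(pose.transform);
//   listener.positionX.value = position[0];  // ...and so on for Y/Z,
//   listener.forwardX.value = forward[0];    // forwardY/Z, upX/Y/Z
```

The same dance is needed for every `PannerNode` attached to a tracked object, which is the boilerplate this proposal would eliminate.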
A nice API to have would be a WebAudio integration where you can attach an `XRSpace` to the `AudioListener` and `PannerNode`s, perhaps with a scaling factor, and the audio rendering thread is then allowed to directly fetch position info at whatever cadence it would like. (This also means that under the hood the panning code can take into account velocity and other features to better predict the head position at render time.)
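To illustrate the pull model this would enable, here is a minimal plain-JS simulation (`PoseSource`, `renderQuantum`, and `inverseGain` are hypothetical stand-ins for exposition, not proposed API): instead of consuming a value pushed from the last XR frame callback, the audio renderer samples the freshest pose itself once per render quantum.

```javascript
// Illustrative stand-in for a pose the XR runtime keeps up to date.
class PoseSource {
  constructor() { this.pose = { x: 0, y: 0, z: 0 }; }
  update(pose) { this.pose = pose; }  // written by the XR frame loop
  latest() { return this.pose; }      // read by the audio renderer
}

// The audio rendering side pulls the latest pose at its own cadence,
// rather than waiting for a value ferried over from the main thread.
function renderQuantum(poseSource, gainForDistance) {
  const { x, y, z } = poseSource.latest();
  const distance = Math.hypot(x, y, z);
  return gainForDistance(distance);
}

const source = new PoseSource();
const inverseGain = (d) => 1 / (1 + d);

source.update({ x: 3, y: 0, z: 4 });   // XR frame loop moves the listener
renderQuantum(source, inverseGain);    // samples distance 5 at render time
```

In a real implementation the "latest pose" would come from the attached `XRSpace` on the audio thread, which is also where velocity-based prediction could slot in.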
cc @hoch @padenot @rtoy