Compositor Layers
Several people have expressed interest in the idea of supporting compositor layers.
Background (no pun intended): Compositor layers are super cool. Basically instead of sending 1 texture to the XR runtime that contains everything in the scene (skybox, scene geometry, UI elements), you can send things in several different pieces and make the poor runtime deal with compositing all of them into the final eye textures. There are many advantages:
- The compositor is way better at positioning and reprojecting stuff than you are. So if you send a cubemap layer with your skybox, or a 2D quad layer with an image, in theory it will feel much more "stable" than normal.
- Quad layers make better use of pixels than the warped eye textures, so you can get better clarity for UI/fonts.
- Might save rendering time by not rerendering static parts of the scene (though the runtime still has to composite them every frame, so I'm not sure how big the real savings are).
Also, submitting depth buffers to the runtime (for better reprojection and occlusion) is achieved via submitting an extra layer.
Here are some ideas for supporting these in LÖVR.
- `lovr.headset.newLayer` creates a compositor layer. A Layer has a type (2d, 3d, cubemap, equirect, cylinder). Not all types are supported by all runtimes, so there should probably be a way to check which types are supported. Also, not all headset drivers will even support layers. In that case, just one 3d layer will be supported.
- `Layer:getCanvas`. Layers are backed by swapchains and manage a Canvas object. You can retrieve the Canvas to render to it if you want to update the layer's contents.
- `lovr.headset.renderTo` will probably go away; instead there will be `lovr.headset.submit`, which submits layers to the runtime.
- `lovr.headset.getLayers` and `lovr.headset.setLayers`. These will control which layers get submitted to the compositor. Originally I wanted a stateless API like `lovr.headset.submit(...layers)`, but since this will probably get called in `boot.lua` there needs to be some way to get the list of layers to it.
- Various metadata accessors on Layers. Quad layers can have a pose in the world, an origin (head/floor), dimensions in meters, and an eye affinity (which eye it renders to; in theory you could make a stereo quad layer by rendering 2 quad layers in the same place with different eye affinities?). Cylinders can have angle/radius. All layers can have a tint for doing color fades (without rerendering their contents). Etc.
- You can change the environment blend mode that says how layers should blend with layers behind them.
- There will be a default layer. For stereo setups this will be a single 3d layer. lovr.draw draws to the default layer (or the window if the headset module isn't active).
- A note on cameras: Before rendering to a layer, its Canvas camera will magically get set up to use something sensible (orthographic for 2D quad, 6 directional views for cubemap, or viewer-synchronized eye views for a 3d layer). You're allowed to change these though. In particular, for 3D layers you're encouraged to update their camera matrices as late as possible, to get better prediction (on the gpu branch I think you'll be able to set canvas cameras after recording a canvas batch and it'll still apply to the batch).
- If a Layer is only going to get recorded once (i.e. a static image), you can set a special flag that enables some optimizations. It would be convenient to create layers from images/textures directly. (A usage sketch follows this list.)
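To make this more concrete, here's a rough sketch of how these pieces could fit together. None of this exists yet: `newLayer`, `getCanvas`, `getLayers`, and `setLayers` come from the list above, while `setPose`, `setDimensions`, and the `static` option are guesses at what the metadata accessors and flags might look like:

```lua
function lovr.load()
  -- A 2D quad layer for UI: a 1m x 0.5m panel floating in front of the player
  -- (newLayer, setPose, and setDimensions are all hypothetical names)
  ui = lovr.headset.newLayer('2d')
  ui:setPose(0, 1.5, -2)
  ui:setDimensions(1, .5)

  -- A static cubemap layer for the skybox, rendered exactly once
  sky = lovr.headset.newLayer('cubemap', { static = true })
  -- ...render the skybox into sky:getCanvas() here...

  -- Submit the skybox behind the default 3d layer, with the UI quad on top
  local layers = lovr.headset.getLayers()
  table.insert(layers, 1, sky)
  table.insert(layers, ui)
  lovr.headset.setLayers(layers)
end

function lovr.update(dt)
  -- Only re-render the quad when its contents change; the compositor keeps
  -- reprojecting the last submitted image on its own
  if uiNeedsRedraw then
    local canvas = ui:getCanvas()
    -- ...draw the UI into canvas...
    uiNeedsRedraw = false
  end
end
```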
In the gpu branch I'm going to split lovr.headset.renderTo into lovr.headset.getCanvas and lovr.headset.submit to prepare for this.
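A hedged sketch of what the frame loop could look like after that split, assuming the current `Canvas:renderTo` API sticks around (only `getCanvas` and `submit` come from the plan above, and their exact shapes may change):

```lua
function lovr.run()
  lovr.load()
  return function()
    lovr.event.pump()
    for name, a in lovr.event.poll() do
      if name == 'quit' then return a or 0 end
    end
    if lovr.update then lovr.update(lovr.timer.step()) end
    local canvas = lovr.headset.getCanvas() -- the default layer's Canvas
    canvas:renderTo(lovr.draw)              -- replaces lovr.headset.renderTo(lovr.draw)
    lovr.headset.submit()                   -- hand the layer stack to the compositor
  end
end
```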
Some thoughts:
- For OpenXR, the environment blend mode only affects how the entire stack of layers blends with the background. That is, logically the compositor first collapses the entire stack of layers into a single layer, then uses the env blend mode to determine how to blend that result. To control how layers are blended with each other you use `XR_COMPOSITION_LAYER_BLEND_TEXTURE_SOURCE_ALPHA_BIT`, given in the `<layer-struct>::layerFlags` flags.
- At least on OpenXR you will need to create a depth swapchain in order to submit depth layers; also, depth layers are submitted as an attachment to the projection layer, not as their own layer (super minor semantic difference 😃).
- OpenXR has the concept of a static swapchain that can only be acquired once, which allows at least Monado to create only one image for the swapchain. Great for quad layers/skyboxes.
- At least for my personal use case it would be good to be able to skip the projection layer altogether and only submit quad/cylinder layers. Say, for instance, a UI application that draws on top of other apps/games (see the sketch after this list).
- I have been thinking a bit about adding an extension to Monado that lets you create `XrSwapchain`s from GL texture(s)/VkImage(s).
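For the overlay use case above, assuming the proposed API, an app might never create a projection layer at all. A minimal sketch (all names hypothetical, as before):

```lua
-- Overlay-style app: submit only a quad layer, no 3d projection layer
function lovr.load()
  menu = lovr.headset.newLayer('2d')
  menu:setPose(0, 1.2, -1)
  lovr.headset.setLayers({ menu }) -- the default 3d layer is dropped entirely
end
```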
A layer is going to be associated with 1 or more swapchains to support depth layers. For simplicity, it would be nice to just always use an OpenXR-provided depth swapchain, but it may be preferable to make it conditional, in case A) the runtime doesn't support depth swapchains, or B) the runtime doesn't support depth submission and better performance can be achieved using a transient depth attachment.
You will be able to have control over the full layer stack submitted to the runtime, e.g. lovr.headset.setLayers({ skybox, quad, quad2 }). May want to have a way to skip creation of the default layer in conf.lua to avoid startup overhead, but I'd say it's optional.
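A sketch of how skipping the default layer could look; the `t.headset.layer` conf flag is entirely made up:

```lua
-- conf.lua: opt out of the default 3d projection layer (hypothetical flag)
function lovr.conf(t)
  t.headset.layer = false
end

-- main.lua: build the full layer stack by hand, back to front
function lovr.load()
  skybox = lovr.headset.newLayer('cubemap', { static = true })
  quad = lovr.headset.newLayer('2d')
  quad2 = lovr.headset.newLayer('2d')
  lovr.headset.setLayers({ skybox, quad, quad2 })
end
```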
That API looks nice, and having a way to skip the default layer or even do the init yourself at startup would be great.
So on some mobile devices, one optimisation you can do is to never write out the depth data, which lets a tiling GPU save that bandwidth. Also, I'm not sure that all runtimes expose depth formats; you should probably look up what the Quest reports.