Headset View Types
Right now headset rendering is pretty much locked to 2 views. This is starting to become limiting:
- Sometimes you want 1 view (desktop window, phone AR)
- Sometimes you want 3 views (stereo + mono spectator)
- Sometimes you want 4 views (foveated/varjo stuff)
- Sometimes you want 6 views (CAVE)
- Sometimes you want N views (looking glass?)
OpenXR has the concept of view configurations:
- You can ask the runtime which view types it supports. It gives you back an ordered list (high priority/preference -> low)
- When you start a session, you can say which view type you're gonna use.
- Surprise! Microsoft extension! If you enable `XR_MSFT_secondary_view_configuration` you can specify an optional list of secondary view configurations, in addition to the primary one. This is used for the mono spectator view. Seems neat.
LÖVR could:
- Add a `ViewType` type.
- Expose the list of supported view types, `lovr.headset.getSupportedViewTypes`/`lovr.headset.getFeatures().viewtypes`.
- Allow you to pick one when starting the headset session (`lovr.headset.init(viewtype)` for now). If it's not supported, you get an error. If it's `nil`, it will use the runtime's favorite view configuration.
- `lovr.headset.getViewType` so you can figure out which view type's being used.
- Allow conf.lua to pick a view type that boot.lua will pass to `lovr.headset.init`, e.g. `t.headset.viewtype = 'mono'`.
- Something something secondary view types.
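To make the proposal concrete, here is a minimal sketch of how it could look from a project's point of view. Everything `viewtype`-related is the proposed API from the list above, not something LÖVR exposes today, so treat the names as placeholders:

```lua
-- conf.lua: request a view type for boot.lua to pass to lovr.headset.init
-- (t.headset.viewtype is the proposed option; nil = runtime's preferred configuration)
function lovr.conf(t)
  t.headset.viewtype = 'mono'
end

-- main.lua: inspect what the runtime supports and which view type is active
-- (getSupportedViewTypes/getViewType are the proposed functions, not existing API)
function lovr.load()
  print('supported: ' .. table.concat(lovr.headset.getSupportedViewTypes(), ', '))
  print('active: ' .. lovr.headset.getViewType())
end
```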
This could finally provide a way to do mono desktop windows. The simulator could support both mono and stereo view types, and you can request whichever one you prefer.
CAVE systems are far more complex than just 6 views: not only can they have stereo views (12 views), they can also have more than one "screen" per axis: http://www.visbox.com/products/cave/viscube-c4-t3x/
It's probably better to think more about how the content is handled rather than the view type. Take the common VR case where the user is "in" the content: HMDs, CAVE, fish-tank, and maybe phone AR (magic window). Versus the case where the user looks at the content through a smaller window onto it, say looking at something being placed on a table: phone AR (magic window), Looking Glass, and maybe fish-tank.
Can you elaborate on the advantages of using "content type" instead of "view type"?
Maybe choosing between "immersive" (VR) and "portal" (magic window) is nice because it can map onto multiple things like view configuration, blend mode, and actions?
Apologies for the delay in replying.
It allows you to reason about things like how you treat the origin of the world without having to think about the actual views. Take for instance magic-window phone AR or TiltFive, where the content is on a table and the user always looks towards the origin of the space, which sits at the center of the world.
Or an immersive (VR) one where you are often standing at the origin and looking around. But you could deffo power that experience with, say, a single-screen CAVE system or an HMD.
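As a purely illustrative sketch (none of these names exist in LÖVR or OpenXR), the idea would be that the app declares a single content type and the view-level details get derived from it:

```lua
-- Hypothetical mapping from a high-level "content type" to concrete behaviour.
-- All names and fields here are made up for illustration only.
local contentTypes = {
  -- user is inside the content: HMD, CAVE, fish-tank VR
  immersive = { originAtUser = true,  description = 'user stands at the origin and looks around' },
  -- user looks into the content from outside: magic-window phone AR, TiltFive, Looking Glass
  portal    = { originAtUser = false, description = 'content sits at the origin, e.g. on a table' }
}

local function placeOrigin(contentType)
  local profile = assert(contentTypes[contentType], 'unknown content type')
  print(contentType .. ': ' .. profile.description)
  -- a backend would pick a matching view configuration (HMD stereo, CAVE walls,
  -- a single mono view, ...) without the app reasoning about individual views
end

placeOrigin('immersive')
placeOrigin('portal')
```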
OpenXR doesn't (for now) have a way to tell the app there is a screen you can display 2D content on. Say on a phone magic window you might want to have pixel-perfect content displayed on top of the view.
Closing this for now.
- After 2 years there hasn't been a situation where someone has needed this.
- Allowing both mono/stereo for the simulator conflates "view configuration" and "mirror window mode". Someone may mark their app as "mono" to get the mirror window to have 1 view, but then fail to run on a stereo headset with OpenXR.
- It's not completely clear that view configurations should be exposed all the way up to Lua. As discussed here, maybe it ends up being a higher level concept.