Unify RenderLayers and TargetCamera (Part 1: RenderGroups) [ADOPT ME]
Objective
- Address various ambiguities and issues with the separation between `RenderLayers` and `TargetCamera`.
- Closes #12468.
Solution
See changelog. This is Part 1 of a 2-part rework. Part 2 will update UI to remove `TargetCamera` and use affiliated cameras instead.
Part 2 planned changelog:
- Remove `TargetCamera`.
- Use `PropagateRenderGroups::Camera` by default on UI cameras. This will set the camera affiliation in `InheritedRenderGroups` equal to the UI cameras (unless an entity has a `RenderGroups` component with a manually-specified camera affiliation, which will take precedence).
Part 2 planned migration:
- Replace `TargetCamera` components that you added manually to cameras with `PropagateRenderGroups::Camera` components. If those cameras need to see all 'default render layer' entities, then also add a `CameraView::default()` component.
Changelog
- Introduced a unified `RenderGroups` component that merges the functionality of `RenderLayers` and `TargetCamera` (a usage sketch follows after this list).
  - Includes `RenderLayers` (reworked) and an optional 'affiliated camera'.
- Reworked `RenderLayers` to store a growable bitmask, instead of being fixed to 32 bits. It can now store up to 64 render layers on the stack (including the default render layer), and then will allocate for more bits.
- Replaced `Layer` with `RenderLayer` as a user-facing abstraction for interacting with `RenderLayers`. A `RenderLayer` is a bit index, equivalent to the existing `Layer` type.
  - Added `DEFAULT_RENDER_LAYER = RenderLayer(0)`. By default, all entities and cameras without `RenderGroups` or `CameraView` components will be in the default layer.
- Added `CameraView` as a camera-specific component for controlling which `RenderLayers` are visible to the camera. Cameras automatically see any entity with a `RenderGroups` component that has a camera affiliation equal to the given camera.
- Added a `PropagateRenderGroups` component for controlling how `RenderGroups` are propagated down hierarchies.
  - Added an `InheritedRenderGroups` component that merges `RenderGroups` components with propagated `RenderGroups`. This component is managed automatically in `PropagateRenderGroupsSet`. It is used internally for rendering, and can be queried by users to e.g. debug visibility issues.
- Added `PropagateRenderGroupsSet`, which runs in the `VisibilityPropagate` set.
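For a concrete picture of how the components listed above might fit together, here is a minimal usage sketch. It only uses constructors that appear later in this thread (`CameraView::default().add(..)`, `RenderGroups::from(RenderLayers(..))`); the import path and exact signatures are assumptions and may differ from the actual branch.

```rust
use bevy::prelude::*;
// Assumed re-export path for the new types; the actual module layout may differ.
use bevy::render::view::{CameraView, RenderGroups, RenderLayers};

fn setup(mut commands: Commands) {
    // Cameras describe which layers they can see with `CameraView`.
    // This camera sees render layer 1 in addition to the default layer 0.
    commands.spawn((
        Camera3dBundle::default(),
        CameraView::default().add(1).clone(),
    ));

    // Non-camera entities describe where they live with `RenderGroups`.
    // This entity is placed on render layer 1 only.
    commands.spawn((
        SpatialBundle::default(),
        RenderGroups::from(RenderLayers(1)),
    ));
}
```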
Migration Guide
- Replace `Layer` with `RenderLayer`.
- Replace `RenderLayers` components with `RenderGroups` components on non-camera entities.
- Replace `RenderLayers` components with `CameraView` components on camera entities.
TODOs
- [x] Complete integration.
  - [x] This PBR shader uses render layers internally (`GpuDirectionalLight` and `ViewUniform` and `DirectionalLight` and `View`). Now that it is a growable bitset, this doesn't work unless we set an upper cap on bits and pass in an array of ints for the flags.
  - [x] `prepare_lights` in `bevy_pbr` is cloning `ExtractedDirectionalLight`, which contains a `RenderGroups`. This makes it potentially allocating due to the growable bitset. There is also a clone when extracting from the world (also for `ViewUniform`).
  - [x] Check the `pixel_grid_snap` and `render_to_texture` examples.
- [ ] Task list: #12588
Follow-up
- Part 2: Remove `TargetCamera` and use affiliated cameras for UI instead.
- Refactor gizmos/`GizmoConfig` to support displaying different gizmos in different cameras. This may require a deeper rethink of how gizmos are designed, since they are currently special-cased.
I want to point out that there are a couple of other options here, not that they are necessarily better.
One option is to store the 'inherited' status as a boolean property within the `RenderGroups` component, rather than making it a separate marker component. I don't know whether this is better or not; it really depends on how much overhead there is for storing separate marker components.
You could take that further and have the property be an enum:
```rust
enum RenderGroupInheritance {
    /// This render group was calculated automatically.
    Implicit = 0,
    /// This render group was explicitly set on the entity, but should not cascade to descendants.
    Explicit,
    /// This render group has been explicitly set on the entity, and should cascade to descendants.
    ExplicitPropagating,
}
```
This means reducing the total number of components involved on each entity. Whether that is an optimization or de-optimization, I don't know.
It also raises the question: if an explicit propagating entity has an explicit non-propagating child, what should the render groups for its grandchild be?
Working on the implementation for propagating render groups now. I think I can manage the following rules without needing to change the time complexity of existing code:
- Any entity with `PropagateRenderGroups` will propagate a `RenderGroups` to its children (derived from the `PropagateRenderGroups` variant).
- If an entity with `PropagateRenderGroups` is in-tree, it will block up-tree propagation and 'take over'. Entities can only inherit one propagated `RenderGroups`, unless they have `PropagateRenderGroups`, in which case they can't inherit. (A simplified sketch of these rules follows below.)
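The following is a simplified, self-contained sketch of those two rules. It deliberately uses plain structs rather than Bevy components, and all names (`Node`, `Groups`, `propagate`) are illustrative, not the PR's actual types.

```rust
// Stand-in for a RenderGroups value.
#[derive(Clone, Debug, PartialEq)]
struct Groups(Vec<u32>);

struct Node {
    /// Some(..) if this node has a `PropagateRenderGroups`-like component.
    propagates: Option<Groups>,
    /// The computed `InheritedRenderGroups`-like value.
    inherited: Option<Groups>,
    children: Vec<Node>,
}

fn propagate(node: &mut Node, from_parent: Option<&Groups>) {
    let passed_down = if let Some(own) = &node.propagates {
        // A propagator never inherits; it blocks up-tree propagation and
        // 'takes over' for everything below it.
        node.inherited = None;
        Some(own.clone())
    } else {
        // A non-propagator inherits at most one propagated value and passes
        // it through unchanged.
        node.inherited = from_parent.cloned();
        from_parent.cloned()
    };
    for child in &mut node.children {
        propagate(child, passed_down.as_ref());
    }
}
```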
Allowing PropagateRenderGroups to merge together as you traverse the tree would require more time-complex hierarchy traversal due to a 'dependency diamond' issue.
It is also possible, although I think probably better left for a follow-up PR, to allow a component `RenderGroupsInheritancePolicy` with filtering rules (sketched below):

- `Auto`: Doesn't do anything extra.
- `Skip`: Don't merge inherited groups, but pass them to children.
- `Block`: Don't merge inherited groups, and prevent propagation to children.
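A sketch of what that component could look like, mirroring the enum style used earlier in this comment; this is purely hypothetical since, as noted below, it was never integrated.

```rust
/// Hypothetical inheritance-filtering component; never implemented in this PR.
enum RenderGroupsInheritancePolicy {
    /// Doesn't do anything extra.
    Auto,
    /// Don't merge inherited groups into this entity, but pass them on to children.
    Skip,
    /// Don't merge inherited groups, and prevent propagation to children.
    Block,
}
```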
EDIT: Given the complexity of the propagation algorithm for this PR, I doubt RenderGroupsInheritancePolicy can be easily integrated.
Something that would be helpful, I think, is a high-level description of the propagation algorithm. For example, the stylesheet propagator that I wrote a few months ago could be described by the following summary: "Starting with a query of all UI entities that can be styled, start by identifying the roots and then recursively visit all descendants accessible via the query. Each visited node calculates the composition of the inherited style with the style defined on that node, if any. Mutations to the state of each node are applied differentially to minimize change detection signals."
> Something that would be helpful, I think, is a high-level description of the propagation algorithm.

I wrote a description and notes at the top of the `propagate_render_groups.rs` file.
That looks really good.
Part of the motivation for wanting this description is to be able to make an argument as to why this approach is better than alternatives (such as an exclusive system with world access, or a giant query with a bunch of optional components which are then accessed via entity id instead of sequential scanning), both of which are obvious patterns for hierarchical traversals.
I commented on the issue here https://github.com/bevyengine/bevy/issues/12468#issuecomment-1997934690, but I think the preference is to remove `RenderLayers` from the shader code entirely and compute light visibility on the CPU as a property of the view. We could include layers as a dynamic array on the view itself, but IIRC there were problems including the layers on the light struct as they're already nested inside another array.
OK, the check has been moved to the CPU code. I added a `skip` field to the `GpuDirectionalLight` struct, which allows the shader to skip directional lights that aren't visible to a particular view. It's kind of a hack, but it's hard to see a better option that doesn't involve a major refactor of this dense code.
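To make the shape of that workaround concrete, here is an illustrative sketch. `GpuDirectionalLightSketch` and the slice-based layer representation are simplified stand-ins, not the actual `bevy_pbr` types or the PR's code; only the general idea (a CPU-side visibility check writing a skip flag that the shader reads) comes from the comment above.

```rust
/// Simplified stand-in for the GPU-side light data; the real struct also
/// carries color, direction, shadow parameters, etc.
#[repr(C)]
#[derive(Clone, Copy)]
struct GpuDirectionalLightSketch {
    skip: u32,
}

/// CPU-side visibility check performed while preparing lights for a view.
/// The shader's light loop then does the equivalent of
/// `if (light.skip != 0u) { continue; }`.
fn prepare_light_for_view(
    view_layers: &[u32],  // render layers visible to this view
    light_layers: &[u32], // render layers the light belongs to
    gpu_light: &mut GpuDirectionalLightSketch,
) {
    let visible = light_layers.iter().any(|layer| view_layers.contains(layer));
    gpu_light.skip = if visible { 0 } else { 1 };
}
```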
I think, at best, we should warn.
Added a warning at 1024 layers, and a panic failsafe if you exceed 1 million, which would be 24 MB per `RenderLayers` struct.
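A minimal sketch of that failsafe, with illustrative names and the thresholds mentioned above; the PR's actual internals differ (for example, it would use the `warn!` macro rather than `eprintln!`).

```rust
/// Illustrative thresholds from the comment above.
const LAYER_WARN_THRESHOLD: usize = 1024;
const LAYER_PANIC_THRESHOLD: usize = 1_000_000;

/// Guard run when a layer index is added to a growable `RenderLayers` bitmask.
fn check_layer_index(layer: usize) {
    assert!(
        layer < LAYER_PANIC_THRESHOLD,
        "render layer {layer} exceeds the hard cap of {LAYER_PANIC_THRESHOLD}"
    );
    if layer >= LAYER_WARN_THRESHOLD {
        eprintln!(
            "warning: render layer {layer} is very large; the bitmask must \
             allocate storage for every bit up to it"
        );
    }
}
```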
@alice-i-cecile @UkoeHB I tried to make an example, but I couldn't get it to work - possibly I am doing something wrong. Here's what I did:
- Start with the example `update_gltf_scene`, which spawns two copies of the flight helmet model.
- Add `RenderGroups::default().add(1).clone()` to the first model spawn. (I'm not sure this is the best way to create a `RenderGroups` with a given layer.)
- Add `RenderGroups::default().add(2).clone()` to the second model spawn.
- Finally, add `RenderGroups::default().add(1).clone()` to the camera spawn.
At this point I would have expected the second model not to be visible. However, both models were visible.
> At this point I would have expected the second model not to be visible. However, both models were visible.

A default `RenderGroups` starts with the default render layer. Use `RenderGroups::from()` instead. Also, the camera uses `CameraView`, not `RenderGroups`.
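A small sketch of the distinction being made here, using the constructors from this thread; the import path and exact semantics are assumptions based on the explanation above.

```rust
// Assumed re-export path for the new types.
use bevy::render::view::{RenderGroups, RenderLayers};

fn layer_construction_examples() {
    // `default()` starts on the default render layer 0, so adding layer 2
    // gives the set {0, 2}; any camera that can see the default layer will
    // still render this entity.
    let _on_default_and_2 = RenderGroups::default().add(2).clone();

    // `from(..)` starts from exactly the layers passed in, so this entity
    // lives on layer 2 only.
    let _only_on_2 = RenderGroups::from(RenderLayers(2));
}
```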
If I try `RenderGroups::from(RenderLayers(2))` I get the following error:
```
error[E0391]: cycle detected when getting the resolver for lowering
  |
  = note: ...which requires normalizing `bevy_render::view::visibility::render_groups::RenderLayers::iter_layers::{opaque#0}`...
  = note: ...which requires looking up limits...
  = note: ...which requires getting the crate HIR...
  = note: ...which again requires getting the resolver for lowering, completing the cycle
  = note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information
```
I changed the camera to `CameraView`, but I still see both models.
> I changed the camera to `CameraView`, but I still see both models.

Can you share your code? Maybe in a gist. If there is a bug then I will fix it ASAP.
Start with the existing `update_gltf_scene` example (which is relatively short), and change the `setup` function to the following:
```rust
fn setup(mut commands: Commands, asset_server: Res<AssetServer>) {
    commands.spawn((
        DirectionalLightBundle {
            transform: Transform::from_xyz(4.0, 25.0, 8.0).looking_at(Vec3::ZERO, Vec3::Y),
            directional_light: DirectionalLight {
                shadows_enabled: true,
                ..default()
            },
            ..default()
        },
        RenderGroups::default().add(1).clone(),
    ));
    commands.spawn((
        Camera3dBundle {
            transform: Transform::from_xyz(-0.5, 0.9, 1.5)
                .looking_at(Vec3::new(-0.5, 0.3, 0.0), Vec3::Y),
            ..default()
        },
        EnvironmentMapLight {
            diffuse_map: asset_server.load("environment_maps/pisa_diffuse_rgb9e5_zstd.ktx2"),
            specular_map: asset_server.load("environment_maps/pisa_specular_rgb9e5_zstd.ktx2"),
            intensity: 1500.0,
        },
        CameraView::default().add(1).clone(),
    ));
    // Spawn the scene as a child of this entity at the given transform
    commands.spawn((
        SceneBundle {
            transform: Transform::from_xyz(-1.0, 0.0, 0.0),
            scene: asset_server.load("models/FlightHelmet/FlightHelmet.gltf#Scene0"),
            ..default()
        },
        RenderGroups::default().add(1).clone(),
    ));
    // Spawn a second scene, and add a tag component to be able to target it later
    commands.spawn((
        SceneBundle {
            scene: asset_server.load("models/FlightHelmet/FlightHelmet.gltf#Scene0"),
            ..default()
        },
        MovedScene,
        RenderGroups::default().add(2).clone(),
    ));
}
```
The generated `examples/README.md` is out of sync with the example metadata in `Cargo.toml` or the example readme template. Please run `cargo run -p build-templated-pages -- update examples` to update it, and commit the file change.
You added a new example but didn't add metadata for it. Please update the root `Cargo.toml` file.
Changing this to ADOPT ME status since I don't have time or motivation to continue working on it.
Opened a PR to port removing the layer limit (which seemed less controversial): https://github.com/bevyengine/bevy/pull/13317
@tychedelia The docs for `RenderLayers` still contain a mention of `TOTAL_LAYERS`, which doesn't exist.
@alice-i-cecile Sorry for the annoying ping, but may I adopt this? I am not sure if this is still ongoing. I literally placed my RTTs a few X transforms away from my game, and the ugly code is driving me insane.
@Sirmadeira this PR originally died because one of the rendering SMEs blocked it. You'll want to get consensus/full approval from the rendering guys before putting any work in.
Yep, I'd like to see this work in some form, but I definitely recommend building consensus first.
@UkoeHB If you don't mind me asking, what was the reason for the block?

> @UkoeHB If you don't mind me asking, what was the reason for the block?

Something about it being too complicated or 'we don't really need this'. If you can present a compelling case then it might get enough support from the SMEs.
Originally, there were three different users pushing for this, each with their own use case:
- @tychedelia wanted to expand the number of render layers; this part got merged.
- @aevyrie wanted to unify `TargetCamera` and `RenderLayers` for their own work (I believe this was related to `mod_picking`; I don't know how this intersects with the `mod_picking` upstreaming). This part got dropped.
- What I wanted was an option for inheritable render layers: I wanted to be able to load a glTF model or scene, assign render layers to the root entity of that scene, and have that layer config apply to all of the entities and meshes in the loaded scene. Currently, the way you have to do this is to listen for the scene to finish spawning, then traverse the scene hierarchy and add the render layers component to each descendant (a sketch of that workaround follows below). This part of the work also got dropped.
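For context, here is a sketch of that workaround as it looks today with the existing `RenderLayers` API. The `SceneOnLayer1` marker and the run-every-frame structure are hypothetical and just for illustration; a real version would react to the scene-ready event and run once.

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

/// Hypothetical marker: everything spawned under this entity should end up on
/// render layer 1.
#[derive(Component)]
struct SceneOnLayer1;

/// Walk the hierarchy below each marked root and insert `RenderLayers` on
/// every descendant. For simplicity this sketch runs every frame, so it also
/// catches entities the scene spawner adds a few frames after the root.
fn apply_layer_to_scene(
    mut commands: Commands,
    roots: Query<Entity, With<SceneOnLayer1>>,
    children: Query<&Children>,
) {
    for root in &roots {
        let mut stack = vec![root];
        while let Some(entity) = stack.pop() {
            commands.entity(entity).insert(RenderLayers::layer(1));
            if let Ok(child_list) = children.get(entity) {
                stack.extend_from_slice(child_list);
            }
        }
    }
}
```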