#12502 Remove limit on RenderLayers.
Objective
Remove the limit on `RenderLayers` by using a growable mask backed by `SmallVec`.
Changes adopted from @UkoeHB's initial PR here (https://github.com/bevyengine/bevy/pull/12502), which contained additional changes related to propagating render layers.
Solution
The main thing needed to unblock this is removing `RenderLayers` from our shader code. This primarily affects `DirectionalLight`. We are now computing a `skip` field on the CPU that is then used to skip the light in the shader.
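A minimal sketch of the idea, with hypothetical names (`GpuDirectionalLight`, `light_skip`) standing in for the real uniform plumbing:

```rust
use bevy::render::view::RenderLayers;

// Hypothetical GPU-side layout: only the `skip` word matters here.
#[repr(C)]
struct GpuDirectionalLight {
    // ... color, direction, shadow parameters ...
    /// 1 if the shader should skip this light for the current view.
    skip: u32,
}

// The layer test happens once per light and view on the CPU; the unbounded
// layer mask itself never crosses over to the GPU.
fn light_skip(view_layers: &RenderLayers, light_layers: &RenderLayers) -> u32 {
    if view_layers.intersects(light_layers) {
        0
    } else {
        1
    }
}
```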
Testing
Checked a variety of examples and did a quick benchmark on many_cubes. There were some existing problems identified during the development of the original PR (see: https://discord.com/channels/691052431525675048/1220477928605749340/1221190112939872347). This PR shouldn't change any existing behavior besides removing the layer limit (apart from the migration note below about all layers no longer being possible).
Changelog
Removed the limit on RenderLayers by using a growable bitset that only allocates when layers greater than 64 are used.
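A minimal sketch of the shape this takes, assuming one inline `u64` block (the real type carries more methods and derives):

```rust
use smallvec::SmallVec;

// One u64 block lives inline, so layers 0..=63 never touch the heap.
#[derive(Clone, Default, PartialEq, Eq)]
pub struct RenderLayers(SmallVec<[u64; 1]>);

impl RenderLayers {
    pub fn with(mut self, layer: usize) -> Self {
        let (block, bit) = (layer / 64, layer % 64);
        // Only a layer >= 64 forces the SmallVec to spill onto the heap.
        if block >= self.0.len() {
            self.0.resize(block + 1, 0);
        }
        self.0[block] |= 1 << bit;
        self
    }
}
```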
Migration Guide
`RenderLayers::all()` no longer exists. Entities expecting to be visible on all layers, e.g. lights, should compute the active layers that are in use.
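For example, a light that should reach every camera could fold together the layers the app actually uses (a sketch; helper names like `none` and `with` follow the API in this PR but may drift):

```rust
use bevy::render::view::RenderLayers;

// `active_layers` is whatever set of layers the app has in use.
fn layers_for_light(active_layers: impl IntoIterator<Item = usize>) -> RenderLayers {
    active_layers
        .into_iter()
        .fold(RenderLayers::none(), |layers, layer| layers.with(layer))
}
```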
@james7132 I think I know how to use tracy; would you mind pointing me in the direction of what I should benchmark, or any documentation for this relative to rendering PRs?
I definitely appreciate concerns about performance and do not want to regress the base case here. I'm happy to do some more testing as best I can.
> This is also expected to be on the majority of entities that should be rendered, so these costs can easily scale with scene size.
I'm not sure I understand this comment. The baseline I'd expect is that neither the view nor the entity has a render layer, in which case we're just checking that the default equals the default. In fact, because we're providing a `Default` impl for `&RenderLayers` that points to a static, we could theoretically just start doing a `ptr::eq` for the common case here where the view and the entity point to the same default layer.
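A self-contained sketch of that fast path, with a stand-in type rather than Bevy's actual code (`LazyLock` dodges the constness wrinkle a real static would raise):

```rust
use std::{ptr, sync::LazyLock};

// Stand-in for the real SmallVec-backed type, just to illustrate the idea.
pub struct RenderLayers(Vec<u64>);

// The shared default that layerless views and entities would borrow.
static DEFAULT_LAYERS: LazyLock<RenderLayers> = LazyLock::new(|| RenderLayers(vec![1]));

impl<'a> Default for &'a RenderLayers {
    fn default() -> Self {
        &DEFAULT_LAYERS
    }
}

// The common case (both sides point at the shared default) short-circuits
// on pointer identity before any bit-level work.
fn intersects(a: &RenderLayers, b: &RenderLayers) -> bool {
    ptr::eq(a, b) || a.0.iter().zip(b.0.iter()).any(|(x, y)| x & y != 0)
}
```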
> My biggest concern is that once there's enough layers to spill onto the heap, the performance of visibility calculation will tank as the cache hit rate drops.
That's a fair concern that I would also like to understand better, but it's also worth pointing out that the status quo is that the user simply cannot go beyond ~~64~~ 32 layers. Assuming the non-heap scenario doesn't regress too much, it's strictly better to be able to do something than not to do it. Plus, there are also scenarios, like the one I'm specifically trying to unblock for myself, with many layers and very few entities.
> Expanding the fixed size to u64 or even u128 is likely less controversial, and is likely already more than sufficient.
In my use case I need to spawn a camera for every node in a node graph which is necessarily unbounded. There are absolutely other ways that I could work around the limit, but they are very complex. I recognize this is more niche and so don't want to ram through changes solely on my behalf, but it's a pretty hard blocker for me personally.
I'll try to collect some more evidence with regards to the impact here.
- Micro benchmark on `intersects` shows a 70% regression. This isn't too surprising since we are doing an `iter` versus just a bitwise op.
- `check_visibility` shows a ~5% regression.
- Frames show no regression.

Tested on my M2 MBP.
For many_cubes, I attached an explicit layer to the camera and spawned each cube in one of 3 possible layers.
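Not the actual diff, just the shape of that tweak (bundle names are stand-ins):

```rust
use bevy::prelude::*;
use bevy::render::view::RenderLayers;

fn setup(mut commands: Commands) {
    // An explicit layer on the camera...
    commands.spawn((Camera3dBundle::default(), RenderLayers::layer(0)));
    // ...and each cube assigned to one of three layers by index.
    for i in 0..160_000usize {
        commands.spawn((
            SpatialBundle::default(), // stand-in for the cube's PbrBundle
            RenderLayers::layer(i % 3),
        ));
    }
}
```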
i see a 2.5% regression in `check_visibility` without adding any layers, from 428ns to 440ns. adding `if self.0.as_ptr() == other.0.as_ptr() { return true }` to the start of the intersect test reduces that to 1.5%.
i don't think 12 nanoseconds (or 7 nanoseconds with the fast path) per frame for one thread for 160k entities is significant, but i don't write bullet hell games ... if this is a concern i guess we'd have to use a feature flag, which would be messy.
separately, running this myself i was getting a panic in `prepare_lights` where you replaced `directional_shadow_enabled_count` with `MAX_DIRECTIONAL_LIGHTS` on line 1141.
the change makes sense - we want to take as many shadow casters for the view as we can, even if there are shadow casters on other layers - but it looks like that needs an additional check for `light.shadows_enabled` as well, to avoid trying to access cascades when they may not exist. in case `shadows_enabled` is false we can break, as the shadow casters are sorted to the front.
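A sketch of that guard with illustrative names (not the real `prepare_lights` code); `take_while` encodes the early break, since shadow casters are sorted to the front:

```rust
const MAX_DIRECTIONAL_LIGHTS: usize = 10;

struct DirectionalLightInfo {
    shadows_enabled: bool,
    // ... cascades, transforms, and so on ...
}

// Take as many lights as the limit allows, but stop at the first one
// without shadows: no cascades exist for it or anything after it.
fn shadow_casters(lights: &[DirectionalLightInfo]) -> impl Iterator<Item = &DirectionalLightInfo> {
    lights
        .iter()
        .take(MAX_DIRECTIONAL_LIGHTS)
        .take_while(|light| light.shadows_enabled)
}
```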
Can you check how much of an impact spilling onto the heap would have in the worst case?
> adding `if self.0.as_ptr() == other.0.as_ptr() { return true }` to the start of the intersect test reduces that to 1.5%.
Thanks for validating this, I've added it to `intersects`.
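Roughly how that lands on the SmallVec-backed type (a sketch, not the exact diff):

```rust
use smallvec::SmallVec;

pub struct RenderLayers(SmallVec<[u64; 1]>);

impl RenderLayers {
    pub fn intersects(&self, other: &Self) -> bool {
        // Identical buffer pointers mean both sides view the very same
        // layer set (e.g. the shared default), so the AND loop is skipped.
        if self.0.as_ptr() == other.0.as_ptr() {
            return true;
        }
        self.0
            .iter()
            .zip(other.0.iter())
            .any(|(a, b)| a & b != 0)
    }
}
```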
> if this is a concern i guess we'd have to use a feature flag, which would be messy.
Barring some stuff around the edges regarding constness, we could do an ugly feature flag for this.
> separately, running this myself i was getting a panic in `prepare_lights` where you replaced `directional_shadow_enabled_count`
Added a guard for this. Curious why I didn't hit it on my Mac.
> Can you check how much of an impact spilling onto the heap would have in the worst case?
- Layer 10k vs layer 1: a 100% slowdown in `check_visibility` that results in only a 10% slowdown on frames.
- A more realistic layer 256 vs layer 1 shows a 20% slowdown in `check_visibility`.
> Curious why I didn't hit it on my Mac.
I ran `cargo run --release --example many_cubes -- --benchmark`.
Will check that it's fixed shortly.
works for me too now.
Don't want to claim to be a perf expert, but the numbers seem pretty reasonable to me? I'm sure there are scenarios we haven't covered yet.
If the performance hit is acceptable, I don't think the additional maintenance burden is worth it, but I will just say that I was able to implement a feature flag like the following that "just works":
```rust
#[cfg(feature = "unlimited_render_layers")]
#[path = "unlimited_render_layers.rs"]
mod render_layers;

#[cfg(not(feature = "unlimited_render_layers"))]
mod render_layers;
```
I don't think feature flagging this is worth it. That's a nasty bit of maintenance burden, and IMO the flexibility here is worth the performance cost.
I know this is pretty far into development, but have we considered a tagged pointer to a slice with its size stored inline?
The render layer struct would be a single pointer whose address would define what it contains:
- `0bXX...XX00`: an address to a 4-byte-aligned bitset with its size stored in the allocation.
- `0bXX...XX01`: a 62/30-bit bitset stored in the address bits.
- `0b11...1111`: a full bitset.
An intersection would be performed as follows:
- If neither bitset is allocated on the heap, do a bitwise AND of the addresses and return the result.
- If one bitset is heap-allocated and the other is full, return a clone of the allocated one.
- If both bitsets are heap-allocated, return a heap-allocated bitset.
If most of our users only use the lower render layers or the full set of render layers, intersections and clones would operate without allocating.
Pointer provenance shouldn't be an issue since we only ever dereference pointers whose addresses we haven't modified in any way.
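To make the proposal concrete, here is a rough sketch of the tag scheme using a plain `usize` as the tagged value; the heap cases, which clone or allocate, are omitted, and everything here is illustrative rather than an implementation:

```rust
const TAG_MASK: usize = 0b11;

// The three representations the address bits select.
enum LayersRepr {
    /// Low bits `00`: address of a 4-byte-aligned heap bitset (alignment
    /// guarantees the low bits of a real pointer are zero).
    Heap(usize),
    /// Low bits `01`: a 62-bit (or 30-bit on 32-bit targets) inline bitset.
    Inline(usize),
    /// All bits set: every layer.
    Full,
}

fn classify(bits: usize) -> LayersRepr {
    if bits == usize::MAX {
        LayersRepr::Full
    } else if bits & TAG_MASK == 0b01 {
        LayersRepr::Inline(bits >> 2)
    } else {
        LayersRepr::Heap(bits)
    }
}

// The non-allocating intersections: when neither operand lives on the heap,
// a single AND of the raw values suffices, because the `01` tags AND to
// `01` and the all-ones "full" value is the identity for AND.
fn intersect_non_alloc(a: usize, b: usize) -> Option<usize> {
    match (classify(a), classify(b)) {
        (LayersRepr::Heap(_), _) | (_, LayersRepr::Heap(_)) => None, // clone/allocate instead
        _ => Some(a & b),
    }
}
```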
The powers that SME have decided that we should merge this as is, and tackle perf follow-ups in future PRs if they become meaningful (or someone wants to get nerd-sniped).
Merging!
@alice-i-cecile doc for `RenderLayers` still contains a mention of `TOTAL_LAYERS`, which doesn't exist
Thank you to everyone involved with the authoring or reviewing of this PR! This work is relatively important and needs release notes! Head over to https://github.com/bevyengine/bevy-website/issues/1316 if you'd like to help out.