bevy
Implement minimal reflection probes (fixed macOS, iOS, and Android).
This pull request re-submits #10057, which was backed out for breaking macOS, iOS, and Android. I've tested this version on macOS and Android and on the iOS simulator.
## Objective
This pull request implements reflection probes, which generalize environment maps to allow for multiple environment maps in the same scene, each of which has an axis-aligned bounding box. This is a standard feature of physically-based renderers and was inspired by the corresponding feature in Blender's Eevee renderer.
## Solution
This is a minimal implementation of reflection probes that allows artists to define cuboid bounding regions associated with environment maps. For every view, on every frame, a system builds up a list of the nearest 4 reflection probes that are within the view's frustum and supplies that list to the shader. The PBR fragment shader searches through the list, finds the first containing reflection probe, and uses it for indirect lighting, falling back to the view's environment map if none is found. Both forward and deferred renderers are fully supported.
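The selection logic described above can be sketched in standalone Rust (this is an illustrative mock, not Bevy's actual implementation; frustum culling is elided and plain arrays stand in for `Vec3`/`Aabb` types):

```rust
// Sketch of per-view probe selection: keep the 4 probes nearest the view,
// then pick the first one whose bounding box contains the shaded point,
// falling back (None) to the view's own environment map.

#[derive(Clone, Copy, Debug)]
struct Aabb {
    min: [f32; 3],
    max: [f32; 3],
}

impl Aabb {
    fn contains(&self, p: [f32; 3]) -> bool {
        (0..3).all(|i: usize| self.min[i] <= p[i] && p[i] <= self.max[i])
    }
    fn center(&self) -> [f32; 3] {
        [
            0.5 * (self.min[0] + self.max[0]),
            0.5 * (self.min[1] + self.max[1]),
            0.5 * (self.min[2] + self.max[2]),
        ]
    }
}

fn dist_sq(a: [f32; 3], b: [f32; 3]) -> f32 {
    let d = [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
    d[0] * d[0] + d[1] * d[1] + d[2] * d[2]
}

/// Gather the 4 probes nearest to the view origin (frustum test omitted).
fn gather_probes(view_pos: [f32; 3], probes: &[Aabb]) -> Vec<Aabb> {
    let mut sorted: Vec<Aabb> = probes.to_vec();
    sorted.sort_by(|a, b| {
        dist_sq(view_pos, a.center()).total_cmp(&dist_sq(view_pos, b.center()))
    });
    sorted.truncate(4);
    sorted
}

/// What the fragment shader does conceptually: first containing probe wins.
fn pick_probe(point: [f32; 3], list: &[Aabb]) -> Option<Aabb> {
    list.iter().copied().find(|aabb| aabb.contains(point))
}

fn main() {
    let probes = [
        Aabb { min: [-1.0; 3], max: [1.0; 3] },
        Aabb { min: [5.0; 3], max: [7.0; 3] },
    ];
    let list = gather_probes([0.0; 3], &probes);
    // A point inside the second probe's box finds that probe.
    println!("{}", pick_probe([6.0, 6.0, 6.0], &list).is_some());
}
```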
A reflection probe is an entity with a pair of components, LightProbe and EnvironmentMapLight (as well as the standard SpatialBundle, to position it in the world). The LightProbe component (along with the Transform) defines the bounding region, while the EnvironmentMapLight component specifies the associated diffuse and specular cubemaps.
A frequent question is "why two components instead of just one?" The advantages of this setup are:
- It's readily extensible to other types of light probes, in particular irradiance volumes (also known as ambient cubes or voxel global illumination), which use the same approach of bounding cuboids. With a single component that applies to both reflection probes and irradiance volumes, we can share the logic that implements falloff and blending between multiple light probes across both features.
- It reduces duplication between the existing EnvironmentMapLight and these new reflection probes. Systems can treat environment maps attached to cameras the same way they treat environment maps applied to reflection probes if they wish.
Internally, we gather up all environment maps in the scene and place them in a cubemap array. At present, this means that all environment maps must have the same size, mipmap count, and texture format. A warning is emitted if this restriction is violated. We could potentially relax this in the future as part of the automatic mipmap generation work, which could easily do texture format conversion as part of its preprocessing.
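The uniformity restriction can be sketched as follows (the type and function names here are made up for illustration, not Bevy's internals):

```rust
// Sketch of the restriction described above: every environment map packed
// into the shared cubemap array must agree on size, mip count, and format.

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct CubemapDesc {
    size: u32,            // edge length in texels
    mip_count: u32,
    format: &'static str, // stand-in for a texture format enum
}

/// Returns true if all cubemaps can share one cubemap array; emits a
/// warning-style message for the first mismatch otherwise.
fn can_build_cubemap_array(maps: &[CubemapDesc]) -> bool {
    let Some(first) = maps.first() else { return true };
    for (i, m) in maps.iter().enumerate().skip(1) {
        if m != first {
            eprintln!(
                "warning: environment map {} ({:?}) doesn't match map 0 ({:?})",
                i, m, first
            );
            return false;
        }
    }
    true
}

fn main() {
    let a = CubemapDesc { size: 256, mip_count: 9, format: "Rgba16Float" };
    let b = CubemapDesc { size: 128, mip_count: 8, format: "Rgba16Float" };
    println!("{}", can_build_cubemap_array(&[a, a])); // matching maps: true
    println!("{}", can_build_cubemap_array(&[a, b])); // mismatch: false
}
```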
An easy way to generate reflection probe cubemaps is to bake them in Blender and use the `export-blender-gi` tool that's part of the `bevy-baked-gi` project. This tool takes a `.blend` file containing baked cubemaps as input and exports cubemap images, pre-filtered with an embedded fork of the glTF IBL Sampler, alongside a corresponding `.scn.ron` file that the scene spawner can use to recreate the reflection probes.
Note that this is intentionally a minimal implementation, to aid reviewability. Known issues are:
- Reflection probes are basically unsupported on WebGL 2, because WebGL 2 has no cubemap arrays. (Strictly speaking, you can have precisely one reflection probe in the scene if you have no other cubemaps anywhere, but this isn't very useful.)
- Reflection probes have no falloff, so reflections will abruptly change when objects move from one bounding region to another.
- As mentioned before, all cubemaps in the world of a given type (diffuse or specular) must have the same size, format, and mipmap count.
Future work includes:
- Blending between multiple reflection probes.
- A falloff/fade-out region so that reflected objects disappear gradually instead of vanishing all at once.
- Irradiance volumes for voxel-based global illumination. This should reuse much of the reflection probe logic, as they're both GI techniques based on cuboid bounding regions.
- Support for WebGL 2, by breaking batches when reflection probes are used.
These issues notwithstanding, I think it's best to land this with roughly the current set of functionality, because this patch is useful as is and adding everything above would make the pull request significantly larger and harder to review.
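For reference, the falloff and blending items listed under future work could take roughly this shape (a speculative standalone sketch, not part of this PR): weight each containing probe by the sample point's distance to the probe boundary, so influence fades to zero at the edge.

```rust
// Speculative sketch of probe falloff/blending: the weight of each probe
// is the distance from the sample point to the probe's nearest face,
// normalized across all containing probes.

struct Probe {
    min: [f32; 3],
    max: [f32; 3],
}

/// Distance from `p` to the nearest face of the probe's box, or None if
/// `p` is outside; a natural falloff weight that reaches 0 on the boundary.
fn edge_distance(probe: &Probe, p: [f32; 3]) -> Option<f32> {
    let mut d = f32::INFINITY;
    for i in 0..3 {
        if p[i] < probe.min[i] || p[i] > probe.max[i] {
            return None;
        }
        d = d.min(p[i] - probe.min[i]).min(probe.max[i] - p[i]);
    }
    Some(d)
}

/// Normalized blend weights across all probes containing `p`.
fn blend_weights(probes: &[Probe], p: [f32; 3]) -> Vec<f32> {
    let raw: Vec<f32> = probes
        .iter()
        .map(|pr| edge_distance(pr, p).unwrap_or(0.0))
        .collect();
    let total: f32 = raw.iter().sum();
    if total == 0.0 {
        raw
    } else {
        raw.iter().map(|w| w / total).collect()
    }
}

fn main() {
    // Two overlapping probes; the point sits deeper inside the first,
    // so the first probe receives the larger weight.
    let probes = [
        Probe { min: [0.0; 3], max: [4.0; 3] },
        Probe { min: [2.0; 3], max: [6.0; 3] },
    ];
    println!("{:?}", blend_weights(&probes, [2.5, 2.5, 2.5]));
}
```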
## Changelog

### Added
- A new LightProbe component is available that specifies a bounding region that an EnvironmentMapLight applies to. The combination of a LightProbe and an EnvironmentMapLight offers reflection probe functionality similar to that available in other engines.
Thanks a ton for fixing this up. Having to revert always sucks.
Rather than a conditional cfg, can you move it to a runtime check if the device supports the texture binding array feature?
@JMS55 Sure, I think I can do that. Note that this will require rewriting much of the patch, as I will have to merge the conditional definitions of `RenderViewBindGroupEntries` into one.
@pcwalton thanks for updating this. Hate to do this to you, but this will require some work to rebase now that we merged exposure settings...
Works on iOS and macOS, but still panics on Android:
```
wgpu error: Validation Error

Caused by:
    In Device::create_bind_group_layout
      note: label = `mesh_view_layout`
    Too many bindings of type SampledTextures in Stage ShaderStages(FRAGMENT), limit is 16, count was 21
```
> Support for WebGL 2, by breaking batches when reflection probes are used.

Should WebGPU be supported now, or is that for later?
@mockersf How are you testing this on Android? I tested `cd examples/mobile && cargo apk run` and it works fine on my Pixel 7 Pro. Perhaps this is Android-GPU-specific?
As for your other question, WebGPU has no support for binding arrays and so reflection probes are presently unsupported there.
I'm testing on the simulator on an m1 Mac, I don't have an Android device at the moment
```
event crates/bevy_diagnostic/src/system_information_diagnostics_plugin.rs:130: SystemInfo { os: "Android 13 sdk_gphone64_arm64", kernel: "5.15.41-android13-8-00055-g4f5025129fe8-ab8949913", cpu: "", core_count: "4", memory: "1.9 GiB" }
event crates/bevy_render/src/renderer/mod.rs:144: AdapterInfo { name: "SwiftShader Device (LLVM 10.0.0)", vendor: 6880, device: 49374, device_type: VirtualGpu, driver: "?", driver_info: "?", backend: Vulkan }
```
With the Pixel 7 Pro, it may be that you have a more powerful phone with a better GPU, or it may be that the emulator is just not good. But I'd expect the latest Google flagship to support more things than most Android phones.
Gotcha, I'll try on the simulator next. I suspect that SwiftShader has a relatively low limit while modern Mali has a higher limit. I can try to detect the maximum number of texture bindings and disable reflection probes if it isn't at least 24 (or whatever).
It doesn't crash on the Android simulator if I check on `render_device.limits().max_storage_textures_per_shader_stage`.
OK, I check that now. Seems to work on the default emulator that comes with Android Studio.
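The runtime check being discussed might look roughly like this (a standalone sketch: the `Limits` struct mimics wgpu's limits type but is a mock here, and the threshold of 8 is an illustrative assumption, not the value the PR actually uses):

```rust
// Sketch of gating reflection probes on a device limit at startup instead
// of a compile-time cfg, as discussed above.

struct Limits {
    max_storage_textures_per_shader_stage: u32,
}

/// Hypothetical threshold: only enable reflection probes when the device
/// leaves enough headroom for the extra cubemap bindings.
fn reflection_probes_supported(limits: &Limits) -> bool {
    limits.max_storage_textures_per_shader_stage >= 8
}

fn main() {
    // Illustrative values: a software rasterizer such as SwiftShader tends
    // to report low limits, while a desktop GPU reports much higher ones.
    let emulator = Limits { max_storage_textures_per_shader_stage: 4 };
    let desktop = Limits { max_storage_textures_per_shader_stage: 64 };
    println!("{}", reflection_probes_supported(&emulator)); // false
    println!("{}", reflection_probes_supported(&desktop)); // true
}
```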