Implement fast depth of field as a postprocessing effect.
This commit implements the depth of field effect, simulating the blur of objects out of focus of the virtual lens. Either the hexagonal bokeh effect or a faster Gaussian blur may be used. In both cases, the implementation is a simple separable two-pass convolution. This is not the most physically-accurate real-time bokeh technique that exists; Unreal Engine has a more accurate implementation of "cinematic depth of field" from 2018. However, it's simple, and most engines provide something similar as a fast option, often called "mobile" depth of field.
The general approach is outlined in a blog post from 2017. We take advantage of the fact that both Gaussian blurs and hexagonal bokeh blurs are separable: their 2D kernels can be decomposed into a small number of 1D kernels applied one after another, asymptotically reducing the amount of work that has to be done. A Gaussian blur is performed by blurring horizontally and then vertically, while a hexagonal bokeh blur is performed with a vertical blur and a diagonal blur in the first pass, followed by two diagonal blurs in the second. In both cases, only two passes are needed, but the bokeh blur requires a second render target in the first pass and two subpasses in the second pass, which decreases its performance relative to the Gaussian blur.
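To make separability concrete, here is a toy CPU sketch (illustrative only; the real effect runs as shader passes on GPU textures): a separable 2D kernel is the outer product of 1D kernels, so convolving rows and then columns with a 1D kernel produces the same result as one full 2D convolution.

```rust
// Toy demonstration that a separable 2D blur equals two 1D passes.
// Not the actual shader code; purely illustrative.

/// Convolve each row of `img` with a 1D kernel (clamped at the edges).
fn blur_rows(img: &[Vec<f32>], kernel: &[f32]) -> Vec<Vec<f32>> {
    let r = (kernel.len() / 2) as isize;
    img.iter()
        .map(|row| {
            (0..row.len() as isize)
                .map(|x| {
                    kernel
                        .iter()
                        .enumerate()
                        .map(|(k, w)| {
                            let xi = (x + k as isize - r).clamp(0, row.len() as isize - 1);
                            w * row[xi as usize]
                        })
                        .sum()
                })
                .collect()
        })
        .collect()
}

/// Transpose, so a "column" blur is just a row blur on the transpose.
fn transpose(img: &[Vec<f32>]) -> Vec<Vec<f32>> {
    (0..img[0].len())
        .map(|x| img.iter().map(|row| row[x]).collect())
        .collect()
}

fn main() {
    let kernel = [0.25_f32, 0.5, 0.25]; // 1D binomial approximation of a Gaussian
    let img = vec![
        vec![0.0_f32, 0.0, 0.0, 0.0],
        vec![0.0, 1.0, 0.0, 0.0],
        vec![0.0, 0.0, 0.0, 0.0],
    ];
    // Two 1D passes: horizontal, then vertical.
    let separable = transpose(&blur_rows(&transpose(&blur_rows(&img, &kernel)), &kernel));
    // A single 2D pass with the outer-product kernel gives the same result;
    // the center pixel keeps weight 0.5 * 0.5 = 0.25.
    println!("{:?}", separable[1][1]); // 0.25
}
```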
The bokeh blur is generally more aesthetically pleasing than the Gaussian blur, as it simulates the behavior of a real camera more closely. The shape of the bokeh highlights is determined by the number of blades in the aperture; in our case they are hexagons, which are usually associated with lower-quality cameras. (This is a downside of the fast hexagon approach compared to the higher-quality approaches.) The amount of blur is controlled by the f-number: the aperture diameter is the focal length divided by the f-number, where the focal length is computed from the film size and FOV. By default, we simulate a standard cinematic camera with an f/1 aperture and Super 35 film. The developer can customize these values as desired.
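A sketch of that math, using the standard thin-lens relations (not necessarily the exact formulas in the shader; the 18.66 mm Super 35 sensor height and the example distances are assumptions for illustration):

```rust
// Thin-lens sketch of how the blur parameters relate. Illustrative only;
// the values below (sensor height, FOV, distances) are assumptions.
fn main() {
    let sensor_height = 0.01866_f32; // Super 35 film height in meters (assumed)
    let fov_y = 45.0_f32.to_radians(); // vertical field of view

    // Focal length follows from the film size and FOV: the sensor
    // half-height subtends half the FOV at the focal distance.
    let focal_length = sensor_height / (2.0 * (fov_y * 0.5).tan());

    // The f-number relates the aperture diameter to the focal length.
    let f_number = 1.0_f32; // f/1
    let aperture_diameter = focal_length / f_number;

    // Thin-lens circle of confusion for a point at `depth`,
    // with the camera focused at `focus` (both in meters).
    let (focus, depth) = (5.0_f32, 10.0_f32);
    let coc = aperture_diameter * focal_length * (focus - depth).abs()
        / (depth * (focus - focal_length));

    println!("focal length: {:.4} m", focal_length); // ~0.0225
    println!("CoC diameter: {:.6} m", coc);
}
```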
A new example has been added to demonstrate depth of field. It allows customization of the mode (Gaussian vs. bokeh), the focal distance, and the f-number. The test scene is inspired by a blog post on depth of field in Unity; however, the effect is implemented in a completely different way from that blog post, and all the assets (textures, etc.) are original.
Bokeh depth of field:
Gaussian depth of field:
No depth of field:
Changelog
Added
- A depth of field postprocessing effect is now available, to simulate objects being out of focus of the camera. To use it, add `DepthOfFieldSettings` to an entity containing a `Camera3d` component.
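A usage sketch based on the description above (the `DepthOfFieldSettings` component, the Gaussian/bokeh mode, the focal distance, and the f-number are from this PR; the exact field and bundle names are assumptions and may differ from the merged API):

```rust
// Sketch only: field and bundle names may not match the final API.
commands.spawn((
    Camera3dBundle::default(),
    DepthOfFieldSettings {
        mode: DepthOfFieldMode::Bokeh,  // or DepthOfFieldMode::Gaussian
        focal_distance: 5.0,            // distance to the plane in focus, in meters
        aperture_f_stops: 1.0,          // f/1, the default cinematic aperture
        ..default()
    },
));
```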
The generated `examples/README.md` is out of sync with the example metadata in `Cargo.toml` or the example readme template. Please run `cargo run -p build-templated-pages -- update examples` to update it, and commit the file change.
I've disabled this feature entirely on WebGL 2. There are two problems here:

- WebGL 2 has no support for multisampled depth textures.

- Naga has a bug whereby it assumes that depth textures are accessed with `sampler2DShadow` in GLSL. This is only true if PCF is to be performed on them, but we're not doing PCF here, so Naga generates invalid GLSL.
Closing as I am burned out and have no motivation to get this landed anymore.
Can you reopen it and add adopt-me tag to these PRs instead? If it's useful (I can't judge that part), maybe someone would continue the work.
I reduced the kernel support of the Gaussian blur and implemented the two-at-a-time optimization, which should make the Gaussian blur version faster than the bokeh one. This should be ready for review again now.
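The "two-at-a-time" trick presumably refers to the well-known linear-sampling optimization for Gaussian blurs: by fetching with bilinear filtering at a carefully chosen fractional offset between two texels, the hardware computes a weighted average of both in a single texture read, roughly halving the number of fetches. A CPU sketch of the arithmetic (the weights and texel values here are arbitrary examples):

```rust
// Demonstrates folding two weighted taps into one bilinearly filtered fetch.

/// Simulates hardware bilinear filtering along a row of texels:
/// `x` linearly interpolates between the two nearest texels.
fn linear_sample(texels: &[f32], x: f32) -> f32 {
    let i = x.floor() as usize;
    let frac = x - x.floor();
    texels[i] * (1.0 - frac) + texels[(i + 1).min(texels.len() - 1)] * frac
}

fn main() {
    let texels = [0.2_f32, 0.8, 0.4, 0.6];

    // Two discrete taps at texels 1 and 2 with Gaussian-like weights:
    let (w1, w2) = (0.375_f32, 0.25);
    let two_taps = w1 * texels[1] + w2 * texels[2];

    // One combined tap: total weight w1 + w2, fetched at the weighted offset.
    let w = w1 + w2;
    let offset = 1.0 + w2 / w; // fractional position between texels 1 and 2
    let one_tap = w * linear_sample(&texels, offset);

    // Both approaches yield the same weighted sum.
    assert!((two_taps - one_tap).abs() < 1e-6);
    println!("two taps = {two_taps}, one tap = {one_tap}");
}
```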
Merging, and opening an issue for https://github.com/bevyengine/bevy/pull/13009#discussion_r1597707303.
Thank you to everyone involved with the authoring or reviewing of this PR! This work is relatively important and needs release notes! Head over to https://github.com/bevyengine/bevy-website/issues/1314 if you'd like to help out.