Examples: Port advanced postprocessing sample to WebGPU
Related issue:
Description
The long-term goal is to port webgl_postprocessing_advanced and its relevant shaders to a WebGPURenderer version. We can also determine which of these smaller shaders warrant being included with the build and which should be relegated to the examples.
- [x] #28961
- [x] #28974
- [ ] Colorify Node ( extend ColorManagement luminance; see the sketch after this list )
- [x] Vignette Node ( use the webgpu_compute_particles_snow vignette implementation )
- [x] Vertical and Horizontal Blur Node ( use existing GaussianBlurNode )
- [x] #29016
- [ ] Final: Implement Example
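For the colorify item above, a minimal sketch building on the existing luminance() TSL helper. The tint uniform, the inline colorify() function, and the scenePass / postProcessing variables are illustrative assumptions, not the final node design:

```js
import { luminance, uniform } from 'three/tsl';
import { Color } from 'three';

const tint = uniform( new Color( 0x88ccff ) ); // hypothetical tint color

// grayscale the input via its luminance, then multiply by the tint
const colorify = ( node ) => luminance( node.rgb ).mul( tint );

postProcessing.outputNode = colorify( scenePass );
```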
Gamma Correction Node
I would be inclined not to bring a GammaCorrectionNode into the WebGPU renderer and TSL, as we already have ColorSpaceNode.
Vertical and Horizontal Blur Node (possibly just use GaussianBlurNode )
Yes, no need for separate nodes.
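A minimal usage sketch, assuming the GaussianBlurNode addon; the import path has moved between releases and the postProcessing variable is assumed to exist, so treat both as assumptions:

```js
// use the existing GaussianBlurNode instead of separate horizontal and
// vertical blur nodes; the blur is separable (two passes) internally
import { pass } from 'three/tsl';
import { gaussianBlur } from 'three/addons/tsl/display/GaussianBlurNode.js';

const scenePass = pass( scene, camera );
postProcessing.outputNode = gaussianBlur( scenePass );
```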
Vignette Node
@sunag already wrote code for that in another example. It is so simple that I don't think we need a separate node.
https://github.com/mrdoob/three.js/blob/a46d6761c9510bf1fb2c126fc560f2cc11e6edb2/examples/webgpu_compute_particles_snow.html#L298
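Along the lines of the linked snow example, a minimal vignette sketch; the scale factor and the postProcessing variable are illustrative:

```js
import { pass, screenUV, vec2 } from 'three/tsl';

const scenePass = pass( scene, camera );

// distance from the screen center grows to ~0.707 in the corners;
// scale it, clamp to [0, 1] and invert so the edges darken
const vignette = screenUV.distance( vec2( 0.5 ) ).mul( 1.35 ).clamp().oneMinus();

postProcessing.outputNode = scenePass.mul( vignette );
```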
Mask and Clear Mask Node ( including necessary stencil operations )
It might make more sense to port https://threejs.org/examples/webgl_postprocessing_masking first before thinking about the advanced example.
I'm also not sure yet if we want to use the same masking approach as before. The design maybe needs to be revisited.
I agree. There are several ways to do this with MRT, it could be something like webgpu_mrt_mask or per object, just registering the color through the id.
Yeah, I was hoping we can get a solution without the stencil buffer. The related handling was always a bit inconvenient.
Not having to manage setting and clearing masks with the stencil sounds good to me, especially for a simple post step.
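For illustration, a minimal sketch of the stencil-free idea discussed above: write a per-object value into an extra MRT attachment instead of using the stencil buffer. The mask attachment name and the maskedMaterial variable are hypothetical:

```js
import { pass, mrt, output, float } from 'three/tsl';

const scenePass = pass( scene, camera );

// every material writes 0 into the mask attachment by default ...
scenePass.setMRT( mrt( {
	output,
	mask: float( 0 )
} ) );

// ... except materials that opt in to the masked effect
maskedMaterial.mrtNode = mrt( { mask: float( 1 ) } );

// the mask is then available as a regular texture in the post pass
const maskNode = scenePass.getTextureNode( 'mask' );
```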
Question for @sunag: In the mrt_mask example, when the mrtNode of the scenePass is set to vec4( 0 ) but the mrtNode of the mesh material is set to output, does this generally mean that the alpha of every pixel outside the mesh is 0, while the alpha of every pixel within the mesh is 1? Alpha blending seems like the solution here ( especially for post-processing ), but I need a better understanding of how the MRT system works.
An MRT set via pass() provides the default values for all materials in the render pass; if a material defines material.mrtNode, those values replace the defaults from pass() where defined.
For example in webgpu_postprocessing_bloom_selective:
```js
// global ( imports from 'three/tsl' assumed: pass, mrt, output, float )
const scenePass = pass( scene, camera );
scenePass.setMRT( mrt( {
	output,
	bloomIntensity: float( 0 ) // default bloom intensity
} ) );

// define a custom bloom intensity in some material
material.mrtNode = mrt( {
	bloomIntensity: float( 1.0 ) // it could be: output.a
} );
```
Note that alpha is not considered in the output of bloomIntensity; you could build a structure for this just by using output.a.
If alpha is not considered in the output of a pass, then how could I get the alpha output of a scene?
how could I get the alpha output of a scene?
I assume you want to transfer the alpha output of the scene to an individual texture using MRT:

```js
scenePass.setMRT( mrt( {
	output,
	sceneAlpha: output.a
} ) );
```
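As a hedged follow-up, the extra attachment can then be read back with getTextureNode() and used as a compositing mask; the effect here is a placeholder:

```js
import { mix, vec3 } from 'three/tsl';

const sceneColor = scenePass.getTextureNode( 'output' );
const sceneAlpha = scenePass.getTextureNode( 'sceneAlpha' ).r;

// placeholder effect: a red-tinted copy of the scene
const effectNode = sceneColor.mul( vec3( 1.0, 0.2, 0.2 ) );

// composite: show the effect only where the scene wrote alpha
postProcessing.outputNode = mix( sceneColor, effectNode, sceneAlpha );
```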
What is missing to complete this PR?
Sorry I didn't have time to respond earlier. I think all that's left is a colorify implementation and then implementing the sample itself. I was having some issues creating multiple postProcessing objects that each apply only to a specific subsection of the screen, but I'll have to check whether that's still a problem.
📦 Bundle size
Full ESM build, minified and gzipped.
| | Before (min / gzip) | After (min / gzip) | Diff |
|---|---|---|---|
| WebGL | 685.18 kB / 169.62 kB | 685.18 kB / 169.62 kB | +0 B / +0 B |
| WebGPU | 825.96 kB / 221.44 kB | 826.09 kB / 221.46 kB | +133 B / +25 B |
| WebGPU Nodes | 825.54 kB / 221.34 kB | 825.67 kB / 221.37 kB | +133 B / +23 B |
🌳 Bundle size after tree-shaking
Minimal build including a renderer, camera, empty scene, and dependencies.
| | Before (min / gzip) | After (min / gzip) | Diff |
|---|---|---|---|
| WebGL | 461.96 kB / 111.46 kB | 461.96 kB / 111.46 kB | +0 B / +0 B |
| WebGPU | 525.27 kB / 141.52 kB | 525.27 kB / 141.52 kB | +0 B / +0 B |
| WebGPU Nodes | 481.93 kB / 131.34 kB | 481.93 kB / 131.34 kB | +0 B / +0 B |
Vignette Node
@sunag already wrote code for that in another example. It is so simple that I don't think we need a separate node.
https://github.com/mrdoob/three.js/blob/a46d6761c9510bf1fb2c126fc560f2cc11e6edb2/examples/webgpu_compute_particles_snow.html#L298
I agree it's super simple, but it's nice to also have it as a "basic node" that is part of three.js; it makes the post-processing super easy to set up in a new project without having to dive into TSL and reinvent the wheel. Vignette is one of the must-use passes I've seen in almost every project that is a bit cinematic.
We don't have to port every webgl_postprocessing_* demo to WebGPU. Some of them look a bit dated, and webgl_postprocessing_advanced is one of them. The name is also misleading, since it's not clear to me what aspect of this example is supposed to be "advanced". Let's keep this example just in the space of WebGLRenderer and EffectComposer.