bevy_pbr2: Use the infinite reverse right-handed perspective projection
This has a much better precision distribution over the [0,1] depth range.
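For reference, the projection in question looks roughly like this; a sketch matching glam's Mat4::perspective_infinite_reverse_rh (the free-function form and use of glam here are illustrative, not the exact bevy_pbr2 code):

use glam::{Mat4, Vec4};

// Infinite reverse-Z right-handed perspective with [0,1] depth.
// Depth is 1.0 at z == -z_near and falls toward 0.0 at infinity, so
// distant geometry lands near 0.0 where f32 has the most precision.
fn perspective_infinite_reverse_rh(fov_y_radians: f32, aspect_ratio: f32, z_near: f32) -> Mat4 {
    let f = 1.0 / (0.5 * fov_y_radians).tan();
    Mat4::from_cols(
        Vec4::new(f / aspect_ratio, 0.0, 0.0, 0.0),
        Vec4::new(0.0, f, 0.0, 0.0),
        Vec4::new(0.0, 0.0, 0.0, -1.0),
        Vec4::new(0.0, 0.0, z_near, 0.0),
    )
}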
NOTE: The initial PR does not address use of orthographic projections in the PBR pipelines. However, I have tested and it seems that swapping the orthographic projection's near and far to reverse the depth direction works just fine.
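A minimal sketch of that near/far swap, assuming glam's Mat4::orthographic_rh (not the actual extraction code):

use glam::Mat4;

// Handing the planes to a standard RH [0,1] orthographic constructor in
// reverse order yields a reverse-Z orthographic projection.
fn reverse_z_orthographic(left: f32, right: f32, bottom: f32, top: f32, near: f32, far: f32) -> Mat4 {
    Mat4::orthographic_rh(left, right, bottom, top, far, near) // near/far swapped
}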
We're gonna need to adapt this to the omni light changes. A naive merge conflict resolution resulted in no shadows being drawn.
@mtsr
This is probably due to the projection calculation being done in the frag shader? (we probably need to adjust it to account for the depth changes?)
Yeah, we’re aware of that. I’ll fix it later.
Fixed the shadow projection issue. I've also included flipping the near/far when extracting the orthographic projection for the 3D pipelines, because it doesn't break 2D, and otherwise orthographic depth would be broken in the 3D pipelines. I don't really like this approach of swapping them in OrthographicProjection::get_projection_matrix(), as that method could be used by others and the behaviour would be unexpected. There isn't any easy way to flip them after extracting the matrix, though, and the code is generic over a trait, so unless I'm missing something about Rust, I can't use match.
This is the orthographic projection matrix:
/// Creates a right-handed orthographic projection matrix with [0,1] depth range.
fn orthographic_rh(left: T, right: T, bottom: T, top: T, near: T, far: T) -> Self {
    let rcp_width = T::ONE / (right - left);
    let rcp_height = T::ONE / (top - bottom);
    let r = T::ONE / (near - far);
    Self::from_cols(
        V4::new(rcp_width + rcp_width, T::ZERO, T::ZERO, T::ZERO),
        V4::new(T::ZERO, rcp_height + rcp_height, T::ZERO, T::ZERO),
        V4::new(T::ZERO, T::ZERO, r, T::ZERO),
        V4::new(
            -(left + right) * rcp_width,
            -(top + bottom) * rcp_height,
            r * near,
            T::ONE,
        ),
    )
}
So if near and far are swapped, the affected terms are r' = 1 / (far - near) = 1 / -(near - far) = -r, which is easy to adjust, but the r * near translation term is not. near could be extracted as (r * near) / r = near, then far as near - 1 / r = near - (near - far) = far, and then one could calculate -r * far to replace r * near, but that is all convoluted and ignores precision issues.
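For concreteness, that recovery would look something like this; a sketch only, assuming glam's column-major Mat4 layout, and it inherits the precision issues just mentioned:

use glam::Mat4;

// Recover near/far from an already-built RH [0,1] orthographic matrix
// and rewrite its z terms so the depth direction is reversed.
fn reverse_orthographic_depth(m: &mut Mat4) {
    let r = m.z_axis.z;             // r = 1 / (near - far)
    let near = m.w_axis.z / r;      // m.w_axis.z stores r * near
    let far = near - 1.0 / r;       // near - (near - far) = far
    let r_rev = 1.0 / (far - near); // = -r
    m.z_axis.z = r_rev;
    m.w_axis.z = r_rev * far;       // reversed matrix stores -r * far
}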
Ideas welcome.
The current bias produces pretty significant artifacts:

It feels like the "acceptable bias window" is much smaller now. 0.07 is too high (cast shadows stop working as expected) and 0.03 is too low (self-shadowing artifacts become visible).
Are we certain this is correct? Is it a problem with our approach or is this inherent to infinite reverse-z?
I think we just need to add the normal bias as well. It's important that the orthographic projection is a fairly tight fit around the geometry. Did you make the change to add the bias to the fragment depth with infinite reverse, instead of subtracting it?
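To illustrate the sign flip being asked about (a sketch, not the actual shader code):

// Under reverse-Z, moving a fragment toward the light increases its stored
// depth, so the depth bias is added where forward-Z would subtract it.
fn biased_fragment_depth_reverse_z(fragment_depth: f32, depth_bias: f32) -> f32 {
    fragment_depth + depth_bias // forward-Z: fragment_depth - depth_bias
}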
I have just pushed a change that moves the depth biasing to linear depth. So, before we would project the fragment position to light space NDC and then add a depth bias there. I found that quite unintuitive to work with.
Some of us in #rendering looked over Unity, Unreal, and Godot. Unity and Godot only really have two bias configuration values, a single depth bias and a single normal bias. Unreal seemed to have all sorts of biases for lots of different cases and was difficult to understand.
Godot was much simpler (which should be a good starting place): it has a depth bias for all lights, and a normal bias only for directional lights. The difference is that the depth bias here is applied in world space. The currently-committed biasing now looks like this:
In the *Lights:
pub shadow_depth_bias: f32,
pub shadow_normal_bias: f32,
...
pub const DEFAULT_SHADOW_DEPTH_BIAS: f32 = 0.1;
pub const DEFAULT_SHADOW_NORMAL_BIAS: f32 = 0.1;
And in pbr.wgsl (similar for both point and directional lights at the moment):
let depth_bias = light.shadow_depth_bias * dir_to_light.xyz;
let NdotL = dot(dir_to_light.xyz, in.world_normal.xyz);
let normal_bias = light.shadow_normal_bias * (1.0 - NdotL) * in.world_normal.xyz;
let biased_position = vec4<f32>(in.world_position.xyz + depth_bias + normal_bias, in.world_position.w);
The depth bias moves the fragment world position toward the light and is in world units. The normal bias moves the fragment world position along the surface normal, by more as the light ray strikes the surface at more of a glancing angle, and is also in world units. Working with world units, and with only a depth bias and a normal bias, feels much more intuitive and produces the same results. Also note that this is projection-independent!
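Configuring the biases would then look something like this; a hypothetical usage sketch where only the field names and default values come from the snippets above:

// Hypothetical usage (imports omitted); assumes a PointLight component
// exposing the fields above plus a Default impl:
let light = PointLight {
    shadow_depth_bias: 0.1,  // world units, toward the light
    shadow_normal_bias: 0.2, // world units, along the surface normal
    ..Default::default()
};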
There is still a slight ring of acne from the directional light:
And still a slight ring of acne from the point light:

I'm trying to figure out how to get rid of those damn rings but I'm not there yet. Perhaps someone else who has worked on shadows knows where they come from. My next step is to debug those particular pixels to understand what is going on. This article has an interesting approach of adjusting the normal bias based on the shadow texel size, because the self-shadowing problem comes from the volume covered by a shadow texel popping out of the surface. It would be great if this could be configured programmatically...
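If that were wired up, the texel-size scaling from the article might look roughly like this; a sketch with assumed names, reusing the (1.0 - NdotL) falloff from the WGSL above rather than the article's exact formulation:

// Scale the normal bias by the world-space size of one shadow-map texel,
// since the acne comes from the world-space volume a texel covers poking
// through the surface. All names here are illustrative.
fn texel_scaled_normal_bias(
    ortho_width: f32,     // world-space width covered by the shadow map
    shadow_map_size: f32, // shadow map resolution in texels (square)
    normal_bias: f32,     // user-tunable factor
    n_dot_l: f32,         // dot(surface normal, direction to light)
) -> f32 {
    let texel_world_size = ortho_width / shadow_map_size;
    normal_bias * texel_world_size * (1.0 - n_dot_l)
}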
This is now based on top of #26. Also, while I did like the flexibility of being able to choose the projection, in #rendering the question of "why?" came up, and @cart suggested that it is probably better for users if we choose one perspective projection and stick with it to avoid confusion. I agree. As such, I reverted the perspective projection method code, and this PR is back to using infinite reverse for perspective cameras and point light shadows, and orthographic with reverse-z (for consistency) for orthographic cameras and directional light shadows (I've tested that it doesn't negatively impact sprites or 3D orthographic).
So if this is agreeable, it can be merged after #26.
Haha, I did a rebase on this branch to "try to fix the history", but it was already fine. I was just looking at the wrong PR in the wrong repo :)