Unable to sample depth texture
Hello! I'm trying the challenge from https://sotrh.github.io/learn-wgpu/beginner/tutorial8-depth/#a-pixels-depth
I'm running it as WASM in Chrome.
I saw there is an overload of textureSample that returns an f32 when passing in a depth texture (https://www.w3.org/TR/WGSL/#texturesample), but this fragment shader fails to run:
@group(0) @binding(0)
var t_depth: texture_depth_2d;
@group(0) @binding(1)
var s_depth: sampler;

@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    let depth = textureSample(t_depth, s_depth, in.tex_coords);
    return vec4<f32>(vec3<f32>(depth), 1.0);
}
naga is telling me there is no valid overload for texture, which I assume refers to the textureSample call's parameters.
I did get it to work using textureSampleCompare together with a sampler_comparison, but that function only returns 1.0 or 0.0 depending on whether the comparison passes, so I get a solid black/white texture instead of a gradient based on the depth.
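For reference, the comparison-based version looked roughly like this (a sketch reconstructed from what I tried; the 0.5 reference depth is just a placeholder):
@group(0) @binding(0)
var t_depth: texture_depth_2d;
@group(0) @binding(1)
var s_depth: sampler_comparison;

@fragment
fn fs_main(in: VertexOutput) -> @location(0) vec4<f32> {
    // Returns the comparison result (0.0 or 1.0), not the stored depth value.
    let passed = textureSampleCompare(t_depth, s_depth, in.tex_coords, 0.5);
    return vec4<f32>(vec3<f32>(passed), 1.0);
}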
Is this currently supported?
This code should totally work. Can you share some more information about your environment, the backend you are using, and the VertexOutput definition that's missing from your code snippet?
Also, the w3 link is the old specification; the current one lives at https://gpuweb.github.io/gpuweb/wgsl
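If you're not sure which backend you ended up on, a quick check is something like the following (a hypothetical logging snippet; on the web without WebGPU support the adapter reports the GL backend, i.e. WebGL2, which is the case that matters here):
// Hypothetical check: which backend did the adapter end up on?
// On the web without WebGPU support this will be Backend::Gl (WebGL2).
let info = adapter.get_info();
log::info!("adapter backend: {:?}", info.backend);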
The site is built with wasm-pack and hosted with npm. I tried this on a Windows machine using Chrome, and also on Chrome on my iPad, with the same results. Other than that I have been following the tutorials as closely as possible.
I have changed the shader code around since then but I believe this is the rest of the shader:
struct VertexInput {
    @location(0) position: vec3<f32>,
    @location(1) tex_coords: vec2<f32>,
}

struct VertexOutput {
    @builtin(position) clip_position: vec4<f32>,
    @location(0) tex_coords: vec2<f32>,
};

@vertex
fn vs_main(
    model: VertexInput,
) -> VertexOutput {
    var out: VertexOutput;
    out.tex_coords = model.tex_coords;
    out.clip_position = vec4<f32>(model.position, 1.0);
    return out;
}
Let me know if you need any more info!
Okay, I see the problem: the WGSL is correct, but the GLSL produced by naga is invalid
#version 310 es

precision highp float;
precision highp int;

struct VertexInput {
    vec3 position;
    vec2 tex_coords;
};
struct VertexOutput {
    vec4 clip_position;
    vec2 tex_coords;
};

uniform highp sampler2DShadow _group_0_binding_0_fs;

layout(location = 0) smooth in vec2 _vs2fs_location0;
layout(location = 0) out vec4 _fs2p_location0;

void main() {
    VertexOutput in_ = VertexOutput(gl_FragCoord, _vs2fs_location0);
    float depth = texture(_group_0_binding_0_fs, vec2(in_.tex_coords)); // <-- Problem is here
    _fs2p_location0 = vec4(vec3(depth), 1.0);
    return;
}
The texture call is invalid because the correct form for a shadow sampler is
float texture(sampler2DShadow sampler, vec3 P [, float bias])
The third component of the coordinate vector P is the depth reference value. This is a problem because GLSL doesn't support sampling shadow textures, only comparing against them.
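Concretely, the only form GLSL ES accepts for this sampler is a comparison, something like the following (the 0.5 reference value is just a placeholder for illustration):
// Compares the stored depth against the reference value 0.5 and yields
// 0.0 or 1.0 - it never returns the raw depth value itself.
float result = texture(_group_0_binding_0_fs, vec3(in_.tex_coords, 0.5));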
Ah, good find! What would the ideal fix be? Using a constant for the bias on GLSL ES? I could open up the library and try my hand at a fix if I knew where to start.
This problem is more complicated than setting the bias: OpenGL, and by consequence GLSL, doesn't allow sampling depth textures, only comparison operations against them.
To fix this, wgpu (or whoever is consuming the shader) would need to coordinate with naga to bind the same texture as a normal, non-comparison texture so that naga could sample from that image instead.
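To make the "normal texture" part concrete, here is a hypothetical application-level sketch of the two binding types involved (the helper name is made up, and the actual fix would have to live inside wgpu/naga rather than in user code):
// Hypothetical sketch: the same depth texture exposed two ways.
// Binding 0 is the depth sample type the WGSL above uses today;
// binding 1 is what "binding it as a normal texture" would mean, so a
// backend without depth sampling could read it as a plain float texture.
fn depth_debug_layout(device: &wgpu::Device) -> wgpu::BindGroupLayout {
    device.create_bind_group_layout(&wgpu::BindGroupLayoutDescriptor {
        label: Some("depth-debug-layout"),
        entries: &[
            wgpu::BindGroupLayoutEntry {
                binding: 0,
                visibility: wgpu::ShaderStages::FRAGMENT,
                ty: wgpu::BindingType::Texture {
                    sample_type: wgpu::TextureSampleType::Depth,
                    view_dimension: wgpu::TextureViewDimension::D2,
                    multisampled: false,
                },
                count: None,
            },
            wgpu::BindGroupLayoutEntry {
                binding: 1,
                visibility: wgpu::ShaderStages::FRAGMENT,
                ty: wgpu::BindingType::Texture {
                    // Same data viewed as a non-comparison float texture.
                    sample_type: wgpu::TextureSampleType::Float { filterable: false },
                    view_dimension: wgpu::TextureViewDimension::D2,
                    multisampled: false,
                },
                count: None,
            },
        ],
    })
}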
Is there a workaround for this?
Since this is costly to work around, and WebGPU (which does allow sampling depth textures) will likely be widely available in the near future, should this issue be resolved by adding a DownlevelFlags-based validation error (perhaps by expanding DEPTH_TEXTURE_AND_BUFFER_COPIES) saying this is unsupported?
@kpreid At the very least I believe this could improve the experience of running the simple bevy examples with the GL backends. Right now it just fails and you have to manually turn off the offending pipeline stage (https://github.com/bevyengine/bevy/issues/18932). With such a flag you'd be able to disable the built-in mipmap generation when the downlevel capability is not supported (https://github.com/bevyengine/bevy/blob/main/crates/bevy_render/src/batching/gpu_preprocessing.rs#L1125).
Are you suggesting that we include in the documentation comment for the flag something like "This flag also covers textureLoad with depth textures."? Right now I've got that PR (https://github.com/bevyengine/bevy/pull/20304), which just got approved, and it would rely on this flag for exactly this.
Something like:
let flags = adapter
    .get_downlevel_capabilities()
    .flags;
let downlevel_support = flags.contains(DownlevelFlags::COMPUTE_SHADERS)
    && flags.contains(DownlevelFlags::DEPTH_TEXTURE_AND_BUFFER_COPIES);
// contd...
} else if !(culling_feature_support && limit_support && downlevel_support) {
    info!("Some GPU preprocessing features are limited on this device.");
    GpuPreprocessingMode::PreprocessingOnly
}
On my machine, PreprocessingOnly works on GL but culling does not. I believe this is because culling enables this shader for mipmap downsampling of the depth texture: https://github.com/bevyengine/bevy/blob/main/crates/bevy_core_pipeline/src/experimental/mip_generation/downsample_depth.wgsl