Feature request: textureQueryLod equivalent
Returns the level of detail that would be used when sampling a texture with a sampler + a certain set of texture coordinates.
SPIR-V: OpImageQueryLod, which returns the clamped LOD in the X component and the unclamped LOD in the Y component. HLSL: CalculateLevelOfDetail for the clamped LOD, CalculateLevelOfDetailUnclamped for the unclamped LOD. MSL: calculate_clamped_lod for the clamped LOD, calculate_unclamped_lod for the unclamped LOD.
WGSL 2023-06-13 Minutes
- KG: Sounds good but not right now.
- AB: Question about unclamped functionality in SPIR-V. Might not be the same thing (as assumed/stated in OP).
- DN: If it’s common functionality then let’s put it in. Needs investigation.
- MM: Polyfill? Take the derivatives yourself.
- KG: Would be sensitive to sampler state.
- BC: Drivers can put in custom biases that are impossible to detect.
I just hit this missing function in Unity shaders; it's something that is used frequently.
WGSL 2023-12-05 Minutes
- AB: Think we just missed this. Should be easy.
- → M1
Being able to query the LOD is essential for existing shaders, and the WGSL spec should include it, provided the details of implicit derivative consumption are taken into account.
I made a simple repository with two minimal apps (Metal through Swift and Vulkan through wgpu-rs) that render a simple triangle with mip levels to an off-screen texture, plus a Python tool that consumes the binary artifacts to compare output between APIs and drivers: https://github.com/mehmetoguzderin/wgsl-20240121-querylod
Running Metal on M1:
swift main.swift
python3 main.py output.metal.bin 256 510
Pixel value at (256, 510): (3.3203125, 3.3203125, 2.5, 1.0)
Running Vulkan on 4060 Laptop:
cargo run
python3 main.py output.vulkan.bin 256 510
Pixel value at (256, 510): (3.32421875, 3.32421875, 2.5, 1.0)
And running Vulkan through Portability on M1:
Pixel value at (256, 510): (3.3203125, 3.3203125, 2.5, 1.0)
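The Metal and Vulkan results differ only in the low bits of the LOD, consistent with mipmap precision quantization. A hedged sketch of how the outputs could be compared with such a tolerance (precision_bits is an assumption about the hardware, not a queried value; Vulkan only requires at least 4 mipmap precision bits):

```python
# Sketch: treat two queried LOD values as matching if they differ by at
# most one quantization step of an assumed mipmap precision.
def lods_match(a: float, b: float, precision_bits: int = 8) -> bool:
    # One step of the fractional LOD at the assumed precision
    step = 1.0 / (1 << precision_bits)
    return abs(a - b) <= step

# Metal on M1 vs. Vulkan on the 4060 Laptop, red channel at (256, 510):
print(lods_match(3.3203125, 3.32421875))  # True: difference is exactly 1/256
```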
These results seem to align with the interpretation found in DirectXShaderCompiler: https://github.com/microsoft/DirectXShaderCompiler/blob/main/tools/clang/test/CodeGenSPIRV/texture.calculate-lod.hlsl and https://github.com/microsoft/DirectXShaderCompiler/blob/main/tools/clang/test/CodeGenSPIRV/texture.calculate-lod-unclamped.hlsl
//CHECK: [[t1:%[0-9]+]] = OpLoad %type_1d_image %t1
//CHECK-NEXT: [[ss1:%[0-9]+]] = OpLoad %type_sampler %ss
//CHECK-NEXT: [[x1:%[0-9]+]] = OpLoad %float %x
//CHECK-NEXT: [[si1:%[0-9]+]] = OpSampledImage %type_sampled_image [[t1]] [[ss1]]
//CHECK-NEXT: [[query1:%[0-9]+]] = OpImageQueryLod %v2float [[si1]] [[x1]]
//CHECK-NEXT: {{%[0-9]+}} = OpCompositeExtract %float [[query1]] 0
float lod1 = t1.CalculateLevelOfDetail(ss, x);
//CHECK: [[t1:%[0-9]+]] = OpLoad %type_1d_image %t1
//CHECK-NEXT: [[ss1:%[0-9]+]] = OpLoad %type_sampler %ss
//CHECK-NEXT: [[x1:%[0-9]+]] = OpLoad %float %x
//CHECK-NEXT: [[si1:%[0-9]+]] = OpSampledImage %type_sampled_image [[t1]] [[ss1]]
//CHECK-NEXT: [[query1:%[0-9]+]] = OpImageQueryLod %v2float [[si1]] [[x1]]
//CHECK-NEXT: {{%[0-9]+}} = OpCompositeExtract %float [[query1]] 1
float lod1 = t1.CalculateLevelOfDetailUnclamped(ss, x);
Similarly SPIRV-Cross: https://github.com/KhronosGroup/SPIRV-Cross/blob/main/shaders-msl/frag/image-query-lod.msl22.frag and https://github.com/KhronosGroup/SPIRV-Cross/blob/main/reference/shaders-msl/frag/image-query-lod.msl22.frag
//FragColor += textureQueryLod(uSampler2D, vUV.xy);
float2 _22;
_22.x = uSampler2D.calculate_clamped_lod(uSampler2DSmplr, vUV.xy);
_22.y = uSampler2D.calculate_unclamped_lod(uSampler2DSmplr, vUV.xy);
Given this information, I'd like to propose adding the following two functions, unless the group strongly favors a single function returning both values, which would require calling both backend functions on APIs where each call returns a scalar:

- textureQueryLodClamped: OpImageQueryLod component 0, calculate_clamped_lod, CalculateLevelOfDetail
- textureQueryLodUnclamped: OpImageQueryLod component 1, calculate_unclamped_lod, CalculateLevelOfDetailUnclamped
Thank you!
Execution reference image:

WGSL 2024-01-23 Minutes
- Oguz: I made a simple inquiry into the built-in behavior across APIs and drivers. In combination with ecosystem info, I suggest two functions. My comment in the issue: https://github.com/gpuweb/gpuweb/issues/4180#issuecomment-1907044827
- KG: Why two functions?
- MOD: HLSL and MSL have that.
- DN: I don’t understand the two; the wording is different. Vulkan’s reads like truncated and untruncated.
- MOD: Wrote the app to check. It’s really two different values.
- DN: I’ll need time to absorb the example and what it means.
- DN: from SPIR-V spec
- The first component of the result contains the mipmap array layer.
- The second component of the result contains the implicit level of detail relative to the base level.
- DN: Seems like it can be affected by sampler LoD offset?
- KG: willing to be a third set of eyes.
- DN: I need more time.
- MOD: Easy to hack the project, linked in the issue.
For reference SPIR-V's OpImageQueryLod is here: https://registry.khronos.org/SPIR-V/specs/unified1/SPIRV.html#OpImageQueryLod
The thing I'm concerned about is what happens if the sampler has a mipLodBias that is non-zero. (See VkSamplerCreateInfo).
D3D samplers also have a MipLODBias. See https://learn.microsoft.com/en-us/windows/win32/api/d3d12/ns-d3d12-d3d12_sampler_desc
Vulkan then describes the LOD calculation in detail:
https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#textures-level-of-detail-operation
That calculation incorporates the sampler's mipLodBias.
But WebGPU samplers don't have a mipLodBias. A quick check shows Metal doesn't support it, and that's why WebGPU doesn't?
WGSL 2024-03-12 Minutes
- DN: I was worried about the MIP LOD bias that the sampler can have in Vulkan and D3D12. But WebGPU does not have that. So it doesn’t matter whether the first component is with or without the bias: for us it will always be zero. So the proposal seems right. Metal may acquire the bias feature that Vulkan and D3D have, but for now we don’t need that complexity. Don’t feel like I understand this super-well. I’m not sure exactly what the HLSL functions actually do in the presence of the bias. The proposal seems to match the HLSL form, which makes sense if we’re only going to be using one or the other. If we get a biased version, it’d probably follow the HLSL biased versions.
- DN: I’d like another week to check the math.
- MOD: The definitions in the specs don’t mention the bias, it seems like they delegate that to other parts of the specification. So I thought we wouldn’t need to take that into account
- DN: In GLSL, it says, look at the lambda calculation in thus-and-such equation, return that. Vulkan has the same thing written out slightly differently, so it probably does end up being: compute the level of detail as the sampler would, but before truncating/clamping it. There may be clarifications needed.
- MOD: When you say “Vulkan”, do you mean the SPIR-V specification?
- DN: In the issue, I linked to the Vulkan LOD spec, and in there it explains how it calculates the value. It shows the various clampings, and truncations in the next section (“image level selection”). Functionally the SPIR-V instruction gets you the “lambda” from the first section and the level from the next section.
- MOD: In case we define this function, should they live in the API spec or the WGSL spec?
- DN: Ideally the WebGPU spec should explain how sampling occurs. The Vulkan spec has a good presentation to use as a model.
- KG: At Mozilla, we discussed whether it should be one function or two. It seems fine either way, because picking the wrong one is easily fixed by adding the other style later.
- DEFERRED TO NEXT MEETING
(From my notes on 2024-03-26)
- I'm trying to map to lambda and d in the Vulkan spec.
- https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#textures-lod-and-scale-factor
- https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#_lod_query
-
Otherwise, the steps described in this chapter are performed as if for OpImageSampleImplicitLod, up to Scale Factor Operation, LOD Operation and Image Level(s) Selection. The return value is the vector (λ', dl). These values may be subject to implementation-specific maxima and minima for very large, out-of-range values.
- Also, should the array level return type be integer instead of fp?
Here's the worked out math:
Sampler relevant parameters:
| WebGPU | WebGPU Default | Vulkan |
|---|---|---|
| GPUSamplerDescriptor | | VkSamplerCreateInfo |
| n/a | effectively 0 | float .mipLodBias |
| n/a (implied by maxAnisotropy > 1) | | bool .anisotropyEnable |
| maxAnisotropy | 1 | float .maxAnisotropy |
| lodMinClamp | 0 | float .minLod |
| lodMaxClamp | 32 | float .maxLod |
| mipmapFilter | nearest | VkSamplerMipmapMode .mipmapMode |
Texture view relevant parameters:
| WebGPU | type | Vulkan |
|---|---|---|
| GPUTextureViewDescriptor | | VkImageSubresourceRange |
| baseMipLevel | int | baseMipLevel |
| mipLevelCount | int | levelCount |
Jumping partway into Vulkan's Scale Factor Operation:
Start with normalized texel coordinates provided by the builtin: (s,t,r) (Beware, the SPIR-V OpImageQueryLod calls them u,v,w)
Compute rates of change between adjacent sampling points:
m_ux = |ds/dx| * w_base
m_vx = |dt/dx| * h_base
m_wx = |dr/dx| * d_base
m_uy = |ds/dy| * w_base
m_vy = |dt/dy| * h_base
m_wy = |dr/dy| * d_base
Where w_base, h_base, and d_base are the dimensions of the base layer of the texture.
Compute rho_x, rho_y from the m_* values so that roughly speaking:
rho_x = roughly the number of base level texels spanned by adjacent sampling points in the 1st dimension
rho_y = roughly the number of base level texels spanned by adjacent sampling points in the 2nd dimension
and
rho_max = max(rho_x, rho_y)
Compute eta to support anisotropic filtering:
eta = min(rho_max / rho_min, maxAnisotropy), where rho_min = min(rho_x, rho_y)
- eta == 1 is the simple case, i.e. no anisotropy adjustment
- eta > 1 if you want more detail in the distance when the x and y rates of change are unbalanced
Adjacent texels at mip level lambda_base have image values averaged from all base level texels spanned by adjacent sampling points.
lambda_base = log2(rho_max / eta)
Vulkan computes float value:
lambda' = lambda_base + clamp(sampler.mipLodBias + shaderOp.bias, -maxSamplerLodBias, maxSamplerLodBias)
But WebGPU doesn't have sampler.mipLodBias, so use 0. And for textureQueryLod, there is no shaderOp.bias field, so use 0.
So we are left with:
lambda' = lambda_base
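The derivation above can be sketched in Python for the 2D case (names are illustrative; using the Euclidean length for rho is one implementation choice the spec permits, not a mandate):

```python
import math

# Sketch of the unclamped LOD (lambda') for a 2D texture, following the
# scale factor steps above; with no mipLodBias and no shader bias,
# lambda' == lambda_base.
def unclamped_lod(dsdx, dtdx, dsdy, dtdy, w_base, h_base, max_anisotropy=1.0):
    # Rates of change measured in base-level texels
    m_ux, m_vx = abs(dsdx) * w_base, abs(dtdx) * h_base
    m_uy, m_vy = abs(dsdy) * w_base, abs(dtdy) * h_base
    # rho: texels spanned by adjacent sampling points along each window axis
    rho_x = math.hypot(m_ux, m_vx)
    rho_y = math.hypot(m_uy, m_vy)
    rho_max, rho_min = max(rho_x, rho_y), min(rho_x, rho_y)
    # Anisotropy adjustment
    eta = min(rho_max / rho_min, max_anisotropy) if rho_min > 0 else 1.0
    return math.log2(rho_max / eta)

# Isotropic 8x minification of a 256x256 texture: adjacent pixels step
# 8 texels, so the implicit LOD is log2(8) = 3.
print(unclamped_lod(8/256, 0.0, 0.0, 8/256, 256, 256))  # 3.0
```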
Proposed text:
textureQueryLodUnclamped returns the floating point level of detail that would be sampled at the given coordinates, ignoring the sampler parameters lodMinClamp and lodMaxClamp, and ignoring the texture view parameters baseMipLevel and mipLevelCount. This approximates the mip level where adjacent texels at the given coordinates contain the image content from all level-0 texels spanned by adjacent sampling points, assuming an idealized environment where all mip levels exist and are populated.
Continuing Vulkan's computations...
Clamp to the sampler parameters:
lambda = clamp( lambda', minLod, maxLod )
Clamp to texture view parameters:
q = textureview.subresourcerange.levelCount - 1
level_base = textureview.subresourcerange.baseMipLevel
d' = max( level_base + clamp(lambda, 0, q), minLod_imageview )
Here minLod_imageview is from an extension; assume 0.
So basically:
d' = level_base + clamp(lambda, 0, q)
Now d_l adjusts according to the sampler's mipmapMode:
d_l = nearest(d') if mipmapMode is nearest
d_l = d' otherwise
where nearest(d') is basically rounding d' to the nearest integer.
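Continuing the sketch for the clamp and level-selection steps (WebGPU sampler defaults assumed; the image view's extension minLod is taken as 0, per the note above):

```python
# Sketch of the clamp and level-selection steps for textureQueryLodClamped.
def clamped_lod(lambda_unclamped, lod_min_clamp=0.0, lod_max_clamp=32.0,
                base_mip_level=0, mip_level_count=1, mipmap_filter="linear"):
    # Clamp to sampler parameters: lambda = clamp(lambda', minLod, maxLod)
    lam = min(max(lambda_unclamped, lod_min_clamp), lod_max_clamp)
    # Clamp to texture view parameters: d' = level_base + clamp(lambda, 0, q)
    q = mip_level_count - 1
    d_prime = base_mip_level + min(max(lam, 0.0), float(q))
    # mipmapFilter "nearest" rounds to the nearest whole level
    return float(round(d_prime)) if mipmap_filter == "nearest" else d_prime

# A 512x512 texture has 10 mip levels; the unclamped LOD passes through
# unchanged for a linear mipmap filter:
print(clamped_lod(3.3203125, mip_level_count=10))  # 3.3203125
print(clamped_lod(7.5, mip_level_count=4))         # clamped to q = 3
```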
Proposed:
textureQueryLodClamped returns the floating point level of detail that would be sampled at the given coordinates, incorporating the sampler and texture view parameters. Conceptually, this first computes the textureQueryLodUnclamped value at the given coordinates, then clamps to the sampler's lodMinClamp and lodMaxClamp range, then further adjusts to the texture view's baseMipLevel and mipLevelCount parameters.
Comparing with MSL:
calculate_unclamped_lod: calculates the level of detail that would be sampled for the given coordinates, ignoring any sampler parameter. The fractional part of this value contains the mip level blending weights, even if the sampler indicates a nearest mip selection.
calculate_clamped_lod: similar to calculate_unclamped_lod, but additionally clamps the LOD to stay:
- within the texture mip count limits,
- within the sampler's lod_clamp min and max values,
- less than or equal to the sampler's max_anisotropy value
Note 1: "The fractional part of this value contains the mip level blending weights, even if the sampler indicates a nearest Mip selection." I missed this bit.
In comparison, Vulkan calculates a delta, then:
δ is the fractional value, quantized to the number of mipmap precision bits, used for linear filtering between levels.
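A sketch of that quantization (both the rounding direction and the precision are implementation details; floor and 8 bits are assumptions here):

```python
import math

# Sketch: fractional LOD used as the between-level blend weight,
# quantized to an assumed number of mipmap precision bits.
def mip_blend_delta(d_prime: float, precision_bits: int = 8) -> float:
    frac = d_prime - math.floor(d_prime)
    steps = 1 << precision_bits
    return math.floor(frac * steps) / steps

print(mip_blend_delta(3.3203125))  # 0.3203125 (already a multiple of 1/256)
```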
Note 2: MSL documents the effect of anisotropy on the clamped version of the function. By my reading, anisotropy is fully accounted for in the original lambda' calculation. I'm not sure if this will show up in testing, or would be a documentation bug. (I presume the hardware tends to do the same thing.)
WGSL 2024-03-26 Minutes
- (DN asked for extra time last time)
- DN: Didn’t get enough time.
- Trying to translate to lambda and d in the Vulkan spec. (https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#textures-lod-and-scale-factor )
- https://registry.khronos.org/vulkan/specs/1.3-extensions/html/vkspec.html#_lod_query
- Otherwise, the steps described in this chapter are performed as if for OpImageSampleImplicitLod, up to Scale Factor Operation, LOD Operation and Image Level(s) Selection. The return value is the vector (λ', dl). These values may be subject to implementation-specific maxima and minima for very large, out-of-range values.
- And should the array level return type be integer instead of fp?
WGSL 2024-04-16 Minutes
- DN: Did homework. Read math and have rough text to brain dump. Think we can go forward with basically previous consensus. Have much clearer picture and have text we can drop in and use.
- KG: Assigned to Oguz should we move to DN?
- MOD: David can handle it.