spirq-rs
OpTypeImage sampled=1 depth=2 should be recognized as a sampled image to support HLSL
Apologies if this is a duplicate, but HLSL generates the following SPIR-V type definition for:
Texture2D<float4> albedo_tex;
this yields:
%type_2d_image = OpTypeImage %float 2D 2 0 0 1 Unknown
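For reference, the operands of that OpTypeImage instruction break down as follows per the SPIR-V specification (the annotation below is added for clarity; it is not part of the DXC output):

```
%type_2d_image = OpTypeImage %float 2D 2 0 0 1 Unknown
;; operand order: sampled-type, Dim, Depth, Arrayed, MS, Sampled, Format
;; Depth   = 2 : no indication whether or not this is a depth image
;; Sampled = 1 : will be used with a sampler (i.e. a sampled image)
```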
Currently this makes SPIR-Q emit an unsupported image configuration error. It should instead be recognized as a sampled image: according to the SPIR-V spec, Depth=2 means it is unspecified whether this is a depth texture (the Sampled operand similarly has an unknown-at-compile-time value). However, Sampled=1 means "definitely sampled", so this type of texture should be recognized as a sampled texture. I tried this by changing the image type code to:
pub fn from_spv_def(
    is_sampled: u32,
    is_depth: u32,
    color_fmt: ImageFormat,
) -> Result<ImageUnitFormat> {
    let img_unit_fmt = match (is_sampled, is_depth, color_fmt) {
        (1, 0, _) => ImageUnitFormat::Sampled,
        (1, 2, _) => ImageUnitFormat::Sampled,
        (1, 1, _) => ImageUnitFormat::Depth,
        (2, 0, color_fmt) => ImageUnitFormat::Color(color_fmt),
        (2, 2, color_fmt) => ImageUnitFormat::Color(color_fmt),
        _ => return Err(Error::UNSUPPORTED_IMG_CFG),
    };
    Ok(img_unit_fmt)
}
But I am not sure whether there are other considerations that should be taken into account. The reverse should probably also be done, i.e. (2, 1, _) => ImageUnitFormat::Depth. Note that this code also maps (2, 2) to a color image, since the approach is to assume that 2 means "no".
This is the same behavior as rspirv-reflect, by the way.
Thanks for your time debugging this! I wonder what shader compiler you've been using? Is it DXC?
I decided not to support those compile-time-unknown formats basically because it wouldn't correctly reflect how the shading code uses them. Some crates do rely on the reflected info to allocate texture resources, so I would like to give results with as much certainty as possible. What I would offer is an option infer_tex_fmt in ReflectConfig to allow SPIR-Q to analyze shader programs: if an entry point samples a runtime-formatted texture with OpImageSampleDrefXxx instructions, it will be treated as a depth image. If a runtime-formatted texture is never used, DXC with -O1 optimization (the default is -O3) is enough to eliminate it. So I think it works for most cases? Does that sound good to you?
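A minimal sketch of the inference idea described above (not SPIR-Q's actual implementation): classify an image as a depth image if it is ever sampled with one of the depth-comparison opcodes. The opcode numbers below are taken from the SPIR-V specification; everything else is illustrative.

```rust
// Depth-comparison sampling opcodes from the SPIR-V specification.
const OP_IMAGE_SAMPLE_DREF_IMPLICIT_LOD: u16 = 89;
const OP_IMAGE_SAMPLE_DREF_EXPLICIT_LOD: u16 = 90;
const OP_IMAGE_SAMPLE_PROJ_DREF_IMPLICIT_LOD: u16 = 93;
const OP_IMAGE_SAMPLE_PROJ_DREF_EXPLICIT_LOD: u16 = 94;

/// Returns true if the opcode is a depth-comparison (Dref) sampling
/// instruction, which implies the sampled image is a depth image.
fn is_dref_sample(opcode: u16) -> bool {
    matches!(
        opcode,
        OP_IMAGE_SAMPLE_DREF_IMPLICIT_LOD
            | OP_IMAGE_SAMPLE_DREF_EXPLICIT_LOD
            | OP_IMAGE_SAMPLE_PROJ_DREF_IMPLICIT_LOD
            | OP_IMAGE_SAMPLE_PROJ_DREF_EXPLICIT_LOD
    )
}

fn main() {
    // OpImageSampleImplicitLod (87) is a plain sample; 89 is a Dref variant.
    assert!(!is_dref_sample(87));
    assert!(is_dref_sample(89));
}
```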
Yep, this was with DXC. I think your approach may potentially work, but I can't really tell until it is implemented, haha.
I thought a little more about this, and it seems not quite necessary to do that. Maybe we can simply mark ImageType::unit_fmt and ImageType::arng as optional. Then we can get rid of Error::UNSUPPORTED_IMG_CFG and be more conformant with the spec; i.e., we make the from_spv_def functions return Option<T>. You can PR this change if you are interested.
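The Option-based change proposed above might look roughly like the following, using stand-in enums since the real crate's ImageFormat and ImageUnitFormat carry more variants:

```rust
// Stand-in types for illustration; the real spirq types are richer.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ImageFormat {
    Unknown,
}

#[derive(Debug, Clone, Copy, PartialEq)]
enum ImageUnitFormat {
    Sampled,
    Depth,
    Color(ImageFormat),
}

/// Returns `None` for operand combinations the SPIR-V spec leaves
/// unspecified, instead of failing with UNSUPPORTED_IMG_CFG, so that
/// downstream code decides how to treat the unknown cases.
fn from_spv_def(
    is_sampled: u32,
    is_depth: u32,
    color_fmt: ImageFormat,
) -> Option<ImageUnitFormat> {
    match (is_sampled, is_depth) {
        // Sampled=1 is "definitely sampled", even when Depth=2 is unknown.
        (1, 0) | (1, 2) => Some(ImageUnitFormat::Sampled),
        (1, 1) => Some(ImageUnitFormat::Depth),
        // Sampled=2 is a storage (non-sampled) image.
        (2, 0) | (2, 2) => Some(ImageUnitFormat::Color(color_fmt)),
        _ => None,
    }
}

fn main() {
    // The DXC-emitted `OpTypeImage %float 2D 2 0 0 1 Unknown` case:
    assert_eq!(
        from_spv_def(1, 2, ImageFormat::Unknown),
        Some(ImageUnitFormat::Sampled)
    );
    // An unhandled combination is now `None` rather than an error.
    assert_eq!(from_spv_def(0, 0, ImageFormat::Unknown), None);
}
```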
Yeah, I completely agree, that seems like the best path. I will PR this if I have time later 🦀 👍