[SPIR-V] Typed Enum Implicit Conversions Fail
Description
When a typed enum is used in a math operation, its value can fail to resolve to the correct constant. Casting the enum to its underlying type in place works around the issue.
Steps to Reproduce
- Create a typed enum, in our case an enum : int { }.
- Use that enum in a floating-point operation.
- Observe that the enum value is always compiled to 0.
Actual Behavior
Shader playground minimal example of failure here.
enum : int
{
//Hottest star is <50000K and the coldest star is >600K
STAR_MAX_TEMPERATURE = 50000,
STAR_MIN_TEMPERATURE = 600,
STAR_TEMPERATURE_RANGE = (STAR_MAX_TEMPERATURE - STAR_MIN_TEMPERATURE),
};
...
float x = 3.0;
buffer[threadId] = STAR_TEMPERATURE_RANGE * x;
compiles to
%int_0 = OpConstant %int 0
Instead of the expected
%int_148200 = OpConstant %int 148200
Shader playground minimal example of success here.
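For reference, the in-place cast workaround mentioned in the description can be sketched as follows (assuming the same `buffer`/`threadId` setup as the failing example):

```hlsl
// Casting the enum value to float before the multiply avoids the
// integer-to-float handling that miscompiles, and yields the
// expected constant.
buffer[threadId] = (float)STAR_TEMPERATURE_RANGE * x;
```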
Environment
- DXC version: trunk (I don't see one clearly on shader-playground, but we're targeting cs_6_6, and are using 1.8 locally)
- Host Operating System: Ubuntu 22.04 LTS, or whatever shader-playground runs on (also probably Linux, but I don't know)
The initial code generated by DXC looks correct: https://godbolt.org/z/onr1n3boP. The optimizer must be doing something wrong.
My last comment is wrong. DXC is generating an incorrect instruction:
%51 = OpBitcast %float %50
This should be an OpConvertUToF and not a bitcast. The bitcast of 49,400 results in a subnormal, which can be flushed to 0.
Yep, seems like this is the reason. Looking into it.