VERTEXFORMAT for `uvec`n attribute types
I have a vertex buffer of 32-bit unsigned integers. In the vertex shader I want to access each of them as a uvec4, i.e. every component is an unsigned byte. Normally in OpenGL 3.3 I would use glVertexAttribIPointer to specify the attribute as an integer. However, I don't see any calls to glVertexAttribIPointer in sokol_gfx.h.
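For context, here's roughly what I do with plain GL (a sketch; attribute slot 3 and the stride/offset values are just placeholders for this example):

```c
// plain OpenGL 3.3: attribute slot 3 holds 4 unsigned bytes per vertex,
// declared in the vertex shader as "in uvec4 bone_indices;"
glBindBuffer(GL_ARRAY_BUFFER, vbuf);
glEnableVertexAttribArray(3);
glVertexAttribIPointer(3, 4, GL_UNSIGNED_BYTE, stride, (const void*)offset);
```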
I see from the documentation that the most portable vertex format options are:
- SG_VERTEXFORMAT_FLOAT
- SG_VERTEXFORMAT_FLOAT2
- SG_VERTEXFORMAT_FLOAT3
- SG_VERTEXFORMAT_FLOAT4
- SG_VERTEXFORMAT_BYTE4N
- SG_VERTEXFORMAT_UBYTE4N
- SG_VERTEXFORMAT_SHORT2N
- SG_VERTEXFORMAT_USHORT2N
- SG_VERTEXFORMAT_SHORT4N
- SG_VERTEXFORMAT_USHORT4N
Does that mean currently there's no way to specify integral values (or vectors composed of them) as attributes without normalizing to floats?
I used UBYTE4, typed the variable as vec4 on the shader side, and cast it to integer; it works. However, would it be better to implement a code path where glVertexAttribIPointer is called for these integral types?
AFAIK this part in the documentation only concerns a special behaviour in D3D11, so if you want shader code that works across GLSL and HLSL (via a shader cross-compiler), you need to use one of the normalized formats.
For GLSL I would expect that it works to define the vertex format in the pipeline desc as UBYTE4, and in the GLSL shader define the vertex attribute as:
```glsl
in uvec4 my_attr;
```
I would expect that the vertex input stage then does the required conversion (like it does for integer input data which is used as float in the shader code).
Does this work? I haven't used integer vector attributes on the shader side so far, because this isn't supported on GLES2.
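Something like this in the pipeline desc (just a rough sketch, assuming the attribute sits in vertex attribute slot 3 and `shd` is a shader created elsewhere):

```c
// sketch: declare the attribute as a non-normalized 4-byte integer format
sg_pipeline pip = sg_make_pipeline(&(sg_pipeline_desc){
    .shader = shd,
    .layout = {
        .attrs = {
            [0] = { .format = SG_VERTEXFORMAT_FLOAT3 },  // e.g. position
            [3] = { .format = SG_VERTEXFORMAT_UBYTE4 },  // e.g. bone indices
        }
    },
    // ...
});
```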
TBH I don't know why the glVertexAttribIPointer() function exists, because it means the CPU side needs to know whether a vertex attribute is going to be used as a float or integer vector in the shader, which seems a bit odd. Other 3D APIs don't have separate functions for float versus integer vertex data (AFAIK at least; only texture samplers in modern 3D APIs have this requirement, which is why I had to add SG_SAMPLERTYPE_xxx to sokol_gfx.h).
PS: if glVertexAttribIPointer() is needed for reading the input vertex data into uvec4/ivec4, we need to check whether this also works as expected when the shader uses vec4 instead.
Otherwise we'd need two "vertex formats" in the pipeline desc, one that describes the input data format, and one that describes whether the shader reads the vertex attribute as float or integer vector (and that would suck a bit, because AFAIK that would be specific to GLES3 and desktop GL - but I need to read up on this specific topic in other APIs too).
> AFAIK this part in the documentation only concerns a special behaviour in D3D11, so if you want shader code that works across GLSL and HLSL (via a shader cross-compiler), you need to use one of the normalized formats.
Yeah, got that part, thanks. Sometimes we need whole values, i.e. non-normalized, and I understand that it's not completely portable.
> For GLSL I would expect that it works to define the vertex format in the pipeline desc as UBYTE4, and in the GLSL shader define the vertex attribute as
> `in uvec4 my_attr;`
Did exactly this and it didn't work: I got a corrupted scene (the model was all twisted and partially rendered). What I did instead was to specify the format as SG_VERTEXFORMAT_UBYTE4 and cast to int in the shader:

```glsl
layout(location=3) in vec4 bones;
bone_xform[int(bones.x)];
```

Though this works with the GLCORE33 backend, it doesn't with D3D11. For the D3D11 backend, typing it as uvec4 (or ivec4) works.
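i.e. roughly this variant (the same attribute as above, just typed as an unsigned integer vector) is what renders correctly for me on D3D11:

```glsl
layout(location=3) in uvec4 bones;
bone_xform[bones.x];
```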
I'm not sure about other rendering libraries; my experience is mostly with OpenGL (and its family). But I am sure that the type used in the shader has to match the corresponding glVertexAttrib*Pointer variant. Here's a snippet from my notes:
| Function | GLSL Type |
|---|---|
| glVertexAttribPointer | vecn |
| glVertexAttribIPointer | ivecn, uvecn |
| glVertexAttribLPointer | dvecn |
> I would expect that the vertex input stage then does the required conversion (like it does for integer input data which is used as float in the shader code).
No; for integer input data used as float in the shader, the conversion happens because you use glVertexAttribPointer (note the missing I or L).
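To illustrate with the same unsigned-byte source data (slot/stride/offset are placeholders here):

```c
// shader declares "in vec4 attr;": each byte is converted to float
// (e.g. the byte value 7 becomes 7.0; no normalization since 'normalized' is GL_FALSE)
glVertexAttribPointer(3, 4, GL_UNSIGNED_BYTE, GL_FALSE, stride, (const void*)offset);

// shader declares "in uvec4 attr;": values stay integers
// (note there is no 'normalized' parameter at all)
glVertexAttribIPointer(3, 4, GL_UNSIGNED_BYTE, stride, (const void*)offset);
```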
> if glVertexAttribIPointer() is needed for reading the input vertex data into uvec4/ivec4, we need to check whether this also works as expected when the shader uses vec4 instead.
You mean, feed in float data via glVertexAttribIPointer and type it as vec4 in the shader? Here's the relevant excerpt from the documentation (emphasis mine):

> For glVertexAttribIPointer, only the integer types GL_BYTE, GL_UNSIGNED_BYTE, GL_SHORT, GL_UNSIGNED_SHORT, GL_INT, GL_UNSIGNED_INT are accepted. *Values are always left as integer values.*
I don't think it would work. I can test this if you want me to. Reason for my doubt: floats have a larger range than same-width (u32) integers, and a 32-bit float can't represent every 32-bit integer value exactly.
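For example, a quick check in C of the first 32-bit integer that a 32-bit float can't hold exactly:

```c
#include <stdio.h>

int main(void) {
    unsigned int u = 16777217u;   // 2^24 + 1
    float f = (float)u;           // rounds to 16777216.0f
    printf("%u -> %.1f\n", u, f); // prints: 16777217 -> 16777216.0
    return 0;
}
```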
Hello, I have the same problem. It seems that in sokol with the GL 3.3 backend you can't have non-float vertex attributes; everything has to be floats.
To work around this, I found that you can use the floatBitsToInt function in GLSL to bitcast float -> int while keeping the bit representation intact.
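Roughly like this (just a sketch; the attribute name, the location and the bone_xform lookup are made-up examples, and it assumes the attribute is declared with a float vertex format such as SG_VERTEXFORMAT_FLOAT so the raw 32 bits pass through unchanged):

```glsl
#version 330
layout(location=3) in float bone_packed;  // actually holds integer bits

void main() {
    // reinterpret the bits as an int, no value conversion happens
    int bone_index = floatBitsToInt(bone_packed);
    // ... e.g. look up bone_xform[bone_index] and compute the position ...
    gl_Position = vec4(0.0); // placeholder
}
```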
Hope it helps.