shaders: integer widths other than 32 bits
For the MVP we decided to support only 32-bit integers; see #229. After the MVP, reconsider 8-, 16-, and 64-bit integer widths.
The lack of 64-bit integers seems to be imposing limitations on Halide's WebGPU backend: https://github.com/halide/Halide/blob/60621b8b49f1b2539e8f2cc1892dce56c5fd8c5a/doc/WebGPU.md?plain=1#L18
I developed a WebGPU-based BLAKE2b implementation, and it runs roughly 20-30% slower than an equivalent OpenCL-based implementation, primarily because of the extra carry-handling operations required to do arithmetic on pairs of 32-bit integers representing the 64-bit values the algorithm is defined over. Extending WGSL to support u64 would eliminate these extra operations and could bring it up to par with OpenCL for my project.
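To illustrate the overhead, here is a minimal sketch (in Rust, since WGSL has no u64 to compare against) of the carry-propagating addition a shader must perform when a 64-bit value is stored as a `(lo, hi)` pair of 32-bit words. The `add64` helper name and tuple representation are illustrative, not from any particular implementation:

```rust
// A 64-bit value emulated as (lo, hi) 32-bit halves, as a WGSL shader
// without u64 must represent it.
fn add64(a: (u32, u32), b: (u32, u32)) -> (u32, u32) {
    // Add the low words; overflowing_add reports whether a carry occurred.
    let (lo, carry) = a.0.overflowing_add(b.0);
    // Add the high words plus the carry, wrapping like hardware addition.
    let hi = a.1.wrapping_add(b.1).wrapping_add(carry as u32);
    (lo, hi)
}

fn main() {
    // 0x00000000_FFFFFFFF + 1: the carry must propagate into the high word.
    let r = add64((0xFFFF_FFFF, 0), (1, 0));
    assert_eq!(r, (0, 1));
    println!("{:?}", r);
}
```

With native u64, each such pair of adds (plus the carry test) collapses into a single instruction, which is the saving reported above.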
Copying request from @MichealRead in #5152
Currently, WGSL supports only 32-bit integer types (i32 and u32). Many applications could greatly benefit from native support for additional integer types, such as i8, i16, i64, u8, u16, and u64.
- Supporting smaller integer sizes (i8, i16, u8, u16) would enable more compact data representations, reducing memory usage and potentially improving cache performance.
- Larger integer types (i64, u64) allow handling of larger numbers natively, which is useful for high-precision computations and applications requiring extended numerical ranges.
- Enhanced integer support would benefit many workloads, most notably AI and video processing, where 8- and 16-bit buffers are common.
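For context on the smaller-width case: without native u8, applications today manually pack four 8-bit lanes into one u32 and shift/mask them back out. A rough sketch in Rust (the `pack4xu8`/`unpack4xu8` helper names are hypothetical, chosen to mirror WGSL's pack4x8-style builtins):

```rust
// Pack four 8-bit values into one 32-bit word, little-endian lane order,
// as shader code must do today in lieu of a native u8 type.
fn pack4xu8(v: [u8; 4]) -> u32 {
    (v[0] as u32)
        | (v[1] as u32) << 8
        | (v[2] as u32) << 16
        | (v[3] as u32) << 24
}

// Recover the four lanes by shifting and masking.
fn unpack4xu8(p: u32) -> [u8; 4] {
    [
        (p & 0xFF) as u8,
        ((p >> 8) & 0xFF) as u8,
        ((p >> 16) & 0xFF) as u8,
        (p >> 24) as u8,
    ]
}

fn main() {
    let packed = pack4xu8([1, 2, 3, 4]);
    assert_eq!(packed, 0x0403_0201);
    assert_eq!(unpack4xu8(packed), [1, 2, 3, 4]);
}
```

Native i8/u8 types would let such buffers be indexed directly, without the shift-and-mask round trips on every access.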
When the buffer is a large array of small ints, the use case may be covered by storage texel buffers. See #162