simd
Float16 / bfloat16
Float16/bfloat16 support for loads and stores is missing. I do not expect SIMD extensions to actually perform operations on them in their native format in hardware (although it would be nice to expose that where available and emulate it transparently elsewhere), but at least loads and stores should be supported: loads that widen into 32-bit floats, and stores that narrow back. Maybe not in the initial version, but these formats are very useful in machine learning and video/photo editing applications.
Loads and stores are provided at v128 vector sizes by this proposal. Agreed that there could be a need for float16 operations in the future for these application categories; they would need to be introduced in both scalar and SIMD form.