Graham Markall
Currently `float16` is supported on the CUDA target (maybe not everything in the latest release, but it is on `main` at least). We are still working on adding `float16` support for CPU targets.
> Is the `float16` problem for CPU affected by the LLVM compiler not supporting `float16`?

LLVM supports it, as does llvmlite; support was added in https://github.com/numba/llvmlite/pull/509. For systems that don't...
It's supported on the CUDA target but not the CPU target, so either all or none of them could be checked, depending on what we decide this issue is about.
It's possible. There are some llvmlite PRs in flight in support of it: https://github.com/numba/llvmlite/pull/979 and https://github.com/numba/llvmlite/pull/986
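On the llvmlite side, half-precision is already expressible in the IR layer (this is what the PR #509 support added). A minimal sketch, assuming only the public `llvmlite.ir` API, that builds a function adding two `half` values; the function and module names are made up for illustration:

```python
from llvmlite import ir

# `half` is LLVM's 16-bit float type, exposed as ir.HalfType in llvmlite.
half = ir.HalfType()
fnty = ir.FunctionType(half, (half, half))

module = ir.Module(name="fp16_demo")
func = ir.Function(module, fnty, name="hadd")
block = func.append_basic_block(name="entry")
builder = ir.IRBuilder(block)

a, b = func.args
result = builder.fadd(a, b, name="sum")  # half-precision fadd
builder.ret(result)

print(module)  # textual IR; the function signature uses the `half` type
```

Generating the IR is the easy part; the CPU-target work is about making the rest of the pipeline (lowering, typing, libcalls on systems without native `half`) handle it correctly.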
gpuci run tests
/azp run