KernelAbstractions.jl
How do I detect what GPU is installed on a host?
I want to build a standalone module that can run on any supported GPU. How do I detect which packages need to be loaded, so that I can write a pattern like:
```julia
if isCUDAsupported()
    using CUDA
    backend = CUDABackend()
elseif isMetalsupported()
    using Metal
    backend = MetalBackend()
# elseif ... (other supported backends)
else
    @warn "No supported accelerator detected. GPU kernels will be executed on the CPU!"
    backend = CPU()
end
```
My usual answer is that the choice should be left to the user. Experience has shown that having GPU backends as hard dependencies can cause issues when one backend is quicker to update than another.
- Let the user choose, by taking a KA backend as an input argument to your function.
- Use Preferences.jl to let the user statically choose which backend to load.
- Follow CUDA.jl's example of conditional (optional) usage: https://cuda.juliagpu.org/stable/installation/conditional/
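The first option is the simplest. A minimal sketch (the kernel and function names here are made up for illustration): the library only depends on KernelAbstractions.jl, defaults to `CPU()`, and the caller passes whatever backend matches their arrays.

```julia
using KernelAbstractions

# A trivial KA kernel: y .= a .* x .+ y
@kernel function axpy_kernel!(y, a, @Const(x))
    i = @index(Global)
    @inbounds y[i] = a * x[i] + y[i]
end

# The backend is an ordinary keyword argument; no GPU package is needed here.
function run_axpy!(y, a, x; backend = CPU())
    kernel! = axpy_kernel!(backend)
    kernel!(y, a, x; ndrange = length(y))
    KernelAbstractions.synchronize(backend)
    return y
end
```

A CUDA user would then call `run_axpy!(y, 2f0, x; backend = CUDABackend())` with `CuArray`s, a Metal user `backend = MetalBackend()` with `MtlArray`s, and so on, while your package never loads any GPU backend itself.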
I was looking at option 3, but I am unsure how to set it up for all KA-supported backends.
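One possible shape for the Preferences.jl route, as a hedged sketch (the module name, the `"backend"` preference key, and the backend-name strings are all made up for illustration; note that `@load_preference` only works inside a package with a UUID, and the GPU packages still need to be declared as dependencies):

```julia
module MyGPUModule

using Preferences, KernelAbstractions

# Read the preference at precompile time; default to the CPU backend.
const backend_name = @load_preference("backend", "CPU")

@static if backend_name == "CUDA"
    using CUDA
    const backend = CUDABackend()
elseif backend_name == "Metal"
    using Metal
    const backend = MetalBackend()
else
    const backend = CPU()
end

end # module
```

A user would select a backend once with `Preferences.set_preferences!(MyGPUModule, "backend" => "CUDA")` and then restart Julia, since the choice is baked in at precompilation.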
Great job, by the way. On the first try, I got a non-trivial bitonic sort to perform much better than `ThreadsX.sort!` on an M2 and a Tesla P100.
Is a KA implementation expected to be 15-20% slower than the native Metal version, or am I doing something wrong?