
How do I detect what GPU is installed on a host?

pitsianis opened this issue 2 years ago • 2 comments

I want to build a standalone module that can run on any supported GPU. How do I detect what packages need to be loaded so that I can have a pattern like

if isCUDAsupported()
  using CUDA
  backend = CUDABackend()
elseif isMetalsupported()
  using Metal
  backend = MetalBackend()
# ... and so on for the other supported backends
else
  @warn "No supported accelerator detected. GPU kernels will be executed on the CPU!"
  backend = CPU()
end

pitsianis avatar Apr 24 '23 21:04 pitsianis

I often say: The choice is up to the user.

Experience has shown that having GPU backends as hard dependencies can cause issues when one backend is quicker to update than another.

  1. You let the user choose, by taking a KA backend as an input argument to your function (first sketch below).
  2. You could use Preferences.jl to let the user choose which backend to load statically (second sketch below).
  3. CUDA.jl's documentation shows how to use a backend conditionally: https://cuda.juliagpu.org/stable/installation/conditional/ (third sketch below).
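
For option 1, here is a minimal sketch of what a backend-agnostic entry point could look like. The kernel and the scale! wrapper are made-up names for illustration, not part of KA's API; get_backend infers the backend from the array the caller passes in.

using KernelAbstractions

@kernel function scale_kernel!(y, @Const(x), a)
    i = @index(Global)
    @inbounds y[i] = a * x[i]
end

# `backend` is whatever the caller supplies (CUDABackend(), MetalBackend(), CPU(), ...);
# by default it is inferred from the output array via get_backend.
function scale!(y, x, a; backend = get_backend(y))
    kernel! = scale_kernel!(backend)
    kernel!(y, x, a; ndrange = length(y))
    KernelAbstractions.synchronize(backend)
    return y
end

The caller then decides which backend is used simply by loading the GPU package and passing the matching array type, e.g. using CUDA; scale!(CUDA.zeros(Float32, n), CUDA.rand(Float32, n), 2f0), while your package itself never depends on any GPU backend.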
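
For option 2, a sketch of the Preferences.jl approach, assuming a hypothetical package MyKernels with a preference named "backend" (both names invented for this example). Note that in this simple form the GPU packages are still ordinary dependencies; combining Preferences with package extensions would avoid that at the cost of more setup.

module MyKernels

using Preferences, KernelAbstractions

# Read the preference at precompile time; defaults to the CPU backend.
const BACKEND_PREF = @load_preference("backend", "cpu")

@static if BACKEND_PREF == "cuda"
    using CUDA
    const backend = CUDABackend()
elseif BACKEND_PREF == "metal"
    using Metal
    const backend = MetalBackend()
else
    const backend = CPU()
end

end # module

A user would select a backend once with Preferences.set_preferences!(MyKernels, "backend" => "cuda") and restart Julia.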
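
Finally, a runtime-detection sketch in the spirit of option 3 and the snippet in the original post. It assumes the GPU packages can be loaded on every host and that CUDA.functional() / Metal.functional() report whether the corresponding GPU stack is actually usable (the CUDA call is what the linked docs rely on; the Metal call is assumed to behave analogously).

using KernelAbstractions
import CUDA, Metal

function default_backend()
    if CUDA.functional()
        return CUDA.CUDABackend()
    elseif Metal.functional()
        return Metal.MetalBackend()
    else
        @warn "No supported accelerator detected. GPU kernels will be executed on the CPU!"
        return CPU()
    end
end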

vchuravy avatar Apr 24 '23 22:04 vchuravy

I was looking at option 3, but I am unsure how to set it up for all KA-supported backends.

Great job, by the way. On the first try, I got a non-trivial bitonic sort to perform much better than ThreadsX.sort! on an M2 and a Tesla P100.

Is the KA implementation expected to be 15-20% slower than the Metal version? Or am I doing something wrong?

pitsianis avatar Apr 24 '23 23:04 pitsianis