
Request: OpenCL / Vulkan / Metal support

Open guymarshall opened this issue 1 year ago • 18 comments

**Is your feature request related to a problem? Please describe.**
I have an AMD GPU, so I cannot take advantage of the CUDA runtime.

**Describe the solution you'd like**
I would love either an OpenCL or a Vulkan running mode (or both), if possible.

**Describe alternatives you've considered**
I have tried compiling to CUDA and then running a translation layer to OpenCL, but the performance isn't what I expected.

**Additional context**
None.

guymarshall avatar May 17 '24 21:05 guymarshall

We do plan to port to other platforms soon. We just want to first have one very solid implementation, and CUDA happened to be the best/easiest one to do first.

developedby avatar May 17 '24 21:05 developedby

No worries! Very impressive project, I am looking forward to using it in the future.

guymarshall avatar May 17 '24 22:05 guymarshall

Will it be possible to run it with ZLUDA? https://github.com/vosen/ZLUDA (CUDA on AMD GPUs)

jlucaso1 avatar May 17 '24 22:05 jlucaso1

Hoping for Metal support (it seemed like the CPU benchmarks were already on Mac).

joshuarossi avatar May 18 '24 00:05 joshuarossi

> **Is your feature request related to a problem? Please describe.** I have an AMD GPU, so I cannot take advantage of the CUDA runtime.
>
> **Describe the solution you'd like** I would love either an OpenCL or a Vulkan running mode (or both), if possible.
>
> **Describe alternatives you've considered** I have tried compiling to CUDA and then running a translation layer to OpenCL, but the performance isn't what I expected.
>
> **Additional context** None.

I think you mean AMD ROCm. ROCm is AMD's alternative to Nvidia's CUDA; Vulkan is for older GPUs. So finally it should look like `bend run-ro <file.hvm>  # uses the ROCm interpreter (massively parallel)`.

MrAmbiG avatar May 18 '24 00:05 MrAmbiG

> I think you mean AMD ROCm. ROCm is AMD's alternative to Nvidia's CUDA; Vulkan is for older GPUs. So finally it should look like `bend run-ro <file.hvm>  # uses the ROCm interpreter (massively parallel)`.

You may be mistaken: Vulkan as a standard was released 8 years ago, with modern drivers coming out in 2019 or later (https://en.wikipedia.org/wiki/Vulkan).

On ROCm: it is not what we want. ROCm is a software stack specifically for AMD graphics cards. CUDA, on the other hand, is not a software stack. AMD's equivalent to Nvidia's CUDA is indeed OpenCL, or alternatively Vulkan. Vulkan specializes in graphics, whereas OpenCL is more general.

In any case, FWIW, OpenCL support would be appreciated, as it would also have the benefit of supporting Intel GPUs.

Bjohnson131 avatar May 18 '24 03:05 Bjohnson131

> We do plan to port to other platforms soon. We just want to first have one very solid implementation, and CUDA happened to be the best/easiest one to do first.

If that is the case, I recommend reading the HIP Porting Guide documentation as you write your CUDA. The main difference is accounting for a possible warp size of 64 instead of always assuming 32, and then you are almost good to go with ROCm support.

h9419 avatar May 18 '24 10:05 h9419

Vulkan would be great. In my experience Vulkan and ROCm actually perform about the same, and Vulkan has much broader support. I think it's the future, personally, once someone writes a nice wrapper around it that takes the role of CUDA or ROCm instead of being more generalized. I'm not sure how it compares with CUDA, I'd honestly like to know, and same with Vulkan vs OpenVINO or whatever it is. But if Vulkan is that good, at least on AMD, it seems like a no-brainer to just implement that. I'm not sure what the benefits of ROCm would be other than ease of development, perhaps. Please correct me, but that seems to be my perspective as a user.

nonetrix avatar May 19 '24 06:05 nonetrix

Does this request belong to HVM?

imxade avatar May 19 '24 19:05 imxade

I have an AMD GPU, so for the moment using the CUDA interpreter is off the table, but would the C interpreter still be usable?

TheSimicSlaw avatar May 19 '24 20:05 TheSimicSlaw

...and Metal? 🥹

Is that this ticket too or does it need another one?

haslo avatar May 20 '24 11:05 haslo

> ...and Metal? 🥹

Yes

developedby avatar May 20 '24 11:05 developedby

I cannot wait for Metal support! 😇

LeaveNhA avatar May 21 '24 17:05 LeaveNhA

> I think you mean AMD ROCm. ROCm is AMD's alternative to Nvidia's CUDA; Vulkan is for older GPUs. So finally it should look like `bend run-ro <file.hvm>  # uses the ROCm interpreter (massively parallel)`.

> You may be mistaken: Vulkan as a standard was released 8 years ago, with modern drivers coming out in 2019 or later (https://en.wikipedia.org/wiki/Vulkan).
>
> On ROCm: it is not what we want. ROCm is a software stack specifically for AMD graphics cards. CUDA, on the other hand, is not a software stack. AMD's equivalent to Nvidia's CUDA is indeed OpenCL, or alternatively Vulkan. Vulkan specializes in graphics, whereas OpenCL is more general.
>
> In any case, FWIW, OpenCL support would be appreciated, as it would also have the benefit of supporting Intel GPUs.

The Vulkan API (derived from AMD's Mantle) is not limited to AMD GPUs; one can use it with Nvidia's Nsight suite of development tools, whereas Nvidia's CUDA is designed specifically for Nvidia GPUs, just like ROCm was designed with AMD GPUs in mind. So if you want low-level access to the GPU that isn't designed with any one GPU vendor in mind, you would choose the Vulkan API; if you want to specifically target AMD GPUs (especially the newer ones), ROCm is a better choice. However, for this project I do support prioritizing the Vulkan API over the others, because it would not only give low-level access to the GPU but also supports all three vendors. Intel lists the Vulkan API as a supported API for their GPUs: https://www.intel.com/content/www/us/en/support/articles/000005524/graphics.html

| Vendor | OpenCL | Vulkan API | ROCm | CUDA |
| ------ | ------ | ---------- | ---- | ---- |
| Intel  | Yes    | Yes        | No   | No   |
| Nvidia | Yes    | Yes        | No   | Yes  |
| AMD    | Yes    | Yes        | Yes  | No   |

| Technology | Open Source |
| ---------- | ----------- |
| ROCm       | Yes         |
| CUDA       | No          |
| Vulkan API | Yes         |
| OpenCL     | Yes         |

MrAmbiG avatar May 22 '24 18:05 MrAmbiG


I was going to say I thought Vulkan would be the best option, not this ROCm, but what do I know!

guymarshall avatar May 22 '24 20:05 guymarshall

I think that things like support for different runtimes would benefit from becoming a tracking issue. OpenCL is probably the best next step, as it has the best support across platforms. While Vulkan has a compute element, it is notoriously a pain to write, and it seems slower than OpenCL from what I can find.

R-Goc avatar May 25 '24 21:05 R-Goc

The llama.cpp project has two implementations for running AI with Vulkan... not as fast as CUDA, but much faster than CPU. I'm not that smart, but maybe these links help: https://kompute.cc/ https://github.com/ggerganov/llama.cpp

maxwell-kalin avatar Jun 07 '24 19:06 maxwell-kalin

Intel-specific support would also be great (when you do look to expand to different platforms).

ElliottDyson avatar Jul 21 '24 20:07 ElliottDyson