[Question] What is the status of Vulkan backend?
Vulkan may not be the best, fastest, or easiest solution for inference, but it is probably the most portable GPU acceleration approach.
Is anyone actively working on adding support for it? If so, what is the status/progress? If not, is it planned?
There are https://github.com/ggerganov/llama.cpp/pull/2059 and https://github.com/ggerganov/llama.cpp/pull/2039
Yeah, I'm working on it. Let me know if you have any questions. It's a big project, but I'm making progress.
Is there any low hanging fruit a newcomer to the project could help with?
@Calandiel If you have experience with Vulkan, maybe. Otherwise probably not.
I have. I've written Vulkan-based render pipelines professionally and made toy neural networks in Vulkan trained with SGD. I've been working with it in at least some capacity for the last 4 years or so.
Oh cool, I'd be glad to work something out. If you have Discord, send me a message (_occam), otherwise send me an email and we'll find another way.
Will do, see you on Discord!
I think nomic-ai has a functional Kompute port of llama.cpp right now: https://github.com/nomic-ai/llama.cpp — and GPT4All is plenty fast on my 7900 XTX via Vulkan. But I am not sure how to integrate this into ggml, as I am not a programmer.
@sorasoras https://github.com/ggerganov/llama.cpp/pull/4456
The Vulkan and Kompute backends have been merged in llama.cpp; all that is left is to update the CMake build files so they can be used in other ggml projects.
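For anyone wanting to try them in llama.cpp itself in the meantime, a rough sketch of the build commands — note the option names are the ones used around the time of these PRs and may differ in newer versions or in other ggml-based projects:

```shell
# Configure llama.cpp with the Vulkan backend enabled
# (option name as used around the time the backend was merged;
# it may have been renamed since).
cmake -B build -DLLAMA_VULKAN=ON
cmake --build build --config Release

# The Kompute backend sits behind a separate option:
# cmake -B build -DLLAMA_KOMPUTE=ON
```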
Does this affect anything using ggml? whisper.cpp, stable-diffusion.cpp, etc.?
@Kreijstal Backends are usually upstreamed to ggml, but ggml API consumers need to use them explicitly.
E.g. the new-ish backend code here in ggml: https://github.com/ggerganov/ggml/blob/master/src/ggml-kompute.h https://github.com/ggerganov/ggml/blob/master/src/ggml-sycl.h https://github.com/ggerganov/ggml/blob/master/src/ggml-vulkan.h
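To illustrate what "use them explicitly" means for a ggml API consumer, here is a minimal, untested sketch of selecting the Vulkan backend through the ggml backend API. The function names are taken from the headers linked above as they stood at the time and may have changed since:

```c
// Minimal sketch (assumptions: API as in ggml-vulkan.h / ggml-backend.h
// at the time of this thread). Explicitly selecting the Vulkan backend
// is the consumer's job -- ggml does not pick it automatically.
#include <stdio.h>
#include "ggml.h"
#include "ggml-backend.h"
#include "ggml-vulkan.h"

int main(void) {
    // Ask for Vulkan device 0; returns NULL if no usable device is found.
    ggml_backend_t backend = ggml_backend_vk_init(0);
    if (backend == NULL) {
        fprintf(stderr, "no Vulkan device available, falling back to CPU\n");
        return 1;
    }
    printf("using backend: %s\n", ggml_backend_name(backend));

    // ... build a ggml graph and run it with ggml_backend_graph_compute() ...

    ggml_backend_free(backend);
    return 0;
}
```

The same pattern applies to the Kompute and SYCL headers: each exposes its own init function, and projects like whisper.cpp only gain the backend once their build files compile it in and their code calls it.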