
AMD ROCm support with HIPBLAS

Open xangelix opened this issue 10 months ago • 2 comments

https://github.com/ggerganov/llama.cpp/pull/1087 was merged recently and should make its way into GGML shortly. I think (hopefully) it should require only minimal changes on this side.

xangelix avatar Aug 27 '23 17:08 xangelix

related to https://github.com/ggerganov/ggml/issues/472

xangelix avatar Aug 27 '23 17:08 xangelix

rustformers uses llama.cpp as its GGML source, so feel free to create a PR including this change; it seems you would only need to adjust the build.rs of the ggml-sys crate (roughly along the lines of the sketch below). I won't be able to test this as I don't own an AMD GPU.
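
For anyone picking this up, here is a rough, untested sketch of what that build.rs wiring could look like. The `hipblas` feature name, the ROCm install path, and the source file layout are assumptions and would need to be matched against the actual ggml-sys crate:

```rust
// build.rs sketch for a hypothetical `hipblas` Cargo feature in ggml-sys.
// Paths, feature name, and source layout are assumptions, not the crate's
// actual configuration.

fn main() {
    let mut build = cc::Build::new();
    build.file("llama.cpp/ggml.c").include("llama.cpp");

    // Cargo exposes enabled features to build scripts as CARGO_FEATURE_* vars.
    if std::env::var("CARGO_FEATURE_HIPBLAS").is_ok() {
        // GGML's HIP path reuses the CUDA sources, compiled with hipcc and
        // guarded by GGML_USE_HIPBLAS (alongside GGML_USE_CUBLAS).
        build
            .compiler("/opt/rocm/bin/hipcc")
            .file("llama.cpp/ggml-cuda.cu")
            .define("GGML_USE_HIPBLAS", None)
            .define("GGML_USE_CUBLAS", None);

        // Link against the ROCm runtime and BLAS libraries.
        println!("cargo:rustc-link-search=native=/opt/rocm/lib");
        println!("cargo:rustc-link-lib=hipblas");
        println!("cargo:rustc-link-lib=rocblas");
        println!("cargo:rustc-link-lib=amdhip64");
    }

    build.compile("ggml");
}
```

This mirrors how llama.cpp's own build gates its hipBLAS path, but someone with a ROCm-capable GPU would need to verify the flags and library names.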

LLukas22 avatar Aug 28 '23 08:08 LLukas22