
Add GPU Support for AMD/Intel (Vulkan) – The Only Thing Missing

Open · linuxkernel94 opened this issue 6 months ago · 4 comments

Hi Jeffser,

First of all — Alpaca is by far the best local LLM app. Clean UI, fast, no fluff. But there's one big missing feature: GPU support for AMD and Intel users.

Right now, Alpaca only uses the CPU. While llama.cpp supports CUDA (NVIDIA) and, to a limited extent, ROCm (AMD), most consumer AMD GPUs, like the RX 6600M, aren't supported. ROCm is too limited and fragile.

The good news: llama.cpp now has a Vulkan backend, which works cross-platform with AMD and Intel GPUs.

Apps like GPT4All already support GPU via Vulkan out of the box. If Alpaca added Vulkan support, it would be complete — and faster for everyone without NVIDIA cards.

🔧 Suggestion: Ship a build of Alpaca compiled with LLAMA_VULKAN=1.

Automatically enable the Vulkan backend if CUDA isn't available (see the sketch after this list).

Optionally expose a CLI or config toggle for advanced users.
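Roughly what I have in mind for the fallback, as a hedged sketch: `pick_backend()` is a made-up helper (nothing like it exists in Alpaca today), and probing for `nvidia-smi`/`vulkaninfo` is just one possible detection heuristic.

```python
# Hypothetical sketch only: Alpaca has no such helper today.
# Prefer CUDA when the NVIDIA driver is present, fall back to Vulkan
# when a Vulkan loader is installed, and use the CPU as a last resort.
import shutil

def pick_backend() -> str:
    """Return the llama.cpp backend this machine can likely use."""
    if shutil.which("nvidia-smi"):  # ships with the NVIDIA driver
        return "cuda"
    if shutil.which("vulkaninfo"):  # part of vulkan-tools on most distros
        return "vulkan"
    return "cpu"

if __name__ == "__main__":
    print(f"Selected backend: {pick_backend()}")
```

The same check could back the config toggle above: advanced users override the result, everyone else gets the detected default.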

This one feature would open the door to a much larger user base. Happy to test or help if needed.

Thanks!

linuxkernel94 · Jun 29 '25 07:06

Hi, thanks for the suggestion, but Alpaca doesn't use llama.cpp; it uses Ollama, which doesn't have Vulkan support yet.

Jeffser · Jun 30 '25 00:06

It shouldn't take long now: Ollama largely uses llama.cpp behind the scenes, and if they adopt its Vulkan backend, this could be doable.

mags0ft · Jun 30 '25 05:06

> It shouldn't take long now: Ollama largely uses llama.cpp behind the scenes, and if they adopt its Vulkan backend, this could be doable.

As mentioned in the Ollama repository, it does not appear that they plan to implement it. There was a pull request months ago, but as its author notes, the maintainers haven't said whether they will ever merge it, so it has fallen behind on updates.

oscar370 · Aug 10 '25 19:08

Ollama added experimental Vulkan support in v0.12.6-rc0.
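For anyone who wants to try it: as far as I can tell it's gated behind an environment variable (OLLAMA_VULKAN=1 is my reading of the release notes, so treat it as an assumption and check the official docs). A minimal way to launch the server with it from Python:

```python
# Sketch for trying the experimental backend. OLLAMA_VULKAN=1 is an
# assumption based on my reading of the v0.12.6-rc0 release notes,
# not a verified flag; requires the ollama binary on PATH.
import os
import subprocess

env = dict(os.environ, OLLAMA_VULKAN="1")
subprocess.run(["ollama", "serve"], env=env, check=True)
```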

Fail-2K · Oct 16 '25 07:10