
Add Support for llama.cpp Models

Open · Talnz007 opened this issue 11 months ago · 5 comments

I’d like to suggest adding support for llama.cpp models to expand Alpaca’s compatibility, especially for users with older AMD GPUs.

llama.cpp’s Vulkan backend lets those GPUs run GPU-accelerated inference, which would mean smoother performance and would open Alpaca up to a wider range of hardware setups.

Thank you.

Talnz007 · Jan 06 '25

llama.cpp models can already be used indirectly with Alpaca: run an external Ollama instance that supports them on your local machine, then link that instance up to the app. This gets even easier from version 5.0.0 onwards, which is expected to launch on the 22nd of February 2025.
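For anyone trying this workaround, here is a minimal sketch of checking that a local Ollama instance is reachable before linking it in Alpaca. It assumes Ollama's default endpoint (http://localhost:11434) and its standard /api/tags and /api/generate REST routes; the model name "llama3" is a placeholder for whatever you have pulled:

```python
# Hypothetical smoke test for the external-Ollama workaround.
# Assumes Ollama is running locally on its default port (11434)
# and that some model (here "llama3", adjust as needed) has been pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434"

# List the models the instance knows about (GET /api/tags).
with urllib.request.urlopen(f"{OLLAMA_URL}/api/tags") as resp:
    models = json.load(resp)["models"]
    print("Available models:", [m["name"] for m in models])

# Run a single non-streaming generation (POST /api/generate).
payload = json.dumps({
    "model": "llama3",  # replace with a model you have pulled
    "prompt": "Say hello in one short sentence.",
    "stream": False,
}).encode("utf-8")
req = urllib.request.Request(
    f"{OLLAMA_URL}/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["response"])
```

If both calls succeed, pointing Alpaca at the same URL in its connection settings should work.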

mags0ft · Feb 18 '25

5.0.0*

Jeffser · Feb 18 '25

Of course, 5.0.0. Misread that one ;)

Generally I believe the feature requested in this issue would be an interesting addition to Alpaca, but it should be considered low-priority for now.

mags0ft · Feb 18 '25

Really appreciate the update! The workaround with an external Ollama instance is a great solution for now, and it’s exciting to hear that version 5.0.0 will make things even smoother.

I totally understand if this isn’t a top priority at the moment, but it would be great to see it considered in future updates. Thanks again for all your hard work!

Talnz007 · Feb 18 '25

I think this request is a bit old by now, but Alpaca supports direct .GGUF import. As Ollama is designed with llama.cpp compatibility in mind, this issue should be pretty much finished, or am I missing something? 😵‍💫
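For context, the usual route for getting a local GGUF file into Ollama (which is roughly what a direct .GGUF import has to do under the hood) is a Modelfile with a FROM line plus `ollama create`. A minimal sketch driving that from Python, where "./model.gguf" and "my-gguf-model" are placeholders and the ollama CLI is assumed to be installed:

```python
# Hypothetical sketch of importing a local GGUF file into Ollama.
# Assumes the ollama CLI is installed and "./model.gguf" exists.
import pathlib
import subprocess

gguf_path = pathlib.Path("./model.gguf")  # placeholder path

# A Modelfile that points Ollama at the raw GGUF weights.
pathlib.Path("Modelfile").write_text(f"FROM {gguf_path}\n")

# Register the model under a local name ("my-gguf-model" is arbitrary).
subprocess.run(
    ["ollama", "create", "my-gguf-model", "-f", "Modelfile"],
    check=True,
)

# The imported model should now show up alongside pulled models.
subprocess.run(["ollama", "list"], check=True)
```

This is only an illustration of the Ollama-side mechanism, not how Alpaca's own importer is implemented.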

mags0ft · May 03 '25

Ollama isn't compatible with every llama.cpp model or feature, but it should be enough for now.

Jeffser · Jun 03 '25