
Alpaca uses my CPU instead of my GPU (AMD)

Open · frandavid100 opened this issue 1 year ago · 103 comments

I have noticed that Alpaca uses my CPU instead of my GPU. Here's a screenshot showing how it's using almost 40% of my CPU, and only 1% of my GPU.

[Screenshot from 2024-07-10 06-51-39]

I'm using an AMD Radeon RX 6650 XT GPU, which is properly detected by the OS and used by other Flatpak apps like Steam. As you can see in this other screenshot:

[Screenshot from 2024-07-10 06-54-34]

frandavid100 avatar Jul 10 '24 04:07 frandavid100

Hi, yes, this is a problem with ROCm and Flatpaks; I believe this is also a problem with Blender.

Whilst any Flatpak can detect and use the GPU, for some reason ROCm doesn't work out of the box. There must be a way, but I haven't figured it out, and it's a bit hard to test since I have an incompatible GPU.

For now I suggest you host an Ollama instance using Docker and connect it to Alpaca using the remote connection option.
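
For example, something along these lines should work (a sketch based on Ollama's Docker instructions for AMD; adjust the container name, volume, and port to taste):

docker run -d --device /dev/kfd --device /dev/dri -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama:rocm

Then point Alpaca's remote connection at http://localhost:11434.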

Jeffser avatar Jul 10 '24 05:07 Jeffser

There's no hurry, I use it sparingly and I can afford to let it use the CPU for the time being.

Is there any way I can help test a possible fix? Is my GPU supposed to be compatible?

frandavid100 avatar Jul 10 '24 07:07 frandavid100

Alpaca is based on Ollama. Ollama automatically detects the CPU and GPU, but when it's executed by Flatpak, Ollama is containerized (idk if that word exists lol) and doesn't have enough privileges to check if the GPU can be used. That's what I understand!!

loulou64490 avatar Jul 10 '24 07:07 loulou64490

Yeah, that word does exist, though the problem isn't exactly the fact that it's inside a container; the problem is that ROCm doesn't work out of the box.
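
If anyone wants to check what the sandbox actually exposes, a quick diagnostic (assuming the usual /dev/dri device nodes) is:

flatpak run --command=sh com.jeffser.Alpaca -c "ls -l /dev/dri"

The render nodes should show up fine; it's the ROCm user-space libraries that are missing.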

Jeffser avatar Jul 11 '24 03:07 Jeffser

I think ROCm needs to be loaded separately. https://github.com/ollama/ollama/releases/download/v0.2.8/ollama-linux-amd64-rocm.tgz contains the ROCm driver. This is a real issue that needs to be fixed.
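
For a native (non-Flatpak) install, Ollama's Linux docs have you extract that tarball over the regular installation, something like this (a sketch; the exact layout can differ between releases):

curl -L https://ollama.com/download/ollama-linux-amd64-rocm.tgz | sudo tar -zx -C /usr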

olumolu avatar Jul 23 '24 07:07 olumolu

Adding that as-is would mean making Alpaca 4 times heavier, and not everybody would even need ROCm. The real problem here is that either the Freedesktop runtime or the GNOME runtime should include ROCm; that, or there's a better solution I just don't know about yet, since I'm still new to Flatpak packaging.

Jeffser avatar Jul 24 '24 01:07 Jeffser

I might finally have a solution where the Flatpak accesses the ROCm libraries from the system itself.

Jeffser avatar Jul 24 '24 02:07 Jeffser

Adding that as-is would mean making Alpaca 4 times heavier, and not everybody would even need ROCm. The real problem here is that either the Freedesktop runtime or the GNOME runtime should include ROCm; that, or there's a better solution I just don't know about yet, since I'm still new to Flatpak packaging.

You could always package it as an extension in that case

0chroma avatar Jul 24 '24 02:07 0chroma

Yeah, the problem with that is that I would need to make a different package for Flathub.

Jeffser avatar Jul 24 '24 03:07 Jeffser

Any progress on this? Anything you need help with in getting this done?

TacoCake avatar Aug 04 '24 14:08 TacoCake

Do you have ROCm installed on your system? I think I can make Ollama use the system installation.

Jeffser avatar Aug 04 '24 20:08 Jeffser

If someone has ROCm installed and wants to test this, run these commands:

flatpak override --filesystem=/opt/rocm com.jeffser.Alpaca
flatpak override --env=LD_LIBRARY_PATH=/opt/rocm/lib:/opt/rocm/lib64:/app/lib:/usr/lib/x86_64-linux-gnu/GL/default/lib:/usr/lib/x86_64-linux-gnu/openh264/extra:/usr/lib/sdk/llvm15/lib:/usr/lib/sdk/openjdk11/lib:/usr/lib/sdk/openjdk17/lib:/usr/lib/x86_64-linux-gnu/GL/default/lib com.jeffser.Alpaca

This gives the Flatpak access to /opt/rocm and adds it to the library search path; the rest of the paths are the default Flatpak libraries, just ignore those.
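
To verify the overrides took effect, or to undo them later:

flatpak override --show com.jeffser.Alpaca
flatpak override --reset com.jeffser.Alpaca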

Jeffser avatar Aug 04 '24 20:08 Jeffser

How can I install ROCm on my Silverblue machine? I tried to run "rpm-ostree install rocm" but I get a "packages not found" error.

frandavid100 avatar Aug 05 '24 06:08 frandavid100

I think this should be it

https://copr.fedorainfracloud.org/coprs/cosmicfusion/ROCm-GFX8P/
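On Silverblue you can't use dnf copr enable, so you would have to add the repo file manually and layer the packages, roughly like this (a sketch; the repo URL follows COPR's usual pattern and the package names are a guess, check what the COPR actually provides):

sudo curl -L -o /etc/yum.repos.d/ROCm-GFX8P.repo https://copr.fedorainfracloud.org/coprs/cosmicfusion/ROCm-GFX8P/repo/fedora-40/cosmicfusion-ROCm-GFX8P-fedora-40.repo
rpm-ostree install rocm-opencl rocminfo
systemctl reboot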

Jeffser avatar Aug 05 '24 07:08 Jeffser

How can I install ROCm on my Silverblue machine? I tried to run "rpm-ostree install rocm" but I get a "packages not found" error.

Ask at https://discussion.fedoraproject.org/ for help; they actually help with this kind of thing.

olumolu avatar Aug 05 '24 09:08 olumolu

I was looking into what Flatpaks include, and they have all the stuff needed to run an app with OpenCL (a Mesa alternative to ROCm, as far as I'm aware), but Ollama can't use it. My recommendation for now is to run Ollama separately from Alpaca and just connect to it as a remote connection.
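
If it helps, the native route on most distros is just Ollama's install script (it sets up a systemd service listening on the default port):

curl -fsSL https://ollama.com/install.sh | sh

Then add http://localhost:11434 as the remote connection in Alpaca.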

Jeffser avatar Aug 05 '24 23:08 Jeffser

Could you use Vulkan instead of trying to use ROCm? Kinda like how GPT4All does it: https://github.com/nomic-ai/gpt4all

Genuine question

TacoCake avatar Aug 08 '24 18:08 TacoCake

Do you have ROCm installed on your system? I think I can make Ollama use the system installation.

I don't have ROCm on my system, since it's kind of a headache to install on openSUSE Tumbleweed

TacoCake avatar Aug 08 '24 18:08 TacoCake

As far as I know, the Ollama backend uses ROCm instead of Vulkan, so this isn't too easy to implement from the frontend.

olumolu avatar Aug 08 '24 18:08 olumolu

GPT4All uses a llama.cpp backend, while this app uses Ollama.

Shished avatar Aug 08 '24 18:08 Shished

Yes, I don't know much about llama.cpp, but Ollama uses ROCm, and this is really an issue, since ROCm is not installed on many systems.

olumolu avatar Aug 08 '24 18:08 olumolu

As far as I know, the Ollama backend uses ROCm instead of Vulkan, so this isn't too easy to implement from the frontend.

GPT4All uses a llama.cpp backend, while this app uses Ollama.

Ahhh I see, sorry for the confusion. If anyone wants to track Vulkan support in Ollama:

  • https://github.com/ollama/ollama/pull/5059
  • https://github.com/ollama/ollama/issues/2033

TacoCake avatar Aug 08 '24 18:08 TacoCake

Yes, if this gets merged I hope it will bring Vulkan to this one also. For the time being I don't think we can do much.

olumolu avatar Aug 08 '24 18:08 olumolu

Do you have ROCm installed on your system? I think I can make Ollama use the system installation.

I don't have ROCm on my system, since it's kind of a headache to install on openSUSE Tumbleweed

I know, it's a headache everywhere, including the Flatpak sandbox.

Jeffser avatar Aug 08 '24 18:08 Jeffser

flatpak override --filesystem=/opt/rocm com.jeffser.Alpaca
flatpak override --env=LD_LIBRARY_PATH=/opt/rocm/lib:/opt/rocm/lib64:/app/lib:/usr/lib/x86_64-linux-gnu/GL/default/lib:/usr/lib/x86_64-linux-gnu/openh264/extra:/usr/lib/sdk/llvm15/lib:/usr/lib/sdk/openjdk11/lib:/usr/lib/sdk/openjdk17/lib:/usr/lib/x86_64-linux-gnu/GL/default/lib com.jeffser.Alpaca

I installed ROCm on Fedora using this tutorial: https://fedoraproject.org/wiki/SIGs/HC#Installation

Still, my GPU usage is 0%. Any other suggestions?

francus11 avatar Aug 12 '24 03:08 francus11

Today I learned that ROCm is actually bundled with the Ollama binary... So I have no idea what to try now lol

[screenshot]

(third line)
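
@francus11 one thing still worth trying: check that ROCm itself sees your card, and force the GFX version override that Ollama's docs mention for some Radeon cards (a guess on my part; 10.3.0 targets gfx1030-class RDNA2 cards, adjust for yours):

rocminfo | grep gfx
flatpak override --env=HSA_OVERRIDE_GFX_VERSION=10.3.0 com.jeffser.Alpaca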

Jeffser avatar Aug 13 '24 06:08 Jeffser

Ollama says that AMD users should try the proprietary driver, though:

https://github.com/ollama/ollama/blob/main/docs/linux.md#amd-radeon-gpu-support

Jeffser avatar Aug 13 '24 06:08 Jeffser

Has anyone managed to get it to work with their AMD GPU (I'm using Fedora Workstation 40)? I have a 6950XT, but it's using my CPU and RAM rn. I don't really want to use the proprietary drivers if I don't have to though....

@Jeffser I do love this project though, but it would be 100x better if I could run the LLMs using my GPU!

P-Jay357 avatar Aug 26 '24 13:08 P-Jay357

Has anyone managed to get it to work with their AMD GPU (I'm using Fedora Workstation 40)? I have a 6950XT, but it's using my CPU and RAM rn. I don't really want to use the proprietary drivers if I don't have to though....

@Jeffser I do love this project though, but it would be 100x better if I could run the LLMs using my GPU!

Yes, this will be fixed once Ollama merges the Vulkan support. Otherwise, AMD needs the ROCm driver along with Ollama for the GPU to work for AI/ML stuff. If there is any other option to fix this, it can be considered, since this is an open issue.

olumolu avatar Aug 26 '24 14:08 olumolu

Can someone with an AMD GPU test if it works when they have the proprietary driver?

Jeffser avatar Aug 26 '24 18:08 Jeffser