krita-ai-diffusion
AMD acceleration in Linux
ComfyUI on Linux can use ROCm, so why is it labeled as "Windows only" here?
The automated installer only supports DirectML. That's because I don't have an AMD GPU to test ROCm (but I can test DirectML with Nvidia). If you're installing and setting up ComfyUI yourself, you can make it work.
I can update the readme to reflect that a bit more precisely.
You should be able to convert the automatically installed environment to one that supports ROCm by doing the following:

```sh
source ~/.local/share/krita/pykrita/ai_diffusion/.server/venv/bin/activate
pip install --force-reinstall torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6
```
For some reason, I had to follow up that command with:

```sh
pip install "fsspec>=2023.5.0"
```

I guess `--force-reinstall` does something funny with the dependencies.
Anyway, after that I could run the plugin from Krita using my GPU. My AMD GPU is a wimpy Radeon RX 5700 XT though, so using the CPU was actually faster. I also had to set `HSA_OVERRIDE_GFX_VERSION=10.3.0`, but that's GPU specific.
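For reference, here's a minimal sketch of applying that override, assuming the variable just needs to be set in the environment of whatever process launches the server (the 10.3.0 value is the RX 5700 XT workaround from above; other cards need a different value or none at all):

```sh
# GPU-specific workaround: make ROCm treat the card as a supported gfx target.
# 10.3.0 is what worked for the RX 5700 XT above; the value is card specific.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
krita  # or start the ComfyUI server from this same shell
```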
@drjarno I can confirm it works that way (you have to start the server yourself) and then connect via the plugin. With an RX 6800 it takes around 2-3 minutes for a 512x512 image.
@x0r13 Can you please tell me how to start the server yourself? I was using the button in the Krita UI. I reinstalled the packages in the venv as stated by @drjarno, but it did not help. I still see only two options: CPU or GPU (CUDA). My card is the same as @drjarno's, a Radeon RX 5700 XT. Thank you in advance.
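A rough sketch of what starting the server yourself can look like, assuming the managed install location quoted later in this thread and assuming ComfyUI lives in a `ComfyUI` folder inside it (both paths may differ between plugin versions):

```sh
# Activate the venv the plugin installed, then start its ComfyUI directly.
# Adjust the paths to match where your install actually lives.
source ~/.local/share/krita/ai_diffusion/server/venv/bin/activate
python ~/.local/share/krita/ai_diffusion/server/ComfyUI/main.py --port 8188
# Then connect the plugin to 127.0.0.1:8188 instead of letting it manage
# the server itself.
```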
hello! I have an AMD GPU and would love to work on this. It'd be my first issue, so lmk if there's anywhere specific I should look to get started.
You mean adding a ROCm option for the installer?

Basically you:

- add a `ServerBackend.rocm` option here (make it Linux only)
- add a branch here and put the latest URL for the torch ROCm packages

Ideally you can then select ROCm from the installer UI and it works (rough sketch below). Not sure if you need to pass extra arguments to ComfyUI; that can be done in server.py/start. The annoying part is testing and making sure it really works, since the install takes a while. See the notes in CONTRIBUTING to speed it up if you have to do it repeatedly.
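For orientation, a minimal sketch of those two changes, using hypothetical shapes for the enum and the install branch (the real names beyond `ServerBackend.rocm`, the UI wiring, and the current ROCm wheel index may all differ):

```python
# Hypothetical sketch only -- the plugin's real ServerBackend enum and install
# code are shaped differently; this just illustrates the two changes above.
from enum import Enum
import sys

class ServerBackend(Enum):
    cpu = "CPU"
    cuda = "CUDA (NVIDIA GPU)"
    directml = "DirectML (AMD GPU, Windows only)"
    rocm = "ROCm (AMD GPU, Linux only)"  # the new entry

    @staticmethod
    def supported():
        # Only offer ROCm on Linux and DirectML on Windows.
        backends = [ServerBackend.cpu, ServerBackend.cuda]
        if sys.platform == "win32":
            backends.append(ServerBackend.directml)
        elif sys.platform.startswith("linux"):
            backends.append(ServerBackend.rocm)
        return backends

def torch_index_url(backend):
    # The branch that picks the PyTorch wheel index for the chosen backend.
    # Check pytorch.org for the ROCm index that is current when you do this.
    if backend is ServerBackend.rocm:
        return "https://download.pytorch.org/whl/rocm6.0"
    return None  # CPU/CUDA/DirectML packages are resolved elsewhere
```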
I'm currently using Fedora 40 with the default open-source AMD drivers. Please enable support for it by default, as I'm currently limited to using only the CPU.
I'm using Nobara 40 (based on Fedora) and would love to see support too. :) Edit: generation on the CPU (AMD Ryzen 5 7600) crashes about two thirds of the way in; Krita just closes.
Slightly meta, but maybe just for good measure the `linux` label should be added to this issue.
I would also love to see this addressed. In the meantime, maybe the sentence "Custom install required for Linux." in the install guide could link somewhere useful or give a more direct hint about what would be necessary.
Thanks either way for the great work you've already done :)
> You mean adding a ROCm option for the installer? […]
Would it be possible to make such a file that would work on Linux with ROCm? Unfortunately I don't understand what exactly I have to type in there or how to make it work. I don't know anything about programming and would just like to be able to use the plugin on Linux with an RX 6600 XT graphics card. Please, someone make a simple instruction or just a dedicated installer with a changed script or some settings.
Hi again, are there any solutions for this problem?
@Acly I understand you don't want to work on ROCm or fiddle with things you can't adequately test. Can you make a button to download all the dependencies if we have our own server/backend installed? Then we just need to merge that with our existing ComfyUI install. That would go a long way toward making this more accessible to people who already have a ROCm (Linux) or ZLUDA (Windows) ComfyUI setup working. Also, I promote your work all the time! Thanks for everything, even if you can't accommodate this request.
I managed to get it working on my 9070 XT on Arch Linux this way.
Follow the install normally and let it install the CUDA version and the models. When the server launch fails, open a terminal.

- Enter the plugin environment:
  ```sh
  source $HOME/.local/share/krita/ai_diffusion/server/venv/bin/activate
  ```
- Because the environment doesn't come with pip, you have to install it forcefully:
  ```sh
  python -m ensurepip
  ```
- Now you can force-install the ROCm version of PyTorch. I personally use the nightly version because of the recency of my card:
  ```sh
  python -m pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.4
  ```
- If you run a `pip list` you should be able to confirm that the installed torch packages are the ROCm versions:
  ```
  python -m pip list | grep rocm
  pytorch-triton-rocm  3.3.0+git96316ce5
  torch                2.8.0.dev20250515+rocm6.4
  torchaudio           2.6.0.dev20250516+rocm6.4
  torchvision          0.22.0.dev20250516+rocm6.4
  ```
- Now you can go back to Krita and should be able to launch the server, even if CUDA still appears selected.
- I had to enable Tiled VAE and also dynamic caching to avoid OOM errors, and even then it still behaves quite poorly, so it's probably still better to link with an external setup (see the sketch below).
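For the external-setup route mentioned in the last point, a rough sketch, assuming a self-managed ComfyUI checkout that already has ROCm torch in its venv (the paths and the plugin's connection dialog wording are assumptions):

```sh
# Run your own ComfyUI and point the plugin at it instead of using the
# managed server.
cd ~/ComfyUI
source venv/bin/activate
python main.py --listen 127.0.0.1 --port 8188
# Then pick the external/custom server option in the plugin's connection
# settings and connect to http://127.0.0.1:8188
```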