[bug]: invokeai docker v5.3.1 main-rocm still using cpu instead of rocm
Is there an existing issue for this problem?
- [X] I have searched the existing issues
Operating system
Linux
GPU vendor
AMD (ROCm)
GPU model
RX 7700S, RX 7900 XTX
GPU VRAM
8, 24
Version number
5.3.1
Browser
Firefox
Python dependencies
No response
What happened
I was excited to see that https://github.com/invoke-ai/InvokeAI/issues/7146 was closed and merged, and that all would be well in the ROCm world. However, after updating my Docker image and running the new v5.3.1, the same issue persists: the container is using the CPU instead of ROCm.
What you expected to happen
I expect ROCm to be used.
How to reproduce the problem
No response
Additional context
No response
Discord username
No response
I can confirm this; the bundled PyTorch is still the CUDA build, not ROCm.
When will this be fixed?
Having the same issue on InvokeAI 5.5.
+1
A workaround is to build your own container using the run.sh script in the docker directory: copy .env.sample to .env and set GPU_DRIVER=rocm. Even after that I still had issues, though. The latest 5-rocm tagged image still has the problem:
docker run --rm -it --entrypoint "/bin/bash" ghcr.io/invoke-ai/invokeai:5-rocm
root@98565b597416:/opt/invokeai# uv pip list | grep torch
Using Python 3.11.10 environment at: /opt/venv
clip-anytorch 2.6.0
pytorch-lightning 2.1.3
torch 2.4.1+cu124
torchmetrics 1.0.3
torchsde 0.2.6
torchvision 0.19.1+cu124
root@98565b597416:/opt/invokeai#
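The `+cu124` local version tag in the listing above is the giveaway: the wheel is a CUDA build. As a quick sanity check, the tag can be inspected programmatically (a hypothetical helper for illustration, not part of InvokeAI; at runtime, ROCm builds of torch also expose a non-None `torch.version.hip`):

```python
# Hypothetical helper: classify a torch wheel by its PEP 440 local version tag.
# ROCm wheels carry a "+rocmX.Y" suffix, CUDA wheels a "+cuNNN" suffix,
# CPU-only wheels have neither.
def torch_backend(version: str) -> str:
    """Return 'rocm', 'cuda', or 'cpu' based on the wheel's local version tag."""
    if "+rocm" in version:
        return "rocm"
    if "+cu" in version:
        return "cuda"
    return "cpu"

print(torch_backend("2.4.1+cu124"))    # the version shipped in the image above
print(torch_backend("2.4.1+rocm6.1"))  # what a ROCm image should ship
```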
Once the rebuild is done you can check:
root@300bb74e52c8:/opt/invokeai# uv pip list | grep torch
Using Python 3.11.10 environment at: /opt/venv
clip-anytorch 2.6.0
pytorch-lightning 2.1.3
pytorch-triton-rocm 3.0.0
torch 2.4.1+rocm6.1
torchmetrics 1.0.3
torchsde 0.2.6
torchvision 0.19.1+rocm6.1
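For reference, the rebuild workaround can be sketched as a shell session. This assumes a local clone of the InvokeAI repository and a commented-out `GPU_DRIVER` line in `.env.sample` (the `sed` line is just one way to set it; editing `.env` by hand works too):

```shell
# Sketch of the workaround: build the image yourself with GPU_DRIVER=rocm.
# Assumes a local clone of the InvokeAI repository.
cd InvokeAI/docker
cp .env.sample .env
# Uncomment/overwrite the GPU_DRIVER line so the ROCm build is selected.
sed -i 's/^#\{0,1\} *GPU_DRIVER=.*/GPU_DRIVER=rocm/' .env
grep '^GPU_DRIVER=' .env    # expect: GPU_DRIVER=rocm
./run.sh                    # builds and starts the container
```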
Did you find a working image version? I tried a few versions back built by Invoke (hosted on ghcr), but they all had this problem.
Same problem here. yanwk/comfyui-boot:rocm and ollama:rocm both work, so I don't think the problem is on my side.
We've recently updated the ROCm images; closing this. Please open another issue if you're still experiencing problems.