[DOCS] WSL2 with AMD ROCm Guide
Hi, I was showing my non-tech-industry friend your project and they were eager to try it out. They were sad about the lack of pre-packaged AMD support on Windows, and since they have no idea what git clone is, I offered to guide them through the manual installation so they could use it with their graphics card, despite my lack of recent Windows experience (I hadn't even touched Windows 11 until today).
The official manual installation guide suggests AMD is available on Windows, but there were problems with the recommended PyTorch libraries (which are actually PyTorch nightly builds that AMD doesn't recommend). After some research, I found an officially supported route via WSL, using more stable ROCm libraries from AMD. I adapted the installation process for WSL and was able to get it working (I also updated AMD's steps to use the latest versions I could find and included torchaudio as a requirement for ComfyUI).
I think this workaround would be helpful for others who encounter similar issues, especially those with no experience of GitHub or the technical side. It is very intimidating for folks not in the industry, and not very forgiving or friendly for those who just want to play around with diffusion on AMD.
I'd like to offer my updated process as a community contribution (even if it isn't officially endorsed). Please feel free to refine it with venv or conda best practices to make the solution more robust, as I definitely did some janky things with pip installations and the installation order to make it work quickly on my friend's PC (I am not a Python guy, so I can't promise best practices; I just know enough Python from my sysadmin days to be dangerous).
Please note these instructions were written on 11th January 2025, so some of the compatibilities and versions could be out of date by the time you read this; I've included some supplemental advice on updating packages at the end.
Stage 1: Setting Up WSL and Git:
[!note] PreReq - Have a GitHub account
- Install WSL in PowerShell:
wsl --install
- Get Ubuntu 22.04 from the Microsoft Store
- Log into the 22.04 version
[!note] Your username can be anything; your password is still being entered even though nothing appears on screen as you type, so bear that in mind!
- Create your SSH key (you can hit Enter on every option; this isn't best practice, but it works):
ssh-keygen
- Get your public key:
cat ~/.ssh/id_rsa.pub
[!note] If id_rsa.pub doesn't exist, it may instead be id_ed25519.pub - you always want the .pub file, which is safe to share; the non-.pub file is your private key and should never be shared!
- On the GitHub website, navigate to your Account Settings, then 'SSH and GPG keys'. There you can add a 'New SSH Key': give it a name and paste the output of the cat command above in its entirety (not the command, just the output).
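Before moving on, you can confirm GitHub accepted the key; this should greet you by your username rather than open a shell:
ssh -T git@github.com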
Stage 2: Installing ROCm with AMD Radeon for WSL:
- On your Windows machine, check your drivers are compatible (as of writing, AMD Adrenalin v24.12.1)
- In WSL Ubuntu 22.04, get the amdgpu installer:
sudo apt update
wget https://repo.radeon.com/amdgpu-install/6.2.3/ubuntu/jammy/amdgpu-install_6.2.60203-1_all.deb
sudo apt install ./amdgpu-install_6.2.60203-1_all.deb
- Check the installer exists and lists use cases:
sudo amdgpu-install --list-usecase
- Install amdgpu with the WSL and ROCm usecase:
amdgpu-install -y --usecase=wsl,rocm --no-dkms
- Verify the Installation:
rocminfo
You should expect to see this in the output somewhere:
...
Marketing Name: [Your Graphics Card Model]
...
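If rocminfo floods the terminal, you can filter for just the interesting lines; your discrete GPU should show up as one of the agents:
rocminfo | grep -E "Marketing Name|Device Type"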
Stage 3: Installing ComfyUI with AMD Official ROCm Dependencies
- Clone the ComfyUI Repo from Github:
git clone git@github.com:comfyanonymous/ComfyUI.git
- Upgrade pip beyond 22.0.2 (Ubuntu 22.04 ships with this version, which has a bug that can trip up installs with many Python dependencies, and ComfyUI has a lot):
pip3 install --upgrade pip
- Install the official Python dependencies for ComfyUI (we will replace some of these in the next step, but doing this first means we get everything we need and only have to change 3-4 packages afterwards):
cd ComfyUI && pip3 install -r requirements.txt && cd ~
- Download and then install the ROCm-specific PyTorch libraries to replace the ComfyUI default ones (these versions are the latest as of writing; binaries available since 7th January 2025):
wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.1/torch-2.4.0%2Brocm6.3.1-cp310-cp310-linux_x86_64.whl
wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.1/torchvision-0.19.0%2Brocm6.3.1-cp310-cp310-linux_x86_64.whl
wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.1/pytorch_triton_rocm-3.0.0%2Brocm6.3.1.75cc27c26a-cp310-cp310-linux_x86_64.whl
wget https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.1/torchaudio-2.4.0%2Brocm6.3.1-cp310-cp310-linux_x86_64.whl
pip3 uninstall -y torch torchvision pytorch-triton-rocm torchaudio
pip3 install torch-2.4.0+rocm6.3.1-cp310-cp310-linux_x86_64.whl torchvision-0.19.0+rocm6.3.1-cp310-cp310-linux_x86_64.whl pytorch_triton_rocm-3.0.0+rocm6.3.1.75cc27c26a-cp310-cp310-linux_x86_64.whl torchaudio-2.4.0+rocm6.3.1-cp310-cp310-linux_x86_64.whl
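To sanity-check that the ROCm builds are the ones actually installed, you can print the version strings (the +rocm6.3.1 suffix is the tell); if the import itself complains before the WSL runtime swap in the next step, run this again after that step:
python3 -c "import torch, torchvision, torchaudio; print(torch.__version__, torchvision.__version__, torchaudio.__version__)"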
- Update the Torch library to be WSL compatible:
location=$(pip3 show torch | grep Location | awk -F ": " '{print $2}')   # find where pip installed torch
cd ${location}/torch/lib/
rm libhsa-runtime64.so*   # remove the bundled HSA runtime, which doesn't work under WSL
cp /opt/rocm/lib/libhsa-runtime64.so.1.2 libhsa-runtime64.so   # swap in the WSL-compatible runtime from ROCm
cd ~
- Verify PyTorch is working and recognises your graphics card:
Test 1 - See if the torch dependency works:
python3 -c 'import torch' 2> /dev/null && echo 'Success' || echo 'Failure'
Expected Result:
Success
Test 2 - See if torch can access your graphics card (ROCm is exposed through torch's CUDA API):
python3 -c 'import torch; print(torch.cuda.is_available())'
Expected Result
True
Test 3 - See if torch can see your graphics card
python3 -c "import torch; print(f'device name [0]:', torch.cuda.get_device_name(0))"
Expected Result:
device name [0]: <Supported AMD GPU>
[!note] You may see an error printed before the GPU name; in my testing this error hasn't meant any actual issues with ComfyUI in practice.
Test 4 - Gather all of your env details:
python3 -m torch.utils.collect_env
Expected Result:
PyTorch version
ROCM used to build PyTorch
OS
Is CUDA available
GPU model and configuration
HIP runtime version
MIOpen runtime version
You should now have everything installed and setup!
Stage 4: Starting ComfyUI (If you have done Stages 1-3 then just do this from now on)
- Start the ComfyUI Application (finally!)
cd ComfyUI && python3 main.py
Once it has started, it will tell you where it is being served, probably an address that looks like this:
http://127.0.0.1:8188
- You can copy this whole URL and paste it into your browser; Happy Diffusing!!
- To stop it when you are done, press Ctrl+C in your WSL window, which stops the running service; just use the first step of this stage to start it again.
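If something else on your machine is already using that port, ComfyUI accepts a --port flag, so for example:
cd ComfyUI && python3 main.py --port 8080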
Post-Installation Notes
Updating Torch:
AMD's official Torch wheels can be found at https://repo.radeon.com/rocm/manylinux/ (the same host used in the Stage 3 wget commands).
To update, take the Torch installation commands from Stage 3 and bump the component versions, the ROCm versions, and (for the triton package) the hash string in the middle of the filename; you can find all of these by browsing the rocm-rel-* directories at the link above.
Always pick the cp310-cp310 packages: the cpXYZ tag is the CPython version the wheel was built for, and Ubuntu 22.04's default Python is 3.10.
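To double-check which tag applies to you, compare against your interpreter version:
python3 --version   # e.g. "Python 3.10.12" means the cp310-cp310 wheels match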
Updating Radeon Software in WSL:
AMD recommends uninstalling the old version before installing the new one, as there are no in-place upgrades currently; new releases can usually be found by browsing the directory listing at https://repo.radeon.com/amdgpu-install/.
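I haven't needed to run an upgrade cycle myself, but based on AMD's tooling the flow should look roughly like this (amdgpu-uninstall is the helper script that ships alongside amdgpu-install):
amdgpu-uninstall
sudo apt purge amdgpu-install
Then repeat Stage 2 with the new release's amdgpu-install .deb.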
Adding Models to ComfyUI:
You can download models from CivitAI. Once downloaded, you can reach your Linux filesystem directly from the Windows File Explorer sidebar: navigate to the ComfyUI folder, then models (checkpoints go in models/checkpoints), and drop them in from your Windows machine.
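If you'd rather stay in the terminal, you can also download straight into the right folder from WSL; the filename and URL here are placeholders for whatever model you picked:
wget -O ~/ComfyUI/models/checkpoints/my_model.safetensors "<paste the CivitAI download URL here>"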
Seeing Outputs:
Similar to models, output images can be found by going to the Linux section of your File Explorer, then the ComfyUI folder, then 'output'.
Accessing from other devices remotely:
ComfyUI is by default only reachable from the machine it is installed on, but you can use Tailscale, an excellent and easy-to-set-up private VPN, on both your other devices and your ComfyUI machine to expose it. Tailscale's docs have examples of how to install and set this up; from your other device's browser, when both are connected, type in your ComfyUI machine's IP address (found in the Tailscale Admin Console) instead of '127.0.0.1'. Note that ComfyUI also needs to be started listening on all interfaces for this to work, as shown below.
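The start command for that scenario, using ComfyUI's --listen flag to bind all interfaces instead of just 127.0.0.1:
cd ComfyUI && python3 main.py --listen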
Making starting ComfyUI easier: If you want an easier way to start the server after the initial setup:
cd ~
touch start_server
vim start_server
In vim (a text editor for the command line) press 'a' to start editing the file and add:
#!/bin/bash
cd ~/ComfyUI && python3 main.py
After you are done, press the Escape key, then type :wq and press Enter.
You should be back at your prompt; to make your new file executable use:
chmod +x start_server
And from now on you can just use:
./start_server
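If vim feels like a detour, the same file can be created in one go with a heredoc; this is equivalent to the steps above:
cat > ~/start_server <<'EOF'
#!/bin/bash
cd ~/ComfyUI && python3 main.py
EOF
chmod +x ~/start_server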
Sources:
- ComfyUI Manual Installation
- AMD Radeon in WSL Guide
- AMD PyTorch in WSL Installation
- Slightly outdated: AMD Community Blog on PyTorch in WSL w/ screenshots of ComfyUI
Thanks for sharing.
Thanks, this also works with WSL Ubuntu 24.04 and ROCm 6.3.2 with small adjustments:
wget https://repo.radeon.com/amdgpu-install/6.3.2/ubuntu/noble/amdgpu-install_6.3.60302-1_all.deb
sudo apt install ./amdgpu-install_6.3.60302-1_all.deb
amdgpu-install -y --usecase=wsl,rocm --no-dkms
# Readme recommends python 3.12
sudo apt install -y python3.12 python3.12-venv python3.12-dev python3-pip
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
python3.12 -m venv venv
source venv/bin/activate
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.3
pip install -r requirements.txt
I didn't have libhsa-runtime64.so.1.2, so I used libhsa-runtime64.so:
location=`pip show torch | grep Location | awk -F ": " '{print $2}'`
cd ${location}/torch/lib/
rm libhsa-runtime64.so*
cp /opt/rocm/lib/libhsa-runtime64.so libhsa-runtime64.so
Start command from the README:
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
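One note on the venv approach: it has to be re-activated in every new shell before the start command, so day-to-day startup looks like:
cd ComfyUI
source venv/bin/activate
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention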
One call-out on the original instructions: the torch packages are built for ROCm 6.3.x, but you install ROCm 6.2.3, which is fine for ComfyUI and Stable Diffusion with normal nodes. But if you need to use bitsandbytes, you need matching torch and ROCm versions or you will get errors.
Hello, thanks for this guide :) I tried to install ROCm but got this error:
The following packages have unmet dependencies:
 hsa-runtime-rocr4wsl-amdgpu : Depends: libc6 (>= 2.34) but 2.31-0ubuntu9.17 is to be installed
                               Depends: libstdc++6 (>= 12) but 10.5.0-1ubuntu1~20.04 is to be installed
 rocm : Depends: rocm-developer-tools (= 6.2.3.60203-124~22.04) but it is not going to be installed
        Depends: rocm-ml-sdk (= 6.2.3.60203-124~22.04) but it is not going to be installed
        Depends: mivisionx (= 3.0.0.60203-124~22.04) but it is not going to be installed
        Depends: migraphx (= 2.10.0.60203-124~22.04) but it is not going to be installed
        Depends: rpp (= 1.8.0.60203-124~22.04) but it is not going to be installed
        Depends: migraphx-dev (= 2.10.0.60203-124~22.04) but it is not going to be installed
        Depends: mivisionx-dev (= 3.0.0.60203-124~22.04) but it is not going to be installed
        Depends: rpp-dev (= 1.8.0.60203-124~22.04) but it is not going to be installed
E: Unable to correct problems, you have held broken packages.
Same thing on Ubuntu 20.04, 22.04, and 24.04.
Many thanks for your help :)
If missing dependencies come up in Stage 2, step 4 (the amdgpu-install command), like:
The following information may help to resolve the situation:
The following packages have unmet dependencies:
 hipsolver : Depends: libcholmod3 but it is not installable
             Depends: libsuitesparseconfig5 but it is not installable
 rocm-gdb : Depends: libtinfo5 but it is not installable
            Depends: libncurses5 but it is not installable
E: Unable to correct problems, you have held broken packages.
then do
sudo add-apt-repository -y -s "deb http://security.ubuntu.com/ubuntu jammy main universe"
as those jammy packages seem to be missing from 24.04's repositories.
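After adding the repository, refresh the package lists and re-run the installer from Stage 2:
sudo apt update
amdgpu-install -y --usecase=wsl,rocm --no-dkms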
Tried this on an RX 6800 and I only get the CPU showing up.
I tried PyTorch test 3 and it doesn't give me any result.
Tried it with an RX 6800 XT and the 7600X iGPU; only got the iGPU.
Just to set some expectations on my end, as unhelpful as it may be:
- I am not affiliated with ComfyUI.
- I haven't had a personal need to revisit these instructions, as I do not use ComfyUI.
- I don't even personally have a test rig to continuously verify and maintain these instructions; I was helping an artist friend on their PC who had no experience with Linux, Python, or tech at large (I own no Windows PCs or AMD graphics cards).
- The solution took a lot of research and trial and error, remoting into my friend's machine to get them set up, so I shared it publicly in the hope that the methodology and sources I used would help others figure out what they need.
- However, this industry is fast-moving, and I expected these instructions to get out of date quickly, which is why I mentioned the date I wrote them in the preamble.
- If you find a new solution, if AMD start producing the packages more cleanly and there is a more official way, or if ComfyUI or anyone else with the right resources wants to take this method over and maintain it, I am all for it.
- Something else to bear in mind is that ComfyUI is in active development, so its own dependencies will change; you may need to cross-reference new packages and lists compared to when I wrote this in January 2025.
I know this isn't too helpful to those looking for answers, but I think it's fair to call out.