AMD support
Hello guys, you're doing awesome work!
I was wondering if there is any tutorial on how to run Janus or the Janus demo with AMD graphics cards.
AMD No /(ㄒoㄒ)/~~
When running the model with ComfyUI on Windows with an AMD GPU, it reports "Torch not compiled with CUDA enabled," so the model may not be directly usable on an AMD GPU.
Additionally, the command line shows an error at: `tokens = torch.zeros((parallel_size*2, len(input_ids)), dtype=torch.int).cuda()`
Can any expert help with this?
I've been able to run the 1B model on a 7900 XTX. To run it I used WSL and conda. Apart from installing the requirements, you also need to install ROCm and PyTorch for ROCm: https://rocm.docs.amd.com/projects/radeon/en/latest/docs/install/wsl/install-pytorch.html
After this it should work, but in my case I also needed to explicitly send the model to the GPU in the script (with a ROCm build of PyTorch, the "cuda" device maps to the AMD GPU):
```python
vl_gpt = AutoModelForCausalLM.from_pretrained(
    model_path, trust_remote_code=True, torch_dtype=torch.bfloat16
).to("cuda")
```
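As a general pattern, the same script can be made device-agnostic. Here is a minimal sketch (my own helper, not part of Janus), assuming a ROCm build of PyTorch, which reports the AMD GPU through the regular `torch.cuda` API:

```python
# Hypothetical helper: pick a device string that works on CUDA, ROCm, or CPU.
# ROCm builds of PyTorch expose the AMD GPU under the "cuda" device name,
# so torch.cuda.is_available() is True and no further code changes are needed.
def pick_device():
    try:
        import torch
    except ImportError:  # torch not installed: fall back to CPU
        return "cpu"
    return "cuda" if torch.cuda.is_available() else "cpu"

# Usage (sketch): vl_gpt = AutoModelForCausalLM.from_pretrained(...).to(pick_device())
```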
This YouTube tutorial also helped me: https://youtu.be/2KAjtxBYfeg?si=Gpp3jH7GxhS4tFLA
However, I don't think this is the best or recommended way to run the model; it doesn't make much sense that I can run deepseek-r1:70b but not the janus-pro:7b model.
The 7900 XTX can also run it on Windows by using ZLUDA. The 7B model uses about 16 GB of VRAM (parallel_size=1, generating one image at a time).
Add this code after `import torch` (line 20):
```python
torch.backends.cuda.enable_flash_sdp(False)
torch.backends.cuda.enable_mem_efficient_sdp(False)
torch.backends.cuda.enable_math_sdp(True)
torch.backends.cudnn.enabled = False
```
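If it helps, these toggles can be wrapped in a small guarded helper (my own sketch, not from the Janus code) so the script still imports on machines without PyTorch. It assumes the `torch.backends.cuda.enable_*_sdp` toggles exist (PyTorch 2.0+); they force the portable math scaled-dot-product-attention path instead of the fused flash/memory-efficient kernels that fail under ZLUDA:

```python
def force_math_sdp():
    """Disable fused attention kernels and cuDNN; keep only the portable
    math SDP implementation. Returns True if the settings were applied."""
    try:
        import torch
    except ImportError:  # torch not installed: nothing to configure
        return False
    torch.backends.cuda.enable_flash_sdp(False)
    torch.backends.cuda.enable_mem_efficient_sdp(False)
    torch.backends.cuda.enable_math_sdp(True)
    torch.backends.cudnn.enabled = False
    return True
```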
Hello! I had some issues getting it to work with an AMD card (RX 6800 XT) on Ubuntu. I had to add my user to the `video` and `render` groups. To check whether you are in these groups, run `groups` in the terminal. If you don't see `video` and `render`, run:

`sudo usermod -a -G video,render $LOGNAME`

Source: https://rocm.docs.amd.com/projects/install-on-linux/en/latest/install/prerequisites.html
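The group check above can also be scripted. A small sketch (my own, Linux-only, using only the standard-library `grp` and `os` modules) that lists which of the ROCm-required groups the current user is missing:

```python
import grp
import os

# ROCm's Linux prerequisites require membership in the "video" and
# "render" groups for GPU access.
def missing_rocm_groups():
    names = set()
    for gid in os.getgroups():
        try:
            names.add(grp.getgrgid(gid).gr_name)
        except KeyError:  # group id with no name entry
            pass
    return [g for g in ("video", "render") if g not in names]

# If this prints anything, run: sudo usermod -a -G video,render $LOGNAME
print(missing_rocm_groups())
```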