
[Feature]: Add ROCm for Windows support

Open TeutonJon78 opened this issue 2 years ago • 25 comments

Feature description

Since there seems to be a lot of excitement about AMD finally releasing ROCm support for Windows, I thought I would open a tracking FR for information related to it. Before it can be integrated into SD.Next, pyTorch needs to add support for it, and that includes several other dependencies being ported to Windows as well. Obviously, no ETA is known for any of that work to be done and released.

  • Initial Announcement: https://community.amd.com/t5/rocm/available-now-new-hip-sdk-helps-democratize-gpu-computing/ba-p/621029/jump-to/first-unread-message
  • AMD Documentation: https://rocm.docs.amd.com/en/docs-5.5.1/deploy/windows/index.html
  • Official GPU Support List: https://rocm.docs.amd.com/en/docs-5.5.1/release/windows_support.html

Status:

  • [x] AMD released Drivers
  • [ ] ROCm dependencies updated for Windows
  • [ ] pyTorch ported to ROCm for Windows
  • [ ] any needed support added to SD.Next
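
As a rough way to tell whether an installed PyTorch build actually targets ROCm once it ships for Windows, a check like the following could be used (a minimal sketch, not part of SD.Next; it only inspects PyTorch's version metadata, where `torch.version.hip` is None on CUDA/CPU wheels):

```python
# Minimal sketch: report whether the installed PyTorch wheel is a ROCm/HIP
# build. Purely illustrative; it handles torch being absent gracefully.
def rocm_backend_info():
    try:
        import torch
    except ImportError:
        return "pytorch not installed"
    hip = getattr(torch.version, "hip", None)  # None on CUDA/CPU wheels
    return f"ROCm/HIP build {hip}" if hip else "non-ROCm build"

print(rocm_backend_info())
```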

Tracking Bugs/PRs from upstream projects:

  • pyTorch: https://github.com/pytorch/pytorch/issues/106161
  • MIOpen: https://github.com/ROCmSoftwarePlatform/MIOpen/pull/2272

Version Platform Description

AMD on Windows 10/11

TeutonJon78 avatar Jul 29 '23 09:07 TeutonJon78

SQUEEEEEE!!! Ahem.. adjusts tie.. marvellous, ol' chap!

brknsoul avatar Jul 29 '23 13:07 brknsoul

If anyone finds any more requirements that need porting, post them here as well. I didn't do a deep dive into which all is needed beyond pyTorch and MIOpen.

TeutonJon78 avatar Jul 29 '23 17:07 TeutonJon78

Can't wait to see what the performance is like (and, more importantly, to get access to the full extent of SD features the various UIs support). I've been using Shark... it supports LoRAs... and that's it, but it has to recompile for every combination of model, LoRA, resolution, and sometimes prompt length. Still, from benchmarks I've found lying around, under that compilation method it's faster than anything but high-end NVidia cards at SD 2.1 image generation. The problem is the inflexibility of the build system and how much disk space it starts eating up after you mess around with multiple model/LoRA combinations. DirectML is a huge speed drop, and its memory management suffers in comparison.

I kind of figured this was coming since the late June agility SDK hardware scheduling drivers accidentally included an amdhip64.dll but it's nice to see it was faster than I'd thought. I'll be keeping an eye out here for news. :D

NeedsMoar avatar Aug 01 '23 21:08 NeedsMoar

ROCm for Windows doesn't support my GPU (AMD RX 560), so I guess I'll stay on Hackintosh.

sukualam avatar Aug 07 '23 14:08 sukualam

There is also this handy chart on the differences between ROCm on Linux and Windows: https://rocm.docs.amd.com/en/latest/rocm.html#rocm-on-windows

TeutonJon78 avatar Aug 27 '23 07:08 TeutonJon78

Seems like Shark is getting ROCm somehow, despite MIOpen not even being on Windows yet.

[screenshot]

Enferlain avatar Aug 30 '23 14:08 Enferlain

I don't want to open a new issue just for this, so I'll allow myself to comment here. Someone managed to get a repo 'working' with ONNX and Olive. I followed the tutorial, but I couldn't optimize my own checkpoint, hires fix wasn't working on my rig, and I couldn't get LoRAs working either.

Positive point, though: generating a basic pic was indeed really 10 times faster on my 7900 XT (I'm really surprised how fast it is), but since I'm not interested in generating pics without hires fix/LoRAs/checkpoints, I won't try more for now. If someone else wants to give it a try, I'll leave this here: https://community.amd.com/t5/ai/updated-how-to-running-optimized-automatic1111-stable-diffusion/ba-p/630252

CharlesCato avatar Oct 25 '23 08:10 CharlesCato
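
For anyone curious what the Olive/ONNX route from the linked guide looks like in code, here is a hedged sketch (the model directory is a placeholder; this assumes a checkpoint already converted to ONNX, plus `diffusers` and `onnxruntime-directml` installed):

```python
# Hedged sketch: load an Olive-optimized ONNX Stable Diffusion model with
# diffusers' ONNX pipeline on the DirectML execution provider.
# "./stable_diffusion_onnx" is a placeholder path, not a real checkpoint.
def load_onnx_sd_pipeline(model_dir="./stable_diffusion_onnx"):
    from diffusers import OnnxStableDiffusionPipeline
    return OnnxStableDiffusionPipeline.from_pretrained(
        model_dir, provider="DmlExecutionProvider"
    )
```

A pipeline loaded this way is then called like any diffusers pipeline, e.g. `load_onnx_sd_pipeline()("a photo of a cat").images[0]`.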

I don't want to open a new issue just for this, so I'll allow myself to comment here. Someone managed to get a repo 'working' with ONNX and Olive. I followed the tutorial, but I couldn't optimize my own checkpoint, hires fix wasn't working on my rig, and I couldn't get LoRAs working either.

Positive point, though: generating a basic pic was indeed really 10 times faster on my 7900 XT (I'm really surprised how fast it is), but since I'm not interested in generating pics without hires fix/LoRAs/checkpoints, I won't try more for now. If someone else wants to give it a try, I'll leave this here: https://community.amd.com/t5/ai/updated-how-to-running-optimized-automatic1111-stable-diffusion/ba-p/630252

I added experimental support for ONNX and Olive. Wiki I recommend using this one rather than my fork (because a1111 does not have diffusers support while sd.next does, the implementation in sd.next is more organized and cleaner).

lshqqytiger avatar Oct 30 '23 02:10 lshqqytiger

There seems to be an Olive optimised model of Dreamshaper here; https://huggingface.co/softwareweaver/dreamshaper

Since I'm still getting errors trying to convert models, I'll give this a try and report back.

brknsoul avatar Oct 30 '23 07:10 brknsoul

I added experimental support for ONNX and Olive. Wiki

So, I'm a novice with git and repos/branches. You said in the info to switch to the olive branch. I assume I needed to run git checkout --track origin/olive, but then when I try to launch a1111 I get this error:

  File "D:\StableAMD\onnx\automatic\launch.py", line 170, in <module>
    init_modules() # setup argparser and default folders
  File "D:\StableAMD\onnx\automatic\launch.py", line 33, in init_modules
    import modules.cmd_args
  File "D:\StableAMD\onnx\automatic\modules\cmd_args.py", line 3, in <module>
    from modules.paths import data_path
  File "D:\StableAMD\onnx\automatic\modules\paths.py", line 6, in <module>
    import olive.workflows
ModuleNotFoundError: No module named 'olive'

Not sure what to do.

CharlesCato avatar Oct 31 '23 14:10 CharlesCato

It's probably best to just clone a separate instance;

git clone -b olive https://github.com/vladmandic/automatic olive

This will clone the olive branch into a folder called olive.

brknsoul avatar Oct 31 '23 15:10 brknsoul

git clone -b olive https://github.com/vladmandic/automatic olive

I actually tried that... but for some reason I thought putting automatic/olive was the right way to do it, so it failed... Thank you, I'll try that.

CharlesCato avatar Oct 31 '23 15:10 CharlesCato

git (program) clone (command) -b olive (select 'olive' branch) https://github.com/vladmandic/automatic (url of git repo) olive (folder to clone into).

That last one can be anything; git clone -b olive https://github.com/vladmandic/automatic iLikeBigButtsAndICannotLie will clone the repo into a folder called "iLikeBigButtsAndICannotLie" ;-)

brknsoul avatar Oct 31 '23 15:10 brknsoul
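
The clone syntax brknsoul breaks down above can be verified end-to-end with a throwaway local repository (all paths and names below are illustrative, not real repos):

```shell
# Demonstrate `git clone -b <branch> <repo> <dir>` with a throwaway local repo.
git init -q demo-src
git -C demo-src -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "init"
git -C demo-src branch olive             # create the branch we'll clone
git clone -q -b olive demo-src demo-dest # clone that branch into demo-dest
git -C demo-dest branch --show-current   # prints: olive
```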

Well... nothing changed though. Same error.

CharlesCato avatar Oct 31 '23 15:10 CharlesCato

I added an experimental support for ONNX and Olive. Wiki

So, I'm a novice with git and repos/branches. You said in the info to switch to the olive branch. I assume I needed to run git checkout --track origin/olive, but then when I try to launch a1111 I get this error:

  File "D:\StableAMD\onnx\automatic\launch.py", line 170, in <module>
    init_modules() # setup argparser and default folders
  File "D:\StableAMD\onnx\automatic\launch.py", line 33, in init_modules
    import modules.cmd_args
  File "D:\StableAMD\onnx\automatic\modules\cmd_args.py", line 3, in <module>
    from modules.paths import data_path
  File "D:\StableAMD\onnx\automatic\modules\paths.py", line 6, in <module>
    import olive.workflows
ModuleNotFoundError: No module named 'olive'

Not sure what to do.

Thank you for reporting! I think the recent commits about paths corrupted the installation process. Will be fixed..

lshqqytiger avatar Oct 31 '23 22:10 lshqqytiger

Thank you for reporting! I think the recent commits about paths corrupted the installation process. Will be fixed..

if you've merged dev recently, then yes. note that paths.py must not have any additional dependencies or imports, as it's imported by the launcher before anything else has started.

vladmandic avatar Oct 31 '23 22:10 vladmandic
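
The constraint vladmandic describes (early-imported modules like paths.py must stay dependency-free, with optional packages imported only on demand) can be sketched roughly as follows; the --use-onnx flag name comes from this thread, while the function name and fallback behavior are illustrative:

```python
# Sketch of the optional-dependency pattern discussed above: olive is only
# imported when the user asks for it, so modules loaded at startup (like
# paths.py) never pull it in. Names here are illustrative, not SD.Next code.
import argparse

def load_olive_if_requested(use_onnx: bool):
    if not use_onnx:
        return None
    try:
        import olive.workflows  # optional; present only for ONNX/Olive users
        return olive.workflows
    except ImportError:
        return None  # real code might trigger installation here instead

parser = argparse.ArgumentParser()
parser.add_argument("--use-onnx", action="store_true")
args = parser.parse_args([])  # empty argv for the sketch
print(load_olive_if_requested(args.use_onnx))
```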

Fixed: 2fd8d2c139101676aae091fee9098a5e9b00a12d

lshqqytiger avatar Oct 31 '23 23:10 lshqqytiger

Fixed: 2fd8d2c139101676aae091fee9098a5e9b00a12d

That cannot go into base_requirements.

And installing olive must be optional, not for every install.

I can add a --use-onnx flag and bind it to that if you want?

vladmandic avatar Nov 01 '23 00:11 vladmandic

I think so. But if it imports modules.paths from init_modules and then imports olive, it's broken. https://github.com/microsoft/Olive/issues/554 Is there any other way to solve this?

And I considered that but thought it was inappropriate because OnnxStableDiffusionPipeline belongs to diffusers. Should I add --use-onnx?

lshqqytiger avatar Nov 01 '23 00:11 lshqqytiger

Let me take a look tomorrow. I've handled module conflicts before; never cleanly, but it's doable.

vladmandic avatar Nov 01 '23 01:11 vladmandic

@lshqqytiger lets move olive conversation to #2429

vladmandic avatar Nov 01 '23 16:11 vladmandic

https://github.com/ROCm/MIOpen/pull/2570

Kademo15 avatar Dec 14 '23 10:12 Kademo15

ROCm has an official release on Windows now, am I right? Are there any plans to complete this task then?

morovinger avatar Apr 05 '24 09:04 morovinger

Source?

brknsoul avatar Apr 05 '24 13:04 brknsoul

https://github.com/ROCm/MIOpen/discussions/2703#discussioncomment-11626273

johnnynunez avatar Dec 20 '24 15:12 johnnynunez