
Allow enabling use of MIOpen by setting an env var

Open BradPepersAMD opened this pull request 1 week ago • 4 comments

This change keeps the default behavior, where MIOpen use is disabled on many AMD architectures, but lets you set the COMFYUI_ENABLE_MIOPEN=1 environment variable to override that and keep using MIOpen. The goal is to make it easier to test with MIOpen on or off without changing code. AMD is also looking to gather information to help us resolve this problem, and to do so we need an easy way for users to turn MIOpen on.

You can see the discussion around gathering the data here.
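Not part of the diff itself, but a minimal sketch of what the override described above could look like next to the existing cuDNN toggle in model_management.py (the exact placement and wording in the PR may differ; on ROCm builds torch.backends.cudnn is backed by MIOpen, which is why that flag is the switch being flipped):

import os
import logging

import torch

# Sketch only: honor COMFYUI_ENABLE_MIOPEN=1 to keep MIOpen enabled,
# otherwise keep the current default of disabling it on the affected AMD archs.
if os.environ.get("COMFYUI_ENABLE_MIOPEN", "0") == "1":
    torch.backends.cudnn.enabled = True
    logging.info("COMFYUI_ENABLE_MIOPEN=1 set, leaving MIOpen enabled.")
else:
    torch.backends.cudnn.enabled = False  # current default behavior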

BradPepersAMD avatar Dec 17 '25 05:12 BradPepersAMD

@comfyanonymous we would appreciate a quick merge here :)

jammm avatar Dec 17 '25 10:12 jammm

I already explained it a while ago in another PR:

https://github.com/comfyanonymous/ComfyUI/pull/10678#issuecomment-3514054195

Why make it an environment variable when you can just make it a launch arg like --force-cudnn-enabled?

In cli_args.py, you'd add something like: parser.add_argument("--force-cudnn-enabled", action="store_true", help="Force cuDNN (MIOpen on AMD) to stay enabled.")

model_management.py already imports the args.

I remember testing it and it worked; it just took a simple if/else around what's already in model_management.py. Nobody really wants to juggle environment variables, so just make it a launch flag. But I suppose it's not that big of a deal to add set COMFYUI_ENABLE_MIOPEN=1 to the startup bat. It just seems cleaner to make it an arg and add some help info, so that people know it's even there.

In cli_args.py somewhere around line 90 is where I put this:

parser.add_argument("--force-cudnn-enabled", action="store_true", help="Force cuDNN/MIOpen to remain enabled instead of disabling it on AMD.")

In model_management.py around line 340:

if not (any((a in arch) for a in AMD_RDNA2_AND_OLDER_ARCH)):
	if args.force_cudnn_enabled:
		torch.backends.cudnn.enabled = True
		logging.info("Set: torch.backends.cudnn.enabled = True by user")
	else:
		torch.backends.cudnn.enabled = False  # Seems to improve things a lot on AMD
		logging.info("Set: torch.backends.cudnn.enabled = False for better AMD performance.")

That should be along the lines of what I did when I tested it.
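(Purely to illustrate that both proposals end up flipping the same switch, here is a hedged sketch that honors either toggle; neither this PR nor #10678 actually combines them, and it reuses args, arch, AMD_RDNA2_AND_OLDER_ARCH, torch, and logging from the snippet above:)

import os

# Hypothetical combination of the launch flag and the env var; not from either PR.
force_miopen = args.force_cudnn_enabled or os.environ.get("COMFYUI_ENABLE_MIOPEN") == "1"
if not any((a in arch) for a in AMD_RDNA2_AND_OLDER_ARCH):
	torch.backends.cudnn.enabled = force_miopen
	logging.info("Set: torch.backends.cudnn.enabled = %s", force_miopen)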

RandomGitUser321 avatar Dec 17 '25 13:12 RandomGitUser321

I already explained it a while ago in another PR:

#10678 (comment)

Ideally we'd like users to run ComfyUI as-is without changing any commands, so it's easy to re-use existing workflows. We're fine with either this or https://github.com/comfyanonymous/ComfyUI/pull/10678 being merged.

jammm avatar Dec 17 '25 14:12 jammm

Well, either way, cuDNN breaks VAE decodes on my native Windows setup with TheRock nightly wheels from https://rocm.nightlies.amd.com/v2/gfx110X-all/ (tested with 2.11.0a0+rocm7.11.0a20251217). Both float16 and float32 produce a bunch of noise, and bfloat16 spits a HIP-related error in the console. You can see it working fine in the previews, though. I only tested with the Flux VAE, and I made sure to clear the .miopen cache between tests.
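If it helps the data-gathering mentioned above, a small sketch for capturing the relevant environment details alongside a repro; these are all standard PyTorch attributes, nothing ComfyUI-specific:

import torch

# Quick diagnostic dump for reports like the one above.
print("torch:", torch.__version__)
print("hip:", torch.version.hip)  # ROCm/HIP build string, None on CUDA builds
print("cudnn/miopen enabled:", torch.backends.cudnn.enabled)  # maps to MIOpen on ROCm
print("cudnn/miopen version:", torch.backends.cudnn.version())
print("device:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu")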

RandomGitUser321 avatar Dec 18 '25 12:12 RandomGitUser321