Generation freezes with DirectML (AMD GPUs)

patientx opened this issue · 9 comments

Hi there. I tried your extension thinking it might improve using SDXL with Lightning LoRAs: with those, the maximum usable CFG is 2 (1 is normally recommended), and a normal CFG like 5-6 produces garbled, unusable output. And indeed, with your extension I was able to use a CFG of 5.0, the output was not garbled, and it was maybe even better than the usual setup (SDXL model with a 4-step Lightning LoRA at CFG 1). BUT there is a problem: I was able to make it work only once, and now I think I know why:

Your extension is all about sigma values, yes? There is a known problem with DirectML, as you can see in this ComfyUI_IPAdapter_plus issue: https://github.com/cubiq/ComfyUI_IPAdapter_plus/issues/109

Until a few days ago we were just disabling some lines to make it work on AMD GPUs, and now the author has (hopefully) solved it. Maybe you can look at what they did and apply the same fix here.
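
For illustration, the general pattern that stalls under DirectML looks like this (a made-up sketch, not your extension's actual code; torch_directml is the package that exposes the device):

import torch
import torch_directml

dml = torch_directml.device()  # the DirectML device, e.g. "privateuseone:0"
sigmas = torch.linspace(14.6, 0.0, 10, device=dml)  # stand-in for a sigma schedule
# .item() forces a GPU -> CPU synchronization; on some AMD/DirectML setups
# that sync point is where generation silently hangs
val = sigmas.max().item()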

Thanks.

patientx · Mar 26 '24

I'm really not sure I understand the source of your issue.

Extraltodeus · Mar 27 '24

With an AMD GPU (RX 6600) using DirectML on Windows, generation stops at the second or third step whenever I use your node, regardless of the model used. Previously this only occurred with the IPAdapter extension, where it was recently solved.

patientx · Mar 27 '24

AMD here too (RX 580). I don't think the problem is in this plugin; I just downloaded and tested it (SDXL + Lightning) and it worked right away. Is your ComfyUI updated?

MythicalChu · Mar 27 '24

It seems I spoke too soon; it freezes (randomly) on me too. I narrowed it down to this line:

max_val = torch.mean(max_values).item()

I changed it the same way as the IPAdapter fix:

max_val = torch.mean(max_values).detach().cpu().numpy()

and it no longer seems to freeze, but since it only froze sometimes, more tests are needed to be sure.
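
For anyone patching this by hand, here is the change in context (only the two max_val lines come from the node; the rest is an illustrative stand-in):

import torch

max_values = torch.rand(8)  # stand-in for the node's tensor of maxima
# original: .item() forces a GPU -> CPU sync that can hang under DirectML
# max_val = torch.mean(max_values).item()
# workaround: detach, copy to CPU, and convert via numpy (yields a 0-d numpy float)
max_val = torch.mean(max_values).detach().cpu().numpy()
print(max_values / max_val)  # the numpy scalar still broadcasts like a plain float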

MythicalChu · Mar 31 '24

Yes! This works, and combined with the Lightning setup it actually makes the 4-step generations much more detailed.

Is it possible to add it to the original code so we don't need to change it every time?

patientx · Apr 01 '24

Well, the problem is that it transfers the value to the CPU, so I'll check whether it would be possible to use the Comfy torch device instead.

If you add this import at the beginning of the script:

import comfy.model_management as model_management

and use this instead:

max_val = torch.mean(max_values).to(device=model_management.get_torch_device())
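
Assembled, the two changes together would look like this (untested sketch; max_values here is just a stand-in):

import torch
import comfy.model_management as model_management

max_values = torch.rand(8)  # stand-in for the node's tensor of maxima
# keep the reduced value on Comfy's preferred device instead of syncing it to the CPU
max_val = torch.mean(max_values).to(device=model_management.get_torch_device())

Note that unlike .item(), this keeps max_val as a 0-d tensor rather than a Python float, so downstream code that expects a plain number would see a tensor instead.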

Does it work?

Extraltodeus · Apr 07 '24

I made these changes, but after two generations it's the same situation: generation just stops. So this didn't solve it. The change MythicalChu suggested works, though; I've used it many times.

Also, not related to this, but there is a warning that shows up at every generation step; it doesn't seem to do anything bad, it just keeps showing up:

"WARNING: The comfy.samplers.calc_cond_uncond_batch function is deprecated please use the calc_cond_batch one instead."

patientx · Apr 07 '24

"WARNING: The comfy.samplers.calc_cond_uncond_batch function is deprecated please use the calc_cond_batch one instead."

You need to update to the latest version.
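
For reference, the change behind that warning is roughly this (a sketch based on the warning text; the surrounding variables come from the sampler callback, so check comfy/samplers.py in your ComfyUI for the exact signature):

import comfy.samplers

# old, deprecated helper that returned a (cond, uncond) pair:
# cond_pred, uncond_pred = comfy.samplers.calc_cond_uncond_batch(model, cond, uncond, x, timestep, model_options)
# new: pass the conds as a list and unpack the returned list
cond_pred, uncond_pred = comfy.samplers.calc_cond_batch(model, [cond, uncond], x, timestep, model_options)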

Extraltodeus · Apr 07 '24

That solved it. I just saw the patch notes for ComfyUI as well; they warned about the change too.

patientx · Apr 07 '24

Is this all good now, or is the problem still there?

Extraltodeus · May 15 '24

Solved, I think. I haven't used DirectML for a while now; I'm on ZLUDA these days.

patientx · May 15 '24

Nice!!! :D

Extraltodeus · May 16 '24