
Apple silicon: black outputs happen randomly

Open A2Sumie opened this issue 2 years ago • 14 comments

Describe the bug I'm using an Apple silicon device (MacBook Pro 16") and black outputs happen from time to time. I tried the Euler a and DPM fast samplers; the failure happens randomly with both. There are no error outputs in the console.

To Reproduce Steps to reproduce the behavior:

  1. Start a generation batch.
  2. Wait for completion.
  3. Some of the outputs are solid black.

Expected behavior All outputs are properly processed.

Screenshots Running another batch; I will attach one when it occurs again. 00036-724949706-

Desktop (please complete the following information):

  • OS: macOS 12
  • Browser: Edge for Mac (the image in the output folder is also black)
  • Commit revision 698d303b04e293635bfb49c525409f3bcf671dce


A2Sumie avatar Oct 13 '22 02:10 A2Sumie

try --precision full --no-half, worth a shot https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Troubleshooting

ClashSAN avatar Oct 13 '22 03:10 ClashSAN
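For context on why --precision full --no-half can help: float16 tops out around 65504, so an intermediate value that overflows becomes inf, and subsequent arithmetic on inf produces NaN; a tensor full of NaNs renders as a solid black image. A minimal numpy illustration (not the webui's actual code path):

```python
import numpy as np

# float16 saturates to inf just past 65504, its largest finite value...
x = np.float16(60000.0)
y = x * np.float16(2.0)        # 120000 overflows float16 -> inf
assert np.isinf(y)

# ...and arithmetic on inf then yields NaN, which decodes to a black image
assert np.isnan(y - y)

# the same math in float32 (what --precision full / --no-half use) stays finite
x32 = np.float32(60000.0)
assert np.isfinite(x32 * np.float32(2.0))
```

This is why the flags trade memory and speed for numerical headroom.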

I run with --precision full --no-half --opt-split-attention-v1 --disable-safe-unpickle and I still get black images on my Mac as well, @ClashSAN. Generally about a third of the batch turns out black.

Karsten385 avatar Oct 13 '22 03:10 Karsten385

Was this always an issue? Can you revert to an older commit?

ClashSAN avatar Oct 13 '22 03:10 ClashSAN

try --precision full --no-half, worth a shot https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Troubleshooting

The default line in the script is python webui.py --precision full --no-half --opt-split-attention-v1. I haven't tried --disable-safe-unpickle yet.

A2Sumie avatar Oct 13 '22 04:10 A2Sumie

Try removing --opt-split-attention-v1; the newer opt-split-attention implementation will be on by default.

ClashSAN avatar Oct 13 '22 04:10 ClashSAN

Try removing --opt-split-attention-v1; the newer opt-split-attention implementation will be on by default.

Looks promising in the new batch, still running.

A2Sumie avatar Oct 13 '22 04:10 A2Sumie

https://github.com/dylancl/stable-diffusion-webui-mps/commit/01b071c72d107555961e839082773d78e1e1ad99 Looks like they updated the script and the default parameters changed, so make sure both the script and the repo are updated.

the invoke commit - https://github.com/AUTOMATIC1111/stable-diffusion-webui/pull/2234

If the issue is solved, say so; you can close it, @A2Sumie.

ClashSAN avatar Oct 13 '22 04:10 ClashSAN

Not solved. I'm not sure, but I was on AC power when the outputs became stable. Now I'm on battery and the black outputs have returned. Not sure if this is the cause.

A2Sumie avatar Oct 13 '22 06:10 A2Sumie

I also have this issue since the evening of October 11 (around 7 PM UTC+2). I used to be able to generate a batch of 4 images in one go; now if I do that I get black output every time. Everything seems slower too.

TaciteOFF avatar Oct 13 '22 08:10 TaciteOFF

I also have this issue since the evening of October 11 (around 7 PM UTC+2). I used to be able to generate a batch of 4 images in one go; now if I do that I get black output every time. Everything seems slower too.

I run with --precision full --no-half --opt-split-attention-v1 --disable-safe-unpickle and I still get black images on my Mac as well, @ClashSAN. Generally about a third of the batch turns out black.

I've found something weird. Toggling the fans to full speed significantly increases the success rate. Using battery power or putting my Mac on my bed gives bunches of black outputs. Can you check this out?

A2Sumie avatar Oct 13 '22 10:10 A2Sumie

If you have modelname.vae.pt in the models directory, adding --no-half-vae could fix the issue.

Kalekki avatar Oct 13 '22 10:10 Kalekki
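As an aside on what --no-half-vae addresses: it keeps the VAE decode in float32 so the final image tensor can't come back as NaNs. A sketch of the idea, using a hypothetical DummyVAE stand-in and a made-up decode_with_fp32_fallback helper (not webui code), is:

```python
import torch

class DummyVAE:
    """Hypothetical stand-in for the model's VAE decoder, for illustration only."""
    def __init__(self, dtype=torch.float16):
        self.dtype = dtype

    def float(self):
        # switch the decoder to full precision
        self.dtype = torch.float32
        return self

    def decode(self, latents):
        # a real VAE maps latents to an image; here we just cast and scale
        return latents.to(self.dtype) * 2.0

def decode_with_fp32_fallback(vae, latents):
    # Roughly the failure --no-half-vae guards against: if the half-precision
    # decode comes back NaN (rendered as a black image), retry in float32.
    image = vae.decode(latents)
    if torch.isnan(image).any():
        image = vae.float().decode(latents.float())
    return image

img = decode_with_fp32_fallback(DummyVAE(), torch.ones(1, 4, 8, 8))
```

Passing --no-half-vae simply makes the full-precision path the default rather than a fallback.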

If you have modelname.vae.pt in the models directory, adding --no-half-vae could fix the issue

I have this added but it does not help. :(

A2Sumie avatar Oct 13 '22 10:10 A2Sumie

I've found something weird. Toggling fans to full speed would significantly increase the rate of success. Using battery power or throwing my Mac on to my bed gives bunches of black outputs. Can you check this out?

You actually might be onto something here. I stood my Mac up on its short side so the fans had plenty of room to breathe, and it put out a bunch of correct images in a row. Weird.

Karsten385 avatar Oct 13 '22 19:10 Karsten385

Not sure if there is an issue for consistent black image outputs on M1 or if it is the same problem, but for me only a few samplers work correctly. Important note: they all work with 1 sampling step (or is that because no sampling is applied on step 1?), but some of them output black images for 2+ steps. I've marked the ones that work for me with ✅.

[text-to-image]
Euler A ❌
Euler ✅
LMS ❌
Heun ❌
DPM2 ✅
DPM2 a ❌
DPM fast ✅
DPM adaptive ✅
LMS Karras ❌
DPM2 Karras ✅
DPM2 a Karras ❌
DDIM ✅
PLMS ✅

[image-to-image] All the samplers work correctly ✅

UPDATE: the method below fixes my issue with the samplers, but does NOT fix the random black image outputs.
In repositories/k-diffusion/k_diffusion/sampling.py, after def to_d(x, sigma, denoised): add sigma = sigma.to('cpu').to('mps') <- for some reason, when the sigma value is close to 0 and is not on the CPU, it reads as 0; moving it to the CPU fixes it for me. It may only be a problem for a specific nightly torch release / macOS version.

Also, the issue discussed in this thread only appears for a single sampling step; if you plot every step, it recovers on the next step.

remixer-dec avatar Oct 15 '22 12:10 remixer-dec
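A sketch of where the suggested line lands, assuming to_d's usual (x - denoised) / sigma form from k-diffusion; the append_dims helper and the MPS guard are written out here for illustration, so the CPU round-trip only fires on Apple silicon:

```python
import torch

def append_dims(t, ndim):
    # pad sigma with trailing singleton dims so it broadcasts against x
    return t[(...,) + (None,) * (ndim - t.ndim)]

def to_d(x, sigma, denoised):
    # Workaround from the comment above: round-trip sigma through the CPU,
    # since very small sigmas can read as 0 on the MPS backend.
    if sigma.device.type == 'mps':
        sigma = sigma.to('cpu').to('mps')
    return (x - denoised) / append_dims(sigma, x.ndim)

d = to_d(torch.ones(1, 3, 4, 4), torch.tensor([2.0]), torch.zeros(1, 3, 4, 4))
```

On a CPU-only machine the guard is a no-op and the function behaves like the stock implementation.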

Not sure if there is an issue for consistent black image outputs on M1 or if it is the same problem, but for me only a few samplers work correctly. Important note, they all work with 1 sampling step (or is it because no sampling is applied on step 1?), but some of them output black images for 2+ steps. I marked with ✅ ones that do work for me

[text-to-image] Euler A ❌ Euler ✅ LMS ❌ Heun ❌ DPM2 ✅ DPM2 a ❌ DPM fast ✅ DPM adaptive ✅ LMS Karras ❌ DPM2 Karras ✅ DPM2 a Karras ❌ DDIM ✅ PLMS ✅

[image-to-image] All the samplers work correctly ✅

UPDATE: the method below fixes my issue with the samplers, but does NOT fix the random black image outputs. In repositories/k-diffusion/k_diffusion/sampling.py, after def to_d(x, sigma, denoised): add sigma = sigma.to('cpu').to('mps') <- for some reason, when the sigma value is close to 0 and is not on the CPU, it reads as 0; moving it to the CPU fixes it for me. It may only be a problem for a specific nightly torch release / macOS version.

Also, the issue discussed in this topic only appears for a single sampling step, if you plot every step, it recovers in the next step.

Thanks a lot, the fix works with the latest version on a Mac Studio (2022), so I can use "Euler A".

MrPalais avatar Dec 07 '22 16:12 MrPalais