[Issue]: Failed to validate samples when performing "force HiRes" upscaling
Issue Description
Please see the log info below for an example. This happens on pretty much every other upscale.
The normal generation works fine, but when I go to upscale with "force HiRes" enabled, only a black image is produced.
Version Platform Description
14:04:22-925565 INFO Starting SD.Next
14:04:22-928072 INFO Logger: file="D:\sd\sdnext\sdnext.log" level=INFO size=1519675 mode=append
14:04:22-929072 INFO Python 3.10.6 on Windows
14:04:23-017701 INFO Version: app=sd.next updated=2023-12-12 hash=69bda18e
url=https://github.com/vladmandic/automatic/tree/master
14:04:23-405427 INFO Latest published version: 5fb290f443d5f38a5c9f6e6095aabeab8e3a991d 2024-01-13T13:47:29Z
14:04:23-411474 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 151 Stepping 2, GenuineIntel system=Windows
release=Windows-10-10.0.22621-SP0 python=3.10.6
14:04:23-413433 INFO nVidia CUDA toolkit detected: nvidia-smi present
14:04:24-752317 INFO Extensions: disabled=[]
14:04:24-753317 INFO Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
'stable-diffusion-webui-rembg'] extensions-builtin
14:04:24-756826 INFO Extensions: enabled=[] extensions
14:04:24-757825 INFO Startup: quick launch
14:04:24-758826 INFO Verifying requirements
14:04:24-768847 INFO Verifying packages
14:04:24-771846 INFO Extensions: disabled=[]
14:04:24-771846 INFO Extensions: enabled=['Lora', 'sd-extension-chainner', 'sd-extension-system-info',
'sd-webui-agent-scheduler', 'sd-webui-controlnet', 'stable-diffusion-webui-images-browser',
'stable-diffusion-webui-rembg'] extensions-builtin
14:04:24-773847 INFO Extensions: enabled=[] extensions
14:04:24-779366 INFO Extension preload: {'extensions-builtin': 0.0, 'extensions': 0.0}
14:04:24-780366 INFO Command line args: ['--use-cuda'] use_cuda=True
14:04:26-793681 INFO Load packages: torch=2.1.1+cu121 diffusers=0.24.0 gradio=3.43.2
14:04:27-259850 INFO Engine: backend=Backend.ORIGINAL compute=cuda mode=no_grad device=cuda
cross-optimization="xFormers"
14:04:27-299784 INFO Device: device=NVIDIA GeForce RTX 3070 Ti n=1 arch=sm_90 cap=(8, 6) cuda=12.1 cudnn=8801
driver=546.33
14:04:29-263894 INFO Available VAEs: path="models\VAE" items=1
14:04:29-266407 INFO Disabling uncompatible extensions: backend=Backend.ORIGINAL []
14:04:29-269410 INFO Available models: path="models\Stable-diffusion" items=15 time=0.00
14:04:30-307333 INFO Extension: script='extensions-builtin\sd-webui-agent-scheduler\scripts\task_scheduler.py' Using
sqlite file: extensions-builtin\sd-webui-agent-scheduler\task_scheduler.sqlite3
14:04:30-463970 INFO Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' Warning:
ControlNet failed to load SGM - will use LDM instead.
14:04:30-465022 INFO Extension: script='extensions-builtin\sd-webui-controlnet\scripts\controlnet.py' ControlNet
preprocessor location: D:\sd\sdnext\extensions-builtin\sd-webui-controlnet\annotator\downloads
14:04:30-472064 INFO Extension: script='extensions-builtin\sd-webui-controlnet\scripts\hook.py' Warning: ControlNet
failed to load SGM - will use LDM instead.
14:04:31-015381 INFO Extensions time: 1.60 { Lora=0.45 sd-webui-agent-scheduler=0.37 sd-webui-controlnet=0.17
stable-diffusion-webui-rembg=0.50 }
14:04:31-111526 INFO Load UI theme: name="black-teal" style=Auto base=sdnext.css
Relevant log output
14:05:58-525741 INFO Cross-attention: optimization=xFormers options=[]
14:05:58-752561 INFO Load embeddings: loaded=1 skipped=0 time=0.22
14:05:58-757672 INFO Model loaded in 10.90 { load=0.05 config=0.06 create=9.36 apply=0.27 vae=0.61 move=0.31
embeddings=0.23 }
14:05:59-031647 INFO Model load finished: {'ram': {'used': 7.59, 'total': 63.86}, 'gpu': {'used': 3.12, 'total':
8.0}, 'retries': 0, 'oom': 0} cached=0
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 6.91it/s]
14:06:03-602012 INFO Upscaler loaded: type=RealESRGAN 4x+ Anime6B
model=models\RealESRGAN\RealESRGAN_x4plus_anime_6B.pth
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:01
100%|██████████████████████████████████████████████████████████████████████████████████| 22/22 [00:09<00:00, 2.36it/s]
14:06:18-676996 ERROR Failed to validate samples: sample=(1024, 1024, 3) invalid=3145728
14:06:18-693374 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
14:06:18-708269 INFO Processed: images=1 time=19.50 its=1.03 memory={'ram': {'used': 2.43, 'total': 63.86}, 'gpu':
{'used': 3.75, 'total': 8.0}, 'retries': 0, 'oom': 0}
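For context on the error above: the reported invalid count equals the total element count of the sample (1024 × 1024 × 3 = 3,145,728), which means every decoded value is non-finite before the fallback zeroes it out, hence the black image. A minimal illustration of that arithmetic (not SD.Next's own validation code):

```python
import numpy as np

# a fully-NaN sample with the same shape as in the log
sample = np.full((1024, 1024, 3), np.nan, dtype=np.float32)

# the count of non-finite elements matches "invalid=3145728" in the error,
# i.e. every single value failed validation and is later clamped to 0.0
invalid = np.count_nonzero(~np.isfinite(sample))
print(sample.shape, invalid)  # (1024, 1024, 3) 3145728
```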
Backend
Original
Branch
Master
Model
SD 1.5
Acknowledgements
- [X] I have read the above and searched for existing issues
- [X] I confirm that this is classified correctly and it's not an extension issue
does this happen with all upscalers or specific ones only?
to test, you can generate an image and then run different upscalers using the xyz grid script.
also, please run with --debug and upload logs.
@vladmandic seems to be with all upscalers. I just tried 8 across 2 xyz grids.
Here is the log with --debug (it didn't make much difference to the output).
14:45:54-923164 INFO Applying LoRA: ['gradient monsters'] patch=0.00 load=0.33
14:45:54-964113 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:02<00:00, 7.71it/s]
14:45:57-643054 DEBUG Init hires: upscaler="RealESRGAN 4x+" sampler="DPM++ 2M" resize=0x0 upscale=1024x1024
14:45:58-951685 INFO Upscaler loaded: type=RealESRGAN 4x+ model=models\RealESRGAN\RealESRGAN_x4plus.pth
Upscaling ━━━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━ 33% 0:00:02 0:00:00
14:46:00-075868 DEBUG Server: alive=True jobs=0 requests=157 uptime=164 memory=4.46/63.86 backend=Backend.ORIGINAL state=idle
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:02
14:46:02-538574 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 22/22 [00:09<00:00, 2.40it/s]
14:46:14-519138 ERROR Failed to validate samples: sample=(1024, 1024, 3) invalid=3145728
14:46:14-540378 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
14:46:14-563529 DEBUG Saving: image="outputs\text\01189-masterpiece best quality lora gradient monsters 1 0.jpg" type=JPEG size=1024x1024
14:46:14-572049 INFO Processed: images=1 time=19.98 its=1.00 memory={'ram': {'used': 4.4, 'total': 63.86}, 'gpu': {'used': 4.04, 'total': 8.0}, 'retries': 0, 'oom': 0}
Loading model: models\Lora\char\gradient monsters.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/75.6 MB -:--:--
14:46:14-920323 INFO Applying LoRA: ['gradient monsters'] patch=0.00 load=0.35
14:46:14-960707 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 12.55it/s]
14:46:16-558918 DEBUG Init hires: upscaler="RealESRGAN 4x+ Anime6B" sampler="DPM++ 2M" resize=0x0 upscale=1024x1024
14:46:16-678587 DEBUG Upscaler cached: type=RealESRGAN model=models\RealESRGAN\RealESRGAN_x4plus_anime_6B.pth
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
14:46:17-293294 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 22/22 [00:07<00:00, 2.83it/s]
14:46:25-815258 ERROR Failed to validate samples: sample=(1024, 1024, 3) invalid=3145728
14:46:25-830781 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
14:46:25-842294 DEBUG Saving: image="outputs\text\01190-masterpiece best quality lora gradient monsters 1 0.jpg" type=JPEG size=1024x1024
14:46:25-846581 INFO GPU high memory utilization: 100% {'ram': {'used': 4.64, 'total': 63.86}, 'gpu': {'used': 8.0, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:46:26-101241 DEBUG gc: collected=21079 device=cuda {'ram': {'used': 4.62, 'total': 63.86}, 'gpu': {'used': 3.53, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:46:26-103241 INFO Processed: images=1 time=11.53 its=1.73 memory={'ram': {'used': 4.62, 'total': 63.86}, 'gpu': {'used': 3.53, 'total': 8.0}, 'retries': 0, 'oom': 0}
Loading model: models\Lora\char\gradient monsters.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/75.6 MB -:--:--
14:46:26-282058 INFO Applying LoRA: ['gradient monsters'] patch=0.00 load=0.18
14:46:26-305585 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 12.83it/s]
14:46:27-872799 DEBUG Init hires: upscaler="ESRGAN 4x GAN" sampler="DPM++ 2M" resize=0x0 upscale=1024x1024
14:46:27-996453 INFO Downloading: url="https://github.com/cszn/KAIR/releases/download/v1.0/ESRGAN.pth" file=D:\sd\sdnext\models\ESRGAN\ESRGAN.pth
Downloading ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:04
14:46:32-714223 INFO Upscaler loaded: type=ESRGAN model=D:\sd\sdnext\models\ESRGAN\ESRGAN.pth
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:00
14:46:34-176814 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 22/22 [00:07<00:00, 2.83it/s]
14:46:43-349762 ERROR Failed to validate samples: sample=(1024, 1024, 3) invalid=3145728
14:46:43-365376 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
14:46:43-377206 DEBUG Saving: image="outputs\text\01191-masterpiece best quality lora gradient monsters 1 0.jpg" type=JPEG size=1024x1024
14:46:43-382206 INFO GPU high memory utilization: 100% {'ram': {'used': 6.03, 'total': 63.86}, 'gpu': {'used': 8.0, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:46:43-746569 DEBUG gc: collected=10788 device=cuda {'ram': {'used': 4.93, 'total': 63.86}, 'gpu': {'used': 3.59, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:46:43-748602 INFO Processed: images=1 time=17.64 its=1.13 memory={'ram': {'used': 4.93, 'total': 63.86}, 'gpu': {'used': 3.59, 'total': 8.0}, 'retries': 0, 'oom': 0}
Loading model: models\Lora\char\gradient monsters.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/75.6 MB -:--:--
14:46:43-969823 INFO Applying LoRA: ['gradient monsters'] patch=0.00 load=0.22
14:46:43-988478 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 20/20 [00:01<00:00, 13.01it/s]
14:46:45-529330 DEBUG Init hires: upscaler="chaiNNer 4x HAT" sampler="DPM++ 2M" resize=0x0 upscale=1024x1024
14:46:45-646309 INFO Downloading: url="https://huggingface.co/vladmandic/sdnext-upscalers/resolve/main/HAT-4x.pth" file=D:\sd\sdnext\models\chaiNNer\HAT-4x.pth
Downloading ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:04
14:46:51-353100 INFO Upscaler loaded: type=chaiNNer model='D:\sd\sdnext\models\chaiNNer\HAT-4x.pth'
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:05
D:\sd\sdnext\extensions-builtin\sd-extension-chainner\nodes\impl\image_utils.py:126: RuntimeWarning: invalid value encountered in cast
return (img * 255).round().astype(np.uint8)
14:46:57-006064 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 22/22 [00:07<00:00, 2.83it/s]
14:47:06-222400 ERROR Failed to validate samples: sample=(1024, 1024, 3) invalid=3145728
14:47:06-239423 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
14:47:06-251254 DEBUG Saving: image="outputs\text\01192-masterpiece best quality lora gradient monsters 1 0.jpg" type=JPEG size=1024x1024
14:47:06-255254 INFO GPU high memory utilization: 100% {'ram': {'used': 6.09, 'total': 63.86}, 'gpu': {'used': 8.0, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:47:06-570443 DEBUG gc: collected=499 device=cuda {'ram': {'used': 4.78, 'total': 63.86}, 'gpu': {'used': 3.73, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:47:06-572443 INFO Processed: images=1 time=22.82 its=0.88 memory={'ram': {'used': 4.78, 'total': 63.86}, 'gpu': {'used': 3.73, 'total': 8.0}, 'retries': 0, 'oom': 0}
14:47:06-623075 DEBUG Saving: image="outputs\grids\xyz_grid-xyz_grid-0000-masterpiece best quality lora gradient monsters 1 0.jpg" type=JPEG size=4096x1288
debug would show me your settings during server startup without needing to ask for every single one. anyhow, i cannot reproduce the problem. does it go away if you run in fp32? i know that's not a solution, just need to know. any chance you can try out the dev branch? there have been several device/dtype mapping fixes in upscalers recently.
@vladmandic what's fp32? How can I try that?
In settings, set precision to fp32 instead of the default fp16.
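For background: fp16 has a far smaller dynamic range than fp32, so an intermediate value that overflows to inf can become NaN a step later and poison every pixel, which is why testing fp32 helps narrow down the cause even though it is not a fix. A minimal, illustrative sketch (not SD.Next code):

```python
import torch

# fp16 overflows just above 65504; the resulting inf turns into NaN as soon
# as it is combined with another inf (or multiplied by zero) downstream
x = torch.tensor([70000.0], dtype=torch.float16)
print(x)      # tensor([inf], dtype=torch.float16)
print(x - x)  # tensor([nan], dtype=torch.float16)

# the same value is perfectly representable in fp32
y = torch.tensor([70000.0], dtype=torch.float32)
print(y - y)  # tensor([0.])
```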
fp32 made no change.
Using the dev branch appears to have fixed it, but generations are now half as fast and it isn't picking up any of my VAE files.
fp32 is half as fast because it runs at double the precision. If the dev branch works, go back to fp16.
I figured that might be the case, so I had already switched back.
Edit: To confirm, generations run at half the iterations per second in fp16 mode on the dev branch.
I restarted everything. Good news: it/s are back to normal and the VAE has appeared on the dev branch.
Bad news: I'm back to the original issue.
Master:
16:25:03-318846 INFO LoRA apply: ['gradient monsters'] patch=0.00 load=0.31
16:25:03-540186 DEBUG Sampler: sampler="DPM++ 2M" config={'scheduler': 'karras', 'brownian_noise': False}
35%|█████████████████████████████ | 7/20 [00:02<00:02, 4.87it/s]
16:25:05-955594 DEBUG Load VAE decode approximate: model="models\VAE-approx\model.pt"
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:03<00:00, 5.94it/s]
16:25:07-000665 DEBUG Init hires: upscaler="RealESRGAN 4x+ Anime6B" sampler="UniPC" resize=0x0 upscale=947x947
16:25:07-936947 DEBUG Image resize: mode=1 resolution=947x947 upscaler=RealESRGAN 4x+ Anime6B function=sample
16:25:08-021946 INFO Upscaler loaded: type=RealESRGAN 4x+ Anime6B
model=models\RealESRGAN\RealESRGAN_x4plus_anime_6B.pth
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:01
16:25:10-969463 DEBUG Sampler: sampler="UniPC" config={}
Progress 2.75it/s ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:08
16:25:21-983568 ERROR Failed to validate samples: sample=(944, 944, 3) invalid=2673408
16:25:22-000957 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
16:25:22-015928 DEBUG Saving: image="outputs\text\01198-masterpiece best quality lora gradient
monsters 1 0.jpg" type=JPEG resolution=944x944 size=0
16:25:22-020928 INFO Processed: images=1 time=19.01 its=1.05 memory={'ram': {'used': 4.22, 'total': 63.86}, 'gpu':
{'used': 3.7, 'total': 8.0}, 'retries': 0, 'oom': 0}
dev:
Loading model: D:\sd\sdnext\models\Lora\char\gradient monsters.safetensors ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0.0/75.6 MB -:--:--
16:28:19-055191 INFO LoRA apply: ['gradient monsters'] patch=0.00 load=0.47
16:28:19-768983 DEBUG Diffuser pipeline: StableDiffusionPipeline task=DiffusersTaskType.TEXT_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type': 'latent',
'num_inference_steps': 20, 'eta': 1.0, 'guidance_rescale': 0.7, 'width': 512, 'height': 512, 'parser': 'Full parser'}
16:28:19-829206 DEBUG Sampler: sampler="DPM++ 2M" config={'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'prediction_type': 'epsilon', 'thresholding': False, 'sample_max_value': 1.0, 'algorithm_type': 'sde-dpmsolver++',
'solver_type': 'midpoint', 'lower_order_final': True, 'use_karras_sigmas': True}
Progress 7.14it/s ██████████████████▏ 55% 11/20 00:03 00:01 Base
16:28:23-056385 DEBUG Load VAE decode approximate: model="models\VAE-approx\model.pt"
Progress 5.36it/s █████████████████████████████████ 100% 20/20 00:03 00:00 Base
16:28:23-752926 DEBUG Init hires: upscaler="RealESRGAN 4x+ Anime6B" sampler="Default" resize=0x0 upscale=947x947
16:28:23-754926 INFO Hires: upscaler=RealESRGAN 4x+ Anime6B width=947 height=947 images=1
16:28:24-688607 DEBUG Image resize: input=<PIL.Image.Image image mode=RGB size=512x512 at 0x1E618B3D8A0> mode=1 target=947x947 upscaler=RealESRGAN 4x+ Anime6B function=hires_resize
16:28:24-785055 INFO Upscaler loaded: type=RealESRGAN 4x+ Anime6B model=models\RealESRGAN\RealESRGAN_x4plus_anime_6B.pth
Upscaling ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 100% 0:00:00 0:00:01
16:28:26-553115 DEBUG Pipeline class change: original=StableDiffusionPipeline target=StableDiffusionImg2ImgPipeline
16:28:26-603115 DEBUG Diffuser pipeline: StableDiffusionImg2ImgPipeline task=DiffusersTaskType.IMAGE_2_IMAGE set={'prompt_embeds': torch.Size([1, 77, 768]), 'negative_prompt_embeds': torch.Size([1, 77, 768]), 'guidance_scale': 6, 'generator': device(type='cuda'), 'output_type':
'latent', 'num_inference_steps': 63, 'eta': 1.0, 'image': <class 'list'>, 'strength': 0.35, 'parser': 'Full parser'}
Progress 3.03it/s ████████████████████████████████ 100% 22/22 00:07 00:00 Hires
16:28:37-540159 ERROR Failed to validate samples: sample=(944, 944, 3) invalid=2673408
16:28:37-553159 WARNING Attempted to correct samples: min=0.00 max=0.00 mean=0.00
16:28:37-562160 DEBUG Saving: image="outputs\text\01199-masterpiece best quality lora gradient monsters 1 0.jpg" type=JPEG resolution=944x944 size=0
16:28:37-567159 INFO Processed: images=1 time=18.99 its=1.05 memory={'ram': {'used': 3.88, 'total': 63.86}, 'gpu': {'used': 3.8, 'total': 8.0}, 'retries': 0, 'oom': 0}
I can replicate this issue with
RuntimeWarning: invalid value encountered in cast
return (img * 255).round().astype(np.uint8)
while trying forced hires with chaiNNer in the current (c1dfb1b2) stable SD.Next. Images come out totally black; that's in fp16 with SDP optimization, of course. I'm still on Windows 10. 🙂
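That RuntimeWarning points at the uint8 cast shown in the log (sd-extension-chainner's image_utils.py): if the upscaler output contains NaN, casting to uint8 is undefined and the saved image typically comes out black. A minimal reproduction plus a hypothetical nan_to_num guard (illustrative only, not the extension's actual fix):

```python
import numpy as np

# an upscaler output poisoned with NaN
img = np.full((4, 4, 3), np.nan, dtype=np.float32)

# same cast as in image_utils.py; on recent numpy this emits
# "RuntimeWarning: invalid value encountered in cast"
out = (img * 255).round().astype(np.uint8)

# hypothetical guard: clamp non-finite values before the cast
safe = (np.nan_to_num(img, nan=0.0, posinf=1.0, neginf=0.0) * 255).round().astype(np.uint8)
```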
I can replicate this issue with ...
using which engine/backend/torch/etc? i cannot reproduce the problem, and the problem is highly specific to the combination of gpu/torch/etc.
Oh, sorry, I'm using the 'original' backend, SD 1.5, Torch 2.1.0+cu121, Autocast half. I'm avoiding the diffusers backend (hence no XL, etc. for me), since extensions tend to freak out after a restart. :)
06:05:32-589594 INFO Python 3.10.9 on Windows
06:05:33-365593 INFO Version: app=sd.next updated=2024-02-24 hash=c1dfb1b2 url=https://github.com/vladmandic/automatic/tree/master
06:05:35-445594 INFO Platform: arch=AMD64 cpu=Intel64 Family 6 Model 85 Stepping 4, GenuineIntel system=Windows
release=Windows-10-10.0.19045-SP0 python=3.10.9
06:05:35-471594 INFO nVidia CUDA toolkit detected: nvidia-smi present
@mart-hill i can't reproduce, see screenshot:
totally off-topic, but which extensions do you find critical that are missing or not working with the diffusers backend? i really want to move on from legacy; the diffusers backend offers so much more, and pretty much all new features added in the past few months (and there have been a lot) are diffusers-only.
closing as this issue has been idle for a year. i'm sure black images still occur given how frequently the sdxl-vae, which is not fp16-safe, is still used. but for any such problems, let's start with a new issue.
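For reference, when the culprit is the fp16-unsafe sdxl-vae, the usual diffusers-side workaround is to load the fp16-safe VAE rebuild. A minimal sketch, assuming the diffusers backend and the publicly available madebyollin/sdxl-vae-fp16-fix weights (not an SD.Next setting):

```python
import torch
from diffusers import AutoencoderKL, StableDiffusionXLPipeline

# swap in a VAE that was finetuned to stay finite in fp16
vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16
)
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")
```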