
Not compatible with xformers 0.0.18

Open · LovElizabeth opened this issue Apr 05, 2023 · 19 comments

xformers has officially been upgraded to version 0.0.18, and it seems multidiffusion-upscaler is not compatible with the new version. I got this in the console:

```
Traceback (most recent call last):
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 56, in f
    res = list(func(*args, **kwargs))
  File "D:\stable-diffusion-webui\modules\call_queue.py", line 37, in f
    res = func(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\img2img.py", line 172, in img2img
    processed = process_images(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 503, in process_images
    res = process_images_inner(p)
  File "D:\stable-diffusion-webui\modules\processing.py", line 653, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "D:\stable-diffusion-webui\modules\processing.py", line 1087, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 331, in sample_img2img
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 234, in launch_sampling
    return func()
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 331, in <lambda>
    samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "D:\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 553, in sample_dpmpp_sde
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "D:\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "D:\stable-diffusion-webui\modules\sd_samplers_kdiffusion.py", line 152, in forward
    devices.test_for_nans(x_out, "unet")
  File "D:\stable-diffusion-webui\modules\devices.py", line 152, in test_for_nans
    raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
```

My environment: python 3.10.10 • torch 2.0.0+cu118 • xformers 0.0.18 • gradio 3.23.0, on Windows 10.
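Until this is fixed, two workarounds may be worth trying (these are assumptions based on the error message and xformers release history, not confirmed fixes):

```shell
REM Option 1: roll back to the previous xformers release
REM (run inside the webui venv; 0.0.17 is the release before 0.0.18)
venv\Scripts\activate
pip install xformers==0.0.17

REM Option 2: in webui-user.bat, disable half precision as the
REM error message suggests (slower and uses more VRAM)
REM set COMMANDLINE_ARGS=--xformers --no-half
```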

Hope someone could fix this. Thanks!

LovElizabeth avatar Apr 05 '23 23:04 LovElizabeth