[Bug]: Cannot render or upscale images above 512x512 on 1.1.0
### Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
### What happened?

txt2img, Hires fix, and SD Upscale all throw the errors below when rendering an image above 512x512:
```
Error completing request
Arguments: ('task(r9rrr409jrpiyo5)', 'test pattern', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "F:\automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 669, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 961, in sample
samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 350, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
return func()
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 350, in
```

(Traceback truncated here; the full traceback appears in the console logs below.)
The above ONLY happens when attempting to upscale past 512x512. For example, upscaling an image from 256x256 to 512x512 works, but even 1px above 512 produces the errors above. This is observed with both SD Upscale and Hires fix; neither will upscale past 512x512.
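For context on why 512 might be the exact threshold: in SD 1.x the U-Net's self-attention runs over the latent, which is the image size divided by 8 per side, so the attention sequence length grows quadratically with resolution. The sketch below is plain arithmetic, not webui code; the idea that DirectML's `scaled_dot_product_attention` kernel hits a limit just above 4096 tokens is speculation on my part, not a confirmed cause.

```python
# Attention sequence length at the U-Net's highest-resolution block:
# the SD 1.x VAE downscales by 8, so a WxH image gives a (W/8)x(H/8)
# latent, and self-attention flattens that to (W/8)*(H/8) tokens.

def attn_tokens(width: int, height: int, vae_factor: int = 8) -> int:
    """Number of self-attention tokens for a given image size (SD 1.x)."""
    return (width // vae_factor) * (height // vae_factor)

for w, h in [(512, 512), (520, 520), (1024, 1024)]:
    print(f"{w}x{h} -> {attn_tokens(w, h)} tokens")
# 512x512 -> 4096 tokens
# 520x520 -> 4225 tokens
# 1024x1024 -> 16384 tokens
```

Everything at or below 512x512 stays at 4096 tokens or fewer, and anything above crosses that line, which matches the reported cutoff exactly.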
### Steps to reproduce the problem
- Go to txt2img and set a target resolution above 512x512 (e.g. enable Hires fix)
- Press Generate
- The image generates, but as soon as upscaling starts, it errors with `RuntimeError: The parameter is incorrect.`
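If it helps narrow things down, the manual steps above can be automated as a bisection over target sizes (the webui snaps dimensions to multiples of 8). `renders_ok` here is a hypothetical predicate you would implement by attempting a render at each size; the sketch also assumes failures are monotonic above a single threshold, which the later comments suggest may not fully hold.

```python
def find_failure_threshold(renders_ok, lo=512, hi=2048, step=8):
    """Binary-search the smallest size (a multiple of `step`) where
    rendering fails.

    `renders_ok(size)` is a hypothetical callback: returns True if a
    size x size render succeeds. Assumes `lo` succeeds, `hi` fails,
    and failures are monotonic above some threshold.
    """
    lo //= step
    hi //= step
    while lo + 1 < hi:
        mid = (lo + hi) // 2
        if renders_ok(mid * step):
            lo = mid
        else:
            hi = mid
    return hi * step  # first failing size

# Example with a mock predicate reproducing the reported behavior
# (everything above 512 fails):
print(find_failure_threshold(lambda s: s <= 512))  # -> 520
```

With about eight test renders this pins the threshold to the exact multiple of 8, which would tell us whether the cutoff is really 512 or something nearby.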
### What should have happened?
It should just upscale the image after it is finished generating.
### Commit where the problem happens
https://github.com/AUTOMATIC1111/stable-diffusion-webui/commit/6d08363f4320cc3699863fb6bccf8a6a15b3188b
### What platforms do you use to access the UI?
Windows
### What browsers do you use to access the UI?
Mozilla Firefox
### Command Line Arguments

Tested with a brand-new install and no command-line arguments; same issue.
### List of extensions

N/A; also tested on a fresh install.
### Console logs

```
venv "F:\automatic1111\stable-diffusion-webui-directml\venv\Scripts\Python.exe"
Installing DirectML
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Commit hash: 6d08363f4320cc3699863fb6bccf8a6a15b3188b
Installing requirements
Launching Web UI with arguments:
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [6ce0161689] from F:\automatic1111\stable-diffusion-webui-directml\models\Stable-diffusion\v1-5-pruned-emaonly.safetensors
Creating model from config: F:\automatic1111\stable-diffusion-webui-directml\configs\v1-inference.yaml
LatentDiffusion: Running in eps-prediction mode
DiffusionWrapper has 859.52 M params.
Applying scaled dot product cross attention optimization.
Textual inversion embeddings loaded(0):
Model loaded in 1.4s (load weights from disk: 0.2s, create model: 0.3s, apply weights to model: 0.5s, apply half(): 0.4s).
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 8.2s (import torch: 2.1s, import gradio: 1.2s, import ldm: 0.5s, other imports: 1.0s, load scripts: 1.0s, load SD checkpoint: 1.7s, create ui: 0.6s).
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:21<00:00, 1.10s/it]
0%| | 0/20 [00:00<?, ?it/s]
Error completing request
Arguments: ('task(rkw9m7crwmuxjrv)', 'test pattern', '', [], 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, True, 0.7, 2, 'Latent', 0, 0, 0, [], 0, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0) {}
Traceback (most recent call last):
File "F:\automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\txt2img.py", line 56, in txt2img
processed = process_images(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 515, in process_images
res = process_images_inner(p)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 669, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\processing.py", line 961, in sample
samples = self.sampler.sample_img2img(self, samples, noise, conditioning, unconditional_conditioning, steps=self.hr_second_pass_steps or self.steps, image_conditioning=image_conditioning)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 350, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 251, in launch_sampling
return func()
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 350, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\sampling.py", line 153, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_samplers_kdiffusion.py", line 135, in forward
x_out = self.inner_model(x_in, sigma_in, cond=make_condition_dict([cond_in], image_cond_in))
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 17, in <lambda>
setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_hijack_utils.py", line 26, in __call__
return self.__sub_func(self.__orig_func, *args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_hijack_unet.py", line 45, in apply_model
return orig_func(self, x_noisy.to(devices.dtype_unet), t.to(devices.dtype_unet), cond, **kwargs).float()
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
result = forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
out = self.diffusion_model(x, t, context=cc)
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
h = module(h, emb, context)
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 84, in forward
x = layer(x, context)
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 334, in forward
x = block(x, context=context[i])
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 269, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 121, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\autograd\function.py", line 506, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\util.py", line 136, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "F:\automatic1111\stable-diffusion-webui-directml\repositories\stable-diffusion-stability-ai\ldm\modules\attention.py", line 272, in _forward
x = self.attn1(self.norm1(x), context=context if self.disable_self_attn else None) + x
File "F:\automatic1111\stable-diffusion-webui-directml\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
return forward_call(*args, **kwargs)
File "F:\automatic1111\stable-diffusion-webui-directml\modules\sd_hijack_optimizations.py", line 390, in scaled_dot_product_attention_forward
hidden_states = torch.nn.functional.scaled_dot_product_attention(
RuntimeError: The parameter is incorrect.
```
### Additional information
- Tested on an install that was simply updated, as well as on a fresh install.
- It also cannot render images above 512x512 at all, even without SD Upscale or Hires fix; the same error occurs.
The latest commit seems to have fixed it for the most part. It still struggles specifically with 1024 for some reason: I can upscale 1.9x to 998, or to seemingly any resolution over 1024, and I accidentally ran SD Upscale at 4x and it did not error. I'll leave this open, though, since the problem is still somewhat active and the behavior is really odd.
I still have the same issue. I was able to render 1560x1240 images with txt2img before; now I can't even do 1024x1024 or above.
I have 11 GB of VRAM, by the way.
It looks like you are using DirectML; it would be more appropriate to file an issue with the corresponding repository.
Just FYI: I removed /venv and re-ran, and then it worked.
This is likely a DirectML-specific issue, so closing.