stable-diffusion-webui-forge
[Bug] LDSR fails every time with RuntimeError
Every time I try to use LDSR (Extras → Upscale), it fails with: RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float
Full log:
Begin to load 1 model
[Memory Management] Current Free GPU Memory (MB) = 21273.337600708008
[Memory Management] Model Memory (MB) = 454.2076225280762
[Memory Management] Minimal Inference Memory (MB) = 1024.0
[Memory Management] Estimated Remaining GPU Memory (MB) = 19795.12997817993
Moving model(s) has taken 0.17 seconds
Model loaded in 12.3s (calculate hash: 0.3s, load weights from disk: 0.3s, forge load real models: 10.4s, load textual inversion embeddings: 0.9s, calculate empty prompt: 0.4s).
Cleanup all memory.
Loading model from C:\Users\dsent\AI\Forge\webui\models\LDSR\model.ckpt
LatentDiffusionV1: Running in eps-prediction mode
Keeping EMAs of 308.
Down sample rate is 1 from 4 / 4 (Not downsampling)
Plotting: Switched to EMA weights
Sampling with eta = 1.0; steps: 100
Data shape for DDIM sampling is (1, 3, 576, 832), eta 1.0
Running DDIM Sampling with 100 timesteps
DDIM Sampler: 0%| | 0/100 [00:00<?, ?it/s]
Plotting: Restored training weights
*** Error completing request
*** Arguments: ('task(i6agrg8rc70p9rb)', 0, <PIL.Image.Image image mode=RGBA size=828x524 at 0x1AA458D5A50>, None, '', '', True, True, 0, 4, 0, 512, 512, True, 'LDSR', 'None', 0, False, 1, False, 1, 0, False, 0.5, 0.2, False, 0.9, 0.15, 0.5, False, False, 384, 768, 4096, 409600, 'Maximize area', 0.1, False, ['Horizontal'], False, ['Deepbooru']) {}
Traceback (most recent call last):
File "C:\Users\dsent\AI\Forge\webui\modules\call_queue.py", line 74, in f
res = list(func(*args, **kwargs))
File "C:\Users\dsent\AI\Forge\webui\modules\call_queue.py", line 53, in f
res = func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\modules\call_queue.py", line 37, in f
res = func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\modules\postprocessing.py", line 136, in run_postprocessing_webui
return run_postprocessing(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\modules\postprocessing.py", line 76, in run_postprocessing
scripts.scripts_postproc.run(initial_pp, args)
File "C:\Users\dsent\AI\Forge\webui\modules\scripts_postprocessing.py", line 198, in run
script.process(single_image, **process_args)
File "C:\Users\dsent\AI\Forge\webui\scripts\postprocessing_upscale.py", line 152, in process
upscaled_image = self.upscale(pp.image, pp.info, upscaler1, upscale_mode, upscale_by, max_side_length, upscale_to_width, upscale_to_height, upscale_crop)
File "C:\Users\dsent\AI\Forge\webui\scripts\postprocessing_upscale.py", line 107, in upscale
image = upscaler.scaler.upscale(image, upscale_by, upscaler.data_path)
File "C:\Users\dsent\AI\Forge\webui\modules\upscaler.py", line 68, in upscale
img = self.do_upscale(img, selected_model)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\scripts\ldsr_model.py", line 60, in do_upscale
return ldsr.super_resolution(img, ddim_steps, self.scale)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 137, in super_resolution
logs = self.run(model["model"], im_padded, diffusion_steps, eta)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 96, in run
logs = make_convolutional_sample(example, model,
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 228, in make_convolutional_sample
sample, intermediates = convsample_ddim(model, c, steps=custom_steps, shape=z.shape,
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\ldsr_model_arch.py", line 184, in convsample_ddim
samples, intermediates = ddim.sample(steps, batch_size=bs, shape=shape, conditioning=cond, callback=callback,
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 104, in sample
samples, intermediates = self.ddim_sampling(conditioning, size,
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 164, in ddim_sampling
outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddim.py", line 189, in p_sample_ddim
model_output = self.model.apply_model(x, t, c)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in apply_model
output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 964, in <listcomp>
output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])]
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\extensions-builtin\LDSR\sd_hijack_ddpm_v1.py", line 1400, in forward
out = self.diffusion_model(xc, t)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\modules\sd_unet.py", line 84, in UNetModel_forward
return original_forward(self, x, timesteps, context, *args, **kwargs)
File "C:\Users\dsent\AI\Forge\webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 789, in forward
emb = self.time_embed(t_emb)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\container.py", line 215, in forward
input = module(input)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
File "c:\Users\dsent\AI\Forge\system\python\lib\site-packages\torch\nn\modules\linear.py", line 114, in forward
return F.linear(input, self.weight, self.bias)
RuntimeError: mat1 and mat2 must have the same dtype, but got Half and Float
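The failure point in the traceback is `F.linear` inside the UNet's `time_embed` MLP, so the likely mechanism is a half-precision (float16) activation being fed into a Linear layer whose weights were left in float32. This is a minimal sketch of that mismatch and of one possible workaround (casting the input to the layer's weight dtype); it is an assumption about the cause, not a confirmed fix for the LDSR code path:

```python
import torch

# A Linear layer created normally keeps float32 weights,
# mirroring the LDSR time_embed layer in the traceback.
linear = torch.nn.Linear(4, 4)

# A float16 input, mirroring a half-precision activation.
x = torch.randn(1, 4, dtype=torch.float16)

# This reproduces the crash: mixed Half/Float matmul raises RuntimeError.
try:
    linear(x)
except RuntimeError as err:
    print("reproduced:", err)

# Hypothetical workaround: cast the input to the weight dtype first.
out = linear(x.to(linear.weight.dtype))
print(out.dtype)
```

If this assumption holds, a fix in Forge would amount to casting the LDSR model (or its inputs) to a single dtype before sampling; until then, a user-side workaround may be forcing full precision (e.g. launching with `--no-half`).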