Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
I have followed the recommended instructions (including using the latest commit) and the suggestion to use the original YAML file, switching to developer mode (for Windows), etc., and I still cannot get the new SD v2 (768) model to work in the web UI. It keeps erroring out.
I have even removed the --xformers flag and the dark theme; it still didn't work. I also downloaded a fresh copy of the AUTOMATIC1111 source files, removed all other models, and left just the two files in the folder, and it still doesn't work.
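For reference, here is a minimal sanity check of the layout I used, assuming the recommended setup of copying the v2 inference YAML next to the checkpoint with a matching name (paths are illustrative, not my actual install):

```python
from pathlib import Path

# Illustrative check only; adjust model_dir to your install.
# Assumed layout: 768-v-ema.ckpt and a matching 768-v-ema.yaml
# (a copy of v2-inference-v.yaml) side by side in models/Stable-diffusion.
model_dir = Path("models") / "Stable-diffusion"
ckpt = model_dir / "768-v-ema.ckpt"
cfg = ckpt.with_suffix(".yaml")

print("checkpoint present:", ckpt.exists())
print("matching config present:", cfg.exists())  # if missing, the webui likely falls back to the SD 1.x config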
Steps to reproduce the problem
Loading weights [XXXXXX] from C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.ckpt
Traceback (most recent call last):
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
output = await app.blocks.process_api(
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
result = await self.call_function(fn_index, inputs, iterator)
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
prediction = await anyio.to_thread.run_sync(
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
return await get_asynclib().run_sync_in_worker_thread(
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 937, in run_sync_in_worker_thread
return await future
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio_backends_asyncio.py", line 867, in run
result = context.run(func, *args)
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\ui.py", line 1664, in
fn=lambda value, k=k: run_settings_single(value, key=k),
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\ui.py", line 1505, in run_settings_single
if not opts.set(key, value):
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\shared.py", line 476, in set
self.data_labels[key].onchange()
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\webui.py", line 45, in f
res = func(*args, **kwargs)
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\webui.py", line 87, in
shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 302, in reload_model_weights
load_model_weights(sd_model, checkpoint_info)
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 192, in load_model_weights
model.load_state_dict(sd, strict=False)
File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.6.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.output_blocks.6.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.output_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
size mismatch for model.diffusion_model.output_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.9.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.output_blocks.9.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.10.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.output_blocks.10.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.11.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
size mismatch for model.diffusion_model.output_blocks.11.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
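The 1024-vs-768 mismatches above are what you get when an SD 2.x checkpoint (OpenCLIP, 1024-dim text context) is loaded against an SD 1.x model config (CLIP, 768-dim context). A hedged diagnostic sketch, using a key name taken directly from the error above (the path is illustrative):

```python
import torch

# Illustrative only: inspect the checkpoint to see which text-encoder width it expects.
ckpt_path = "models/Stable-diffusion/768-v-ema.ckpt"  # adjust to your path
sd = torch.load(ckpt_path, map_location="cpu")
sd = sd.get("state_dict", sd)

key = "model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"
print(sd[key].shape)  # [320, 1024] -> SD 2.x checkpoint; [320, 768] -> SD 1.x
```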
What should have happened?
The 768 ckpt should load in the web UI and work.
Commit where the problem happens
8c8ad93bb5f
What platforms do you use to access the UI?
Windows
What browsers do you use to access the UI?
Google Chrome
Command Line Arguments
No response
Additional information, context and logs
No response
Same issue: it loaded correctly, but when I want to generate, I get this:
Error completing request
Arguments: ('a dog', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 0, 0, 0, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, '
Running in txt2img mode:
Render these video formats:
', '
Animation Parameters
', '
Initial Parameters
', '
Prompt Template, applied to each keyframe below
', '
Props, Stamps
', '
Supported Keyframes:
time_s | source | video, images, img2img | path
time_s | prompt | positive_prompts | negative_prompts
time_s | template | positive_prompts | negative_prompts
time_s | prompt_from_png | file_path
time_s | transform | zoom | x_shift | y_shift | rotation
time_s | seed | new_seed_int
time_s | noise | added_noise_strength
time_s | denoise | denoise_value
time_s | set_text | textblock_name | text_prompt | x | y | w | h | fore_color | back_color | font_name
time_s | clear_text | textblock_name
time_s | prop | prop_name | prop_filename | x pos | y pos | scale | rotation
time_s | set_stamp | stamp_name | stamp_filename | x pos | y pos | scale | rotation
time_s | clear_stamp | stamp_name
time_s | col_set
time_s | col_clear
time_s | model | 768-v-ema, Anything-V3.0-pruned, Cyberpunk-Anime-Diffusion, Mdjrny-pprct_step_7000, arcane-diffusion-v3, arcane-v4 le-mien, classicAnim-v1, comic-diffusion, comic-diffusion-V2, discoElysium-pruned, discodiffusion, eldenRing-v3-pruned, eldenring-v2-pruned, f222, ghibli-diffusion-v1, mdjrny-v4, model, nitroDiffusion-v1, redshift-diffusion-v1, robo-diffusion-v1, sd-v1-5-inpainting, spiderverse-v1-pruned, superhero-diffusion, v1-5-pruned-emaonly, wulf1
', '10.0', '15', False, False, True, '1.0', '', '', '', 0.4, '0', '0', '0', '', False, 0, False, 0.1, False, False, False, False, '', 1, '', 0, '', True, False, False, '{inspiration}', None) {}
Traceback (most recent call last):
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\ui.py", line 185, in f
res = list(func(*args, **kwargs))
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\webui.py", line 56, in f
res = func(*args, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\txt2img.py", line 49, in txt2img
processed = process_images(p)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 430, in process_images
res = process_images_inner(p)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 531, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 664, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 485, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 387, in launch_sampling
return func()
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 485, in
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 311, in forward
x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
return self.inner_model.apply_model(x, t, cond)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1148, in _call_impl
result = forward_call(*input, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
out = self.diffusion_model(x, t, context=cc)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
h = module(h, emb, context)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
x = layer(x)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
return self._conv_forward(input, self.weight, self.bias)
File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
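This second error is a plain dtype mismatch: the sampler feeds a float16 (half) input into a model whose weights are still float32. A minimal reproduction in PyTorch, assuming a CUDA device is available (keeping input and weights in the same dtype, or launching the webui without half precision, e.g. with --no-half, avoids it):

```python
import torch
import torch.nn as nn

# Minimal sketch of the mismatch: a float32 conv layer fed a float16 input
# raises the same RuntimeError as in the traceback above.
if torch.cuda.is_available():
    conv = nn.Conv2d(4, 4, 3).cuda()                       # weights stay float32
    x = torch.randn(1, 4, 64, 64, device="cuda").half()    # input is float16
    try:
        conv(x)
    except RuntimeError as e:
        print(e)  # "Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) ..."

    out = conv(x.float())  # casting the input to match the weights avoids the error
    print(out.dtype)       # torch.float32
```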
Your issue is not the same. This bug is about the model failing to load at startup and erroring out; your issue occurs after the model has loaded.