
[Bug]: RuntimeError after updating to the latest commit; downloaded 768-v-ema.ckpt and 768-v-ema.yaml and placed them in \models\Stable-diffusion

Shangooriginal opened this issue 3 years ago • 4 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

I have followed the recommended instructions (including updating to the latest commit) and the suggestion to use the original YAML file, enable developer mode (for Windows), etc., and I still cannot get the new SD v2 (768) model to work in the web UI. It keeps erroring out.

I have even removed the --xformers flag and the dark theme, and it still didn't work. I've also downloaded fresh AUTOMATIC1111 source files; still not working. I removed all other models and left just the two files in the folder; it still doesn't work.
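For context, the setup the instructions call for is a config YAML sitting next to the checkpoint with the exact same stem (768-v-ema.ckpt plus 768-v-ema.yaml). A minimal, hypothetical sanity-check sketch for that layout (the folder path is taken from this report; the script itself is not part of the webui):

```python
from pathlib import Path

# Hypothetical check: SD 2.0 checkpoints need a config YAML next to the
# .ckpt with the same stem, e.g. 768-v-ema.ckpt + 768-v-ema.yaml.
models_dir = Path(r"C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion"
                  r"\stable-diffusion-webui\models\Stable-diffusion")

for ckpt in models_dir.glob("*.ckpt"):
    cfg = ckpt.with_suffix(".yaml")
    print(ckpt.name, "->", "config found" if cfg.exists() else "MISSING config")
```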

Steps to reproduce the problem

Loading weights [XXXXXX] from C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\models\Stable-diffusion\768-v-ema.ckpt
Traceback (most recent call last):
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\routes.py", line 284, in run_predict
    output = await app.blocks.process_api(
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 982, in process_api
    result = await self.call_function(fn_index, inputs, iterator)
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\gradio\blocks.py", line 824, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\ui.py", line 1664, in <lambda>
    fn=lambda value, k=k: run_settings_single(value, key=k),
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\ui.py", line 1505, in run_settings_single
    if not opts.set(key, value):
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\shared.py", line 476, in set
    self.data_labels[key].onchange()
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\webui.py", line 45, in f
    res = func(*args, **kwargs)
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\webui.py", line 87, in <lambda>
    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 302, in reload_model_weights
    load_model_weights(sd_model, checkpoint_info)
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\modules\sd_models.py", line 192, in load_model_weights
    model.load_state_dict(sd, strict=False)
  File "C:\Users\XXXXX\Documents\WebUI-Stable-Diffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
    size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.input_blocks.1.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.2.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.input_blocks.2.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.input_blocks.2.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.input_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.input_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.input_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.input_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.input_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.input_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.input_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.input_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.middle_block.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.middle_block.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.middle_block.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.3.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.output_blocks.3.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.output_blocks.3.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.4.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.output_blocks.4.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.output_blocks.4.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.5.1.proj_in.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.output_blocks.5.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([1280, 1024]) from checkpoint, the shape in current model is torch.Size([1280, 768]).
    size mismatch for model.diffusion_model.output_blocks.5.1.proj_out.weight: copying a param with shape torch.Size([1280, 1280]) from checkpoint, the shape in current model is torch.Size([1280, 1280, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.6.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.output_blocks.6.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.output_blocks.6.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.7.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.output_blocks.7.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.output_blocks.7.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.8.1.proj_in.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.output_blocks.8.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([640, 1024]) from checkpoint, the shape in current model is torch.Size([640, 768]).
    size mismatch for model.diffusion_model.output_blocks.8.1.proj_out.weight: copying a param with shape torch.Size([640, 640]) from checkpoint, the shape in current model is torch.Size([640, 640, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.9.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.output_blocks.9.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.output_blocks.9.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.10.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.output_blocks.10.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.output_blocks.10.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.11.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.output_blocks.11.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.output_blocks.11.1.proj_out.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
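For reference: the repeated [*, 1024] vs [*, 768] mismatches on the attn2.to_k/to_v weights are the signature of an SD 2.x checkpoint (OpenCLIP text encoder, 1024-dim context) being loaded against an SD 1.x config (CLIP ViT-L, 768-dim context), which is what happens when the matching YAML isn't picked up. A hedged, illustrative sketch for inspecting a checkpoint directly; the key name is copied from the log above, the script itself is not part of the webui:

```python
import torch

# Inspect a checkpoint's cross-attention key projection to tell SD1 from SD2.
ckpt = torch.load("768-v-ema.ckpt", map_location="cpu")
sd = ckpt.get("state_dict", ckpt)

w = sd["model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight"]
ctx_dim = w.shape[1]
print("context dim:", ctx_dim)  # 768 -> SD 1.x (CLIP ViT-L), 1024 -> SD 2.x (OpenCLIP)
```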

What should have happened?

The 768 ckpt should load in the web UI and work.

Commit where the problem happens

8c8ad93bb5f

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

No response

Additional information, context and logs

No response

Shangooriginal · Nov 27 '22 13:11

same,

yamasoo · Nov 27 '22 14:11

Same here, after selecting the SD2 inpainting network. Related to #5116.

Loading weights [a1385830] from E:\aDiffusion\stable-diffusion-webui\models\Stable-diffusion\01_sfw_v2_512-inpainting-ema.ckpt
Traceback (most recent call last):
  File "E:\aDiffusion\stable-diffusion-webui\launch.py", line 273, in <module>
    start()
  File "E:\aDiffusion\stable-diffusion-webui\launch.py", line 268, in start
    webui.webui()
  File "E:\aDiffusion\stable-diffusion-webui\webui.py", line 152, in webui
    initialize()
  File "E:\aDiffusion\stable-diffusion-webui\webui.py", line 86, in initialize
    modules.sd_models.load_model()
  File "E:\aDiffusion\stable-diffusion-webui\modules\sd_models.py", line 261, in load_model
    load_model_weights(sd_model, checkpoint_info)
  File "E:\aDiffusion\stable-diffusion-webui\modules\sd_models.py", line 192, in load_model_weights
    model.load_state_dict(sd, strict=False)
  File "E:\aDiffusion\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1604, in load_state_dict
    raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format(
RuntimeError: Error(s) in loading state_dict for LatentDiffusion:
    size mismatch for model.diffusion_model.input_blocks.0.0.weight: copying a param with shape torch.Size([320, 9, 3, 3]) from checkpoint, the shape in current model is torch.Size([320, 4, 3, 3]).
    size mismatch for model.diffusion_model.input_blocks.1.1.proj_in.weight: copying a param with shape torch.Size([320, 320]) from checkpoint, the shape in current model is torch.Size([320, 320, 1, 1]).
    size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_k.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).
    size mismatch for model.diffusion_model.input_blocks.1.1.transformer_blocks.0.attn2.to_v.weight: copying a param with shape torch.Size([320, 1024]) from checkpoint, the shape in current model is torch.Size([320, 768]).

crckdns · Nov 27 '22 14:11
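For reference: the extra first mismatch in this log, input_blocks.0.0.weight expecting 9 input channels instead of 4, is characteristic of an inpainting UNet, which takes the 4 latent channels plus 4 masked-image latent channels plus a 1-channel mask and therefore needs the inpainting config rather than the standard one. A small hedged sketch of that check (illustrative only, not part of the webui):

```python
import torch

# Inpainting checkpoints have a first conv expecting 9 input channels
# (4 latent + 4 masked-image latent + 1 mask); standard models expect 4.
sd = torch.load("512-inpainting-ema.ckpt", map_location="cpu")["state_dict"]
in_ch = sd["model.diffusion_model.input_blocks.0.0.weight"].shape[1]
print("inpainting model" if in_ch == 9 else "standard model", f"({in_ch} input channels)")
```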

But it must be some local issue, as @AUTOMATIC1111 doesn't seem to have it.

However, that's a critical issue breaking a previously working state. I tried reverting to commit c33b9a6 and it didn't solve the issue.

crckdns · Nov 27 '22 16:11

Same issue here too; even running other repos gives the same issue.

Maki9009 · Nov 27 '22 16:11

glad to see that I'm not the only one having this issue

CoiloverCat · Nov 27 '22 18:11

same-ish

rethink-studios · Nov 27 '22 18:11

I actually fixed my issue: I had added the .yaml file incorrectly. When I saved and renamed it, I added .yaml at the end of a name that already ended in .yaml, which I think is what was causing the issue.

Maki9009 · Nov 27 '22 18:11
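A note on this failure mode: Windows Explorer hides known extensions by default, so a file renamed to "768-v-ema.yaml" can actually be "768-v-ema.yaml.yaml" (or ".yaml.txt") on disk, and the webui then never finds the config. A quick hypothetical check of the real on-disk names:

```python
from pathlib import Path

# List the real on-disk names; Explorer's "hide extensions" setting can
# mask a double extension like 768-v-ema.yaml.yaml or 768-v-ema.yaml.txt.
for p in Path(r".\models\Stable-diffusion").iterdir():
    print(repr(p.name), p.suffixes)  # .suffixes exposes stacked extensions
```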

I'm getting the same issue, and pulled the yaml file out just to be sure, and am STILL getting the error.

Mousewrites · Nov 27 '22 19:11

For the record, I fixed my issue: I had the Lanczos upscaler chosen in settings, and it's busted right now, so it was throwing the error when I was using img2img, because it was checking for Lanczos.

Mousewrites · Nov 27 '22 20:11

Same issue: it loaded correctly, but when I want to generate I get this:

Error completing request
Arguments: ('a dog', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 0, 0, 0, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, '

Running in txt2img mode:

Render these video formats:

', '

Animation Parameters

', '

Initial Parameters

', '

Prompt Template, applied to each keyframe below

', '

Props, Stamps

', '

Supported Keyframes:
time_s | source | video, images, img2img | path
time_s | prompt | positive_prompts | negative_prompts
time_s | template | positive_prompts | negative_prompts
time_s | prompt_from_png | file_path
time_s | transform | zoom | x_shift | y_shift | rotation
time_s | seed | new_seed_int
time_s | noise | added_noise_strength
time_s | denoise | denoise_value
time_s | set_text | textblock_name | text_prompt | x | y | w | h | fore_color | back_color | font_name
time_s | clear_text | textblock_name
time_s | prop | prop_name | prop_filename | x pos | y pos | scale | rotation
time_s | set_stamp | stamp_name | stamp_filename | x pos | y pos | scale | rotation
time_s | clear_stamp | stamp_name
time_s | col_set
time_s | col_clear
time_s | model | 768-v-ema, Anything-V3.0-pruned, Cyberpunk-Anime-Diffusion, Mdjrny-pprct_step_7000, arcane-diffusion-v3, arcane-v4 le-mien, classicAnim-v1, comic-diffusion, comic-diffusion-V2, discoElysium-pruned, discodiffusion, eldenRing-v3-pruned, eldenring-v2-pruned, f222, ghibli-diffusion-v1, mdjrny-v4, model, nitroDiffusion-v1, redshift-diffusion-v1, robo-diffusion-v1, sd-v1-5-inpainting, spiderverse-v1-pruned, superhero-diffusion, v1-5-pruned-emaonly, wulf1

', '10.0', '15', False, False, True, '1.0', '', '', '', 0.4, '0', '0', '0', '', False, 0, False, 0.1, False, False, False, False, '', 1, '', 0, '', True, False, False, '{inspiration}', None) {}
Traceback (most recent call last):
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\webui.py", line 56, in f
    res = func(*args, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 531, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 664, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 485, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 387, in launch_sampling
    return func()
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 485, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 311, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
    return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
    return self.inner_model.apply_model(x, t, cond)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
    x = layer(x)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same

YUMEORIGINS · Nov 27 '22 20:11
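For reference: this second error is a dtype mismatch rather than a shape mismatch; the sampler feeds fp16 latents into a conv whose weights are still fp32. At the time, launching with --no-half was a commonly suggested workaround for the 768-v model, though that is not shown in this log. The error itself can be reproduced outside the webui in a few lines (requires a CUDA device):

```python
import torch

# Minimal reproduction of the dtype mismatch from the traceback above.
conv = torch.nn.Conv2d(4, 320, kernel_size=3).cuda()        # fp32 weights
x = torch.randn(1, 4, 96, 96, device="cuda").half()         # fp16 input
conv(x)  # RuntimeError: Input type (torch.cuda.HalfTensor) and weight type
         # (torch.cuda.FloatTensor) should be the same
```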

Had the same issue; solved it by deleting the yaml file and downloading it again via right-click > "Save link as". On the next startup, a 4 GB file gets downloaded.

drunga86 · Nov 27 '22 21:11
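One failure mode the re-download fixes: saving the GitHub page itself instead of the raw file leaves an HTML document with a .yaml name, which the loader can't parse. A hedged validation sketch (assumes PyYAML is installed; the path is illustrative):

```python
import yaml  # PyYAML

# Verify the config is real YAML and not a saved HTML page.
path = r".\models\Stable-diffusion\768-v-ema.yaml"
with open(path, encoding="utf-8") as f:
    head = f.read(64)
if head.lstrip().lower().startswith(("<!doctype", "<html")):
    print("This is an HTML page, not a YAML config; re-download the raw file.")
else:
    with open(path, encoding="utf-8") as f:
        cfg = yaml.safe_load(f)
    print("Parsed OK; top-level keys:", list(cfg))
```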

[Quoting YUMEORIGINS's full comment from above, including the Arguments dump and the traceback ending in "RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same".]

Your issue is not the same. This bug is about the model not loading at start and erroring out; your issue comes after the model has loaded.

crckdns · Nov 27 '22 21:11

For now I've "fixed" it, it seems, this way: I removed the 2.0 inpainting model; after a restart the webui switched to the fallback model (the first in the file listing), and after that I could operate it again.

So it's really about some wrongly saved or incompatible model.

crckdns · Nov 27 '22 21:11

Your issue is not the same. This bug is about the model not loading at start and erroring out; your issue comes after the model has loaded.

Sorry, I'm such a noob at this!

YUMEORIGINS · Nov 28 '22 00:11

UPDATE: thanks to those who offered their suggestions. Today I was able to load the v2 768 model. I'm not sure why it worked today, but I did notice that AUTOMATIC1111 updated the modules folder and webui.py, so that may be the reason.

I had already tried the recommendation to "save link as" the YAML file and NOT to add .yaml when renaming, and it still did not work yesterday or before. However, maybe that recommendation helped as well. Either way, something worked, as I can now use the v2 model.

Shangooriginal · Nov 28 '22 12:11