
[Bug]: RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same (RTX 2080Ti)

Open praytheomnissiah opened this issue 2 years ago • 10 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

The generation process crashes upon finishing.

Steps to reproduce the problem

  1. git pull

  2. pip install -r requirements.txt

  3. Files installed manually:

user@linuxuser:~/stable-diffusion-webui$ find ./models -name 768*
./models/Stable-diffusion/768-v-ema.yaml
./models/Stable-diffusion/768-v-ema.ckpt

  4. Launch ./webui.sh without arguments.

  5. Enter a prompt, set options, and hit 'Generate'.

  6. The error appears after the progress bar completes.

What should have happened?

A photo of a dog should have appeared in ./output/txt2img-images/.

Commit where the problem happens

Commit hash: 9e78d2c419732711e984c4478af15ece121d64fd

What platforms do you use to access the UI?

Linux

What browsers do you use to access the UI?

Mozilla Firefox

Command Line Arguments

No arguments on command line.

Additional information, context and logs

OS version: Ubuntu 22.04.1 LTS

Output log:

linuxuser@computer:~/stable-diffusion-webui$ ./webui.sh 

################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################

################################################################
Running on linuxuser user
################################################################

################################################################
Repo already cloned, using it as install directory
################################################################

################################################################
Create and activate python venv
################################################################

################################################################
Launching launch.py...
################################################################
Python 3.8.13 (default, Mar 28 2022, 11:38:47) 
[GCC 7.5.0]
Commit hash: 6074175faa751dde933aa8e15cd687ca4e4b4a23
Installing requirements for Web UI
Installing requirements for Dreambooth
Checking Dreambooth requirements.
Dreambooth revision is c589a3596ade64228de8a7851f50c2470c7a76aa
Args: ['extensions/sd_dreambooth_extension/install.py']
[*] Diffusers version is 0.7.2.
[*] Torch version is 1.12.1+cu113.
[*] Torch vision version is 0.13.1+cu113.
[*] Transformers version is 4.21.0.
[*] Xformers


Launching Web UI with arguments: 
WARNING:root:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
Patching transformers to fix kwargs errors.
Dreambooth API layer loaded
Loading config from: /home/linuxuser/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.yaml
LatentDiffusion: Running in v-prediction mode
DiffusionWrapper has 865.91 M params.
Loading weights [2c02b20a] from /home/linuxuser/stable-diffusion-webui/models/Stable-diffusion/768-v-ema.ckpt
Global Step: 140000
Applying cross attention optimization (Doggettx).
Model loaded.
Loaded a total of 0 textual inversion embeddings.
Embeddings: 
Running on local URL:  http://127.0.0.1:7860

To create a public link, set `share=True` in `launch()`.
100%|███████████████████████████████████████████| 20/20 [00:06<00:00,  3.04it/s]
Error completing request
Arguments: ('A photo of a dog', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 0, 0, 0, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
  File "/home/linuxuser/stable-diffusion-webui/modules/ui.py", line 169, in f
    res = list(func(*args, **kwargs))
  File "/home/linuxuser/stable-diffusion-webui/webui.py", line 58, in f
    res = func(*args, **kwargs)
  File "/home/linuxuser/stable-diffusion-webui/modules/txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "/home/linuxuser/stable-diffusion-webui/modules/processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "/home/linuxuser/stable-diffusion-webui/modules/processing.py", line 534, in process_images_inner
    x_samples_ddim = decode_first_stage(p.sd_model, samples_ddim)
  File "/home/linuxuser/stable-diffusion-webui/modules/processing.py", line 368, in decode_first_stage
    x = model.decode_first_stage(x)
  File "/home/linuxuser/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/linuxuser/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/diffusion/ddpm.py", line 826, in decode_first_stage
    return self.first_stage_model.decode(z)
  File "/home/linuxuser/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/models/autoencoder.py", line 90, in decode
    dec = self.decoder(z)
  File "/home/linuxuser/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/linuxuser/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 631, in forward
    h = self.mid.attn_1(h)
  File "/home/linuxuser/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/linuxuser/stable-diffusion-webui/repositories/stable-diffusion-stability-ai/ldm/modules/diffusionmodules/model.py", line 267, in forward
    out = self.proj_out(out)
  File "/home/linuxuser/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/linuxuser/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/linuxuser/stable-diffusion-webui/venv/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
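
For context, the message means a float32 tensor reached a layer whose weights are float16. A minimal standalone sketch (hypothetical shapes, not webui code) that reproduces and then avoids the same error:

# Standalone sketch of the failure mode (hypothetical shapes, not webui code).
import torch

decoder = torch.nn.Conv2d(4, 3, 3, padding=1).cuda().half()  # fp16 weights, standing in for the half-precision VAE
latents = torch.randn(1, 4, 96, 96, device="cuda")           # fp32 input

try:
    decoder(latents)
except RuntimeError as e:
    # Input type (torch.cuda.FloatTensor) and weight type (torch.cuda.HalfTensor) should be the same
    print(e)

out = decoder(latents.half())  # casting the input to match the weights avoids the error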

praytheomnissiah avatar Nov 27 '22 12:11 praytheomnissiah

same here

shangdibufashi avatar Nov 27 '22 13:11 shangdibufashi

I think I have the same issue. I already posted this bug in another thread, but I hope this is the right one for it:

Error completing request
Arguments: ('a dog', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 768, 768, False, 0.7, 0, 0, 0, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, 0.9, 5, '0.0001', False, 'None', '', 0.1, False, '

Running in txt2img mode:

Render these video formats:

', '

Animation Parameters

', '

Initial Parameters

', '

Prompt Template, applied to each keyframe below

', '

Props, Stamps

', '

Supported Keyframes:
time_s | source | video, images, img2img | path
time_s | prompt | positive_prompts | negative_prompts
time_s | template | positive_prompts | negative_prompts
time_s | prompt_from_png | file_path
time_s | transform | zoom | x_shift | y_shift | rotation
time_s | seed | new_seed_int
time_s | noise | added_noise_strength
time_s | denoise | denoise_value
time_s | set_text | textblock_name | text_prompt | x | y | w | h | fore_color | back_color | font_name
time_s | clear_text | textblock_name
time_s | prop | prop_name | prop_filename | x pos | y pos | scale | rotation
time_s | set_stamp | stamp_name | stamp_filename | x pos | y pos | scale | rotation
time_s | clear_stamp | stamp_name
time_s | col_set
time_s | col_clear
time_s | model | 768-v-ema, Anything-V3.0-pruned, Cyberpunk-Anime-Diffusion, Mdjrny-pprct_step_7000, arcane-diffusion-v3, arcane-v4 le-mien, classicAnim-v1, comic-diffusion, comic-diffusion-V2, discoElysium-pruned, discodiffusion, eldenRing-v3-pruned, eldenring-v2-pruned, f222, ghibli-diffusion-v1, mdjrny-v4, model, nitroDiffusion-v1, redshift-diffusion-v1, robo-diffusion-v1, sd-v1-5-inpainting, spiderverse-v1-pruned, superhero-diffusion, v1-5-pruned-emaonly, wulf1

', '10.0', '15', False, False, True, '1.0', '', '', '', 0.4, '0', '0', '0', '', False, 0, False, 0.1, False, False, False, False, '', 1, '', 0, '', True, False, False, '{inspiration}', None) {}
Traceback (most recent call last):
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\ui.py", line 185, in f
    res = list(func(*args, **kwargs))
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\webui.py", line 56, in f
    res = func(*args, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\txt2img.py", line 49, in txt2img
    processed = process_images(p)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 430, in process_images
    res = process_images_inner(p)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 531, in process_images_inner
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\processing.py", line 664, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 485, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 387, in launch_sampling
    return func()
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 485, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\modules\sd_samplers.py", line 311, in forward
    x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond={"c_crossattn": [cond_in[a:b]], "c_concat": [image_cond_in[a:b]]})
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 167, in forward
    return self.get_v(input * c_in, self.sigma_to_t(sigma), **kwargs) * c_out + input * c_skip
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\k-diffusion\k_diffusion\external.py", line 177, in get_v
    return self.inner_model.apply_model(x, t, cond)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1148, in _call_impl
    result = forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1329, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
    x = layer(x)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "G:\SUPERSTABLEDIFFUSION\SUPER STABLE DIFFUSION\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.cuda.FloatTensor) should be the same
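
Note that the dtypes here are reversed relative to the original report: an fp16 input hitting fp32 weights, this time inside the UNet rather than the VAE decode. A generic guard, shown only as an illustration, is to cast the input to whatever dtype the layer's weights use:

# Illustration only: cast the input to the weights' dtype before the call.
import torch

conv = torch.nn.Conv2d(4, 4, 3, padding=1).cuda()    # fp32 weights this time
x = torch.randn(1, 4, 96, 96, device="cuda").half()  # fp16 input

y = conv(x.to(conv.weight.dtype))  # input cast up to fp32; no RuntimeError
print(y.dtype)                     # torch.float32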

YUMEORIGINS avatar Nov 28 '22 01:11 YUMEORIGINS

Still happening with hash 0b5dcb3d7ce397ad38312dbfc70febe7bb42dcc3

I'm wondering whether reinstalling might solve the problem; has anyone tried it?

praytheomnissiah avatar Nov 28 '22 09:11 praytheomnissiah

I did, on my Linux workstation.

CounterFlow64 avatar Nov 29 '22 06:11 CounterFlow64

@SomeOrdinaryDude was your problem solved by reinstalling?

shangdibufashi avatar Dec 02 '22 04:12 shangdibufashi

Having the same fatal failure with an RTX 3090 on Ubuntu Linux. A fresh, cleanly cloned SD directory does NOT fix the failure, and the model .ckpt does not matter.

git bisect shows the regression happened when SD 2.0 was added.

Bisecting: 0 revisions left to test after this (roughly 0 steps)
[ce6911158b5b2f9cf79b405a1f368f875492044d] Add support Stable Diffusion 2.0

spworley avatar Dec 02 '22 04:12 spworley

@shangdibufashi

No.

CounterFlow64 avatar Dec 02 '22 09:12 CounterFlow64

Same error when I used GFPGAN.

wh9204 avatar Dec 03 '22 15:12 wh9204

Still not working on hash 44c46f0ed395967cd3830dd481a2db759fda5b3b

praytheomnissiah avatar Dec 08 '22 11:12 praytheomnissiah

I was able to fix this by forcing the devices.autocast function to run via a one-line change in modules/processing.py (if you're not familiar with git diffs, see the lines prefixed with - and + for the code change):

diff --git a/modules/processing.py b/modules/processing.py
index 24c537d..efdfcb9 100644
--- a/modules/processing.py
+++ b/modules/processing.py
@@ -396,7 +396,7 @@ def create_random_tensors(shape, seeds, subseeds=None, subseed_strength=0.0, see


 def decode_first_stage(model, x):
-    with devices.autocast(disable=x.dtype == devices.dtype_vae):
+    with devices.autocast():
         x = model.decode_first_stage(x)

     return x
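
For anyone wondering why forcing the context helps: when half precision is in use, devices.autocast wraps torch.autocast (if I'm reading modules/devices.py correctly), and torch.autocast casts the inputs of eligible ops like conv2d to the autocast dtype, so a mixed fp32/fp16 call stops failing. A minimal sketch, assuming a CUDA device:

# Minimal sketch, assuming a CUDA device: autocast reconciles mixed dtypes.
import torch

conv = torch.nn.Conv2d(4, 3, 3, padding=1).cuda().half()  # fp16 weights, as in the failing VAE decode
x = torch.randn(1, 4, 96, 96, device="cuda")              # fp32 input

with torch.autocast("cuda"):
    y = conv(x)   # autocast casts x to fp16 for the conv instead of raising
print(y.dtype)    # torch.float16

The removed disable=x.dtype == devices.dtype_vae condition skipped autocast whenever the latents already matched the VAE dtype, which evidently leaves some mismatched combination uncovered; always entering the context sidesteps that, at the cost of running autocast even when it isn't needed.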

My environment:

[ec2-user@ip-172-31-1-148 stable-diffusion-webui]$ python3.8 -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.12.1+cu113
Is debug build: False
CUDA used to build PyTorch: 11.3
ROCM used to build PyTorch: N/A

OS: Amazon Linux 2 (x86_64)
GCC version: (GCC) 7.3.1 20180712 (Red Hat 7.3.1-15)
Clang version: Could not collect
CMake version: version 2.8.12.2
Libc version: glibc-2.26

Python version: 3.8.15 (default, Nov 16 2022, 23:39:29)  [GCC 7.3.1 20180712 (Red Hat 7.3.1-15)] (64-bit runtime)
Python platform: Linux-4.14.296-222.539.amzn2.x86_64-x86_64-with-glibc2.2.5
Is CUDA available: True
CUDA runtime version: 11.7.99
GPU models and configuration: GPU 0: NVIDIA A10G
Nvidia driver version: 515.65.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

Versions of relevant libraries:
[pip3] mypy-extensions==0.4.3
[pip3] numpy==1.23.3
[pip3] open-clip-torch==2.7.0
[pip3] pytorch-lightning==1.7.6
[pip3] torch==1.12.1+cu113
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==0.11.0
[pip3] torchsde==0.2.5
[pip3] torchvision==0.13.1+cu113
[conda] No relevant packages

If I were sure this wouldn't degrade performance for people who don't have this problem, I'd be happy to open a PR; contributors, feel free to offer guidance!

cameron avatar Dec 12 '22 20:12 cameron