
[Bug]: Input type error running textual inversion with Inpainting model

Open — willhsmit opened this issue 3 years ago

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

Created a TI embedding with the sd-v1-5-inpainting model loaded, then tried training it on a dataset that runs fine against a non-inpainting model.

After 'Preparing dataset' completes, the first training step fails with the error "Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same". Full stack trace below.

Running on a git pull from c4a221c405401eb7b3a6ade3de1175dfb346126c, with the --xformers flag; training with the same settings on the non-inpainting 1.5 model works fine on my install.

Traceback (most recent call last):
  File "C:\Users\willh\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 384, in train_embedding
    img_c = processing.txt2img_image_conditioning(shared.sd_model, c, training_width, training_height)
  File "C:\Users\willh\stable-diffusion-webui\modules\processing.py", line 88, in txt2img_image_conditioning
    image_conditioning = sd_model.get_first_stage_encoding(sd_model.encode_first_stage(image_conditioning))
  File "C:\Users\willh\stable-diffusion-webui\venv\lib\site-packages\torch\autograd\grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "C:\Users\willh\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 830, in encode_first_stage
    return self.first_stage_model.encode(x)
  File "C:\Users\willh\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\models\autoencoder.py", line 83, in encode
    h = self.encoder(x)
  File "C:\Users\willh\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\willh\stable-diffusion-webui\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\model.py", line 523, in forward
    hs = [self.conv_in(x)]
  File "C:\Users\willh\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "C:\Users\willh\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 457, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "C:\Users\willh\stable-diffusion-webui\venv\lib\site-packages\torch\nn\modules\conv.py", line 453, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (torch.cuda.HalfTensor) and weight type (torch.HalfTensor) should be the same
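For context, the error reads as a device mismatch: `torch.cuda.HalfTensor` is a half-precision tensor on the GPU, while plain `torch.HalfTensor` is on the CPU, so the conditioning image reaching `F.conv2d` is on CUDA while the first-stage encoder's weights are not. The sketch below is an assumption about the failure class, not the webui's actual code; `ensure_same_device` is a hypothetical helper for illustration only.

```python
import torch
import torch.nn as nn

def ensure_same_device(module: nn.Module, x: torch.Tensor) -> nn.Module:
    """Hypothetical helper: move the module's weights to the input's device."""
    weight_device = next(module.parameters()).device
    if weight_device != x.device:
        # Without this, conv(x) raises the "Input type ... and weight type ...
        # should be the same" RuntimeError when x is on CUDA and weights are not.
        module = module.to(x.device)
    return module

device = "cuda" if torch.cuda.is_available() else "cpu"
# float32 here so the sketch also runs on CPU; the report itself hits this in half precision
conv = nn.Conv2d(3, 8, kernel_size=3)          # stand-in for the encoder's conv_in
x = torch.randn(1, 3, 64, 64, device=device)   # stand-in for the conditioning image

conv = ensure_same_device(conv, x)
y = conv(x)
print(tuple(y.shape))  # (1, 8, 62, 62)
```

The traceback suggests the opposite direction is also a valid fix: moving the input onto the weights' device (`x.to(weight_device)`) would clear the same error.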

Steps to reproduce the problem

  1. Go to Create embedding
  2. Load sd-v1-5-inpainting model
  3. Create an embedding with 10 vectors per token, with the name and initialization text set to the same string
  4. Go to train
  5. Select the embedding; set the dataset, learning rate, and template; leave other settings at defaults, including Max steps = 100000
  6. Press 'Train embedding'
  7. 'Preparing dataset' runs, but the actual training exits with 'Training finished at 0 steps.' and the error above

What should have happened?

The embedding should proceed through its training steps without a stack trace.

Commit where the problem happens

c4a221c405401eb7b3a6ade3de1175dfb346126c

What platforms do you use to access the UI?

Windows

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--gradio-img2img-tool color-sketch --xformers --test-lora

Additional information, context and logs

No response

willhsmit — Jan 07 '23 07:01