
How to use full precision?

Open etale-cohomology opened this issue 2 years ago • 4 comments

Running txt2img.py with --precision full, I get the following error:

Traceback (most recent call last):
  File "scripts/txt2img.py", line 108, in <module>
    x_samples = torch.clamp((x_samples+1.0) / 2.0, min=0.0,max=1.0)
  File "/home/da/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/da/git/stablediffusion/ldm/models/diffusion/ddim.py", line 103, in sample
    samples, intermediates = self.ddim_sampling(conditioning, size,
  File "/home/da/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/da/git/stablediffusion/ldm/models/diffusion/ddim.py", line 163, in ddim_sampling
    outs = self.p_sample_ddim(img, cond, ts, index=index, use_original_steps=ddim_use_original_steps,
  File "/home/da/py38/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/home/da/git/stablediffusion/ldm/models/diffusion/ddim.py", line 211, in p_sample_ddim
    model_uncond, model_t = self.model.apply_model(x_in, t_in, c_in).chunk(2)
  File "/home/da/git/stablediffusion/ldm/models/diffusion/ddpm.py", line 850, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/home/da/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/da/git/stablediffusion/ldm/models/diffusion/ddpm.py", line 1320, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/home/da/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/da/git/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 776, in forward
    h = module(h, emb, context)
  File "/home/da/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/da/git/stablediffusion/ldm/modules/diffusionmodules/openaimodel.py", line 86, in forward
    x = layer(x)
  File "/home/da/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/da/py38/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 463, in forward
    return self._conv_forward(input, self.weight, self.bias)
  File "/home/da/py38/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 459, in _conv_forward
    return F.conv2d(input, weight, bias, self.stride,
RuntimeError: Input type (c10::Half) and bias type (float) should be the same
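For context (this is not the repo's code, just a minimal standalone illustration): the traceback ends in F.conv2d being handed a half-precision (c10::Half) input while the layer's parameters are still float32. The same dtype mismatch can be reproduced in isolation, and casting the input back to float32 makes it go away:

```python
import torch

conv = torch.nn.Conv2d(3, 8, 3)           # weights and bias default to float32
x = torch.randn(1, 3, 16, 16).half()      # half-precision input, as in the traceback

try:
    conv(x)                               # dtype mismatch: raises RuntimeError
except RuntimeError as e:
    print(e)                              # input/parameter types "should be the same"

out = conv(x.float())                     # works once input and weights agree
print(out.shape)                          # (1, 8, 14, 14)
```

This suggests that somewhere in the pipeline the latents are being produced in fp16 while the UNet weights were loaded in fp32 (or vice versa), so --precision full alone does not make all dtypes agree.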

etale-cohomology avatar Jan 11 '23 05:01 etale-cohomology

I met the same problem with you. Have you solved this?

zhihui96 avatar Feb 14 '23 06:02 zhihui96

No

etale-cohomology avatar Feb 14 '23 09:02 etale-cohomology

Also have the same problem :(

lakejee-rebel avatar Mar 03 '23 00:03 lakejee-rebel

After days of troubleshooting, I was able to resolve this by upgrading tensorflow to 2.11.0 and setting the use_fp16 parameter to False in the v2-inference.yaml file.

lakejee-rebel avatar Mar 03 '23 01:03 lakejee-rebel