
RuntimeError: view size is not compatible with input tensor's size and stride

Open vikas-movva opened this issue 1 year ago • 9 comments

Describe the bug: Getting an error message when trying to generate.

To Reproduce: Click the Generate button on the txt2img or img2img tab.

Expected behavior: An image is generated.

Screenshots: (screenshot of the error attached; full traceback below)

Desktop (please complete the following information):

  • OS: macOS (M1 MacBook Air)
  • Browser: Brave
  • Commit revision: 74f940e8183e0745029ee38e7542237205c927d8
img2img: 
  0%|                                                                                                                                                                         | 0/16 [00:00<?, ?it/s]
Error completing request
Arguments: (0, '', '', 'None', 'None', <PIL.Image.Image image mode=RGB size=2560x1600 at 0x2B007EDA0>, None, None, None, 0, 20, 0, 4, 0, False, False, 1, 1, 7, 0.75, -1.0, -1.0, 0, 0, 0, False, 512, 512, 0, False, 32, 0, '', '', 0, 1, '', 4, '', True, False, 4, 1, 128, 4, 0, ['left', 'right', 'up', 'down'], '', '', 1, 50, 0, '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 4, ['left', 'right', 'up', 'down'], 1, 0.05, '<p style="margin-bottom:0.75em">Will upscale the image to twice the dimensions; use width and height sliders to set tile size</p>', 64, 0, False, None, '') {}
Traceback (most recent call last):
  File "/Users/vikas/stable-diffusion-webui/modules/ui.py", line 137, in f
    res = list(func(*args, **kwargs))
  File "/Users/vikas/stable-diffusion-webui/webui.py", line 59, in f
    res = func(*args, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/modules/img2img.py", line 117, in img2img
    processed = process_images(p)
  File "/Users/vikas/stable-diffusion-webui/modules/processing.py", line 343, in process_images
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
  File "/Users/vikas/stable-diffusion-webui/modules/processing.py", line 621, in sample
    samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning)
  File "/Users/vikas/stable-diffusion-webui/modules/sd_samplers.py", line 272, in sample_img2img
    return self.func(self.model_wrap_cfg, xi, sigma_sched, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/k_diffusion/sampling.py", line 77, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/modules/sd_samplers.py", line 176, in forward
    uncond, cond = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/k_diffusion/external.py", line 100, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/k_diffusion/external.py", line 126, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/Users/vikas/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/modules/normalization.py", line 189, in forward
    return F.layer_norm(
  File "/opt/homebrew/Caskroom/miniforge/base/envs/web_ui/lib/python3.10/site-packages/torch/nn/functional.py", line 2503, in layer_norm
    return torch.layer_norm(input, normalized_shape, weight, bias, eps, torch.backends.cudnn.enabled)
RuntimeError: view size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Use .reshape(...) instead.
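
For context: the traceback bottoms out inside torch.layer_norm, so the failing .view() call is in PyTorch itself rather than in the webui code, which suggests the fix has to come from a newer PyTorch build. Below is a minimal sketch of the same class of error, independent of the webui and with made-up shapes, just to show what .view() vs .reshape() means on a non-contiguous tensor:

import torch

x = torch.randn(2, 3, 4).permute(0, 2, 1)  # shape (2, 4, 3); permuting makes the tensor non-contiguous

try:
    x.view(2, 12)  # .view() requires contiguous memory, so this raises
except RuntimeError as e:
    print(e)  # "view size is not compatible with input tensor's size and stride ... Use .reshape(...) instead."

print(x.reshape(2, 12).shape)            # torch.Size([2, 12]); .reshape() copies when it has to
print(x.contiguous().view(2, 12).shape)  # also works: make the tensor contiguous first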

vikas-movva avatar Sep 24 '22 04:09 vikas-movva

Getting the same thing on M1. Bump.

sagefarrenholz avatar Sep 29 '22 06:09 sagefarrenholz

I am getting the same thing on my M1 Mac. It was hard enough to get the web-ui to pop up after numerous install issues. Not sure what to do at this point, but I guess it's something about using MPS or CPU vs. GPU.

idklolwhatever avatar Oct 02 '22 23:10 idklolwhatever

This issue is solved here https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/992#issuecomment-1257051711 by @dylancl:

Using the Preview (Nightly) version of PyTorch works for me. I uninstalled my PyTorch 1.12.1 and re-installed the Nightly version, which solved the problem.

To install the Nightly version:

conda install pytorch torchvision torchaudio -c pytorch-nightly

or

pip3 install --pre torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/nightly/cpu
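
After installing, a quick sanity check (just a sketch, assuming Python 3.10 on an Apple Silicon Mac) to confirm the nightly build is the one being picked up and that the MPS backend is available:

import torch

print(torch.__version__)                  # should report a nightly build, something like 1.13.0.devYYYYMMDD, not the old stable 1.12.x
print(torch.backends.mps.is_built())      # True if this wheel was compiled with MPS support
print(torch.backends.mps.is_available())  # True if the Metal (MPS) backend can actually be used

if torch.backends.mps.is_available():
    # run a trivial op on the MPS device to confirm the backend works end to end
    print(torch.ones(2, 2, device="mps").sum())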

tsu710785 avatar Oct 03 '22 03:10 tsu710785

I would not recommend installing the latest nightly version of PyTorch since I believe the performance regression has not been fixed yet.

Use pip install --pre torch==1.13.0.dev20220922 torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html --no-deps as in the installation script and performance should be good 👍

dylancl avatar Oct 03 '22 09:10 dylancl

Still getting the same problem as described in the issue

sagefarrenholz avatar Oct 08 '22 05:10 sagefarrenholz

pip install --pre torch==1.13.0.dev20220922 torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html --no-deps

Done! Thanks!

01Petard avatar Oct 08 '22 14:10 01Petard

Confirmed – working.

I also managed to generate images using protobuf 3.19.6

lgrkvst avatar Oct 09 '22 17:10 lgrkvst

You can also stay on the stable version and apply the patch mentioned here.

gerstnr avatar Nov 04 '22 22:11 gerstnr

Hey everyone, if you are looking for a possible solution: I think I know what the problem was on my machine. When I upgraded from an Intel Mac to an M1, I had all my files transferred over. It turns out I was still using the old version of brew, which apparently installs x86 versions regardless of the fact that your architecture has changed. So some things worked because of Rosetta, but I had to reinstall brew and everything it had installed to get InvokeAI working. I haven't tried stable-diffusion-webui again, but I would say it's highly likely this was the problem for me.
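
For anyone wondering whether they are in the same situation, here is a quick, webui-agnostic check (just a sketch) of whether the Python you are running is a native arm64 build or an x86_64 build being translated by Rosetta:

import platform
import subprocess

print(platform.machine())  # 'arm64' for a native Apple Silicon Python, 'x86_64' for an Intel build

# On Apple Silicon, this sysctl reports 1 when the current process is running under Rosetta
out = subprocess.run(["sysctl", "-n", "sysctl.proc_translated"],
                     capture_output=True, text=True)
print(out.stdout.strip())  # '1' = translated by Rosetta, '0' = native; empty on Intel Macs

If this reports x86_64 or translated, reinstalling a native brew, Python, and PyTorch is likely the actual fix.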

sagefarrenholz avatar Nov 04 '22 22:11 sagefarrenholz