
[Bug]: txt2img fails when using xformers on Google Colab

Open R-N opened this issue 2 years ago • 11 comments

Is there an existing issue for this?

  • [X] I have searched the existing issues and checked the recent builds/commits

What happened?

txt2img fails when using xformers on Google Colab.

  • Using xformers from pip (0.0.12) will give TypeError: memory_efficient_attention() got an unexpected keyword argument 'attn_bias'
  • Using the xformers wheel from naifu (0.0.14) gives RuntimeError: Expected query.dim() == 3 to be true, but got false.
  • Not installing triton, or installing it without --pre, produces a warning, but the error on txt2img stays the same.
  • Webui won't automatically install xformers on Google Colab

Steps to reproduce the problem

  1. Install triton with --pre. The stable release from pip is old (1.1.1) and produces a warning.
  2. Install xformers (see the command sketch after this list):
     a. Installing from pip will install xformers 0.0.12.
     b. There's a 0.0.14 wheel from the naifu colab: https://github.com/metrolobo/xformers_wheels/releases/download/1d31a3ac_various_6/xformers-0.0.14.dev0-cp37-cp37m-linux_x86_64.whl
  3. Load NAI model (with or without vae doesn't matter).
  4. Use txt2img. Here I'm using "ganyu (genshin impact )"
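For reference, the steps above map to roughly the following commands on a Colab runtime (a sketch only: {workspace3} is the reporter's placeholder for the working directory, the launch flags are copied from the Command Line Arguments section below, and launch.py is assumed to be the usual webui entry point):

# Step 1: the stable triton on pip (1.1.1) is old, so install a pre-release build instead.
pip install --pre -U triton

# Step 2b: the 0.0.14.dev0 wheel from the naifu colab (plain pip would resolve to 0.0.12).
pip install https://github.com/metrolobo/xformers_wheels/releases/download/1d31a3ac_various_6/xformers-0.0.14.dev0-cp37-cp37m-linux_x86_64.whl

# Steps 3-4: start the webui against the NAI checkpoint with xformers enabled, then run txt2img from the browser.
python launch.py --autolaunch --ckpt {workspace3}/model.ckpt --xformers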

What should have happened?

It generates an image.

Commit where the problem happens

7f8ab1e

What platforms do you use to access the UI?

Other/Cloud

What browsers do you use to access the UI?

Google Chrome

Command Line Arguments

--autolaunch --ckpt {workspace3}/model.ckpt --xformers
--autolaunch --ckpt {workspace3}/model.ckpt --vae-path {workspace3}/model.vae.pt --xformers

Additional information, context and logs

Using xformers from pip will give TypeError: memory_efficient_attention() got an unexpected keyword argument 'attn_bias'
Running on local URL:  http://127.0.0.1:7860/

To create a public link, set share=True in launch().
  0% 0/20 [00:03<?, ?it/s]
Error completing request
Arguments: ('ganyu \\(genshin impact\\), ', 'guan yu', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 0, 0, 0, False, None, '', 'outputs', '', 1, True, 100, False, '<p style="margin-bottom:0.75em">Keyframe Format: <br>Seed | Prompt or just Prompt</p>', '', 25, True, 5.0, False, None, '', 10.0, True, 30.0, True, False, 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
  File "/content/nai/stable-diffusion-webui/modules/ui.py", line 212, in f
    res = list(func(*args, **kwargs))
  File "webui.py", line 63, in f
    res = func(*args, **kwargs)
  File "/content/nai/stable-diffusion-webui/modules/txt2img.py", line 44, in txt2img
    processed = process_images(p)
  File "/content/nai/stable-diffusion-webui/modules/processing.py", line 411, in process_images
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
  File "/content/nai/stable-diffusion-webui/modules/processing.py", line 549, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning)
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 417, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 326, in launch_sampling
    return func()
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 417, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 80, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 248, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=cond_in)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
TypeError: memory_efficient_attention() got an unexpected keyword argument 'attn_bias'
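A quick way to check which call signature the installed xformers build actually exposes (a sketch; it only prints what the package itself reports, assuming it imports cleanly):

# Show the installed xformers version and whether memory_efficient_attention accepts attn_bias.
pip show xformers | head -n 2
python -c "import inspect, xformers.ops; print(inspect.signature(xformers.ops.memory_efficient_attention))"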
Using xformers wheel from naifu gives RuntimeError: Expected query.dim() == 3 to be true, but got false.
Traceback (most recent call last):
  File "/content/nai/stable-diffusion-webui/modules/ui.py", line 212, in f
    res = list(func(*args, **kwargs))
  File "webui.py", line 63, in f
    res = func(*args, **kwargs)
  File "/content/nai/stable-diffusion-webui/modules/txt2img.py", line 44, in txt2img
    processed = process_images(p)
  File "/content/nai/stable-diffusion-webui/modules/processing.py", line 411, in process_images
    samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength)
  File "/content/nai/stable-diffusion-webui/modules/processing.py", line 549, in sample
    samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning)
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 417, in sample
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 326, in launch_sampling
    return func()
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 417, in <lambda>
    samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={'cond': conditioning, 'uncond': unconditional_conditioning, 'cond_scale': p.cfg_scale}, disable=False, callback=self.callback_state, **extra_params_kwargs))
  File "/usr/local/lib/python3.7/dist-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 80, in sample_euler_ancestral
    denoised = model(x, sigmas[i] * s_in, **extra_args)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/modules/sd_samplers.py", line 248, in forward
    x_out = self.inner_model(x_in, sigma_in, cond=cond_in)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
    eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
    return self.inner_model.apply_model(*args, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
    x_recon = self.model(x_noisy, t, **cond)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
    out = self.diffusion_model(x, t, context=cc)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
    h = module(h, emb, context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
    x = layer(x, context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 258, in forward
    x = block(x, context=context)
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 209, in forward
    return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
    return CheckpointFunction.apply(func, len(inputs), *args)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
    output_tensors = ctx.run_function(*ctx.input_tensors)
  File "/content/nai/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
    x = self.attn1(self.norm1(x)) + x
  File "/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/content/nai/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
    out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 575, in memory_efficient_attention
    query=query, key=key, value=value, attn_bias=attn_bias, p=p
  File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 196, in forward_no_grad
    causal=isinstance(attn_bias, LowerTriangularMask),
  File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
    return self._op(*args, **kwargs or {})
RuntimeError: Expected query.dim() == 3 to be true, but got false.  (Could this error message be improved?  If so, please report an enhancement request to PyTorch.)
Not installing triton produces this message, but the error on txt2img stays the same:
WARNING:root:Triton is not available, some optimizations will not be enabled.
Error No module named 'triton'
Installing triton without --pre produces this message, but the error on txt2img stays the same:
WARNING:root:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: module 'triton.language' has no attribute 'constexpr'
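For reference, the pre-release triton install mentioned in the reproduction steps is just the following (a sketch; what pip can resolve on Colab may change over time):

# The stable triton release on pip (1.1.1) is what triggers the 'constexpr' warning above.
pip install --pre -U triton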
  • Webui won't install xformers automatically on Google Colab.

Related: #2731

R-N avatar Oct 20 '22 18:10 R-N

I'm having this issue too, but I'm running it on Arch Linux.

catgirlcataclysm avatar Oct 20 '22 23:10 catgirlcataclysm

Me too, in Colab.

Small-tailqwq avatar Oct 21 '22 06:10 Small-tailqwq

Same on Fedora 36 with an RTX 3070 Mobile. Any ideas on what's causing this?

AshtakaOOf avatar Oct 26 '22 21:10 AshtakaOOf

I don't know what caused it, but some people say it's fixed with a newer xformers. I can't build xformers on Colab though; it just gets stuck. And the newer version isn't available on pip for Python 3.7, which Colab uses.

Thankfully, the xformers wheel by camenduru works great for me. I've only tried it once though.

@Small-tailqwq, can you try it and confirm? As for the others, I'm not sure if the wheel can be used outside Colab.

R-N avatar Oct 28 '22 16:10 R-N

Auto1111 seems to install xformers 0.0.12 instead of 0.0.13, so is that my issue? Does anyone know where I could change the line that installs xformers?

AshtakaOOf avatar Oct 29 '22 03:10 AshtakaOOf

Auto1111 seems to install xformers 0.0.12 instead of 0.0.13, so is that my issue? Does anyone know where I could change the line that installs xformers?

xformers 0.0.13+ on pip doesn't seem to support the Python version that Colab uses, so 0.0.12 is the highest it can install.

R-N avatar Oct 29 '22 04:10 R-N
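A quick way to confirm that constraint on the Colab runtime (a sketch; pip index is an experimental subcommand and needs a fairly recent pip):

# Colab's interpreter version, and which xformers versions pip can actually resolve for it.
python --version
pip index versions xformers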

Uninstalling xformers==0.0.12 and then installing it with pip install git+https://github.com/facebookresearch/xformers.git#egg=xformers works.

aycaecemgul avatar Nov 01 '22 11:11 aycaecemgul
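For reference, the full sequence suggested above would look roughly like this (a sketch; installing from the git repository builds the CUDA extensions from source, so it needs a working compiler/CUDA toolchain and can take a long time):

# Remove the 0.0.12 wheel from pip, then build the current source tree instead.
pip uninstall -y xformers
pip install git+https://github.com/facebookresearch/xformers.git#egg=xformers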

I have tried downloading the latest release, recreating the venv, downloading the latest git build, and building it myself. None of it worked. I'm pretty sad about this; if only it worked without me struggling.

AshtakaOOf avatar Nov 06 '22 02:11 AshtakaOOf

Uninstalling xformers==0.0.12 and then installing it with pip install git+https://github.com/facebookresearch/xformers.git#egg=xformers works.

Thank you, your solution worked for me!

Halkyon64 avatar Dec 12 '22 18:12 Halkyon64

pip install git+https://github.com/facebookresearch/xformers.git#egg=xformers

Also worked for me! I'm using the official NVIDIA drivers on an Arch system (GTX 1050, low VRAM).

hideki2k02 avatar Jan 23 '23 03:01 hideki2k02

(py310) guest01@ubun:/data/ai/cnet/ControlNet/xformers$ pip install git+https://github.com/facebookresearch/xformers.git#egg=xformers
Looking in indexes: https://pypi.douban.com/simple
Collecting xformers
  Cloning https://github.com/facebookresearch/xformers.git to /tmp/pip-install-uz9w073j/xformers_4e043011103b4437a4fc7047a6bb37e8
  Running command git clone --filter=blob:none --quiet https://github.com/facebookresearch/xformers.git /tmp/pip-install-uz9w073j/xformers_4e043011103b4437a4fc7047a6bb37e8
  warning: filtering not recognized by server, ignoring
  Resolved https://github.com/facebookresearch/xformers.git to commit e23b369c094685bd42e11928649cc03b93b768d5
  Running command git submodule update --init --recursive -q
  error: the server does not allow requests for unadvertised object 88dc2040a073b270773a4d2c71d91c78e9d837cb
  Fetched submodule path 'third_party/flash-attention', but it did not contain 88dc2040a073b270773a4d2c71d91c78e9d837cb. Direct fetch of that commit failed.
  error: subprocess-exited-with-error

  × git submodule update --init --recursive -q did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

  × git submodule update --init --recursive -q did not run successfully.
  │ exit code: 1
  ╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

yxlwfds avatar Apr 19 '23 05:04 yxlwfds
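The failure above happens while pip's filtered clone tries to fetch the flash-attention submodule. A possible workaround (a hedged sketch, not verified against that index mirror) is to clone the repository with submodules yourself and install from the local checkout:

# Clone with submodules up front so pip's filtered clone never has to fetch them.
git clone --recursive https://github.com/facebookresearch/xformers.git
cd xformers
# Build and install from the local source tree (needs a CUDA toolchain; this can take a while).
pip install .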