stable-diffusion-webui
xformers, Expected query.dim() == 3 to be true, but got false
I installed xformers using pip (version 0.0.12) and ran into the following error:
File "/content/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
out = xformers.ops.memory_efficient_attention(q, k, v,)
File "/usr/local/lib/python3.7/dist-packages/xformers/ops.py", line 58, in memory_efficient_attention
return torch.ops.xformers.efficient_attention(query, key, value, False)[0]
File "/usr/local/lib/python3.7/dist-packages/torch/_ops.py", line 143, in __call__
return self._op(*args, **kwargs or {})
RuntimeError: Expected query.dim() == 3 to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
I checked query.dim() and it was 4.
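For context, the assertion fires because of a tensor-layout mismatch rather than a broken install: the webui passes q/k/v as 4-D tensors (batch, heads, seq_len, head_dim), while the 0.0.12 kernel asserted dim() == 3, expecting the heads axis folded into the batch axis. A pure-shape sketch with illustrative dimensions (no torch needed; names are mine):

```python
def fold_heads(shape_4d):
    """(B, H, S, D) -> (B * H, S, D): the 3-D layout the old kernel expected."""
    b, h, s, d = shape_4d
    return (b * h, s, d)

q_shape = (2, 8, 77, 64)      # dim() == 4: what the traceback complains about
print(fold_heads(q_shape))    # (16, 77, 64): dim() == 3
```

Newer xformers builds accept the 4-D layout directly, which is why upgrading resolves the error.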
apparently I just needed to restart the runtime
oh, not resolved
I had this exact issue when trying to run directly on my Ubuntu 22.04 LTS machine with CUDA 11.8. I resolved it by building an nvidia-docker image with nvidia/cuda:11.7.0-devel-ubuntu22.04 as its parent image. I can't say exactly what the core issue was (CUDA minor version difference or otherwise) but this is what worked for me.
> apparently I just needed to restart the runtime
I'm having the same issue on Google Colab. Restarting the runtime doesn't help.
What version of xformers are you using? I had the same problem with 0.0.13, but I upgraded to a 0.0.14-dev build (commit faa88b123514562dbe8e32ec72a25937b0dd9da5) which resolved the issue. I've been building all of this by hand so if you're using distro packages you may be out of luck :\
No error with the new xformers build; you can use it like this:
!pip install -qq https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.14/xformers-0.0.14.dev0-cp37-cp37m-linux_x86_64.whl
I tried installing it just now and I'm still getting the same error :/
!pip uninstall -y xformers
then
!pip install -qq https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.14/xformers-0.0.14.dev0-cp37-cp37m-linux_x86_64.whl
Maybe it works like this.
OK, I installed it like this and it does indeed work!
pip install git+https://github.com/facebookresearch/xformers.git@faa88b123514562dbe8e32ec72a25937b0dd9da5
I wanted to install it from the official source (not a random fork).
pip install takes too much time 🦥; you can build a wheel like this and reuse it later:
!pip wheel git+https://github.com/facebookresearch/xformers#egg=xformers
> no error with new xformers build you can use like this
> !pip install -qq https://github.com/camenduru/stable-diffusion-webui-colab/releases/download/0.0.14/xformers-0.0.14.dev0-cp37-cp37m-linux_x86_64.whl
Thank you, your wheel works for Google Colab. I can't build from source on Colab; it just gets stuck. But whatever, your wheel works great.
Building from source works for me, but it takes a long time (about an hour):
!pip install pyre-extensions==0.0.23
!pip install numpy
!git clone https://github.com/facebookresearch/xformers/
%cd xformers
!git submodule update --init --recursive
!pip install --verbose --no-deps -e .
This error is caused by running old xformers with newer CUDA. Try xformers 0.0.15 at least.
If your installed CUDA Toolkit is 11.7 (run the nvidia-smi command to see the CUDA version your driver supports), you need this:
pip install git+https://github.com/facebookresearch/xformers.git@103e863db94f712a96c34fc8e78cfd58a40adeee
However, if your CUDA Toolkit is older (e.g. 11.5 or 11.6), you probably need the older xformers 0.0.14. Still, try installing the xformers above first and see how it goes.
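The rule of thumb above can be sketched as a simple version check (the pinned commit is the one quoted above; the fallback string is just a label for illustration, not a pinned install source):

```python
def pick_xformers(cuda_version: str) -> str:
    """Rough heuristic from this thread, not an official compatibility matrix."""
    major, minor = (int(x) for x in cuda_version.split(".")[:2])
    if (major, minor) >= (11, 7):
        # CUDA 11.7+: install the commit pinned above
        return "git+https://github.com/facebookresearch/xformers.git@103e863db94f712a96c34fc8e78cfd58a40adeee"
    # Older CUDA (11.5/11.6): probably the older 0.0.14 line
    return "xformers 0.0.14 (older build; see caveats above)"

print(pick_xformers("11.7"))
```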
Also note that if you build or run this in Docker, you need an NVIDIA CUDA container based on nvcr.io/nvidia/cuda:11.7.0-devel-ubuntu22.04 (make sure it matches your host CUDA version). Likewise, if you're training Dreambooth with 8-bit bitsandbytes optimization in Docker, you definitely need an nvcr.io/nvidia/cuda image. This applies only if you're running inside Docker.
Is it good for Google Colab?
from subprocess import getoutput

# Detect the Colab GPU and install the matching precompiled wheel
gpu = getoutput('nvidia-smi')
if 'T4' in gpu:
    %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/T4/xformers-0.0.13.dev0-py3-none-any.whl
elif 'P100' in gpu:
    %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/P100/xformers-0.0.13.dev0-py3-none-any.whl
elif 'V100' in gpu:
    %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/V100/xformers-0.0.13.dev0-py3-none-any.whl
elif 'A100' in gpu:
    %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/A100/xformers-0.0.13.dev0-py3-none-any.whl