stable-diffusion-webui
[Bug]: xformers can't be installed by default on Ubuntu 20.04
Is there an existing issue for this?
- [X] I have searched the existing issues and checked the recent builds/commits
What happened?
Running `./webui.sh --xformers` causes the application to crash. Full error output:
```
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on why user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.8 (main, Oct 12 2022, 19:14:26) [GCC 9.4.0]
Commit hash: 98947d173e3f1667eba29c904f681047dea9de90
Installing xformers==0.0.12
Traceback (most recent call last):
File "/home/why/stable-diffusion-webui/launch.py", line 255, in <module>
prepare_enviroment()
File "/home/why/stable-diffusion-webui/launch.py", line 194, in prepare_enviroment
run_pip("install xformers", "xformers==0.0.12")
File "/home/why/stable-diffusion-webui/launch.py", line 63, in run_pip
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
File "/home/why/stable-diffusion-webui/launch.py", line 34, in run
raise RuntimeError(message)
RuntimeError: Couldn't install xformers==0.0.12.
Command: "/home/why/stable-diffusion-webui/venv/bin/python3" -m pip install xformers --prefer-binary
Error code: 1
stdout: Collecting xformers
Using cached xformers-0.0.13.tar.gz (292 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
stderr: error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-efl_te2y/xformers_ae3bda28b89743d682bf1a29bfcf4b41/setup.py", line 239, in <module>
ext_modules=get_extensions(),
File "/tmp/pip-install-efl_te2y/xformers_ae3bda28b89743d682bf1a29bfcf4b41/setup.py", line 157, in get_extensions
raise RuntimeError(
RuntimeError: CUTLASS submodule not found. Did you forget to run `git submodule update --init --recursive` ?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
[notice] A new release of pip available: 22.2.2 -> 22.3.1
[notice] To update, run: pip install --upgrade pip
```
Steps to reproduce the problem
Run `./webui.sh --xformers`
What should have happened?
It should launch the web UI with xformers enabled.
Commit where the problem happens
828438b
What platforms do you use to access UI ?
Ubuntu 20.04
What browsers do you use to access the UI ?
Naenara
Command Line Arguments
--xformers
Additional information, context and logs
I was able to force installation of xformers by explicitly stating version 0.0.12 in launch.py:
```
if (not is_installed("xformers") or reinstall_xformers) and xformers:
    if platform.system() == "Windows":
        if platform.python_version().startswith("3.10"):
            run_pip(f"install -U -I --no-deps {xformers_windows_package}", "xformers==0.0.12")
        else:
            print("Installation of xformers is not supported in this version of Python.")
            print("You can also check this and build manually: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers#building-xformers-on-windows-by-duckness")
            if not is_installed("xformers"):
                exit(0)
    elif platform.system() == "Linux":
        run_pip("install xformers==0.0.12", "xformers==0.0.12")
```
However, it doesn't work, and produces the following output:
```
Error completing request
Arguments: ('tabby cat, studio ghibli style', '', 'None', 'None', 20, 0, False, False, 1, 1, 7, -1.0, -1.0, 0, 0, 0, False, 512, 512, False, 0.7, 0, 0, 0, False, False, False, '', 1, '', 0, '', True, False, False) {}
Traceback (most recent call last):
File "/home/why/stable-diffusion-webui/modules/ui.py", line 185, in f
res = list(func(*args, **kwargs))
File "/home/why/stable-diffusion-webui/webui.py", line 54, in f
res = func(*args, **kwargs)
File "/home/why/stable-diffusion-webui/modules/txt2img.py", line 48, in txt2img
processed = process_images(p)
File "/home/why/stable-diffusion-webui/modules/processing.py", line 423, in process_images
res = process_images_inner(p)
File "/home/why/stable-diffusion-webui/modules/processing.py", line 519, in process_images_inner
samples_ddim = p.sample(conditioning=c, unconditional_conditioning=uc, seeds=seeds, subseeds=subseeds, subseed_strength=p.subseed_strength, prompts=prompts)
File "/home/why/stable-diffusion-webui/modules/processing.py", line 652, in sample
samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
File "/home/why/stable-diffusion-webui/modules/sd_samplers.py", line 476, in sample
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/home/why/stable-diffusion-webui/modules/sd_samplers.py", line 378, in launch_sampling
return func()
File "/home/why/stable-diffusion-webui/modules/sd_samplers.py", line 476, in <lambda>
samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args={
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 80, in sample_euler_ancestral
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/modules/sd_samplers.py", line 298, in forward
x_out = self.inner_model(x_in, sigma_in, cond={"c_crossattn": [cond_in], "c_concat": [image_cond_in]})
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 112, in forward
eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
File "/home/why/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/external.py", line 138, in get_eps
return self.inner_model.apply_model(*args, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 987, in apply_model
x_recon = self.model(x_noisy, t, **cond)
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/models/diffusion/ddpm.py", line 1410, in forward
out = self.diffusion_model(x, t, context=cc)
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 732, in forward
h = module(h, emb, context)
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/openaimodel.py", line 85, in forward
x = layer(x, context)
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 258, in forward
x = block(x, context=context)
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 209, in forward
return checkpoint(self._forward, (x, context), self.parameters(), self.checkpoint)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 114, in checkpoint
return CheckpointFunction.apply(func, len(inputs), *args)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/diffusionmodules/util.py", line 127, in forward
output_tensors = ctx.run_function(*ctx.input_tensors)
File "/home/why/stable-diffusion-webui/repositories/stable-diffusion/ldm/modules/attention.py", line 212, in _forward
x = self.attn1(self.norm1(x)) + x
File "/home/why/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/why/stable-diffusion-webui/modules/sd_hijack_optimizations.py", line 227, in xformers_attention_forward
out = xformers.ops.memory_efficient_attention(q, k, v, attn_bias=None)
TypeError: memory_efficient_attention() got an unexpected keyword argument 'attn_bias'
```
Build xformers manually: activate the venv and build it there. In that case, you'd better upgrade pip first.
https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers
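For anyone wanting to try that route, a rough sketch of the manual build on Linux might look like the following. This is only an outline based on the wiki page linked above, not a verbatim copy; the clone location under repositories/ is just the usual webui layout, and the submodule step is exactly what the CUTLASS error in the log above is complaining about:

```
# sketch of a manual xformers build inside the webui venv; adjust paths as needed
cd stable-diffusion-webui
source venv/bin/activate
pip install --upgrade pip                 # as suggested above
git clone https://github.com/facebookresearch/xformers.git repositories/xformers
cd repositories/xformers
git submodule update --init --recursive   # pulls in CUTLASS, which the failed pip build was missing
pip install -e .
deactivate
```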
> https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers
The link says "If you use a Pascal, Turing, Ampere, Lovelace or Hopper card with Python 3.10, you shouldn't need to build manually anymore. Uninstall your existing xformers and launch the repo with --xformers. A compatible wheel will be installed."
This is wrong, btw:
```
$ ./webui.sh --port=1234 --xformers
################################################################
Install script for stable-diffusion + Web UI
Tested on Debian 11 (Bullseye)
################################################################
################################################################
Running on XXXX user
################################################################
################################################################
Repo already cloned, using it as install directory
################################################################
################################################################
Create and activate python venv
################################################################
################################################################
Launching launch.py...
################################################################
Python 3.10.4 (main, Sep 21 2022, 21:03:14) [GCC 11.3.0]
Commit hash: 828438b4a190759807f9054932cae3a8b880ddf1
Installing xformers
Traceback (most recent call last):
File "/home/strube1/stable-diffusion-webui/launch.py", line 250, in <module>
prepare_enviroment()
File "/home/strube1/stable-diffusion-webui/launch.py", line 192, in prepare_enviroment
run_pip("install xformers", "xformers")
File "/home/strube1/stable-diffusion-webui/launch.py", line 63, in run_pip
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
File "/home/strube1/stable-diffusion-webui/launch.py", line 34, in run
raise RuntimeError(message)
RuntimeError: Couldn't install xformers.
Command: "/home/strube1/stable-diffusion-webui/venv/bin/python3" -m pip install xformers --prefer-binary
Error code: 1
stdout: Collecting xformers
Using cached xformers-0.0.13.tar.gz (292 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'error'
stderr: error: subprocess-exited-with-error
× python setup.py egg_info did not run successfully.
│ exit code: 1
╰─> [8 lines of output]
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "/tmp/pip-install-5lhfgy4w/xformers_c5a25e633f80438abd1c242588724677/setup.py", line 239, in <module>
ext_modules=get_extensions(),
File "/tmp/pip-install-5lhfgy4w/xformers_c5a25e633f80438abd1c242588724677/setup.py", line 157, in get_extensions
raise RuntimeError(
RuntimeError: CUTLASS submodule not found. Did you forget to run `git submodule update --init --recursive` ?
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
I'm getting this behaviour as well, but I wasn't getting it until I updated to commit b5050ad. I'm using Ubuntu Linux 22.04.
After pulling the latest and restarting, I got the message:
NotImplementedError: Could not run 'xformers::efficient_attention_forward_cutlass' with arguments from the 'CUDA' backend. This could be because the operator doesn't exist for this backend, or was omitted during the selective/custom build process (if using custom build)
This message goes away and I can generate images again if I delete the xformers repository.
If I run `./webui.sh --xformers`, I get output similar to the output from ProfaneServitor: the same traceback, and xformers does not install.
Re-installing xformers manually, by deleting the repository directory and rebuilding it, goes smoothly, but it also results in the 'NotImplementedError' again.
It works for Ubuntu 22.04 and Python 3.10:
```
cd stable-diffusion-webui
python3 -m venv venv
source venv/bin/activate
pip install https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/linux/xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl
deactivate
```
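If you want to confirm the wheel actually landed in the venv, the `xformers.info` check used further down in this thread is a quick sanity test (assuming the same venv layout as above):

```
# optional: verify the install from inside the webui venv
source venv/bin/activate
python -m xformers.info   # should list the memory_efficient_attention backends
deactivate
```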
I'm still having this issue on Win10 WSL with Ubuntu 20.04.5. I just pulled the a1111 repo, so the hash is 44c46f0ed395967cd3830dd481a2db759fda5b3b.
```
Installing xformers
Traceback (most recent call last):
File "launch.py", line 294, in <module>
prepare_enviroment()
File "launch.py", line 230, in prepare_enviroment
run_pip("install xformers", "xformers")
File "launch.py", line 78, in run_pip
return run(f'"{python}" -m pip {args} --prefer-binary{index_url_line}', desc=f"Installing {desc}", errdesc=f"Couldn't install {desc}")
File "launch.py", line 49, in run
raise RuntimeError(message)
RuntimeError: Couldn't install xformers.
Command: "/home/tenabraex/stable-diffusion-webui/venv/bin/python3" -m pip install xformers --prefer-binary
Error code: 1
stdout: Collecting xformers
Using cached xformers-0.0.13.tar.gz (292 kB)
stderr: ERROR: Command errored out with exit status 1:
command: /home/tenabraex/stable-diffusion-webui/venv/bin/python3 -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-5m584dff/xformers/setup.py'"'"'; __file__='"'"'/tmp/pip-install-5m584dff/xformers/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-install-5m584dff/xformers/pip-egg-info
cwd: /tmp/pip-install-5m584dff/xformers/
Complete output (7 lines):
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/tmp/pip-install-5m584dff/xformers/setup.py", line 239, in <module>
ext_modules=get_extensions(),
File "/tmp/pip-install-5m584dff/xformers/setup.py", line 157, in get_extensions
raise RuntimeError(
RuntimeError: CUTLASS submodule not found. Did you forget to run git submodule update --init --recursive ?
----------------------------------------
ERROR: Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
```
I tried installing xformers manually from the repo, which worked fine:
```
(base) tenabraex@SacrariumR:~/stable-diffusion-webui$ python -m xformers.info
A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xFormers 0.0.15.dev+4c06c79.d20221205
memory_efficient_attention.flshatt: available - requires GPU with compute capability 7.5+
memory_efficient_attention.cutlass: available
memory_efficient_attention.small_k: available
swiglu.fused.p.cpp: available
is_triton_available: False
is_functorch_available: False
pytorch.version: 1.12.1+cu116
pytorch.cuda: available
gpu.compute_capability: 7.5
gpu.name: NVIDIA GeForce RTX 2080 Ti
```
> it works for ubuntu 22.04 and python 3.10:
> cd stable-diffusion-webui
> python3 -m venv venv
> source venv/bin/activate
> pip install https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/linux/xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl
> deactivate
It works!! thanks!
> it works for ubuntu 22.04 and python 3.10:
> cd stable-diffusion-webui
> python3 -m venv venv
> source venv/bin/activate
> pip install https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/linux/xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl
> deactivate
Works on 20.04, but using compilation instead of the wheel:
```
cd stable-diffusion-webui
python3 -m venv venv
source venv/bin/activate
sudo pip3 install git+https://github.com/facebookresearch/xformers.git#egg=xformers
deactivate
```
Having the same issue on Ubuntu 22.04.1 LTS.
For anyone like myself who initially attempted the fix here (https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/2073), you may need to do a bit of cleanup.
For context: I am on Ubuntu 22.04 with an RTX 3080. I have several versions of CUDA installed, although none of them are hard-referenced in environment variables.
First I had to unset the environment variables:
```
unset FORCE_CUDA
unset TORCH_CUDA_ARCH_LIST
```
I'm not sure if the second one was necessary, but the first was actually causing the build to fail, strangely indicating it couldn't find CUDA.
Like the Windows xformers build process described in the wiki, I used a venv to build xformers (adjusting to `source ./venv/bin/activate` for Linux).
I'm not clear if this was necessary or even worked in the end, but at some point I had success building xformers by installing CUDA 11.7 (it worked with 11.8 but had a ton of warnings, and the resulting install didn't seem to work, though maybe I did something different) and running `CUDA_HOME=/usr/local/cuda-11.7 python setup.py develop` (if that doesn't work, just build the normal way as documented).
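Putting those pieces together, the sequence on this machine looked roughly like the following (a sketch of the steps described above, not a canonical recipe; the CUDA path, build venv, and directory layout are specific to this setup):

```
# rough recap of the build steps described above
unset FORCE_CUDA             # left over from the earlier attempted fix, was breaking the build
unset TORCH_CUDA_ARCH_LIST   # possibly unnecessary, see above
cd xformers                  # inside the cloned xformers source tree
source ./venv/bin/activate   # the build venv mentioned above
CUDA_HOME=/usr/local/cuda-11.7 python setup.py develop
```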
Once I got a successful build, the trick was ⚠️ staying in the xformers dir but activating/switching to the stable-diffusion-webui venv ⚠️ and then installing the build I had just made, like so:
```
source ../../venv/bin/activate
pip install -e . --install-option develop
```
Finally, I was able to get it up and running with no errors:
```
bash webui.sh --xformers
```
Not sure if there's any advantage to this over using https://github.com/C43H66N12O12S2/stable-diffusion-webui/releases/download/linux/xformers-0.0.14.dev0-cp310-cp310-linux_x86_64.whl
🤷
Closing as stale.