[bug]: Problem with xFormers loading
Is there an existing issue for this?
- [X] I have searched the existing issues
OS
Windows
GPU
cuda
VRAM
4 GB
What version did you experience this issue on?
2.3.5.post2
What happened?
I updated to the newest version, 2.3.5.post2, using the launcher script: I chose option #8 and ran `pip install invokeai[xformers] --use-pep517 --upgrade`, and now xFormers no longer seems to work. This is what I get when I try to run the browser-based UI:

> Starting the InvokeAI browser-based UI...
> WARNING[XFORMERS]: xFormers can't load C++/CUDA extensions. xFormers was built for:
>     PyTorch 2.0.1+cu118 with CUDA 1108 (you have 2.0.1+cpu)
>     Python 3.10.11 (you have 3.10.11)
> Please reinstall xformers (see https://github.com/facebookresearch/xformers#installing-xformers)
> Memory-efficient attention, SwiGLU, sparse and more won't be available. Set XFORMERS_MORE_DETAILS=1 for more details
> Initializing, be patient...
> Initialization file C:\Users\InvokeAI\invokeai.init found. Loading...
> Internet connectivity is True
> InvokeAI, version 2.3.5.post2
> InvokeAI runtime directory is "C:\Users\InvokeAI"
> GFPGAN Initialized
> CodeFormer Initialized
> ESRGAN Initialized
> Using device_type cpu
> xformers memory-efficient attention is available but disabled
> NSFW checker is disabled
> Current VRAM usage: 0.00G

(screenshot below)
Yet I have PyTorch 2.0.1+cu118 installed on my computer, not the CPU version (I have a screenshot of this below).
Can someone please tell me how to fix this issue? Thank you in advance.
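For what it's worth, here is one way to compare what the system-wide interpreter and the one inside InvokeAI's venv actually see, from a Windows command prompt. The `.venv` path below is only a guess based on the runtime directory shown in the log; adjust it to the actual install location:

```
REM System-wide Python (outside InvokeAI's venv)
python -c "import torch; print(torch.__version__)"

REM Python inside InvokeAI's venv -- the .venv path is an assumption based on
REM the runtime directory "C:\Users\InvokeAI" from the log; adjust as needed
C:\Users\InvokeAI\.venv\Scripts\python.exe -c "import torch; print(torch.__version__)"
```

If the first command prints 2.0.1+cu118 but the second prints 2.0.1+cpu, the CUDA build is installed system-wide while the venv still has the CPU-only build.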
Screenshots
Additional context
No response
Contact Details
No response
Chances are that you installed the CUDA version of PyTorch outside of InvokeAI's venv, leaving the InvokeAI copy of PyTorch untouched. The easiest fix is to download the latest InvokeAI 2.3.5.post2 installer and choose to install to the same location where yours is already installed; this will properly update the libraries within the venv. Otherwise, you can do it manually, but make sure you are inside InvokeAI's venv (option 8, as you mentioned) for any steps involving installing xformers and PyTorch. It'd be a lot easier to just use the installer.
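If you do go the manual route, a rough sketch of the commands follows, run from the developer console (launcher option 8) so that pip targets the venv. The cu118 index URL and the version pins are assumptions based on the versions shown in your log, not an official recipe:

```
REM Run these inside InvokeAI's venv (launcher option 8 / developer console)

REM Reinstall InvokeAI with the xformers extra
pip install --upgrade "invokeai[xformers]" --use-pep517

REM Replace the CPU-only torch with the CUDA 11.8 build that xformers was built
REM against (versions pinned to match the log; adjust if yours differ)
pip install --force-reinstall torch==2.0.1 torchvision==0.15.2 --index-url https://download.pytorch.org/whl/cu118

REM Sanity check: should print "2.0.1+cu118 True"
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```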