Adding xFormers to improve processing time
Yes, please add this.
I've tried to do this myself, but forgot about the conda environment. Will try again. When I did this on AUTO1111, I used a system-wide Python install, CUDA, and Visual Studio to integrate it; performance went through the roof.
Xformers is being worked on in the SD2 branch (not yet public).
@JeLuF I updated from V1, which had xformers installed and working, to the latest SD2 by enabling the beta setting. But now, after the update to SD2, I see the following error in the command prompt: `No module 'xformers'. Proceeding without it.` I have a GTX 970 video card. How can I get xformers to work with SD2? It worked with SD1 before the upgrade.
Also, I installed xformers in `c:...\stable-diffusion-ui\installer_files\env\pkgs\xformers`. Everything seems OK except that torchvision could not be installed. It fetched the right packages otherwise; there may have been an incompatibility with some CUDA version.
- I verified with `nvcc --version` that CUDA 11.8 is detected. I followed this process from the xformers page of the AUTOMATIC1111 wiki: https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Xformers 👍

```
cd repositories
git clone https://github.com/facebookresearch/xformers.git
cd xformers
git submodule update --init --recursive
pip install -r requirements.txt
pip install -e .
```

(In my install, the `\pkgs` directory plays a similar role to `repositories`, so I installed the xformers directory there.)
BUT now, my GPU is not detected:
```
requesting for render_devices auto
WARNING: Could not find a compatible GPU. Using the CPU, but this will be very slow!
devices_to_start {'cpu'}
devices_to_stop set()
Start new Rendering Thread on device cpu
Render device CPU available as AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD
loading c:...\stable-diffusion-ui\models\stable-diffusion\analog-diffusion-1.0st.safetensors to device cpu using precision full
Loading model from c:...\stable-diffusion-ui\models\stable-diffusion\analog-diffusion-1.0st.safetensors
Loading from safetensors
UNet: Running in eps-prediction mode
active devices {'cpu': {'name': 'AMD64 Family 23 Model 113 Stepping 0, AuthenticAMD'}}
INFO: Started server process [14440]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://127.0.0.1:9000 (Press CTRL+C to quit)
```
1. What does `server process [14440]` mean? Everything seems OK otherwise, but the GPU is not recognised and the CPU is set to do the calculations. I was on the beta version, then went back to the standard version. What can I do to get GPU recognition back?
2. Should I maybe uninstall xformers as well, since it's buggy?
3. I ran `python -c "import torch; print(torch.__version__)"` and got `1.11.0+cpu`, which means I have to switch to a torch build with GPU support, right? That looks close to a solution. But when I went back to the beta, xformers was not detected any more; it seems linked to SD 2.0. (Are you going to SD 2.1? I'd prefer to stay on V1.5, which I find far more creative!)
4. `conda list` shows no cudnn at all. I tried to install it with pip and with conda, but neither worked.
5. Defeated. I tried one thing after another, but nothing helped. So, re-installation? Is that easy? Maybe the missing cudnn in `conda list` isn't the problem after all, but then what is?
6. Lastly: re-installation completed, and everything seems OK again. xformers is not mentioned anywhere, and I didn't uninstall it, but startup no longer shows "error: No module 'xformers'." Maybe it's in use? But HOW can I tell?
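To check point 3 above (whether the installed torch build is CPU-only), a small diagnostic script can help. This is only a sketch to run inside Easy Diffusion's conda environment; the `build_info` helper is a name made up here for illustration, not part of Easy Diffusion:

```python
# Diagnostic sketch: report whether torch/torchvision are CPU-only builds.
# A version string containing "+cpu" (like the 1.11.0+cpu above) means the
# wheel was built without CUDA support, no matter which drivers are installed.
import importlib.util


def build_info(pkg: str) -> str:
    """Return a one-line summary of the installed build of `pkg`."""
    if importlib.util.find_spec(pkg) is None:
        return f"{pkg}: not installed"
    mod = __import__(pkg)
    ver = getattr(mod, "__version__", "unknown")
    if "+cpu" in ver:
        return f"{pkg}: {ver} (CPU-only build)"
    return f"{pkg}: {ver}"


if __name__ == "__main__":
    for pkg in ("torch", "torchvision"):
        print(build_info(pkg))
    # For torch itself, torch.cuda.is_available() is the definitive check
    # once a CUDA-enabled build is installed.
```

If both packages report a `+cpu` version, reinstalling CUDA-enabled builds from the PyTorch wheel index is the usual fix.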
By installing xformers, you've installed incompatible versions of torch and torchvision that lack CUDA support. You need to install the right versions of these libraries, from extra repositories, to get CUDA support back. Even if you do, the entire setup will break every time any of these packages gets updated, because the dependencies don't match and you're manually overruling them. That's why we're currently not shipping xformers as part of our installation.
If you're interested, there's a lengthy thread in Discord with all the details. https://discord.com/channels/1014774730907209781/1052671621556609105
If you're using xformers, you will see about a dozen lines like these when rendering the first image:
```
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is 768 and using 8 heads.
Setting up MemoryEfficientCrossAttention. Query dim is 320, context_dim is None and using 8 heads.
```
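If those lines scroll past too quickly, a cruder check (just a sketch, not an official Easy Diffusion tool) is to ask the same Python environment whether the `xformers` module can be imported at all; if it can't, the app is certainly running without it:

```python
# Sketch: probe whether `xformers` is importable from this environment.
# If the import fails, the app falls back to the standard attention
# implementation (that's what the "No module 'xformers'" startup message means).

def xformers_available() -> bool:
    """Return True if the `xformers` package can be imported."""
    try:
        import xformers  # noqa: F401
        return True
    except ImportError:
        return False


print("xformers importable:", xformers_available())
```

Note that an importable module only proves it is available, not that the currently loaded model is using it; the `MemoryEfficientCrossAttention` log lines above remain the positive confirmation.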
You can now install xFormers by following the steps on this page: https://github.com/easydiffusion/easydiffusion/wiki/xFormers