xformers missing?
Hello, I noticed that the xformers info is missing from the bottom of the page today, and in Settings under Optimizations the only option listed is Automatic.
Already up to date.
venv "D:/AINOVO/stable-diffusion-webui/venv\Scripts\Python.exe"
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: f1.0.2v1.10.1-previous-191-g1802850e
Commit hash: 1802850eb124886dfc3d158aa242d69ae4b00195
Total VRAM 11264 MB, total RAM 32690 MB
pytorch version: 2.1.2+cu121
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : native
VAE dtype preferences: [torch.float32] -> torch.float32
Installing requirements for Face Editor
Faceswaplab : Use GPU requirements
Checking faceswaplab requirements
Install protobuf>=3.20.2
Installing sd-webui-faceswaplab requirement: protobuf>=3.20.2
1.768671399971936
CUDA 12.1
Launching Web UI with arguments: --xformers --force-enable-xformers --ckpt-dir D:/AINOVO/stable-diffusion-webui/models/Stable-diffusion --hypernetwork-dir D:/AINOVO/stable-diffusion-webui/models/hypernetworks --embeddings-dir D:/AINOVO/stable-diffusion-webui/embeddings --lora-dir D:/AINOVO/stable-diffusion-webui/models/Lora --controlnet-dir D:/AINOVO/stable-diffusion-webui/extensions/sd-webui-controlnet/models
Total VRAM 11264 MB, total RAM 32690 MB
pytorch version: 2.1.2+cu121
WARNING:xformers:A matching Triton is not available, some optimizations will not be enabled.
Error caught was: No module named 'triton'
xformers version: 0.0.23.post1
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce GTX 1080 Ti : native
VAE dtype preferences: [torch.float32] -> torch.float32
CUDA Stream Activated: False
Using xformers cross attention
Using xformers attention for VAE
ControlNet preprocessor location: D:\AINOVO\WebForge\webui\models\ControlNetPreprocessor
CHv1.8.10: Get Custom Model Folder
Tag Autocomplete: Cannot reload embeddings instantly: module 'modules.sd_hijack' has no attribute 'model_hijack'
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
[-] ADetailer initialized. version: 24.8.0, num models: 41
18:40:40 - ReActor - STATUS - Running v0.7.1-a1 on Device: CUDA
CHv1.8.10: Set Proxy:
2024-08-08 18:40:43,308 - ControlNet - INFO - ControlNet UI callback registered.
Loading weights [ec41bd2a82] from D:\AINOVO\stable-diffusion-webui\models\Stable-diffusion\photon_v1.safetensors
StateDict Keys: {'unet': 686, 'vae': 248, 'text_encoder': 197, 'ignore': 0}
Running on local URL: http://127.0.0.1:7860
To create a public link, set share=True in launch().
Startup time: 37.2s (prepare environment: 14.0s, launcher: 2.1s, import torch: 3.5s, initialize shared: 0.1s, other imports: 0.5s, list SD models: 0.4s, load scripts: 4.7s, initialize extra networks: 1.2s, create ui: 6.4s, gradio launch: 4.3s).
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16, 'manual_cast': False}
tag_autocomplete_helper: Old webui version or unrecognized model shape, using fallback for embedding completion.
Model loaded in 7.7s (calculate hash: 0.2s, load weights from disk: 1.3s, forge model load: 6.1s).
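If you want to double-check that the xformers build the log reports is actually usable inside that venv (and that the missing-Triton warning is harmless), a quick sanity check along these lines, run with the venv's Python.exe shown above, should do it; the tensor shape below is just an arbitrary test value, not anything the webui uses:

import torch
import xformers
import xformers.ops

# Versions should match what the launch log prints (2.1.2+cu121 / 0.0.23.post1).
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("xformers:", xformers.__version__)

# Tiny memory-efficient attention call on the GPU; if this runs, xformers itself
# is working and the Triton warning only means some optional kernels are skipped.
q = torch.randn(1, 16, 8, 64, device="cuda", dtype=torch.float32)
out = xformers.ops.memory_efficient_attention(q, q, q)
print("memory_efficient_attention OK, output shape:", tuple(out.shape))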
The dev did mention that xformers is currently broken.
That report is from 3 days ago; the update about xformers is less than 6 hours old.
What about the rest of the optimizations, like SDP, Doggettx, and the many others Automatic1111 had in that dropdown? Do they need to be installed separately?