fast-stable-diffusion
ImportError: cannot import name 'is_xformers_available' from 'diffusers.utils.import_utils'
Traceback (most recent call last):
File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 18, in
I have run all the steps in the proper order
rerun the dependencies cell
I just have, plus I disconnected, reconnected, and started from scratch. The issue persists.
Run !nvidia-smi and tell me what GPU you're running on.
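For example, a minimal cell (just a sketch, assuming a CUDA runtime is attached) that prints only the GPU model and memory:

# print only the GPU model and total memory; assumes nvidia-smi is on the PATH
from subprocess import getoutput
print(getoutput('nvidia-smi --query-gpu=name,memory.total --format=csv,noheader'))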
I'm on Google Colab Pro+, no idea what GPU.

The A100 GPU doesn't seem to work with the current xformers pre-compiled files; I will fix that shortly.
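A rough way to see which architecture your runtime actually got (just a sketch assuming torch is installed, not the fix itself) is to print the compute capability; a pre-compiled wheel only helps when it was built for that architecture:

# print the GPU name and compute capability; A100 is sm_80, T4 sm_75, V100 sm_70, P100 sm_60
import torch
major, minor = torch.cuda.get_device_capability(0)
print(torch.cuda.get_device_name(0), f'sm_{major}{minor}')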
Awesome, thanks for the great support, I'm also looking forward to this fix!
Try it now and see if it's fixed, use the official link : https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
I just tried again with the updated notebook but it's still giving me the same error with an A100 on Colab. And btw thank you for the prompt support!
Run !nvidia-smi; I don't think it's the A100 this time.
Yeah I checked and I indeed had an A100 instance. To double check I just tried to run the exact same notebook on a lesser Tesla T4 on Colab again and it works (although way slower of course) so it appears to be related to the A100 GPU on my side.
You're getting the xformers error?
Yes, I am getting the exact same xformers error (with the SD model downloaded with my HF token) when starting the DreamBooth training.
Make a screenshot of the xformers cell's code.
Oh wait, I just figured out the problem. Are you sure you're using my fork of diffusers?
Try it now and see if it's fixed, use the official link : https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
I am using the notebook from here without changing anything. When my other training jobs are finished I'll try again and send you a screenshot of the xformers cell's code if it's still relevant. Thx again.
File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 21, in
from diffusers.utils.import_utils import is_xformers_available
My version of attention.py doesn't contain: from diffusers.utils.import_utils import is_xformers_available
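A quick sanity check (a sketch, assuming it's run in the same runtime) of which diffusers copy Python is importing and whether its import_utils defines the function at all:

# show the imported diffusers install and whether is_xformers_available exists in it
import diffusers
import diffusers.utils.import_utils as import_utils
print(diffusers.__version__, diffusers.__file__)
print(hasattr(import_utils, 'is_xformers_available'))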
I am sorry, but I am experiencing the same issue.
Link is https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb#scrollTo=1-9QbkfAVYYU
Tried closing, refreshing, reconnecting...
Traceback (most recent call last):
File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 18, in
#@markdown # Dependencies
%%capture
%cd /content/
!git clone https://github.com/TheLastBen/diffusers
!pip install -q git+https://github.com/TheLastBen/diffusers
!pip install -q accelerate==0.12.0
!pip install -q OmegaConf
!pip install -q wget
!wget https://github.com/TheLastBen/fast-stable-diffusion/raw/main/Dreambooth/Deps
!mv Deps Deps.7z
!7z x Deps.7z
!cp -r /content/usr/local/lib/python3.7/dist-packages /usr/local/lib/python3.7/
!rm Deps.7z
!rm -r /content/usr
!sed -i 's@else prefix + ": "@else prefix + ""@g' /usr/local/lib/python3.7/dist-packages/tqdm/std.py
==========
#@markdown # xformers
from subprocess import getoutput
from IPython.display import HTML
from IPython.display import clear_output
import wget
import time

s = getoutput('nvidia-smi')
if 'T4' in s:
  gpu = 'T4'
elif 'P100' in s:
  gpu = 'P100'
elif 'V100' in s:
  gpu = 'V100'
elif 'A100' in s:
  gpu = 'A100'

while True:
    try:
        gpu=='T4' or gpu=='P100' or gpu=='V100' or gpu=='A100'
        break
    except:
        pass
        print('\033[1;31mit seems that your GPU is not supported at the moment')
        time.sleep(5)

if (gpu=='T4'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/T4/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='P100'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/P100/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='V100'):
  %pip install -q https://github.com/TheLastBen/fast-stable-diffusion/raw/main/precompiled/V100/xformers-0.0.13.dev0-py3-none-any.whl
elif (gpu=='A100'):
  %cd /usr/local/lib/python3.7/diffusers/
  !rm /usr/local/lib/python3.7/diffusers/models/attention.py
  wget.download('https://raw.githubusercontent.com/huggingface/diffusers/main/src/diffusers/models/attention.py')

clear_output()
print('\033[1;32mDONE !')
File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 21, in from diffusers.utils.import_utils import is_xformers_availableMy version of the attention.py doesn't contain :
from diffusers.utils.import_utils import is_xformers_available
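A quick way to confirm which side is mismatched (just a sketch, assuming the dist-packages paths from the traceback) is to grep both files for the symbol:

# if the second grep prints nothing, the installed import_utils.py never defines the function attention.py imports
!grep -n "is_xformers_available" /usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py
!grep -n "is_xformers_available" /usr/local/lib/python3.7/dist-packages/diffusers/utils/import_utils.py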
It's running this attention.py; it seems like the colab doc only just broke in the past few hours?
!wget -O attention.py https://raw.githubusercontent.com/huggingface/diffusers/main/src/diffusers/models/attention.py
I'm on it
disconnect from the colab and reconnect and check if it's fixed
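For example (a minimal check, a sketch assuming a fresh runtime so nothing stale is cached), retry the failing import by itself:

# succeeds only if attention.py and import_utils.py agree again
!python3 -c "import diffusers.models.attention; print('attention.py imports cleanly')"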
@johnrees was it working before with the A100? (yesterday or before)
I'm not 100% sure as I've been running a few different things recently, but I'm pretty sure it was an A100.
I think I was using (Hardware Accelerator = GPU, GPU Class = Premium) with the specs below, in a colab using https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb about 6 hours ago
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  A100-SXM4-40GB      Off  | 00000000:00:04.0 Off |                    0 |
| N/A   30C    P0    45W / 400W |      0MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+
If I wasn't using that, then the settings were just (Hardware Accelerator = GPU, standard GPU class).
@TheLastBen
I had the same problem; it seems fixed now.
The training is now starting with the A100.
it seems to be fixed here too, thanks for creating this repo @TheLastBen 🙏
Can't confirm at the moment as I've started the training on the standard GPU but will report back if similar issues occur. Thank you!
Off-topic: is it just me, or do the standard and premium GPUs have roughly the same seconds / iteration? I don't remember the A100 training speed, but with the standard GPU I get approx 1 sec / iteration at a cost of 1.96 computing units / hour. The A100 is mind-blowingly expensive at 13 units / hour; I can't afford that rate.
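Rough back-of-the-envelope numbers (the 1 sec / iteration and the unit rates are the ones above; the A100 speed is only a placeholder assumption since I didn't measure it):

# compute units burned per 1000 training steps = steps * sec_per_step / 3600 * units_per_hour
def units_per_1000_steps(sec_per_step, units_per_hour):
    return 1000 * sec_per_step / 3600 * units_per_hour

print(units_per_1000_steps(1.0, 1.96))   # standard GPU: ~0.54 units
print(units_per_1000_steps(0.5, 13.0))   # A100 at an assumed 0.5 sec/step: ~1.81 units

So at 13 units / hour the A100 would need to be roughly 13 / 1.96 ≈ 6.6× faster per step just to break even on cost.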
If I have the Pro+ subscription but use the standard GPU, do I still get background execution?
Most likely you can get background execution, as it's tied to your plan, not the GPU.
Hi,
I am getting the same error. I am using a custom GCE VM (created using the official Colab guide).
Traceback (most recent call last):
File "/content/diffusers/examples/dreambooth/train_dreambooth.py", line 18, in <module>
from diffusers import AutoencoderKL, DDPMScheduler, StableDiffusionPipeline, UNet2DConditionModel
File "/usr/local/lib/python3.7/dist-packages/diffusers/__init__.py", line 21, in <module>
from .models import AutoencoderKL, UNet2DConditionModel, UNet2DModel, VQModel
File "/usr/local/lib/python3.7/dist-packages/diffusers/models/__init__.py", line 19, in <module>
from .unet_2d import UNet2DModel
File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_2d.py", line 11, in <module>
from .unet_blocks import UNetMidBlock2D, get_down_block, get_up_block
File "/usr/local/lib/python3.7/dist-packages/diffusers/models/unet_blocks.py", line 20, in <module>
from .attention import AttentionBlock, SpatialTransformer
File "/usr/local/lib/python3.7/dist-packages/diffusers/models/attention.py", line 21, in <module>
from diffusers.utils.import_utils import is_xformers_available
ImportError: cannot import name 'is_xformers_available' from 'diffusers.utils.import_utils' (/usr/local/lib/python3.7/dist-packages/diffusers/utils/import_utils.py)
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--image_captions_filename', '--train_text_encoder', '--save_starting_step=500', '--stop_text_encoder_training=1200', '--save_n_steps=0', '--pretrained_model_name_or_path=/content/stable-diffusion-v1-5', '--instance_data_dir=/content/gdrive/MyDrive/Fast-Dreambooth/Sessions/vikas/instance_images', '--output_dir=/content/models/vikas', '--instance_prompt=', '--seed=96576', '--resolution=512', '--mixed_precision=no', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--use_8bit_adam', '--learning_rate=2e-6', '--lr_scheduler=polynomial', '--center_crop', '--lr_warmup_steps=0', '--max_train_steps=3000']' returned non-zero exit status 1.
Something went wrong
Wed Nov 2 21:50:54 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.32.03    Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  A100-SXM4-40GB      Off  | 00000000:00:04.0 Off |                    0 |
| N/A   30C    P0    47W / 400W |      0MiB / 40536MiB |      0%      Default |
|                               |                      |             Disabled |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+