
[launch_inpaint.sh] AttributeError: 'LatentsDataset' object has no attribute 'class_images_path'

Open ZeroCool22 opened this issue 2 years ago • 2 comments

Describe the bug

(Screenshot: traceback ending in AttributeError: 'LatentsDataset' object has no attribute 'class_images_path')

My launch_inpaint.sh

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="training"
export OUTPUT_DIR="my_model"

accelerate launch train_inpainting_dreambooth.py \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="Greyalieninpa" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --save_interval=500 \
  --max_train_steps=5000


Reproduction

No response

Logs

No response

System Info

  • diffusers version: 0.9.0
  • Platform: Linux-5.10.102.1-microsoft-standard-WSL2-x86_64-with-glibc2.31
  • Python version: 3.9.13
  • PyTorch version (GPU?): 1.12.1+cu116 (True)
  • Huggingface_hub version: 0.10.0
  • Transformers version: 4.24.0
  • Using GPU in script?: Yes, GTX 1080 Ti
  • Using distributed or parallel set-up in script?:

ZeroCool22 avatar Dec 12 '22 04:12 ZeroCool22

Add --not_cache_latents to your launch_inpaint.sh and it should work.
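
For reference, a minimal sketch of launch_inpaint.sh with that flag appended (this assumes train_inpainting_dreambooth.py accepts --not_cache_latents as a boolean switch, as suggested above; every other argument is unchanged from the original script):

export LD_LIBRARY_PATH=/usr/lib/wsl/lib:$LD_LIBRARY_PATH
export MODEL_NAME="runwayml/stable-diffusion-inpainting"
export INSTANCE_DIR="training"
export OUTPUT_DIR="my_model"

# Same command as in the bug report, with --not_cache_latents added so the
# dataset is not wrapped in the latents cache that triggers the AttributeError.
accelerate launch train_inpainting_dreambooth.py \
  --pretrained_vae_name_or_path="stabilityai/sd-vae-ft-mse" \
  --pretrained_model_name_or_path=$MODEL_NAME \
  --instance_data_dir=$INSTANCE_DIR \
  --output_dir=$OUTPUT_DIR \
  --instance_prompt="Greyalieninpa" \
  --resolution=512 \
  --train_batch_size=1 \
  --learning_rate=1e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --gradient_accumulation_steps=4 \
  --gradient_checkpointing \
  --use_8bit_adam \
  --not_cache_latents \
  --save_interval=500 \
  --max_train_steps=5000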

InB4DevOps avatar Dec 21 '22 23:12 InB4DevOps

--not_cache_latents leads to an out-of-memory error even on a 24 GB card. Was any workaround found for this? I'm running into the same issue with train_inpainting_dreambooth.py as the one mentioned, i.e. 'LatentsDataset' object has no attribute 'class_images_path'.

vivek-hirer-ai avatar Feb 02 '23 09:02 vivek-hirer-ai