
Does anyone know how to resolve this?

Open ChefEase opened this issue 8 months ago • 1 comment

```
Traceback (most recent call last):
  File "/content/lora/training_scripts/train_lora_dreambooth.py", line 1008, in <module>
    main(args)
  File "/content/lora/training_scripts/train_lora_dreambooth.py", line 489, in main
    accelerator = Accelerator(
                  ^^^^^^^^^^^^
TypeError: Accelerator.__init__() got an unexpected keyword argument 'logging_dir'
Traceback (most recent call last):
  File "/usr/local/bin/accelerate", line 8, in <module>
    sys.exit(main())
             ^^^^^^
  File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/accelerate_cli.py", line 48, in main
    args.func(args)
  File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 1199, in launch_command
    simple_launcher(args)
  File "/usr/local/lib/python3.11/dist-packages/accelerate/commands/launch.py", line 778, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', 'lora/training_scripts/train_lora_dreambooth.py', '--pretrained_model_name_or_path=runwayml/stable-diffusion-v1-5', '--instance_data_dir=/content/lora/datasets/', '--output_dir=/content/lora/output', '--instance_prompt=onepieceartstyle', '--resolution=512', '--use_8bit_adam', '--mixed_precision=fp16', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--learning_rate=0.0003', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--max_train_steps=3000', '--train_text_encoder', '--lora_rank=16', '--learning_rate_text=1e-05']' returned non-zero exit status 1
```
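The `TypeError` suggests a version mismatch: newer releases of `accelerate` removed the `logging_dir` keyword from `Accelerator` and moved that setting into `accelerate.utils.ProjectConfiguration`, while the training script still passes `logging_dir` directly. One workaround is pinning an older `accelerate`; another is patching the script. Below is a minimal sketch of a compatibility shim (the helper name `build_accelerator_kwargs` and its parameters are hypothetical, not part of the script) that builds the `Accelerator` kwargs based on whichever signature is installed:

```python
import inspect


def build_accelerator_kwargs(accelerator_cls, output_dir, logging_dir,
                             gradient_accumulation_steps, mixed_precision):
    """Return kwargs valid for both old and new Accelerator signatures.

    Hypothetical helper: inspects the installed Accelerator's __init__
    and passes `logging_dir` only where it is still accepted.
    """
    kwargs = {
        "gradient_accumulation_steps": gradient_accumulation_steps,
        "mixed_precision": mixed_precision,
    }
    params = inspect.signature(accelerator_cls.__init__).parameters
    if "logging_dir" in params:
        # Older accelerate: logging_dir is a direct keyword argument.
        kwargs["logging_dir"] = logging_dir
    else:
        # Newer accelerate: the path moved into ProjectConfiguration.
        from accelerate.utils import ProjectConfiguration
        kwargs["project_config"] = ProjectConfiguration(
            project_dir=output_dir, logging_dir=logging_dir)
    return kwargs
```

In `train_lora_dreambooth.py` this would mean replacing `Accelerator(..., logging_dir=logging_dir)` with `Accelerator(**build_accelerator_kwargs(Accelerator, args.output_dir, logging_dir, args.gradient_accumulation_steps, args.mixed_precision))`. Alternatively, pinning the library (e.g. `pip install "accelerate<0.17"`) may avoid editing the script at all, assuming the rest of the code is compatible with that version.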

ChefEase • Mar 10 '25 10:03