fast-stable-diffusion
Old method bug
Trying the old method with default settings, I got this error:
usage: train_dreambooth.py [-h] --pretrained_model_name_or_path
PRETRAINED_MODEL_NAME_OR_PATH
[--tokenizer_name TOKENIZER_NAME]
--instance_data_dir INSTANCE_DATA_DIR
[--class_data_dir CLASS_DATA_DIR]
[--instance_prompt INSTANCE_PROMPT]
[--class_prompt CLASS_PROMPT]
[--with_prior_preservation]
[--prior_loss_weight PRIOR_LOSS_WEIGHT]
[--num_class_images NUM_CLASS_IMAGES]
[--output_dir OUTPUT_DIR] [--seed SEED]
[--resolution RESOLUTION] [--center_crop]
[--train_text_encoder]
[--train_batch_size TRAIN_BATCH_SIZE]
[--sample_batch_size SAMPLE_BATCH_SIZE]
[--num_train_epochs NUM_TRAIN_EPOCHS]
[--max_train_steps MAX_TRAIN_STEPS]
[--gradient_accumulation_steps GRADIENT_ACCUMULATION_STEPS]
[--gradient_checkpointing]
[--learning_rate LEARNING_RATE] [--scale_lr]
[--lr_scheduler LR_SCHEDULER]
[--lr_warmup_steps LR_WARMUP_STEPS]
[--use_8bit_adam] [--adam_beta1 ADAM_BETA1]
[--adam_beta2 ADAM_BETA2]
[--adam_weight_decay ADAM_WEIGHT_DECAY]
[--adam_epsilon ADAM_EPSILON]
[--max_grad_norm MAX_GRAD_NORM] [--push_to_hub]
[--hub_token HUB_TOKEN]
[--hub_model_id HUB_MODEL_ID]
[--logging_dir LOGGING_DIR]
[--mixed_precision {no,fp16,bf16}]
[--save_n_steps SAVE_N_STEPS]
[--save_starting_step SAVE_STARTING_STEP]
[--stop_text_encoder_training STOP_TEXT_ENCODER_TRAINING]
[--image_captions_filename]
[--dump_only_text_encoder] [--train_only_unet]
[--Session_dir SESSION_DIR]
[--local_rank LOCAL_RANK]
train_dreambooth.py: error: argument --save_starting_step: invalid int value: ''
Traceback (most recent call last):
File "/usr/local/bin/accelerate", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/accelerate_cli.py", line 43, in main
args.func(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 837, in launch_command
simple_launcher(args)
File "/usr/local/lib/python3.7/dist-packages/accelerate/commands/launch.py", line 354, in simple_launcher
raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '/content/diffusers/examples/dreambooth/train_dreambooth.py', '--save_starting_step=', '--save_n_steps=', '--Session_dir=', '--pretrained_model_name_or_path=', '--instance_data_dir=', '--class_data_dir=', '--output_dir=', '--with_prior_preservation', '--prior_loss_weight=1.0', '--instance_prompt=', '--class_prompt=', '--seed=', '--resolution=512', '--mixed_precision=', '--train_batch_size=1', '--gradient_accumulation_steps=1', '--gradient_checkpointing', '--use_8bit_adam', '--learning_rate=2e-6', '--lr_scheduler=constant', '--lr_warmup_steps=0', '--center_crop', '--max_train_steps=', '--num_class_images=']' returned non-zero exit status 2.
Something went wrong
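For what it's worth, the `invalid int value: ''` comes from the empty-string values (every `--flag=` in the launched command has no value). A minimal, self-contained reproduction of the same argparse behaviour, separate from the notebook's code:

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--save_starting_step", type=int)

# Passing "--save_starting_step=" hands argparse an empty string, which it
# cannot convert to int, so it exits with: invalid int value: ''
parser.parse_args(["--save_starting_step="])
```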
are you using the latest colab?
I have the exact same bug... just tried it 5 minutes ago.
confirmed broken, many undefined vars
I'll move the old method to a new colab
> confirmed broken, many undefined vars
which vars exactly?
Every variable passed to train_dreambooth.py, regardless of prior preservation.
Only one var actually blocked it, but I can't find which one.
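A quick check, before launching, of which flags are being passed empty values (list taken from the failing command in the log above); the first int-typed one, --save_starting_step, is what argparse trips on:

```python
# Arguments copied from the failing command in the error log above.
cmd = [
    "--save_starting_step=", "--save_n_steps=", "--Session_dir=",
    "--pretrained_model_name_or_path=", "--instance_data_dir=",
    "--class_data_dir=", "--output_dir=", "--instance_prompt=",
    "--class_prompt=", "--seed=", "--mixed_precision=",
    "--max_train_steps=", "--num_class_images=",
]

# Any flag ending with "=" was passed an empty string.
empty_flags = [arg.rstrip("=") for arg in cmd if arg.endswith("=")]
print("Flags with empty values:", empty_flags)
```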
Now trying with an older rev, 4155e9ccd7f322f70be5314f68222ca6b3f65343; I'll report back in a few minutes whether it works.
Confirmed 4155e9ccd7f322f70be5314f68222ca6b3f65343 as working, so it broke somewhere between that revision and the most recent commits. Anyway, splitting the "old method" into a separate notebook makes total sense.
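For anyone else stuck until the split lands, one possible workaround is pinning the checkout to that known-good revision. This sketch assumes the hash belongs to the repository cloned at /content/diffusers (an assumption taken from the traceback path; adjust if it belongs to the notebook repo instead):

```python
import subprocess

# Hypothetical workaround: check out the last revision reported as working above.
subprocess.run(
    ["git", "-C", "/content/diffusers", "checkout",
     "4155e9ccd7f322f70be5314f68222ca6b3f65343"],
    check=True,
)
```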
where can I find a notebook that works?
If you want to use the old method: https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/Dreambooth/fast_DreamBooth-Old-Method.ipynb
New method (the better one): https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb
Ok, so on the NEW METHOD page there is no CLASS images upload section...? Just asking... I have no problem using the OLD method, since I had GREAT results using PRIOR preservation and Captionned_instance_images. So there is no need to use CLASS pictures with the NEW method?
If you set "contains_faces" to male, female or both, it will use a method of prior preservation applied only to the text encoder, which yields better results in my experience. But you need to rename your input images correctly, as the example photo shows.
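For context, standard DreamBooth prior preservation mixes the loss on your own images with a weighted loss on generated class images (the --prior_loss_weight flag in the log above). A rough PyTorch sketch of that general idea, not the notebook's exact text-encoder-only variant:

```python
import torch.nn.functional as F

def dreambooth_loss(instance_pred, instance_target,
                    class_pred, class_target,
                    prior_loss_weight=1.0):
    # Loss on the subject's own (instance) images.
    instance_loss = F.mse_loss(instance_pred.float(), instance_target.float())
    # Loss on generated class images, which keeps the model's prior for the class intact.
    prior_loss = F.mse_loss(class_pred.float(), class_target.float())
    return instance_loss + prior_loss_weight * prior_loss
```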
Ok, I tried the NEW method and the OLDER one, both with everything as instructed. The results on the NEW one are a MORPHED version of me and my friend... On the older colab I did the SAME training and the results were completely accurate. The MORPHED version needs some PROMPT HELP to get to my actual face, and even then it's not as accurate as the older method. In BOTH I used prior preservation and captioned instances. The morphed version looks NICE but it's not always me; on the old version it's ALWAYS ME. So please never remove the old method page, I appreciate it, and many others will!
What did you name the instance pictures in the new method?
One of them is 4ZEEEddie_man_08.jpg. They work flawlessly on the OLD method and the NEW, but on the NEW it does NOT look like me; it has characteristics of me, like glasses and beard, but it's NOT me... same for my friend... 71ZEEEstefania_woman_04.jpg works great.
You didn't follow the most important rule of the new method: never use a known name or class. You used "eddie" and "man", so you turned the new method into a bad version of the old method.
I wrote it in a giant font so that no one would miss it:
The most important step is to rename the instance pictures to the same unique instance identifier for each subject.
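As a purely hypothetical illustration of that rule (the identifier and folder below are made up; match the exact filename pattern shown in the notebook's example photo), every picture of one subject gets renamed to the same rare identifier that is not a real name or class word:

```python
from pathlib import Path

instance_dir = Path("/content/instance_images")  # assumed upload folder
identifier = "zxqsubject"  # made-up rare token, not a real name or class word

# Rename every picture of this subject to "identifier (n).jpg".
for n, img in enumerate(sorted(instance_dir.glob("*.jpg")), start=1):
    img.rename(instance_dir / f"{identifier} ({n}).jpg")
```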
Those instructions were for the old method; the new method's instructions are written in the new method cell.
Ok, don't worry about that, it seems I'm mixing METHODS, sorry, I'm fine. I now have a question about Enable_text_encoder_training: is it necessary for CAPTIONED INSTANCES, or does it have nothing to do with that? What is it for?
Keep it between 10-20% if you want an easy style transfer; if you want quick results at lower steps, push it to 100%.
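In other words, the percentage decides how far into training the text encoder keeps learning. A hypothetical mapping to the --stop_text_encoder_training step count from the usage message above (the numbers are examples only, not defaults from the notebook):

```python
max_train_steps = 1500          # example value
text_encoder_percentage = 20    # 10-20% for style transfer, 100% for quick results

# Hypothetical: the text encoder stops training after this many steps.
stop_text_encoder_training = max_train_steps * text_encoder_percentage // 100
print(stop_text_encoder_training)  # -> 300
```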
you skipped the dependencies cell
the second cell is the dependencies cell
Ok, it got fixed somehow, but I am running it now with Enable_text_encoder_training activated at 500 steps...
@LIQUIDMIND111 did you end up getting better results after doing the new method correctly?
Not really, I get fat and ugly and NOT my actual face... On the OLD method I get perfect results. But I noticed that my instance names have NUMBERS, and the instance names shown at training time are missing THOSE numbers. What I believe is that you CANNOT mix numbers with letters in INSTANCE NAMES. Not sure why, but I got BETTER results when NOT using numbers in the instance names, like VREWGVEG(1).jpg compared to 345GFGFD43(1).jpg.
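One plausible explanation for the missing numbers is a caption-from-filename step that strips digits and indices. A hypothetical sketch of such a routine (not necessarily what train_dreambooth.py actually does) that would reproduce what you describe:

```python
import re
from pathlib import Path

def caption_from_filename(path: str) -> str:
    # Hypothetical: drop the extension, digits, parentheses and underscores,
    # keeping only the letters as the caption.
    stem = Path(path).stem
    return re.sub(r"[\d()_]+", " ", stem).strip()

print(caption_from_filename("VREWGVEG(1).jpg"))    # -> VREWGVEG
print(caption_from_filename("345GFGFD43(1).jpg"))  # -> GFGFD (numbers gone)
```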
Is the old colab gone? =( Any idea where I can find it?
Try with the new one, if you're not getting good results, I'll walk you through it.
That's really sad you removed the old one, since we were getting BETTER results than with the NEW one... ouch, that hurts.
> @LIQUIDMIND111 did you end up getting better results after doing the new method correctly?
NOT BETTER... the old method looked better, but the new one ONLY worked SOMEWHAT if I used the text encoder at 100%.
@TheLastBen I can't seem to get results as good as with the old version.
- Without class images that I can upload myself, pets or art styles don't seem to work as well. Previously, when I put in more class images, it greatly improved the results too. (Actually, I'm not sure what to enter in the faces option for pets or art styles.)
- I can't really understand how the text encoder works or how much to put in, but it seems to really affect how much I can stylize and how good the results look. =(