Dreambooth-Stable-Diffusion
local use setup guide?
I'm confused. I was under the impression we could use this with a local GPU but I didn't see a guide for setting things up locally.
What am I missing? Sorry if this is dumb, like Joe said, this is me getting lost in a jungle without a torch. Please help.
I was able to get it to work by reading through one of the notebook files, e.g., https://github.com/JoePenna/Dreambooth-Stable-Diffusion/blob/main/dreambooth_colab_joepenna.ipynb and recreating locally what it was doing. This is relatively straightforward if you're familiar with the Linux command line and a tiny bit of Python... much less so if you aren't :\
Here's an untested attempt to distil out the important bits if it helps:
cd path/to/Dreambooth-Stable-Diffusion
python3 -m venv env
source ./env/bin/activate
pip install omegaconf
pip install einops
pip install pytorch-lightning==1.6.5
pip install test-tube
pip install transformers
pip install kornia
pip install -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
pip install setuptools==59.5.0
pip install pillow==9.0.1
pip install torchmetrics==0.6.0
pip install -e .
pip install protobuf==3.20.1
pip install gdown
pip install pydrive
pip install -qq diffusers["training"]==0.3.0 transformers ftfy
pip install -qq "ipywidgets>=7,<8"
pip install huggingface_hub
pip install ipywidgets==7.7.1
pip install captionizer==1.0.1
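Not part of the notebook, but once everything installs cleanly it can be worth freezing the environment so the setup is reproducible later (requirements-local.txt is just a name I made up):
pip freeze > requirements-local.txt
# restore it elsewhere with: pip install -r requirements-local.txt
Then the training run itself: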
python "main.py" \
--base configs/stable-diffusion/v1-finetune_unfrozen.yaml \
-t \
--actual_resume path/to/sd-v1-4.ckpt \
--reg_data_root path/to/regularization-images \
-n someProjectNameJustMakeSomethingUp \
--gpus 0, \
--data_root path/to/training-images \
--max_training_steps 2000 \
--class_word yourClassWord \
--token yourNewToken \
--no-test
and the output checkpoints end up under logs/.
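If it helps, a quick way to grab the newest checkpoint afterwards (this assumes the usual logs/&lt;date&gt;_&lt;project-name&gt;/checkpoints/ layout; adjust the glob if your run is laid out differently):
ls -t logs/*/checkpoints/*.ckpt | head -n 1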
@jwatzman sorry man, I'm in the second group you mentioned. No clue how to do any of this. Guess I'll have to play the waiting game until wiser minds figure this out.
A workaround for this is running the Jupyter notebook locally. Open a terminal from the Dreambooth folder, install Jupyter with conda install jupyter, and start a notebook server with jupyter notebook. From there you can run the whole notebook. Hope this helps.
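Roughly, from the Dreambooth-Stable-Diffusion folder:
conda install jupyter
jupyter notebook
# then open dreambooth_colab_joepenna.ipynb in the browser tab that appears and run the cells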
I've tried running it with a local jupyter, but couldn't solve the "Trainer not defined..." error. Any ideas?
okay, figured it out and started training....
https://github.com/JoePenna/Dreambooth-Stable-Diffusion/issues/96#issuecomment-1296292055
pip install torch==1.12.1+cu116 -f https://download.pytorch.org/whl/torch_stable.html
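Worth a quick sanity check that the CUDA build actually took before kicking off training:
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"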
You also need to set up the regularization images:
git clone https://github.com/djbielejeski/Stable-Diffusion-Regularization-Images-person_ddim
mkdir -p regularization_images/person_ddim
mv -v Stable-Diffusion-Regularization-Images-person_ddim/person_ddim/*.* regularization_images/person_ddim
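A quick check that the images actually landed where the trainer will look:
ls regularization_images/person_ddim | wc -l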
then run this bad boy:
CUDA_VISIBLE_DEVICES=0 python "main.py" --base configs/stable-diffusion/v1-finetune_unfrozen.yaml -t --actual_resume sd-models/model.ckpt -n ChrisBWardProject --gpus 0, --reg_data_root ./regularization_images/person_ddim --data_root ./training-images/resized --max_training_steps 2000 --class_word person --token chrisbward --no-test
I've tried running it with a local jupyter, but couldn't solve the "Trainer not defined..." error. Any ideas?
This was my issue with Jupyter locally, tried everything.
Any chance that this could run on Windows locally?
I have tried Dreambooth-SD-optimized on Windows locally and couldn't get good results. Joe Penna's Dreambooth produces the best results I have seen.