brian6091

52 comments by brian6091

@JohnnyRacer So the class prompt should represent the class you don't want your model to forget. For example, if you are training for a specific person, instance_prompt = "raretoken person"...
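Roughly, the two prompts end up looking like this (just a sketch; the exact wording and the rare token are placeholders you'd pick yourself):

```python
# Hypothetical prompt setup for DreamBooth-style training with prior
# preservation. "raretoken" stands in for whatever rare token you pick,
# and "person" is the class whose general look you want the model to keep.
instance_prompt = "a photo of raretoken person"  # describes your specific subject
class_prompt = "a photo of a person"             # describes the class to preserve
```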

Ah sorry, I forgot about that. I don't think gradient checkpointing is working yet (if you want to train the text encoder).
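For reference, turning checkpointing on for both models is just the following (a sketch; the base model id is an assumption, and this is the combination that seems not to work yet when the text encoder is trained):

```python
from diffusers import UNet2DConditionModel
from transformers import CLIPTextModel

model_id = "runwayml/stable-diffusion-v1-5"  # assumed base model
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")

# diffusers models expose enable_gradient_checkpointing();
# transformers models expose gradient_checkpointing_enable().
unet.enable_gradient_checkpointing()
text_encoder.gradient_checkpointing_enable()
```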

You can sample a few prompts at intermediate checkpoints, say, every 500 or 1000 iterations. You can also track the loss, although that is very noisy.
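Something along these lines, as a rough sketch (train_step and sample_prompts are hypothetical stand-ins for your own training step and sampling code):

```python
import random

def train_step(batch):
    # stand-in for the real forward/backward/optimizer step
    return random.random()

def sample_prompts(step):
    # stand-in: build an inference pipeline from the current weights and
    # generate a few images (see the pipeline sketch below)
    print(f"sampling prompts at step {step}")

sample_every = 500
losses = []
for step, batch in enumerate(range(5_000)):  # stand-in for the dataloader
    losses.append(train_step(batch))
    if step > 0 and step % sample_every == 0:
        window = losses[-sample_every:]
        print(f"step {step}: mean loss = {sum(window) / len(window):.4f}")
        sample_prompts(step)
```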

@G-force78 Not sure, depends on what you're using to run the training. But basically, around the part where you save a checkpoint, you need to: 1) construct an inference pipeline...
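Step 1 looks roughly like this (a sketch: unet, text_encoder, and accelerator come from the training script, and the base model id is an assumption):

```python
from diffusers import StableDiffusionPipeline

# Build an inference pipeline around the weights currently being trained.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    unet=accelerator.unwrap_model(unet),
    text_encoder=accelerator.unwrap_model(text_encoder),
    safety_checker=None,
)
pipe.to(accelerator.device)
```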

@G-force78 Looks right; now just run the pipe with a prompt.
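i.e. something like (prompt and filename are placeholders):

```python
# Quick sample from the pipeline built at checkpoint time.
image = pipe("a photo of raretoken person", num_inference_steps=30).images[0]
image.save("sample.png")
```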

@pedrogengo @cloneofsimo The script train_lora_dreambooth.py seems to be missing the call that lets accelerate manage the gradient-accumulation context (https://huggingface.co/docs/accelerate/v0.13.2/en/package_reference/accelerator#accelerate.Accelerator.accumulate), so I'm not sure passing the parameter will do anything.
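The missing piece would look roughly like this (a sketch; all names stand in for the script's own objects):

```python
# Wrapping each training step in accelerator.accumulate(...) is what makes
# gradient_accumulation_steps actually take effect.
for batch in train_dataloader:
    with accelerator.accumulate(unet):
        loss = compute_loss(batch)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```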

Even if you set return_grad=None or filter the parameters?
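i.e. something along these lines (a sketch; I'm reading this as freezing via requires_grad plus keeping frozen parameters out of the optimizer, and the Linear modules are just stand-ins):

```python
import torch

# Stand-ins for the real models
unet = torch.nn.Linear(8, 8)
text_encoder = torch.nn.Linear(8, 8)

# Freeze the text encoder so it accumulates no gradients...
for p in text_encoder.parameters():
    p.requires_grad_(False)

# ...and filter the frozen parameters out of what the optimizer sees.
params_to_optimize = [
    p
    for p in list(unet.parameters()) + list(text_encoder.parameters())
    if p.requires_grad
]
optimizer = torch.optim.AdamW(params_to_optimize, lr=1e-4)
```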

Just gonna drop a link to more training/tuning discussion here: https://github.com/cloneofsimo/lora/discussions/37

@scaraffe You can try this colab notebook, which allows using captions with @cloneofsimo's lora training: https://github.com/brian6091/Dreambooth

You can already do this: disabling train_text_encoder will train only the unet, while enabling it will train both. Or perhaps you mean you want to train the text_encoder by itself?
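Schematically, the flag just decides which parameters get handed to the optimizer (a sketch with stand-in modules; the real script works on the injected LoRA weights rather than the full models, but the gating is the same idea):

```python
import itertools
import torch

# Stand-ins for the real models
unet = torch.nn.Linear(8, 8)
text_encoder = torch.nn.Linear(8, 8)

train_text_encoder = True  # the flag in question

# Flag off: only the unet's parameters are optimized. Flag on: both models
# are trained. Training the text encoder alone would just mean passing
# text_encoder.parameters() here by itself.
if train_text_encoder:
    params = itertools.chain(unet.parameters(), text_encoder.parameters())
else:
    params = unet.parameters()

optimizer = torch.optim.AdamW(params, lr=1e-4)
```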