DreamBooth: Not applying accelerator.accumulate on text_encoder
Describe the bug
At examples/dreambooth/train_dreambooth_lora_sdxl.py#L1618, only unet is passed to accelerator.accumulate():
for step, batch in enumerate(train_dataloader):
    with accelerator.accumulate(unet):  # HERE
        pixel_values = batch["pixel_values"].to(dtype=vae.dtype)
        prompts = batch["prompts"]
However, when using --train_text_encoder, the text encoders text_encoder_one and text_encoder_two are also being trained. Shouldn't they also be passed to accelerator.accumulate()?
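For illustration, a rough sketch of what the loop could look like with the text encoders included. The models_to_accumulate list is a hypothetical helper, not code from the script, and this assumes an accelerate version whose accumulate() accepts multiple models:

for step, batch in enumerate(train_dataloader):
    # Hypothetical: collect every model that is actually being trained.
    models_to_accumulate = [unet]
    if args.train_text_encoder:
        models_to_accumulate += [text_encoder_one, text_encoder_two]
    with accelerator.accumulate(*models_to_accumulate):
        pixel_values = batch["pixel_values"].to(dtype=vae.dtype)
        prompts = batch["prompts"]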
Reproduction
None. This is a logical issue rather than a behavioral one, so there is nothing to reproduce.
Logs
No response
System Info
None. This is a logical issue rather than a behavioral one, so the environment is not relevant.
Who can help?
@sayakpaul
I remember we did this because accelerate wasn't able to accumulate for multiple models. But this has changed now. Cc'ing @muellerzr for confirmation.
Yes, all models being trained should be passed to accumulate.
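For context, a minimal generic sketch of the multi-model usage, assuming an accelerate version that supports passing several models to accumulate(); unet, text_encoder, and compute_loss are placeholders here, not names from the script:

from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)
unet, text_encoder, optimizer, dataloader = accelerator.prepare(
    unet, text_encoder, optimizer, dataloader
)

for batch in dataloader:
    # Passing both trained models lets accelerate handle gradient sync for each.
    with accelerator.accumulate(unet, text_encoder):
        loss = compute_loss(unet, text_encoder, batch)  # placeholder loss fn
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()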
@immortalCO would you be able to submit a PR for this?
This issue has been automatically marked as stale because it has not had recent activity. If you think this still needs to be addressed please comment on this thread.
Please note that issues that do not follow the contributing guidelines are likely to be ignored.
Closing this issue because of inactivity. Feel free to reopen.