Suraj Patil
We'll also need to resolve the conflict and rebase onto `main`. Let me know if you need any help.
@DtYXs checkpointing has now been added to all scripts, see for example #1668. And to answer your question: you should not assign the unwrapped `model` back to `model`; instead, we can directly pass `accelerator.unwrap_model(model)`...
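For reference, here's a minimal sketch of that pattern (the `unet` variable, base checkpoint, and output directory are assumptions for illustration, not from the thread): train with the wrapped model and only unwrap it at save time, when building the pipeline.

```python
# Minimal sketch: keep the accelerate-wrapped model for training and unwrap
# only when saving, instead of reassigning `model = accelerator.unwrap_model(model)`.
from accelerate import Accelerator
from diffusers import StableDiffusionPipeline, UNet2DConditionModel

accelerator = Accelerator()
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"  # assumed base checkpoint
)
unet = accelerator.prepare(unet)  # wrapped for (possibly distributed) training

# ... training loop using the wrapped `unet` ...

if accelerator.is_main_process:
    pipeline = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        unet=accelerator.unwrap_model(unet),  # unwrap here, at save time only
    )
    pipeline.save_pretrained("sd-finetuned")  # assumed output directory
```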
Similar issue: #1734, answered here: https://github.com/huggingface/diffusers/issues/1734#issuecomment-1366017170.
Sounds good to me!
Hey @shileims, you can configure accelerate to use mixed precision: run `accelerate config` and it'll ask you a bunch of questions; when it asks about mixed precision, choose `fp16`. Or you...
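Alternatively, a minimal sketch of enabling it in code rather than through the interactive prompt (the toy model and optimizer here are placeholders):

```python
# Minimal sketch: enable fp16 mixed precision directly when constructing the
# Accelerator instead of via `accelerate config`.
import torch
from accelerate import Accelerator

# Note: fp16 mixed precision requires a CUDA GPU; accelerate raises on CPU.
accelerator = Accelerator(mixed_precision="fp16")

model = torch.nn.Linear(8, 8)  # placeholder model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
model, optimizer = accelerator.prepare(model, optimizer)

x = torch.randn(4, 8, device=accelerator.device)
loss = model(x).pow(2).mean()
accelerator.backward(loss)  # handles gradient scaling under fp16
optimizer.step()
optimizer.zero_grad()
```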
Thank you for sharing!
Did you install the `xla` version of torch? On TPUs we need to install `torch_xla`, cf. https://github.com/pytorch/xla/. Also, we haven't tested the scripts on TPUs, so there might be some rough...
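As a quick sanity check (a sketch, assuming `torch_xla` is installed and a TPU is attached):

```python
# Verifies that torch_xla is importable and the XLA/TPU runtime is visible.
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()  # raises if the XLA runtime isn't available
t = torch.ones(2, 2, device=device)
print(device, t.sum().item())  # e.g. "xla:0 4.0"
```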
I don't have access to a TPU v4 at the moment, and supporting TPUs in the PyTorch scripts isn't a priority right now. Adding this to my todo list, though...
Hey @isamu-isozaki, we just updated some style dependencies, cf. #2279, so you'll need to update diffusers with ``` pip install --upgrade -e ".[quality]" ``` Rebase the branch and...
Awesome, feel free to start @kabachuha, we'll definitely help! Maybe for now we could keep it under `research_projects` since it's still experimental and the architecture might change in the future.