Pedro Cuenca
I agree with @patil-suraj. We could maybe clarify the behaviour in the docstrings?
Thanks @anton-l, I'll take a look :)
Hi @djdookie! Thanks for raising this question, that was actually a bad explanation I wrote. It is indeed possible to convert a checkpoint to an inference pipeline using something like...
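The truncated step above presumably refers to the conversion script that ships with the diffusers repository. A minimal sketch, assuming placeholder local paths and that the script's flags match the installed version (defer to the script's `--help` output for the authoritative options):

```shell
# Sketch only: both paths are hypothetical placeholders.
# The script lives under scripts/ in the diffusers repository,
# and its flags may differ between versions.
python scripts/convert_original_stable_diffusion_to_diffusers.py \
  --checkpoint_path ./my_finetuned_checkpoint.ckpt \
  --dump_path ./my_pipeline_dir
```

The resulting directory can then be loaded locally, e.g. with `StableDiffusionPipeline.from_pretrained("./my_pipeline_dir")`.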
Hi @fakerybakery! Unfortunately we don't currently support fine-tuning on the `mps` device, as there are some limitations in PyTorch that make it very challenging to get working. See, for example, [this...
Hi @rahilparsana, please make sure you follow the JAX installation instructions for CUDA: https://github.com/google/jax#pip-installation-gpu-cuda
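For reference, the CUDA install currently looks roughly like the following; the exact extra name depends on the local CUDA toolkit version, so treat this as a sketch and defer to the linked instructions:

```shell
# Install JAX with CUDA support via pip wheels.
# The extra ("cuda12" here) must match your installed CUDA toolkit;
# see the linked JAX README for the authoritative command.
pip install --upgrade "jax[cuda12]"
```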
> are we missing a `train_dreambooth_inpaint.py` in the folder?

You are right, it's in `examples/research_projects/dreambooth_inpaint/`. I'll remove the whole section and link to the other directory. Thanks a lot!
@yiyixuxu can you take a final look? :)
I think it's an interesting use case indeed! What would the solution entail: uploading the model files to the Hub and then having `from_pretrained` load them? Sounds good to me!
I agree with @Lime-Cakes, it's usually more efficient to use Flax on TPUs rather than PyTorch XLA. @ssusie do you think there are use cases that cannot be supported with...
I have no experience training with DEIS, but the PR looks fine to me. The change is isolated to this use case and shouldn't impact anything, unless the user manually...