Naga Sai Abhinay
Image interpolation is looking good — I'm getting results in line with DALL-E 2. Notebook: https://colab.research.google.com/drive/1eN-oy3N6amFT48hhxvv02Ad5798FDvHd?usp=sharing (results and input images attached). Will open a PR tomorrow.
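For anyone curious how the interpolation itself works: the core idea is to blend the two CLIP image embeddings with spherical linear interpolation (slerp) rather than a plain lerp, so intermediate embeddings stay on the same hypersphere the model was trained on. A minimal sketch (function name and the parallel-vector fallback threshold are my own choices, not from the notebook):

```python
import numpy as np

def slerp(v0, v1, t, dot_threshold=0.9995):
    """Spherical linear interpolation between two embedding vectors.

    t = 0 returns v0, t = 1 returns v1; intermediate t values follow the
    great-circle arc between the (normalized) directions of v0 and v1.
    """
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    dot = float(np.dot(v0_n, v1_n))
    if abs(dot) > dot_threshold:
        # Nearly parallel embeddings: fall back to linear interpolation.
        return (1 - t) * v0 + t * v1
    theta = np.arccos(dot)          # angle between the two embeddings
    sin_theta = np.sin(theta)
    return (np.sin((1 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1

# Interpolation weights for a short sequence of in-between frames.
emb_a, emb_b = np.random.randn(768), np.random.randn(768)
frames = [slerp(emb_a, emb_b, t) for t in np.linspace(0.0, 1.0, 5)]
```

Each interpolated embedding would then be fed through the unCLIP decoder to produce one frame of the transition.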
Opened the PR for UnCLIPImageInterpolation: https://github.com/huggingface/diffusers/pull/2400 @williamberman @patrickvonplaten
While #2400 is under review, I wanted to share the basic outline for the UnCLIP text diff flow: 1. Take the original image `x0` and generate the inverted noise `xT`...
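Step 1 above (recovering `xT` from `x0`) is typically done with a deterministic DDIM-style inversion: run the sampler's update in reverse, using the model's noise prediction at each step. A minimal sketch of one inversion step, assuming the standard DDIM update — the linear alpha-bar schedule and the zero "noise predictor" here are placeholders for the real scheduler and UNet:

```python
import numpy as np

def ddim_invert_step(x_t, eps, alpha_t, alpha_next):
    """One deterministic DDIM inversion step: move x_t one step *toward* noise.

    eps is the model's noise prediction at timestep t; alpha_t / alpha_next
    are the cumulative noise-schedule products (alpha-bar) at t and t+1.
    """
    pred_x0 = (x_t - np.sqrt(1 - alpha_t) * eps) / np.sqrt(alpha_t)
    return np.sqrt(alpha_next) * pred_x0 + np.sqrt(1 - alpha_next) * eps

# Toy run: a made-up alpha-bar schedule and a dummy (zero) noise predictor.
alphas = np.linspace(0.999, 0.01, 50)   # alpha-bar from t=0 (clean) to t=T (noisy)
x = np.random.randn(4)                  # stand-in for the image latent x0
for t in range(len(alphas) - 1):
    eps = np.zeros_like(x)              # real code would call the UNet here
    x = ddim_invert_step(x, eps, alphas[t], alphas[t + 1])
x_T = x                                 # inverted noise corresponding to x0
```

Because the update is deterministic, running the ordinary DDIM sampler forward from `x_T` should (approximately) reconstruct `x0`, which is what makes the inverted noise useful as a starting point for edits.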
UnCLIP Image Interpolation demo space is up and running at https://huggingface.co/spaces/NagaSaiAbhinay/UnCLIP_Image_Interpolation_Demo. Do check it out!
Thanks @patrickvonplaten, @osanseviero !
Can you share the contents of your ```.cache/huggingface/accelerate/default_config.yaml``` file? It'll help in understanding whether accelerate is able to find both your GPUs. The path to the file should be...
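For reference, a multi-GPU `default_config.yaml` (as produced by `accelerate config`) typically looks something like the sketch below — the exact fields can vary between accelerate versions, so treat this as an illustration, not the canonical file:

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU   # should say MULTI_GPU if both cards were detected
machine_rank: 0
num_machines: 1
num_processes: 2              # one process per GPU
mixed_precision: 'no'
```

If `distributed_type` is something else or `num_processes` is 1, accelerate is likely only seeing one GPU.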
@sayakpaul I’d like to work on this.
@sayakpaul what is the expected outcome? My understanding is: 1. We make the `TuneAVideoPipeline` and its dependency, `UNet3DConditionModel`, available via diffusers. 2. We provide some trained TuneAVideoPipeline compatible checkpoints...
Ohh Right. Well, I'll start and open a draft PR.
@jorgemcgomes thanks for the input. Will keep this in mind — I'm sure we'll need these details down the line.