Naga Sai Abhinay


Image interpolation is looking good. I'm getting results in line with DALL-E 2.
Notebook: https://colab.research.google.com/drive/1eN-oy3N6amFT48hhxvv02Ad5798FDvHd?usp=sharing
Results:
![starry_to_dog](https://user-images.githubusercontent.com/24771261/219422627-637a9d05-4d78-4863-885d-4ffddfeed109.png)
Inputs:
![starry_night](https://user-images.githubusercontent.com/24771261/219424665-d950e304-f6d6-431c-8510-0bc0aeb7dbfc.jpg) ![dogs](https://user-images.githubusercontent.com/24771261/219424577-8d2f0450-6aa5-426c-824f-fed3cad13c1e.jpg)
Will open a PR tomorrow.
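Interpolating between two CLIP image embeddings is usually done with spherical linear interpolation (slerp) rather than a straight lerp, so the interpolated embeddings stay on the hypersphere the model was trained on. A minimal NumPy sketch (the function name, embedding size, and usage are illustrative, not the pipeline's actual API):

```python
import numpy as np

def slerp(v0, v1, t, eps=1e-7):
    """Spherical linear interpolation between two embedding vectors.

    t=0 returns v0, t=1 returns v1; intermediate t values follow the
    great-circle arc between the (normalized) directions of v0 and v1.
    """
    v0_n = v0 / np.linalg.norm(v0)
    v1_n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0_n, v1_n), -1.0, 1.0)
    theta = np.arccos(dot)            # angle between the two embeddings
    if theta < eps:                   # nearly parallel: fall back to lerp
        return (1 - t) * v0 + t * v1
    sin_theta = np.sin(theta)
    return (np.sin((1 - t) * theta) / sin_theta) * v0 \
         + (np.sin(t * theta) / sin_theta) * v1

# Interpolate between two stand-in image embeddings at 5 steps
emb_a = np.random.randn(768)
emb_b = np.random.randn(768)
frames = [slerp(emb_a, emb_b, t) for t in np.linspace(0.0, 1.0, 5)]
```

Each interpolated embedding would then be fed to the decoder to produce one frame of the interpolation sequence.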

Opened the PR for UnCLIPImageInterpolation: https://github.com/huggingface/diffusers/pull/2400 @williamberman @patrickvonplaten

While #2400 is under review, I wanted to share the basic outline for the UnCLIP text-diff flow:
1. Take the original image `x0` and generate the inverted noise `xT`...
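Step 1 above (recovering the inverted noise `xT` from `x0`) is typically done with DDIM inversion: the deterministic DDIM update run in the noising direction. A toy NumPy sketch of a single inversion step, where the predicted noise and the schedule values are stand-ins rather than the actual UnCLIP components:

```python
import numpy as np

def ddim_inversion_step(x_t, eps, alpha_bar_t, alpha_bar_next):
    """One deterministic DDIM step run 'backwards' (t -> t+1).

    x_t:            current image/latent tensor
    eps:            noise predicted by the model at step t
    alpha_bar_t:    cumulative noise-schedule product at step t
    alpha_bar_next: cumulative product at the next (noisier) step
    """
    # Clean image implied by the model's noise estimate at this step
    pred_x0 = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps) / np.sqrt(alpha_bar_t)
    # Re-noise that estimate toward the next timestep
    return np.sqrt(alpha_bar_next) * pred_x0 + np.sqrt(1.0 - alpha_bar_next) * eps
```

Iterating this step from `t = 0` up to the final timestep yields the `xT` that, when denoised with the same deterministic schedule, approximately reconstructs `x0`.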

The UnCLIP Image Interpolation demo Space is up and running at https://huggingface.co/spaces/NagaSaiAbhinay/UnCLIP_Image_Interpolation_Demo. Do check it out!

Thanks @patrickvonplaten, @osanseviero !

Can you share the contents of your `.cache/huggingface/accelerate/default_config.yaml` file? It'll help in understanding whether accelerate is able to find both your GPUs. The path to the file should be...
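For reference, a `default_config.yaml` for a two-GPU single-machine setup usually looks roughly like the following (values are illustrative; the exact fields depend on the answers given to `accelerate config`):

```yaml
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
gpu_ids: all
machine_rank: 0
mixed_precision: 'no'
num_machines: 1
num_processes: 2   # should match the number of GPUs
use_cpu: false
```

If `distributed_type` shows something other than `MULTI_GPU`, or `num_processes` is 1, accelerate was configured without the second GPU.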

@sayakpaul what is the expected outcome? My understanding is:
1. We make the `TuneAVideoPipeline` and its dependency, `UNet3DConditionModel`, available via diffusers.
2. We provide some trained `TuneAVideoPipeline`-compatible checkpoints...

Ohh, right. Well, I'll start and open a draft PR.

@jorgemcgomes thanks for the inputs. Will keep this in mind. I'm sure we'll need these details down the line.