VQ-Diffusion

patrickvonplaten opened this issue 1 year ago • 17 comments

Model/Pipeline/Scheduler description

VQ-Diffusion is based on a VQ-VAE whose latent space is modeled by a conditional variant of the recently developed Denoising Diffusion Probabilistic Model (DDPM). It produces significantly better text-to-image generation results when compared with Autoregressive models with similar numbers of parameters. Compared with previous GAN-based methods, VQ-Diffusion can handle more complex scenes and improve the synthesized image quality by a large margin.

https://github.com/microsoft/VQ-Diffusion
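
If the port goes through, the end-user API would presumably mirror the other pipelines in the library. A rough sketch of what that could look like — the `VQDiffusionPipeline` class name and the `microsoft/vq-diffusion-ithq` checkpoint id are assumptions about how the port might be exposed, not existing diffusers APIs:

```python
from diffusers import VQDiffusionPipeline  # assumed class name

# Assumed checkpoint id for the ITHQ model; treat this purely as a sketch
# of the target user experience once the model is ported.
pipe = VQDiffusionPipeline.from_pretrained("microsoft/vq-diffusion-ithq")
pipe = pipe.to("cuda")

image = pipe("a teddy bear playing in the pool", num_inference_steps=100).images[0]
image.save("teddy_bear.png")
```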

Open source status

  • [X] The model implementation is available
  • [X] The model weights are available (Only relevant if addition is not a scheduler).

Provide useful links for the implementation

VQ-Diffusion would be a super cool addition to diffusers. cc @cientgu and @zzctan.

Also cc @patil-suraj here

patrickvonplaten avatar Sep 01 '22 12:09 patrickvonplaten

Hi @patrickvonplaten, would love to take this up!

unography avatar Sep 06 '22 18:09 unography

This would be great! Let me know if you need any help :-) To begin with, I think we should try to get it running with the original codebase and then port the code to diffusers.

patrickvonplaten avatar Sep 09 '22 15:09 patrickvonplaten

Hey @unography awesome! Happy to help here if you have any questions.

patil-suraj avatar Sep 09 '22 15:09 patil-suraj

Any progress here @unography? Do you already have an open PR? :-) Otherwise let's maybe open it up again to the community.

patrickvonplaten avatar Sep 16 '22 13:09 patrickvonplaten

Hi, I will be happy to contribute / collaborate on this :)

345ishaan avatar Sep 19 '22 06:09 345ishaan

Hi @patrickvonplaten, unfortunately I've been unable to spend time on this right now due to some other commitments. We can open this up again to the community.

unography avatar Sep 19 '22 14:09 unography

No worries! @345ishaan would you be interested in giving it a go?

patrickvonplaten avatar Sep 22 '22 15:09 patrickvonplaten

@patrickvonplaten Yes, happy to start with this. Do you have any documentation / suggestions / reference CLs on how to quickstart?

345ishaan avatar Sep 23 '22 04:09 345ishaan

Update: I've been getting familiar with the paper and the authors' code. I also checked how other models are integrated into the diffusers pipeline for inference-only mode, so the plan is to do the same for VQ-Diffusion as the next step, using the original code implementation as a reference.
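
For anyone curious what that inference-only integration typically looks like, here's a minimal skeleton of a diffusers pipeline. The component names are placeholders, not necessarily how the final VQ-Diffusion port will be organized:

```python
import torch
from diffusers import DiffusionPipeline


class VQDiffusionLikePipeline(DiffusionPipeline):
    """Skeleton of an inference-only pipeline; component names are placeholders."""

    def __init__(self, vqvae, text_encoder, tokenizer, transformer, scheduler):
        super().__init__()
        # register_modules makes the components saveable/loadable via
        # save_pretrained / from_pretrained and movable with pipe.to(device).
        self.register_modules(
            vqvae=vqvae,
            text_encoder=text_encoder,
            tokenizer=tokenizer,
            transformer=transformer,
            scheduler=scheduler,
        )

    @torch.no_grad()
    def __call__(self, prompt: str, num_inference_steps: int = 100):
        # 1. tokenize and encode the prompt
        # 2. run the reverse discrete diffusion over the VQ-VAE code indices
        # 3. decode the final code indices to an image with the VQ-VAE decoder
        raise NotImplementedError("sketch only")
```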

345ishaan avatar Sep 26 '22 06:09 345ishaan

That's awesome, @345ishaan! Let us know if you need any help :)

pcuenca avatar Sep 27 '22 10:09 pcuenca

Hello, super sorry, I wasn't aware someone was already working on this! I ported the VQVAE for the ITHQ dataset. Would love to help contribute if possible :)

I put up a draft PR https://github.com/huggingface/diffusers/pull/658 for the VQVAE, with docs on how to compare it against VQ-Diffusion. Is the standard practice to wait until the whole pipeline is complete before merging anything, or is it ok to merge functionality incrementally? For example, for VQ-Diffusion it might be easier to get the individual autoencoders working one at a time in their own commits before moving on to the rest of the model.

Any advice is appreciated, thanks!
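
As a rough illustration of what that comparison boils down to, a numeric check against outputs dumped from the original codebase could look something like the sketch below; the paths and tolerance are placeholders, not something taken from the PR:

```python
import torch
from diffusers import VQModel

# Placeholder paths: a folder with the ported weights, a test input batch, and
# a reconstruction dumped from the original microsoft/VQ-Diffusion codebase
# on that same input.
vqvae = VQModel.from_pretrained("path/to/ported-ithq-vqvae").eval()

x = torch.load("test_input.pt")                      # image batch, shape (N, 3, H, W)
reference = torch.load("reference_reconstruction.pt")

with torch.no_grad():
    ported = vqvae(x).sample                         # encode -> quantize -> decode

print(torch.allclose(ported, reference, atol=1e-4))
```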

williamberman avatar Sep 27 '22 19:09 williamberman

Hmm ok, if you have crossed the finish line, then go ahead! I was mostly working on adding the implementation to diffusers in inference mode. If you need any further help, happy to collaborate.

Going forward, what is the best way to avoid such overlaps? I thought it was via proposing/updating through issues.

345ishaan avatar Sep 28 '22 05:09 345ishaan

@345ishaan definitely not over the finish line, just ported the autoencoder for one of the models! Happy to collaborate :)

williamberman avatar Sep 28 '22 05:09 williamberman

SG! I will check your CL. Do you want to chat over discord?

345ishaan avatar Sep 28 '22 07:09 345ishaan

@cientgu @zzctan

Could I have some help parsing q_posterior?

https://github.com/microsoft/VQ-Diffusion/blob/3c98e77f721db7c787b76304fa2c96a36c7b00af/image_synthesis/modeling/transformers/diffusion_transformer.py#L235-L267

I believe it's computing equation 11 in log space, but I still have a few questions. I understand it's adapted from https://github.com/ehoogeboom/multinomial_diffusion/blob/9d907a60536ad793efd6d2a6067b3c3d6ba9fce7/diffusion_utils/diffusion_multinomial.py#L171-L193, which provides the initial derivation, and that part makes sense.

        # q(xt-1 | xt, x0) = q(xt | xt-1, x0) * q(xt-1 | x0) / q(xt | x0)
        # where q(xt | xt-1, x0) = q(xt | xt-1).

However, the later comment is a bit vague :)

        # Note: _NOT_ x_tmin1, which is how the formula is typically used!!!
        # Not very easy to see why this is true. But it is :)
        unnormed_logprobs = log_EV_qxtmin_x0 + self.q_pred_one_timestep(log_x_t, t)

Because it seems like the actual equation it's using is q(xt+1 | xt) * q(xt-1 | x0) / q(xt | x0).
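
For reference, setting aside the [MASK] class and the x_t vs. x_{t-1} subtlety that comment is pointing at, my mental model of the naive log-space computation is something like this (function and argument names are mine, not from either codebase):

```python
import torch

def log_posterior_naive(log_q_xt_given_xtm1, log_q_xtm1_given_x0):
    """Naive log-space form of q(x_{t-1} | x_t, x_0) for a categorical diffusion.

    Both arguments hold log-probabilities over the K possible values of x_{t-1}
    (class dimension last). Normalizing with logsumexp over that dimension plays
    the role of dividing by q(x_t | x_0), since q(x_t | x_0) is exactly the sum
    over x_{t-1} of the numerator q(x_t | x_{t-1}) * q(x_{t-1} | x_0).
    """
    unnormed = log_q_xt_given_xtm1 + log_q_xtm1_given_x0
    return unnormed - torch.logsumexp(unnormed, dim=-1, keepdim=True)
```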

Additional questions,

  1. Some context on how you're handling masks in q_posterior would be helpful
  2. What is the summation over in equation 11 and how does it map to q_posterior?
  3. I don't see an analog for the lines starting from 262 onward in multinomial diffusion; could you provide some additional context there as well?

Lmk if any of that wasn't clear, thank you!

williamberman avatar Sep 30 '22 17:09 williamberman

@williamberman I will be able to take some tasks today and tomorrow. I just checked your CL; it seems like you ported the VQ-VAE encoder there. Do you want to chat over discord to split tasks? My username is 345ishaan#9676

345ishaan avatar Oct 01 '22 19:10 345ishaan

Pinged you on Discord @345ishaan!

williamberman avatar Oct 01 '22 19:10 williamberman