prompt-to-prompt
Support for half precision?
Are there any instructions on how to get this code working for half precision? If I'm not mistaken, diffusers==0.3.0
might be problematic for this (I think the VAE couldn't handle it), so I upgraded the diffusers version, which should fix that. I'm currently running into other errors that I'm slowly debugging. I'm a little worried that the version upgrade might be causing more problems than necessary, so if there are specific instructions on how to get this code working for half precision, that would be great to hear.
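For anyone wondering why the VAE in particular struggles with half precision: float16 tops out at a max finite value of 65504, so large intermediate activations in the VAE decoder can overflow to inf and end up clipped to black in the output image. A minimal sketch of the overflow itself (numpy only, not the actual VAE code):

```python
import numpy as np

# float16's largest finite value is 65504; anything bigger overflows.
finfo = np.finfo(np.float16)
x = np.float16(finfo.max)         # 65504.0
y = x * np.float16(2)             # overflows to inf in fp16

print(float(finfo.max))           # 65504.0
print(np.isinf(y))                # True

# The same product stays finite if upcast to float32 first, which is
# why a common workaround is to run the VAE in fp32 while keeping the
# rest of the pipeline in fp16.
z = np.float32(x) * np.float32(2)
print(np.isfinite(z))             # True
```

This is just to illustrate the failure mode; in practice the fix people usually reach for is decoding the latents with the VAE cast to float32.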
I've been working on this in Colab; it's producing black images at the moment, so if anyone could help me out, that'd be great!
https://github.com/Luke2642/prompt-to-prompt-colab/blob/main/null_text_w_ptp_colab_fp16.ipynb
You'll have to paste in your own huggingface token but other than that it's good to go.
I should probably start over by first incorporating the suggestions from https://github.com/google/prompt-to-prompt/issues/29 and the work of https://github.com/ouhenio/null-text-inversion-colab
Hi @Luke2642, actually that's where I stopped too. I was able to get the code to run without runtime errors but always ended up with black images (I also ended up using ouhenio's notebook, iirc). I didn't continue debugging because I realized that, at the rate the images were generating, my specific use case might have benefitted from a different method instead. Sorry that I can't help further with this.
@ryan-caesar-ramos thanks! So you got exactly the same, original and vqvae image fine, but reconstruction and attention maps come out black too?
I've posted in a few places so hopefully someone with more experience can help us out :-)
@Luke2642 Sorry for the late reply: got the exact same thing you did ("original and vqvae image fine, but reconstruction and attention maps come out black too"), down to the letter.
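In case it helps anyone else hitting the same black outputs: a quick check is whether the decoded array contains NaN/inf before it gets clipped to zero, since fp16 overflow in the VAE usually shows up that way. A rough sketch (the `diagnose` helper and its categories are made up for illustration, not part of the repo):

```python
import numpy as np

def diagnose(image: np.ndarray) -> str:
    """Classify a decoded image array to distinguish fp16 overflow
    (NaN/inf) from a genuinely all-black result."""
    if np.isnan(image).any():
        return "nan"        # invalid ops somewhere upstream
    if np.isinf(image).any():
        return "inf"        # activations exceeded fp16's finite range
    if image.max() <= 0:
        return "all-black"  # every pixel already clipped to zero
    return "ok"

# Example: an image whose values overflowed during fp16 decoding
bad = np.array([[np.inf, 0.0], [0.5, 0.2]], dtype=np.float32)
print(diagnose(bad))  # inf
```

If the check reports NaN/inf rather than all-black, that points at overflow during decoding rather than a problem with the inversion itself.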
Hi @Luke2642 can I ask if you ever got this working?
@ryan-caesar-ramos I got prompt-to-prompt working in FP16, but it's missing the localised attention:
https://colab.research.google.com/drive/1DGcWAz_s5wJFBfmWr2T8zmJ96p0ijn1W?usp=sharing
And here are some other links that might be useful:
https://github.com/cloneofsimo/inversion_edits/blob/master/example_scripts/ddim_inv.ipynb
https://colab.research.google.com/drive/1SRSD6GXGqU0eO2CoTNY-2WykB9qRZHJv?usp=sharing
Sorry, completely missed this, thanks @Luke2642!