How to get more variation in the null image
I've been generating images using this model, which is delightfully fast, but I've noticed that it produces images that are all alike. I tried generating the "null" image by doing:
# Encode the prompt with CLIP (used here just to get a correctly shaped embedding)
H = perceptor.encode_text(toks.to(device)).float()
# Zero out the embedding so the network produces its "null" output
z = net(0 * H)
This resulted in:
And indeed, everything I generated kind of matched that: you can see the fleshy protrusion on the left in "gold coin":
The object and matching mini-object in "tent":
And it always seems to try to caption the image with nonsense lettering ("lion"):
So I'm wondering if there's a way to "prime" the model and suggest it use a different zero image for each run. Is there a variable I can set, or is this deeply ingrained in the training data?
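For concreteness, here's the kind of knob I was imagining, reusing the variables from the snippet above. Perturbing the text embedding with noise is just my guess at a mechanism, and the 0.1 scale is arbitrary, not anything from the repo:

import torch

H = perceptor.encode_text(toks.to(device)).float()
# Hypothetical: jitter the embedding so each run starts from a different point
noise = 0.1 * torch.randn_like(H)  # 0.1 is an arbitrary scale I made up
z = net(H + noise)

If something like this (a seed or noise input) already exists in the model or config, a pointer to it would be great.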
Any advice would be appreciated, thank you!
(Apologies if this is the same as #8, but it sounded like #8 was solved by using priors, which doesn't seem to help with this.)