semantic-object-accuracy-for-generative-text-to-image-synthesis

How to use the pre-trained model to generate images from specific captions?

Open Astatine-213-Tian opened this issue 5 years ago • 10 comments

[Screenshot from your paper]

For example, how can I generate an image corresponding to the caption "a person skateboarding in the street with some people looking on"?

Astatine-213-Tian avatar Jun 15 '20 13:06 Astatine-213-Tian

Could you please upload the code you used to get the images presented in your paper? Thanks!

Astatine-213-Tian avatar Jun 16 '20 09:06 Astatine-213-Tian

Hi, the easiest way to do this is to go through the captions in the validation set (captions.pickle) until you find the sentence for which you want to generate images. If you load the captions the same way the dataloader does, you can iterate through them directly until you find the correct caption:

```python
# Decode a caption (a list of word indices) back into a readable sentence.
cap = captions[0].data.cpu().numpy()
sentence = ""
for j in range(len(cap)):
    if cap[j] == 0:  # index 0 marks the end of the sentence / padding
        break
    word = self.ixtoword[cap[j]].encode('ascii', 'ignore').decode('ascii')
    sentence += word + " "
```
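Putting it together, an untested sketch of the full lookup (this assumes captions.pickle stores [train_captions, test_captions, ixtoword, wordtoix], as in the AttnGAN preprocessing; verify against your copy of the repo):

```python
import pickle

# Load the pickled captions; the four-element layout below follows the
# AttnGAN preprocessing and may differ in your setup.
with open("captions.pickle", "rb") as f:
    train_captions, test_captions, ixtoword, wordtoix = pickle.load(f)

target = "a person skateboarding in the street with some people looking on"

for idx, cap in enumerate(test_captions):
    # Each caption is a list of word indices; index 0 marks end of sentence/padding.
    words = [ixtoword[w] for w in cap if w != 0]
    if " ".join(words) == target:
        print(f"found caption at index {idx}")
        break
```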

tohinz avatar Jun 17 '20 12:06 tohinz

Hi, thanks for your response. Can I generate images from my own sentences in the same way?

Astatine-213-Tian avatar Jun 19 '20 08:06 Astatine-213-Tian

Hi, have a look at how they do it in the original AttnGAN here: https://github.com/taoxugit/AttnGAN/blob/0d000e652b407e976cb88fab299e8566f3de8a37/code/main.py#L146 You can use it in the same way for our model since we use the same text encoder. You'll need to provide the bounding boxes and object labels for our model though, or use another network that predicts these from the caption.
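For reference, a rough sketch of encoding a custom sentence the way gen_example in the linked file does (variable names here are illustrative, not the repo's exact API):

```python
import pickle
import numpy as np

# wordtoix (word -> index) comes from captions.pickle, see the snippet above.
with open("captions.pickle", "rb") as f:
    _, _, ixtoword, wordtoix = pickle.load(f)

sentence = "a person skateboarding in the street with some people looking on"
tokens = sentence.lower().split()

# Keep only words that appear in the training vocabulary; unknown words are
# dropped, which is also what the AttnGAN example code does.
caption = [wordtoix[t] for t in tokens if t in wordtoix]

cap_array = np.asarray(caption, dtype=np.int64)
cap_len = len(caption)
# cap_array (batched/padded as needed) and cap_len are what the pretrained text
# encoder consumes; for our model you additionally have to supply bounding
# boxes and object labels.
```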

tohinz avatar Jun 22 '20 13:06 tohinz

Hello,

Thanks for the code. Do you have any pointers on where I could find a model that predicts bounding boxes and labels from a caption? For bounding boxes I found https://github.com/jamesli1618/Obj-GAN/ in another issue, but I haven't been able to find anything for the labels.

Thanks.

RoshanTanisha avatar Jun 22 '20 13:06 RoshanTanisha

I haven't used them personally, but something like LayoutVAE or Seq-SG2SL and their related work should help with this (I don't know which papers have available implementations, so you might have to check the related work of these papers, too).

tohinz avatar Jun 23 '20 09:06 tohinz

Hi, neither of the papers you mentioned (LayoutVAE and Seq-SG2SL) has code available. Can you please suggest how to get bounding boxes (a semantic layout) from the caption?

savitha91 avatar Aug 04 '20 16:08 savitha91

Hi, you can use the code from https://github.com/jamesli1618/Obj-GAN/ to get bounding boxes + object labels from the captions. They have a model pretrained on COCO, which should be a good starting point for most settings.
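In case it helps, here is an illustrative (untested) example of the kind of layout input the model needs for the skateboarding caption; the exact format, category indices, and normalization should be checked against the dataloader in this repo:

```python
# Illustrative only -- check this repo's dataloader for the exact input format.
# Labels are COCO object categories; boxes are assumed normalized (x, y, w, h).
labels = ["person", "skateboard"]
boxes = [
    (0.35, 0.20, 0.30, 0.60),  # person: left, top, width, height in [0, 1]
    (0.40, 0.75, 0.20, 0.10),  # skateboard
]
```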

tohinz avatar Aug 05 '20 08:08 tohinz

Hi, I checked the code at https://github.com/jamesli1618/Obj-GAN/tree/master/box_generation/bbox_proc, which is supposed to extract the bounding-box info. I am looking into it. Thanks.

savitha91 avatar Aug 05 '20 09:08 savitha91

Hi Tobias, I would like to know whether you have tried the bbox_proc code for bounding-box generation. I have raised a query in the Obj-GAN repo: https://github.com/jamesli1618/Obj-GAN/issues/24. It would be great if you could help me with sample code to generate a file similar to 'input_val2014.txt', so that I can use the semantic object model.

savitha91 avatar Aug 06 '20 08:08 savitha91