AAmmy

Results: 9 comments of AAmmy

Hi @intuinno, @snakeztc, @yaxingwang, my Flickr8k scores with the parameters in capgen.train() are BLEU = 0.504 / 0.270 / 0.145 / 0.082; the maximums are BLEU = 0.550 / 0.296 / 0.164...

@yaxingwang I did not normalize the dataset. My preprocessing: 1. center-crop the images, 2. resize them to 224x224, 3. extract features with VGG_ILSVRC_19_layers. Train, valid, and test are preprocessed the same...
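For reference, a minimal sketch of that pipeline. It uses torchvision's VGG-19 as a stand-in for the original Caffe VGG_ILSVRC_19_layers model (an assumption on my part), and `example.jpg` is a hypothetical path; as in the comment above, no mean/std normalization is applied.

```python
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF
from PIL import Image

def extract_conv_features(path, encoder):
    img = Image.open(path).convert("RGB")
    side = min(img.size)                # 1. center-crop to a square on the shorter side
    img = TF.center_crop(img, [side, side])
    img = TF.resize(img, [224, 224])    # 2. resize to 224x224
    x = TF.to_tensor(img).unsqueeze(0)  # 3. (1, 3, 224, 224); no normalization, per the comment
    with torch.no_grad():
        return encoder(x)

vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
# Drop the final max-pool so a 224x224 input yields 512 x 14 x 14 conv5_4 maps,
# i.e. 196 annotation vectors of 512 dims for the attention model.
encoder = torch.nn.Sequential(*list(vgg.features.children())[:-1]).eval()

feats = extract_conv_features("example.jpg", encoder)  # hypothetical image path
print(feats.shape)  # torch.Size([1, 512, 14, 14])
```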

@yaxingwang I think the parameters in intuinno's evaluate_flickr8k.py are for COCO and Flickr30k; the parameters for Flickr8k are not the same as those for Flickr30k and COCO (see Section 5.2 in the paper). I think the parameters in the original capgen.py...

@yaxingwang I had the same memory problem on COCO, and the sparse-to-dense conversion was too slow, so I extracted the features into one file per image. I changed...
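A minimal sketch of that change, assuming the features start as a scipy sparse matrix with one row per image (the function and file names here are hypothetical):

```python
import os
import numpy as np

def save_per_image(sparse_feats, image_ids, out_dir):
    """Densify one row at a time and write one .npy file per image,
    instead of converting the whole sparse matrix to dense in memory."""
    os.makedirs(out_dir, exist_ok=True)
    for i, img_id in enumerate(image_ids):
        dense = np.asarray(sparse_feats.getrow(i).todense(), dtype=np.float32)
        np.save(os.path.join(out_dir, "%s.npy" % img_id), dense.ravel())

def load_feature(img_id, feat_dir):
    """Load a single image's features lazily at training time."""
    return np.load(os.path.join(feat_dir, "%s.npy" % img_id))
```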

~~The BLEU results of multi-bleu.perl and pycocoevalcap are very different: I got 65% BLEU-1 with multi-bleu.perl, but bleu.py in pycocoevalcap showed around 50% on the same samples and ground truths.~~...
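For what it's worth, part of such gaps usually comes from tokenization and input format: multi-bleu.perl scores whitespace-tokenized text files, while pycocoevalcap expects per-image dicts (normally passed through its PTBTokenizer first). A minimal sketch of the pycocoevalcap side, with made-up captions:

```python
from pycocoevalcap.bleu.bleu import Bleu

# image id -> list of reference captions (GTs), and -> a single-hypothesis list
gts = {"img0": ["a dog runs on the grass", "a dog is running outside"]}
res = {"img0": ["a dog runs on grass"]}

scores, _ = Bleu(4).compute_score(gts, res)
print(scores)  # [BLEU-1, BLEU-2, BLEU-3, BLEU-4]
```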

Hi @ozancaglayan, could you share your code handling the batch normalization process, please?

```
Validation: I normalized the validation loss w.r.t. sequence lengths as well. This seems a better estimate of...
```
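This is not ozancaglayan's actual code, but a minimal sketch of what the quoted normalization could look like with capgen-style (timesteps, batch) arrays and a padding mask:

```python
import numpy as np

def length_normalized_loss(nll, mask):
    """nll:  (T, B) per-token negative log-likelihoods
    mask: (T, B), 1.0 for real tokens, 0.0 for padding."""
    per_seq = (nll * mask).sum(axis=0)        # total loss of each sequence
    lengths = mask.sum(axis=0)                # true length of each sequence
    return float((per_seq / lengths).mean())  # mean per-token validation loss
```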

I got BLEU: 0.6887/0.5034/0.3588/0.2547 and METEOR: 0.2234 on COCO with the features from http://cs.stanford.edu/people/karpathy/deepimagesent/. The feature size is 4096, so I used them by reshaping to 8x512. However, Flickr8k training failed. I didn't try...
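A minimal sketch of that reshape. Note that fc7 is a global 4096-d vector with no spatial layout, so the 8 "annotation" slices are arbitrary chunks; the reshape only satisfies the (regions, dims) input shape the attention decoder expects.

```python
import numpy as np

def fc7_to_annotations(feats):
    """Split each 4096-d fc7 vector into 8 pseudo-annotation vectors of 512 dims,
    matching the (regions, dims) layout the attention model consumes."""
    assert feats.shape[-1] == 4096
    return feats.reshape(-1, 8, 512)

feats = np.random.randn(10, 4096).astype(np.float32)  # stand-in for Karpathy's features
print(fc7_to_annotations(feats).shape)                # (10, 8, 512)
```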

@Lorne0 @xinghedyc My result (BLEU: 0.6887/0.5034/0.3588/0.2547, METEOR: 0.2234) is based on beam_size = 1. I checked only epoch 19; maybe some other epochs show a better score....

@wahahayawahaha You can google those names and download them.