a-PyTorch-Tutorial-to-Image-Captioning

Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning

119 a-PyTorch-Tutorial-to-Image-Captioning issues

The BLEU-4 metric doesn't increase beyond 0.147 with COCO data and 0.099 with Flickr8k data. Is there any way to improve the result? Thanks

encoder_dim = encoder_out.size(3) raises: Dimension out of range (expected to be in range of [-2, 1], but got 3)
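This error usually means encoder_out has fewer than four dimensions at the point where .size(3) is called; the attention decoder expects the CNN features as a 4-D tensor of shape (batch_size, enc_image_size, enc_image_size, encoder_dim). A minimal shape check, assuming the tutorial's ResNet-style encoder (the concrete numbers below are illustrative):

```python
import torch

# Illustrative shapes: a 14x14 grid of 2048-dimensional features per image.
batch_size, enc_image_size, encoder_dim = 8, 14, 2048
encoder_out = torch.randn(batch_size, enc_image_size, enc_image_size, encoder_dim)

# .size(3) only works on a 4-D tensor; a 2-D (batch, features) tensor from a
# globally pooled encoder raises "Dimension out of range ... but got 3".
assert encoder_out.dim() == 4, f"expected 4-D features, got {encoder_out.dim()}-D"
print(encoder_out.size(3))  # -> 2048

# If an encoder returns channels-first features (batch, 2048, 14, 14),
# permute them to channels-last before handing them to the decoder:
channels_first = torch.randn(batch_size, encoder_dim, enc_image_size, enc_image_size)
encoder_out = channels_first.permute(0, 2, 3, 1)  # (batch, 14, 14, 2048)
```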

There are some issues with the code, including fatal errors, which make it difficult to complete the experiment...

Can you explain how you handle the pad token? Since captions in a batch have to be padded to the same length, we then feed that padded caption...
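For reference, the way padded positions are kept out of the loss in this tutorial's setup is to pack both the scores and the targets with pack_padded_sequence before computing cross-entropy; a minimal sketch (the tensor shapes and lengths below are made up for illustration):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Illustrative shapes:
#   scores:  (batch_size, max_decode_len, vocab_size) -- raw decoder outputs
#   targets: (batch_size, max_decode_len)             -- padded ground-truth captions
batch_size, max_len, vocab_size = 4, 20, 5000
scores = torch.randn(batch_size, max_len, vocab_size)
targets = torch.randint(0, vocab_size, (batch_size, max_len))
decode_lengths = [20, 17, 15, 12]  # true lengths, sorted in decreasing order

# Packing keeps only the first decode_lengths[i] timesteps of each sequence,
# so <pad> positions never contribute to the loss.
scores_packed = pack_padded_sequence(scores, decode_lengths, batch_first=True).data
targets_packed = pack_padded_sequence(targets, decode_lengths, batch_first=True).data

criterion = torch.nn.CrossEntropyLoss()
loss = criterion(scores_packed, targets_packed)
print(loss.item())
```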

Why are the BLEU scores from train.py different from those in eval.py? In my experiments, the BLEU scores computed in train.py were around 0.16, whereas those computed in eval.py were...

Thanks for sharing the code. I have a question: since we use the cross-entropy loss as the MAIN metric to determine the model's performance on the train and validation...

While I am trying to evaluate the model through eval.py, the BLEU-4 score is 0 for every beam size. I noticed that no sequence contains the <end> token even after...
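A quick sanity check for this symptom (the word map and indices below are purely illustrative): eval.py-style scripts strip <start>, <end>, and <pad> before scoring, and if the beam never emits <end>, the hypothesis handed to corpus_bleu can end up empty or truncated, which drives BLEU-4 to 0.

```python
from nltk.translate.bleu_score import corpus_bleu

# Toy word map; the indices are not the tutorial's actual ones.
word_map = {'<start>': 0, '<end>': 1, '<pad>': 2,
            'a': 3, 'dog': 4, 'runs': 5, 'in': 6, 'the': 7, 'park': 8}
special = {word_map['<start>'], word_map['<end>'], word_map['<pad>']}

def strip_special(seq):
    # Drop <start>/<end>/<pad> indices before scoring.
    return [w for w in seq if w not in special]

references = [[strip_special([0, 3, 4, 5, 6, 7, 8, 1])]]  # list of references per image
hypotheses = [strip_special([0, 3, 4, 5, 6, 7, 8, 1])]    # one hypothesis per image

# An empty hypothesis (e.g. a beam that never produced <end> and was discarded)
# pushes corpus_bleu toward 0, so check for that case before blaming the model.
assert all(len(h) > 0 for h in hypotheses), "empty hypothesis found"
print(corpus_bleu(references, hypotheses))  # 1.0 for this toy exact match
```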

Can you explain whether we use teacher forcing in the validation phase, even with no parameter update, in order to compute the cross-entropy loss like we do...
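For what it's worth, here is a minimal sketch of a validation pass that keeps teacher forcing but disables gradient tracking, assuming a tutorial-style decoder interface (the exact argument names and return signature are assumptions):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

@torch.no_grad()  # no gradients, hence no parameter updates, during validation
def validate(loader, encoder, decoder, criterion):
    encoder.eval()
    decoder.eval()
    total_loss, steps = 0.0, 0
    for imgs, caps, caplens, *_ in loader:
        encoder_out = encoder(imgs)
        # Teacher forcing: the ground-truth caption is fed to the decoder,
        # exactly as in training, so the validation loss is comparable.
        scores, caps_sorted, decode_lengths, alphas, sort_ind = decoder(
            encoder_out, caps, caplens)
        targets = caps_sorted[:, 1:]  # predict the words that follow <start>
        scores = pack_padded_sequence(scores, decode_lengths, batch_first=True).data
        targets = pack_padded_sequence(targets, decode_lengths, batch_first=True).data
        # (The tutorial also adds a doubly stochastic attention regularization
        # term based on alphas; it is omitted in this sketch.)
        total_loss += criterion(scores, targets).item()
        steps += 1
    return total_loss / max(steps, 1)
```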