a-PyTorch-Tutorial-to-Image-Captioning
Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
The old link is stale.
Hi, thank you for the nice work. I'm new to image captioning and this work helps me a lot. From Line 298 to Line in train.py, when you wrap the...
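The question is cut off, but the wrapping in that part of train.py is presumably the pack_padded_sequence call that strips padded timesteps from the scores and targets before the loss is computed. A toy illustration of what that packing does (the shapes and names below are invented for the example, not taken from the repo):

```python
import torch
from torch.nn.utils.rnn import pack_padded_sequence

# Toy batch: 2 captions decoded for 4 and 2 steps, with a 5-word vocabulary.
scores = torch.randn(2, 4, 5)            # (batch_size, max_decode_length, vocab_size)
targets = torch.randint(0, 5, (2, 4))    # (batch_size, max_decode_length), padded
decode_lengths = [4, 2]                  # true lengths, sorted in decreasing order

# Packing keeps only the real (non-padded) timesteps, so padding never reaches the loss.
scores_packed = pack_padded_sequence(scores, decode_lengths, batch_first=True).data
targets_packed = pack_padded_sequence(targets, decode_lengths, batch_first=True).data

loss = torch.nn.CrossEntropyLoss()(scores_packed, targets_packed)
```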
Hello, I used the pre-trained model and word map provided, and it seems to work, but in caption.py I always obtain the same sentence for every picture, that is: "...
Thanks for your tutorial! I found this code in models.py: line 80: ``` att1 = self.encoder_att(encoder_out) # (batch_size, num_pixels, attention_dim) ``` and line 203: ``` attention_weighted_encoding, alpha = self.attention(encoder_out[:batch_size_t], h[:batch_size_t]) ```...
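For context on the two quoted lines: the attention module projects the encoder's pixel features and the decoder's hidden state into a common space, scores every pixel, and returns a softmax-weighted sum of the encoder features; the `[:batch_size_t]` slicing at line 203 restricts each decoding step to the captions that have not yet finished (the batch is sorted by caption length). A minimal sketch of such a soft-attention module, written to match the quoted tensor shapes rather than the repo's exact code:

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Additive (Bahdanau-style) attention, as in Show, Attend and Tell."""
    def __init__(self, encoder_dim, decoder_dim, attention_dim):
        super().__init__()
        self.encoder_att = nn.Linear(encoder_dim, attention_dim)  # project encoder features
        self.decoder_att = nn.Linear(decoder_dim, attention_dim)  # project decoder hidden state
        self.full_att = nn.Linear(attention_dim, 1)               # one score per pixel
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, encoder_out, decoder_hidden):
        # encoder_out: (batch_size, num_pixels, encoder_dim)
        # decoder_hidden: (batch_size, decoder_dim)
        att1 = self.encoder_att(encoder_out)                      # (batch_size, num_pixels, attention_dim)
        att2 = self.decoder_att(decoder_hidden)                   # (batch_size, attention_dim)
        att = self.full_att(self.relu(att1 + att2.unsqueeze(1))).squeeze(2)  # (batch_size, num_pixels)
        alpha = self.softmax(att)                                 # attention weights over pixels
        weighted = (encoder_out * alpha.unsqueeze(2)).sum(dim=1)  # (batch_size, encoder_dim)
        return weighted, alpha
```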
Can anyone explain 2 lines of code for me? https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning/blob/cc9c7e2f4017938d414178d3781fed8dbe442852/caption.py#L107 and the line below it.
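The linked line is not quoted here, but that spot in caption.py's beam search is commonly the integer-division/modulo pair that unflattens the top-k indices; assuming that is what it is, here is a small sketch of the trick:

```python
import torch

# Hypothetical beam-search step illustrating the div/mod trick.
k, vocab_size = 3, 10
scores = torch.rand(k, vocab_size)                    # score of every word continuing every beam
top_k_scores, top_k_words = scores.view(-1).topk(k)   # flatten to (k * vocab_size,) and keep the best k

# Recover which beam each winner came from and which word it is:
prev_word_inds = top_k_words // vocab_size            # row index = the beam the score came from
next_word_inds = top_k_words % vocab_size             # column index = the word within the vocabulary
```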
I want to make a new dataset for training; can anyone help me understand how to label the images?
I need a model only for inference, but your Best 5 epoch model is really poor...
Hi, thank you for the detailed tutorial; it helps me a lot. When I use your model to evaluate performance on the COCO dataset, it returns the BLEU score...
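For reference, a corpus-level BLEU-4 score like the one reported by the evaluation script can be computed with NLTK roughly as follows (the toy captions and the default uniform 4-gram weights are assumptions, not necessarily the repo's exact evaluation settings):

```python
from nltk.translate.bleu_score import corpus_bleu

# references: one list of reference captions (each a list of tokens) per image
references = [[['a', 'dog', 'runs', 'on', 'the', 'grass'],
               ['a', 'dog', 'is', 'running', 'outside']]]
# hypotheses: one generated caption (list of tokens) per image, in the same order
hypotheses = [['a', 'dog', 'is', 'running', 'on', 'the', 'grass']]

bleu4 = corpus_bleu(references, hypotheses)  # default weights (0.25, 0.25, 0.25, 0.25) = BLEU-4
print(bleu4)
```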
Any chance there is a Transformer-based version of this repository somewhere (instead of the LSTM)?
I ported this source code to Google Colab, but it can't read the pretrained model. The error message is as follows: ` --------------------------------------------------------------------------- AttributeError Traceback (most recent...
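The traceback is truncated, but when porting to Colab a common pitfall is loading the GPU-saved checkpoint on a CPU-only runtime, or loading it without the repo's models.py on the import path (the checkpoint pickles the encoder/decoder objects rather than plain state dicts). A minimal loading sketch, assuming the checkpoint keys used in caption.py; the filename below is a placeholder:

```python
import torch

# models.py from the repo must be importable, because unpickling the checkpoint
# needs the Encoder / DecoderWithAttention class definitions.
checkpoint_path = 'BEST_checkpoint.pth.tar'  # placeholder name

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# map_location avoids "CUDA device" errors when the checkpoint was saved on a GPU
# but is being loaded on a CPU-only runtime.
checkpoint = torch.load(checkpoint_path, map_location=device)

decoder = checkpoint['decoder'].to(device).eval()
encoder = checkpoint['encoder'].to(device).eval()
```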