a-PyTorch-Tutorial-to-Image-Captioning
Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
I referred to Section 4.2.1 in the paper, which says 'the soft attention model predicts a gating scalar beta'. However, the 'gate' in this code is a vector with...
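The distinction the question raises can be shown in a few lines: in the paper, beta is a single sigmoid-activated scalar per timestep, while the repo's `f_beta` layer maps the decoder hidden state to an `encoder_dim`-sized vector, gating each channel of the attention-weighted encoding separately. A minimal NumPy sketch of the two variants (shapes and weight names are illustrative, not from the repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
decoder_dim, encoder_dim = 512, 2048
h = rng.standard_normal(decoder_dim)        # decoder hidden state h_{t-1}
awe = rng.standard_normal(encoder_dim)      # attention-weighted encoding

# Paper, Sec. 4.2.1: a single gating scalar beta per timestep
W_scalar = rng.standard_normal((1, decoder_dim)) * 0.01
beta = sigmoid(W_scalar @ h)                # shape (1,)
context_scalar = beta * awe                 # every channel scaled equally

# This repo: f_beta outputs an encoder_dim vector, one gate per channel
W_vector = rng.standard_normal((encoder_dim, decoder_dim)) * 0.01
gate = sigmoid(W_vector @ h)                # shape (encoder_dim,)
context_vector = gate * awe                 # channel-wise gating
```

Both reduce the contribution of the visual context when the gate saturates near zero; the vector form simply gives the model one gate per feature channel instead of one shared gate.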
If I do not include the attention module in the model, can it be treated as an implementation of the model in the paper Show and Tell? And if I want to...
If I am training on the Flickr8k dataset, do I still need to run 20 epochs? How am I supposed to explore the best combinations of hyperparameters for the Flickr8k dataset...
I appreciate your work, and it helped me a lot to understand the flow of image captioning. I got stuck in the decoder part. Kindly help me understand and debug...
`/envs/caption/lib/python3.7/site-packages/torch/nn/modules/module.py", line 539, in __getattr__ type(self).__name__, name)) AttributeError: 'DecoderWithAttention' object has no attribute 'decode_step'` Hello, thanks for this good repo. When I got my model, I load...
Hello, I am using the COCO dataset with a two-layer LSTM model: one layer for top-down attention and one layer for the language model. Tokenizing with jieba, I used all the...
```
bias = np.sqrt(3.0 / embeddings.size(1))
torch.nn.init.uniform_(embeddings, -bias, bias)
```
and the PyTorch default init is `init.normal_(self.weight)` — why do it this way, and what is the reference? Looking forward to discussing.
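One way to read the snippet above: since a uniform distribution U(-a, a) has variance a²/3, choosing a = sqrt(3 / embed_dim) makes each embedding dimension have variance 1 / embed_dim, so the initial embedding norms stay roughly constant as the embedding size grows, whereas PyTorch's default `init.normal_(self.weight)` gives unit variance per dimension. A minimal NumPy sketch of that variance calculation (all names here are illustrative):

```python
import numpy as np

embed_dim = 512
vocab_size = 10_000

# The tutorial's bound: a = sqrt(3 / embed_dim)
a = np.sqrt(3.0 / embed_dim)
rng = np.random.default_rng(0)
uniform_emb = rng.uniform(-a, a, size=(vocab_size, embed_dim))

# Var[U(-a, a)] = a^2 / 3 = 1 / embed_dim, so the per-dimension variance
# shrinks with the embedding size, keeping initial activations small.
empirical_var = uniform_emb.var()
target_var = 1.0 / embed_dim
print(round(empirical_var, 5), round(target_var, 5))  # both ≈ 0.00195
```

This is the same scaling idea as Xavier/Glorot-style initialization applied to a lookup table, just written out by hand.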
Coded a multi-layer RNN. It uses 2 layers by default; you can change `num_layers` to adjust the number of layers when calling `DecoderWithAttention`. Please check.
Hi, thanks for that tutorial. I'm learning image captioning. Obviously, it is easy to add the BLEU-1 to BLEU-4 evaluation metrics in eval.py. However, what should I do if I want to add evaluation...
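For context on what such metrics compute, here is a minimal pure-Python sketch of sentence-level BLEU-1 (clipped unigram precision with a brevity penalty); this is a simplified illustration, not the pycocoevalcap implementation that is commonly used for COCO captioning metrics:

```python
import math
from collections import Counter

def bleu1(hypothesis, references):
    """Sentence-level BLEU-1: clipped unigram precision times brevity penalty."""
    hyp_counts = Counter(hypothesis)
    # Clip each unigram count by its maximum count in any single reference
    max_ref = Counter()
    for ref in references:
        for word, count in Counter(ref).items():
            max_ref[word] = max(max_ref[word], count)
    clipped = sum(min(count, max_ref[word]) for word, count in hyp_counts.items())
    precision = clipped / max(len(hypothesis), 1)
    # Brevity penalty against the reference length closest to the hypothesis
    ref_len = min((len(r) for r in references),
                  key=lambda n: (abs(n - len(hypothesis)), n))
    bp = 1.0 if len(hypothesis) >= ref_len else \
        math.exp(1 - ref_len / max(len(hypothesis), 1))
    return bp * precision

hyp = "a dog runs on the grass".split()
refs = ["a dog is running on the grass".split(),
        "the dog runs across grass".split()]
print(bleu1(hyp, refs))  # → 1.0 (every hypothesis unigram appears in a reference)
```

Higher-order BLEU-n scores extend this with clipped n-gram precisions combined geometrically; metrics like METEOR and CIDEr need their own scorers rather than a change to this formula.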