Mario
This seems to be a recurring issue. Cf. [kenlm/issues/248](https://github.com/kpu/kenlm/issues/248) and [/letter-based-language-model/33986](https://discourse.mozilla.org/t/letter-based-language-model/33986). Some suggestions:
- There's a [discourse forum for DeepSpeech-related issues](https://discourse.mozilla.org/c/mozilla-voice-stt/247) to get help from.
- Recheck the (correct installation...
Hey @hubin111, the situation you're describing is impossible, because for each image we will always have a generated caption (hypothesis). This is a simple list `[hyp1, hyp2, ...]`. Accordingly, we'll...
The Python assertion checks the lengths *only* at the top level. See the example in my description above.
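For context, here's a minimal sketch of the structure that assertion expects (the `hypotheses`/`references` names mirror the tutorial's eval code, and `corpus_bleu` is from nltk; the captions are made up):

```python
from nltk.translate.bleu_score import corpus_bleu

# One generated caption (hypothesis) per image, as a flat list of token lists.
hypotheses = [["a", "man", "rides", "a", "bike"],
              ["a", "dog", "runs", "on", "grass"]]

# For each image, a *list* of reference captions (each itself a token list).
references = [[["a", "man", "riding", "a", "bicycle"],
               ["a", "person", "on", "a", "bike"]],
              [["a", "dog", "running", "across", "a", "field"]]]

# corpus_bleu only asserts that the two top-level lists have the same length;
# it does not care how many references each individual image has.
assert len(references) == len(hypotheses)
print(corpus_bleu(references, hypotheses))
```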
For a fix see issue [#75](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning/issues/75)
Did you have a chance to read the [a-PyTorch-Tutorial-to-Image-Captioning#remarks](https://github.com/sgrvinod/a-PyTorch-Tutorial-to-Image-Captioning#remarks) section? This has been answered there. If, however, you're asking for a time estimate, then it depends on the...
> @sgrvinod hello, I have a question: why is Test BLEU higher than Validation BLEU?

It depends on several factors, such as whether your model genuinely generalizes better, the beam size, the test...
It seems like you have a bug in your code. I'd guess you're not binding the layer to the object. In `__init__`, ensure you have the following: `self.decoder_att =`...
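For illustration, here's a rough sketch of binding those layers in `__init__` (the layer names follow the tutorial's `Attention` module, but treat the shapes and forward pass as illustrative only):

```python
import torch.nn as nn

class Attention(nn.Module):
    def __init__(self, encoder_dim, decoder_dim, attention_dim):
        super(Attention, self).__init__()
        # Assigning each layer to self in __init__ registers its parameters
        # with the module, so they get trained and moved to the right device.
        self.encoder_att = nn.Linear(encoder_dim, attention_dim)
        self.decoder_att = nn.Linear(decoder_dim, attention_dim)
        self.full_att = nn.Linear(attention_dim, 1)
        self.relu = nn.ReLU()
        self.softmax = nn.Softmax(dim=1)

    def forward(self, encoder_out, decoder_hidden):
        att1 = self.encoder_att(encoder_out)                  # (batch, num_pixels, attention_dim)
        att2 = self.decoder_att(decoder_hidden)               # (batch, attention_dim)
        att = self.full_att(self.relu(att1 + att2.unsqueeze(1))).squeeze(2)  # (batch, num_pixels)
        alpha = self.softmax(att)                             # attention weights
        context = (encoder_out * alpha.unsqueeze(2)).sum(dim=1)  # (batch, encoder_dim)
        return context, alpha
```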
This has already been addressed. Please see #104
@ruizhao1997 thanks! I'm reviewing this PR. In the meantime, would it also be possible for you to incorporate [SPICE](https://github.com/peteanderson80/SPICE) as an additional eval metric? :)
@ruizhao1997 that'd be great! You could also have a look at [coco-caption](https://github.com/tylin/coco-caption). Specifically, [pycocoevalcap](https://github.com/tylin/coco-caption/tree/master/pycocoevalcap), where a host of common eval metrics are available.
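Not a spec, just a rough sketch of how the pycocoevalcap scorers are typically invoked (assuming a Python 3 port of the package; the image ids and captions below are made up, and METEOR/SPICE are omitted since they additionally need Java):

```python
from pycocoevalcap.bleu.bleu import Bleu
from pycocoevalcap.rouge.rouge import Rouge
from pycocoevalcap.cider.cider import Cider

# Both dicts map an image id to a list of caption strings;
# res holds exactly one generated caption per image.
gts = {"img1": ["a man riding a bike on a dirt road",
                "a person on a bicycle next to a hill"],
       "img2": ["a dog running across a grassy field"]}
res = {"img1": ["a man rides a bike down a trail"],
       "img2": ["a dog runs on the grass"]}

scorers = [(Bleu(4), ["BLEU-1", "BLEU-2", "BLEU-3", "BLEU-4"]),
           (Rouge(), "ROUGE-L"),
           (Cider(), "CIDEr")]

for scorer, name in scorers:
    score, _ = scorer.compute_score(gts, res)
    if isinstance(name, list):  # Bleu returns one score per n-gram order
        for n, s in zip(name, score):
            print(f"{n}: {s:.4f}")
    else:
        print(f"{name}: {score:.4f}")
```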