a-PyTorch-Tutorial-to-Image-Captioning
I think this is a bug, in caption.py around line 140:
incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds) if next_word != word_map['<end>']]
and then
complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds))
complete_inds is empty
so complete_seqs stays empty,
and my model's output ends in an error.
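To see how this can happen, here is a small toy reproduction of that bookkeeping. The word_map entry and the next_word_inds values below are made up for illustration; in caption.py they come from the vocabulary and the decoder's top-k predictions at one beam-search step.

# Toy reproduction: if none of the k beams predicts <end> at this step,
# complete_inds stays empty, so complete_seqs is never extended.
word_map = {'<end>': 9489}      # assumed id for the <end> token
next_word_inds = [17, 342, 5]   # assumed top-k predictions, no <end> among them

incomplete_inds = [ind for ind, next_word in enumerate(next_word_inds)
                   if next_word != word_map['<end>']]
complete_inds = list(set(range(len(next_word_inds))) - set(incomplete_inds))

print(incomplete_inds)  # [0, 1, 2]
print(complete_inds)    # [] -> complete_seqs_scores stays empty for weak models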
I have the same bug in caption.py. Did you fix it?
I fixed it. You should consider the situation where your model is too weak to generate the <end>
symbol that ends your sentences.
Please add this code
if len(complete_seqs_scores) == 0:
    return seqs[0], 0
before
i = complete_seqs_scores.index(max(complete_seqs_scores))
in the function caption_image_beam_search.
Why don't you use it like the following? Is there any specific reason why you are returning 0 instead of the matching attention map?
if len(complete_seqs_scores) == 0:
return seqs[0], seqs_alpha[0]
You are right. Actually, the code in my project doesn't need to care about seqs_alpha, so I just used '0' as its return value. Your code is more complete and better.
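For reference, here is a minimal, self-contained sketch that puts the thread's suggestion together. The helper name pick_best_sequence, the .tolist() conversions, and the toy tensors (including the 14x14 attention size) are illustrative assumptions; in caption.py this logic sits at the end of caption_image_beam_search, where the thread proposes returning seqs[0] and seqs_alpha[0] directly.

import torch

def pick_best_sequence(seqs, seqs_alpha, complete_seqs, complete_seqs_alpha, complete_seqs_scores):
    # If no beam ever emitted <end>, complete_seqs_scores is empty and
    # max() on it would raise ValueError. Fall back to the first live beam
    # and return its matching attention map as well.
    if len(complete_seqs_scores) == 0:
        return seqs[0].tolist(), seqs_alpha[0].tolist()
    # Otherwise pick the completed sequence with the highest score.
    i = complete_seqs_scores.index(max(complete_seqs_scores))
    return complete_seqs[i], complete_seqs_alpha[i]

# Toy usage: two live beams of length 3 and no completed beams at all.
seqs = torch.tensor([[9488, 1, 42], [9488, 7, 13]])   # token ids per beam (assumed)
seqs_alpha = torch.ones(2, 3, 14, 14)                 # one 14x14 attention map per step (assumed)
seq, alphas = pick_best_sequence(seqs, seqs_alpha, [], [], [])
print(len(seq), len(alphas))  # 3 3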