seqs = torch.cuda.FloatTensor(torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1))  # (s, step+1)
IndexError: tensors used as indices must be long, byte or bool tensors
While running eval.py, this error occurs. Could anyone help me change the tensor type? I tried several different tensor types, but the same error still occurs.
EVALUATING AT BEAM SIZE 1: 0%| | 0/25000 [00:00<?, ?it/s]k previous words tensor([[9488]], device='cuda:0')
seq tensor([[9488]], device='cuda:0')
EVALUATING AT BEAM SIZE 1: 0%| | 0/25000 [00:00<?, ?it/s]
Traceback (most recent call last):
File "/home/Documents/Pytorch_IC_srgv_Nov5/a-PyTorch-Tutorial-to-Image-Captioning-master/eval.py", line 182, in
seqs = torch.cuda.FloatTensor(torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1))# (s, step+1) IndexError: tensors used as indices must be long, byte or bool tensors
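For context, the IndexError is raised because seqs is being indexed with a floating-point tensor; PyTorch only accepts long, byte, or bool tensors as indices. A minimal standalone sketch (with made-up values, not the actual eval.py variables) reproduces it:

import torch

# Standalone reproduction: indexing a tensor with float indices raises the same error.
seqs = torch.tensor([[9488]])          # (s, step) sequences decoded so far
next_word_inds = torch.tensor([42])    # (s,) hypothetical next-word indices
prev_word_inds = torch.tensor([0.0])   # float dtype is the problem

seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1)
# IndexError: tensors used as indices must be long, byte or bool tensors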
Try adding .long() to the index tensors to cast them to integer type:
seqs = torch.cat([seqs[prev_word_inds.long()], next_word_inds.unsqueeze(1)], dim=1) # (s, step+1)
Apply the same change to this line and a few of the lines below it.
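For reference, here is a self-contained sketch of how the fix looks around that line, assuming eval.py derives the indices from top_k_words by division and modulo as in the original tutorial (the values below are made up so the snippet runs on its own):

import torch

vocab_size = 9490                                   # hypothetical vocabulary size
seqs = torch.tensor([[9488]])                       # (s, step) sequences decoded so far
top_k_words = torch.tensor([42])                    # (s) flat indices into the s * vocab_size scores

prev_word_inds = (top_k_words / vocab_size).long()  # (s) which beam each word extends
next_word_inds = (top_k_words % vocab_size).long()  # (s) which word in the vocabulary

# With LongTensor indices, the advanced indexing no longer raises IndexError
seqs = torch.cat([seqs[prev_word_inds], next_word_inds.unsqueeze(1)], dim=1)  # (s, step+1)
print(seqs)  # [[9488, 42]]

On newer PyTorch versions, / on integer tensors performs true division and returns floats, which is likely why these indices stopped being valid; on PyTorch 1.8+ torch.div(top_k_words, vocab_size, rounding_mode='floor') is an equivalent way to keep integer indices.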
I had the same problem and solved it after seeing your suggestion. Thank you.