video-caption.pytorch
How to deal with this error? Thanks
I used 100 videos to check whether I can run this project, but while training I got this error. I can't find what is wrong. Please help me, thanks.
save opt details to ../data/save/opt_info.json
vocab size is 85
number of train videos: 100
number of val videos: 0
number of test videos: 0
load feats from ['../data/feats/resnet152']
max sequence length in data is 10
/usr/local/lib/python3.6/dist-packages/torch/nn/modules/rnn.py:51: UserWarning: dropout option adds dropout after all but last recurrent layer, so non-zero dropout expects num_layers greater than 1, but got dropout=0.5 and num_layers=1
"num_layers={}".format(dropout, num_layers))
/usr/local/lib/python3.6/dist-packages/torch/nn/_reduction.py:43: UserWarning: size_average and reduce args will be deprecated, please use reduction='none' instead.
warnings.warn(warning.format(ret))
/usr/local/lib/python3.6/dist-packages/torch/optim/lr_scheduler.py:100: UserWarning: Detected call of lr_scheduler.step() before optimizer.step(). In PyTorch 1.1.0 and later, you should call them in the opposite order: optimizer.step() before lr_scheduler.step(). Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
"https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
Traceback (most recent call last):
File "train.py", line 133, in
This error is due to "dataloader.py". You need to change the __getitem__ function of the class VideoDataset. According to the idx, get your video id, then:
video_feat = np.load(os.path.join(self.feats_dir[0], your_video_id + ".npy"))
captions = self.captions[your_video_id]['final_captions']
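Putting the two lines above together, a hedged sketch of what the fixed __getitem__ could look like. The attribute names (feats_dir, captions, 'final_captions') come from the answer above; the video_ids list used to map idx to a video id is an assumption about how the dataset tracks its samples:

```python
import os
import numpy as np

class VideoDataset:
    def __init__(self, feats_dir, captions, video_ids):
        self.feats_dir = [feats_dir]      # list of feature directories, as in the repo
        self.captions = captions          # {video_id: {"final_captions": [...]}}
        self.video_ids = video_ids        # assumed mapping: idx -> video id, e.g. ["video0", ...]

    def __getitem__(self, idx):
        # According to the idx, get your video id
        video_id = self.video_ids[idx]
        # Load the precomputed features for that video
        video_feat = np.load(os.path.join(self.feats_dir[0], video_id + ".npy"))
        captions = self.captions[video_id]["final_captions"]
        return video_feat, captions
```

The key point is that __getitem__ must turn the integer index into the video id used to name the .npy feature files and to key the captions dict; if it indexes by raw idx instead, loading fails with the kind of error shown in the (truncated) traceback.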