
Error when testing

Open sparklingyueran opened this issue 6 years ago • 7 comments

I tried to download the sample data. However, there is no file named bolei_juggling.mp4; after downloading there is just one file named juggling.mp4.

sparklingyueran avatar Nov 11 '18 11:11 sparklingyueran

I checked the downloaded sample_data and found 'juggling.mp4' and a folder named 'juggling_frames'.

When I tried your test code directly on the mp4 file, it said 'Video must have at least 8 frames'.

When I tried the test code on the frame folder, it worked.

When I tried the test code on a new frame folder that I created, it said the size of the frames did not fit.

Why does the code only work on a folder of frames? How can I apply it directly to an mp4 file, or even to frames of different sizes?

The following is my output.

```
~/gitfile/TRN-pytorch]$ CUDA_VISIBLE_DEVICES=1 python test_video.py --arch InceptionV4 --dataset moments --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/juggling.mp4 --rendered_output sample_data/predicted_video.mp4
('Multi-Scale Temporal Relation Network Module in use', ['8-frame relation', '7-frame relation', '6-frame relation', '5-frame relation', '4-frame relation', '3-frame relation', '2-frame relation'])
Freezing BatchNorm2D except the first one.
Loading frames in sample_data/juggling.mp4
Traceback (most recent call last):
  File "test_video.py", line 130, in <module>
    frames = load_frames(frame_paths)
  File "test_video.py", line 57, in load_frames
    raise ValueError('Video must have at least {} frames'.format(num_frames))
ValueError: Video must have at least 8 frames
```

```
~/gitfile/TRN-pytorch]$ CUDA_VISIBLE_DEVICES=1 python test_video.py --arch InceptionV4 --dataset moments --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/juggling_frames/ --rendered_output sample_data/predicted_video.mp4
('Multi-Scale Temporal Relation Network Module in use', ['8-frame relation', '7-frame relation', '6-frame relation', '5-frame relation', '4-frame relation', '3-frame relation', '2-frame relation'])
Freezing BatchNorm2D except the first one.
Loading frames in sample_data/juggling_frames/
RESULT ON sample_data/juggling_frames/
1.000 -> juggling
0.000 -> catching
0.000 -> balancing
0.000 -> performing
0.000 -> spinning
[MoviePy] >>>> Building video sample_data/predicted_video.mp4
[MoviePy] Writing video sample_data/predicted_video.mp4
 89%|███████████████████████████████████████ | 8/9 [00:00<00:00, 528.19it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: sample_data/predicted_video.mp4
```

```
~/gitfile/TRN-pytorch]$ CUDA_VISIBLE_DEVICES=1 python test_video.py --arch InceptionV4 --dataset moments --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/frame_test/ --rendered_output sample_data/predicted_video.mp4
('Multi-Scale Temporal Relation Network Module in use', ['8-frame relation', '7-frame relation', '6-frame relation', '5-frame relation', '4-frame relation', '3-frame relation', '2-frame relation'])
Freezing BatchNorm2D except the first one.
Loading frames in sample_data/frame_test/
Traceback (most recent call last):
  File "test_video.py", line 140, in <module>
    logits = net(input_var)
  File "/home/wangwq/anaconda3/envs/python2/lib/python2.7/site-packages/torch/nn/modules/module.py", line 491, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/pany/gitfile/TRN-pytorch/models.py", line 228, in forward
    base_out = base_out.view((-1, self.num_segments) + base_out.size()[1:])
RuntimeError: invalid argument 2: size '[-1 x 8 x 256]' is invalid for input with 12800 elements at /opt/conda/conda-bld/pytorch_1524577177097/work/aten/src/TH/THStorage.c:37
```
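For what it's worth, the size error can be reproduced in isolation: 12800 elements are 50 feature vectors of size 256, and 50 is not a multiple of num_segments = 8, so the reshape in models.py cannot succeed. A toy sketch (frame counts here are my own illustration, not pulled from the repo):

```python
import torch

# 12800 elements = 50 feature vectors of size 256; 50 is not a
# multiple of num_segments = 8, so this reshape must fail,
# reproducing the error raised at models.py line 228
num_segments = 8
base_out = torch.zeros(50, 256)
try:
    base_out.view((-1, num_segments) + base_out.size()[1:])
except RuntimeError as err:
    print('reshape failed:', err)

# With a frame count that IS a multiple of num_segments it works:
base_out = torch.zeros(48, 256)
print(base_out.view((-1, num_segments) + base_out.size()[1:]).shape)
# -> torch.Size([6, 8, 256])
```

So the check to make before running is that the number of frames actually fed to the network is a multiple of the model's num_segments.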

sparklingyueran avatar Nov 13 '18 03:11 sparklingyueran

I met the same problem. Any advice?

wishinger-li avatar Mar 13 '19 07:03 wishinger-li

It looks like the files are here: http://relation.csail.mit.edu/data/

And for loading a video directly, read the code in test_video.py under the comment # Obtain video frames and you will understand it: use the video_file argument instead of frame_folder.
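To see why a frame folder needs at least 8 frames: the script samples num_segments frames evenly across the clip before running the network. A minimal stand-alone sketch of that sampling step (the function name and center-of-segment offset are my own, not copied from the repo):

```python
def sample_frame_indices(total_frames, num_segments=8):
    """Pick num_segments evenly spaced frame indices across a clip
    (TSN-style center-of-segment sampling; a sketch, not the repo's code)."""
    if total_frames < num_segments:
        raise ValueError(
            'Video must have at least {} frames'.format(num_segments))
    seg_len = total_frames / float(num_segments)
    # take the middle frame of each of the num_segments equal chunks
    return [int(seg_len * i + seg_len / 2) for i in range(num_segments)]

print(sample_frame_indices(50))  # 8 evenly spaced indices into a 50-frame clip
```

With fewer than num_segments frames there is nothing to sample, which is exactly the 'Video must have at least 8 frames' error above.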

holopekochan avatar Mar 14 '19 08:03 holopekochan

Hi, when I run 'test_models.py' I get a similar error: "RuntimeError: invalid argument 2: size '[-1 x 8 x 256]' is invalid for input with 768 elements at /opt/conda/conda-bld/pytorch_1518244421288/work/torch/lib/TH/THStorage.c:37". I have checked the 'num_segments' parameter in both the trained model and the test script; both are 8, but the error is still triggered. Do you have any idea why this happens? I would appreciate an answer!

AndyStrongBoy avatar May 13 '19 13:05 AndyStrongBoy

Hi, I solved the problem by changing line 143 of 'test_models.py' from "input_var = torch.autograd.Variable(data.view(-1, length, data.size(2), data.size(3)), volatile=True)" to "input_var = torch.autograd.Variable(data, volatile=True)". After that it works fine and the error above no longer occurs.

The reason I changed this code is that line 223 of 'main.py', "input_var = torch.autograd.Variable(input, volatile=True)", does not change the input data's dimensions either.

I don't know whether this is a bug or something else, because some people run 'test_models.py' without problems. Do you know why this happens? Thanks!

AndyStrongBoy avatar May 14 '19 02:05 AndyStrongBoy

It works, thanks!

liuyanyu00 avatar Sep 18 '19 07:09 liuyanyu00

Hello, sorry to interrupt. When I run "python test_video.py --arch InceptionV3 --dataset moment --weight pretrain/TRN_moments_RGB_InceptionV3_TRNmultiscale_segment8_best.pth.tar --frame_folder sample_data/juggling.mp4 --rendered_datapredict sample_data/juggling.mp4" I encounter "AttributeError: module 'model_zoo' has no attribute 'InceptionV3'". I see that you changed "InceptionV3" to "InceptionV4"; how did you do this, and could you share the source code? My email: [email protected]. Thanks!
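That AttributeError usually means the string passed to --arch is resolved by attribute lookup on the model_zoo module, so it must exactly match a name the module exports; running dir(model_zoo) in the repo shows the valid choices. A toy stand-in for the lookup (the namespace and architecture names below are hypothetical, not TRN-pytorch's actual list):

```python
import types

# Hypothetical stand-in for a model_zoo module; check the real
# architecture names with dir(model_zoo) in the actual repo
model_zoo = types.SimpleNamespace(InceptionV4=object)

def build_model(arch):
    # --arch is resolved by attribute lookup, so a missing or
    # mis-cased name raises AttributeError, as in the error above
    if not hasattr(model_zoo, arch):
        raise AttributeError(
            "module 'model_zoo' has no attribute {!r}".format(arch))
    return getattr(model_zoo, arch)

try:
    build_model('InceptionV3')   # not exported in this toy zoo -> fails
except AttributeError as err:
    print(err)
print(build_model('InceptionV4'))  # exported name -> succeeds
```

So the fix is to pass an architecture name that the installed model_zoo actually exposes (and to use a pretrained weight file that matches that architecture).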

jin03041209 avatar Oct 28 '21 11:10 jin03041209