3D-ResNets-PyTorch
Train the model from scratch: NEED HELP with 0 accuracy
I want to use this model for research. Before that, I need to reproduce the experimental results reported in the paper. I chose a small sample of Kinetics: for each class label, I used 3 videos for training, with no validation or testing for now. I trained on two GPUs. In the terminal I ran:
python main.py --root_path .. --video_path kinetics_frame --annotation_path kinetics.json --result_path results --dataset kinetics --model resnet --model_depth 18 --n_classes 400 --batch_size 16 --n_threads 4 --checkpoint 5
However, during training the loss converges very quickly to about 6, and the accuracy stays near zero. I don't know what is wrong with my setup. I printed the final outputs and targets to see what was happening, only to find that the predicted label is almost the same for every sample in each batch. Has anyone encountered this problem? I would be grateful for any help. Thanks! I attach the run output below.
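One observation about the value 6 itself: a loss plateau around 6 is exactly chance level for 400 classes. A model whose outputs carry no information about the input incurs a cross-entropy of ln(400) ≈ 5.99, which matches the plateau in the log below. A quick sanity check (illustrative PyTorch, not code from this repo):

```python
import math

import torch
import torch.nn.functional as F

# Chance-level cross-entropy for 400 classes:
print(math.log(400))  # ~5.99 -- matches a loss plateau around 6

# The same value via PyTorch: constant logits (an input-independent,
# uniform prediction) give ln(400) regardless of the targets.
logits = torch.zeros(16, 400)           # batch_size=16, n_classes=400
targets = torch.randint(0, 400, (16,))
print(F.cross_entropy(logits, targets).item())  # ~5.99
```

So the network is effectively predicting an (almost) input-independent distribution rather than learning, which is consistent with the identical per-batch labels I observed.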
cni8@lcsr-thin6:~/Reproduce/3D-ResNets-PyTorch$ python main.py --root_path .. --video_path kinetics_frame --annotation_path kinetics.json --result_path results --dataset kinetics --model resnet --model_depth 18 --n_classes 400 --batch_size 16 --n_threads 4 --checkpoint 5
Namespace(annotation_path='../kinetics.json', arch='resnet-18', batch_size=16, begin_epoch=1, checkpoint=5, crop_position_in_test='c', dampening=0.9, dataset='kinetics', ft_begin_index=0, initial_scale=1.0, learning_rate=0.1, lr_patience=10, manual_seed=1, mean=[114.7748, 107.7354, 99.475], mean_dataset='activitynet', model='resnet', model_depth=18, momentum=0.9, n_classes=400, n_epochs=200, n_finetune_classes=400, n_scales=5, n_threads=4, n_val_samples=3, nesterov=False, no_cuda=False, no_hflip=False, no_mean_norm=False, no_softmax_in_test=False, no_train=False, no_val=False, norm_value=1, optimizer='sgd', pretrain_path='', resnet_shortcut='B', resnext_cardinality=32, result_path='../results', resume_path='', root_path='..', sample_duration=16, sample_size=112, scale_in_test=1.0, scale_step=0.84089641525, scales=[1.0, 0.84089641525, 0.7071067811803005, 0.5946035574934808, 0.4999999999911653], std=[38.7568578, 37.88248729, 40.02898126], std_norm=False, test=False, test_subset='val', train_crop='corner', video_path='../kinetics_frame', weight_decay=0.001, wide_resnet_k=2)
/home/cni8/Reproduce/3D-ResNets-PyTorch/models/resnet.py:145: UserWarning: nn.init.kaiming_normal is now deprecated in favor of nn.init.kaiming_normal_.
  m.weight = nn.init.kaiming_normal(m.weight, mode='fan_out')
DataParallel(
  (module): ResNet(
    (conv1): Conv3d(3, 64, kernel_size=(7, 7, 7), stride=(1, 2, 2), padding=(3, 3, 3), bias=False)
    (bn1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
    (relu): ReLU(inplace)
    (maxpool): MaxPool3d(kernel_size=(3, 3, 3), stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): BasicBlock(
        (conv1): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (1): BasicBlock(
        (conv1): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(64, 64, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer2): Sequential(
      (0): BasicBlock(
        (conv1): Conv3d(64, 128, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv3d(64, 128, kernel_size=(1, 1, 1), stride=(2, 2, 2), bias=False)
          (1): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(128, 128, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer3): Sequential(
      (0): BasicBlock(
        (conv1): Conv3d(128, 256, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv3d(128, 256, kernel_size=(1, 1, 1), stride=(2, 2, 2), bias=False)
          (1): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(256, 256, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (layer4): Sequential(
      (0): BasicBlock(
        (conv1): Conv3d(256, 512, kernel_size=(3, 3, 3), stride=(2, 2, 2), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (downsample): Sequential(
          (0): Conv3d(256, 512, kernel_size=(1, 1, 1), stride=(2, 2, 2), bias=False)
          (1): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        )
      )
      (1): BasicBlock(
        (conv1): Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn1): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (relu): ReLU(inplace)
        (conv2): Conv3d(512, 512, kernel_size=(3, 3, 3), stride=(1, 1, 1), padding=(1, 1, 1), bias=False)
        (bn2): BatchNorm3d(512, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
    )
    (avgpool): AvgPool3d(kernel_size=(1, 4, 4), stride=1, padding=0)
    (fc): Linear(in_features=512, out_features=400, bias=True)
  )
)
dataset loading [0/1199]
dataset loading [1000/1199]
run train at epoch 1
Epoch: [1][1/70] Time 10.410 (10.410) Data 2.259 (2.259) Loss 6.1790 (6.1790) Acc 0.000 (0.000)
Epoch: [1][2/70] Time 0.437 (5.423) Data 0.000 (1.130) Loss 6.4429 (6.3110) Acc 0.000 (0.000)
Epoch: [1][3/70] Time 0.412 (3.753) Data 0.000 (0.753) Loss 7.9052 (6.8424) Acc 0.000 (0.000)
Epoch: [1][4/70] Time 0.446 (2.926) Data 0.000 (0.565) Loss 9.3155 (7.4607) Acc 0.000 (0.000)
Epoch: [1][5/70] Time 0.468 (2.435) Data 0.000 (0.452) Loss 10.0782 (7.9842) Acc 0.000 (0.000)
Epoch: [1][6/70] Time 0.427 (2.100) Data 0.000 (0.377) Loss 10.5896 (8.4184) Acc 0.000 (0.000)
Epoch: [1][7/70] Time 0.475 (1.868) Data 0.000 (0.323) Loss 10.6773 (8.7411) Acc 0.000 (0.000)
Epoch: [1][8/70] Time 0.443 (1.690) Data 0.000 (0.283) Loss 9.6822 (8.8587) Acc 0.000 (0.000)
Epoch: [1][9/70] Time 0.481 (1.555) Data 0.000 (0.251) Loss 10.2983 (9.0187) Acc 0.000 (0.000)
Epoch: [1][10/70] Time 0.471 (1.447) Data 0.000 (0.226) Loss 9.3799 (9.0548) Acc 0.000 (0.000)
Epoch: [1][11/70] Time 0.448 (1.356) Data 0.000 (0.206) Loss 9.0464 (9.0541) Acc 0.000 (0.000)
Epoch: [1][12/70] Time 0.480 (1.283) Data 0.000 (0.188) Loss 8.0524 (8.9706) Acc 0.062 (0.005)
Epoch: [1][13/70] Time 0.437 (1.218) Data 0.000 (0.174) Loss 7.4827 (8.8561) Acc 0.000 (0.005)
Epoch: [1][14/70] Time 0.467 (1.164) Data 0.000 (0.162) Loss 7.4522 (8.7559) Acc 0.000 (0.004)
Epoch: [1][15/70] Time 0.436 (1.116) Data 0.000 (0.151) Loss 7.0923 (8.6450) Acc 0.000 (0.004)
Epoch: [1][16/70] Time 0.476 (1.076) Data 0.000 (0.141) Loss 6.8035 (8.5299) Acc 0.000 (0.004)
Epoch: [1][17/70] Time 0.441 (1.039) Data 0.000 (0.133) Loss 6.8216 (8.4294) Acc 0.000 (0.004)
Epoch: [1][18/70] Time 0.466 (1.007) Data 0.000 (0.126) Loss 6.6213 (8.3289) Acc 0.000 (0.003)
Epoch: [1][19/70] Time 0.456 (0.978) Data 0.087 (0.124) Loss 6.6262 (8.2393) Acc 0.000 (0.003)
Epoch: [1][20/70] Time 0.456 (0.952) Data 0.000 (0.117) Loss 6.5120 (8.1529) Acc 0.000 (0.003)
Epoch: [1][21/70] Time 0.474 (0.929) Data 0.000 (0.112) Loss 6.7172 (8.0846) Acc 0.000 (0.003)
Epoch: [1][22/70] Time 1.134 (0.938) Data 0.921 (0.149) Loss 6.3494 (8.0057) Acc 0.000 (0.003)
Epoch: [1][23/70] Time 0.449 (0.917) Data 0.000 (0.142) Loss 6.3078 (7.9319) Acc 0.000 (0.003)
Epoch: [1][24/70] Time 0.486 (0.899) Data 0.000 (0.136) Loss 6.1182 (7.8563) Acc 0.000 (0.003)
Epoch: [1][25/70] Time 0.420 (0.880) Data 0.025 (0.132) Loss 6.5107 (7.8025) Acc 0.000 (0.003)
Epoch: [1][26/70] Time 1.270 (0.895) Data 1.053 (0.167) Loss 6.3391 (7.7462) Acc 0.000 (0.002)
Epoch: [1][27/70] Time 0.481 (0.880) Data 0.000 (0.161) Loss 6.4222 (7.6972) Acc 0.000 (0.002)
Epoch: [1][28/70] Time 0.425 (0.863) Data 0.000 (0.155) Loss 6.4222 (7.6516) Acc 0.000 (0.002)
Epoch: [1][29/70] Time 0.487 (0.850) Data 0.000 (0.150) Loss 6.3698 (7.6074) Acc 0.000 (0.002)
Epoch: [1][30/70] Time 0.796 (0.849) Data 0.618 (0.166) Loss 6.3714 (7.5662) Acc 0.000 (0.002)
Epoch: [1][31/70] Time 0.499 (0.837) Data 0.000 (0.160) Loss 6.3212 (7.5261) Acc 0.000 (0.002)
Epoch: [1][32/70] Time 0.408 (0.824) Data 0.000 (0.155) Loss 6.3478 (7.4893) Acc 0.000 (0.002)
Epoch: [1][33/70] Time 0.494 (0.814) Data 0.000 (0.151) Loss 6.4028 (7.4563) Acc 0.000 (0.002)
Epoch: [1][34/70] Time 0.815 (0.814) Data 0.645 (0.165) Loss 6.2999 (7.4223) Acc 0.000 (0.002)
Epoch: [1][35/70] Time 0.481 (0.804) Data 0.000 (0.160) Loss 6.3226 (7.3909) Acc 0.000 (0.002)
Epoch: [1][36/70] Time 0.420 (0.794) Data 0.000 (0.156) Loss 6.4976 (7.3661) Acc 0.000 (0.002)
Epoch: [1][37/70] Time 0.473 (0.785) Data 0.000 (0.152) Loss 6.2156 (7.3350) Acc 0.000 (0.002)
Epoch: [1][38/70] Time 0.544 (0.779) Data 0.339 (0.157) Loss 6.2578 (7.3066) Acc 0.000 (0.002)
Epoch: [1][39/70] Time 0.472 (0.771) Data 0.000 (0.153) Loss 6.4460 (7.2846) Acc 0.000 (0.002)
Epoch: [1][40/70] Time 0.413 (0.762) Data 0.000 (0.149) Loss 6.3815 (7.2620) Acc 0.000 (0.002)
Epoch: [1][41/70] Time 0.494 (0.755) Data 0.000 (0.145) Loss 6.4140 (7.2413) Acc 0.000 (0.002)
Epoch: [1][42/70] Time 0.419 (0.747) Data 0.160 (0.146) Loss 6.1969 (7.2164) Acc 0.000 (0.001)
Epoch: [1][43/70] Time 0.483 (0.741) Data 0.000 (0.142) Loss 6.4049 (7.1976) Acc 0.000 (0.001)
Epoch: [1][44/70] Time 0.415 (0.734) Data 0.000 (0.139) Loss 6.2635 (7.1763) Acc 0.000 (0.001)
Epoch: [1][45/70] Time 1.369 (0.748) Data 1.186 (0.162) Loss 6.8247 (7.1685) Acc 0.000 (0.001)
Epoch: [1][46/70] Time 0.441 (0.741) Data 0.079 (0.160) Loss 6.4808 (7.1536) Acc 0.000 (0.001)
Epoch: [1][47/70] Time 0.460 (0.735) Data 0.000 (0.157) Loss 6.2598 (7.1346) Acc 0.000 (0.001)
Epoch: [1][48/70] Time 0.500 (0.730) Data 0.000 (0.154) Loss 6.0590 (7.1122) Acc 0.000 (0.001)
Epoch: [1][49/70] Time 0.779 (0.731) Data 0.572 (0.162) Loss 6.4662 (7.0990) Acc 0.000 (0.001)
Epoch: [1][50/70] Time 0.460 (0.726) Data 0.000 (0.159) Loss 6.4337 (7.0857) Acc 0.000 (0.001)
Epoch: [1][51/70] Time 0.494 (0.721) Data 0.000 (0.156) Loss 6.3209 (7.0707) Acc 0.000 (0.001)
Epoch: [1][52/70] Time 0.447 (0.716) Data 0.000 (0.153) Loss 6.3705 (7.0572) Acc 0.000 (0.001)
Epoch: [1][53/70] Time 0.577 (0.713) Data 0.362 (0.157) Loss 6.5472 (7.0476) Acc 0.000 (0.001)
Epoch: [1][54/70] Time 0.409 (0.708) Data 0.000 (0.154) Loss 6.5960 (7.0392) Acc 0.000 (0.001)
Epoch: [1][55/70] Time 0.487 (0.704) Data 0.000 (0.151) Loss 6.5293 (7.0300) Acc 0.000 (0.001)
Epoch: [1][56/70] Time 0.430 (0.699) Data 0.185 (0.152) Loss 6.2897 (7.0167) Acc 0.000 (0.001)
Epoch: [1][57/70] Time 1.139 (0.707) Data 0.950 (0.166) Loss 6.4890 (7.0075) Acc 0.000 (0.001)
Epoch: [1][58/70] Time 0.490 (0.703) Data 0.000 (0.163) Loss 6.3642 (6.9964) Acc 0.000 (0.001)
Epoch: [1][59/70] Time 0.424 (0.698) Data 0.000 (0.160) Loss 6.1828 (6.9826) Acc 0.000 (0.001)
Epoch: [1][60/70] Time 0.477 (0.694) Data 0.000 (0.158) Loss 6.3884 (6.9727) Acc 0.000 (0.001)
Epoch: [1][61/70] Time 0.464 (0.691) Data 0.000 (0.155) Loss 6.1745 (6.9596) Acc 0.000 (0.001)
Epoch: [1][62/70] Time 0.435 (0.687) Data 0.000 (0.152) Loss 6.2590 (6.9483) Acc 0.000 (0.001)
Epoch: [1][63/70] Time 0.486 (0.683) Data 0.000 (0.150) Loss 6.3035 (6.9381) Acc 0.000 (0.001)
Epoch: [1][64/70] Time 0.426 (0.679) Data 0.211 (0.151) Loss 6.4974 (6.9312) Acc 0.000 (0.001)
Epoch: [1][65/70] Time 1.368 (0.690) Data 1.162 (0.167) Loss 6.1299 (6.9189) Acc 0.000 (0.001)
Epoch: [1][66/70] Time 0.465 (0.687) Data 0.000 (0.164) Loss 6.2865 (6.9093) Acc 0.000 (0.001)
Epoch: [1][67/70] Time 0.474 (0.683) Data 0.000 (0.162) Loss 6.3081 (6.9003) Acc 0.000 (0.001)
Epoch: [1][68/70] Time 0.464 (0.680) Data 0.000 (0.159) Loss 6.4757 (6.8941) Acc 0.000 (0.001)
Epoch: [1][69/70] Time 0.450 (0.677) Data 0.000 (0.157) Loss 6.4588 (6.8878) Acc 0.000 (0.001)
Epoch: [1][70/70] Time 0.409 (0.673) Data 0.000 (0.155) Loss 6.4052 (6.8860) Acc 0.000 (0.001)
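For anyone who wants to reproduce the "every output label is almost the same" observation, printing the argmax of the outputs next to the targets inside the training loop makes the collapse visible. The variable names (`outputs`, `targets`) are assumed to match train.py in this repo; the snippet below is a self-contained illustrative sketch, with stand-in logits simulating one collapsed batch:

```python
import torch

# Stand-in for one training step; in the real loop `outputs` would come from
# outputs = model(inputs). Adding the same small noise vector to every row
# simulates collapsed, input-independent logits.
outputs = torch.zeros(16, 400) + torch.randn(400) * 0.01
targets = torch.randint(0, 400, (16,))

preds = outputs.argmax(dim=1)
print('preds  :', preds.tolist())    # one repeated class across the whole batch
print('targets:', targets.tolist())
print('distinct predictions in batch:', preds.unique().numel())  # 1 when collapsed
```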
Did you solve the problem?
Same problem here. Have you solved it? Please let me know, thanks!
@QAQ33 @seominseok0429 @chaofiber Hello, have you solved this problem? I know it was a long time ago.
Is it expected that the mean and std are not normalized to 0-1?
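If I read the options correctly, that is intentional here: with norm_value=1 (see the Namespace above) the frames stay on the 0-255 scale, and with std_norm=False only the per-channel mean is subtracted, so mean=[114.7748, 107.7354, 99.475] is on the 0-255 scale as well. A hedged sketch of the implied normalization follows; it is not the repo's exact code (check spatial_transforms.py and main.py to confirm):

```python
import torch

# Assumed behaviour per the printed options (norm_value=1, std_norm=False):
# pixels remain in [0, 255] and only the channel mean is subtracted.
mean = torch.tensor([114.7748, 107.7354, 99.475])  # activitynet mean, 0-255 scale
std = torch.tensor([1.0, 1.0, 1.0])                # std_norm=False -> std left at 1

def normalize(clip):
    # clip: float tensor of shape (C, T, H, W) with values in [0, 255]
    return (clip - mean[:, None, None, None]) / std[:, None, None, None]

clip = torch.randint(0, 256, (3, 16, 112, 112)).float()
print(normalize(clip).mean(dim=(1, 2, 3)))  # per-channel means roughly near zero
```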