
Training parameters for the EgoGesture dataset

Open · kinfeparty opened this issue 4 years ago • 17 comments

Hello! I have been following your work for a month. Thanks for providing the pretrained models!

I want to train from scratch and match the accuracy of the provided pretrained model.

I trained a ResNeXt-101 network on the Jester dataset and reached 91% accuracy. I then fine-tuned this pretrained network on the EgoGesture dataset, but the highest accuracy I get is 90.9%, which does not reach the 94% accuracy of the provided pretrained model. My loss is 0.65 while the pretrained model's is 0.48. I suspect some parameter difference is causing this, because my model weights file is 364.14 MB while yours is 364.21 MB.

Here are my parameters for the Jester dataset:

python main.py --root_path /Lun4/fdh/Real-time-GesRec \
--video_path ../jester-dataset/20bn-jester-v1 \
--annotation_path annotation_Jester/jester.json \
--result_path results_jester \
--dataset jester \
--n_classes 27 \
--model resnext \
--model_depth 101 \
--resnext_cardinality 32 \
--resnet_shortcut B \
--train_crop random \
--learning_rate 0.1 \
--sample_duration 32 \
--modality RGB \
--downsample 1 \
--batch_size 24 \
--n_threads 16 \
--checkpoint 1 \
--n_val_samples 1 \
--test_subset test \
--n_epochs 100

Here are my parameters for fine-tuning on the EgoGesture dataset:

python main.py --root_path /Lun4/fdh/Real-time-GesRec \
--video_path ../egogesture-dataset/image \
--annotation_path annotation_EgoGesture/egogestureall_but_None.json \
--pretrain_path results_jester/jester_resnext_1.0x_RGB_32_best.pth \
--result_path results_ego_jest \
--dataset egogesture \
--n_classes 27 \
--n_finetune_classes 83 \
--resnext_cardinality 32 \
--model resnext \
--model_depth 101 \
--pretrain_modality RGB \
--resnet_shortcut B \
--train_crop random \
--learning_rate 0.01 \
--sample_duration 32 \
--modality Depth \
--batch_size 24 \
--n_threads 16 \
--checkpoint 1 \
--n_val_samples 1 \
--test \
--n_epochs 100 \
--ft_portion last_layer

I hope you can help me resolve this accuracy gap if any of my parameters are wrong. I would be grateful if you could share the parameters you used for the Jester and EgoGesture datasets with ResNeXt-101.

I hope this is not too much trouble.

kinfeparty avatar Jun 16 '20 08:06 kinfeparty

Hi @kinfeparty, the first things that come to my mind are to use train_crop="center", start with learning_rate=0.01, and adjust lr_steps after observing the training. lr_steps divides the learning rate by 10 at each step defined.
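For what it's worth, the effect of lr_steps can be sketched with PyTorch's MultiStepLR (an illustration of the step-wise decay, not necessarily the repository's exact scheduler; the model and optimizer below are placeholders):

import torch.nn as nn
from torch.optim import SGD
from torch.optim.lr_scheduler import MultiStepLR

# Placeholder model/optimizer just to illustrate the schedule.
model = nn.Linear(10, 27)
optimizer = SGD(model.parameters(), lr=0.01)

# Divide the learning rate by 10 (gamma=0.1) at each milestone epoch,
# which is what the --lr_steps option controls during training.
scheduler = MultiStepLR(optimizer, milestones=[15, 25, 35], gamma=0.1)

for epoch in range(40):
    # ... one training epoch would run here ...
    scheduler.step()  # lr: 0.01 -> 0.001 at epoch 15, -> 0.0001 at 25, ...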

ahmetgunduz avatar Jun 16 '20 11:06 ahmetgunduz

> Hi @kinfeparty, the first things that come to my mind are to use train_crop="center", start with learning_rate=0.01, and adjust lr_steps after observing the training. lr_steps divides the learning rate by 10 at each step defined.

Do you mean train_crop="center" for Jester or for EgoGesture? I found that this parameter is set to "random" in all of the .sh files you provided.

kinfeparty avatar Jun 16 '20 11:06 kinfeparty

In both of them. random just adds an augmentation procedure; especially for Jester I would use center. One question: for the EgoGesture training, do you use your own pretrained model or the pretrained model I provided?
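For reference, the difference between the two crop settings, sketched with torchvision transforms (an illustration assuming a 112x112 input size, not the repository's own spatial-transform code):

from torchvision import transforms

# "random" crop: stochastic augmentation -- a different 112x112 window
# is taken from each clip at every epoch, adding spatial variation.
random_crop = transforms.RandomCrop(112)

# "center" crop: deterministic -- always the same central 112x112 window,
# so training and validation see consistent spatial content.
center_crop = transforms.CenterCrop(112)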

ahmetgunduz avatar Jun 16 '20 11:06 ahmetgunduz

Because I want to reproduce the best result myself, I use my own pretrained model. I found that when I try to resume training on the EgoGesture dataset, this error occurs:

size mismatch for module.conv1.weight: copying a param with shape torch.Size([64, 1, 3, 7, 7]) from checkpoint, the shape in current model is torch.Size([64, 1, 7, 7, 7]).

So I should train the model on the Jester dataset with "model = _modify_first_conv_layer(model, 7, 3)"?
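For context, a minimal, hypothetical sketch of what such a first-conv modification can look like (the real _modify_first_conv_layer in the repository may differ in signature and behavior; this only illustrates the idea behind the fix):

import torch.nn as nn

def modify_first_conv_layer(model, t_kernel=3):
    # Hypothetical sketch: rebuild the first 3D conv so its temporal
    # kernel size matches the checkpoint (e.g. 3x7x7 instead of 7x7x7),
    # resolving the reported size mismatch for module.conv1.weight.
    old = model.module.conv1
    model.module.conv1 = nn.Conv3d(
        in_channels=old.in_channels,
        out_channels=old.out_channels,
        kernel_size=(t_kernel, 7, 7),
        stride=old.stride,
        padding=(t_kernel // 2, 3, 3),
        bias=False,
    )
    return model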

kinfeparty avatar Jun 16 '20 11:06 kinfeparty

That is right!

ahmetgunduz avatar Jun 16 '20 11:06 ahmetgunduz

Thank you! So the Jester model you provided on the website is the one you used for pretraining on the EgoGesture dataset to get the best result?

kinfeparty avatar Jun 16 '20 11:06 kinfeparty

Yes, that is right.

ahmetgunduz avatar Jun 16 '20 11:06 ahmetgunduz

Thank you, I will try it.

kinfeparty avatar Jun 16 '20 11:06 kinfeparty

Hello, I used your pretrained Jester model and it gets 95% accuracy on the validation set.

I tried center crop (I also found another option, "corner"), but there is a bug in center crop: [screenshot of the error was attached here]. I don't know how to fix this bug, so I set the learning rate to 0.01 and resumed training the pretrained model on the Jester dataset with random crop. I'm waiting for the result; the loss looks good, but I thought I should report this bug.

kinfeparty avatar Jun 16 '20 14:06 kinfeparty

Thanks for letting me know. Strange!

Anyway, please let me know if you are able to replicate the accuracy in this setting.

ahmetgunduz avatar Jun 17 '20 07:06 ahmetgunduz

I found that the provided pretrained model is from epoch 14 and reaches 95% accuracy.

With my parameters, the result at epoch 10 is 80.8% accuracy.

I don't think my model can reach 95% accuracy in the remaining 4 epochs.

python main.py --root_path /Lun4/fdh/Real-time-GesRec \
--video_path ../jester-dataset/20bn-jester-v1 \
--annotation_path annotation_Jester/jester.json \
--result_path results_jester \
--dataset jester \
--n_classes 27 \
--n_finetune_classes 27 \
--model resnext \
--model_depth 101 \
--resnext_cardinality 32 \
--resnet_shortcut B \
--train_crop random \
--learning_rate 0.01 \
--sample_duration 32 \
--modality RGB \
--downsample 1 \
--batch_size 48 \
--n_threads 16 \
--checkpoint 1 \
--n_val_samples 1 \
--test_subset test \
--n_epochs 100

These are my parameters. I use two GPUs, so I changed batch_size to 48. I'm not sure center crop is necessary to reach 95% accuracy. Can you help me fix the center-crop bug, or share your training details, such as the number of GPUs?

kinfeparty avatar Jun 17 '20 08:06 kinfeparty

The center crop is probably not the reason for a 15 percent difference. I was using one Nvidia Titan XP GPU. I do not recall the batch size; it was probably 8. What are your lr_steps parameters?

ahmetgunduz avatar Jun 17 '20 08:06 ahmetgunduz

I use the default parameter: --lr_steps = [15, 25, 35, 45, 60, 50, 200, 250].

The pretrained model is from epoch 14, so should lr_steps be modified for the Jester dataset?

kinfeparty avatar Jun 17 '20 08:06 kinfeparty

Yes, please change it to [10, 15] and resume training from the 10th epoch.
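Concretely, that would be something like the following, assuming the repository's --resume_path and --lr_steps flags (the checkpoint filename is illustrative):

python main.py --root_path /Lun4/fdh/Real-time-GesRec \
--resume_path results_jester/save_10.pth \
--lr_steps 10 15 \
...  # all other flags as in the training command above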

ahmetgunduz avatar Jun 17 '20 08:06 ahmetgunduz

OK, I will try it and report the result.

kinfeparty avatar Jun 17 '20 09:06 kinfeparty

I found a parameter in https://github.com/okankop/Efficient-3DCNNs: the MobileNet model uses --downsample 2, while for the UCF-101 dataset downsample is 1.

So what is the meaning of "Selecting 1 frame out of N"? Why do different datasets need different downsample parameters?
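"Selecting 1 frame out of N" usually refers to temporal subsampling of the input clip; a minimal sketch of the idea (hypothetical code, not the repository's actual loader):

def downsample_frames(frame_indices, downsample=2):
    # Keep every `downsample`-th frame, i.e. select 1 frame out of N.
    # downsample=1 keeps all frames; downsample=2 halves the frame rate,
    # so a clip of fixed length covers a longer time span. Datasets with
    # slower motion or higher frame rates can tolerate larger values.
    return frame_indices[::downsample]

print(downsample_frames(list(range(8)), 2))  # [0, 2, 4, 6]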

kinfeparty avatar Jun 18 '20 05:06 kinfeparty

Hello, can you provide guidance on how to use ResNet-101 for training on the Jester dataset? What commands should be used in the terminal? I hope you can help me solve this problem. Thank you very much!

kido1412y2y avatar Apr 04 '23 05:04 kido1412y2y