
When will the checkpoints for Kinetics-400 be uploaded?

Open hw-liang opened this issue 4 years ago • 13 comments

hw-liang avatar Jan 06 '21 18:01 hw-liang

Maybe later this week or next week; I am still doing some final checks. I will let you know in this issue once they are uploaded.

TengdaHan avatar Jan 06 '21 19:01 TengdaHan


Thanks! Looking forward to your update!

hw-liang avatar Jan 18 '21 18:01 hw-liang

Is there any news on the K400 checkpoints? Looking forward to it as well.

June01 avatar Jan 25 '21 22:01 June01

Also waiting here, as I cannot reproduce the results for K400 training.

thematrixduo avatar Apr 08 '21 11:04 thematrixduo

Same here.

17Skye17 avatar Apr 19 '21 08:04 17Skye17

Sorry for the long delay... they have been uploaded now. https://github.com/TengdaHan/CoCLR#pretrained-weights

TengdaHan avatar Apr 29 '21 01:04 TengdaHan

Thanks for uploading this. Strangely, I cannot reproduce this result using the given instructions. I noticed that for the infoNCE training you run 'main_infonce.py' and 'teco_fb_main.py' instead of 'main_nce.py'. Are they the same files?

In fact, I cannot reproduce the result for UCF101 pretraining either. If anyone has succeeded in reproducing the result with the latest PyTorch package, please let me know.

thematrixduo avatar May 04 '21 13:05 thematrixduo

@TengdaHan Hi Tengda, thanks for uploading it! I am a little bit confused: are these just the weights from training on K400 only, or from joint training on K400 first and then UCF101? Here is the retrieval performance I got on UCF101 without any further training:

1NN acc = 0.5062, 5NN acc = 0.6845, 10NN acc = 0.7638, 20NN acc = 0.8371, 50NN acc = 0.9082

June01 avatar May 13 '21 04:05 June01
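For context on the numbers above, top-k NN retrieval accuracy of this kind is typically computed by embedding all clips, then checking whether any of a test clip's k nearest training clips shares its class label. Below is a minimal NumPy sketch of that metric (my own illustration with my own function and variable names, not the repository's actual evaluation code, which may differ in details such as the similarity measure):

```python
import numpy as np

def knn_retrieval_acc(test_feats, test_labels, train_feats, train_labels,
                      ks=(1, 5, 10, 20, 50)):
    """Top-k NN retrieval accuracy: a test clip counts as correct if any of
    its k nearest training clips (by cosine similarity) shares its label."""
    # L2-normalise so the dot product equals cosine similarity
    test_feats = test_feats / np.linalg.norm(test_feats, axis=1, keepdims=True)
    train_feats = train_feats / np.linalg.norm(train_feats, axis=1, keepdims=True)
    sim = test_feats @ train_feats.T                 # (n_test, n_train)
    order = np.argsort(-sim, axis=1)                 # neighbours, best first
    accs = {}
    for k in ks:
        topk_labels = train_labels[order[:, :k]]     # (n_test, k)
        hit = (topk_labels == test_labels[:, None]).any(axis=1)
        accs[k] = hit.mean()
    return accs
```

With features from a K400-pretrained backbone extracted on the UCF101 train/test splits, this would produce the 1NN/5NN/... accuracies quoted above.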

@June01 The weights under "Kinetics400-pretrained models" are self-supervised-trained on K400 only. Hmm, your retrieval result here ("1NN=0.5062") is better than what I got with the same model last year. In any case, in our paper we only report NN retrieval with UCF101-pretrained weights.

@thematrixduo Regarding the filename: sorry, they are the same file; I renamed it. What accuracy did you get when reproducing? If the UCF101-RGB finetune is around 86%-88%, I think it's acceptable. Our 90+ result is obtained by fusing two-stream predictions.

TengdaHan avatar May 13 '21 15:05 TengdaHan
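The two-stream fusion mentioned here is usually a late fusion: average the per-class probabilities from the RGB and Flow networks and take the argmax. A minimal sketch of that idea (my own illustration; the weighting scheme and function names are assumptions, not necessarily the paper's exact procedure):

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_two_stream(rgb_logits, flow_logits, w=0.5):
    """Late fusion: average the per-class probabilities of the RGB and
    Flow streams, then take the argmax as the fused prediction."""
    probs = w * softmax(rgb_logits) + (1 - w) * softmax(flow_logits)
    return probs.argmax(axis=-1)
```

This is why the fused number can exceed either single-stream finetune result: the two streams make partially independent errors.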


Thanks for the reply. I can only get 82% for K400 pretraining, and only 78% for UCF101 pretraining (2 cycles). For the UCF101 pretraining I used your uploaded LMDB data and strictly followed the instructions given here.

thematrixduo avatar May 13 '21 15:05 thematrixduo

Did you pretrain any other architectures, like R(2+1)D-18, on Kinetics-400?

fmthoker avatar Aug 17 '21 14:08 fmthoker

@fmthoker No, we didn't. We only used the S3D backbone in our experiments.

TengdaHan avatar Aug 18 '21 19:08 TengdaHan

@thematrixduo I updated the code to fix an issue that might have reduced the training efficiency of the co-training stage. It may be related to the UCF101 reproduction: https://github.com/TengdaHan/CoCLR/issues/43

TengdaHan avatar Oct 10 '21 20:10 TengdaHan