PyTorch_YOWO
About the performance on JHMDB
Hi,
First of all, thank you for your great work!
I am wondering whether your work improves results only on the UCF and AVA datasets. I ran your code on the JHMDB dataset without changing anything, and the best frame-mAP I got was only 67%. Is this because the training settings are tuned only for UCF and AVA?
Also, I noticed that you freeze all of the 2D and 3D backbone parameters when training on JHMDB, whereas the original YOWO still allows the last few layers of the backbone to be fine-tuned. Could you explain this choice?
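For reference, here is a minimal PyTorch sketch of the difference being described: freezing an entire backbone versus leaving its last few blocks trainable. The helper name and the `backbone_2d` / `backbone_3d` attributes are illustrative assumptions, not identifiers from this repository or the original YOWO code.

```python
import torch.nn as nn

def freeze_backbone(backbone: nn.Module, trainable_last_n: int = 0) -> None:
    """Freeze a backbone, optionally keeping the last `trainable_last_n`
    top-level child modules trainable."""
    children = list(backbone.children())
    cutoff = len(children) - trainable_last_n
    for i, child in enumerate(children):
        requires_grad = i >= cutoff
        for p in child.parameters():
            p.requires_grad = requires_grad

# Hypothetical usage:
# freeze_backbone(model.backbone_2d, trainable_last_n=0)  # fully frozen (as in this repo's JHMDB setting)
# freeze_backbone(model.backbone_3d, trainable_last_n=2)  # last two blocks fine-tuned (original-YOWO style)
```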
@r09921135 Hi,
Thanks for your kind words about my work.
JHMDB is a very small-scale dataset, which makes it easy to overfit, so careful hyperparameter tuning is required. I find results on such a small dataset unconvincing, and I am not good at tuning hyperparameters, so I did not spend much effort on JHMDB. Both UCF101-24 and AVA are large-scale benchmarks, so I believe the improvements on those two benchmarks are enough to demonstrate the effectiveness of my work.
The JHMDB settings you see in my project were chosen more or less arbitrarily and are not a good choice.
If you manage to solve this problem, I hope you will share the strategy you adopted. Thanks a lot.