
Reproducing reported performance

Open JingweiJ opened this issue 3 years ago • 1 comment

Hi @ShirleyMaxx ,

Thanks a lot for the great work and releasing this codebase!

I'm trying to reproduce the reported performance of 43.4mm on H36M. After following the instructions for preparing the data and pretrained models, I'm simply training on 1 GPU with the config file human36m_vol_softmax_single.yaml. Instead of running for 9999 epochs, I trained for 30 epochs as indicated in the paper. However, the best result I got is MPJPE=55.0mm (the per_pose_error.Average.Average metric in metrics.json).

Would you mind clarifying if I'm doing something wrong? How should I modify the config file to reproduce the best performance?

JingweiJ avatar Nov 18 '21 20:11 JingweiJ

Hi, thanks for your attention and sorry for my late reply!

Since this work focuses on relative pose estimation, we first align the root joint before computing MPJPE, following standard practice. Therefore, you should check the per_pose_error_relative.Average.Average metric in metrics.json; that is the metric we report.
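To illustrate the difference between the two metrics, here is a minimal NumPy sketch of root-aligned MPJPE. The function name, argument shapes, and root joint index are illustrative assumptions, not the repo's exact code:

```python
import numpy as np

def mpjpe_root_aligned(pred, gt, root_idx=0):
    """Root-aligned MPJPE in mm.

    pred, gt: (N, J, 3) arrays of 3D joint positions.
    root_idx: index of the root joint (e.g. the pelvis) -- an
    illustrative assumption here.
    """
    # Subtract each pose's root joint so predicted and ground-truth
    # poses share the same origin (this removes global translation).
    pred_rel = pred - pred[:, root_idx:root_idx + 1, :]
    gt_rel = gt - gt[:, root_idx:root_idx + 1, :]
    # Mean Euclidean distance over all joints and samples.
    return np.linalg.norm(pred_rel - gt_rel, axis=-1).mean()
```

A prediction that is off by a pure global translation scores 0mm under this metric, while the unaligned per_pose_error would penalize it, which is why the two numbers in metrics.json can differ substantially.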

By the way, if you only use 1 GPU for training, you may also need more epochs to get a better result. Could you please try running 60 epochs (you can resume training from the 30-epoch checkpoint in our code) and check the root-aligned metric then? We've tried training several times using 4 GPUs, and 43.4mm is not the best result, so I think enough training should work.

ShirleyMaxx avatar Nov 22 '21 15:11 ShirleyMaxx