Pytorch_Realtime_Multi-Person_Pose_Estimation

low mAP on coco because of preprocessing?

Open puppywst opened this issue 7 years ago • 2 comments

I got a low mAP (~30%) on the 1160 images of COCO val2014 (the subset selected by OpenPose). I trained the model myself with the pretrained vgg19_10.

I read the preprocessing code and have a question. The RandomScale transform only uses the scale of person 0 in the annotation to scale the image, but in the official code each person is treated as the center person in turn and scaled accordingly. With your approach, the range of scales in the dataset is smaller than in the original code. A rough sketch of what I mean is below.
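(Just to illustrate the difference; the field name `scale_provided` and the exact parameter values are my assumptions, not the repo's exact API.)

```python
import random
import cv2

def random_scale_person0(img, meta, scale_min=0.5, scale_max=1.1, target=0.6):
    """What I understand this repo does: the scale factor comes from person 0 only."""
    multiplier = random.uniform(scale_min, scale_max)
    scale = target / meta['scale_provided'][0]          # always person 0
    factor = scale * multiplier
    return cv2.resize(img, (0, 0), fx=factor, fy=factor)

def random_scale_center_person(img, meta, person_idx, scale_min=0.5, scale_max=1.1, target=0.6):
    """What the official augmentation does: each annotated person becomes the
    center person of one training sample, so the scale comes from that person."""
    multiplier = random.uniform(scale_min, scale_max)
    scale = target / meta['scale_provided'][person_idx]  # the current center person
    factor = scale * multiplier
    return cv2.resize(img, (0, 0), fx=factor, fy=factor)
```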

Could this be the reason why I got a low mAP?

puppywst avatar Aug 17 '18 07:08 puppywst

@puppywst Hi, how do you compute the mAP? Could you share your method? Thanks.

NokiaDDT avatar Aug 20 '18 02:08 NokiaDDT

@NokiaDDT For those 1160 images, I padded them with zeros to a size of 1000x1000, fed the padded image into the network, and then cropped the heatmaps and PAFs back to the original image size. This repo's postprocessing in the test folder works well. Roughly, the per-image step looks like the sketch below.
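(A minimal sketch of that padding/cropping step, assuming the model returns `(paf, heatmap)` tensors; the function and variable names are placeholders, not code from this repo.)

```python
import numpy as np
import torch
import torch.nn.functional as F

def eval_one_image(model, img, pad_size=1000):
    h, w = img.shape[:2]

    # zero-pad the image to pad_size x pad_size (bottom/right padding)
    padded = np.zeros((pad_size, pad_size, 3), dtype=img.dtype)
    padded[:h, :w, :] = img

    inp = torch.from_numpy(padded).float().permute(2, 0, 1).unsqueeze(0)
    with torch.no_grad():
        paf, heatmap = model(inp)  # placeholder output order

    # upsample the network outputs back to pad_size, then crop to the original size
    heatmap = F.interpolate(heatmap, size=(pad_size, pad_size), mode='bilinear', align_corners=False)
    paf = F.interpolate(paf, size=(pad_size, pad_size), mode='bilinear', align_corners=False)
    heatmap = heatmap[0, :, :h, :w].cpu().numpy()
    paf = paf[0, :, :h, :w].cpu().numpy()
    return heatmap, paf
```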

I recommend reading this repo: https://github.com/tensorboy/pytorch_Realtime_Multi-Person_Pose_Estimation. The important differences are in the image preprocessing and image transformations. It also has an evaluation part and gets almost the same mAP as the official OpenPose.
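For the mAP itself, I use the standard COCO keypoint evaluation from pycocotools. Sketch below, assuming you have dumped the predicted keypoints to a JSON file in the COCO results format (`predictions_keypoints.json` and `image_ids_subset` are placeholder names for your own files/lists).

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# ground-truth keypoint annotations for val2014
coco_gt = COCO('annotations/person_keypoints_val2014.json')
# your detections in COCO results format (hypothetical file name)
coco_dt = coco_gt.loadRes('predictions_keypoints.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.params.imgIds = image_ids_subset   # restrict to the 1160-image subset
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                        # prints AP / mAP
```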

puppywst avatar Aug 20 '18 06:08 puppywst