Low mAP on COCO because of preprocessing?
I got a low mAP (~30%) on the 1160 images of COCO val2014 selected by OpenPose. I trained the model myself starting from the pretrained vgg19_10 weights.
I read the preprocessing code and have a question. The RandomScale transform only uses the scale of person 0 in the annotation to scale the image, but in the official code every annotated person is treated as the center person and scaled in turn. As a result, the scale diversity of the dataset here is smaller than in the original code.
Could this be the reason I got a low mAP?
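To illustrate the difference I mean, here is a minimal sketch, assuming each annotation carries a per-person `scale` field (the names and parameters below are hypothetical, not this repo's actual API). The official-style augmentation produces one scaled sample per annotated person, whereas scaling by person 0 alone yields a single sample:

```python
import random
import cv2

def random_scale_official_style(image, persons, target_dist=0.6, scale_range=(0.8, 1.1)):
    # Official-style augmentation: every annotated person becomes the center
    # person in turn, so one image yields len(persons) scaled samples.
    samples = []
    for person in persons:
        # person['scale'] is assumed to be the person's size relative to the image.
        factor = (target_dist / person['scale']) * random.uniform(*scale_range)
        samples.append(cv2.resize(image, None, fx=factor, fy=factor,
                                  interpolation=cv2.INTER_CUBIC))
    return samples

def random_scale_person0_only(image, persons, target_dist=0.6, scale_range=(0.8, 1.1)):
    # This repo's behavior as I read it: only person 0 drives the scale,
    # so the effective scale diversity seen in training is smaller.
    return random_scale_official_style(image, persons[:1], target_dist, scale_range)[0]
```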
@puppywst Hi, how do you compute mAP? Could you share your method? Thanks.
@NokiaDDT For those 1160 images, I zero-padded each image to 1000×1000, fed the padded image through the network, and then cropped the heatmaps and PAFs back to the original image size. This repo's postprocessing in the test folder is good.
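In case it helps, here is a rough sketch of that padding-and-cropping pipeline. It assumes a model that returns `(pafs, heatmaps)` as 4-D tensors; the output order and input normalization are assumptions, so adjust them to your network:

```python
import numpy as np
import torch
import torch.nn.functional as F

def pad_to_square(image, size=1000):
    # Zero-pad an HxWxC image to size x size, keeping the original in the top-left.
    h, w = image.shape[:2]
    padded = np.zeros((size, size, image.shape[2]), dtype=image.dtype)
    padded[:h, :w] = image
    return padded, (h, w)

def infer_and_crop(model, image, size=1000):
    padded, (h, w) = pad_to_square(image, size)
    # Normalization here is an assumption; use whatever your training used.
    inp = torch.from_numpy(padded.transpose(2, 0, 1)).float().unsqueeze(0) / 256.0 - 0.5
    with torch.no_grad():
        pafs, heatmaps = model(inp)  # assumed output order
    # Upsample the stride-downsampled outputs back to the padded size, then
    # crop away the zero padding so postprocessing sees only the true extent.
    heatmaps = F.interpolate(heatmaps, size=(size, size),
                             mode='bilinear', align_corners=False)[0, :, :h, :w]
    pafs = F.interpolate(pafs, size=(size, size),
                         mode='bilinear', align_corners=False)[0, :, :h, :w]
    return heatmaps.numpy(), pafs.numpy()
```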
I recommend reading this repo: https://github.com/tensorboy/pytorch_Realtime_Multi-Person_Pose_Estimation. The important differences are in the image preprocessing and image transformations. It also has an evaluation part and gets almost the same mAP as the official OpenPose.
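On @NokiaDDT's question about computing mAP: the standard way is pycocotools' `COCOeval` with `iouType='keypoints'`. A minimal sketch (the file paths and the image-id list are placeholders; detections go in a JSON list of `{"image_id", "category_id", "keypoints", "score"}` entries):

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO('annotations/person_keypoints_val2014.json')
coco_dt = coco_gt.loadRes('my_keypoint_results.json')

coco_eval = COCOeval(coco_gt, coco_dt, iouType='keypoints')
coco_eval.params.imgIds = eval_image_ids  # e.g. the 1160-image subset
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints AP/AR; the first AP line is the mAP discussed above
```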