Realtime_Multi-Person_Pose_Estimation
AP accuracy reported in COCO workshop 2016 slides
Is the accuracy reported in the slides for CPM + gtbox measured on your custom (small) validation set? I tested our CPM model on the whole validation set and got an AP of only around 0.36, with a single scale and the ground-truth box as initialization.
Yes, on my small validation set. But there should not be such a large difference.
I created the LMDB from the COCO dataset similarly to the original CPM work, bypassing the JSON files (creating the LMDB directly from the COCO API). I skipped persons that are smaller than 32×32, have fewer than 5 annotated keypoints, or are too close to other persons. I didn't use much augmentation (scaling only) and trained with a solver similar to the one in the multi-person version. For the evaluation I used the CPM_demo code and integrated it into the COCO evaluation. I don't think there is anything special in the evaluation code, as I can see it is working; only the results are pretty bad.
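The filtering described above can be sketched roughly as follows. This is a minimal illustration, not the repo's actual LMDB-generation code: the field names mirror the COCO annotation format (`bbox = [x, y, w, h]`, `num_keypoints`), and the "too close" distance threshold is a hypothetical placeholder.

```python
# Sketch of the person-filtering rules described above.
# Thresholds and the center-distance "too close" test are assumptions,
# not the repository's exact logic.

def keep_person(ann, others, min_side=32, min_kpts=5, min_dist=32.0):
    """Return True if a COCO person annotation passes all three filters."""
    x, y, w, h = ann["bbox"]
    if w < min_side or h < min_side:        # skip persons smaller than 32x32
        return False
    if ann["num_keypoints"] < min_kpts:     # skip sparsely annotated persons
        return False
    cx, cy = x + w / 2.0, y + h / 2.0
    for other in others:
        if other is ann:
            continue
        ox, oy, ow, oh = other["bbox"]
        ocx, ocy = ox + ow / 2.0, oy + oh / 2.0
        if ((cx - ocx) ** 2 + (cy - ocy) ** 2) ** 0.5 < min_dist:
            return False                    # too close to another person
    return True
```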
I also have a question about the multi-scale approach. In testing, I think you sum up the heatmaps of the joints across the different scales. I was thinking of taking the max value across these heatmaps for each joint instead.
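The two fusion strategies being compared can be sketched like this. The function names and the assumption that all per-scale heatmaps are already resized to a common `(H, W, J)` resolution are illustrative, not the repo's actual API.

```python
import numpy as np

# Sketch of multi-scale heatmap fusion: summing over scales (as in the
# test code discussed above) vs. taking a per-pixel max over scales.
# Inputs: a list of (H, W, J) heatmap arrays, one per scale, already
# resized to the same resolution.

def fuse_heatmaps(heatmaps, mode="sum"):
    stack = np.stack(heatmaps, axis=0)      # shape (S, H, W, J)
    if mode == "sum":
        return stack.sum(axis=0)            # accumulate evidence over scales
    if mode == "max":
        return stack.max(axis=0)            # keep the strongest response
    raise ValueError(f"unknown mode: {mode}")
```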
Do you have any hints on why the single-scale results are so bad for the original version of CPM?
Thanks!
@ds2268 Hi, how do you get the AP result on the validation set? I ran evalCOCO.m, but I don't know what 'coco_val' means in the code. Could you please tell me how to get 'coco_val' or how to change the code? Thank you very much!
@louielu1027 I have the same question. Have you solved the problem?
@cg3575 coco_val is a text file whose lines look like 'image_name image_id'. Read the file in MATLAB, then use 'image_name' or 'image_id' wherever evalCOCO.m needs them.
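For reference, parsing a 'coco_val'-style file with one "image_name image_id" pair per line could look like this in Python (the exact file format is assumed from the description above):

```python
# Minimal sketch: parse lines of the form "image_name image_id" into a
# name -> integer id mapping. Blank lines are skipped. The file layout is
# inferred from the thread, not from the repo itself.

def parse_coco_val(lines):
    mapping = {}
    for line in lines:
        line = line.strip()
        if not line:
            continue
        name, image_id = line.split()
        mapping[name] = int(image_id)
    return mapping
```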
@louielu1027 What is the last line, evalDemo(), in evalCOCO.m? Have you reproduced the author's result? Thank you.
I got AP = 0.560. The evalDemo code is in the COCO toolbox MatlabAPI.
Hi everyone. I ran the test model on the COCO 2017 test-dev subset and achieved only 0.52 AP, which is far from the performance on the COCO validation subset. What about your tests?
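Since the whole thread is about AP numbers, it may help to recall what COCO keypoint AP is built on: the OKS (object keypoint similarity) score between a predicted and a ground-truth pose. Below is a minimal sketch of the OKS formula; the per-keypoint constants `kappas` are placeholders here (the real per-keypoint sigmas live in the COCO evaluation code), and only labeled keypoints (visibility > 0) contribute.

```python
import math

# Sketch of OKS, the per-person similarity that COCO keypoint AP thresholds.
# pred/gt: lists of (x, y); vis: visibility flags; area: ground-truth object
# area (used as the scale term s^2 in the COCO definition); kappas: per-
# keypoint falloff constants (illustrative values, not the official sigmas).

def oks(pred, gt, vis, area, kappas):
    num, den = 0.0, 0
    for (px, py), (gx, gy), v, k in zip(pred, gt, vis, kappas):
        if v <= 0:
            continue                         # unlabeled keypoints are ignored
        d2 = (px - gx) ** 2 + (py - gy) ** 2
        num += math.exp(-d2 / (2.0 * area * k ** 2))
        den += 1
    return num / den if den else 0.0
```

A perfect prediction scores OKS = 1; AP is then averaged over OKS thresholds from 0.50 to 0.95, which is why small localization errors on a full validation or test-dev set can pull the number well below results measured on a small custom split.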