
Some questions about your projects

Open ChenhanXmu opened this issue 8 years ago • 24 comments

Dear Dr. Zhe Cao, I'm a postgraduate student at Xiamen University. Recently I have been focusing on research on multi-person pose estimation, and I have some questions about your projects. There are two major questions:

  1. I ran your model from https://github.com/CMU-Perceptual-Computing-Lab/caffe_rtpose to test on the COCO dataset, but I cannot match your result in COCO_eval_2014.

Note that we set each person's score to the average of that person's non-zero keypoint scores. The results on 1000 test images are shown below:

[screenshot: COCO evaluation results on the 1000 test images]

  2. As for training (https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation), I only found the code for generating the COCO data, but I need to generate both the JSON and the LMDB for the MPII data. Do you have this code? I would appreciate it if you could share it with me or tell me how to generate it.

Thanks a lot! Han

ChenhanXmu avatar Mar 13 '17 11:03 ChenhanXmu

Same question here. In the training script (genJSON.m), the first 2644 images in the val set are excluded from training, so I applied the released model to this "minival" set and got an AP of 43.9, far below what's expected. (A rough sketch of reconstructing this split follows.)
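For anyone trying to reproduce this split, here is a minimal Python sketch. It assumes the "minival" set is simply the first 2644 image IDs of COCO val2014 in sorted order; the exact ordering used by genJSON.m is an assumption and should be verified against that script.

```python
from pycocotools.coco import COCO

# Assumption: the held-out "minival" set is the first 2644 val2014 image IDs
# in sorted order; verify this ordering against genJSON.m before relying on it.
coco = COCO('annotations/person_keypoints_val2014.json')
img_ids = sorted(coco.getImgIds())
minival_ids = img_ids[:2644]
print(len(minival_ids), minival_ids[:5])
```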

@ZheC It would be very helpful if you could reveal more details on evaluation. Thanks!

xiaoyong avatar Mar 14 '17 04:03 xiaoyong

The C++ code is basically for demo purposes (it uses a single scale during testing for faster speed, and some parameters inside the code are not optimal for COCO evaluation either). In practice, we use the Matlab code with a 4-scale search to get the COCO result. Here is an example Matlab script: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/blob/master/testing/eval.m
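To illustrate the multi-scale idea, here is a minimal Python sketch: resize the input to several scales, run the network, resize the heatmaps back to the original resolution, and average them. The run_network function is a placeholder, and the scale values are illustrative rather than the exact eval.m settings.

```python
import cv2
import numpy as np

def multiscale_heatmaps(ori_img, run_network, scales=(0.5, 1.0, 1.5, 2.0)):
    # run_network is a placeholder: it takes a BGR image and returns
    # heatmaps of shape (H, W, C) at that input's resolution.
    h, w = ori_img.shape[:2]
    acc = None
    for s in scales:
        resized = cv2.resize(ori_img, (0, 0), fx=s, fy=s,
                             interpolation=cv2.INTER_CUBIC)
        heat = run_network(resized)
        # bring the heatmaps back to the original image size before averaging
        heat = cv2.resize(heat, (w, h), interpolation=cv2.INTER_CUBIC)
        acc = heat if acc is None else acc + heat
    return acc / len(scales)
```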

ZheC avatar Mar 14 '17 07:03 ZheC

Thank you for your generous help; it's really good work. Now I need to generate the LMDB training data using the MPII dataset, but when I run genLMDB.py I get an error that MPI.json is not found. Would you tell me how I can generate this JSON file? Or, more simply, it would be very helpful if you released your code or your MPI.json file.

Han

ChenhanXmu avatar Mar 14 '17 11:03 ChenhanXmu

Right now you can download the JSON file with: `wget http://posefs1.perception.cs.cmu.edu/Users/ZheCao/MPI.json`

I will add the code to GitHub later, including the code for generating the masks for unannotated people. Before training, we use the SSD detector (https://github.com/weiliu89/caffe/tree/ssd) to detect all the people, and the bounding-box predictions for unannotated people are used to generate the mask.
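As a rough illustration of that masking step, here is a minimal Python sketch that zeroes out the loss inside the boxes of detected but unannotated people. The (x1, y1, x2, y2) box format is an assumption; the released code may differ in details.

```python
import numpy as np

def mask_from_boxes(img_h, img_w, unannotated_boxes):
    # unannotated_boxes: (x1, y1, x2, y2) pixel boxes of people detected by
    # SSD that carry no keypoint annotations; the training loss is masked
    # out inside them so the network is not penalized there.
    mask = np.ones((img_h, img_w), dtype=np.uint8)  # 1 = contributes to loss
    for x1, y1, x2, y2 in unannotated_boxes:
        x1, y1 = max(int(x1), 0), max(int(y1), 0)
        x2, y2 = min(int(x2), img_w), min(int(y2), img_h)
        mask[y1:y2, x1:x2] = 0  # 0 = ignore region
    return mask
```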

ZheC avatar Mar 14 '17 17:03 ZheC

Haha!!! You are very kind. I have some questions:

  1. In eval.m (for part = 1:12), I don't understand what part is. Is it the number of human body parts? It doesn't seem to match the MPII dataset. In addition, the function orderMPI is not provided.

  2. I don't know how to generate MPIIMask. Does it only need bounding boxes to generate the mask, or does it also require segmentation, like genCOCOMask?

Thx!!!

ChenhanXmu avatar Mar 15 '17 07:03 ChenhanXmu

By the way, would you mind releasing the training curve?

ChenhanXmu avatar Mar 15 '17 15:03 ChenhanXmu

Just an update: I corrected the issues in eval.m pointed out in https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/issues/38. @xiaoyong @ChenhanXmu, you should use the updated version instead.

ZheC avatar Mar 16 '17 04:03 ZheC

@ZheC Thanks for your quick response! I tried your evaluation code, and here are the AP numbers on the 2644 minival set:

| Metric | Matlab (1 scale) | Matlab (4 scales) | caffe_rtpose (1 scale) |
| --- | --- | --- | --- |
| AP@0.5:0.95 | 48.2 | 57.7 | 44.9 |

The 4-scale setting is close to what's reported in your paper. How about the 1-scale setting? Is it reasonable?

Besides, caffe_rtpose gets a slightly lower AP than Matlab. Any plan to close the gap? If not, I'll try to figure out the difference myself.

xiaoyong avatar Mar 16 '17 07:03 xiaoyong

@xiaoyong Did you meet any errors when testing with the GPU?

ChenhanXmu avatar Mar 16 '17 10:03 ChenhanXmu

@ChenhanXmu Some path editing (e.g., the Caffe path and the COCO image path) is required; no big issue. After the JSON file is written, I use COCO's Python API for evaluation.
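For reference, keypoint evaluation with COCO's Python API (pycocotools) looks roughly like the sketch below; the file names are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# File names here are placeholders; point them at your own annotations
# and at the results JSON written by the evaluation script.
coco_gt = COCO('annotations/person_keypoints_val2014.json')
coco_dt = coco_gt.loadRes('my_keypoint_results.json')

coco_eval = COCOeval(coco_gt, coco_dt, 'keypoints')
coco_eval.params.imgIds = coco_dt.getImgIds()  # restrict to evaluated images
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()  # prints the AP/AR table quoted later in this thread
```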

xiaoyong avatar Mar 16 '17 10:03 xiaoyong

[screenshot: error message when testing with the GPU] @xiaoyong

ChenhanXmu avatar Mar 16 '17 11:03 ChenhanXmu

What are your cuDNN, Matlab, and Ubuntu versions? @xiaoyong

ChenhanXmu avatar Mar 16 '17 11:03 ChenhanXmu

I use cuDNN 5.1, Matlab R2015a, and Ubuntu 14.04. @ChenhanXmu

xiaoyong avatar Mar 16 '17 11:03 xiaoyong

@xiaoyong Did you modify `json_for_coco_eval(count).score = pred(j).annorect(d).annopoints.score * length(pred(j).annorect(d).annopoints.point);` to `json_for_coco_eval(count).score = pred(j).annorect(d).annopoints.score / length(pred(j).annorect(d).annopoints.point);`?
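As stated earlier in the thread, the intended person score is the average of the non-zero keypoint scores, which implies a division by the keypoint count. A minimal sketch of that rule:

```python
# Minimal sketch of the intended person score: the average of the detected
# (non-zero) keypoint scores, i.e. a division by the count, not a product.
def person_score(keypoint_scores):
    nonzero = [s for s in keypoint_scores if s > 0]
    return sum(nonzero) / len(nonzero) if nonzero else 0.0
```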

ChenhanXmu avatar Mar 17 '17 02:03 ChenhanXmu

@ChenhanXmu It seems to be a bug. But after the modification, the AP drops slightly :-)

xiaoyong avatar Mar 17 '17 05:03 xiaoyong

@ZheC Could you tell me how to generate MPIIMask?

ChenhanXmu avatar Mar 24 '17 07:03 ChenhanXmu

@ZheC I have used the SSD detector (https://github.com/weiliu89/caffe/tree/ssd) to detect all the people and obtain the bounding-box predictions, but I don't know how to generate the MPII mask. Can I do it the same way as genCOCOMask?

ChenhanXmu avatar Mar 27 '17 12:03 ChenhanXmu

@xiaoyong Hello, I am also evaluating the model using evalCOCO.m.

  1. Which evalDemo.m did you use? Is it the same file as in coco/MatlabAPI? I see a function call from evalCOCO.m, but it's actually a script! Did you write your own function or modify the script?

  2. One more concern: where can I find this minival set, and how did you generate coco_val for it?

  3. @ZheC This is what I got; I think it is a poor result! I used the images from image_info_val2014_1k.txt in caffe_rtpose. What could be the reason for this result?
 Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.199
 Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.296
 Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.209
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.303
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.194
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.508
 Average Recall    (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.683
 Average Recall    (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.532
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.424
 Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.603

priyapaul avatar Apr 12 '17 11:04 priyapaul

@ZheC I managed to run your Python code for the evaluation on the whole COCO validation set. Nevertheless, the results I get with 4 scales are not close to the ones reported in your paper. Is there anything in the Python code I should take into account when performing the evaluation?

These are my results with the Python code at 4 scales:

Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.411
 Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.646
 Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.431
 Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.535
 Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.353
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.646
 Average Recall    (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.847
 Average Recall    (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.693
 Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.583
 Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.737

legan78 avatar Apr 30 '17 18:04 legan78

@ChenhanXmu Have you solved the problem of generating MPIIMask?

trantorrepository avatar Aug 16 '17 03:08 trantorrepository

@legan78 Did you fix the problem of the evaluation difference with Python? I wonder if it is a scales issue. The Matlab code calculates the scales here: https://github.com/ZheC/Realtime_Multi-Person_Pose_Estimation/blob/master/testing/src/applyModel.m#L44. Could you share your Python evaluation code? I tried the Python one but seem to get worse results on the first 50 evaluation images.
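For context, the per-image scales in that Matlab line are derived from the search multipliers normalized by the image height relative to the network input box size. A hedged sketch (boxsize=368 and the multipliers are my recollection of the repo's config, so verify against applyModel.m):

```python
# Hedged sketch of the per-image scale computation: each search multiplier
# is normalized by the image height relative to the network box size.
# boxsize=368 and the multipliers are assumptions; check applyModel.m.
def compute_scales(img_height, boxsize=368, multipliers=(0.5, 1.0, 1.5, 2.0)):
    return [m * boxsize / img_height for m in multipliers]
```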

dragonfly90 avatar Aug 24 '17 21:08 dragonfly90

@ChenhanXmu Have you solved the problem of generating MPIIMask?

lang25150 avatar Sep 26 '17 07:09 lang25150

@priyapaul I have the same questions. Which evalDemo.m did they use? Could you tell me how to run evalCOCO.m?

cg3575 avatar Sep 29 '17 08:09 cg3575

@ZheC @legan78 My result is the same as yours, about 42%. Have you found the reason for the poor result? The difference between my model and Zhe Cao's is that my batch_size=4. Thanks a lot.

YanYan0716 avatar Aug 27 '18 01:08 YanYan0716