mxnet_Realtime_Multi-Person_Pose_Estimation
About data.json
Hi @dragonfly90, can you please explain how your data.json is organized? I'd like to fine-tune this model on a new dataset with 14 keypoints to detect, so I had to generate a new json file to train the model.
Hi @insomnia250, you could change https://github.com/dragonfly90/mxnet_Realtime_Multi-Person_Pose_Estimation/blob/master/pose_io/annotation.ipynb to generate the new json file. You also need to change https://github.com/dragonfly90/mxnet_Realtime_Multi-Person_Pose_Estimation/blob/master/GenerateLabelCPM.py#L83 so it fits 14 keypoints (Cao's original paper uses 18 keypoints).
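For anyone adapting the label generation, here is a minimal sketch of how the channel counts follow from the keypoint and limb definitions; the helper name and the 14-keypoint skeleton below are illustrative, not the repo's actual code:

```python
# Illustrative only: how heatmap/PAF channel counts scale with the skeleton.
# The 14-keypoint limb list is a placeholder, not a real skeleton definition.

def label_channel_counts(num_keypoints, limb_pairs):
    num_heatmaps = num_keypoints + 1   # one heatmap per keypoint + background
    num_pafs = 2 * len(limb_pairs)     # one (x, y) channel pair per limb
    return num_heatmaps, num_pafs

# COCO-style setup in Cao's paper: 18 keypoints, 19 limbs -> 19 heatmaps, 38 PAF channels
print(label_channel_counts(18, range(19)))    # (19, 38)

# Hypothetical 14-keypoint dataset with 13 limb connections -> 15 heatmaps, 26 PAF channels
limbs_14 = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7),
            (1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13)]
print(label_channel_counts(14, limbs_14))     # (15, 26)
```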
@dragonfly90 Thanks for your reply!
I have skimmed annotation.ipynb, but I can't find the 'pycocotools' module referenced by from pycocotools.coco import COCO, and it's hard for me to figure out how the desired json file is organized.
So, could you give an example illustrating what that json file should contain, so that I can generate it from my dataset?
Thank you!
@insomnia250 pycocotools is the COCO API (https://github.com/pdollar/coco); you need to install it. I think you could just change GenerateLabelCPM.py if you want to use another dataset. You only need the pixel location of every keypoint. I want to try this dataset https://challenger.ai/competition/keypoint/subject, which includes 14 keypoints. I will publish my code after I finish the label-generation part.
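For reference, once pycocotools is installed, the keypoint pixel locations can be read like this (a minimal sketch; the annotation file path is a placeholder):

```python
# Minimal sketch of reading COCO keypoint annotations with pycocotools.
# 'annotations/person_keypoints_train2014.json' is a placeholder path.
from pycocotools.coco import COCO

coco = COCO('annotations/person_keypoints_train2014.json')
img_ids = coco.getImgIds(catIds=coco.getCatIds(catNms=['person']))

for img_id in img_ids[:3]:
    ann_ids = coco.getAnnIds(imgIds=img_id, iscrowd=None)
    for ann in coco.loadAnns(ann_ids):
        # 'keypoints' is a flat list of (x, y, v) triples, one per keypoint;
        # v = 0 not labeled, 1 labeled but not visible, 2 labeled and visible.
        kps = ann['keypoints']
        xs, ys, vs = kps[0::3], kps[1::3], kps[2::3]
        print(img_id, list(zip(xs, ys, vs)))
```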
@dragonfly90 OK, I see. Thanks for your help, and good luck in that competition :)
Hi @dragonfly90, I am also confused about how your data.json is organized. Could you explain each part of data.json? Thanks! In fact, I also want to apply this model to a new dataset, so could you please tell us when your code for this will be published?
Thank you!
@qqsh0214 I updated https://github.com/dragonfly90/mxnet_Realtime_Multi-Person_Pose_Estimation/blob/master/pose_io/annotation.ipynb. You could check Zhe Cao's original code for the data format. Basically, for each image you have the main person's keypoint locations and the other persons' keypoint locations, so you can generate the heatmaps and PAF maps later. I will try to add new code next week.
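To make that description concrete, here is a rough guess at the shape of one data.json entry, modeled on Zhe Cao's original training format; the exact field names may differ from what annotation.ipynb actually writes:

```python
# Illustrative entry only; field names follow Cao's original training data
# (joint_self / joint_others) and may not match this repo's data.json exactly.
entry = {
    "img_paths": "images/000000123456.jpg",
    "img_width": 640,
    "img_height": 480,
    "objpos": [312.5, 240.0],        # center of the main person
    "scale_provided": 0.8,           # person height relative to a reference size
    "joint_self": [                  # main person: one [x, y, visibility] per keypoint
        [325.0, 180.0, 1],
        [330.0, 200.0, 1],
        # ... one row per keypoint
    ],
    "numOtherPeople": 1,
    "joint_others": [                # every other annotated person in the image
        [
            [100.0, 150.0, 1],
            [105.0, 170.0, 0],
            # ... one row per keypoint
        ]
    ],
}
```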
@dragonfly90 Hi, thanks for your work. I'm confused about the labels (ground truth) for the CPM and the PAF. Different pictures contain different numbers of people, so the number of keypoints is not always the same. However, the number of ground-truth heatmap and PAF channels is always 19 and 38: the 19 is keypoints + background in COCO, but what about the 38? Would you mind giving me more information about the PAF and why it has 38 channels? Thanks.
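For reference, in Cao et al.'s COCO model the 38 comes from 19 limb connections, each contributing an x-component and a y-component channel; the channel count does not depend on how many people are in the image, because every person's limbs are written into the same channels. Below is a minimal numpy sketch (not the repo's GenerateLabelCPM.py) of how one limb's PAF channel pair is filled:

```python
# Minimal sketch of one limb's PAF channel pair, following Cao et al.:
# every pixel within limb_width of the segment joint_a -> joint_b stores the
# unit vector from joint_a to joint_b; 19 limbs x 2 channels = 38 PAF channels.
import numpy as np

def limb_paf(joint_a, joint_b, height, width, limb_width=5.0):
    paf = np.zeros((2, height, width), dtype=np.float32)  # x- and y-component maps
    a, b = np.asarray(joint_a, float), np.asarray(joint_b, float)
    vec = b - a
    norm = np.linalg.norm(vec)
    if norm < 1e-6:
        return paf
    unit = vec / norm
    ys, xs = np.mgrid[0:height, 0:width]
    # Project each pixel onto the limb segment and measure its perpendicular distance.
    dx, dy = xs - a[0], ys - a[1]
    along = dx * unit[0] + dy * unit[1]
    perp = np.abs(dx * unit[1] - dy * unit[0])
    mask = (along >= 0) & (along <= norm) & (perp <= limb_width)
    paf[0][mask] = unit[0]
    paf[1][mask] = unit[1]
    return paf

# Example: one shoulder -> elbow limb of one person in a 368x368 label map.
pair = limb_paf((120, 100), (150, 170), 368, 368)
print(pair.shape)   # (2, 368, 368)
```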