mmpose
No inference results on custom dataset
Hi,
I have followed the macaque dataset format to label the keypoints in my custom dataset. The dataset comprises 200 black-and-white .png images of straight lines, each labeled with three keypoints (start, end, and center). The dataset code files and annotation files are attached for reference. I then trained HRNet on the custom dataset (the config file is also attached). After training, I do get keypoints in the "result_keypoints.json" file for my validation set, but when I run "top_down_img_demo.py" on the same validation set with a .json file created as per the comment here (https://github.com/open-mmlab/mmpose/issues/582#issuecomment-821166866), I do not get any results at all. However, if I use my test annotation json file (mendeleypl_test.json, with gt data) for inference, I get the desired bounding box and keypoint results.
Kindly help me in this regard.
Data annotation files: pl_annotation.csv, mendeleypl_train.txt, mendeleypl_test.txt
Code and config files: inference.txt, mendeley_dataset.txt, powerline_base_dataset.txt, hrnet_w32_mendeleypl_256x256.txt
Results file: result_keypoints.txt
Inference .json file (same as mendeleypl_test.json but without gt annotation data): mendeley_test.txt
Thanks
For top-down approaches, you also need to provide bounding boxes in the .json or use a box detector.
https://github.com/open-mmlab/mmpose/blob/f8678064e1bca0272f64390aab7beaae820d3e34/demo/top_down_img_demo.py#L73
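For reference, a minimal COCO-style .json that the demo can consume when you have no ground-truth keypoints only needs "images", "annotations" with one bbox per instance, and "categories". A rough sketch, where the file names, the category definition, and the bbox values are placeholders for your own data:

```python
# Minimal COCO-style json for running top_down_img_demo.py without ground
# truth: each annotation only needs an image_id and a bbox in [x, y, w, h]
# format. File names, category definition and bbox values are placeholders.
import json

coco_like = {
    'images': [
        {'id': 1, 'file_name': 'line_0001.png', 'width': 256, 'height': 256},
    ],
    'annotations': [
        # The demo crops this bbox and feeds the crop to the pose model;
        # with no bbox there is nothing to run on, hence the empty output.
        {'id': 1, 'image_id': 1, 'category_id': 1,
         'bbox': [20, 30, 200, 180], 'area': 200 * 180, 'iscrowd': 0},
    ],
    'categories': [
        {'id': 1, 'name': 'powerline',
         'keypoints': ['start', 'end', 'center'], 'skeleton': [[1, 3], [3, 2]]},
    ],
}

with open('mendeley_test_with_bboxes.json', 'w') as f:
    json.dump(coco_like, f)
```

Then point --json-file at this file when calling the demo script.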
I do not want to provide the bbox annotations; I want to infer the bboxes and keypoints directly. For the box detector, will I have to train it separately?
Yes. You have to train a box detector in this case. You may refer to mmdet for more information.
It is also possible if you want to use the whole image as the input. In this case, the object should be located at the center of the image. Please check this: https://github.com/open-mmlab/mmpose/blob/master/demo/docs/2d_animal_demo.md#using-the-full-image-as-input
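In API terms, using the full image as input roughly amounts to passing a single bbox that covers the whole image to the top-down inference call. A sketch with the mmpose 0.x Python API, where the config/checkpoint paths and the dataset class name are placeholders for your setup:

```python
# Sketch: "whole image as input" = one bbox covering the full image.
# Paths and the dataset name below are placeholders, not real files.
import cv2
from mmpose.apis import (init_pose_model, inference_top_down_pose_model,
                         vis_pose_result)

pose_model = init_pose_model('hrnet_w32_mendeleypl_256x256.py',
                             'latest.pth', device='cuda:0')

img = 'val/line_0001.png'
h, w = cv2.imread(img).shape[:2]

# One pseudo "detection" spanning the full image, in xywh format.
person_results = [{'bbox': [0, 0, w, h]}]

pose_results, _ = inference_top_down_pose_model(
    pose_model, img, person_results, format='xywh',
    dataset='MendeleyPowerlineDataset')  # replace with your registered dataset class

vis_pose_result(pose_model, img, pose_results,
                dataset='MendeleyPowerlineDataset', out_file='vis_0001.png')
```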
What if I use a bottom up model? Would I still require a bounding box detector?
No. But bottom-up models generally need more images and longer training time for convergence.
Not sure if it will work with only 200 images.
Noted with thanks. It is worth a try :) Also, can you tell me how the sigmas are calculated for COCO-format datasets (so that I can calculate them for my custom dataset accordingly), and what do they represent?
https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/cocoeval.py#L523
Sigma is the normalization factor of a human skeletal keypoint, calculated from the standard deviation of human annotation results. It measures annotation quality and is used for evaluation (mAP).
Please check https://cocodataset.org/#keypoints-eval for more details.
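For illustration, this is roughly how the sigmas enter the OKS score that COCO mAP is built on, mirroring computeOks in the cocoeval.py link above. The 3-keypoint sigma values below are made-up placeholders, not measured ones:

```python
# Sketch of the OKS computation (after pycocotools cocoeval.computeOks).
# For a real dataset the sigmas would be estimated from the per-keypoint
# standard deviation of repeated human annotations, normalized by object scale.
import numpy as np

def compute_oks(gt_kpts, pred_kpts, visibility, area, sigmas):
    """gt_kpts, pred_kpts: (K, 2) arrays; visibility: (K,); area: gt box area."""
    vars_ = (2 * np.asarray(sigmas)) ** 2
    d2 = np.sum((np.asarray(gt_kpts) - np.asarray(pred_kpts)) ** 2, axis=1)
    e = d2 / vars_ / (area + np.spacing(1)) / 2
    vis = np.asarray(visibility) > 0
    return np.sum(np.exp(-e[vis])) / max(vis.sum(), 1)

# Example with the 3 custom keypoints (start, end, center); sigmas are assumed.
sigmas = [0.05, 0.05, 0.05]
gt = [[10, 10], [90, 90], [50, 50]]
pred = [[11, 9], [88, 92], [52, 51]]
print(compute_oks(gt, pred, [2, 2, 2], area=100 * 100, sigmas=sigmas))
```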
@jin-s13 Noted, thanks. I also wanted to ask whether we can include multiple evaluation metrics in our experiments. For COCO-type datasets the evaluation is done according to the COCO format, but if I want to include other metrics such as F1-measure and processing speed (FPS), how can that be achieved? Also, is it possible to display AUC and PCKh metrics apart from the COCO metrics?
Yes, we can use multiple evaluation metrics. In fact, in some config files (for some specific datasets), we use 'PCK', 'AUC', and 'EPE' for evaluation. https://github.com/open-mmlab/mmpose/blob/c8ff23fa6014a9caab3f935e10a1acb9712f155c/configs/hand/2d_kpt_sview_rgb_img/topdown_heatmap/onehand10k/hrnetv2_w18_onehand10k_256x256_dark.py#L8
One can modify the evaluate function (see https://github.com/open-mmlab/mmpose/blob/c8ff23fa6014a9caab3f935e10a1acb9712f155c/mmpose/datasets/datasets/hand/hand_base_dataset.py#L132) to include more evaluation metrics.
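The relevant line in such a config looks roughly like this; which metric strings are accepted is decided entirely by the dataset class's evaluate() implementation, so a COCO-style dataset that only understands 'mAP' needs to be extended first:

```python
# Request several metrics from the evaluation hook at once (sketch; the
# accepted metric names depend on the dataset's evaluate() implementation).
evaluation = dict(interval=10, metric=['PCK', 'AUC', 'EPE'], save_best='AUC')
```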
@jin-s13 Yes, but how can one include these metrics along with the COCO evaluation metrics? The evaluate function in COCO-type datasets,
def evaluate(self, outputs, res_folder, metric='mAP', **kwargs):
is different from the one you mentioned. I tried to incorporate the methods from _report_metric into _do_python_keypoint_eval in the COCO evaluate function, but there are several problems with the methods called inside these functions. Kindly guide me on this. Also, how can one measure the model speed in FPS and the model F1-score, and include these metrics in the evaluation parameters as well?
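For context, the workaround I have in mind is to compute the extra metrics outside the COCO evaluate function, from the collected predictions and ground truth, using the same helpers that _report_metric relies on, and to time the model separately for FPS. A rough sketch (the helper names are from mmpose.core.evaluation in mmpose 0.x; the array shapes are my assumption):

```python
# Sketch: post-process predictions with the helpers used by _report_metric,
# and time the model in a simple loop for FPS.
import time
import numpy as np
from mmpose.core.evaluation import (keypoint_auc, keypoint_epe,
                                    keypoint_pck_accuracy)

def extra_metrics(pred, gt, mask, bbox_sizes):
    """pred/gt: (N, K, 2) arrays; mask: (N, K) bool; bbox_sizes: (N, 2) w/h."""
    normalize = np.asarray(bbox_sizes, dtype=np.float32)
    _, pck, _ = keypoint_pck_accuracy(pred, gt, mask, thr=0.2,
                                      normalize=normalize)
    auc = keypoint_auc(pred, gt, mask, normalize=normalize[:, 0].mean())
    epe = keypoint_epe(pred, gt, mask)
    return {'PCK@0.2': pck, 'AUC': auc, 'EPE': epe}

def measure_fps(run_inference, images, warmup=5):
    """run_inference: callable doing one forward pass on a single image path."""
    for img in images[:warmup]:
        run_inference(img)
    start = time.time()
    for img in images:
        run_inference(img)
    return len(images) / (time.time() - start)
```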
@jin-s13 Since the Dice or F1-score is equal to 2 * precision * recall / (precision + recall), and we already get AP and AR from the COCO evaluation results, can we use this formula for the Dice score calculation? Dice score = 2 * AP * AR / (AP + AR). Thanks in advance.
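For concreteness, a toy calculation of what I mean (the AP/AR numbers are made up, and I realise COCO AP/AR are averaged over OKS thresholds 0.50:0.95 rather than taken at a single precision/recall operating point, so this would only be an approximate F1):

```python
# Harmonic mean of the COCO-reported AP and AR (illustrative values only).
def f1_from_ap_ar(ap, ar):
    return 2 * ap * ar / (ap + ar) if (ap + ar) > 0 else 0.0

print(f1_from_ap_ar(0.82, 0.87))  # ~0.844
```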