human-pose-estimation.pytorch
setting output=target in validation does not give AP=1.0
This is not an issue, but rather an observation. What I did:
- use the ground-truth box from the JSON as the input instance box
- disable the flip test
- disable oks_nms instance re-scoring
- simply set `output = target` in `def validate(config, val_loader, val_dataset, ...)` in `function.py`
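The last step can be sketched as follows. This is a hedged, self-contained toy version of the modified loop (the loader tuples and the `decode_fn` hook are stand-ins for the repo's actual `val_loader` and heatmap-decoding code, not its real API): the model is never called, and the ground-truth target heatmaps are fed straight into decoding and evaluation, so any AP gap below 1.0 comes from the representation and post-processing alone.

```python
import numpy as np

def validate_with_perfect_output(val_loader, decode_fn):
    # Instead of output = model(input), hand the ground-truth target
    # heatmaps straight to the downstream decoding/eval pipeline.
    all_preds = []
    for inp, target, target_weight, meta in val_loader:
        output = target                  # "perfect" prediction == ground truth
        all_preds.append(decode_fn(output))
    return np.concatenate(all_preds)

def argmax_decode(heatmaps):
    # heatmaps: (batch, joints, H, W) -> integer (x, y) per joint
    b, j, h, w = heatmaps.shape
    flat = heatmaps.reshape(b, j, -1).argmax(axis=-1)
    return np.stack((flat % w, flat // w), axis=-1).astype(float)

# Dummy batch mimicking one (input, target, target_weight, meta) tuple.
rng = np.random.default_rng(0)
loader = [(rng.random((2, 3, 256, 192)),   # images (unused here)
           rng.random((2, 17, 64, 48)),    # target heatmaps
           np.ones((2, 17, 1)),            # target weights
           {})]
preds = validate_with_perfect_output(loader, argmax_decode)
print(preds.shape)  # (2, 17, 2)
```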
I get the following results:
```
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.975
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.990
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.988
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.971
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.980
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 20 ] = 0.988
Average Recall    (AR) @[ IoU=0.50      | area=   all | maxDets= 20 ] = 0.999
Average Recall    (AR) @[ IoU=0.75      | area=   all | maxDets= 20 ] = 0.996
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets= 20 ] = 0.984
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets= 20 ] = 0.996
```
These represent the results of a perfect convolutional network whose predictions exactly match the ground-truth targets. Even so, AP drops by 0.025 from the ideal 1.0.
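One plausible source of that drop can be illustrated directly. Heatmap targets in SimpleBaseline-style setups (an assumption here; the exact encoding in this repo's `JointsDataset` may differ) center the Gaussian on an integer grid cell at 1/4 input resolution, so even decoding a *perfect* heatmap by argmax quantizes the keypoint location:

```python
import numpy as np

def make_target(x, y, h=64, w=48, sigma=2.0):
    # Gaussian peak placed on the *rounded* grid cell, so sub-pixel
    # ground-truth coordinates are already lost in the target itself.
    cx, cy = int(round(x)), int(round(y))
    ys, xs = np.mgrid[0:h, 0:w]
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))

def decode(hm):
    # Plain argmax decoding (no quarter-offset shift toward the
    # second-highest neighbor, which the repo uses to reduce this error).
    idx = hm.argmax()
    return idx % hm.shape[1], idx // hm.shape[1]

stride = 4.0                   # heatmap -> input-image scale
gt = (10.6, 20.3)              # sub-pixel ground truth on the heatmap grid
px, py = decode(make_target(*gt))
err = np.hypot(px - gt[0], py - gt[1]) * stride
print(f"decode error in input pixels: {err:.2f}")  # 2.00
```

An error of up to half a heatmap cell becomes up to two input-image pixels after upscaling, which is enough to cost AP at the strict 0.95 OKS/IoU thresholds.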
This shows that there is still some room for improvement with better post-processing methods or a better target definition.