CornerNet-Lite
Wrong results when testing
I used python evaluate.py CornerNet_Saccade --testiter 500000 --split testing to test on the COCO dataset, but the result looks like this:
Accumulating evaluation results... DONE (t=12.17s).
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Why? Did I do something wrong in the steps?
I ran into this problem too.
Has anyone solved it?
COCO does not provide annotations for the test set. That is why the COCO evaluation API returns -1 when you evaluate the detector on the test split. You need to submit the result JSON file to the COCO evaluation server. The JSON file can be found in results/<config>/<iter>/testing.
@heilaw OK, thanks. Now I want to train on my own dataset. I have converted the data to COCO format; which files and parameters should I modify?
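Not an official answer, but as a general sketch: a dataset "in COCO format" means an annotation JSON with at minimum the three top-level lists below (file names, sizes, and class names here are all placeholders). You would then typically point the repo's dataset config at this file and adjust the category count; check the repo's configs/ directory and its COCO database module for the exact parameter names.

```python
import json

# Minimal COCO-format detection annotation skeleton for a custom dataset.
# Every value below is a placeholder for illustration.
dataset = {
    "images": [
        {"id": 1, "file_name": "img_0001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height] in pixels; area = width * height.
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [50, 60, 120, 80], "area": 120 * 80, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "my_class"},
    ],
}

with open("instances_train.json", "w") as f:
    json.dump(dataset, f)
```

Category ids should be contiguous and match whatever class list the training code expects.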
I still don't understand this very well. Could you be more specific? Thank you very much.
Can you tell me how to do it?
Have you solved it?