
wrong with test

Open wudi00 opened this issue 5 years ago • 7 comments

I used python evaluate.py CornerNet_Saccade --testiter 500000 --split testing to test on the COCO dataset, but the result looks like this:

Accumulating evaluation results...
DONE (t=12.17s).
Average Precision (AP) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50      | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.75      | area=   all | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=  1 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets= 10 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=   all | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = -1.000
Average Recall    (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000

Why? Did I do anything wrong with the steps?

wudi00 avatar May 22 '19 06:05 wudi00

I met this problem too.

SeeeeShiwei avatar May 22 '19 10:05 SeeeeShiwei

Has anyone solved it?

SeeeeShiwei avatar May 23 '19 01:05 SeeeeShiwei

COCO does not provide annotations for the test set. That's why the COCO evaluation API returns -1 when you evaluate the detector on the test set. You need to submit the result JSON file to the COCO evaluation server. The JSON file can be found in results/<config>/<iter>/testing.
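
If you only need numbers for a sanity check, you can also score the detections locally against the validation annotations, which do include ground truth. Below is a minimal sketch using pycocotools; the annotation path and the name/location of the detections JSON are placeholders for illustration, not guaranteed to match exactly what this repo writes out:

```python
# Minimal sketch: score a detections JSON against the COCO *validation*
# annotations locally with pycocotools. Both paths below are placeholders --
# point them at your own annotation file and the results JSON produced by
# evaluate.py (the exact file name/location may differ in this repo).
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

ann_file = "data/coco/annotations/instances_val2017.json"              # ground truth with annotations
det_file = "results/CornerNet_Saccade/500000/validation/results.json"  # hypothetical detections path

coco_gt = COCO(ann_file)              # load ground-truth annotations
coco_dt = coco_gt.loadRes(det_file)   # load detections in COCO results format

coco_eval = COCOeval(coco_gt, coco_dt, iouType="bbox")
coco_eval.evaluate()
coco_eval.accumulate()
coco_eval.summarize()                 # prints the usual AP/AR table, without the -1 values
```

Running evaluate.py with the validation split instead of the testing split should likewise avoid the -1 values, since the validation annotations are available.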

heilaw avatar May 23 '19 18:05 heilaw

@heilaw OK, thanks. Now I want to train on my own dataset. I have converted the data to COCO format; which files and parameters should I modify?
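
For reference, a COCO-style instances annotation file has the overall layout sketched below; this is a minimal example with made-up file, category, and field values, and bbox is [x, y, width, height] in pixels:

```python
# Minimal sketch of a COCO-style instances annotation file for a custom
# dataset. All names, IDs, and values here are made up for illustration.
import json

coco_style = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        {
            "id": 1,
            "image_id": 1,
            "category_id": 1,
            "bbox": [100.0, 50.0, 80.0, 120.0],  # [x, y, width, height]
            "area": 80.0 * 120.0,
            "iscrowd": 0,
        },
    ],
    "categories": [
        {"id": 1, "name": "my_class", "supercategory": "none"},
    ],
}

with open("instances_my_dataset.json", "w") as f:
    json.dump(coco_style, f)
```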

wudi00 avatar May 25 '19 08:05 wudi00

> COCO does not provide annotations for the test set. That's why the COCO evaluation API returns -1 when you evaluate the detector on the test set. You need to submit the result JSON file to the COCO evaluation server. The JSON file can be found in results/<config>/<iter>/testing.

I still don't quite understand. Could you be more specific? Thank you very much.

float4189 avatar Sep 16 '19 01:09 float4189

> I used python evaluate.py CornerNet_Saccade --testiter 500000 --split testing to test on the COCO dataset, but the result looks like this: [every AP/AR value is -1.000] Why? Did I do anything wrong with the steps?

Can you tell me how to do it?

float4189 avatar Sep 16 '19 01:09 float4189

> Has anyone solved it?

Have you solved it?

float4189 avatar Sep 16 '19 01:09 float4189