
Question for Precision Calculation

Open hxlee309 opened this issue 6 years ago • 3 comments

Hi,

I checked the folder KittiSeg/DATA/data_road/testing, and it seems there are only testing images without the corresponding ground truth images. However, after running python evaluate.py and checking the generated log file in KittiSeg/RUNS/KittiSeg_pretrained/analyse, I found the following:

2018-10-02 15:13:41,234 root INFO Evaluation Succesfull. Results:
2018-10-02 15:13:41,234 root INFO MaxF1 : 96.0821
2018-10-02 15:13:41,234 root INFO BestThresh : 14.5098
2018-10-02 15:13:41,234 root INFO Average Precision : 92.3620
2018-10-02 15:13:41,234 root INFO Speed (msec) : 84.2132
2018-10-02 15:13:41,234 root INFO Speed (fps) : 11.8746
...

How can the average precision be calculated here without knowing the ground truth images?

Any help will be appreciated.

Thanks,

Hanxiang

hxlee309 avatar Oct 02 '18 19:10 hxlee309

I have the same question.

HelloZEX avatar Nov 27 '18 01:11 HelloZEX

In the DATA/data_road/testing folder, annotations are provided in the "calib" folder, and the corresponding images are in the "image_2" folder. So I guess it evaluates against those annotations, but that is just my assumption. Please correct me if I'm wrong.

KavyaRavulapati avatar Dec 02 '18 16:12 KavyaRavulapati

evaluate.py runs evaluation on both the validation and the test data.

I think the metrics you're seeing above (MaxF1, BestThresh, etc.) are computed on the validation data.

For the test data, it just saves the output images without running a metric evaluation (hence it doesn't need ground truth images for the files listed in testing.txt). You can refer to the create_test_output() function in submodules/evaluation/kitti_test.py.
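To make the split concrete, here is a minimal sketch of that flow; run_eval, predict, and save_output are illustrative names I made up, not the actual KittiSeg API, and a simple pixel accuracy stands in for MaxF1 / Average Precision:

```python
def run_eval(val_pairs, test_images, predict, save_output):
    """Illustrative sketch of evaluate.py's two-phase flow.

    val_pairs:   list of (image, ground_truth) pairs -- metrics can only
                 be computed here, because ground truth is available.
    test_images: list of images only -- no ground truth exists, so the
                 predictions are just saved (cf. create_test_output()).
    """
    # --- validation split: ground truth available -> compute metrics ---
    scores = []
    for image, gt in val_pairs:
        pred = predict(image)
        # per-pixel accuracy as a stand-in for the real MaxF1 / AP metrics
        scores.append(sum(p == g for p, g in zip(pred, gt)) / len(gt))
    metrics = {"accuracy": sum(scores) / len(scores)} if scores else {}

    # --- test split: no ground truth -> only save the output images ---
    for image in test_images:
        save_output(image, predict(image))

    return metrics
```

This is why the log above can report Average Precision even though DATA/data_road/testing has no ground truth: those numbers come from the validation split, not the test split.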

lefthandwriter avatar Mar 26 '19 18:03 lefthandwriter