
Evaluation

Open YangBain opened this issue 7 years ago • 4 comments

@ALL Hey, does anyone know why the evaluation results are all zero (e.g. car_easy = 0.000), as in the output below?

```
Evaluating detections...
Running: ./src/dataset/kitti-eval/cpp/evaluate_object ./data/KITTI/training ./data/KITTI/ImageSets/val.txt ./data/model_checkpoints/squeezeDet/val/detection_files_200000 3769
/bin/sh: 1: ./src/dataset/kitti-eval/cpp/evaluate_object: not found
Evaluation summary:
  Average number of detections per image: 9.59724064739
  Timing:
    im_read: 0.031s
    detect: 0.014s
    misc: 0.011s
  Average precisions:
    car_easy: 0.000
    car_medium: 0.000
    car_hard: 0.000
    pedestrian_easy: 0.000
    pedestrian_medium: 0.000
    pedestrian_hard: 0.000
    cyclist_easy: 0.000
    cyclist_medium: 0.000
    cyclist_hard: 0.000
  Mean average precision: 0.000
Analyzing detections...
Detection Analysis:
  Number of detections: 16571.0
  Number of objects: 17558.0
  Percentage of correct detections: 0.759640335526
  Percentage of localization error: 0.0990284231489
  Percentage of classification error: 0.025164443908
  Percentage of background error: 0.114718484099
  Percentage of repeated detections: 0.00144831331845
```

Thank you very much.

Bin Yang

YangBain avatar Jul 03 '17 01:07 YangBain

`src/eval.py` calls KITTI's evaluation script. Have you compiled it before running evaluation?

BichenWuUCB avatar Jul 03 '17 06:07 BichenWuUCB

I forgot to do this step before running evaluation. I will try it. Thank you very much.

YangBain avatar Jul 03 '17 08:07 YangBain

@BichenWuUCB if one is using a custom data resolution and wants to modify the eval script (I mean `evaluate_object.cpp`), do you have any idea how these KITTI evaluation parameters (see below) were computed?

```cpp
// evaluation parameters
const int32_t MIN_HEIGHT[3]     = {40, 25, 25};     // minimum height for evaluated groundtruth/detections
const int32_t MAX_OCCLUSION[3]  = {0, 1, 2};        // maximum occlusion level of the groundtruth used for evaluation
const double  MAX_TRUNCATION[3] = {0.15, 0.3, 0.5}; // maximum truncation level of the groundtruth used for evaluation
```

It seems the above parameters work for the KITTI image resolution; is there a generic formulation for an arbitrary image resolution?

acmaheri avatar May 13 '19 12:05 acmaheri

@BichenWuUCB Any update on this? I mean, does eval.py work for other datasets with other image sizes as long as the labels are in the correct KITTI format, or does something else need to be changed?

alex96295 avatar Jun 11 '20 16:06 alex96295