
Bugs in v1.1

Open · Aguin opened this issue 3 years ago · 3 comments

Hi @ChonghaoSima @zihanding819, I found some bugs in v1.1:

  1. At https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L182, both_invisible_indices are counted in num_match_mat, but at https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L236 the denominator only counts visible points, so the numerator and denominator are inconsistent.
  2. At https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L203, those -1s should be removed before computing the average. (A rough sketch of both points follows this list.)
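
To make points 1 and 2 concrete, here is a rough sketch of what I have in mind. This is not the repository's actual code; the names (match_flags, vis_gt, errors) and shapes are made up for illustration.

  import numpy as np

  # Hypothetical per-point quantities for one GT lane / predicted lane pair:
  #   match_flags : bool array (n_points,), point-wise match with the prediction
  #   vis_gt      : bool array (n_points,), GT visibility flags
  #   errors      : float array (n_points,), per-point error, -1 where unmeasured

  def lane_recall(match_flags, vis_gt):
      # Point 1: count matches only at visible GT points, so the numerator
      # uses the same point set as the visible-point denominator.
      matched_visible = np.count_nonzero(match_flags & vis_gt)
      return matched_visible / max(np.count_nonzero(vis_gt), 1)

  def mean_error(errors):
      # Point 2: drop the -1 placeholders before averaging so they cannot
      # pull the mean negative.
      valid = errors > -1 + 1e-6
      return errors[valid].mean() if valid.any() else 0.0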

I got this with your released PersFormer ckpt:

===> Evaluation laneline F-measure: 0.78679105
===> Evaluation laneline Recall: 0.69620109
===> Evaluation laneline Precision: 0.90448399
===> Evaluation laneline Category Accuracy: 0.87950062
===> Evaluation laneline x error (close): 0.18658032 m
===> Evaluation laneline x error (far): -0.22041094 m
===> Evaluation laneline z error (close): -0.02688632 m
===> Evaluation laneline z error (far): -0.34050375 m

  3. pred_lanes should be converted to ndarray before https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L95

  4. #18

Aguin · Jul 07 '22

Hi @Aguin, thanks for your suggestions; here are my responses to questions 1-4.

  1. In the coming update, num_match_mat will only contain both_visible_indices, i.e., it will exclude the both_invisible_indices case.

  2. We have excluded those -1 values in lines 391-394. At present, there is no negative x_error/z_error in our evaluation results:

  x_error_close_avg = np.average(laneline_x_error_close[laneline_x_error_close > -1 + 1e-6])
  x_error_far_avg = np.average(laneline_x_error_far[laneline_x_error_far > -1 + 1e-6])
  z_error_close_avg = np.average(laneline_z_error_close[laneline_z_error_close > -1 + 1e-6])
  z_error_far_avg = np.average(laneline_z_error_far[laneline_z_error_far > -1 + 1e-6])
  3. After loading from our json file, pred_lanes is a list. Each item in the list represents a single lane and is a numpy.ndarray of shape (N, 3), where N is the number of points in that lane. During our evaluation, no additional type-conversion code needs to be added at this time. (An illustrative example follows this list.)

  4. We will further consider whether gt_visibility should be introduced into the evaluation, and will follow up with you if there is any update in the future.
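
For illustration only, a made-up example of the layout described in response 3; the lane count and coordinates below are invented, not taken from our json files.

  import numpy as np

  # pred_lanes: a list with one entry per lane, each entry an (N, 3)
  # ndarray of (x, y, z) points.
  pred_lanes = [
      np.array([[0.0,  5.0, 0.1], [0.2, 10.0, 0.1], [0.4, 15.0, 0.2]]),  # lane with N = 3
      np.array([[3.5,  5.0, 0.0], [3.6, 10.0, 0.0]]),                    # lane with N = 2
  ]
  for lane in pred_lanes:
      assert isinstance(lane, np.ndarray) and lane.ndim == 2 and lane.shape[1] == 3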

For issues 3 and 4, we hope you can provide more information for further discussion.

zihanding819 · Jul 08 '22

@zihanding819 I really appreciate how quickly you have addressed these issues, and I believe the evaluation metric will be more convincing once the first two issues are fixed.

For issue 3, I think it's okay to let users pass in either lists of ndarrays or lists of lists, as long as the expected format is stated in the comments. I raised this only because I saw np.array() here https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L98 and thought it might be a little confusing.
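
If you did want to accept both, something like the sketch below would work; normalize_lanes is a hypothetical helper, not code from eval_3D_lane.py.

  import numpy as np

  def normalize_lanes(pred_lanes):
      # Accept per-lane point containers as either lists or ndarrays and
      # return (N, 3) float ndarrays.
      return [np.asarray(lane, dtype=float) for lane in pred_lanes]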

Besides, I think some invisible points are actually noisy data because they look odd, although I'm not sure how they would affect the evaluation. You can easily find these frames by sampling some data; here are some examples.

[example images: output1, output2, output3]

Aguin · Jul 08 '22

@Aguin The visibility attribute is supposed to deal with this noise, as shown in the middle figures. Invisible GT points will not affect the evaluation.
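
A minimal sketch of that behaviour, assuming each GT lane comes with a per-point visibility array; the names below are made up, not the evaluator's actual variables.

  import numpy as np

  def visible_points(gt_points, gt_visibility):
      # Keep only the GT points flagged as visible, so noisy invisible
      # points never enter matching or error computation.
      mask = np.asarray(gt_visibility, dtype=bool)
      return np.asarray(gt_points)[mask]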

RicardLee · Aug 25 '22