Bugs in v1.1
Hi @ChonghaoSima @zihanding819, I found some bugs in v1.1:
- https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L182: `both_invisible_indices` are counted in `num_match_mat`, but at https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L236 the denominator only counts visible points.
- https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L203: those `-1`s should be removed before computing the average.
I got these results with your released PersFormer checkpoint:
===> Evaluation laneline F-measure: 0.78679105
===> Evaluation laneline Recall: 0.69620109
===> Evaluation laneline Precision: 0.90448399
===> Evaluation laneline Category Accuracy: 0.87950062
===> Evaluation laneline x error (close): 0.18658032 m
===> Evaluation laneline x error (far): -0.22041094 m
===> Evaluation laneline z error (close): -0.02688632 m
===> Evaluation laneline z error (far): -0.34050375 m
- `pred_lanes` should be converted to an ndarray before https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L95 (a rough sketch of these fixes is given below).
- #18
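For concreteness, here is a rough standalone sketch of what the fixes above could look like. This is not the repository code; names such as `gt_vis`, `pred_vis`, and `dist_th` are hypothetical placeholders.

```python
import numpy as np

# Hypothetical sketch of the suggested fixes (not the actual eval_3D_lane.py code).
# `gt_vis` / `pred_vis` are assumed per-point visibility flags; `dist_th` is an
# assumed point-matching threshold.

def count_matched_points(dist, gt_vis, pred_vis, dist_th=1.5):
    """Count matches only where both GT and prediction are visible, so
    both-invisible pairs no longer inflate the numerator while the
    denominator counts visible points."""
    both_visible = (gt_vis > 0.5) & (pred_vis > 0.5)
    return np.logical_and(dist < dist_th, both_visible).sum()

def average_error(errors):
    """Drop the -1 placeholders (unmatched lanes) before averaging x/z errors."""
    errors = np.asarray(errors, dtype=float)
    valid = errors > -1 + 1e-6
    return float(np.average(errors[valid])) if valid.any() else float("nan")

def to_ndarray_lanes(pred_lanes):
    """Ensure each predicted lane is an (N, 3) ndarray before evaluation."""
    return [np.asarray(lane, dtype=float) for lane in pred_lanes]
```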
Hi @Aguin, thanks for your suggestions, and here are my responses to questions 1-4.
- In the coming update, `num_match_mat` will only contain `both_visible_indices`, i.e., it excludes the `both_invisible_indices` case.
- We have excluded those `-1`s in lines 391-394. At present, there is no negative `x_error`/`z_error` in our evaluation results:
```python
x_error_close_avg = np.average(laneline_x_error_close[laneline_x_error_close > -1 + 1e-6])
x_error_far_avg = np.average(laneline_x_error_far[laneline_x_error_far > -1 + 1e-6])
z_error_close_avg = np.average(laneline_z_error_close[laneline_z_error_close > -1 + 1e-6])
z_error_far_avg = np.average(laneline_z_error_far[laneline_z_error_far > -1 + 1e-6])
```
- After being loaded from our json file, `pred_lanes` is a `list`. Each item in the list represents a single lane whose type is `numpy.ndarray` with shape (N, 3), where N is the number of points of the lane. During our evaluation, no additional type conversion code needs to be added at this time (see the format sketch after this list).
- We will further consider whether `gt_visibility` should be introduced into the evaluation, and will communicate with you in time if there is any update in the future.
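To make that format concrete, here is a minimal made-up example; the coordinate values are arbitrary and only illustrate the list-of-(N, 3)-ndarrays layout described above.

```python
import numpy as np

# `pred_lanes` is a list; each item is one lane as an (N, 3) ndarray of points.
pred_lanes = [
    np.array([[ 1.8,  5.0, 0.1],
              [ 1.9, 10.0, 0.1],
              [ 2.0, 20.0, 0.2]]),   # lane 1: N = 3 points
    np.array([[-1.6,  5.0, 0.0],
              [-1.5, 15.0, 0.1]]),   # lane 2: N = 2 points
]

assert all(isinstance(lane, np.ndarray) and lane.shape[1] == 3 for lane in pred_lanes)
```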
For issues 3 and 4, we hope you can provide more information for further communication.
@zihanding819 I really appreciate that you have been quick to resolve these issues and I believe the evaluation metric will be more convincing after the first 2 issues have been fixed.
For issue 3, I think it's okay to let the users pass in lists of ndarrays or lists, as long as the format is stated in the comments. I raised this just because I saw np.array() here https://github.com/OpenPerceptionX/OpenLane/blob/main/eval/LANE_evaluation/lane3d/eval_3D_lane.py#L98 and I thought it might be a little bit confusing.
Besides, I think some invisible points are actually noisy data because they look weird, although I'm not sure how they would affect the evaluation. You can easily find these weird frames by sampling some data and here are some examples.
@Aguin The visibility attribute is supposed to deal with this noise, as shown in the middle figures. Invisible gt points will not affect the evaluation.
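A minimal sketch of that idea, assuming per-point GT visibility is available as a 0/1 array; the names `gt_points` and `gt_visibility` are hypothetical, not the repository's API.

```python
import numpy as np

def keep_visible(gt_points, gt_visibility):
    """Mask out GT points flagged as invisible so noisy invisible points
    do not enter the point-wise distance computation."""
    gt_points = np.asarray(gt_points)           # (N, 3) lane points
    visible = np.asarray(gt_visibility) > 0.5   # (N,) 0/1 visibility flags
    return gt_points[visible]
```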