About MOTA calculation
Dear author, thank you again for this great work.
We understand that you are unable to share the evaluation code, so we developed our own evaluation script based on the descriptions and code snippets in your paper and repository. Specifically, we followed the procedure in lines 83-117 of RaTrack/src/main_utils.py to project the bounding boxes into the radar coordinate system and to filter the radar points belonging to moving objects (a sketch of our understanding of that filtering step is given after the snippet below):
# Inside the per-frame evaluation loop: load two consecutive frames through the View-of-Delft devkit.
kitti_locations = VodTrackLocations(root_dir=args.dataset_path,
                                    output_dir=args.dataset_path,
                                    frame_set_path="",
                                    pred_dir="")
frame_data_0 = FrameDataLoader(kitti_locations=kitti_locations,
                               frame_number=str(index + 1).zfill(5))
frame_data_1 = FrameDataLoader(kitti_locations=kitti_locations,
                               frame_number=str(index).zfill(5))
try:
    import dataset_classes.track_vod_3d as vod_data
    # Tracking labels and coordinate transforms for both frames.
    labels1 = vod_data.load_labels(frame_data_0.raw_tracking_labels, index + 1)
    labels2 = vod_data.load_labels(frame_data_1.raw_tracking_labels, index)
    transforms1 = FrameTransformMatrix(frame_data_0)
    transforms2 = FrameTransformMatrix(frame_data_1)
    lbl1 = labels1.data[index + 1]
    lbl2 = labels2.data[index]
    # Keep only the boxes of moving objects.
    lbl1_mov = filter_moving_boxes_det(frame_data_0.raw_detection_labels, lbl1)
    lbl2_mov = filter_moving_boxes_det(frame_data_1.raw_detection_labels, lbl2)
except:
    continue  # skip frames with missing or unreadable labels
lbl1 = lbl1_mov
lbl2 = lbl2_mov

batch_size = pc1.size(0)
num_examples += batch_size
if args.model == 'track4d_radar':
    # Project the boxes into the radar frame and gather their member radar points.
    (gt_mov_pts1, gt_cls1, gt_objs1, objs_idx1, objs_centre1, cls_obj_id1,
     boxes1, objs_combined1, objs_idx_combined1,
     objs_centre_combined1) = filter_object_points(args, lbl1, pc1, transforms1)
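For reference, this is our understanding of the point-in-box filtering that filter_object_points performs; the helper name and the box layout (centre, size, heading in the radar frame) are our own assumptions, not taken from the repository.

import numpy as np

def points_in_box(points_xyz, box):
    # points_xyz: (N, 3) radar points already in the radar frame.
    # box: (x, y, z, l, w, h, yaw) -- assumed layout: centre, size, heading.
    x, y, z, l, w, h, yaw = box
    # Move points to the box centre, then rotate by -yaw around the z axis.
    shifted = points_xyz - np.array([x, y, z])
    c, s = np.cos(-yaw), np.sin(-yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = shifted @ rot.T
    return ((np.abs(local[:, 0]) <= l / 2)
            & (np.abs(local[:, 1]) <= w / 2)
            & (np.abs(local[:, 2]) <= h / 2))

# Usage (hypothetical names): collect the radar points of each moving box.
# masks = [points_in_box(radar_xyz, box) for box in moving_boxes_radar_frame]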
We then saved these points as ground-truth files with their confidence scores set to 1, as in lines 171-184 of the same file, for the subsequent matching and scoring (a sketch of how we read the files back is given after the snippet):
for obj_id, obj in objects.items():
    idx += 1
    # Fixed placeholder fields, followed by the confidence score and object ID.
    result_str = "NA"
    result_str += " 1"
    result_str += " -1"
    result_str += " -1"
    result_str += " " + str(float(confs[idx]))
    result_str += " " + str(obj_id)
    # One x, y, z triple per radar point belonging to this object.
    for i in range(obj.size(2)):
        result_str += " " + str(float(obj[0, 3, i]))
        result_str += " " + str(float(obj[0, 4, i]))
        result_str += " " + str(float(obj[0, 5, i]))
    result_str += "\n"
    file.writelines(result_str)
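For completeness, this is roughly how we read those files back in our evaluation script; parse_results is our own helper, and the column order simply mirrors the writer above.

def parse_results(path):
    # Read back the per-object lines written above.
    # Fields: four fixed placeholders, confidence, object ID, then x y z per point.
    objects = {}
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) < 6:
                continue
            conf = float(fields[4])
            obj_id = fields[5]
            coords = list(map(float, fields[6:]))
            pts = [tuple(coords[i:i + 3]) for i in range(0, len(coords) - 2, 3)]
            objects[obj_id] = (conf, pts)
    return objects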
In computing MOTA, we adhered to the definitions in "Evaluating Multiple Object Tracking Performance: The CLEAR MOT Metrics" and to the point-based IoU described in Section V-A of your RaTrack paper. Following Section V-E, we counted GT, FP, FN, and IDSW only for objects with more than 5 radar points. However, even after sweeping 40 confidence-score thresholds from 0 to 1, the results did not meet expectations and we obtained negative MOTA values: lower thresholds produced a large number of false positives, while higher thresholds produced an excessive number of false negatives. A sketch of our matching and MOTA computation is shown below.
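Concretely, we match predictions to ground truth per frame with the point-based IoU and then compute MOTA = 1 - (FP + FN + IDSW) / GT over all frames. The sketch below shows that computation; the greedy assignment and the 0.25 IoU threshold are our own choices, not values taken from the paper or repository.

def point_iou(pred_pts, gt_pts):
    # Point-based IoU between two objects, each given as a set of radar point indices.
    pred, gt = set(pred_pts), set(gt_pts)
    union = pred | gt
    return len(pred & gt) / len(union) if union else 0.0

def match_frame(gt_objs, pred_objs, iou_thr=0.25):
    # Greedy per-frame matching on point IoU.
    # gt_objs / pred_objs: {object_id: set of radar point indices}.
    # Returns (matches, fp, fn) where matches maps gt_id -> pred_id.
    pairs = sorted(
        ((point_iou(p, g), gid, pid)
         for gid, g in gt_objs.items()
         for pid, p in pred_objs.items()),
        key=lambda t: t[0], reverse=True)
    matches, used_gt, used_pred = {}, set(), set()
    for iou, gid, pid in pairs:
        if iou < iou_thr:
            break
        if gid in used_gt or pid in used_pred:
            continue
        matches[gid] = pid
        used_gt.add(gid)
        used_pred.add(pid)
    fp = len(pred_objs) - len(matches)
    fn = len(gt_objs) - len(matches)
    return matches, fp, fn

def mota(num_gt, num_fp, num_fn, num_idsw):
    # CLEAR MOT accuracy; it goes negative once FP + FN + IDSW exceeds GT.
    return 1.0 - (num_fp + num_fn + num_idsw) / max(num_gt, 1)

# IDSW: a matched GT object whose assigned prediction ID differs from the
# prediction ID it was matched to in the previous frame.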
Additionally, we observed potential noise in the motion segmentation results produced at line 132 of RaTrack/src/main_utils.py, which appears to affect the detection outcomes:
cls_mask = torch.where(cls.squeeze(0).squeeze(0) > 0.50, 1, 0)
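As an illustration only (this is not code from the repository), a quick probe like the one below shows how sensitive the number of predicted moving points is to that threshold; isolated points that survive the 0.5 cut are where we suspect the spurious detections come from. It assumes the (1, 1, N) shape implied by the double squeeze above.

import torch

def count_moving_points(cls, thresholds=(0.3, 0.5, 0.7, 0.9)):
    # cls: per-point moving probabilities with shape (1, 1, N).
    probs = cls.squeeze(0).squeeze(0)
    return {t: int((probs > t).sum()) for t in thresholds}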
Could you kindly clarify the following:
- In calculating false positives (FP) and false negatives (FN), did you consider tracking IDs or information from previous frames, or were these metrics based solely on detection results?
- What confidence score threshold was used for the MOTA values reported in your paper?
- Did you encounter any issues with noise in the motion segmentation results?

Thank you for your time and assistance. We look forward to your valuable insights.