second.pytorch
Is there a bug in get_sensor_data of nuscenes dataset?
Hi, I think I might have found a bug here:
https://github.com/traveller59/second.pytorch/blob/3aba19c9688274f75ebb5e576f65cfe54773c021/second/data/nuscenes_dataset.py#L185
Why do you concatenate the timestamp channel instead of reflectance?
Would np.concatenate(sweep_points_list, axis=0)[:, [0, 1, 2, 3]] be better?
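For reference, here is a minimal sketch of the two options. The 5-column layout [x, y, z, reflectance, time_lag] is my assumption about how the sweeps are stacked; the actual layout in nuscenes_dataset.py may differ:

```python
import numpy as np

# Two hypothetical sweeps, each (N, 5): [x, y, z, reflectance, time_lag].
sweep_points_list = [np.random.rand(100, 5), np.random.rand(120, 5)]
all_points = np.concatenate(sweep_points_list, axis=0)

# What the linked line does, as I read it: keep x, y, z and the
# timestamp channel, dropping reflectance.
points_with_time = all_points[:, [0, 1, 2, 4]]

# What I am suggesting instead: keep x, y, z and reflectance.
points_with_reflectance = all_points[:, [0, 1, 2, 3]]
```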
My experiments show that using the timestamp is better than reflectance, which is confusing.
@Yihanhu Do you know whether reflectance is used when training on KITTI? I haven't tried it on KITTI yet. Also, could you tell me how much performance is gained by using the timestamp on nuScenes, according to your experiments?
Yes, it is. I haven't done detailed research yet. With the same configuration, my model somehow cannot even work without timestamps.
@Yihanhu The original code stacks all the point cloud frames (~11 frames) into a single frame. I only sample 3 frames from the 11 uniformly, and use the reflectance channel instead of the timestamp channel. Results on the val set show that using reflectance is a little better (21.1 vs. 19.7 mAP on 1/8 of the dataset). If you are still working on the nuScenes dataset, we can discuss this further.
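Not the exact code behind that result, but a rough sketch of the sampling scheme I mean (function and variable names are illustrative):

```python
import numpy as np

def sample_sweeps(sweep_points_list, num_keep=3):
    """Uniformly sample `num_keep` sweeps from the accumulated list and
    keep x, y, z, reflectance, dropping the timestamp channel.

    Illustrative sketch only; assumes each sweep is an (N, 5) array
    laid out as [x, y, z, reflectance, time_lag].
    """
    num_sweeps = len(sweep_points_list)
    # Evenly spaced indices over the available sweeps (e.g. 3 of ~11).
    idx = np.linspace(0, num_sweeps - 1, num=num_keep).round().astype(int)
    sampled = [sweep_points_list[i] for i in idx]
    points = np.concatenate(sampled, axis=0)
    return points[:, [0, 1, 2, 3]]  # x, y, z, reflectance
```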
It seems like the timestamp introduces some bias that overfits the data. I was wondering whether concatenating the timestamp as an input feature is reasonable at all. I suppose the reason it works is this: when evaluating on the validation set, the predictions for validation data should be similar to those for training data with similar/nearby timestamps.
If this is really the reason behind it, then concatenating the timestamp as an input feature is not reasonable and cannot generalize to real-world scenarios.
Hi, how do I use more than 4 features as input? For example, can I simply use points[:, [0, 1, 2, 3, 4]]?
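Something like this is what I have in mind (the column layout and the config option name are assumptions on my part, not verified against the repo):

```python
import numpy as np

# Hypothetical stacked point cloud with 5 channels per point,
# e.g. x, y, z, reflectance, time_lag.
points = np.random.rand(1000, 5).astype(np.float32)

# Keep all five channels as input features.
points_5ch = points[:, [0, 1, 2, 3, 4]]

# Caveat (my understanding only): the voxel generator and the network
# input layer would also need to be configured for 5 features per point
# (e.g. a num_point_features-style setting), or the shapes won't match.
```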