OpenPCDet
substantial difference between the test set and the validation set
I trained the model with the original configuration in pointpillar.yaml, trying both an 80%/20% and a 50%/50% train/validation split of the KITTI training data. However, when I submit my best results to the official KITTI server for evaluation on the test set, I consistently fall short of the metrics reported in the original paper. The gap is particularly large for the pedestrian and cyclist classes:

| Method | Pedestrian (3D Detection) Easy | Moderate | Hard |
|---|---|---|---|
| PointPillars (paper) | 52.08 % | 43.53 % | 41.49 % |
| Mine | 42.86 % | 36.09 % | 33.53 % |

What could be causing this? Thanks! @sshaoshuai
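For context, the custom splits were generated with something along these lines (a minimal sketch assuming the standard `data/kitti/ImageSets` layout and the 7481-frame KITTI trainval set; the `write_split` helper name and seed are just illustrative, not part of OpenPCDet):

```python
# Sketch: regenerate the KITTI ImageSets split files (train.txt / val.txt)
# that OpenPCDet reads. Paths and the 7481-frame count assume the standard
# KITTI layout; adjust if your setup differs.
import random
from pathlib import Path

def write_split(image_sets_dir: str, train_ratio: float = 0.8, seed: int = 42) -> None:
    """Shuffle the KITTI trainval frame ids and write train/val index files."""
    indices = [f"{i:06d}" for i in range(7481)]  # 6-digit KITTI frame ids
    random.Random(seed).shuffle(indices)

    n_train = int(len(indices) * train_ratio)
    out = Path(image_sets_dir)
    out.mkdir(parents=True, exist_ok=True)
    (out / "train.txt").write_text("\n".join(sorted(indices[:n_train])) + "\n")
    (out / "val.txt").write_text("\n".join(sorted(indices[n_train:])) + "\n")

if __name__ == "__main__":
    # e.g. an 80%/20% split; afterwards the KITTI info .pkl files and the
    # gt database are regenerated so they match the new split.
    write_split("data/kitti/ImageSets", train_ratio=0.8)
```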
This issue is stale because it has been open for 30 days with no activity.
This issue was closed because it has been inactive for 14 days since being marked as stale.