
The training sequence IDs of the Waymo dataset

Open azhuantou opened this issue 2 years ago • 5 comments

@yinjunbo Hi, thanks for your nice work. Could you please release the IDs of your training sequences, or the SECOND model weights on the Waymo dataset? I followed your setting to sample sequence-level frames under the 5% setting, but I get low performance on Waymo with SECOND. Looking forward to your reply!

azhuantou avatar Nov 03 '22 11:11 azhuantou

@yinjunbo

Thanks for your great work. We now have the same problem. Without your sampled sequence IDs for Waymo, it is hard for the community to follow your work, since any fresh sampling does not allow a fair reproduction of the baseline.

Looking forward to your reply! Thank you in advance!

zhengjilai avatar Nov 25 '22 07:11 zhengjilai

Thanks for your interest, and sorry for the late reply. The scene list can be found at https://pan.baidu.com/s/1BLo1MZu-LusQFR5OrAiUIQ with extraction code n49p. (The different amounts of labeled scenes can be easily obtained by uniform downsampling, e.g., [::20] for 5%.) Since I'm currently busy with other projects, this code is supposed to be rechecked in late December. Thanks!
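As a minimal sketch of that uniform downsampling (assuming waymo_labeled_scenes_list.txt contains one scene name per line; the output file name below is just an example):

```python
# Sketch: uniformly downsample the full labeled scene list to a 5% subset.
# Assumes waymo_labeled_scenes_list.txt has one tfrecord/scene name per line.
with open("waymo_labeled_scenes_list.txt") as f:
    scenes = [line.strip() for line in f if line.strip()]

# Every 20th scene gives the 5% setting; use [::10] for 10%, etc.
scenes_5pct = scenes[::20]

# Example output file name; adjust to whatever your training config expects.
with open("waymo_labeled_scenes_5pct.txt", "w") as f:
    f.write("\n".join(scenes_5pct) + "\n")
```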

yinjunbo avatar Nov 28 '22 05:11 yinjunbo

@yinjunbo Thanks for your great work. We now have a problem with the 5% (actually 2.5%) labeled data split. We tried to reproduce the pre-train result on Waymo with 5% labeled data. We got the split from https://pan.baidu.com/s/1BLo1MZu-LusQFR5OrAiUIQ and obtained the 5% subset with [::20] over the whole waymo_labeled_scenes_list.txt; is that right? Our results differ from yours. Could you share the number of pre-train epochs, the learning rate, and the correct way to obtain the sequence split (if our process is wrong), so we can follow your work?

Looking forward to your reply! Thank you in advance!

Agenzysheep avatar Dec 16 '22 09:12 Agenzysheep

Thanks for your interest. We used 4 GPUs with a total batch size of 16 and a learning rate of 0.003, and trained for 30 epochs. The results can sometimes be unstable for Ped. and Cyc., but are stable for Car at around 49 mAPH.
Here's the training list for the 5% setting that you can double check against:
segment-10017090168044687777_6380_000_6400_000_with_camera_labels.tfrecord
segment-10750135302241325253_180_000_200_000_with_camera_labels.tfrecord
segment-11718898130355901268_2300_000_2320_000_with_camera_labels.tfrecord
segment-1265122081809781363_2879_530_2899_530_with_camera_labels.tfrecord
segment-13519445614718437933_4060_000_4080_000_with_camera_labels.tfrecord
segment-14348136031422182645_3360_000_3380_000_with_camera_labels.tfrecord
segment-15266427834976906738_1620_000_1640_000_with_camera_labels.tfrecord
segment-1605912288178321742_451_000_471_000_with_camera_labels.tfrecord
segment-16735938448970076374_1126_430_1146_430_with_camera_labels.tfrecord
segment-17752423643206316420_920_850_940_850_with_camera_labels.tfrecord
segment-1891390218766838725_4980_000_5000_000_with_camera_labels.tfrecord
segment-2570264768774616538_860_000_880_000_with_camera_labels.tfrecord
segment-3195159706851203049_2763_790_2783_790_with_camera_labels.tfrecord
segment-3919438171935923501_280_000_300_000_with_camera_labels.tfrecord
segment-4672649953433758614_2700_000_2720_000_with_camera_labels.tfrecord
segment-5458962501360340931_3140_000_3160_000_with_camera_labels.tfrecord
segment-6242822583398487496_73_000_93_000_with_camera_labels.tfrecord
segment-7187601925763611197_4384_300_4404_300_with_camera_labels.tfrecord
segment-8031709558315183746_491_220_511_220_with_camera_labels.tfrecord
segment-9016865488168499365_4780_000_4800_000_with_camera_labels.tfrecord
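If it helps, here is a small sketch for double-checking a locally generated split against this list (both file names below are placeholders; adjust them to your own paths):

```python
# Sketch: compare your own [::20] split with the 5% list posted above.
# Both file names are placeholders and assume one segment name per line.
def load_split(path):
    with open(path) as f:
        return {line.strip() for line in f if line.strip()}

official = load_split("waymo_5pct_official_list.txt")  # the 20 segments listed above
mine = load_split("my_5pct_split.txt")                 # your own downsampled result

print("missing from mine:", sorted(official - mine))
print("extra in mine:", sorted(mine - official))
```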

yinjunbo avatar Dec 21 '22 04:12 yinjunbo

Thanks for your reply! We can now get a pre-train result similar to yours. To follow your work, we would also like to know the number of training epochs in the semi-supervised learning stage. Looking forward to your reply! Thank you in advance!

Agenzysheep avatar Jan 16 '23 08:01 Agenzysheep