Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection
PyTorch code release of the paper "Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection" by Deepti Hegde and Vishal M. Patel.
Follow the installation instructions from ST3D.
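For reference, ST3D (and this repo) follows the usual OpenPCDet build. Below is a minimal sketch; the spconv package name/version is an assumption, so defer to the ST3D instructions for the exact dependency set.

```bash
# Minimal OpenPCDet/ST3D-style build (run from the repository root).
pip install -r requirements.txt
pip install spconv-cu113        # assumption: pick the spconv wheel matching your CUDA toolkit
python setup.py develop         # compiles the pcdet CUDA ops in-place
```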
Dataset preparation
- KITTI
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
- nuScenes
python -m pcdet.datasets.nuscenes.nuscenes_dataset --func create_nuscenes_infos --cfg_file tools/cfgs/dataset_configs/nuscenes_dataset.yaml --version v1.0-trainval
- Waymo (processing the whole dataset takes several hours; you may download only a subset for faster pre-processing and source training)
python -m pcdet.datasets.waymo.waymo_dataset --func create_waymo_infos --cfg_file tools/cfgs/dataset_configs/waymo_dataset.yaml
- Organize each folder inside data like the following:
```
AttentivePrototypeSFUDA
├── data (main data folder)
│   ├── kitti
│   │   ├── ImageSets
│   │   ├── training
│   │   │   ├── calib & velodyne & label_2 & image_2 & (optional: planes)
│   │   ├── testing
│   │   │   ├── calib & velodyne & image_2
│   ├── nuscenes
│   │   ├── v1.0-trainval (or v1.0-mini if you use mini)
│   │   │   ├── samples
│   │   │   ├── sweeps
│   │   │   ├── maps
│   │   │   ├── v1.0-trainval
│   ├── waymo
│   │   ├── ImageSets
│   │   ├── raw_data
│   │   │   ├── segment-xxxxxxxx.tfrecord
│   │   │   ├── ...
│   │   ├── waymo_processed_data
│   │   │   ├── segment-xxxxxxxx/
│   │   │   ├── ...
│   │   ├── pcdet_gt_database_train_sampled_xx/
│   │   ├── pcdet_waymo_dbinfos_train_sampled_xx.pkl
```
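If the raw datasets already live elsewhere on disk, symlinking them into data/ is a common way to produce this layout. A minimal sketch, where the source paths are placeholders:

```bash
# Placeholder paths — point these at your actual dataset locations.
mkdir -p data
ln -s /path/to/kitti data/kitti
ln -s /path/to/nuscenes data/nuscenes
ln -s /path/to/waymo data/waymo
```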
The instructions below are deprecated; please skip to this section for up-to-date instructions.
We implement the proposed method for two object detectors, SECOND-iou and PointRCNN, across several domain shift scenarios. You can find the folder of pretrained models here; specific model downloads and their corresponding config files are listed below.
SECOND-iou

| Domain shift | Model file | Configuration file |
|---|---|---|
| Waymo -> KITTI | download | link |
| Waymo -> nuScenes | download | link |
| nuScenes -> KITTI | download | link |
PointRCNN

| Domain shift | Model file | Configuration file |
|---|---|---|
| Waymo -> KITTI | download | link |
| KITTI -> nuScenes | download | link |
| nuScenes -> KITTI | download | link |
Follow the instructions in the SECOND-iou and PointRCNN folders to run the method for each detector.
Training
The entire training procedure may be divided into two stages: 1) source model training and 2) source-free domain adaptation.
Source model training
Single GPU training
python tools/train.py --cfg_file tools/cfgs/da-waymo-kitti_models/secondiou/secondiou_cyc.yaml --extra_tag {PREFERRED NAME}
Multi-GPU training
bash tools/scripts/dist_train.sh {NUM_GPUS} --cfg_file tools/cfgs/da-waymo-kitti_models/secondiou/secondiou_cyc.yaml --extra_tag {PREFERRED NAME}
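For example, a 4-GPU source-training run might look like the following (the GPU count and tag are placeholder values):

```bash
bash tools/scripts/dist_train.sh 4 --cfg_file tools/cfgs/da-waymo-kitti_models/secondiou/secondiou_cyc.yaml --extra_tag waymo_kitti_src
```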
Source-free domain adaptation
Choose the best-performing model from the previous step and use it as the source-trained model.
python tools/train.py --cfg_file tools/cfgs/da-waymo-kitti_models/secondiou_attproto/secondiou_proto_ros_cyc.yaml --extra_tag {PREFERRED NAME} --pretrained_model {SOURCE_MODEL_PATH}
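As an illustration, assuming a checkpoint saved under the usual OpenPCDet output/ layout (the path below is hypothetical — substitute the best checkpoint from your own source run):

```bash
# Hypothetical checkpoint path from the source-training stage.
python tools/train.py \
    --cfg_file tools/cfgs/da-waymo-kitti_models/secondiou_attproto/secondiou_proto_ros_cyc.yaml \
    --extra_tag waymo_kitti_adapt \
    --pretrained_model output/da-waymo-kitti_models/secondiou/secondiou_cyc/waymo_kitti_src/ckpt/checkpoint_epoch_30.pth
```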
Testing
python tools/test.py --cfg_file tools/cfgs/{PATH_TO_CONFIG_FILE} --extra_tag {PREFERRED_NAME} --ckpt {PATH_TO_CKPT}
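For example (the config, tag, and checkpoint path are placeholders — substitute the ones from your own run):

```bash
python tools/test.py \
    --cfg_file tools/cfgs/da-waymo-kitti_models/secondiou_attproto/secondiou_proto_ros_cyc.yaml \
    --extra_tag waymo_kitti_adapt \
    --ckpt output/da-waymo-kitti_models/secondiou_attproto/secondiou_proto_ros_cyc/waymo_kitti_adapt/ckpt/checkpoint_epoch_30.pth
```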