Different from the UDA task, the purpose of an Active Domain Adaptation (ADA) task is to select a subset of the unlabeled target-domain data to annotate under a limited budget. Here, we take Waymo-to-KITTI adaptation as an example.
- Train FEAT=3 (X,Y,Z) with SN (statistical normalization) using multiple GPUs:
```shell
sh scripts/dist_train.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_old_anchor_sn_kitti.yaml
```
- Train FEAT=3 (X,Y,Z) with SN (statistical normalization) using multiple machines:
```shell
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_old_anchor_sn_kitti.yaml
```
- Train FEAT=3 (X,Y,Z) without SN (statistical normalization) using multiple GPUs:
```shell
sh scripts/dist_train.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml
```
- Train FEAT=3 (X,Y,Z) without SN (statistical normalization) using multiple machines:
```shell
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml
```
- Train other baseline detectors such as Voxel R-CNN using multiple GPUs:
```shell
sh scripts/dist_train.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/voxel_rcnn_feat_3_vehi.yaml
```
- Train other baseline detectors such as Voxel R-CNN using multiple machines:
```shell
sh scripts/slurm_train.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/voxel_rcnn_feat_3_vehi.yaml
```
- Note that for the cross-domain setting where the KITTI dataset is regarded as the target domain, please add `--set DATA_CONFIG_TAR.FOV_POINTS_ONLY True` to evaluate on front-view point clouds only (see the combined example after the test commands below). We report the best model over all epochs on the validation set.
- Test the source-only models using multiple GPUs:
```shell
sh scripts/dist_test.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml \
    --ckpt ${CKPT}
```
- Test the source-only models using multiple machines:
```shell
sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_NODES} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml \
    --ckpt ${CKPT}
```
- Test the source-only models of all ckpts using multiple GPUs:
```shell
sh scripts/dist_test.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml \
    --eval_all
```
- Test the source-only models of all ckpts using multiple machines:
```shell
sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_NODES} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml \
    --eval_all
```
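For example, a single-checkpoint evaluation with the front-view restriction can be obtained by combining the test command and the `--set` override above (a sketch; fill in the `${...}` placeholders with your own values):
```shell
sh scripts/dist_test.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_feat_3_vehi.yaml \
    --ckpt ${CKPT} \
    --set DATA_CONFIG_TAR.FOV_POINTS_ONLY True
```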
You need to set `--pretrained_model ${PRETRAINED_MODEL}` after finishing the pre-training stage.
- Train with SN (statistical normalization) using multiple GPUs:
```shell
sh scripts/ADA/dist_train_active_source.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_source_only.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train with SN (statistical normalization) using multiple machines:
```shell
sh scripts/ADA/slurm_train_active_source.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_source_only.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train without SN (statistical normalization) using multiple GPUs:
```shell
sh scripts/ADA/dist_train_active_source.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_source_only_wosn.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train without SN (statistical normalization) using multiple machines:
```shell
sh scripts/ADA/slurm_train_active_source.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_source_only_wosn.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
You need to set `--pretrained_model ${PRETRAINED_MODEL}` after finishing adaptation stage 1.
- Train with 1% annotation budget using multiple GPUs:
```shell
sh scripts/ADA/dist_train_active.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_dual_target_01.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train with 1% annotation budget using multiple machines:
```shell
sh scripts/ADA/slurm_train_active.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_dual_target_01.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train with 5% annotation budget using multiple GPUs:
```shell
sh scripts/ADA/dist_train_active.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_dual_target_05.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train with 5% annotation budget using multiple machines:
```shell
sh scripts/ADA/slurm_train_active.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_dual_target_05.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Test with a ckpt file:
```shell
python test.py --cfg_file ${CONFIG_FILE} \
    --batch_size ${BATCH_SIZE} \
    --ckpt ${CKPT}
```
- To test all saved checkpoints of a specific training setting and draw the performance curve on TensorBoard, add the `--eval_all` argument (a TensorBoard launch example follows this list):
```shell
python test.py \
    --cfg_file ${CONFIG_FILE} \
    --batch_size ${BATCH_SIZE} \
    --eval_all
```
- Note that if you want to test on a setting with KITTI as the target domain, please add `--set DATA_CONFIG_TAR.FOV_POINTS_ONLY True` to evaluate on front-view point clouds only:
```shell
python test.py \
    --cfg_file ${CONFIG_FILE} \
    --batch_size ${BATCH_SIZE} \
    --eval_all \
    --set DATA_CONFIG_TAR.FOV_POINTS_ONLY True
```
- To test with multiple machines for S-Proj:
```shell
sh scripts/slurm_test_mgpu.sh ${PARTITION} ${NUM_NODES} \
    --cfg_file ${CONFIG_FILE} \
    --batch_size ${BATCH_SIZE}
```
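To view the resulting performance curves, launch TensorBoard and point it at the output directory of your run (the path below is only an assumed default of the OpenPCDet-style output layout; adjust it to wherever your experiment writes its event files):
```shell
# Hypothetical log location; replace ./output with your actual experiment output directory
tensorboard --logdir ./output --port 6006
```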
- Train with TQS:
```shell
sh scripts/ADA/dist_train_active_TQS.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_TQS.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train with CLUE:
```shell
sh scripts/ADA/dist_train_active_CLUE.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_CLUE.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train Bi3D+ST3D using multiple GPUs:
```shell
sh scripts/ADA/dist_train_active_st3d.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_st3d.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
- Train Bi3D+ST3D using multiple machines:
```shell
sh scripts/ADA/slurm_train_active_st3d.sh ${PARTITION} ${JOB_NAME} ${NUM_NODES} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_st3d.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```
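Putting the stages together, a complete Waymo-to-KITTI run with PV-RCNN and a 1% annotation budget chains the commands above. The sketch below assumes that `${PRETRAINED_MODEL}` in each stage points to the checkpoint produced by the previous stage:
```shell
# Stage 0: pre-train the source-only detector (with SN) on the source domain
sh scripts/dist_train.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/DA/waymo_kitti/source_only/pvrcnn_old_anchor_sn_kitti.yaml

# Adaptation stage 1: active source training, initialized from the pre-trained model
sh scripts/ADA/dist_train_active_source.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_source_only.yaml \
    --pretrained_model ${PRETRAINED_MODEL}

# Adaptation stage 2: active target training with a 1% annotation budget,
# initialized from the stage-1 checkpoint
sh scripts/ADA/dist_train_active.sh ${NUM_GPUs} \
    --cfg_file ./cfgs/ADA/waymo-kitti/pvrcnn/active_dual_target_01.yaml \
    --pretrained_model ${PRETRAINED_MODEL}
```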
We report cross-dataset adaptation results for Waymo-to-KITTI, nuScenes-to-KITTI, Waymo-to-nuScenes, and Waymo-to-Lyft.
- All LiDAR-based models are trained with 2 or 4 NVIDIA A100 GPUs (see the training time column) and are available for download.
Waymo-to-KITTI:

| Model | training time | Adaptation | Car@R40 (AP_BEV / AP_3D) | download |
|---|---|---|---|---|
| PV-RCNN | ~23h@4 A100 | Source Only | 67.95 / 27.65 | - |
| PV-RCNN | ~1.5h@2 A100 | Bi3D (1% annotation budget) | 87.12 / 78.03 | Model-58M |
| PV-RCNN | ~10h@2 A100 | Bi3D (5% annotation budget) | 89.53 / 81.32 | Model-58M |
| PV-RCNN | ~1.5h@2 A100 | TQS | 82.00 / 72.04 | Model-58M |
| PV-RCNN | ~1.5h@2 A100 | CLUE | 82.13 / 73.14 | Model-50M |
| PV-RCNN | ~10h@2 A100 | Bi3D+ST3D | 87.83 / 81.23 | Model-58M |
| Voxel R-CNN | ~16h@4 A100 | Source Only | 64.87 / 19.90 | - |
| Voxel R-CNN | ~1.5h@2 A100 | Bi3D (1% annotation budget) | 88.09 / 79.14 | Model-72M |
| Voxel R-CNN | ~6h@2 A100 | Bi3D (5% annotation budget) | 90.18 / 81.34 | Model-72M |
| Voxel R-CNN | ~1.5h@2 A100 | TQS | 78.26 / 67.11 | Model-72M |
| Voxel R-CNN | ~1.5h@2 A100 | CLUE | 81.93 / 70.89 | Model-72M |
nuScenes-to-KITTI:

| Model | training time | Adaptation | Car@R40 (AP_BEV / AP_3D) | download |
|---|---|---|---|---|
| PV-RCNN | ~23h@4 A100 | Source Only | 68.15 / 37.17 | Model-150M |
| PV-RCNN | ~1.5h@2 A100 | Bi3D (1% annotation budget) | 87.00 / 77.55 | Model-58M |
| PV-RCNN | ~9h@2 A100 | Bi3D (5% annotation budget) | 89.63 / 81.02 | Model-58M |
| PV-RCNN | ~1.5h@2 A100 | TQS | 84.66 / 75.40 | Model-58M |
| PV-RCNN | ~1.5h@2 A100 | CLUE | 74.77 / 64.43 | Model-50M |
| PV-RCNN | ~7h@2 A100 | Bi3D+ST3D | 89.28 / 79.69 | Model-58M |
| Voxel R-CNN | ~16h@4 A100 | Source Only | 68.45 / 33.00 | Model-191M |
| Voxel R-CNN | ~1.5h@2 A100 | Bi3D (1% annotation budget) | 87.33 / 77.24 | Model-72M |
| Voxel R-CNN | ~5.5h@2 A100 | Bi3D (5% annotation budget) | 87.66 / 80.22 | Model-72M |
| Voxel R-CNN | ~1.5h@2 A100 | TQS | 79.12 / 68.02 | Model-73M |
| Voxel R-CNN | ~1.5h@2 A100 | CLUE | 77.98 / 66.02 | Model-65M |
Waymo-to-nuScenes:

| Model | training time | Adaptation | Car@R40 (AP_BEV / AP_3D) | download |
|---|---|---|---|---|
| PV-RCNN | ~23h@4 A100 | Source Only | 31.02 / 21.21 | - |
| PV-RCNN | ~4h@2 A100 | Bi3D (1% annotation budget) | 45.00 / 30.81 | Model-58M |
| PV-RCNN | ~12h@4 A100 | Bi3D (5% annotation budget) | 48.03 / 32.02 | Model-58M |
| PV-RCNN | ~4h@2 A100 | TQS | 35.47 / 25.00 | Model-58M |
| PV-RCNN | ~3h@2 A100 | CLUE | 38.18 / 26.96 | Model-50M |
| Voxel R-CNN | ~16h@4 A100 | Source Only | 29.08 / 19.42 | - |
| Voxel R-CNN | ~2.5h@2 A100 | Bi3D (1% annotation budget) | 45.47 / 30.49 | Model-72M |
| Voxel R-CNN | ~4h@4 A100 | Bi3D (5% annotation budget) | 46.78 / 32.14 | Model-72M |
| Voxel R-CNN | ~4h@2 A100 | TQS | 36.38 / 24.18 | Model-72M |
| Voxel R-CNN | ~3h@2 A100 | CLUE | 37.27 / 25.12 | Model-65M |
| SECOND | ~3h@2 A100 | Bi3D (1% annotation budget) | 46.15 / 26.24 | Model-54M |
Waymo-to-Lyft:

| Model | training time | Adaptation | Car@R40 (AP_BEV / AP_3D) | download |
|---|---|---|---|---|
| PV-RCNN | ~23h@4 A100 | Source Only | 70.10 / 53.11 | - |
| PV-RCNN | ~7h@2 A100 | Bi3D (1% annotation budget) | 79.07 / 63.74 | Model-58M |
| PV-RCNN | ~22h@2 A100 | Bi3D (5% annotation budget) | 80.19 / 66.09 | Model-58M |
| PV-RCNN | ~7h@2 A100 | TQS | 70.87 / 55.25 | Model-58M |
| PV-RCNN | ~5h@2 A100 | CLUE | 75.23 / 62.17 | Model-50M |
| Voxel R-CNN | ~16h@4 A100 | Source Only | 70.52 / 53.48 | - |
| Voxel R-CNN | ~7h@2 A100 | Bi3D (1% annotation budget) | 77.00 / 61.23 | Model-72M |
| Voxel R-CNN | ~19h@2 A100 | Bi3D (5% annotation budget) | 79.15 / 65.26 | Model-72M |
| Voxel R-CNN | ~8h@2 A100 | TQS | 71.11 / 56.28 | Model-73M |
| Voxel R-CNN | ~5h@2 A100 | CLUE | 75.61 / 59.34 | Model-65M |