The official implementation of the AAAI 2024 paper DI-V2X: Learning Domain-Invariant Representation for Vehicle-Infrastructure Collaborative 3D Object Detection. Paper.
Please follow the Feishu docs CoAlign Installation Guide (Chinese Ver. or English Ver.) to learn how to install and run this repo.
Alternatively, you can refer to the OpenCOOD data introduction and the OpenCOOD installation guide to prepare the data and install CoAlign. The installation is the same as for OpenCOOD, except for some additional packages required by CoAlign.
Prepare the DAIR-V2X dataset following the official guide, and then prepare the complemented annotations.
Prepare the domain-mixing instance bank (DMA) by running:
cd ~/DI-V2X
python opencood/data_utils/datasets/basedataset/dairv2x_basedataset.py
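Conceptually, this step crops the points inside each ground-truth box and stores them, together with the box, so instances can be mixed into other scenes during training. A minimal sketch of that idea, assuming axis-aligned boxes in (cx, cy, cz, dx, dy, dz) form; the function names and box format are illustrative assumptions, not the repo's actual API:

```python
import numpy as np

def crop_instance(points, box):
    """Return the points that fall inside an axis-aligned box.

    points: (N, 3) array of x, y, z coordinates.
    box: (cx, cy, cz, dx, dy, dz) center and full sizes (yaw omitted for simplicity).
    """
    center = np.asarray(box[:3], dtype=float)
    half = np.asarray(box[3:], dtype=float) / 2.0
    mask = np.all(np.abs(points - center) <= half, axis=1)
    return points[mask]

def build_instance_bank(points, boxes):
    """Collect one bank entry per ground-truth box: cropped points plus the box."""
    return [{"points": crop_instance(points, b), "box": b} for b in boxes]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.uniform(-10, 10, size=(1000, 3))
    boxes = [(0, 0, 0, 4, 4, 4), (5, 5, 0, 2, 2, 2)]
    bank = build_instance_bank(pts, boxes)
    print(len(bank))  # one entry per box
```

The actual script additionally serializes the bank to `dairv2x_dbinfos_fusion.pkl` and `gt_database_fusion` (see the folder layout below).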
- The folder structure should look like this:
  - cooperative-vehicle-infrastructure
    - cooperative
    - gt_database_fusion
    - infrastructure-side
    - vehicle-side
    - dairv2x_dbinfos_fusion.pkl
    - train.json
    - val.json
Train the teacher model:
bash opencood/tools/scripts/dist_train.sh 4 opencood/hypes_yaml/dairv2x/lidar_only/pointpillar_early_gtsample_multiscale.yaml early
The teacher model will be saved in {teacher_model_path}.
First, set the kd_flag->teacher_path parameter to {teacher_model_path} in pointpillar_pdd_distillation.yaml. Then train the student model:
bash opencood/tools/scripts/train_w_kd.sh opencood/hypes_yaml/dairv2x/lidar_only/pointpillar_pdd_distillation.yaml
The student model will be saved in {student_model_path}. Run inference with:
python opencood/tools/inference.py --model_dir {student_model_path} --fusion_method intermediate
The DI-V2X teacher and student models and evaluation files can be found in opencood/logs.
The authors are grateful to the School of Computer Science, Beijing Institute of Technology, Inceptio, and the University of Macau.
The code is based on CoAlign.