DG-TrajGen

The official repository for the paper "Domain Generalization for Vision-based Driving Trajectory Generation", submitted to ICRA 2022.

Links: arXiv · Project page · YouTube · Bilibili · License: MIT

Our Method

(Figure: overall structure of our method)

  • Trajectory representation:
    • Model: ./learning/model.py/Generator
  • Latent Action Space Learning:
    • Generator model: ./learning/model.py/Generator
    • Discriminator model: ./learning/model.py/Discriminator
    • Training: ./scripts/Ours/stage1_train_GAN.py
  • Encoder Pre-training:
    • Training: ./scripts/Ours/stage2_pretrain_encoder.py
  • End-to-End Training of the Encoder:
    • Training: ./scripts/Ours/stage3_train_e2e.py (a minimal sketch of the three training stages follows this list)
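
The sketch below summarizes the three-stage idea very roughly. It is a minimal illustration with assumed module names, dimensions, and architectures (LATENT_DIM, TrajGenerator, TrajDiscriminator), not the actual classes in ./learning/model.py.

```python
# Minimal sketch of the three-stage pipeline; names and dimensions are assumptions,
# see ./learning/model.py for the real Generator and Discriminator.
import torch
import torch.nn as nn

LATENT_DIM = 64  # assumed size of the latent action space

class TrajGenerator(nn.Module):
    """Maps a latent action code z and a query time t to a 2-D trajectory point."""
    def __init__(self, latent_dim: int = LATENT_DIM):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2),  # (x, y) at time t
        )

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([z, t], dim=-1))

class TrajDiscriminator(nn.Module):
    """Scores how realistic a flattened sequence of trajectory points looks."""
    def __init__(self, num_points: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_points * 2, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        return self.net(traj.flatten(start_dim=1))

# Stage 1 (stage1_train_GAN.py): train G and D on expert trajectories so the
#   latent space covers feasible driving behaviours.
# Stage 2 (stage2_pretrain_encoder.py): freeze G and train an image encoder to
#   predict the latent code z from a front-view image.
# Stage 3 (stage3_train_e2e.py): fine-tune the encoder end-to-end on the
#   driving objective.
```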

Comparative Study

We compare against the following baseline methods (a hypothetical launcher for their training scripts is sketched after the list):

  • RIP:
    • Training: ./scripts/RIP/train.py
    • Referenced official code: github
    • Paper: arxiv
  • MixStyle:
    • Training: ./scripts/MixStyle/train.py
    • Referenced official code: github
    • Paper: arxiv
  • DIVA:
    • Training: ./scripts/DIVA/train.py
    • Referenced official code: github
    • Paper: arxiv
  • DAL:
    • Training: ./scripts/DAL/train.py
  • E2E NT:
    • Training: ./scripts/E2ENT/train.py
    • Referenced official code: github
    • Paper: arxiv
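
To reproduce the comparison, a launcher such as the one below could run each baseline's training script in sequence. This is a hypothetical helper: the scripts' actual command-line arguments are not documented here, so built-in defaults are assumed.

```python
# Hypothetical batch launcher for the baseline training scripts.
# Assumes each script runs with its built-in defaults; add flags as needed.
import subprocess

BASELINE_SCRIPTS = [
    "./scripts/RIP/train.py",
    "./scripts/MixStyle/train.py",
    "./scripts/DIVA/train.py",
    "./scripts/DAL/train.py",
    "./scripts/E2ENT/train.py",
]

for script in BASELINE_SCRIPTS:
    print(f"Training baseline: {script}")
    subprocess.run(["python", script], check=True)
```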

(Figure: comparative study results)

Closed-loop Experiments

We train the model on the Oxford RobotCar dataset and directly generalize it to the CARLA simulation.

  • Run: ./scripts/CARLA/run_ours.py (a hypothetical closed-loop skeleton is sketched below)
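
The skeleton below only illustrates the closed-loop idea: query the trained model for a trajectory, turn the nearest waypoint into a steering command, and apply it through the CARLA Python API. The simulator address, the placeholder predict_trajectory(), and the crude steering rule are assumptions; the real entry point is ./scripts/CARLA/run_ours.py.

```python
# Hypothetical closed-loop skeleton; see ./scripts/CARLA/run_ours.py for the real script.
import math
import carla

def predict_trajectory(image):
    """Placeholder for the trained encoder + generator; returns (x, y) waypoints
    in the ego frame. Here: a dummy straight path."""
    return [(float(i), 0.0) for i in range(1, 9)]

client = carla.Client("localhost", 2000)   # assumed simulator address
client.set_timeout(10.0)
world = client.get_world()

blueprint = world.get_blueprint_library().filter("vehicle.tesla.model3")[0]
spawn_point = world.get_map().get_spawn_points()[0]
ego = world.spawn_actor(blueprint, spawn_point)

try:
    for _ in range(1000):
        # In the real script a front-camera image feeds the trained model.
        x, y = predict_trajectory(None)[0]
        steer = math.atan2(y, max(x, 1e-3)) / math.pi  # crude steering rule
        ego.apply_control(carla.VehicleControl(throttle=0.4, steer=float(steer)))
        world.wait_for_tick()
finally:
    ego.destroy()
```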

(Figure: closed-loop driving in CARLA under ClearNoon, WetCloudySunset, HardRainSunset, and HeavyFogMorning weather conditions)

Citation

If you use our source code, please consider citing the following:

@article{wang2021domain,
  title={Domain Generalization for Vision-based Driving Trajectory Generation},
  author={Wang, Yunkai and Zhang, Dongkun and Cui, Yuxiang and Chen, Zexi and Jing, Wei and Chen, Junbo and Xiong, Rong and Wang, Yue},
  journal={arXiv preprint arXiv:2109.13858},
  year={2021}
}