DriveLM
[ECCV 2024 Oral] DriveLM: Driving with Graph Visual Question Answering
Autonomous Driving Challenge 2024
The Driving-with-Language track is live!
https://github.com/OpenDriveLab/DriveLM/assets/54334254/cddea8d6-9f6e-4e7e-b926-5afb59f8dce2
Highlights
🔥 We instantiate datasets (DriveLM-Data) built upon nuScenes and CARLA, and propose a VLM-based baseline approach (DriveLM-Agent) for jointly performing Graph VQA and end-to-end driving.
🏁 DriveLM serves as a main track in the CVPR 2024 Autonomous Driving Challenge. Everything you need for the challenge is HERE, including the baseline, test data, submission format, and evaluation pipeline!
Table of Contents
- Highlights
- Getting Started
  - Prepare DriveLM-nuScenes
- Current Endeavors and Future Directions
- News and TODO List
  - News
  - TODO List
- DriveLM-Data
  - Comparison and Stats
  - GVQA Details
  - Annotation and Features
- License and Citation
- Other Resources
Getting Started
To get started with DriveLM:
- Prepare DriveLM-nuScenes (see the loading sketch below this list)
- Challenge devkit
- More content coming soon
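As a rough orientation for the data format, the snippet below is a minimal sketch, assuming the downloaded DriveLM-nuScenes QA file is a JSON keyed by scene token, with `key_frames` entries whose `QA` dictionary is split into `perception`, `prediction`, `planning`, and `behavior` lists of `Q`/`A` pairs; the file name is a placeholder, so adapt it to whichever split you actually downloaded.

```python
import json

# Minimal sketch, assuming the DriveLM-nuScenes JSON nests
# scene token -> key_frames -> QA -> {perception, prediction, planning, behavior}.
# "drivelm_nuscenes_train.json" is a placeholder path, not an official file name.
with open("drivelm_nuscenes_train.json") as f:
    scenes = json.load(f)

for scene_token, scene in scenes.items():
    for frame_token, frame in scene.get("key_frames", {}).items():
        qa = frame.get("QA", {})
        for stage in ("perception", "prediction", "planning", "behavior"):
            for item in qa.get(stage, []):
                # Each entry is assumed to carry "Q" (question) and "A" (answer) fields.
                print(f"[{stage}] {item['Q']} -> {item['A']}")
```

If the released schema differs from this assumption, adjust the nested keys accordingly; the iteration pattern (scene, key frame, QA stage) stays the same.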
(back to top)
Current Endeavors and Future Directions
- The advent of GPT-style multimodal models in real-world applications motivates the study of the role of language in driving.
- The dates below reflect arXiv submission dates.
- If there is any missing work, please reach out to us!
DriveLM attempts to address some of the challenges faced by the community.
- Lack of data: DriveLM-Data serves as a comprehensive benchmark for driving with language.
- Embodiment: GVQA provides a potential direction for embodied applications of LLMs / VLMs.
- Closed-loop: DriveLM-CARLA attempts to explore closed-loop planning with language.
(back to top)
News and TODO List
News
- [2024/03/25] Challenge test server is online and the test questions are released. Check it out!
- [2024/02/29] Challenge repo released, including the baseline, data and submission format, and evaluation pipeline. Have a look!
- [2023/12/22] DriveLM-nuScenes full v1.0 and the paper released.
- [2023/08/25] DriveLM-nuScenes demo released.
- [Early 2024] DriveLM-Agent inference code.

Note: We plan to release a simple, flexible training codebase that supports multi-view inputs as a starter kit for the AD challenge (stay tuned for details).
TODO List
- [ ] DriveLM-Data
- [x] DriveLM-nuScenes
- [ ] DriveLM-CARLA
- [x] DriveLM-Metrics
- [x] GPT-score
- [ ] DriveLM-Agent
- [x] Inference code on DriveLM-nuScenes
- [ ] Inference code on DriveLM-CARLA
(back to top)
DriveLM-Data
We facilitate the Perception, Prediction, Planning, Behavior, and Motion tasks with human-written reasoning logic as a connection between them, and propose the GVQA task on DriveLM-Data.
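To make the graph structure concrete, here is a toy sketch (not the official annotation format) of modeling GVQA as a directed graph of QA nodes whose edges are the logical dependencies between stages; all class names, field names, and example questions below are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Toy illustration only: a QA node belongs to one stage and may depend on
# upstream QA nodes (e.g. planning depends on prediction, which depends on perception).
@dataclass
class QANode:
    stage: str                                   # e.g. "perception", "prediction", "planning"
    question: str
    answer: str
    parents: list = field(default_factory=list)  # upstream QAs this node depends on

perc = QANode("perception", "What objects are ahead of the ego vehicle?",
              "A pedestrian is crossing at the intersection.")
pred = QANode("prediction", "What will the pedestrian do next?",
              "Continue crossing the road.", parents=[perc])
plan = QANode("planning", "What should the ego vehicle do?",
              "Slow down and yield to the pedestrian.", parents=[pred])

def context_for(node):
    """Collect upstream Q&A text so a downstream answer can condition on it."""
    return " ".join(f"{p.question} {p.answer}" for p in node.parents)

print(context_for(plan))
```

The point of the sketch is only the dependency structure: answering a downstream node conditions on its parents' answers, which is the graph-structured reasoning GVQA targets.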
📊 Comparison and Stats
DriveLM-Data is the first language-driving dataset facilitating the full stack of driving tasks with graph-structured logical dependencies.
See the links for details on the GVQA task, dataset features, and annotation.
(back to top)
License and Citation
All assets and code in this repository are under the Apache 2.0 license unless specified otherwise. The language data is under CC BY-NC-SA 4.0. Other datasets (including nuScenes) inherit their own distribution licenses. Please consider citing our paper and project if they help your research.
@article{sima2023drivelm,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={Sima, Chonghao and Renz, Katrin and Chitta, Kashyap and Chen, Li and Zhang, Hanxue and Xie, Chengen and Luo, Ping and Geiger, Andreas and Li, Hongyang},
  journal={arXiv preprint arXiv:2312.14150},
  year={2023}
}

@misc{contributors2023drivelmrepo,
  title={DriveLM: Driving with Graph Visual Question Answering},
  author={DriveLM contributors},
  howpublished={\url{https://github.com/OpenDriveLab/DriveLM}},
  year={2023}
}
(back to top)
Other Resources
OpenDriveLab
Autonomous Vision Group
- tuPlan garage | CARLA garage | Survey on E2EAD
- PlanT | KING | TransFuser | NEAT
(back to top)