End-to-end driving pipeline for CARLA Leaderboard 2.0 and official implementation of LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving

LEAD: Minimizing Learner–Expert Asymmetry in End-to-End Driving

Project Page · Documentation · Weights · Paper · Supplementary Material

Python 3.10 PyTorch 2.0+ CARLA 0.9.15 License: MIT

Official implementation of LEAD and TransFuser v6, an expert-student policy pair for autonomous driving research in CARLA. Includes a complete pipeline for data collection, training, and closed-loop evaluation.

LEAD Banner

Main Features

LEAD is a model-agnostic entry point to end-to-end driving research in the CARLA simulator. It is built to make working with many parallel long-running experiments manageable, whether you are iterating on the model or data.

  • Lean pipeline: Pure PyTorch with minimal dependencies and lightweight implementation.
  • Cross-dataset training: Training and evaluation support for NAVSIM and Waymo datasets, with optional co-training on synthetic CARLA data.
  • Data-centric infrastructure:
  • Always know the type and shape of your tensors, enforced at runtime with beartype and jaxtyping.
    • Extensive visualizations make it easier to spot bugs in the data pipeline and during closed-loop evaluation.
    • Compact datasets with lower storage overhead (72h of driving fits in ~200GB).

Roadmap

  • ✅ Checkpoints and inference code
  • 🚧 Documentation, training pipeline and expert code
  • Full dataset release on HuggingFace
  • Cross-dataset training tools and documentation

Status: Active development. Core code and checkpoints are released; remaining components coming soon.

Updates

  • 2025/12/24 arXiv paper and code release

Setup Project

⏱️ 15 minutes

1. Clone project

git clone https://github.com/autonomousvision/lead.git
cd lead

2. Setup environment variables

{
  echo
  echo "export LEAD_PROJECT_ROOT=$(pwd)"
  echo "source $(pwd)/scripts/main.sh"
} >> ~/.bashrc

source ~/.bashrc
For Zsh
{
  echo
  echo "export LEAD_PROJECT_ROOT=$(pwd)"
  echo "source $(pwd)/scripts/main.sh"
} >> ~/.zshrc

source ~/.zshrc

3. Create python environment with Miniconda

pip install conda-lock
conda-lock install -n lead conda-lock.yml
conda activate lead
pip install -r requirements.txt
pip install -e . # Install project

4. Setup CARLA

Install CARLA 0.9.15 at 3rd_party/CARLA_0915

bash scripts/setup_carla.sh
Or softlink existing CARLA
ln -s /your/carla/path $LEAD_PROJECT_ROOT/3rd_party/CARLA_0915

5. Further setup

pre-commit install # Git hooks
conda install conda-forge::ffmpeg conda-forge::parallel conda-forge::tree # Misc

Note

We also provide a minimal Docker Compose setup (not yet extensively tested).

Quick Start

⏱️ 5 minutes

1. Download model checkpoints

We provide pre-trained checkpoints on HuggingFace for reproducibility.

| Checkpoint | Description | Bench2Drive | Longest6 v2 | Town13 |
|---|---|---|---|---|
| tfv6_regnety032 | TFv6 | 95.2 | 62 | 5.01 |
| tfv6_resnet34 | ResNet34 backbone | 94.7 | 57 | 3.31 |
| 4cameras_resnet34 | Additional rear camera | 95.1 | 53 | - |
| noradar_resnet34 | No radar sensor | 94.7 | 52 | - |
| visiononly_resnet34 | Vision-only driving model | 91.6 | 43 | - |
| town13heldout_resnet34 | Generalization evaluation | 93.1 | 52 | 2.65 |

To download one checkpoint:

mkdir -p outputs/checkpoints/tfv6_resnet34
wget https://huggingface.co/ln2697/TFv6/resolve/main/tfv6_resnet34/config.json -O outputs/checkpoints/tfv6_resnet34/config.json
wget https://huggingface.co/ln2697/TFv6/resolve/main/tfv6_resnet34/model_0030_0.pth -O outputs/checkpoints/tfv6_resnet34/model_0030_0.pth
Alternatively, to download all checkpoints at once with git lfs:
git clone https://huggingface.co/ln2697/TFv6 outputs/checkpoints
cd outputs/checkpoints
git lfs pull

2. Run model evaluation

See the evaluation configuration at config_closed_loop. Turn off the options produce_demo_video and produce_debug_video for faster evaluation. By default, the pipeline loads all three seeds of a checkpoint as an ensemble. If memory is a problem, rename two of the three seed checkpoints so that only the first seed is loaded.
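For intuition, a seed ensemble at inference typically amounts to averaging the predictions of the per-seed checkpoints. A minimal sketch, assuming interchangeable models that map a batch to predictions (the actual loading logic lives in the repository's evaluation code):

```python
# Illustrative only: averaging predictions over seed checkpoints.
import torch


def ensemble_predict(models, batch):
    """Average the predictions of several seed checkpoints for one batch."""
    with torch.no_grad():
        preds = torch.stack([m(batch) for m in models])  # (seeds, batch, ...)
    return preds.mean(dim=0)
```

Loading only one seed trades a small amount of driving performance for roughly one third of the ensemble's memory footprint.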

bash scripts/start_carla.sh # Start CARLA server
bash scripts/eval_bench2drive.sh # Evaluate one Bench2Drive route
bash scripts/clean_carla.sh # Optional: clean CARLA server
Results will be saved to outputs/local_evaluation with the following structure:
outputs/local_evaluation
├── 23687
│   ├── checkpoint_endpoint.json
│   ├── debug_images
│   ├── demo_images
│   └── metric_info.json
├── 23687_debug.mp4
└── 23687_demo.mp4

3. Run expert evaluation

Evaluate expert and collect data

bash scripts/start_carla.sh # Start CARLA if not done already
bash scripts/run_expert.sh # Run expert on one route
Data collected will be stored at data/expert_debug and should have the following structure:
data/expert_debug
├── data
│   └── BlockedIntersection
│       └── 999_Rep-1_Town06_13_route0_12_22_22_34_45
│           ├── bboxes
│           ├── depth
│           ├── depth_perturbated
│           ├── hdmap
│           ├── hdmap_perturbated
│           ├── lidar
│           ├── metas
│           ├── radar
│           ├── radar_perturbated
│           ├── results.json
│           ├── rgb
│           ├── rgb_perturbated
│           ├── semantics
│           └── semantics_perturbated
└── results
    └── Town06_13_result.json
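A small sanity-check sketch over a collected route directory, assuming the layout above (folder names are taken from the tree; the perturbated variants and result files are omitted for brevity, and the helper name is hypothetical):

```python
# Sketch: verify a collected route directory contains the expected sensor folders.
from pathlib import Path

# Subset of the folders shown in the tree above (perturbated variants omitted).
EXPECTED = {"bboxes", "depth", "hdmap", "lidar", "metas", "radar", "rgb", "semantics"}


def check_route(route_dir: Path) -> set[str]:
    """Return the expected sensor folders missing from a route directory."""
    present = {p.name for p in route_dir.iterdir() if p.is_dir()}
    return EXPECTED - present
```

Running such a check after data collection catches partially written routes before they reach training.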

Bench2Drive Results

We evaluate TFv6 on the Bench2Drive benchmark, which consists of 220 routes across multiple towns with challenging weather conditions and traffic scenarios.

| Method | Venue | DS | SR | Merge | Overtake | EmgBrake | Give Way | Traffic Sign |
|---|---|---|---|---|---|---|---|---|
| TF++ (TFv5) | ICCV23 | 84.21 | 67.27 | 58.75 | 57.77 | 83.33 | 40.00 | 82.11 |
| SimLingo | CVPR25 | 85.07 | 67.27 | 54.01 | 57.04 | 88.33 | 53.33 | 82.45 |
| R2SE | - | 86.28 | 69.54 | 53.33 | 61.25 | 90.00 | 50.00 | 84.21 |
| HiP-AD | ICCV25 | 86.77 | 69.09 | 50.00 | 84.44 | 83.33 | 40.00 | 72.10 |
| BridgeDrive | - | 86.87 | 72.27 | 63.50 | 57.77 | 83.33 | 40.00 | 82.11 |
| DiffRefiner | AAAI26 | 87.10 | 71.40 | 63.80 | 60.00 | 85.00 | 50.00 | 86.30 |
| TFv6 (Ours) | - | 95.28 | 86.80 | 72.50 | 97.77 | 91.66 | 40.00 | 89.47 |

DS = Driving Score, SR = Success Rate. Metrics follow the CARLA Leaderboard 2.0 protocol; higher is better.
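For reference, the Leaderboard-style driving score combines route completion with multiplicative infraction penalties. A sketch of that formula, with an illustrative penalty coefficient (consult the official Leaderboard documentation for the exact coefficients):

```python
# Sketch of the CARLA Leaderboard driving-score formula: per route,
# DS = route completion x product over infraction types of p_j ** n_j,
# where p_j is the penalty coefficient and n_j the infraction count.
def driving_score(route_completion: float,
                  infractions: dict[str, int],
                  penalties: dict[str, float]) -> float:
    multiplier = 1.0
    for name, count in infractions.items():
        multiplier *= penalties[name] ** count
    return route_completion * multiplier


# Example: full route completion with one pedestrian collision at an
# illustrative coefficient of 0.5 halves the score.
score = driving_score(100.0, {"collision_pedestrian": 1}, {"collision_pedestrian": 0.5})
```

Because the penalties are multiplicative, a single severe infraction can dominate the score even on a fully completed route.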

Documentation and Resources

For detailed training, data-collection, and large-scale experiment instructions, see the full documentation. In particular, we provide:

We maintain custom forks of CARLA evaluation tools with our modifications:

External Resources

Useful documentation from other repositories:

Other helpful repositories:

E2E self-driving research:

Acknowledgements

Special thanks to carla_garage for the foundational codebase. We also thank the creators of the numerous open-source projects we use.

Long Nguyen led development of the project. Kashyap Chitta, Bernhard Jaeger, and Andreas Geiger contributed through technical discussion and advisory feedback.

Citation

If you find this work useful, please consider giving this repository a star ⭐ and citing our work if you use it in your research:

@article{Nguyen2025ARXIV,
  title={LEAD: Minimizing Learner-Expert Asymmetry in End-to-End Driving},
  author={Nguyen, Long and Fauth, Micha and Jaeger, Bernhard and Dauner, Daniel and Igl, Maximilian and Geiger, Andreas and Chitta, Kashyap},
  journal={arXiv preprint arXiv:2512.20563},
  year={2025}
}

License

This project is released under the MIT License. See LICENSE for details.
