Multi-Agent System + Closed-loop Evaluation

Mobility Lab presents

MDrive: A Cooperative Driving Benchmark for End-to-End Closed-loop Multi-Agent System

A benchmark for multi-agent, multi-granular cooperative driving, focused on end-to-end closed-loop systems that improve safety and coordination in complex urban environments.

Submissions Open March 2, 2026
Submission Deadline May 1, 2026
Workshop Date June 3, 2026

Overview

The MDrive Cooperative Driving Challenge evaluates whether multi-agent coordination and V2X communication can significantly improve autonomous driving safety and efficiency in high-stakes urban scenarios. While traditional models rely on single-vehicle onboard perception, MDrive introduces a closed-loop benchmark where connected agents share sensory data to navigate complex environments.

"Can models that incorporate real-time V2X messages from infrastructure and neighboring vehicles achieve higher driving success rates than state-of-the-art single-agent methods?"

This challenge is built on MDriveBench, a multi-agent extension of high-fidelity simulators. Scenarios include occluded intersections, unprotected turns, and emergency yields, providing a robust testbed for cooperative intelligence.

Evaluation Metrics

Submissions are evaluated using metrics tailored for closed-loop safety and cooperative efficiency:

  • Driving Score (DS): The primary metric; success discounted by penalties for collisions and traffic-rule violations.
  • Success Rate (SR): The percentage of trials where the agent reaches the goal safely within the time limit.
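The official weighting is defined by the benchmark; as a rough sketch of how a CARLA-style driving score discounts completion with multiplicative infraction penalties (the penalty coefficients below are illustrative assumptions, not the official MDrive values):

```python
def driving_score(route_completion, collisions, rule_violations,
                  collision_penalty=0.60, violation_penalty=0.70):
    """Illustrative CARLA-style driving score: route completion scaled
    by a multiplicative penalty per infraction. The coefficients are
    assumptions, not the official MDrive values."""
    penalty = (collision_penalty ** collisions) * (violation_penalty ** rule_violations)
    return 100.0 * route_completion * penalty

# A clean, complete run scores 100; each infraction shrinks the score
# multiplicatively, so repeated collisions decay it quickly.
```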

Challenge Task

Participants should design models that:

Multi-Agent Reasoning

Reason over multiple agents and shared observations to build a comprehensive environmental understanding.

Communication & Coordination

Leverage inter-agent communication or implicit coordination to synchronize movements safely.

Joint Driving Policies

Output joint driving policies in dynamic traffic scenes to optimize global flow and safety.

Challenge Objectives

1. Closed-loop Multi-Agent Driving

Develop policies that control multiple vehicles simultaneously in realistic urban scenarios.

2. Collaborative Scene Understanding

Enable agents to exchange or infer complementary information to improve perception and decision-making.

3. End-to-End Reasoning

Learn mappings from raw sensory inputs (and optional messages) directly to driving actions.

We hypothesize that cooperative models will show significant improvements, particularly in occluded environments and high-density traffic merges, where single-agent perception is fundamentally limited.
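The closed-loop, message-passing setting described above can be sketched as a rollout loop in which every connected agent receives its own observation plus the messages the other agents broadcast on the previous tick. All interfaces here are illustrative stand-ins, not the MDriveBench API:

```python
class DummyEnv:
    """Stand-in environment (illustrative; not the MDriveBench API)."""
    def __init__(self, horizon=3):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return {0: "obs0", 1: "obs1"}

    def step(self, actions):
        self.t += 1
        return {0: "obs0", 1: "obs1"}, self.t >= self.horizon


class DummyAgent:
    """Stand-in agent: maps (observation, inbox) to an action and a broadcast."""
    def act(self, obs, inbox):
        return {"throttle": 0.0}, {"intent": "yield"}


def run_episode(env, agents, max_steps=1000):
    # Closed-loop rollout: each tick, every agent sees its own observation
    # plus the messages the other agents broadcast on the previous tick,
    # and the joint actions are applied to the environment together.
    obs = env.reset()
    messages = {i: None for i in range(len(agents))}
    steps = 0
    for _ in range(max_steps):
        actions, outbox = {}, {}
        for i, agent in enumerate(agents):
            inbox = [m for j, m in messages.items() if j != i and m is not None]
            actions[i], outbox[i] = agent.act(obs[i], inbox)
        obs, done = env.step(actions)
        messages = outbox
        steps += 1
        if done:
            break
    return steps
```

The one-tick message delay mirrors real V2X latency: agents coordinate on last-tick broadcasts rather than instantaneous shared state.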

Challenge Roadmap

FEB 20
Challenge announcement & Warmup leaderboard opens
MAR 02
Official Challenge Launch with Open-Loop metrics
MAY 01
Challenge End & Final Submission Deadline

Challenge Submission Instructions

Ready to submit? Use the official submission platform to upload your results.

Submission Checklist

  • Single .zip file containing your model and code.
  • Required ZIP file structure as defined below.

Required ZIP File Structure

Your ZIP file must be organized as follows:

team_name.zip
├── agents.py          # Main agent class (must inherit from BaseAgent)
├── config/            # All .yaml or .py configs
├── src/               # Model architecture & utilities
├── weights/           # All trained checkpoints (.pth/.ckpt)
└── model_env.yaml     # Conda environment specification
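A minimal skeleton of the expected `agents.py` entry point might look like the following. The `BaseAgent` stub and its method names (`setup`, `run_step`) are assumptions based on common leaderboard layouts, not the official MDrive API; consult the official documentation for the real interface:

```python
# agents.py -- minimal sketch of the submission entry point.

class BaseAgent:
    """Stub standing in for the benchmark-provided base class
    (the real one is supplied by the evaluation server)."""
    def setup(self, config_path): ...
    def run_step(self, observations, messages=None): ...


class TeamAgent(BaseAgent):
    """Hypothetical team agent; method names are assumptions."""

    def setup(self, config_path):
        # Load configs from config/ and checkpoints from weights/ here.
        self.config_path = config_path

    def run_step(self, observations, messages=None):
        # Map raw sensor observations (and optional V2X messages)
        # to a control command. Placeholder policy: full brake.
        return {"steer": 0.0, "throttle": 0.0, "brake": 1.0}
```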

Environment Specification

MDriveBench supports two methods of environment provisioning. To ensure reproducible evaluation, we strongly recommend providing a Dockerfile.

  • Docker (Primary): Your Dockerfile should be based on a stable CUDA image (e.g., nvidia/cuda:11.3.1-devel-ubuntu20.04). It must install all necessary libraries so that the agent can run immediately upon container launch.
  • Conda (Fallback): If no Dockerfile is provided, we will build a dedicated environment using your model_env.yaml.
    Note: Your code must be compatible with Python 3.7 to interface with the CARLA 0.9.12 API. Do not include CARLA in your environment files; the evaluation server will automatically link the standardized CARLA build.
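A conda fallback spec consistent with the rules above might look like this; the package names and pins are illustrative examples, not required versions. Note that CARLA is deliberately absent, per the instructions:

```yaml
# model_env.yaml -- illustrative conda spec (pins are examples only).
# CARLA is intentionally omitted; the evaluation server links the
# standardized CARLA 0.9.12 build automatically.
name: team_name
channels:
  - pytorch
  - conda-forge
dependencies:
  - python=3.7        # required for the CARLA 0.9.12 API
  - pytorch=1.10
  - numpy
  - pyyaml
```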

For further details, please refer to the Official Documentation.

Main Organizers

Seth Z. Zhao

UCLA

Website   LinkedIn

Zewei Zhou

UCLA

Website   LinkedIn

Dr. Rui Song

UCLA & University of Cambridge

Website   LinkedIn

Marco Coscoy

UCLA

LinkedIn

Angela Magtoto

UCLA

LinkedIn

Henry Wei

UCLA

LinkedIn

Johnson Liu

UCLA

LinkedIn

Dr. Zhiyu Huang

UCLA

Website   LinkedIn

Dr. Walter Zimmer

UCLA & Technical University of Munich

Website   LinkedIn

Prof. Bolei Zhou

UCLA

Website   LinkedIn

Prof. Jiaqi Ma

UCLA

Website   LinkedIn