DriveWorld: 4D Pre-trained Scene Understanding via World Models for Autonomous Driving

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Chen Min, Dawei Zhao, Liang Xiao, Jian Zhao, Xinli Xu

| Summary:

Vision-centric autonomous driving has recently attracted wide attention due to
its lower cost. Pre-training is essential for extracting a universal
representation. However, current vision-centric pre-training typically relies
on either 2D or 3D pre-text tasks, overlooking the temporal characteristics of
autonomous driving as a 4D scene understanding task. In this paper, we address
this challenge by introducing a world model-based autonomous driving 4D
representation learning framework, dubbed DriveWorld, which is capable
of pre-training from multi-camera driving videos in a spatio-temporal fashion.
Specifically, we propose a Memory State-Space Model for spatio-temporal
modelling, which consists of a Dynamic Memory Bank module for learning
temporal-aware latent dynamics to predict future changes and a Static Scene
Propagation module for learning spatial-aware latent statics to offer
comprehensive scene contexts. We additionally introduce a Task Prompt to
decouple task-aware features for various downstream tasks. The experiments
demonstrate that DriveWorld delivers promising results on various autonomous
driving tasks. When pre-trained with the OpenScene dataset, DriveWorld achieves
a 7.5% increase in mAP for 3D object detection, a 3.0% increase in IoU for
online mapping, a 5.0% increase in AMOTA for multi-object tracking, a 0.1m
decrease in minADE for motion forecasting, a 3.0% increase in IoU for occupancy
prediction, and a 0.34m reduction in average L2 error for planning.
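The abstract gives no implementation details, so the following is only a minimal, illustrative PyTorch sketch of how the described Memory State-Space Model could be organised. Every class name, dimension, and mechanism here is an assumption for illustration: the attention-based memory read, the gated static-scene update, and the embedding-based Task Prompt are plausible stand-ins, not the paper's actual design.

```python
# Illustrative sketch only -- all module designs below are assumptions,
# not DriveWorld's published implementation.
import torch
import torch.nn as nn


class DynamicMemoryBank(nn.Module):
    """FIFO bank of past latent states; the current state attends over it
    to produce temporal-aware dynamic features (assumed design)."""

    def __init__(self, dim: int, bank_size: int = 8):
        super().__init__()
        self.bank_size = bank_size
        self.attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.bank = None  # lazily initialised; assumes a fixed batch size

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (B, N, D) latent BEV tokens at the current timestep
        mem = state.detach() if self.bank is None else torch.cat(
            [self.bank, state.detach()], dim=1)
        # Keep only the most recent `bank_size` timesteps of tokens.
        self.bank = mem[:, -self.bank_size * state.shape[1]:]
        dyn, _ = self.attn(state, self.bank, self.bank)
        return dyn


class StaticScenePropagation(nn.Module):
    """Blends the current state with a persistent static scene context via
    a learned gate (assumed mechanism)."""

    def __init__(self, dim: int):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, state: torch.Tensor, scene: torch.Tensor) -> torch.Tensor:
        # state, scene: (B, N, D)
        g = self.gate(torch.cat([state, scene], dim=-1))
        return g * scene + (1.0 - g) * state


class MemoryStateSpaceModel(nn.Module):
    def __init__(self, dim: int = 256, num_tasks: int = 6):
        super().__init__()
        self.dynamic = DynamicMemoryBank(dim)
        self.static = StaticScenePropagation(dim)
        # Task Prompt: one learnable embedding per downstream task,
        # used to decouple task-aware features.
        self.task_prompts = nn.Embedding(num_tasks, dim)

    def forward(self, bev_tokens, scene_context, task_id: int):
        dyn = self.dynamic(bev_tokens)
        sta = self.static(bev_tokens, scene_context)
        prompt = self.task_prompts.weight[task_id]  # (D,)
        # Fuse temporal, spatial, and task-conditioned features.
        return dyn + sta + prompt.view(1, 1, -1)


# Example usage with arbitrary shapes:
mssm = MemoryStateSpaceModel(dim=256)
bev = torch.randn(2, 100, 256)    # 2 clips, 100 BEV tokens each
scene = torch.randn(2, 100, 256)  # persistent static scene context
out = mssm(bev, scene, task_id=0) # (2, 100, 256)
```

The sketch captures the division of labour the summary describes: the dynamic branch reads from accumulated temporal memory, the static branch propagates scene-level context, and the task prompt conditions the fused features for a specific downstream head.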

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3
