ReconDreamer: Crafting World Models for Driving Scene Reconstruction via Online Restoration

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Chaojun Ni, Guosheng Zhao, Xiaofeng Wang, Zheng Zhu, Wenkang Qin

| Summary:

Closed-loop simulation is crucial for end-to-end autonomous driving. Existing
sensor simulation methods (e.g., NeRF and 3DGS) reconstruct driving scenes
based on conditions that closely mirror training data distributions. However,
these methods struggle with rendering novel trajectories, such as lane changes.
Recent works have demonstrated that integrating world model knowledge
alleviates these issues. Despite their efficacy, these approaches still have
difficulty accurately representing more complex maneuvers, with multi-lane
shifts being a notable example. Therefore, we
introduce ReconDreamer, which enhances driving scene reconstruction through
incremental integration of world model knowledge. Specifically, DriveRestorer
is proposed to mitigate artifacts via online restoration. This is complemented
by a progressive data update strategy designed to ensure high-quality rendering
for more complex maneuvers. To the best of our knowledge, ReconDreamer is the
first method to effectively render large maneuvers. Experimental results
demonstrate that ReconDreamer outperforms Street Gaussians on the NTA-IoU,
NTL-IoU, and FID metrics, with relative improvements of 24.87%, 6.72%, and
29.97%, respectively.
Furthermore, ReconDreamer surpasses DriveDreamer4D with PVG during large
maneuver rendering, as verified by a relative improvement of 195.87% in the
NTA-IoU metric and a comprehensive user study.
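The relative improvements quoted above follow the usual definition: the difference between the two scores divided by the baseline score. A minimal sketch of that arithmetic (the metric values below are made up for illustration, not taken from the paper; for FID, lower is better, so the sign convention flips):

```python
def relative_improvement(ours: float, baseline: float,
                         higher_is_better: bool = True) -> float:
    """Relative improvement of `ours` over `baseline`, in percent.

    For metrics where lower is better (e.g., FID), pass
    higher_is_better=False so the sign convention flips.
    """
    if higher_is_better:
        return (ours - baseline) / baseline * 100.0
    return (baseline - ours) / baseline * 100.0

# Illustrative values only (not the paper's numbers):
print(relative_improvement(0.50, 0.40))          # IoU-style metric: 25.0% better
print(relative_improvement(70.0, 100.0, False))  # FID-style metric: 30.0% better
```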

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3