LRM-Zero: Training Large Reconstruction Models with Synthesized Data

Kavli Affiliate: Yi Zhou

| First 5 Authors: Desai Xie, Sai Bi, Zhixin Shu, Kai Zhang, Zexiang Xu

| Summary:

We present LRM-Zero, a Large Reconstruction Model (LRM) trained entirely on
synthesized 3D data, achieving high-quality sparse-view 3D reconstruction. The
core of LRM-Zero is our procedural 3D dataset, Zeroverse, which is
automatically synthesized from simple primitive shapes with random texturing
and augmentations (e.g., height fields, boolean differences, and wireframes).
Unlike previous 3D datasets (e.g., Objaverse), which are often captured or
crafted by humans to approximate real 3D data, Zeroverse completely ignores
realistic global semantics but is rich in complex geometric and texture details
that are locally similar to or even more intricate than real objects. We
demonstrate that our LRM-Zero, trained with our fully synthesized Zeroverse,
can achieve high visual quality in the reconstruction of real-world objects,
competitive with models trained on Objaverse. We also analyze several critical
design choices of Zeroverse that contribute to LRM-Zero’s capability and
training stability. Our work demonstrates that 3D reconstruction, one of the
core tasks in 3D vision, can potentially be addressed without the semantics of
real-world objects. Zeroverse’s procedural synthesis code and interactive
visualization are available at: https://desaixie.github.io/lrm-zero/.
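
To make the procedural idea concrete: Zeroverse objects are composed from simple primitives with random texturing and augmentations such as boolean differences. The sketch below is a rough, hypothetical illustration of that style of pipeline using trimesh, not the authors' released code (linked above); the primitive choices, parameter ranges, and per-vertex "texturing" stand-in are all assumptions, and the height-field and wireframe augmentations mentioned in the summary are omitted.

```python
# Hypothetical sketch of Zeroverse-style procedural shape synthesis.
# All primitive choices, counts, and parameter ranges are illustrative
# assumptions, not the authors' released pipeline.
import numpy as np
import trimesh


def random_primitive(rng: np.random.Generator) -> trimesh.Trimesh:
    """Sample a simple primitive with a random pose, scale, and vertex colors."""
    kind = rng.choice(["box", "sphere", "cylinder"])
    if kind == "box":
        mesh = trimesh.creation.box(extents=rng.uniform(0.2, 1.0, size=3))
    elif kind == "sphere":
        mesh = trimesh.creation.icosphere(subdivisions=3,
                                          radius=rng.uniform(0.2, 0.6))
    else:
        mesh = trimesh.creation.cylinder(radius=rng.uniform(0.1, 0.4),
                                         height=rng.uniform(0.3, 1.0))
    # Random rigid transform so primitives overlap in varied ways.
    transform = trimesh.transformations.random_rotation_matrix()
    transform[:3, 3] = rng.uniform(-0.5, 0.5, size=3)
    mesh.apply_transform(transform)
    # Crude stand-in for random texturing: per-vertex random colors.
    mesh.visual.vertex_colors = rng.integers(
        0, 256, size=(len(mesh.vertices), 4), dtype=np.uint8)
    return mesh


def synthesize_object(num_primitives: int = 4, seed: int = 0) -> trimesh.Trimesh:
    """Compose random primitives; optionally apply a boolean-difference augmentation."""
    rng = np.random.default_rng(seed)
    parts = [random_primitive(rng) for _ in range(num_primitives)]
    shape = trimesh.util.concatenate(parts)
    # Boolean difference needs a backend (e.g., manifold3d); fall back to the
    # plain composite if none is installed.
    try:
        cutter = random_primitive(rng)
        shape = trimesh.boolean.difference([shape, cutter])
    except Exception:
        pass
    return shape


if __name__ == "__main__":
    obj = synthesize_object(num_primitives=5, seed=42)
    obj.export("zeroverse_style_object.glb")
```

Repeating this sampling loop with different seeds yields an arbitrarily large set of objects with no real-world semantics but plenty of local geometric and texture variation, which is the property the summary credits for LRM-Zero's reconstruction quality.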

| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3
