TAIL: A Terrain-Aware Multi-Modal SLAM Dataset for Robot Locomotion in Deformable Granular Environments

Kavli Affiliate: Zheng Zhu

| First 5 Authors: Chen Yao, Yangtao Ge, Guowei Shi, Zirui Wang, Ningbo Yang

| Summary:

Terrain-aware perception holds the potential to improve the robustness and
accuracy of autonomous robot navigation in the wild, thereby facilitating
effective off-road traversal. However, the lack of multi-modal perception
across various motion patterns hinders Simultaneous Localization and Mapping
(SLAM) solutions, especially when confronting non-geometric hazards in
demanding landscapes. In this paper, we first propose a Terrain-Aware
multI-modaL (TAIL) dataset tailored to deformable, sandy terrains. It
incorporates various types of robotic proprioception and distinct ground
interactions, capturing the unique challenges of, and providing a benchmark
for, multi-sensor fusion SLAM. The versatile sensor suite comprises stereo
frame cameras, multiple ground-pointing RGB-D cameras, a rotating 3D LiDAR,
an IMU, and an RTK device; the ensemble is hardware-synchronized,
well-calibrated, and self-contained. Using both wheeled and quadrupedal
locomotion, we efficiently collected comprehensive sequences covering rich
unstructured scenarios. The dataset spans a spectrum of scope, terrain
interactions, scene changes, ground-level properties, and dynamic robot
characteristics. We benchmark several state-of-the-art SLAM methods against
ground truth and report their performance, together with the corresponding
challenges and limitations. All associated resources are accessible upon
request at https://tailrobot.github.io/.
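For readers who want a sense of how such a benchmark is typically scored, the
sketch below illustrates one standard metric: absolute trajectory error (ATE
RMSE) between an estimated SLAM trajectory and RTK ground truth, after a
closed-form Umeyama rigid alignment. This is a minimal illustration under
assumed inputs, not the paper's evaluation code; the file names and the
assumption of timestamp-matched 3D positions are hypothetical.

```python
import numpy as np

def umeyama_align(est, gt):
    """Closed-form rigid (SE(3)) alignment mapping est onto gt.

    est, gt: (N, 3) arrays of timestamp-matched positions.
    Returns rotation R (3x3) and translation t (3,)."""
    mu_e, mu_g = est.mean(axis=0), gt.mean(axis=0)
    E, G = est - mu_e, gt - mu_g
    # Cross-covariance between ground truth and estimate.
    U, _, Vt = np.linalg.svd(G.T @ E / len(est))
    D = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        D[2, 2] = -1.0  # guard against a reflection solution
    R = U @ D @ Vt
    t = mu_g - R @ mu_e
    return R, t

def ate_rmse(est, gt):
    """Absolute trajectory error (RMSE) after rigid alignment."""
    R, t = umeyama_align(est, gt)
    err = (est @ R.T + t) - gt
    return np.sqrt((err ** 2).sum(axis=1).mean())

if __name__ == "__main__":
    # Hypothetical TUM-style files ("timestamp x y z qx qy qz qw" per
    # line), already associated by timestamp; only positions are used.
    est = np.loadtxt("slam_estimate.txt")[:, 1:4]
    gt = np.loadtxt("rtk_ground_truth.txt")[:, 1:4]
    print(f"ATE RMSE: {ate_rmse(est, gt):.3f} m")
```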

| Search Query: ArXiv Query: search_query=au:"Zheng Zhu"&id_list=&start=0&max_results=3