Learning Orientation Field for OSM-Guided Autonomous Navigation

Kavli Affiliate: Wei Gao

| First 5 Authors: Yuming Huang, Wei Gao, Zhiyuan Zhang, Maani Ghaffari, Dezhen Song

| Summary:

OpenStreetMap (OSM) has recently gained popularity in autonomous navigation
due to its public accessibility, lower maintenance costs, and broader
geographical coverage. However, existing methods often struggle with noisy OSM
data and incomplete sensor observations, leading to inaccuracies in trajectory
planning. These challenges are particularly evident in complex driving
scenarios, such as at intersections or under occlusions. To address these
challenges, we propose a robust and explainable two-stage framework to learn an
Orientation Field (OrField) for robot navigation by integrating LiDAR scans and
OSM routes. In the first stage, we introduce a novel representation, OrField,
which provides an orientation for each grid cell of the map, inferred jointly
from noisy LiDAR scans and OSM routes. To generate a robust OrField, we train
a deep neural network that encodes a versatile initial OrField and outputs an
optimized one. In the second stage, building on OrField, we propose two
trajectory planners for OSM-guided robot navigation, Field-RRT* and
Field-Bezier, which adapt the Rapidly-exploring Random Tree (RRT) algorithm
and Bezier curve fitting, respectively, to estimate trajectories. Because
OrField robustly captures both global and local information, Field-RRT* and
Field-Bezier generate accurate and reliable trajectories even in challenging
conditions.
We validate our approach through experiments on the SemanticKITTI dataset and
our own campus dataset. The results demonstrate the effectiveness of our
method, which achieves superior performance in complex and noisy conditions. Our
code for network training and real-world deployment is available at
https://github.com/IMRL/OriField.
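
To make the orientation-field idea concrete, here is a minimal sketch, not the authors' implementation: it fabricates a toy per-cell orientation grid and shows one plausible way such a field could drive an RRT*-style planner's steering and edge costs. All names (`orfield`, `orientation_at`, `field_cost`, `steer`), the grid parameters, and the toy field itself are hypothetical illustrations, not code from the linked repository.

```python
import numpy as np

# Toy orientation field: one angle (radians) per grid cell. In the paper,
# OrField would be predicted by a trained network from LiDAR scans and an
# OSM route; here we simply fabricate a field that points toward a goal cell.
H, W, RES = 100, 100, 0.5                 # grid height/width, meters per cell
yy, xx = np.mgrid[0:H, 0:W]
orfield = np.arctan2(50 - yy, 80 - xx)    # hypothetical stand-in field

def orientation_at(p):
    """Look up the field orientation at a metric point p = (x, y)."""
    i = int(np.clip(p[1] / RES, 0, H - 1))
    j = int(np.clip(p[0] / RES, 0, W - 1))
    return orfield[i, j]

def field_cost(p, q):
    """Edge cost penalizing motion misaligned with the field; one plausible
    way an RRT*-style planner could consume an orientation field."""
    d = np.subtract(q, p)
    heading = np.arctan2(d[1], d[0])
    # Wrap the heading error into [-pi, pi] before penalizing it.
    misalign = abs(np.angle(np.exp(1j * (heading - orientation_at(p)))))
    return float(np.hypot(d[0], d[1])) * (1.0 + misalign)

def steer(p, step=1.0):
    """Extend a node one short step along the local field orientation."""
    th = orientation_at(p)
    return (p[0] + step * np.cos(th), p[1] + step * np.sin(th))

# Usage: grow a short field-following path from a start point.
path = [(5.0, 5.0)]
for _ in range(30):
    path.append(steer(path[-1]))
print(f"end point: ({path[-1][0]:.1f}, {path[-1][1]:.1f}), "
      f"first-edge cost: {field_cost(path[0], path[1]):.2f}")
```

A Bezier-based planner in the spirit of Field-Bezier could analogously fit control points so that the curve's tangents agree with the local field orientations, rather than growing a tree.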
