MAER-Nav: Bidirectional Motion Learning Through Mirror-Augmented Experience Replay for Robot Navigation

Kavli Affiliate: Biao Huang

| First 5 Authors: Shanze Wang, Mingao Tan, Zhibo Yang, Biao Huang, Xiaoyu Shen

| Summary:

Deep Reinforcement Learning (DRL)-based navigation methods have demonstrated
promising results for mobile robots but suffer from limited action flexibility
in confined spaces. Conventional DRL approaches predominantly learn
forward-motion policies, causing robots to become trapped in complex
environments where backward maneuvers are necessary for recovery. This paper
presents MAER-Nav (Mirror-Augmented Experience Replay for Robot Navigation), a
novel framework that enables bidirectional motion learning without requiring
explicit failure-driven hindsight experience replay or reward function
modifications. Our approach integrates a mirror-augmented experience replay
mechanism with curriculum learning to generate synthetic backward navigation
experiences from successful trajectories. Experimental results in both
simulation and real-world environments demonstrate that MAER-Nav significantly
outperforms state-of-the-art methods while maintaining strong forward
navigation capabilities. The framework effectively bridges the gap between the
comprehensive action space utilization of traditional planning methods and the
environmental adaptability of learning-based approaches, enabling robust
navigation in scenarios where conventional DRL methods consistently fail.
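
The abstract only sketches the mirror-augmentation idea, so here is a minimal, hypothetical illustration of how successful forward transitions might be mirrored into synthetic backward-motion experiences before being stored in a replay buffer. The observation layout (a 1-D laser scan plus goal distance and heading), the [linear, angular] velocity action format, the 180° scan roll, and the `mirror_prob` knob (standing in for the curriculum schedule) are all assumptions for illustration, not the paper's actual transformation.

```python
import numpy as np


def mirror_transition(obs, action, reward, next_obs, done):
    """Reinterpret a forward-motion transition as a backward maneuver.

    Assumed (hypothetical) formats:
      obs / next_obs : 1-D laser scan followed by [goal_distance, goal_heading]
      action         : [linear_velocity, angular_velocity]
    """
    def mirror_obs(o):
        scan, goal = o[:-2], o[-2:]
        scan = np.roll(scan, len(scan) // 2)                  # front <-> rear readings
        heading = (goal[1] + np.pi) % (2 * np.pi) - np.pi     # goal now behind the robot
        return np.concatenate([scan, [goal[0], heading]])

    mirrored_action = np.array([-action[0], action[1]])       # drive backward instead
    return mirror_obs(obs), mirrored_action, reward, mirror_obs(next_obs), done


class MirrorAugmentedReplayBuffer:
    """Replay buffer that pairs successful transitions with mirrored copies."""

    def __init__(self, capacity=100_000, mirror_prob=0.5):
        self.storage = []
        self.capacity = capacity
        self.mirror_prob = mirror_prob  # a curriculum could anneal this over training

    def add(self, obs, action, reward, next_obs, done, success=False):
        self._push((obs, action, reward, next_obs, done))
        # Only successful trajectories are mirrored, as described in the summary.
        if success and np.random.rand() < self.mirror_prob:
            self._push(mirror_transition(obs, action, reward, next_obs, done))

    def _push(self, transition):
        if len(self.storage) >= self.capacity:
            self.storage.pop(0)
        self.storage.append(transition)

    def sample(self, batch_size):
        idx = np.random.randint(len(self.storage), size=batch_size)
        return [self.storage[i] for i in idx]
```

In this sketch the mirroring is purely a data-augmentation step: the policy and reward are untouched, which matches the paper's claim of requiring no failure-driven hindsight relabeling or reward modification.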

| Search Query: ArXiv Query: search_query=au:"Biao Huang"&id_list=&start=0&max_results=3