Learning to Socially Navigate in Pedestrian-rich Environments with Interaction Capacity

Kavli Affiliate: Jing Wang

| First 5 Authors: Quecheng Qiu, Shunyi Yao, Jing Wang, Jun Ma, Guangda Chen

| Summary:

Existing navigation policies for autonomous robots tend to focus on collision avoidance while ignoring human-robot interaction in social settings. For instance, a robot can pass through a corridor more safely and easily if pedestrians notice it. Sound has been considered an effective way to attract pedestrians' attention, which can alleviate the freezing-robot problem. In this work, we present a new deep reinforcement learning (DRL) based social navigation approach that enables autonomous robots to move through pedestrian-rich environments with interaction capacity. Most existing DRL-based methods train a single general policy that outputs both navigation actions, i.e., the robot's expected linear and angular velocities, and interaction actions, i.e., a beep action, entirely within the reinforcement learning framework. In contrast, we train the policy via both supervised learning and reinforcement learning. Specifically, we first train an interaction policy via supervised learning, which provides a better understanding of the social situation, and then use this interaction policy to train the navigation policy with multiple reinforcement learning algorithms. We evaluate our approach in various simulation environments and compare it to other methods. The experimental results show that our approach outperforms the others in terms of success rate. We also deploy the trained policy on a real-world robot, which performs well in crowded environments.
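
The summary only describes the pipeline at a high level, so the following is a minimal sketch of the two-stage idea, assuming a PyTorch setup: an interaction (beep) policy is first trained with supervised labels, then frozen and its output conditions a navigation policy that is updated with a reinforcement learning objective. The observation size, network widths, placeholder data, and the REINFORCE-style update are all assumptions for illustration; the paper reports using multiple RL algorithms, none of which are specified here.

```python
# Hypothetical sketch: supervised interaction policy + RL navigation policy.
# Shapes, data, and the RL update are placeholders, not the authors' code.
import torch
import torch.nn as nn
from torch.distributions import Normal

OBS_DIM = 32  # assumed size of the robot's observation vector


class InteractionPolicy(nn.Module):
    """Predicts whether to beep (binary) from the current observation."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM, 64), nn.ReLU(),
            nn.Linear(64, 2),  # logits for {no-beep, beep}
        )

    def forward(self, obs):
        return self.net(obs)


class NavigationPolicy(nn.Module):
    """Outputs mean linear/angular velocity, conditioned on the beep decision."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(OBS_DIM + 2, 64), nn.ReLU(),
            nn.Linear(64, 2),  # [linear velocity, angular velocity]
        )

    def forward(self, obs, beep_logits):
        return self.net(torch.cat([obs, beep_logits.softmax(dim=-1)], dim=-1))


# --- Stage 1: supervised learning of the interaction policy ----------------
interaction = InteractionPolicy()
opt_i = torch.optim.Adam(interaction.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

# Random tensors standing in for labelled "beep here / don't beep" examples.
obs_batch = torch.randn(128, OBS_DIM)
beep_labels = torch.randint(0, 2, (128,))

for _ in range(100):
    loss = ce(interaction(obs_batch), beep_labels)
    opt_i.zero_grad()
    loss.backward()
    opt_i.step()

# --- Stage 2: frozen interaction policy assists RL on navigation -----------
interaction.eval()
for p in interaction.parameters():
    p.requires_grad_(False)

navigation = NavigationPolicy()
opt_n = torch.optim.Adam(navigation.parameters(), lr=3e-4)

# One REINFORCE-style step; a real setup would roll out episodes in a crowd
# simulator and compute returns (e.g. with PPO or another RL algorithm).
obs = torch.randn(64, OBS_DIM)
with torch.no_grad():
    beep = interaction(obs)
mean_vel = navigation(obs, beep)
dist = Normal(mean_vel, 0.1 * torch.ones_like(mean_vel))
action = dist.sample()
log_prob = dist.log_prob(action).sum(dim=-1)
returns = torch.randn(64)  # placeholder for episode returns from rollouts
loss_rl = -(log_prob * returns).mean()
opt_n.zero_grad()
loss_rl.backward()
opt_n.step()
```

The design choice illustrated here is simply that the supervised stage gives the beep decision a stable, interpretable signal, and the RL stage only has to learn velocity commands on top of it rather than discovering both behaviors from reward alone.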

| Search Query: ArXiv Query: search_query=au:”Jing Wang”&id_list=&start=0&max_results=10

Read More