Learning Fused State Representations for Control from Multi-View Observations

Kavli Affiliate: Li Xin Li

| First 5 Authors: Zeyu Wang, Yao-Hui Li, Xin Li, Hongyu Zang, Romain Laroche

| Summary:

Multi-View Reinforcement Learning (MVRL) seeks to provide agents with
multi-view observations, enabling them to perceive the environment with greater
effectiveness and precision. Recent advancements in MVRL focus on extracting
latent representations from multi-view observations and leveraging them in
control tasks. However, it is not straightforward to learn compact and
task-relevant representations, particularly in the presence of redundancy,
distracting information, or missing views. In this paper, we propose Multi-view
Fusion State for Control (MFSC), the first approach to incorporate bisimulation
metric learning into MVRL for learning task-relevant representations. Furthermore,
we propose a multi-view mask-and-latent-reconstruction auxiliary task that
exploits information shared across views and, by introducing a mask token,
improves MFSC's robustness to missing views. Extensive experimental results
demonstrate that our method outperforms existing approaches in MVRL tasks. Even
in more realistic scenarios with interference or missing views, MFSC
consistently maintains high performance.
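
The sketch below is not the authors' MFSC implementation; it is a minimal, hypothetical PyTorch illustration of the two ideas named in the summary: a bisimulation-style representation loss (latent distances should track reward differences plus discounted next-state latent distances) and fusion of multi-view features in which missing views are replaced by a learnable mask token. Module and function names (`MaskedMultiViewFusion`, `bisimulation_loss`) and all shapes are illustrative assumptions.

```python
# Minimal illustrative sketch (assumes PyTorch); not the authors' MFSC code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedMultiViewFusion(nn.Module):
    """Fuse per-view features; absent views are replaced by a learned mask token."""

    def __init__(self, feat_dim: int, latent_dim: int):
        super().__init__()
        self.mask_token = nn.Parameter(torch.zeros(1, 1, feat_dim))
        self.view_encoder = nn.Linear(feat_dim, latent_dim)
        # Any set/sequence fuser works; a GRU over the view axis is used here for brevity.
        self.fusion = nn.GRU(latent_dim, latent_dim, batch_first=True)

    def forward(self, views: torch.Tensor, present: torch.Tensor) -> torch.Tensor:
        # views:   (B, V, feat_dim) per-view features
        # present: (B, V) boolean mask, False where a view is missing
        mask = present.unsqueeze(-1).float()
        views = views * mask + self.mask_token * (1.0 - mask)
        z = F.relu(self.view_encoder(views))
        _, h = self.fusion(z)
        return h.squeeze(0)  # (B, latent_dim) fused state


def bisimulation_loss(z, z_perm, r, r_perm, z_next, z_next_perm, discount=0.99):
    """Bisimulation-style loss: latent distance should match the reward difference
    plus the discounted distance between next-state latents (targets detached)."""
    dist = torch.norm(z - z_perm, dim=-1)
    target = (r - r_perm).abs() + discount * torch.norm(z_next - z_next_perm, dim=-1).detach()
    return F.mse_loss(dist, target)


if __name__ == "__main__":
    B, V, D, L = 8, 3, 32, 16
    fuser = MaskedMultiViewFusion(D, L)
    views, views_next = torch.randn(B, V, D), torch.randn(B, V, D)
    present = torch.rand(B, V) > 0.2  # randomly drop views to mimic missing inputs
    rewards = torch.randn(B)

    z, z_next = fuser(views, present), fuser(views_next, present)
    perm = torch.randperm(B)  # compare each sample against a shuffled partner
    loss = bisimulation_loss(z, z[perm], rewards, rewards[perm], z_next, z_next[perm])
    loss.backward()
    print("bisimulation loss:", loss.item())
```

In a full training loop, this loss would be combined with the masked latent reconstruction objective and the usual RL objective; the snippet only shows the representation-side pieces.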

| Search Query: ArXiv Query: search_query=au:"Li Xin Li"&id_list=&start=0&max_results=3