Subgoal-based Hierarchical Reinforcement Learning for Multi-Agent Collaboration

Kavli Affiliate: Ran Wang

| First 5 Authors: Cheng Xu, Changtian Zhang, Yuchen Shi, Ran Wang, Shihong Duan

| Summary:

Recent advancements in reinforcement learning have made significant impacts
across various domains, yet they often struggle in complex multi-agent
environments due to issues like algorithm instability, low sampling efficiency,
and the challenges of exploration and dimensionality explosion. Hierarchical
reinforcement learning (HRL) offers a structured approach to decompose complex
tasks into simpler sub-tasks, which is promising for multi-agent settings. This
paper advances the field by introducing a hierarchical architecture that
autonomously generates effective subgoals without explicit constraints,
enhancing both flexibility and stability in training. We propose a dynamic goal
generation strategy that adapts based on environmental changes. This method
significantly improves the adaptability and sample efficiency of the learning
process. Furthermore, we address the critical issue of credit assignment in
multi-agent systems by synergizing our hierarchical architecture with a
modified QMIX network, thus improving overall strategy coordination and
efficiency. Comparative experiments with mainstream reinforcement learning
algorithms demonstrate the superior convergence speed and performance of our
approach in both single-agent and multi-agent environments, confirming its
effectiveness and flexibility in complex scenarios. Our code is open-sourced
at: https://github.com/SICC-Group/GMAH.
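The abstract's credit-assignment component builds on QMIX, whose key idea is mixing per-agent Q-values into a joint value under a monotonicity constraint, so that each agent's greedy action also maximizes the joint value. The paper's modified QMIX network is not reproduced here; the following is a minimal NumPy sketch of the standard QMIX mixing step, with all function and parameter names (`qmix_mixing`, `w1`, `b1`, `w2`, `b2`, the layer sizes) being illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def qmix_mixing(agent_qs, state, w1, b1, w2, b2):
    """Mix per-agent Q-values into a joint Q_tot (QMIX-style sketch).

    Monotonicity (dQ_tot/dQ_i >= 0) is enforced by taking the absolute
    value of the state-conditioned mixing weights, as in QMIX. All
    parameter shapes here are hypothetical, chosen for the demo below.

    agent_qs: (n_agents,) chosen-action Q-value of each agent.
    state:    (state_dim,) global state fed to the hypernetworks.
    """
    n_agents = len(agent_qs)
    # Hypernetworks: linear maps from global state to mixing weights.
    W1 = np.abs(w1 @ state).reshape(n_agents, -1)  # (n_agents, hidden), >= 0
    B1 = b1 @ state                                # (hidden,)
    h = np.maximum(agent_qs @ W1 + B1, 0.0)        # ReLU hidden layer
    W2 = np.abs(w2 @ state)                        # (hidden,), >= 0
    B2 = (b2 @ state).sum()                        # scalar bias
    return float(h @ W2 + B2)

# Demo: raising one agent's Q-value can never lower Q_tot.
rng = np.random.default_rng(0)
n_agents, hidden, state_dim = 2, 4, 3
w1 = rng.standard_normal((n_agents * hidden, state_dim))
b1 = rng.standard_normal((hidden, state_dim))
w2 = rng.standard_normal((hidden, state_dim))
b2 = rng.standard_normal((1, state_dim))
state = rng.standard_normal(state_dim)

q_tot_lo = qmix_mixing(np.array([0.5, -0.2]), state, w1, b1, w2, b2)
q_tot_hi = qmix_mixing(np.array([1.5, -0.2]), state, w1, b1, w2, b2)
assert q_tot_hi >= q_tot_lo  # monotone in each agent's Q-value
```

Because the hypernetwork outputs pass through `np.abs` and the hidden activation is nondecreasing, Q_tot is nondecreasing in every agent's Q-value, which is the structural property that makes per-agent greedy action selection consistent with the joint value.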

| Search Query: ArXiv Query: search_query=au:"Ran Wang"&id_list=&start=0&max_results=3