Dynamic Deep Factor Graph for Multi-Agent Reinforcement Learning

Kavli Affiliate: Ran Wang

| First 5 Authors: Yuchen Shi, Shihong Duan, Cheng Xu, Ran Wang, Fangwen Ye

| Summary:

This work introduces a novel value decomposition algorithm, termed
Dynamic Deep Factor Graphs (DDFG). Unlike traditional coordination
graphs, DDFG leverages factor graphs to articulate the decomposition of value
functions, offering enhanced flexibility and adaptability to complex value
function structures. Central to DDFG is a graph structure generation policy
that generates factor graph structures on the fly, addressing the dynamic
collaboration requirements among agents. DDFG strikes a balance between the
computational overhead of aggregating value functions and the performance
degradation inherent in fully decomposing them. Using the max-sum algorithm,
DDFG efficiently identifies optimal policies. We empirically validate DDFG's
efficacy in complex scenarios, including higher-order predator-prey tasks and
the StarCraft II Multi-Agent Challenge (SMAC), demonstrating its ability to
overcome the limitations of existing value decomposition algorithms.
DDFG emerges as a robust solution for MARL challenges that demand nuanced
understanding and facilitation of dynamic agent collaboration. The
implementation of DDFG is made publicly accessible, with the source code
available at https://github.com/SICC-Group/DDFG.
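
To make the two mechanisms in the summary concrete, the sketch below is a minimal, hypothetical illustration rather than the authors' released implementation: a fixed set of factor scopes stands in for DDFG's learned graph-structure generation policy, random payoff tables stand in for the learned per-factor value functions, and a plain max-sum message-passing loop selects a joint action. All names and numbers are illustrative; the full learned version is in the repository linked above.

```python
"""Toy sketch: factor-graph structure + max-sum joint-action selection.

Hypothetical stand-ins only; not the DDFG codebase.
"""
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_actions = 4, 3

# (1) Stand-in "structure": which agents each factor covers.
#     DDFG instead generates these scopes on the fly from the global state.
factors = [(0, 1), (1, 2, 3), (0, 3)]
payoffs = {f: rng.normal(size=(n_actions,) * len(f)) for f in factors}

# (2) Max-sum message passing to (approximately) maximize the summed payoffs.
q = {(i, f): np.zeros(n_actions) for f in factors for i in f}  # agent -> factor
r = {(f, i): np.zeros(n_actions) for f in factors for i in f}  # factor -> agent

for _ in range(10):
    # Agent-to-factor: sum of incoming factor messages, excluding the target.
    for (i, f) in q:
        q[i, f] = sum((r[g, i] for g in factors if i in g and g != f),
                      np.zeros(n_actions))
        q[i, f] -= q[i, f].mean()  # keep messages from drifting
    # Factor-to-agent: maximize payoff plus messages over the other agents.
    for (f, i) in r:
        best = np.full(n_actions, -np.inf)
        for joint in itertools.product(range(n_actions), repeat=len(f)):
            val = payoffs[f][joint] + sum(
                q[j, f][a] for j, a in zip(f, joint) if j != i)
            a_i = joint[f.index(i)]
            best[a_i] = max(best[a_i], val)
        r[f, i] = best

# Each agent acts greedily on its summed incoming messages.
joint_action = [
    int(np.argmax(sum((r[f, i] for f in factors if i in f),
                      np.zeros(n_actions))))
    for i in range(n_agents)
]
print("selected joint action:", joint_action)
```

Because the graph is sparse, each max-sum step only enumerates the joint actions within a factor's scope, which is the trade-off the summary describes between fully aggregated and fully decomposed value functions.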

| Search Query: ArXiv Query: search_query=au:"Ran Wang"&id_list=&start=0&max_results=3