AgentCoord: Visually Exploring Coordination Strategy for LLM-based Multi-Agent Collaboration

Kavli Affiliate: Ke Wang

| First 5 Authors: Bo Pan, Jiaying Lu, Ke Wang, Li Zheng, Zhen Wen

| Summary:

The potential of automatic task-solving through Large Language Model
(LLM)-based multi-agent collaboration has recently garnered widespread
attention from both the research community and industry. While utilizing
natural language to coordinate multiple agents presents a promising avenue for
democratizing agent technology for general users, designing coordination
strategies remains challenging with existing coordination frameworks. This
difficulty stems from the inherent ambiguity of natural language for specifying
the collaboration process and the significant cognitive effort required to
extract crucial information (e.g., agent relationships, task dependencies, result
correspondences) from a vast amount of text-form content during exploration. In
this work, we present a visual exploration framework to facilitate the design
of coordination strategies in multi-agent collaboration. We first establish a
structured representation for LLM-based multi-agent coordination strategy to
regularize the ambiguity of natural language. Based on this structure, we
devise a three-stage generation method that leverages LLMs to convert a user’s
general goal into an executable initial coordination strategy. Users can
further intervene at any stage of the generation process, utilizing LLMs and a
set of interactions to explore alternative strategies. Whenever a satisfactory
strategy is identified, users can commence the collaboration and examine the
visually enhanced execution result. We develop AgentCoord, a prototype
interactive system, and conduct a formal user study to demonstrate the
feasibility and effectiveness of our approach.
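
The structured representation mentioned above could be pictured roughly as follows. This is only an illustrative sketch with assumed names (`Agent`, `Step`, `Strategy` and their fields are hypothetical); the paper's actual schema may differ.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a structured coordination strategy.
# All class and field names here are illustrative assumptions,
# not the schema used by AgentCoord itself.

@dataclass
class Agent:
    name: str
    role: str  # natural-language role description


@dataclass
class Step:
    task: str      # what to do, in natural language
    assignee: str  # name of the agent responsible
    depends_on: list = field(default_factory=list)  # indices of prerequisite steps


@dataclass
class Strategy:
    goal: str
    agents: list
    steps: list

    def execution_order(self):
        """Order steps so each runs only after its dependencies (assumes no cycles)."""
        order, done = [], set()
        while len(done) < len(self.steps):
            for i, step in enumerate(self.steps):
                if i not in done and all(d in done for d in step.depends_on):
                    order.append(i)
                    done.add(i)
        return order


strategy = Strategy(
    goal="Write a short market report",
    agents=[Agent("Researcher", "gathers facts"),
            Agent("Writer", "drafts the report")],
    steps=[
        Step("Collect data", "Researcher"),
        Step("Draft report", "Writer", depends_on=[0]),
    ],
)
print(strategy.execution_order())  # → [0, 1]
```

Making agent relationships and task dependencies explicit fields, rather than leaving them implicit in free-form text, is what lets such a structure be visualized and edited stage by stage.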

| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=3