Kavli Affiliate: Jia Liu
| First 5 Authors: Jia Liu, Jie Shuai
| Summary:
Current prompting approaches for large language model (LLM) inference rely
mainly on the LLM's autonomous exploration of reasoning paths, which entails an
inevitable backtracking operation whenever an erroneous route is encountered,
followed by the pursuit of alternative reasoning paths. Humans, by contrast,
are adept at abstracting optimal solutions from problems, which enables swift
and precise reasoning when resolving similar problems. In light of this, we
delve into the potential of harnessing expert knowledge to enhance
problem-solving within LLMs. We introduce a novel paradigm, the State Machine
of Thought (SMoT), which employs predefined state machines to furnish LLMs with
efficient reasoning paths, thereby eliminating fruitless exploration.
Furthermore, we propose a multi-agent mechanism that assigns different
objectives to the agents in order to improve the accuracy of SMoT's reasoning.
Experimental results on an array reasoning task show that SMoT achieves an
accuracy of 95%, surpassing the performance of state-of-the-art baselines.
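
The abstract gives no implementation details, but the core idea, a predefined
state machine that fixes which reasoning step the LLM takes next, can be
sketched as below. This is only a minimal illustration under assumptions, not
the authors' code: `query_llm`, the state names, and the prompt templates are
hypothetical placeholders invented for an array-reasoning example.

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in: replace with a call to any LLM backend."""
    raise NotImplementedError("plug in an LLM client here")

# Predefined state machine: each state maps to the prompt the LLM should
# answer in that state, plus a fixed next state. Because the path is laid
# out in advance by an expert, the model never explores dead-end routes
# and never needs to backtrack.
STATE_MACHINE = {
    "parse":  ("Extract the array and the target value from: {task}", "scan"),
    "scan":   ("Given {context}, find the index of the target value.", "verify"),
    "verify": ("Check this result for errors and restate it: {context}", None),
}

def smot_solve(task: str) -> str:
    """Walk the fixed state sequence, feeding each LLM output forward."""
    state, context = "parse", task
    while state is not None:
        template, next_state = STATE_MACHINE[state]
        context = query_llm(template.format(task=task, context=context))
        state = next_state
    return context

# The multi-agent mechanism mentioned in the abstract could plausibly
# assign each state to a dedicated agent with its own objective (e.g., a
# solver for "scan" and a verifier for "verify"); here a single
# query_llm stands in for all agents.
```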
| Search Query: ArXiv Query: search_query=au:"Jia Liu"&id_list=&start=0&max_results=3