Kavli Affiliate: Ke Wang | First 5 Authors: Qinhao Zhou, Zihan Zhang, Xiang Xiang, Ke Wang, Yuchuan Wu | Summary: Open-source pre-trained Large Language Models (LLMs) exhibit strong language understanding and generation capabilities, making them highly successful in a variety of tasks. However, when used as agents for dealing with complex problems in the real […]
Title: Enhancing the General Agent Capabilities of Low-Parameter LLMs through Tuning and Multi-Branch Reasoning