Kavli Affiliate: Xiang Zhang | First 5 Authors: Xiang Zhang, Juntai Cao, Jiaqi Wei, Chenyu You, Dujian Ding | Summary: Despite the remarkable successes of large language models (LLMs), the underlying Transformer architecture has inherent limitations in handling complex reasoning tasks. Chain-of-thought (CoT) prompting has emerged as a practical workaround, but most CoT-based methods rely […]
Continue reading: Why Prompt Design Matters and Works: A Complexity Analysis of Prompt Search Space in LLMs