Beyond Prompt Content: Enhancing LLM Performance via Content-Format Integrated Prompt Optimization

Kavli Affiliate: Cheng Peng

| First 5 Authors: Yuanye Liu, Jiahang Xu, Li Lyna Zhang, Qi Chen, Xuan Feng

| Summary:

Large Language Models (LLMs) have shown significant capability across various
tasks, with their real-world effectiveness often driven by prompt design. While
recent research has focused on optimizing prompt content, the role of prompt
formatting, a critical but often overlooked dimension, has received limited
systematic investigation. In this paper, we introduce Content-Format Integrated
Prompt Optimization (CFPO), an innovative methodology that jointly optimizes
both prompt content and formatting through an iterative refinement process.
CFPO leverages natural language mutations to explore content variations and
employs a dynamic format exploration strategy that systematically evaluates
diverse format options. Our extensive evaluations across multiple tasks and
open-source LLMs show that CFPO achieves measurable performance
improvements compared to content-only optimization methods. This highlights the
importance of integrated content-format optimization and offers a practical,
model-agnostic approach to enhancing LLM performance. Code will be available at
https://github.com/HenryLau7/CFPO.
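
To make the iterative refinement loop described above concrete, here is a minimal, hypothetical sketch of a joint content-format search in Python. It is not the authors' implementation: the helpers `evaluate` and `mutate_content`, the `formats` pool, and the beam-style selection are all assumptions standing in for CFPO's scorer, its natural-language content mutations, and its dynamic format exploration.

```python
import random

def cfpo_sketch(seed_content, formats, evaluate, mutate_content,
                rounds=10, beam=4):
    """Minimal sketch of a joint content-format prompt search.

    Hypothetical helpers (not from the CFPO codebase):
      - evaluate(content, fmt) -> float score on a validation set
      - mutate_content(content) -> list of natural-language variants
      - formats: pool of candidate prompt formats (e.g. templates)
    """
    # Start from the seed content paired with every candidate format.
    pool = [(seed_content, f) for f in formats]
    scored = sorted(pool, key=lambda cf: evaluate(*cf), reverse=True)[:beam]

    for _ in range(rounds):
        candidates = []
        for content, fmt in scored:
            # Content step: explore natural-language mutations.
            for variant in mutate_content(content):
                candidates.append((variant, fmt))
            # Format step: re-pair the same content with other formats.
            for new_fmt in random.sample(formats, k=min(2, len(formats))):
                candidates.append((content, new_fmt))
        # Keep the best-performing (content, format) pairs.
        scored = sorted(candidates, key=lambda cf: evaluate(*cf),
                        reverse=True)[:beam]

    return scored[0]  # best (content, format) pair found
```

The key design point the sketch tries to capture is that content and format are varied within the same loop and scored together, rather than fixing a format first and tuning content afterward.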

| Search Query: ArXiv Query: search_query=au:"Cheng Peng"&id_list=&start=0&max_results=3