Improving Parallel Program Performance with LLM Optimizers via Agent-System Interface

Kavli Affiliate: Ke Wang

| First 5 Authors: Anjiang Wei, Allen Nie, Thiago S. F. X. Teixeira, Rohan Yadav, Wonchan Lee

| Summary:

Modern scientific discovery increasingly relies on high-performance computing
for complex modeling and simulation. A key challenge in improving parallel
program performance is efficiently mapping tasks to processors and data to
memory, a process dictated by intricate, low-level system code known as
mappers. Developing high-performance mappers demands days of manual tuning,
posing a significant barrier for domain scientists without systems expertise.
We introduce a framework that automates mapper development with generative
optimization, leveraging richer feedback beyond scalar performance metrics. Our
approach features the Agent-System Interface, which includes a Domain-Specific
Language (DSL) to abstract away the low-level complexity of system code and define
a structured search space, as well as AutoGuide, a mechanism that interprets
raw execution output into actionable feedback. Unlike search-based autotuners
such as OpenTuner, which rely solely on scalar performance feedback, our method
finds superior mappers in far fewer iterations: with just 10 iterations, it
outperforms the mapper OpenTuner finds after 1,000 iterations, achieving 3.8X
faster execution. Our approach also discovers mappers that outperform
expert-written ones by up to 1.34X across nine benchmarks while reducing tuning
time from days to minutes.
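
To make the loop concrete, below is a minimal Python sketch of the generative-optimization cycle the summary describes: a model proposes a candidate mapper in a DSL, the system runs it, and an AutoGuide-style step converts raw execution output into actionable feedback for the next proposal. Everything here is illustrative: the function names (`propose_mapper`, `run_benchmark`, `interpret_output`), the DSL syntax, and the processor/memory kinds (GPU, FBMEM, etc., borrowed from Legion-style runtimes) are assumptions, not the paper's actual interface.

```python
# Minimal sketch of the generative-optimization loop described above.
# All names and the DSL syntax are hypothetical stand-ins, not the
# paper's actual API; the "LLM" and "benchmark" are stubbed so the
# sketch runs on its own.

import random

def propose_mapper(history):
    """Stand-in for an LLM call: emit a candidate mapper in the DSL.

    A real implementation would prompt the model with the DSL grammar
    and the accumulated textual feedback in `history`."""
    processor = random.choice(["CPU", "GPU", "OMP"])
    memory = random.choice(["SYSMEM", "FBMEM", "ZCMEM"])
    # Hypothetical DSL: one rule for task placement, one for data placement.
    return f"Task stencil {processor};\nRegion stencil * {memory};"

def run_benchmark(mapper_src):
    """Stand-in for compiling the mapper and running the parallel program.

    Returns raw execution output (a log) and a wall-clock time; both are
    fabricated here so the sketch is self-contained."""
    ok = "GPU" in mapper_src and "FBMEM" in mapper_src
    time_s = random.uniform(1.0, 2.0) if ok else random.uniform(3.0, 6.0)
    log = "" if ok else "warning: data placed far from compute processor"
    return log, time_s

def interpret_output(log, time_s):
    """AutoGuide-style step: turn raw output into actionable feedback
    rather than a bare scalar reward."""
    if log:
        return f"runtime {time_s:.2f}s; hint: {log}; try co-locating data"
    return f"runtime {time_s:.2f}s; no issues detected"

history, best = [], (float("inf"), None)
for it in range(10):  # the summary reports strong mappers within ~10 iterations
    mapper = propose_mapper(history)
    log, time_s = run_benchmark(mapper)
    history.append((mapper, interpret_output(log, time_s)))
    if time_s < best[0]:
        best = (time_s, mapper)

print(f"best runtime {best[0]:.2f}s with mapper:\n{best[1]}")
```

The structural point is that `history` carries textual hints about *why* a run was slow, not just its runtime; that richer feedback is what the summary credits for converging in far fewer iterations than scalar-only tuning.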

| Search Query: ArXiv Query: search_query=au:"Ke Wang"&id_list=&start=0&max_results=3
