GitTaskBench: A Benchmark for Code Agents Solving Real-World Tasks Through Code Repository Leveraging

Kavli Affiliate: Li Xin Li

| First 5 Authors: Ziyi Ni

| Summary:

Beyond scratch coding, exploiting large-scale code repositories (e.g.,
GitHub) for practical tasks is vital in real-world software development, yet
current benchmarks rarely evaluate code agents in such authentic,
workflow-driven scenarios. To bridge this gap, we introduce GitTaskBench, a
benchmark designed to systematically assess this capability via 54 realistic
tasks across 7 modalities and 7 domains. Each task pairs a relevant repository
with an automated, human-curated evaluation harness specifying practical
success criteria. Beyond measuring execution and task success, we also propose
the alpha-value metric to quantify the economic benefit of agent performance,
which integrates task success rates, token cost, and average developer
salaries. Experiments across three state-of-the-art agent frameworks with
multiple advanced LLMs show that leveraging code repositories for complex task
solving remains challenging: even the best-performing system, OpenHands+Claude
3.7, solves only 48.15% of tasks. Error analysis attributes over half of
failures to seemingly mundane yet critical steps like environment setup and
dependency resolution, highlighting the need for more robust workflow
management and increased timeout preparedness. By releasing GitTaskBench, we
aim to drive progress and attention toward repository-aware code reasoning,
execution, and deployment — moving agents closer to solving complex,
end-to-end real-world tasks. The benchmark and code are open-sourced at
https://github.com/QuantaAlpha/GitTaskBench.
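
The alpha-value metric is only described at a high level in this summary (it combines task success rate, token cost, and average developer salary); the exact formula is not given here. The sketch below is a hypothetical illustration of how such a net-benefit metric could be computed, with the per-task human cost and token cost figures chosen purely as placeholders, not taken from the paper.

```python
"""Illustrative (not official) alpha-value computation.

The summary says alpha-value integrates task success rate, token cost,
and average developer salary. The weighting below (expected human labor
cost saved minus token spend) is an assumption for illustration only.
"""

def alpha_value(success_rate: float,
                token_cost_usd: float,
                human_cost_per_task_usd: float = 60.0) -> float:
    """Hypothetical net economic benefit per task, in USD.

    success_rate: fraction of tasks the agent solves (0..1).
    token_cost_usd: average LLM token spend per task.
    human_cost_per_task_usd: assumed cost of a developer doing the task
        manually (e.g., hourly salary x estimated hours); placeholder value.
    """
    expected_human_cost_saved = success_rate * human_cost_per_task_usd
    return expected_human_cost_saved - token_cost_usd


if __name__ == "__main__":
    # Example: the 48.15% success rate reported for OpenHands + Claude 3.7,
    # with a hypothetical $1.20 average token cost per task.
    print(f"alpha-value ~= ${alpha_value(0.4815, 1.20):.2f} per task")
```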

| Search Query: ArXiv Query: search_query=au:"Li Xin Li"&id_list=&start=0&max_results=3