Kavli Affiliate: Li Xin Li
| First 5 Authors: Weihao Xuan, Rui Yang, Heli Qi, Qingcheng Zeng, Yunze Xiao
| Summary:
Traditional benchmarks struggle to evaluate increasingly sophisticated
language models in multilingual and culturally diverse contexts. To address
this gap, we introduce MMLU-ProX, a comprehensive multilingual benchmark
covering 13 typologically diverse languages with approximately 11,829 questions
per language. Building on the challenging reasoning-focused design of MMLU-Pro,
our framework employs a semi-automatic translation process: translations
generated by state-of-the-art large language models (LLMs) are rigorously
evaluated by expert annotators to ensure conceptual accuracy, terminological
consistency, and cultural relevance. We comprehensively evaluate 25
state-of-the-art LLMs using 5-shot chain-of-thought (CoT) and zero-shot
prompting strategies, analyzing their performance across linguistic and
cultural boundaries. Our experiments reveal consistent performance degradation
from high-resource languages to lower-resource ones, with the best models
achieving over 70% accuracy on English but dropping to around 40% for languages
like Swahili, highlighting persistent gaps in multilingual capabilities despite
recent advances. MMLU-ProX is an ongoing project; we are expanding our
benchmark by incorporating additional languages and evaluating more language
models to provide a more comprehensive assessment of multilingual capabilities.
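| Note: The abstract describes evaluating models with 5-shot chain-of-thought (CoT) prompting on multiple-choice items. The following is a minimal sketch of what such an evaluation loop can look like; it is not the authors' harness, and the function names (`build_cot_prompt`, `extract_choice`, `query_model`) and data fields are illustrative assumptions.

```python
# Minimal sketch (not the MMLU-ProX code): 5-shot CoT prompting and scoring
# for MMLU-Pro-style multiple-choice items. `query_model` is a hypothetical
# stand-in for whichever LLM API is under evaluation.
import re
from typing import Callable

def build_cot_prompt(few_shot: list[dict], question: dict) -> str:
    """Concatenate five worked examples (question, options, reasoning, answer)
    followed by the target question, prompting step-by-step reasoning."""
    parts = []
    for ex in few_shot[:5]:
        opts = "\n".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(ex["options"]))
        parts.append(
            f"Question: {ex['question']}\n{opts}\n"
            f"Answer: Let's think step by step. {ex['cot']} "
            f"The answer is ({ex['answer']})."
        )
    opts = "\n".join(f"({chr(65 + i)}) {o}" for i, o in enumerate(question["options"]))
    parts.append(
        f"Question: {question['question']}\n{opts}\n"
        "Answer: Let's think step by step."
    )
    return "\n\n".join(parts)

def extract_choice(completion: str) -> str | None:
    """Pull the final '(X)' answer letter from the model's reasoning, if any."""
    matches = re.findall(r"answer is \(([A-J])\)", completion)
    return matches[-1] if matches else None

def evaluate(items: list[dict], few_shot: list[dict],
             query_model: Callable[[str], str]) -> float:
    """Accuracy of one model over one language split under 5-shot CoT prompting."""
    correct = 0
    for item in items:
        pred = extract_choice(query_model(build_cot_prompt(few_shot, item)))
        correct += int(pred == item["answer"])
    return correct / len(items) if items else 0.0
```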
| Search Query: ArXiv Query: search_query=au:"Li Xin Li"&id_list=&start=0&max_results=3