Kavli Affiliate: Yi Zhou
| First 5 Authors: Haoyi Wu, Wenyang Hui, Yezeng Chen, Weiqi Wu, Kewei Tu
| Summary:
Mathematical understanding and reasoning are crucial tasks for assessing the
capabilities of artificial intelligence (AI). However, existing benchmarks
either require only a few steps of reasoning or contain a small amount of data
on one specific topic, making it hard to analyse an AI's behaviour in detail
across different problems within a single topic. In this work, we propose
Conic10K, a challenging math problem dataset on conic sections from Chinese
senior high school education. Our dataset contains problems of varying
reasoning depth, while requiring only knowledge of conic sections. Because the
dataset involves such a narrow range of knowledge, the knowledge a model
possesses and its reasoning ability can be analysed separately. For each
problem, we provide a high-quality formal representation, the reasoning steps,
and the final solution. Experiments show that existing large language models,
including GPT-4, perform poorly on complex reasoning. We hope that our findings
inspire more advanced techniques for precise natural language understanding and
reasoning. Our dataset and code are available at
https://github.com/whyNLP/Conic10K.
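A minimal sketch of what one annotated Conic10K problem might look like when loaded locally. The field names (question, formal_representation, reasoning_steps, answer) and the JSON layout are assumptions for illustration only; the actual release in the repository above may use a different schema.

```python
import json

# Hypothetical example record; actual field names in the Conic10K release may differ.
record = {
    "question": "已知椭圆 x^2/4 + y^2 = 1, 求离心率.",  # problem text (Chinese)
    "formal_representation": "Ellipse(e): x^2/4 + y^2 = 1; e.eccentricity = ?",
    "reasoning_steps": [
        "a^2 = 4, b^2 = 1, so c^2 = a^2 - b^2 = 3",
        "eccentricity = c / a = sqrt(3) / 2",
    ],
    "answer": "sqrt(3)/2",
}

# Serialise and reload, as one would when reading a local JSON copy of the dataset.
blob = json.dumps(record, ensure_ascii=False)
loaded = json.loads(blob)
print(loaded["answer"])  # -> sqrt(3)/2
```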
| Search Query: ArXiv Query: search_query=au:"Yi Zhou"&id_list=&start=0&max_results=3