Kavli Affiliate: Long Zhang
| First 5 Authors: Long Zhang, Meng Zhang, Wei Lin Wang, Yu Luo,
| Summary:
The advancement of Artificial Intelligence (AI) has created opportunities for
e-learning, particularly in automated assessment systems that reduce educators’
workload and provide timely feedback to students. However, developing effective
AI-based assessment tools remains challenging due to the substantial resources
required for collecting and annotating real student data. This study
investigates the potential of simulated (synthetic) data to address this
limitation, as well as its gaps. Through a two-phase experimental study, we
examined the effectiveness and limitations of synthetic data generated by
Large Language Models for training educational assessment systems.
Our findings reveal that while synthetic data
yields promising results in training automated assessment models,
outperforming the state-of-the-art GPT-4o on most question types, its
effectiveness has notable limitations. Specifically, models trained on
synthetic data perform excellently in simulated environments but fall short
when applied to real-world scenarios. This performance gap highlights the
limitations of relying solely on synthetic data produced in controlled
experimental settings for AI training. The absence of real-world noise and
biases, a shortcoming also found in over-processed real-world data,
contributes to this limitation.
We recommend that future development of automated assessment agents and other
AI tools incorporate a mixture of synthetic and real-world data, or introduce
more realistic noise and bias patterns, rather than relying solely on
synthetic or over-processed data.
| Search Query: ArXiv Query: search_query=au:"Long Zhang"&id_list=&start=0&max_results=3