Automatic coding of students’ writing via Contrastive Representation Learning in the Wasserstein space

Kavli Affiliate: Eric Miller

| First 5 Authors: Ruijie Jiang, Julia Gouvea, David Hammer, Eric Miller, Shuchin Aeron

| Summary:

Qualitative analysis of verbal data is of central importance in the learning
sciences. It is labor-intensive and time-consuming, however, which limits the
amount of data researchers can include in studies. This work is a step towards
building a statistical machine learning (ML) method to provide automated
support for qualitative analyses of students' writing, here specifically in
scoring laboratory reports in introductory biology for sophistication of
argumentation and reasoning. We start with a set of lab reports from an
undergraduate biology course, scored by a four-level scheme that considers the
complexity of argument structure, the scope of evidence, and the care and
nuance of conclusions. Using this set of labeled data, we show that a popular
natural language processing pipeline, namely vector representations of words
(word embeddings) followed by a Long Short-Term Memory (LSTM) model that
captures language generation as a state-space model, is able to
quantitatively capture the scoring, with a high Quadratic Weighted Kappa (QWK)
prediction score, when trained via a novel contrastive learning setup. We
show that the ML algorithm approached the inter-rater reliability of human
analysis. Ultimately, we conclude that ML for natural
language processing (NLP) holds promise for assisting learning sciences
researchers in conducting qualitative studies at much larger scales than is
currently possible.
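As a concrete illustration of the evaluation metric named above, the Quadratic Weighted Kappa between model predictions and human scores can be computed as in the following minimal sketch. It assumes the four rubric levels are coded as integers 0 through 3; the function name and interface are ours for illustration, not taken from the paper.

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, num_levels=4):
    """Quadratic Weighted Kappa between two sets of ordinal scores.

    Scores are assumed to be integers in {0, ..., num_levels - 1}.
    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement.
    """
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)
    n = len(rater_a)

    # Observed agreement: confusion matrix of the two score sets.
    observed = np.zeros((num_levels, num_levels))
    for a, b in zip(rater_a, rater_b):
        observed[a, b] += 1

    # Expected agreement under chance: outer product of the marginal
    # score histograms, normalized to the same total count.
    hist_a = np.bincount(rater_a, minlength=num_levels)
    hist_b = np.bincount(rater_b, minlength=num_levels)
    expected = np.outer(hist_a, hist_b) / n

    # Quadratic penalty: disagreements cost the squared level distance.
    idx = np.arange(num_levels)
    weights = (idx[:, None] - idx[None, :]) ** 2 / (num_levels - 1) ** 2

    return 1.0 - (weights * observed).sum() / (weights * expected).sum()
```

Because the weights grow quadratically with the distance between the two assigned levels, QWK penalizes a score that is off by two levels four times as heavily as one off by a single level, which is why it is a common choice for ordinal rubrics like the four-level scheme described above.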

| Search Query: ArXiv Query: search_query=au:”Eric Miller”&id_list=&start=0&max_results=10
