A framework for effectively utilising human grading input in automated short answer grading

Andrew Kwok Fai Lui, Sin Chun Ng, Stella Wing Nga Cheung

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)

Abstract

Short answer questions are effective for assessing recall knowledge, but grading a large number of short answers is costly and time consuming. To apply short answer questions on MOOC platforms, the issues of scalability and responsiveness must be addressed. Automated grading uses a computing process and a machine learning grading model to classify answers as correct, wrong, or other levels of correctness. The divide-and-grade approach has proven effective in reducing the annotation effort needed for learning the grading model. This paper presents an improvement on the divide-and-grade approach designed to increase the utility of human grading actions. A novel short answer grading framework is proposed that addresses the selection of impactful answers for grading, the injection of ground-truth grades to steer the process towards purer final clusters, and the final grade assignment. Experiment results indicate that grading quality can be improved with the same level of human actions.
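The core idea behind divide-and-grade can be sketched as follows: group similar answers into clusters, have a human grade only one representative per cluster, and propagate that grade to the remaining members. The sketch below is illustrative only, assuming a simple token-overlap similarity and a greedy clustering pass; it is not the paper's actual algorithm, which uses semi-supervised clustering with injected ground-truth grades.

```python
# Hypothetical sketch of divide-and-grade: cluster similar short answers,
# spend one human grading action per cluster, propagate the grade.
# All function names and the threshold are illustrative assumptions.

def tokens(answer):
    """Lowercased word-token set of an answer."""
    return set(answer.lower().split())

def jaccard(a, b):
    """Jaccard similarity between the token sets of two answers."""
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def cluster_answers(answers, threshold=0.5):
    """Greedy single-pass clustering by token overlap.

    Each cluster is a list of indices into `answers`; the first index
    in each cluster serves as the cluster's representative.
    """
    clusters = []
    for i, ans in enumerate(answers):
        for cluster in clusters:
            representative = answers[cluster[0]]
            if jaccard(ans, representative) >= threshold:
                cluster.append(i)
                break
        else:
            clusters.append([i])
    return clusters

def divide_and_grade(answers, human_grade):
    """Grade one representative per cluster, propagate to all members."""
    grades = [None] * len(answers)
    for cluster in cluster_answers(answers):
        grade = human_grade(answers[cluster[0]])  # one human action
        for i in cluster:
            grades[i] = grade
    return grades
```

With purer clusters (which the paper's ground-truth injection aims for), each propagated grade is more likely to be correct, so fewer human actions yield higher grading quality.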

Original language: English
Pages (from-to): 266-286
Number of pages: 21
Journal: International Journal of Mobile Learning and Organisation
Volume: 16
Issue number: 3
DOIs
Publication status: Published - 2022

Keywords

  • automated grading
  • automated short answer grading
  • clustering
  • MOOCs
  • semi-supervised clustering

