Comparative Study of GenAI (ChatGPT) vs. Human in Generating Multiple Choice Questions Based on the PIRLS Reading Assessment Framework

Yu Yan Lam, Samuel Kai Wah Chu, Elsie Li Chen Ong, Winnie Wing Lam Suen, Lingran Xu, Lavender Chin Lui Lam, Scarlett Man Yu Wong

Research output: Contribution to journal › Article › peer-review

Abstract

Human-generated multiple-choice questions (MCQs) are commonly used to ensure objective evaluation in education. However, generating high-quality questions is difficult and time-consuming. Generative artificial intelligence (GenAI) has emerged as an automated approach to question generation, but challenges remain in terms of biases and diversity in training data. This study aims to compare the quality of GenAI-generated MCQs with human-created ones. In Part 1 of this study, humans and GenAI separately created 16 MCQs in alignment with the Progress in International Reading Literacy Study (PIRLS) assessment framework. In Part 2, four assessors rated the quality of the generated MCQs on clarity, appropriateness, suitability, and alignment with PIRLS. Wilcoxon rank sum tests were conducted to compare the GenAI-generated and human-generated MCQs. The findings highlight GenAI's potential, as its questions were difficult to distinguish from human-created ones, and offer recommendations for integrating AI technology in the future.
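As a minimal sketch of the statistical comparison the abstract describes (not the authors' actual analysis), a Wilcoxon rank sum test on assessor ratings could be run with scipy; the ratings below are made-up placeholders, not study data:

    # Minimal sketch of the Wilcoxon rank-sum comparison described in the
    # abstract; the ratings are hypothetical placeholders, not study data.
    from scipy.stats import ranksums

    # Hypothetical assessor ratings (e.g., clarity on a 1-5 scale) for the
    # human-created and GenAI-created MCQs.
    human_ratings = [4, 5, 3, 4, 4, 5, 3, 4, 4, 3, 5, 4, 4, 3, 5, 4]
    genai_ratings = [4, 4, 3, 5, 4, 4, 3, 4, 5, 3, 4, 4, 3, 4, 4, 5]

    # Two-sided Wilcoxon rank-sum test: do the two rating distributions
    # differ in location?
    stat, p_value = ranksums(human_ratings, genai_ratings)
    print(f"rank-sum statistic = {stat:.3f}, p = {p_value:.3f}")

A non-significant p-value under this test would be consistent with the paper's finding that GenAI-generated questions were difficult to distinguish from human-created ones.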

Original language: English
Pages (from-to): 537-540
Number of pages: 4
Journal: Proceedings of the Association for Information Science and Technology
Volume: 61
Issue number: 1
Publication status: Published - Oct 2024

Keywords

  • GenAI
  • PIRLS
  • Reading
  • question assessment
  • question creation
