Few-shot Question Generation for Reading Comprehension

Yin Poon, John S.Y. Lee, Yu Yan Lam, Wing Lam Suen, Elsie Li Chen Ong, Samuel Kai Wah Chu

Research output: Chapter in Book/Report/Conference proceeding · Conference contribution · peer-review

Abstract

According to the internationally recognized PIRLS (Progress in International Reading Literacy Study) assessment standards, reading comprehension questions should require not only information retrieval, but also higher-order processes such as inferencing, interpreting and evaluating. However, such questions are often not available in large enough quantities to train question generation models. This paper investigates whether pre-trained Large Language Models (LLMs) can produce higher-order questions. Human assessment on a Chinese dataset shows that few-shot LLM prompting generates more usable and higher-order questions than two competitive neural baselines.
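For illustration only, few-shot prompting for this task can be set up by prepending a handful of passage–question exemplars to the target passage before querying an LLM. The sketch below is not the prompt, exemplars, or model reported in the paper; the Chinese exemplars, the OpenAI chat API, and the model name are assumptions made for the example.

# Minimal sketch of few-shot prompting for higher-order question generation.
# The exemplars, model name, and use of the OpenAI chat API are illustrative
# assumptions, not the setup evaluated in the paper.
from openai import OpenAI

# Each exemplar pairs a short passage with a higher-order (inference/evaluation)
# question, following the PIRLS-style distinction described in the abstract.
FEW_SHOT_EXEMPLARS = [
    {
        "passage": "小明把雨傘留在家裡，放學時卻沒有被雨淋濕。",
        "question": "小明放學時為甚麼沒有被雨淋濕？請根據文章推論。",
    },
    {
        "passage": "作者描述了村民如何合力重建被洪水沖毀的橋樑。",
        "question": "你認為作者想透過村民重建橋樑的故事帶出甚麼信息？",
    },
]

def build_prompt(target_passage: str) -> str:
    """Concatenate the instruction, exemplars, and target passage into one prompt."""
    parts = ["請為以下文章擬一條需要推論或評價的閱讀理解問題。"]
    for ex in FEW_SHOT_EXEMPLARS:
        parts.append(f"文章：{ex['passage']}\n問題：{ex['question']}")
    parts.append(f"文章：{target_passage}\n問題：")
    return "\n\n".join(parts)

def generate_question(target_passage: str, model: str = "gpt-4o") -> str:
    """Send the few-shot prompt to a chat model and return the generated question."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(target_passage)}],
    )
    return response.choices[0].message.content.strip()

if __name__ == "__main__":
    print(generate_question("（在此放入目標文章全文）"))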

Original language: English
Title of host publication: SIGHAN 2024 - 10th SIGHAN Workshop on Chinese Language Processing, Proceedings of the Workshop
Editors: Kam-Fai Wong, Min Zhang, Ruifeng Xu, Jing Li, Zhongyu Wei, Lin Gui, Bin Liang, Runcong Zhao
Pages: 21-27
Number of pages: 7
ISBN (Electronic): 9798891761551
Publication status: Published - 2024
Event: 10th SIGHAN Workshop on Chinese Language Processing, SIGHAN 2024 - Bangkok, Thailand
Duration: 16 Aug 2024 → …

Publication series

Name: SIGHAN 2024 - 10th SIGHAN Workshop on Chinese Language Processing, Proceedings of the Workshop

Conference

Conference: 10th SIGHAN Workshop on Chinese Language Processing, SIGHAN 2024
Country/Territory: Thailand
City: Bangkok
Period: 16/08/24 → …
