Framework and Literature Analysis for Crowdsourcing's Answer Aggregation

Alireza Moayedikia, William Yeoh, Kok-Leong Ong, Yee-Ling Boo

Research output: Contribution to journal › Article › peer-review

13 Citations (Scopus)

Abstract

This paper presents a classification framework and a systematic analysis of the literature on answer aggregation techniques for the most popular and important type of crowdsourcing, i.e., micro-task crowdsourcing. In doing so, we analyzed research articles published since 2006 and developed four classification taxonomies. First, we provided a classification framework based on the algorithmic characteristics of answer aggregation techniques. Second, we outlined the statistical and probabilistic foundations used by different types of algorithms and micro-tasks. Third, we provided a matrix catalog of the data characteristics for which an answer aggregation algorithm is designed. Fourth, we presented a matrix catalog of the commonly used evaluation metrics for each type of micro-task. This paper represents the first systematic literature analysis and classification of answer aggregation techniques for micro-task crowdsourcing.
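For context, the simplest answer aggregation baseline in this literature is majority voting: each micro-task's final label is the one chosen by the most workers, and the techniques the paper surveys can be read as refinements of it (e.g., weighting workers by estimated reliability). The sketch below is illustrative only; the data layout, function name, and example values are assumptions, not drawn from the paper.

```python
from collections import Counter

def majority_vote(labels_by_task):
    """Aggregate crowd answers per task by picking the most frequent label.

    labels_by_task maps a task id to the list of labels submitted by workers.
    Ties are broken by whichever label Counter encounters first.
    """
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in labels_by_task.items()}

# Hypothetical example: three workers label two micro-tasks.
answers = {
    "task1": ["cat", "cat", "dog"],
    "task2": ["dog", "dog", "dog"],
}
print(majority_vote(answers))  # {'task1': 'cat', 'task2': 'dog'}
```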

Original language: English
Pages (from-to): 49-60
Number of pages: 12
Journal: Journal of Computer Information Systems
Volume: 60
Issue number: 1
DOIs
Publication status: Published - 2 Jan 2020

Keywords

  • Crowdsourcing
  • Answer aggregation
  • Classification framework
  • Systematic literature analysis
