What Is Unclear? Computational Assessment of Task Clarity in Crowdsourcing

Zahra Nouri, Ujwal Gadiraju, Gregor Engels, Henning Wachsmuth

Research output: Chapter in Book/Conference proceedings/Edited volume › Conference contribution › Scientific › peer-review

4 Citations (Scopus)
48 Downloads (Pure)

Abstract

Designing tasks clearly to facilitate accurate task completion is a challenging endeavor for requesters on crowdsourcing platforms. Prior research shows that inexperienced requesters fail to write clear and complete task descriptions, which directly leads to low-quality submissions from workers. Complementing existing work that has aimed to address this challenge, in this paper we study whether clarity flaws in task descriptions can be identified automatically using natural language processing methods. We identify and synthesize seven clarity flaws in task descriptions that are grounded in relevant literature. We build both BERT-based and feature-based binary classifiers, in order to study the extent to which clarity flaws in task descriptions can be computationally assessed, and to understand textual properties of descriptions that affect task clarity. Through a crowdsourced study, we collect annotations of clarity flaws in 1332 real task descriptions. Using this dataset, we evaluate several configurations of the classifiers. Our results indicate that nearly all the clarity flaws in task descriptions can be assessed reasonably by the classifiers. We found that the content, style, and readability of task descriptions are particularly important in shaping their clarity. This work has important implications for the design of tools to help requesters improve task clarity on crowdsourcing platforms. Flaw-specific properties can provide valuable guidance in improving task descriptions.
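The abstract mentions feature-based classifiers built on the content, style, and readability of task descriptions, but does not list the exact feature set. As an illustration only, the following minimal sketch shows what extracting simple length- and style-based features from a task description might look like; all feature names here are hypothetical and are not taken from the paper.

```python
import re

def clarity_features(description: str) -> dict:
    """Extract simple textual features (length, style, readability proxies)
    that a feature-based clarity classifier might use as input.
    Illustrative only: the paper's actual features are not reproduced here."""
    words = re.findall(r"[A-Za-z']+", description)
    sentences = [s for s in re.split(r"[.!?]+", description) if s.strip()]
    n_words = len(words)
    n_sents = max(len(sentences), 1)
    return {
        "n_words": n_words,
        "n_sentences": n_sents,
        # Long sentences are a common readability proxy.
        "avg_sentence_length": n_words / n_sents,
        "avg_word_length": sum(len(w) for w in words) / max(n_words, 1),
        # Whether the requester included a worked example.
        "has_example": int("example" in description.lower()),
    }
```

Such feature vectors could then be fed to any off-the-shelf binary classifier (e.g. logistic regression), one per clarity flaw, mirroring the binary per-flaw setup described above.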

Original language: English
Title of host publication: HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media
Publisher: Association for Computing Machinery (ACM)
Pages: 165-175
Number of pages: 11
ISBN (Electronic): 9781450385510
DOIs
Publication status: Published - 2021
Event: 32nd ACM Conference on Hypertext and Social Media, HT 2021 - Virtual, Online, Ireland
Duration: 30 Aug 2021 - 2 Sept 2021

Publication series

Name: HT 2021 - Proceedings of the 32nd ACM Conference on Hypertext and Social Media

Conference

Conference: 32nd ACM Conference on Hypertext and Social Media, HT 2021
Country/Territory: Ireland
City: Virtual, Online
Period: 30/08/21 - 2/09/21

Bibliographical note

Green Open Access added to TU Delft Institutional Repository ‘You share, we take care!’ – Taverne project https://www.openaccess.nl/en/you-share-we-take-care
Otherwise, as indicated in the copyright section: the publisher is the copyright holder of this work, and the author uses Dutch legislation to make this work public.

Keywords

  • BERT-based binary classification
  • crowdsourcing
  • feature-based binary classification
  • task clarity assessment
  • task design
  • unclear task descriptions

