How do annotators label short texts? Toward understanding the temporal dynamics of tweet labeling

Rabiger, Stefan and Spiliopoulou, Myra and Saygın, Yücel (2018) How do annotators label short texts? Toward understanding the temporal dynamics of tweet labeling. Information Sciences, 457. pp. 29-47. ISSN 0020-0255 (Print) 1872-6291 (Online)

Full text not available from this repository.

Official URL: http://dx.doi.org/10.1016/j.ins.2018.05.036

Abstract

Crowdsourcing is a popular means of obtaining human-crafted information, for example labels for tweets, which can then be used in text mining tasks. Many studies investigate the quality of the labels submitted by human annotators, but there is less work on understanding how annotators label. It is quite natural to expect that annotators learn how to annotate, and do so gradually, in the sense that they do not know in advance which of the tweets they will see are positive and which are negative, but rather figure out gradually what makes up the positive and the negative sentiment in a tweet. In this paper, we investigate this gradual process and its temporal dynamics. We show that annotators undergo two phases: a learning phase, during which they build a conceptual model of the characteristics determining the sentiment of a tweet, and an exploitation phase, during which they use their conceptual model, although learning and refinement of the model continue. As a case study, we investigate a hierarchical tweet labeling task, distinguishing first between relevant and irrelevant tweets, before classifying the relevant ones into factual and non-factual, and further splitting the non-factual ones into positive and negative. As an indicator of learning we use the annotation time, i.e., the elapsed time for the inspection of a tweet before the labels across the hierarchy are assigned to it. We show that this annotation time drops as an annotator proceeds through the set of tweets she has to process. We report on our results on identifying the learning phase and its follow-up exploitation phase, and on the differences in annotator behavior during each phase.
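The hierarchical labeling scheme from the abstract (relevant/irrelevant, then factual/non-factual, then positive/negative) can be sketched as a small decision chain. Note that in the study these decisions were made by human annotators; the function below only illustrates the structure of the label hierarchy, and its parameter names are hypothetical.

```python
# Sketch of the three-level label hierarchy described in the abstract.
# The boolean inputs stand in for the judgments a human annotator makes;
# they are illustrative assumptions, not part of the paper's method.

def label_tweet(is_relevant: bool,
                is_factual: bool = False,
                is_positive: bool = False) -> list:
    """Return the chain of hierarchical labels assigned to one tweet."""
    if not is_relevant:
        return ["irrelevant"]          # level 1: relevance filter
    if is_factual:
        return ["relevant", "factual"]  # level 2: factual vs. non-factual
    sentiment = "positive" if is_positive else "negative"
    return ["relevant", "non-factual", sentiment]  # level 3: sentiment

# Example: a relevant, opinionated, negative tweet
print(label_tweet(is_relevant=True, is_factual=False, is_positive=False))
```

The annotation time studied in the paper is the total elapsed time before all labels along such a chain are assigned to a tweet.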

Item Type: Article
Uncontrolled Keywords: Human factors; Crowdsourcing; Annotation behavior; Learning effect; Active learning
Subjects: T Technology > T Technology (General)
ID Code: 35922
Deposited By: Yücel Saygın
Deposited On: 16 Aug 2018 16:09
Last Modified: 16 Aug 2018 16:09
