
The Annotation Cost of Context Switching: How Topic Models and Active Learning [May Not] Work Together

Nozomu Okuda: MS Thesis Defense

Thursday, July 27, 2:00 PM

3350 TMCB

Advisor: Kevin Seppi

The labeling of language resources is a time-consuming task, whether aided by machine learning or not. Much of the prior work in this area has focused on accelerating human annotation in the context of machine learning, yielding a variety of active learning approaches. Most of these attempt to lead an annotator to label the items that are most likely to improve the quality of an automated, machine learning-based model. These active learning approaches seek to understand the effect of item selection on the machine learning model, but give significantly less emphasis to its effect on the human annotator.
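As background, the sketch below shows the kind of pool-based, uncertainty-driven item selection that traditional active learning performs; the dataset, model, and query strategy here are illustrative assumptions (scikit-learn logistic regression on synthetic data), not the setup studied in the thesis.

    # Minimal sketch of pool-based active learning with uncertainty sampling.
    # The "annotator" is simulated by revealing the true label of each query.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=500, n_features=20, random_state=0)

    # Seed the labeled set with a few examples from each class.
    labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
    pool = [i for i in range(len(X)) if i not in labeled]

    model = LogisticRegression(max_iter=1000)
    for _ in range(20):  # 20 simulated annotation rounds
        model.fit(X[labeled], y[labeled])
        probs = model.predict_proba(X[pool])
        # Query the pool item the current model is least confident about,
        # i.e., the item judged most likely to improve the model once labeled.
        query = pool[int(np.argmin(probs.max(axis=1)))]
        labeled.append(query)
        pool.remove(query)

    print("accuracy on remaining pool:", model.score(X[pool], y[pool]))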
 
In this work, we consider a sentiment labeling task where traditional active learning appears to offer little or no benefit. We focus instead on the human annotator, ordering the items to improve annotation efficiency.
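The title's mention of topic models suggests one way such an ordering might look: group items by their dominant topic so the annotator labels related items consecutively and switches context less often. The sketch below is a hypothetical illustration only; the documents, topic model, and grouping rule are assumptions, not the thesis's actual approach.

    # Hypothetical illustration: order documents so that items sharing a
    # dominant topic appear consecutively, aiming to reduce the annotator's
    # context switching between unrelated subjects.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "the battery life of this phone is great",
        "terrible battery, the phone died in an hour",
        "the hotel room was clean and quiet",
        "noisy hotel, the room smelled awful",
        "this phone's camera takes sharp photos",
        "the hotel staff were friendly at check in",
    ]

    counts = CountVectorizer(stop_words="english").fit_transform(docs)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    doc_topics = lda.fit_transform(counts)  # per-document topic mixture

    # Sort by (dominant topic, then descending topic confidence) so that
    # related documents sit next to each other in the annotation queue.
    dominant = doc_topics.argmax(axis=1)
    order = np.lexsort((-doc_topics.max(axis=1), dominant))
    for i in order:
        print(f"topic {dominant[i]}: {docs[i]}")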