
Presenter: Chien-Ju Ho, 2009.4.21.  Introduction to Amazon Mechanical Turk  Applications  Demographics and statistics  The value of using MTurk: repeated labeling.



Presentation transcript:

1 Presenter: Chien-Ju Ho

2  Introduction to Amazon Mechanical Turk  Applications  Demographics and statistics  The value of using MTurk Repeated labeling A machine-learning perspective

3  "The Turk": an automaton chess player built in the 1770s, the namesake of Amazon Mechanical Turk.

4  Human Intelligence Task (HIT) Tasks that are hard for computers but easy for humans  Developer Prepays the money Publishes HITs Gets the results  Worker Completes the HITs Gets paid

5  User Survey

6  Image Tagging

7  Data Collection

8  Audio Transcription Split the audio into 30-second pieces  Image Filtering Filter out pornographic or otherwise inappropriate images  Many other applications

9  It depends on the task.  Some statistics (counts by minimum payment, in USD): Payment ≥ $0.01: 586 Payment ≥ $0.05: 357 Payment ≥ $0.10: 264 Payment ≥ $0.50: 74 Payment ≥ $1.00: 48 Payment ≥ $5.00: 5

10 (figure-only slide)

11  Survey of 1,000 Turkers Conducted twice (Oct. and Dec. 2008), with consistent statistics Blog post: A Computer Scientist in a Business School  Where are Turkers from? United States 76.25% India 8.03% United Kingdom 3.34% Canada 2.34%

12 (charts: Degree, Age, Gender, Income/year)

13  Comparison using data from ComScore  In summary, Turkers are: younger  51% in the younger age bracket vs. 22% of internet users overall mainly female  70% female vs. 50% female having lower income  65% of Turkers have income < $60k/year vs. 45% of internet users having smaller families  55% of Turkers have no children vs. 40% of internet users

14-16 (figure-only slides)

17 "Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers" Victor S. Sheng, Foster Provost, and Panagiotis G. Ipeirotis, New York University, KDD 2008

18  Imperfect labeling Amazon Mechanical Turk Games with a purpose  Repeated labeling Improves supervised induction  Increases single-label accuracy  Decreases the cost of acquiring training data

19  Increase single-label accuracy  Decrease the cost of training data Labeling is cheap (using MTurk or GWAP) Obtaining a data sample may be expensive (taking new pictures, feature extraction)

20  How repeated labeling influences: quality of the labels accuracy of the model cost of acquiring data and labels  Selection of data points to label repeatedly

21  Uniform labeler quality All labelers exhibit the same quality p p is the probability that a labeler labels an example correctly For 2N+1 labelers under majority voting, the label quality q is q = Σ_{i=N+1}^{2N+1} C(2N+1, i) · p^i · (1−p)^{2N+1−i} (figure: label quality for different settings of p)
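The majority-vote quality above is a standard binomial tail probability and can be checked numerically (a minimal sketch; the function name is mine, not from the slides):

```python
from math import comb

def majority_quality(p: float, n: int) -> float:
    """Probability that the majority vote of 2n+1 labelers,
    each independently correct with probability p, is correct."""
    m = 2 * n + 1
    # The majority is correct when at least n+1 of the m labels are correct.
    return sum(comb(m, i) * p**i * (1 - p)**(m - i) for i in range(n + 1, m + 1))

# Quality improves with more labelers when p > 0.5 ...
print(majority_quality(0.7, 0))  # single labeler: 0.7
print(majority_quality(0.7, 2))  # 5 labelers: ~0.837
# ... and degrades when p < 0.5.
print(majority_quality(0.4, 2))  # 5 labelers: ~0.317
```

This also shows why repeated labeling only pays off for labelers that are better than chance.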

22  Different labeler qualities Repeated labeling is still helpful in some cases An example:  three labelers with qualities p, p+d, p−d  Repeated labeling is preferable to a single labeler of quality p+d when the setting falls in the blue region of the figure  No detailed analysis in the paper

23  Majority voting (MV) Simple and intuitive Drawback: information is lost  Uncertainty-preserving labeling Multiplied Examples procedure (ME) Uses the label frequencies as weights for the labels
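A minimal sketch of the two aggregation strategies on the slide, assuming binary labels (the function names and the tuple format for weighted examples are my own):

```python
from collections import Counter

def majority_vote(labels):
    """Collapse repeated labels into a single hard label (ties broken arbitrarily)."""
    return Counter(labels).most_common(1)[0][0]

def multiplied_examples(x, labels):
    """Emit one weighted copy of example x per distinct label,
    preserving the label frequencies instead of discarding them."""
    counts = Counter(labels)
    total = len(labels)
    return [(x, label, count / total) for label, count in counts.items()]

labels = ["+", "+", "-"]
print(majority_vote(labels))            # "+"  (the minority "-" vote is lost)
print(multiplied_examples("x1", labels))
# one copy of x1 with label "+" and weight 2/3, one with "-" and weight 1/3
```

ME feeds these weighted copies to any learner that accepts instance weights, so the downstream model still sees the disagreement that MV throws away.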

24  Round-robin strategy Label the example with the fewest labels, i.e., repeatedly label the examples in a fixed order

25  Definition of the cost C_U: the cost of acquiring an unlabeled example C_L: the cost of one label  Single labeling (SL): acquire a new training example at cost C_U + C_L  Repeated labeling with majority vote (MV): get another label for an existing example at cost C_L

26  Round-robin strategy, C_U << C_L C_U << C_L means C_U + C_L ≈ C_L, so the per-label cost is similar for SL and MV  Which strategy (SL or MV) is better? It depends

27  Round-robin strategy, general cost C_D: the cost of data acquisition T_r: number of training examples N_L: number of labels  Experiment settings N_L = k·T_r: each example is labeled k times ρ = C_U / C_L
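A small sketch of the cost accounting above, assuming total cost = T_r·C_U + N_L·C_L with ρ = C_U / C_L (the concrete example counts are illustrative, not from the paper; ρ = 3 matches the experiment setting on the next slide):

```python
def total_cost(tr: int, k: int, c_l: float, rho: float) -> float:
    """Total cost of acquiring tr examples, each labeled k times.
    rho = C_U / C_L is the ratio of example-acquisition cost to labeling cost."""
    c_u = rho * c_l
    n_l = k * tr                    # total number of labels acquired
    return tr * c_u + n_l * c_l

# Fixed-budget comparison with rho = 3:
cost_sl = total_cost(100, 1, 1.0, 3.0)  # 100 examples, single label each
cost_rl = total_cost(50, 5, 1.0, 3.0)   # 50 examples, 5 labels each
print(cost_sl, cost_rl)                 # 400.0 400.0 -- same budget
# The same budget buys half as many examples but five (cleaner) labels apiece.
```

This is the trade-off the paper's experiments probe: for which p, ρ, and k does the cleaner-but-smaller training set win?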

28  Experiment result (p = 0.6, ρ = 3, k = 5) 12 dataset-experiments in the paper

29 (figure-only slide)

30  Select the data with the highest uncertainty.  Which data point should be selected for repeated labeling: {+,−,+} or {+,+,+,+,+}?  Three approaches Entropy Label uncertainty Model uncertainty

31  Entropy Find the most impure example to repeatedly label ENTROPY IS NOT A GOOD MEASURE!!!  It ignores that labelers are noisy.  E.g., … positive and 4000 negative labels with p = 0.6
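The slide's point, that entropy of the empirical label split is misleading under noisy labelers, can be illustrated as follows (the counts are hypothetical, since the slide's positive count is elided from the transcript):

```python
from math import log2

def entropy(pos: int, neg: int) -> float:
    """Shannon entropy of the observed label frequencies."""
    n = pos + neg
    h = 0.0
    for c in (pos, neg):
        if c:
            p = c / n
            h -= p * log2(p)
    return h

# With thousands of labels from p = 0.6 labelers, the empirical split
# hovers near 60/40 no matter how many labels we collect ...
print(entropy(6000, 4000))   # ~0.97 -- entropy says "very impure"
print(entropy(3, 0))         # 0.0   -- entropy says "perfectly pure"
# ... yet the majority label of the 10,000-label example is almost certainly
# correct, while 3 labels are far weaker evidence. Entropy ranks them backwards.
```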

32  Label uncertainty (LU) L_pos: number of positive labels observed L_neg: number of negative labels observed  The posterior label probability p(y) follows the Beta distribution B(L_pos + 1, L_neg + 1)  The uncertainty can be estimated from the CDF of this Beta distribution at 0.5
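A sketch of label uncertainty as the Beta posterior mass on the minority side of 0.5, using the binomial-tail identity for integer Beta parameters so no external library is needed (function names are mine):

```python
from math import comb

def beta_cdf_half(a: int, b: int) -> float:
    """CDF of Beta(a, b) at x = 0.5 for integer a, b, via the identity
    I_x(a, b) = P(Binomial(a+b-1, x) >= a)."""
    n = a + b - 1
    return sum(comb(n, j) * 0.5**n for j in range(a, n + 1))

def label_uncertainty(pos: int, neg: int) -> float:
    """Posterior p(y=+) ~ Beta(pos+1, neg+1); the uncertainty is the
    posterior probability that the majority label is actually wrong."""
    cdf = beta_cdf_half(pos + 1, neg + 1)
    return min(cdf, 1 - cdf)

print(label_uncertainty(2, 1))   # {+,+,-}: still quite uncertain (~0.31)
print(label_uncertainty(5, 0))   # {+,+,+,+,+}: much more certain (~0.016)
```

Unlike entropy, this score keeps shrinking as consistent labels accumulate, so it correctly prefers relabeling {+,−,+} over {+,+,+,+,+}.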

33  Model uncertainty (MU) The uncertainty of the models' predictions of the example's label, over a set of learned models H_i  Label and model uncertainty (LMU) Combines both the label and the model uncertainty

34  Results, for p = 0.6  Notations GRR: General Round-Robin strategy MU: Model Uncertainty LU: Label Uncertainty LMU: Label and Model Uncertainty

35  Under a wide range of conditions: Repeated labeling can improve the quality of both labels and models. Selective repeated labeling can further improve quality. Repeated labeling can reduce the cost of acquiring examples and labels.  Limitations Fixed labeler quality and cost are assumed  Experiments are conducted with only one learning algorithm.

36  Amazon Mechanical Turk provides a platform for easily collecting non-expert opinions.  The collected data can be useful when combined with proper data-integration algorithms, such as repeated labeling.

37 (figure-only slide)

