Crowdsourcing using Mechanical Turk: Quality Management and Scalability. Panos Ipeirotis – New York University.


1 Crowdsourcing using Mechanical Turk: Quality Management and Scalability. Panos Ipeirotis – New York University

2 Panos Ipeirotis – Introduction. New York University, Stern School of Business. “A Computer Scientist in a Business School”: http://behind-the-enemy-lines.blogspot.com/ Email: panos@nyu.edu


7 Example: Build an “Adult Web Site” Classifier. Need a large number of hand-labeled sites. Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn). Cost/speed statistics: undergrad intern: 200 websites/hr, cost: $15/hr.

8 Amazon Mechanical Turk: Paid Crowdsourcing

9 Example: Build an “Adult Web Site” Classifier. Need a large number of hand-labeled sites. Get people to look at sites and classify them as: G (general audience), PG (parental guidance), R (restricted), X (porn). Cost/speed statistics: undergrad intern: 200 websites/hr, cost: $15/hr; MTurk: 2500 websites/hr, cost: $12/hr.

10 Bad news: Spammers! Worker ATAMRO447HWJQ labeled X (porn) sites as G (general audience).

11 Improve Data Quality through Repeated Labeling. Get multiple, redundant labels using multiple workers and pick the correct label based on majority vote. The probability of correctness increases with the number of workers and with the quality of the workers: 1 worker is 70% correct; 11 workers are 93% correct.
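
To make these numbers concrete, here is a minimal Python sketch of majority-vote accuracy for n independent workers who are each correct with probability p (for p = 0.7 the exact binomial value at 11 workers is about 92%, close to the slide's 93%):

```python
from math import comb

def majority_vote_accuracy(p, n):
    """Probability that the majority of n independent workers,
    each correct with probability p, is correct (n odd, binary task)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

print(f"{majority_vote_accuracy(0.70, 1):.0%}")   # 70%
print(f"{majority_vote_accuracy(0.70, 11):.1%}")  # ~92.2%, close to the slide's 93%
```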

12 But Majority Voting is Expensive. Single-vote statistics: MTurk: 2500 websites/hr at $12/hr; undergrad: 200 websites/hr at $15/hr. 11-vote statistics: MTurk: 227 websites/hr at $12/hr; undergrad: 200 websites/hr at $15/hr.

13 Using redundant votes, we can infer worker quality. Look at our spammer friend ATAMRO447HWJQ together with the other nine workers: our “friend” mainly marked sites as G. Obviously a spammer… We can compute error rates for each worker; for ATAMRO447HWJQ: P[X → X] = 9.847%, P[X → G] = 90.153%; P[G → X] = 0.053%, P[G → G] = 99.947%.
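
The slides do not spell out how these error rates are estimated. A common simplification is to use each item's majority vote as a stand-in for the true label and tally each worker's confusion matrix against it; the full approach in the literature (Dawid and Skene, 1979) iterates this jointly with label re-estimation via EM. A minimal one-pass sketch:

```python
from collections import Counter, defaultdict

def estimate_error_rates(votes_by_item):
    """votes_by_item: {item: {worker: label}}. Returns, per worker,
    estimated P[true -> assigned], using each item's majority vote
    as a proxy for its true label."""
    counts = defaultdict(lambda: defaultdict(Counter))
    for votes in votes_by_item.values():
        proxy_truth = Counter(votes.values()).most_common(1)[0][0]
        for worker, assigned in votes.items():
            counts[worker][proxy_truth][assigned] += 1
    rates = {}
    for worker, rows in counts.items():
        rates[worker] = {
            true: {lab: n / sum(row.values()) for lab, n in row.items()}
            for true, row in rows.items()
        }
    return rates
```

With enough redundant labels per site, the spammer's P[X → ·] row concentrates on G, as on the slide.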

14 Rejecting Spammers and the Benefits. Random answers give an error rate of 50%; the average error rate for ATAMRO447HWJQ is 45.2% (P[X → X] = 9.847%, P[X → G] = 90.153%; P[G → X] = 0.053%, P[G → G] = 99.947%). Action: REJECT and BLOCK. Results: over time you block all spammers; spammers learn to avoid your HITs; and you can decrease redundancy, as the quality of the remaining workers is higher.
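
A sketch of the corresponding rejection rule, using the error rates from the slide (an unweighted average of the two rows gives about 45.1%; the slide's 45.2% presumably weights the classes slightly differently, and the 0.10 margin below is an illustrative choice, not from the slides):

```python
def average_error_rate(rates, classes=("X", "G")):
    """Mean probability of an incorrect label across the given classes."""
    return sum(1.0 - rates[c][c] for c in classes) / len(classes)

# Error rates for ATAMRO447HWJQ, from the slide:
atamro = {"X": {"X": 0.09847, "G": 0.90153},
          "G": {"X": 0.00053, "G": 0.99947}}

avg = average_error_rate(atamro)   # ~45.1%; the slide reports 45.2%
random_baseline = 0.5              # random answers on a 2-class task
if avg > random_baseline - 0.10:   # too close to random guessing
    print("REJECT and BLOCK")
```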

15 After rejecting spammers, quality goes up. Spam keeps quality down; without spam, workers are of higher quality, so less redundancy is needed for the same quality: the same quality of results at lower cost. With spam: 1 worker 70% correct, 11 workers 93% correct. Without spam: 1 worker 80% correct, 5 workers 94% correct.

16 Correcting biases. Classifying sites as G, PG, R, X. Sometimes workers are careful but biased; this one classifies G → P and P → R. The average error rate for ATLJIK76YH1TF is too high. Is she a spammer? Error rates for the CEO of AdSafe:
P[G → G] = 20.0%, P[G → P] = 80.0%, P[G → R] = 0.0%, P[G → X] = 0.0%
P[P → G] = 0.0%, P[P → P] = 0.0%, P[P → R] = 100.0%, P[P → X] = 0.0%
P[R → G] = 0.0%, P[R → P] = 0.0%, P[R → R] = 100.0%, P[R → X] = 0.0%
P[X → G] = 0.0%, P[X → P] = 0.0%, P[X → R] = 0.0%, P[X → X] = 100.0%

17 Correcting biases. For ATLJIK76YH1TF, we simply need to “reverse the errors” (technical details omitted) and separate error from bias. True error rate: ~9%. Error rates for worker ATLJIK76YH1TF:
P[G → G] = 20.0%, P[G → P] = 80.0%, P[G → R] = 0.0%, P[G → X] = 0.0%
P[P → G] = 0.0%, P[P → P] = 0.0%, P[P → R] = 100.0%, P[P → X] = 0.0%
P[R → G] = 0.0%, P[R → P] = 0.0%, P[R → R] = 100.0%, P[R → X] = 0.0%
P[X → G] = 0.0%, P[X → P] = 0.0%, P[X → R] = 0.0%, P[X → X] = 100.0%
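
The slides omit the technical details of “reversing the errors”; one standard way to do it is a Bayes-rule inversion of the worker's confusion matrix, as sketched below (the uniform class priors are an illustrative assumption):

```python
def posterior_true_label(assigned, confusion, priors):
    """P[true | worker said `assigned`] via Bayes' rule:
    proportional to P[assigned | true] * P[true]."""
    unnorm = {t: confusion[t].get(assigned, 0.0) * priors[t] for t in priors}
    z = sum(unnorm.values())
    return {t: p / z for t, p in unnorm.items()}

# Confusion matrix for ATLJIK76YH1TF, from the slide:
conf = {"G": {"G": 0.2, "P": 0.8, "R": 0.0, "X": 0.0},
        "P": {"G": 0.0, "P": 0.0, "R": 1.0, "X": 0.0},
        "R": {"G": 0.0, "P": 0.0, "R": 1.0, "X": 0.0},
        "X": {"G": 0.0, "P": 0.0, "R": 0.0, "X": 1.0}}
priors = {"G": 0.25, "P": 0.25, "R": 0.25, "X": 0.25}  # illustrative

print(posterior_true_label("R", conf, priors))  # true label is P or R, 50/50 here
```

Because her mistakes are systematic rather than random, each label still pins the true class down to a narrow set, which is why her effective error rate (the ~9% on the slide) is far lower than the raw confusion matrix suggests.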

18 Too much theory? Demo and open-source implementation available at: http://qmturk.appspot.com. Input: labels from Mechanical Turk, and the cost of incorrect labelings (e.g., X → G is costlier than G → X). Output: corrected labels, worker error rates, and a ranking of workers according to their quality. Beta version, more improvements to come! Suggestions and collaborations welcome!

19 Scaling Crowdsourcing: Use Machine Learning. Human labor is expensive, even when paying cents, so we need to scale crowdsourcing. Basic idea: build a machine learning model from the existing crowdsourced answers and use it instead of humans. (Diagram: new case → automatic model, trained through machine learning on data from existing crowdsourced answers → automatic answer.)
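
The slides leave the model and features unspecified; here is a minimal sketch of the idea with scikit-learn, where the example pages and labels are hypothetical:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Data from existing crowdsourced answers (hypothetical examples):
pages  = ["lingerie shop free shipping", "kids cartoons and games"]
labels = ["R", "G"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(pages, labels)

# New case -> automatic answer:
print(model.predict(["lingerie videos free membership"])[0])
```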

20 Tradeoffs for Automatic Models: Effect of Noise. Getting more data improves model accuracy; improving data quality improves classification. (Plot: learning curves for the example case “Porn or not?” at data quality 50%, 60%, 80%, and 100%.)

21 Scaling Crowdsourcing: Iterative Training. Use the machine when confident, humans otherwise; retrain with the new human input → improve the model → reduce the need for humans. (Diagram: new case → automatic model trained on data from existing crowdsourced answers; if confident → automatic answer; if not confident → get human(s) to answer, and feed the answer back into the training data.)
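
A sketch of the routing step, assuming a scikit-learn-style model with predict/predict_proba; the 0.9 threshold and the interfaces are illustrative, not from the slides:

```python
def answer(case, model, ask_humans, training_set, threshold=0.9):
    """Route to the model when it is confident, to humans otherwise.
    Human answers grow the training set for the next retraining pass."""
    confidence = max(model.predict_proba([case])[0])
    if confidence >= threshold:
        return model.predict([case])[0]   # automatic answer
    label = ask_humans(case)              # get human(s) to answer
    training_set.append((case, label))    # retrain with new human input later
    return label
```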

22 Tradeoffs for Automatic Models: Effect of Noise. Getting more data improves model accuracy; improving data quality improves classification. (Plot: learning curves for the example case “Porn or not?” at data quality 50%, 60%, 80%, and 100%.)

23 Scaling Crowdsourcing: Iterative Training, with Noise. Use the machine when confident, humans otherwise, and ask as many humans as necessary to ensure quality. (Diagram: new case → automatic model trained on data from existing crowdsourced answers; if confident for quality → automatic answer; if not confident for quality → get human(s) to answer, adding votes until the quality target is met.)
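
A sketch of the noise-aware variant for a binary task: answer automatically only when the model clears the quality target, and otherwise add human votes one at a time until the posterior confidence of the majority label clears it. Worker accuracy, the target, and the vote cap are all illustrative assumptions:

```python
from collections import Counter

def answer_for_quality(case, model, ask_one_human, target=0.95,
                       worker_accuracy=0.8, max_workers=11):
    """Answer automatically when the model is confident enough for the
    quality target; otherwise collect votes one at a time until the
    majority label is confident enough (binary task, uniform prior)."""
    if max(model.predict_proba([case])[0]) >= target:
        return model.predict([case])[0]           # confident for quality
    votes = Counter()
    for _ in range(max_workers):
        votes[ask_one_human(case)] += 1
        (label, count), = votes.most_common(1)
        lead = count - (sum(votes.values()) - count)
        odds = (worker_accuracy / (1 - worker_accuracy)) ** lead
        if odds / (1 + odds) >= target:           # posterior P[majority correct]
            return label
    return votes.most_common(1)[0][0]             # best effort at the cap
```

With 80%-accurate workers and a 0.95 target, three unanimous votes suffice (posterior ≈ 0.985), which matches the slide's point that cleaner workers need far less redundancy.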

24 Thank you! Questions? “A Computer Scientist in a Business School”: http://behind-the-enemy-lines.blogspot.com/ Email: panos@nyu.edu

