
1 Introduction to Labor Marketplaces: Taskcn
Uichin Lee, KAIST KSE
KSE801: Human Computation and Crowdsourcing

2 [Screenshot: Taskcn task listing, filtered by category and by reward band, e.g. rewards >= 1000 CNY and rewards of 500-1000 CNY]

3 Task Categories
– Design: logo / VI design, graphic design, software interface design, building design, brochures, three-dimensional modeling, product identification / packaging
– Web site: web design / production, site planning, web application development, Flash animation, whole-site construction, search engine optimization
– Writing: naming / slogans, technical / application writing, event planning, business plans / tenders, literary / creative writing, translation and other writing
– Programming: applications, scripts / tools, database development, mobile / embedded development, system management
– Multimedia: PPT presentations / courseware, video capture / editing, photography / photo post-processing, audio processing, multimedia data collection

4 [Screenshot: open task listing with columns for task classification, reward, remaining time, and signed up / submitted]

5 20% commission (regular user) vs. 18% commission (gold user)
Winner takes all (a single winning bid receives the reward)
Example task: 1001 viewed; 38 signed up; 27 submitted
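
A quick arithmetic sketch of what these commission rates mean for a winner's payout; that the commission is deducted from the posted reward is an assumption about the fee mechanics, not something the slide states:

```python
# Hypothetical payout under the two commission rates; assumes Taskcn
# deducts its commission from the posted reward before paying the winner.
reward = 500  # posted reward in CNY

payout_regular = reward * (1 - 0.20)  # 400.0 CNY for a regular user
payout_gold    = reward * (1 - 0.18)  # 410.0 CNY for a "gold" user
print(payout_regular, payout_gold)
```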

6 [Screenshot: viewing the task’s submissions (all 27 submissions)]
Only “gold” users can hide their submissions

7 Taskcn Papers
– Jiang Yang, Lada A. Adamic, and Mark S. Ackerman, “Crowdsourcing and Knowledge Sharing: Strategic User Behavior on Taskcn,” EC 2008
– Jiang Yang, Lada A. Adamic, and Mark S. Ackerman, “Competing to Share Expertise: The Taskcn Knowledge Sharing Community,” ICWSM 2008
– “The Networks of Markets: Online Services for Workers and Job Providers (Taskcn),” MSR-TR 2010

8 Skill and Workload vs. Reward
Human-coded variables: skill and workload
– Skill: the minimum skill required to complete a task
– Workload: the average time needed to finish a task
– Ratings use an ordinal scale
Two raters evaluated 157 randomly selected tasks in the design category
– Raters did not know the reward of a task
Spearman’s rank correlation coefficients (computation sketched below):

                   Reward    Minimum skill
    Minimum skill   0.493
    Workload       -0.443       -0.629
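
As a minimal sketch of how such rank correlations are computed, the snippet below runs scipy's spearmanr on hypothetical ordinal ratings; the lists are illustrative stand-ins, not the actual 157-task data:

```python
from scipy.stats import spearmanr

# Hypothetical ratings for a handful of design tasks (stand-in data).
reward    = [100, 300, 50, 1000, 200]  # posted reward in CNY
min_skill = [2, 3, 1, 4, 2]            # ordinal minimum-skill rating
workload  = [3, 2, 4, 1, 3]            # ordinal workload rating

# spearmanr returns (rho, p-value); on the real data the slide reports
# rho = 0.493 (reward vs. skill), -0.443 (reward vs. workload), and
# -0.629 (skill vs. workload).
rho_rs, _ = spearmanr(reward, min_skill)
rho_rw, _ = spearmanr(reward, workload)
rho_sw, _ = spearmanr(min_skill, workload)
print(rho_rs, rho_rw, rho_sw)
```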

9 Taskcn: Stats
– Data set: 4 years of data, with 17,000 tasks and 1.7M submissions
– RMB/CNY (Chinese yuan): 5 CNY ≈ $1

10 [Figure: open tasks over time (weekly bins)]

11 [Figure: offered reward per task in the dev/programming category (CNY)]

12 Worker Characterization
Workers are split into 3 groups based on how many submissions they made:
– all workers, workers with at least 10 attempts (submissions), and workers with at least 50 attempts
[Figure: average revenue per submission (CNY) for each group, with a reference line at 100 CNY]

13 Worker Characterization
The number of submissions per task follows a power-law distribution (plotting sketched below)
[Figure: CCDF of the number of submissions per task]
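
A sketch of how a CCDF like the one on this slide can be plotted on log-log axes; the data here are synthetic draws from a Zipf distribution standing in for the real submissions-per-task counts:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic stand-in: submissions-per-task counts drawn from a Zipf law.
subs_per_task = np.random.zipf(a=2.0, size=10_000)

# Empirical CCDF: fraction of tasks with more than x submissions.
x = np.sort(subs_per_task)
ccdf = 1.0 - np.arange(1, len(x) + 1) / len(x)

plt.loglog(x, ccdf, marker='.', linestyle='none')
plt.xlabel('Number of submissions per task')
plt.ylabel('CCDF')
plt.show()
```

On log-log axes, a power law shows up as an approximately straight line.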

14 Worker Characterization
[Figure: joint distribution of workers’ submissions across different categories]

15 Market Segmentation
Individual worker behavior:
– A typical worker tends to focus submissions on a specific range of rewards.
– Specifically, a typical worker submits most of their solutions to tasks within a narrow reward range, and attempts higher-reward tasks with a frequency that diminishes as the reward grows.
Collective worker behavior:
– When workers are viewed as a community, however, higher rewards tend to attract a larger number of submissions.

16 Individual Behavior
– Histogram of submissions by the top 10 workers (left figure)
– Experienced workers have a narrow reward range: each worker’s histogram has a unique mode (the reward value at which they submit most frequently), e.g. 1000, 500, 300, 200, ….
[Figure axes: reward (CNY) vs. fraction of submissions]

17 Collective Behavior
The number of submissions per task (across all workers) increases as the associated reward increases
– This is driven by the large number of workers who made only a few attempts and never returned to Taskcn (the heavy tail of the submissions-per-worker distribution)

18 Winning as Incentive to Continue
A high proportion of registered users never attempted any task (89%)
– June 2006 – May 2007 (EC 2008); 66,182 registered users
People appear to want to avoid futile effort (like lotteries?)
Winning experience is an important incentive:
– First attempt: 2,307 won vs. 169,456 others who failed
– The winner group went on to attempt more tasks than the loser group
– Cox proportional-hazards analysis: a 19% lower probability of stopping after each subsequent attempt (sketched below)
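
A hedged sketch of a Cox proportional-hazards model in this spirit, using the lifelines library; the column names and toy data are assumptions here, and the paper's actual model specification may differ:

```python
import pandas as pd
from lifelines import CoxPHFitter

# One row per worker (toy data): how many attempts they made before
# stopping, whether they did stop (the event), and whether they won early.
df = pd.DataFrame({
    'n_attempts': [3, 12, 1, 7, 25, 4, 9, 2],
    'stopped':    [1, 1, 1, 0, 0, 1, 0, 1],
    'won_early':  [0, 1, 0, 1, 1, 0, 1, 0],
})

cph = CoxPHFitter()
cph.fit(df, duration_col='n_attempts', event_col='stopped')
cph.print_summary()
# A hazard ratio below 1 for `won_early` would mean early winners are
# less likely to stop after any given attempt, in the spirit of the
# slide's reported 19% lower stopping probability.
```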

19 User’s Prestige Network
Community expertise network (CEN): people’s expertise can be measured by structural prestige
– Edges are directed from the losers of a task to its winner, so indegree reflects wins and outdegree reflects losses (cf. the win/lose distributions on slide 22)

20 Centrality Metrics
Calculate “centrality” metrics for each worker (sketched below):
– Degree centrality: the sum of the weights of worker u’s out-edges
– Eigenvector centrality: the steady-state visit probability of worker u when a random walker traverses the normalized graph
– Closeness centrality: the inverse of the average length of the shortest paths that originate at worker u and terminate at worker v (over every other worker v in the graph)
– Betweenness centrality: the sum, over all pairs of workers, of the fraction of shortest paths between the pair that pass through worker u
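
A minimal sketch of these four metrics on a toy weighted competition graph using networkx; treating eigenvector centrality as PageRank follows the slide's "random walker on the normalized graph" definition, and the loser-to-winner edge direction is the assumption noted on slide 19:

```python
import networkx as nx

# Toy competition graph: an edge u -> v with weight w means u lost to v
# in w tasks (direction is an assumption; see slide 19).
G = nx.DiGraph()
G.add_weighted_edges_from([
    ('u1', 'u2', 2.0), ('u3', 'u2', 1.0),
    ('u2', 'u4', 1.0), ('u3', 'u4', 1.0),
])

# Degree centrality as defined on the slide: sum of out-edge weights.
degree = dict(G.out_degree(weight='weight'))

# Eigenvector centrality as a random walker's steady-state visit
# probability on the normalized graph, i.e. PageRank.
eigen = nx.pagerank(G, weight='weight')

# Closeness: networkx measures incoming paths on digraphs, so reverse
# the graph to score paths that *originate* at u, as the slide defines.
closeness = nx.closeness_centrality(G.reverse())

# Betweenness: fraction of all-pairs shortest paths passing through u.
betweenness = nx.betweenness_centrality(G)
```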

21 User’s Prestige Network
– When the same two users compete twice, the same user wins 77% of the time (vs. a 1/2 chance)
– When the same two users compete three times, the same user wins all 3 times in 56% of the cases (vs. a 1/4 chance)
[Figure: 28 winners out of 800 users in total; node size is proportional to eigenvector centrality (PageRank); blue marks a user who has won at least once]

22 User’s Prestige Network
[Figure: indegree (win) and outdegree (lose) distributions for the design category]

23 Task’s Prestige Network
If winners of other tasks lose in this task, this task is more prestigious (construction sketched below)
– Example: user A won task X but lost task Y, so task Y is more prestigious than task X (directed edge from X to Y)
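
A sketch of building this task prestige network from per-task outcomes; the (user, task, won) record format is a hypothetical stand-in for the real data:

```python
import networkx as nx

# Hypothetical (user, task, won) records.
results = [
    ('A', 'X', True), ('A', 'Y', False),  # A won X, lost Y  =>  X -> Y
    ('B', 'Y', True), ('B', 'Z', False),  # B won Y, lost Z  =>  Y -> Z
]

# Index each user's wins and losses by task.
wins, losses = {}, {}
for user, task, won in results:
    (wins if won else losses).setdefault(user, set()).add(task)

# Add an edge X -> Y whenever some user won task X but lost task Y.
T = nx.DiGraph()
for user, won_tasks in wins.items():
    for x in won_tasks:
        for y in losses.get(user, ()):
            T.add_edge(x, y)  # y is more prestigious than x

print(list(T.edges))  # [('X', 'Y'), ('Y', 'Z')]
```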

24 Motif Profiles
Motif analysis provides a finer-grained, local view into the user and task networks (a census sketch follows below)
[Table: frequencies of dyadic and triadic motifs]
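
As a sketch of how such motif frequencies can be tallied, networkx's triadic census counts the 16 directed triad types, and dyad counts follow from edge reciprocity; the random graph here is a stand-in for the real networks:

```python
import networkx as nx

# Stand-in directed network.
G = nx.gnp_random_graph(50, 0.1, directed=True, seed=1)

# Triadic motifs: counts of the 16 directed triad types
# ('003', '012', '030T', ..., '300').
triads = nx.triadic_census(G)

# Dyadic motifs: reciprocated vs. one-way edges.
mutual = sum(1 for u, v in G.edges if G.has_edge(v, u)) // 2
asymmetric = G.number_of_edges() - 2 * mutual
print(triads, mutual, asymmetric)
```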

25 Average Expertise of All Users of a Task
For a given task, test the association between the task’s centrality (in the task prestige network) and the average indegree and PageRank of the users who submitted to it (in the user prestige network)
– Example pairing: task outdegree (losses) vs. average user PageRank (low)

26 Importance of Experience
– The reward of tasks selected by typical workers exhibits a diminishing increase with the number of submissions (as expected)
– The expected revenue per submission of typical workers tends to increase with the number of submissions until it settles around a constant

27 Importance of Experience
Users learn to choose less competitive tasks
Skilled users survive and continue to participate in the work (?)

28 Importance of Experience
The winning probability increases as the number of submissions increases (average reward also increases, though it tapers off); thus average revenue increases as well, and likewise tapers off
[Figure: estimated probability of winning, average reward (CNY), and average revenue (CNY) vs. number of submissions]
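
The quantities on this slide relate through a simple identity, sketched below with hypothetical numbers; this ignores reward variance and treats expected revenue per submission as winning probability times average reward:

```python
# Hypothetical values illustrating the relation on this slide.
p_win = 0.05        # estimated probability of winning a submission
avg_reward = 300.0  # average reward of attempted tasks, in CNY

expected_revenue = p_win * avg_reward  # 15.0 CNY per submission
print(expected_revenue)
```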

29 Summary
The amount of the reward does not correlate with:
– the number of submissions
– the expertise level
Yet it does correlate with the number of views
Expertise can be inferred from expertise networks
Successful users:
– choose less popular tasks
– focus on a specific reward range (best suited to their expertise?)
– increase their revenue with the number of attempts (though it tapers off)

