
1 SCG Court: A Crowdsourcing Platform for Innovation. Karl Lieberherr, Northeastern University, College of Computer and Information Science, Boston, MA. Joint work with Ahmed Abdelmeged. Supported by Novartis.

2 Crowdsourcing (4/24/2011). Solve an organizational problem: how to combine the work of hundreds of scholars?

3 Organizational Problem Solved. How to organize a loosely coupled collaboration among several scholars to agree on claims that can be refuted or defended constructively using a dialog.
– fair recognition of scholars: strong scholars cannot be ignored
– output: the answer "is the claim refuted?" plus the dialog
When the game is over, we are interested in:
– know-how!
– the list of claims that the scholars agree with.
defend(Alice,Bob,c) = !refute(Alice,Bob,c)

4 Organizational Problem Solved. How to design a happy scientific community that creates the science society needs. Classical game solution: egoistic scholars produce social welfare: a knowledge base and the know-how to defend it. Control of the scientific community:
– SCG rules
– a specific domain and claim definition to narrow the scope.
happy = no scholar is ignored.

5 What is a loose collaboration? Scholars can work independently on an aspect of the same problem. Problem = decide which claims in the playground to oppose or agree with. How is know-how combined? Using a protocol.
– Alice claimed that, for the input she provides, Bob cannot find an output of quality q. But Bob finds such an output; Alice corrects.
– Bug reports that need to be addressed, and corrections.
Playground = instantiation of the platform.

6 Claims. A protocol defines the scientific discourse. Scholars make a prediction about their performance in the protocol. A predicate decides whether a refutation is successful; the refutation protocol collects the data for this predicate. As a starter, think of a claim as a mathematical statement of the form EA or AE.
– Example: all planar graphs have a 4-coloring.

7 Benefits. Return on investment for playground designers: a small investment in defining a playground (Domain = (Instance, Solution, valid, quality), Claim = (Protocol, etc.)) produces an interactive environment to assimilate and create domain knowledge.

8 Benefits. Return on investment for scholars and avatar designers: the SCG rules need to be learned only once because they are the same across playgrounds. A small investment in learning the SCG rules and a domain leads to numerous learning, teaching, and innovation opportunities. The more a scholar teaches, the higher the scholar's reputation.

9 Global Warming. Alice's claim: the earth is warming significantly.
– Refutation protocol: Bob tries to refute. Alice must provide a data set DS satisfying a property defined precisely by the refutation protocol. Bob applies one of the allowed analysis methods M, also defined precisely by the refutation protocol. Bob wins iff M(DS) holds.

10 Independent Set. Protocol / claim: At Least As Good.
– Bob provides an undirected graph G.
– Bob computes an independent set sB for G (kept secret).
– Alice computes an independent set sA for G.
– Alice wins if size(sA) >= size(sB).
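A referee for this protocol needs only simple checks: that each submitted set is independent in G, plus the size comparison. A minimal sketch of such a referee; the function names and graph encoding are illustrative, not from SCG Court:

```python
def is_independent(graph, vertices):
    """Check that no two chosen vertices share an edge.
    graph: dict mapping vertex -> set of neighbor vertices."""
    vs = set(vertices)
    return all(graph[v].isdisjoint(vs - {v}) for v in vs)

def at_least_as_good(graph, sB, sA):
    """Alice wins iff both sets are independent and size(sA) >= size(sB).
    An invalid set from a player counts as a rule violation by that player."""
    if not is_independent(graph, sB):
        return "Alice"   # Bob violated a game rule
    if not is_independent(graph, sA):
        return "Bob"     # Alice violated a game rule
    return "Alice" if len(sA) >= len(sB) else "Bob"

# A 4-cycle: {0, 2} and {1, 3} are its maximum independent sets.
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
```

Note that the referee's checks are cheap even though finding a maximum independent set is hard; this matches the slide's division of labor between administrator and avatar.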

11 Overview
1. Organizational problem that SCG solves
2. What is SCG in detail?
3. Crowdsourcing
4. Formal Properties of SCG
5. Applications
6. Disadvantages
7. Conclusions

12 What is SCG(X)? (Diagram: a degree-of-automation axis for the scholar, from 0, no automation, the human plays, through some automation, to 1, full automation, the avatar plays; illustrated with avatar Bob and scholar Alice.) More applications: test constructive knowledge; transfer to reliable, efficient software.

13 A Virtual World: the Avatar's View. (Diagram: the Administrator exchanges claims, instances, solutions, and results with each Avatar, and relays the opponents' communication and feedback. The administrator does simple checking (usually efficient); the avatar does the complex work. Agreed claims, i.e., statements about algorithms, are the social welfare; the algorithms live in the avatar.)

14 Avatars propose and (oppose|agree). (Diagram: egoistic Alice, reputation 1000, has proposed claims CA1..CA4; egoistic Bob, reputation 10, has proposed CB1 and CB2. (1) Bob opposes CA2; (2) Bob provides an instance; (3) Alice solves the instance not as well as she predicted in CA2. Bob WINS, Alice LOSES, and 200 reputation is transferred.) Life of an avatar: (propose+ (oppose | agree)+ provide* solve*)*

15 What Scholars Think About. If I propose claim C, what is the probability that
– C is successfully refuted?
– C is successfully strengthened?
If I try to refute claim C, what is the probability that I will fail? If I try to strengthen claim C, what is the probability that I will fail?

16 Essence of the Game Rules. Actors:
– proposer = verifier (1st argument to refute, usually Alice)
– opposer = falsifier (2nd argument to refute, usually Bob)
LifeOfClaim(c) = propose(Alice,c) followed by (oppose(Alice,Bob,c) | agree(Alice,Bob,c)).
oppose(Alice,Bob,c) = (refute(Alice,Bob,c) | strengthen(Alice,Bob,c,cs)), where stronger(c,cs).
strengthen(Alice,Bob,c,cs) = !refute(Bob,Alice,cs).
agree(Alice,Bob,c) = !refute(Alice,Bob,c) and !refute(Bob,Alice,c) and refute(Alice,Bob,!c) and refute(Bob,Alice,!c)
Blamed decisions: propose(Alice,c), refute(A,B,c), strengthen(Alice,Bob,c,cs), agree(A,B,c).
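These rules compose from the single primitive refute. A sketch of that composition, with refute passed in as a function argument; the boolean encoding and parameter names are my own, not SCG Court's:

```python
def oppose(refute, alice, bob, c, cs=None, stronger=None):
    """oppose = refute | strengthen. A strengthen move requires stronger(c, cs)
    and succeeds iff Alice cannot refute the stronger claim cs."""
    if cs is not None:
        assert stronger(c, cs), "cs must be stronger than c"
        return not refute(bob, alice, cs)      # strengthen(Alice,Bob,c,cs)
    return refute(alice, bob, c)               # plain refutation

def agree(refute, neg, alice, bob, c):
    """agree: neither player can refute c, and both can refute its negation."""
    return (not refute(alice, bob, c) and not refute(bob, alice, c)
            and refute(alice, bob, neg(c)) and refute(bob, alice, neg(c)))
```

As a toy usage, take integer "claims" where negation flips the sign and exactly the negative claims are refutable: then agree holds for a positive claim and fails for a negative one, mirroring the bivalence assumption made later in the deck.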

17 Winning/Losing. propose(Alice,c), refutationTry(Alice,Bob,c). If Alice first violates a game rule, Bob is the winner. If Bob first violates a game rule, Alice is the winner. If neither violates a game rule, the claim predicate c.p(Alice,Bob,in,out) decides.

18 Game Rules for a Playground: legal(in), legal(out), valid(in,out), belongsTo(in, instanceSet); each move must be within the time limit.

19 Protocol Language.
ProtocolSpec = List(Step).
Step = Action "from" Role.
interface Role = Alice | Bob.
Alice = "Alice".
Bob = "Bob".
interface Action = ProvideAction | SolveAction.
ProvideAction = "instance".
// SolveAction: solve the instance provided in step # stepNo (stepNo is 0-based).
SolveAction = "solution" "of" int.
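The grammar is small enough to parse with a few lines of string handling. A hypothetical parser sketch; the tuple representation of a step is my own choice:

```python
def parse_protocol(text):
    """Parse a ProtocolSpec, one Step per line.
    ProvideAction: 'instance from <Role>'
    SolveAction:   'solution of <stepNo> from <Role>' (stepNo is 0-based).
    Returns (action, stepNo_or_None, role) tuples."""
    steps = []
    for line in text.strip().splitlines():
        words = line.split("//")[0].split()   # strip trailing // comments
        if not words:
            continue
        if words[0] == "instance" and words[1] == "from":
            steps.append(("instance", None, words[2]))
        elif words[:2] == ["solution", "of"] and words[3] == "from":
            steps.append(("solution", int(words[2]), words[4]))
        else:
            raise ValueError("bad step: " + line)
    return steps

# The square-root protocol that appears later in the deck:
spec = """instance from Bob
solution of 0 from Bob
solution of 0 from Alice"""
```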

20 How to achieve loosely coupled collaboration? Information exchange is based on values; the knowledge of how to produce those values stays secret. Assign blame correctly to Alice or Bob based on the outcome of the refutation protocol. Every claim has a negation (using the idea of Hintikka's dual game).

21 Dual Game / Negation. Each game G has a dual game, which is the same as G except that the players ∀ and ∃ are transposed in both the rules for playing and the rules for winning. The game G(¬φ) is the dual of G(φ).

22 A claim is
– meta-information about one's performance when interacting with another clever being, or
– meta-information about the performance of one's program.

23 How does the collaboration work? Scholars make claims about their own performance in a given context, or about the performance of their avatar in a given context. An opponent finds an input in that context that contradicts the claim: the claim is refuted.

24 Playground Design. Define several languages:
– Instance
– Solution
– Claim
– InstanceSet
Define a protocol or reuse an existing protocol. Implement the interfaces for the corresponding classes.

25 Who are the scholars? Students in a classroom
– high school
– university
Members of the gig economy
– between 1995 and 2005, the number of self-employed independent workers grew by 27 percent.
Potential employees. Anyone with web access; an intelligent crowd.

26 How to engage scholars? Several binary games between Alice and Bob. Alice must propose C or !C for one of the allowed C. Bob must agree with or oppose what Alice proposes. Agree(C):
– Bob defends C against Alice.
– Bob refutes !C against Alice.
– Alice defends C against Bob.
– Alice refutes !C against Bob.

27 How to engage scholars? Opposition. Central to opposition is refutation. A claim is defined by a protocol. Simplest protocol:
– Alice provides an input in.
– Bob computes an output out with valid(in,out).
– Alice defends if quality(in,out) < q.
– Bob refutes if quality(in,out) >= q.
Claims: C(q), q in [0,1].
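For this simplest protocol the claim predicate reduces to a single comparison. A referee sketch, with the playground's provide, solve, valid, and quality functions passed in; all names here are assumptions for illustration:

```python
def play_threshold_claim(q, provide, solve, valid, quality):
    """Referee for Alice's claim C(q): for the input Alice provides,
    Bob cannot find an output of quality >= q. Returns the winner."""
    inp = provide()                  # Alice provides the instance
    out = solve(inp)                 # Bob computes an output
    if not valid(inp, out):
        return "Alice"               # Bob violated a game rule
    return "Bob" if quality(inp, out) >= q else "Alice"
```

As a toy playground: the instance is a list of numbers, a valid output is one of its elements, and quality is the chosen element divided by the list maximum. A solver that picks the maximum refutes any claim with q <= 1; a solver that picks the minimum lets Alice defend.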

28 Overview
1. Organizational problem solved by SCG
2. What is SCG in detail?
3. Crowdsourcing
4. Formal Properties of SCG
5. Applications
6. Disadvantages
7. Conclusions

29 Crowdsourcing. An active area: see a recent Communications of the ACM article. Wikipedia, FoldIt, TopCoder, ... We want a family of crowdsourcing systems with provable properties.

30 Crowdsourcing Platform. Crowdsourcing
– is the act of taking a job traditionally performed by a designated agent (usually an employee) and outsourcing it to an undefined, generally large group of people in the form of an open call;
– enlists a crowd of humans to help solve a problem defined by the system owners.
A crowdsourcing platform is a generic tool that makes it easy to develop a crowdsourcing system.

31 Crowdsourcing Platform. The job (target problem) is
– to solve instances of a problem and make claims about the solution process;
– to build a knowledge base of claims and techniques to defend the claims.

32 Requirements for a Crowdsourcing Platform. Find a good way to combine user contributions to solve the target problem. Find a good way to evaluate users and their contributions. Find a good way to recruit and retain users.

33 SCG Court is a web application. Software developers register with SCG Court and choose the playgrounds they want to compete in. They register their avatars in the appropriate playgrounds in time for the next tournament. Avatars are improved between tournaments based on the ranking achieved and the game history.

34 Combine user contributions. Users build on each other's work: strengthening and checking. Users check each other's claims for correct judgment.
– Claims are defended and refuted.
Users trade reputation for information.

35 Learning cycle. Alice wins reputation with claim c because Bob made a wrong decision.
– Alice gives information about the artifact related to c: Alice teaches Bob. Bob integrates the information into his know-how: Bob learns from Alice.
– Bob has hopefully learned enough and will no longer make a wrong decision about c.

36 Voting with Justification. I vote
– for this claim (agree), because I can defend it and refute its negation;
– against this claim, because I can oppose it (refute or strengthen it).

37 Evaluate users and their contributions. Calculate reputation from
– the proposer's confidence that the claim is good (gc);
– the opposer's confidence (refute or strengthen) that the claim is bad (bc).
Scholars are encouraged to set their confidences truthfully; otherwise they don't gain enough reputation or they lose too much reputation.

38 Reputation Update
          claim good   claim bad
propose   up           down
oppose    down         up
up: if you are good, there is a chance that you win. down: if the other is good, there is a chance that you lose.
up: reputation goes up, but the winner has to provide knowledge that might reveal a secret technique.
down: reputation goes down, but the loser might gain knowledge that reveals a secret technique.

39 Reputation Update
          claim good   claim bad
propose   up           down
oppose    down         up
up: if you are good, there is a chance that you win. down: if the other is good, there is a chance that you lose.
Confidences: the proposer's confidence that the claim is good is gc; the opposer's confidence that the claim is bad is bc. r = result of the refutation protocol. Reputation update: r*gc*bc (various refinements are possible).
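A sketch of the update rule r*gc*bc as stated. The sign convention for r (+1 when the opposer wins the refutation, -1 when the proposer defends) and the transfer bookkeeping are my assumptions; the slide itself notes that refinements are possible:

```python
def reputation_update(rep_proposer, rep_opposer, r, gc, bc):
    """Transfer r*gc*bc reputation between proposer and opposer.
    r = +1: opposer won the refutation (the claim was bad)
    r = -1: proposer defended the claim (the claim was good)
    gc: proposer's confidence that the claim is good, in [0, 1]
    bc: opposer's confidence that the claim is bad, in [0, 1]."""
    delta = r * gc * bc
    return rep_proposer - delta, rep_opposer + delta
```

With this rule, a scholar who overstates a confidence risks a proportionally larger loss, which is how truthful confidence-setting is encouraged.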

40 Perfect. Being perfect means making perfect decisions. up: if you are perfect, you will not lose. down: if the other is perfect, you will not win. (Same propose/oppose table as on the previous slides.)

41 Overview
1. Organizational problem solved by SCG
2. What is SCG in detail?
3. Crowdsourcing
4. Formal Properties of SCG
5. Applications
6. Disadvantages
7. Conclusions

42 Formal Properties of SCG. Soundness:
– only false claims are refuted;
– only true claims are defended.
SCG is not sound, because it adapts to the skill level of the scholars. E.g.,
– Alice proposes a false claim and still defends it, because Alice and Bob are weak; or
– Alice proposes a true claim and fails to defend it, because Alice is weak.
We want to prove formal properties that don't imply soundness.

43 Formal Properties.
– Community Property
– Equilibrium
– Convergence
Assumption: claims are bivalent (true or false); indeterminate claims are disallowed.

44 For every faulty decision action there exists an exposing reaction. Decision propose(A,c): if c is not true, refute(A,B,c) or strengthen(A,B,c,cs) exposes it. Decision oppose(Alice,Bob,c) | agree(Alice,Bob,c):
– if Bob decides to oppose but does not oppose successfully, his oppose action is blamed; Bob is discouraged from attacking without good reason.
– if Bob decides to agree but does not agree successfully, his agree action is blamed.

45 Community Property. For every faulty decision action there exists an exposing reaction that blames the bad decision.
– Reason: we want the system to be egalitarian.
– It is important that clever crowd members can shine and expose others who don't promote the social welfare of the community.
Faulty decisions must be exposable; it may take effort.

46 Community Property, Alternative Formulation. If no decision by Alice is faulty, there is no chance of Alice losing against Bob.
– if Alice is perfect, there is no chance of losing.
If there exists a faulty decision by Alice, there is a chance of Alice losing against Bob.
– an egalitarian game.

47 Summary: faulty decisions
1. propose(Alice,c), c = false
2. propose(Alice,c), c = not optimum, c = true
3. refute(Alice,Bob,c), c = true
4. strengthen(Alice,Bob,c,cs), c = optimum
5. strengthen(Alice,Bob,c,cs), c = false
6. agree(Alice,Bob,c), c = false
7. agree(Alice,Bob,c), c = not optimum, c = true

48 Community Property, Case 1: propose(Alice,c), c = false. Alice's decision propose(Alice,c) proposes claim c as true. Assume c is false: Alice has introduced a fault into the knowledge base. There must be a reaction that assigns blame to Alice's decision. Here it is: Bob decides to oppose, oppose(Alice,Bob,c), specifically to refute: refute(Alice,Bob,c). There must be a successful refutation.

49 Community Property, Case 2: propose(Alice,c), c = not optimum, c = true. Alice's decision propose(Alice,c) proposes claim c as optimum. Assume c is not optimum, but true, and can be strengthened: Alice has introduced a fault into the knowledge base. There must be a reaction that assigns blame to Alice's decision. Here it is: Bob decides to oppose, oppose(Alice,Bob,c), specifically to strengthen: strengthen(Alice,Bob,c,cs). There must be a choice of cs so that refute(Bob,Alice,cs) returns false, independent of Alice's strategy.

50 Community Property, Case 3: refute(Alice,Bob,c), c = true. Bob's decision refute(Alice,Bob,c) is wrong if c is true: Bob tries to introduce a fault into the knowledge base. There must be a reaction by Alice that assigns blame to Bob's decision to refute. Because c is true, there must be a defense of c by Alice, i.e., refute(Alice,Bob,c) returns false independent of Bob's strategy. Bob's decision to refute is blamed.

51 Community Property, Case 4: strengthen(Alice,Bob,c,cs), c = optimum. Bob's decision strengthen(Alice,Bob,c,cs) is wrong if c is optimum: Bob tries to introduce a fault into the knowledge base. There must be a reaction by Alice that assigns blame to Bob's decision to strengthen. Because c is optimum, there must be a refutation of cs by Alice, i.e., refute(Bob,Alice,cs) returns true independent of Bob's strategy. Bob's decision to strengthen is blamed.

52 Community Property, Case 5: strengthen(Alice,Bob,c,cs), c = false. Bob's decision strengthen(Alice,Bob,c,cs) is wrong if c is false: Bob tries to introduce a fault into the knowledge base. There must be a reaction by Alice that assigns blame to Bob's decision to strengthen. Because c is false, there must be a refutation of cs by Alice, i.e., refute(Bob,Alice,cs) returns true independent of Bob's strategy. Bob's decision to strengthen is blamed.

53 Community Property, Case 6: agree(Alice,Bob,c), c = false. Bob's decision agree(Alice,Bob,c) is wrong if c is false. Assume c is false: Bob tries to introduce a fault into the knowledge base. There must be a reaction by Alice that assigns blame to Bob's decision to agree. Because c is false, there is a strategy for Alice so that refute(Bob,Alice,c) returns false independent of Bob's strategy. Bob's decision to agree is blamed.

54 Community Property, Case 7: agree(Alice,Bob,c), c = not optimum, c = true. Bob's decision agree(Alice,Bob,c) is wrong if c is not optimum. Assume c is not optimum, but true: Bob tries to introduce a fault into the knowledge base. There must be a reaction by Alice that assigns blame to Bob's decision to agree. Because c is not optimum and true, there must be a strengthening of c by Alice to cs, i.e., refute(Alice,Bob,cs) returns false independent of Bob's strategy. Bob's decision to agree is blamed.

55 SCG Equilibrium. The reputations of the scholars are stable; the science does not progress: bugs are not fixed, no new ideas are introduced. Extreme example: all scholars are perfect; they propose optimal claims that can be neither strengthened nor refuted.

56 Claims. (Diagram: claims placed on a quality scale from 0 to 1. Strengthening moves a claim toward the correct valuation; over-strengthening crosses into false territory. True claims are defendable; false claims are refutable.)

57 Convergence. If every faulty action is exposed, convergence is guaranteed.

58 Related Work. Argumentation theory. Argumentation mechanism design
– strategy-proof mechanisms.
Logic
– Paul Lorenzen's dialog games.
– Independence-Friendly Logic by Hintikka/Sandu: logical games of imperfect information.

59 Independence-Friendly Logic (Hintikka and Sandu). Protocol / claim:
– Bob provides a positive real number r in R+.
– Bob computes a square root sB of r in R (secret).
– Alice computes a square root sA of r in R.
– Alice wins if sA and sB are equal (within a small error bound).
The claim is neither true nor false (imperfect information).
ForAll r in R+ ForAll sB in R Exists sA/sB in R: (sA = sB) and (sB = B(r)) and (sA = A(r))
"Exists sA/sB" means that the Verifier's choice prompted by "Exists sA" is independent of the Falsifier's choice prompted by "ForAll sB". (Verifier = Alice, Falsifier = Bob.)
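A sketch of this game's winning condition. Because a positive real has two square roots, Alice can match Bob's magnitude but not his hidden sign, which is exactly the imperfect-information gap the slide describes. The function name and error bound are my own:

```python
def alice_wins(r, sB, sA, eps=1e-9):
    """Alice wins iff both answers are square roots of r and agree within eps.
    With roots +s and -s, winning depends on Bob's hidden sign choice."""
    if abs(sB * sB - r) > eps or abs(sA * sA - r) > eps:
        raise ValueError("not a square root of r")  # rule violation
    return abs(sA - sB) <= eps

# For r = 4 the roots are +2 and -2: Alice wins only if she matches Bob's sign.
```

Against a Bob who picks the sign uniformly at random, neither player has a winning strategy, so the claim is indeterminate, as the slide states.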

60 In the SCG Protocol Language:
instance from Bob // r
solution of 0 from Bob // sB for r
solution of 0 from Alice // sA for r

61 Independence-Friendly Logic (IF Logic). Protocol / claim: At Least As Good.
– Bob provides an undirected graph G.
– Bob computes an independent set sB for G (secret).
– Alice computes an independent set sA for G.
– Alice wins if size(sA) >= size(sB).
Alice has a winning strategy: search for a maximum independent set. But does she have a practical winning strategy?

62 Claims that are neither true nor false. ForAll x Exists y/x (x = y) has indeterminate truth value in any model with cardinality > 1. Reason: it is a game of imperfect information; the Verifier and the Falsifier choose values for x and y without knowing each other's choice. Classical logic is bivalent; IF logic is more expressive than ordinary first-order languages.

63 Game-Theoretic Semantics. Every sentence is associated with a game between two players: the Verifier (Alice) and the Falsifier (Bob). A universal quantifier prompts a move by the Falsifier; an existential quantifier prompts a move by the Verifier. A sentence is said to be true (false) if there exists a winning strategy for the Verifier (Falsifier). A sentence is said to be refuted (defended) if the Falsifier (Verifier) wins a specific game.

64 Long History. (It came to light sometime later that C. S. Peirce had already suggested, in 1898, explaining the difference between 'every' and 'some' in terms of who chooses the object.)

65 Significance of Refutation or Defense. Forget about winning strategies for the Verifier and the Falsifier; we want to develop winning strategies incrementally. When the Verifier wins a game, we have some evidence that the claim is true; the Falsifier is blamed for trying to refute. When the Falsifier wins a game, we have some evidence that the claim is false; the Verifier is blamed for proposing the claim.

66 Collaboration between Verifier (Alice) and Falsifier (Bob). IF formulas have a special form:
– ForAll i Exists oA: p(i,oA) and oA=A(i) and PB(i)
– ForAll i ForAll oB Exists oA/oB: p(i,oA,oB) and oA=A(i) and oB=B(i) and PB(i)
– Exists i ForAll oB: p(i,oB) and oB=B(i) and PA(i)
We are interested in improving A, B, and PB by playing the game several times. A is Alice's know-how and B is Bob's know-how; A and B are functions. PB(i) is Bob's provide relation for finding hard inputs i. The claim makes a prediction about A, B, and PB; a game defends or refutes that prediction.

67 Collaboration between Verifier (Alice) and Falsifier (Bob). After a successful defense, the blame is assigned to Bob, specifically to Bob's decision to oppose the claim. After a successful refutation, the blame is assigned to Alice, specifically to Alice's decision to propose the claim. It is the responsibility of Alice and Bob to assign the blame more specifically and to improve their know-how about A, B, PA, PB, and the claim.

68 Overview
1. Organizational problem that SCG solves
2. What is SCG in detail?
3. Crowdsourcing
4. Formal Properties of SCG
5. Applications
6. Disadvantages
7. Conclusions

69 Applications. My applications of SCG in teaching:
– software development classes (developing SCG Court; developing software for MAX-CSP)
– algorithms classes
Crowdsourcing know-how in constructive domains.

70 Claim Involving an Experiment. Claim ExperimentalTechnique(X,Y,q,r): I claim that, given raw materials x in X, I can produce a product y in Y of quality q using resources at most r.

71 Gamification of Software Development. Want reliable software to solve a computational problem? Design a game where the winning team will create the software you want. Want to teach a STEM domain? Design a game where the winning students demonstrate superior domain knowledge. (Doesn't TopCoder already do this? STEM = Science, Technology, Engineering, and Mathematics.)

72 SCG and TopCoder. SCG is an abstraction and generalization of TopCoder.

73 Planned Applications Require Prize Money. IT recruiting tool: need employees who are good in a computational domain? Design a game and pick the winners. Need a software package for solving an optimization problem? Design a game and pick the winning avatar.

74 What we want. Engage software developers:
– let them produce software that models an organism that fends for itself in a virtual world while producing the software we want. Have fun.
Focus them:
– let them propose claims about the software they produce. Reward them when they defend their claims successfully or oppose the claims of others successfully.
(Clear feedback. Sense of progress. Possibility of success. Authenticity (Facebook).)

75 Overview
1. Organizational problem that SCG solves
2. What is SCG in detail?
3. Crowdsourcing
4. Formal Properties of SCG
5. Applications
6. Disadvantages
7. Conclusions

76 Disadvantages. Overhead for avatar developers:
– the overhead of learning SCG (the rules);
– the overhead of learning SCG Court (how to register your avatar);
– amortization: SCG(X1) -> SCG(X2) -> SCG(X3).
Overhead for playground developers:
– playgrounds need to be well tested (against cheating);
– the definition of what you want must be precise;
– you get what you ordered.

77 Disadvantages of SCG. The game is addictive: after Bob has spent 4 hours fixing his avatar and is still losing against Alice, Bob really wants to know why!

78 Disadvantages of SCG. The administrator for SCG(X) must supervise the game perfectly.
– if the admin does not, cheap play is possible.
– who watches over the admin?

79 How to compensate for those disadvantages. Warn the scholars that the game is addictive. Use a gentleman's security policy: report administrator problems, don't exploit them to win. Occasionally hold a non-counting "attack the administrator" competition to find vulnerabilities in the administrator.
– both generic and X-specific vulnerabilities.

80 Overview
1. Organizational problem that SCG solves
2. What is SCG in detail?
3. Crowdsourcing
4. Formal Properties of SCG
5. Applications
6. Disadvantages
7. Conclusions

81 Conclusions. SCG Court is a platform for creating happy communities of scholars/avatars that create science in specific domains. The egoistic scholars create social welfare: knowledge and the know-how to support it. It evaluates fairly, frequently, constructively, and dynamically. It encourages retrieval of state-of-the-art know-how, integration, and discovery. It challenges humans and drives innovation, both competitive and collaborative.

82 The End

83 Highest Safe Rung. You are doing stress-testing on various models of glass jars to determine the height from which they can be dropped and still not break. The setup for this experiment, on a particular type of jar, is as follows.

84 Highest Safe Rung. You have a ladder with n rungs, and you want to find the highest rung from which you can drop a copy of the jar and not have it break; we call this the highest safe rung. You have a fixed "budget" of k > 0 jars. (Here: only two identical bottles to determine the highest safe rung.)

85 Highest Safe Rung. Only two identical bottles to determine the highest safe rung. Alice: HSR(9,2) <= 4. Bob: I doubt it: refutation attempt! Alice constructs a decision tree T of depth 4 and gives it to Bob. He checks whether T is valid; Bob wins if he finds a flaw.

86 Highest Safe Rung Decision Tree, HSR(9,2) = 5. (Diagram: a decision tree over the rungs; each internal node x tests a rung, with yes/no branches, and each leaf u gives the highest safe rung.)

87 Finding a solution for HSR(n,2) (Keith Levin, CS 4800, Fall 2010). Approximate: minimize (n/x) + x over x in [0,n]. Exact: let MaxRungs(x,y) be the largest number of rungs we can test with y jars and x experiments.
– MaxRungs(x,y) = MaxRungs(x-1,y-1) + MaxRungs(x-1,y) (breaks at root / does not break at root)
– MaxRungs(x,2) = x + MaxRungs(x-1,2)
– MaxRungs(0,2) = 1
Find the minimum x such that MaxRungs(x,2) > n. Applied to HSR(9,2): MaxRungs(3,2) = 7 < 9; MaxRungs(4,2) = 11 > 9.

88 MaxRungs. MaxRungs(x,y) = sum over k = 0..y of binomial(x,k). All paths have length x, and at most y branches may be left branches (breaks). Note: y = x implies MaxRungs(x,y) = 2^x, i.e., a complete binary tree of depth x. Example: binomial(3,2) + binomial(3,1) + binomial(3,0) = 7.
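The recurrence and the closed form can be checked against each other in a few lines, and the search for the minimum number of experiments follows the slide's criterion MaxRungs(x,2) > n. A sketch; the base cases MaxRungs(x,0) = MaxRungs(0,y) = 1 and the function names are my own generalization of the slide's 2-jar base case:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def max_rungs(x, y):
    """Largest number of rungs testable with y jars and x experiments.
    Left branch: the jar breaks at the tested rung; right: it survives."""
    if x == 0 or y == 0:
        return 1
    return max_rungs(x - 1, y - 1) + max_rungs(x - 1, y)

def max_rungs_closed(x, y):
    """Closed form from the slide: sum of binomial(x, k) for k = 0..y."""
    return sum(comb(x, k) for k in range(y + 1))

def min_experiments(n, k=2):
    """Minimum x with max_rungs(x, k) > n, per the slide's criterion."""
    x = 0
    while max_rungs(x, k) <= n:
        x += 1
    return x
```

Running this reproduces the slide's numbers: MaxRungs(3,2) = 7 < 9 and MaxRungs(4,2) = 11 > 9, so four experiments suffice for a 9-rung ladder with two jars.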

89 Formal: HSR Domain.
– Problem: (n,k), k <= n.
– Solution: a decision tree that determines the highest safe rung.
– quality(problem, solution): depth of the decision tree / number of rungs.
– valid(problem, solution): at most k left branches, ...


91 Community Principle 2. If all decisions by Alice are good, there is no chance of Alice losing against Bob.
– if Alice is perfect, there is no chance of losing.
If there exists a bad decision by Alice, there is a chance of Alice losing against Bob.
– an egalitarian game.

92 Bad Decisions (detectable efficiently during the game)
a. proposing a claim and not supporting it;
b. opposing a claim and not opposing it successfully;
c. agreeing with a claim that one cannot defend or whose negation one cannot refute.

93 Under the Radar. A game can progress without detectable faults of kinds a, b, c and still not be sound. With all 7 fault kinds: if there are no faults, we have soundness, but we cannot check for them efficiently. With a, b, c: a guaranteed loss if caught.

94 Questions from the ETH Talk. Michael Franz
– electronic trading analogy: improve trading software overnight.
Walter Huersch
– the value created by the game: how to distribute it among the participants? Based on the reputation of the scholars.
– Volkswirtschaftlich vernünftig? (Economically sensible?) Is it more efficient to get scholars to evaluate each other?
Christoph Roduner
– intranet: the game as a collaboration starter; focused brainstorming.
From CMU: Poersch?
– How does it work with students? Mention the baby avatar; MAX-CSP.
– Constructive nature.

95 Questions. Thomas Gross
– meta game: trying to break the game.
– students pose each other questions and correct each other's answers; a TA is still needed because of the unsoundness of the game.

96 Emanuele (by email). Claim sets to share (closed under negation):
– HSR(n,k) <= q
– CNF(k) >= 1 - 2^-k
– MAX-CSP(R) >= t_R
– MAX(ProblemName,i) >= o, e.g., MAX(NetworkFlow,g) >= f

