
1 Strategy-Proof Classification
Reshef Meir, School of Computer Science and Engineering, Hebrew University
Joint work with Ariel D. Procaccia and Jeffrey S. Rosenschein

2 Strategy-Proof Classification
Introduction
– Learning and Classification
– An Example of Strategic Behavior
Motivation
– Decision Making
– Machine Learning
Our Model
Some Results

3 Classification
The supervised classification problem:
– Input: a set of labeled data points {(x_i, y_i)}_{i=1..m}
– Output: a classifier c from some predefined concept class C (functions of the form f: X → {−,+})
We usually want c not just to classify the sample correctly, but to generalize well, i.e. to minimize Risk(c) ≡ E_{(x,y)~D}[L(c(x) ≠ y)], where D is the distribution from which the training data was sampled and L is some loss function.
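The risk definition above can be made concrete with a short sketch. The threshold concept class, the 10% noise rate, and all names here are our own illustrative choices, not part of the slides:

```python
import random

# A toy concept class: threshold classifiers on the real line,
# c_t(x) = '+' if x >= t else '-'.  (Illustrative choice, not from the slides.)
def make_threshold(t):
    return lambda x: '+' if x >= t else '-'

# Labeled sample {(x_i, y_i)}: points below 0.5 are '-', the rest '+',
# with 10% label noise.
random.seed(0)
sample = []
for _ in range(200):
    x = random.random()
    y = '+' if x >= 0.5 else '-'
    if random.random() < 0.1:          # flip the label with prob. 0.1
        y = '+' if y == '-' else '-'
    sample.append((x, y))

def empirical_risk(c, sample):
    """Fraction of sample points the classifier c mislabels (0/1 loss)."""
    return sum(1 for x, y in sample if c(x) != y) / len(sample)

risk = empirical_risk(make_threshold(0.5), sample)
print(round(risk, 2))  # close to the 10% noise rate
```

The empirical risk here plays the role of Risk(c) evaluated on the sample; the true risk would average over fresh draws from D.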

4 Classification (cont.)
A common approach is to return the ERM, i.e. the concept in C that best fits the given samples (a.k.a. training data)
– If finding it is hard, try to approximate it
Works well under some assumptions on the concept class C
Should we do the same with many experts?

5 Strategic labeling: an example
The ERM classifier makes 5 errors.

6 There is a better classifier! (for me…)

7 If I just change the labels…
2 + 4 = 6 errors

8 Decision Making
The ECB makes decisions based on reports from national banks
National bankers gather positive/negative data from local institutions
Each country reports to the ECB
A yes/no decision is taken at the European level
Bankers might misreport their data in order to sway the central decision

9 Machine Learning (spam filter)
Managers label e-mails; the reported dataset is fed to a classification algorithm, which outputs a classifier (a spam filter, e.g. in Outlook).

10 Learning (cont.)
Some e-mails may be considered spam by certain managers, and relevant by others
A manager might misreport labels to bias the final classifier towards her point of view

11 A problem is characterized by
– An input space X
– A set of classifiers (concept class) C; every classifier c ∈ C is a function c: X → {+,−}
– Optional assumptions and restrictions
Example 1: all linear separators in R^n
Example 2: all subsets of a finite set Q

12 A problem instance is defined by
– A set of agents I = {1, …, n}
– A partial dataset for each agent i ∈ I: X_i = {x_i1, …, x_i,m(i)} ⊆ X
– For each x_ik ∈ X_i, agent i has a label y_ik ∈ {−, +}
– Each pair s_ik = ⟨x_ik, y_ik⟩ is an example
– All examples of a single agent compose the labeled dataset S_i = {s_i1, …, s_i,m(i)}
– The joint dataset S = ⟨S_1, S_2, …, S_n⟩ is our input; m = |S|
We denote the dataset with the reported labels by S′
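The instance above can be captured by a minimal data structure. The names mirror the slides' notation (X_i, S_i, reported labels S′) but the representation itself is our own sketch:

```python
from dataclasses import dataclass

# A minimal representation of a problem instance (names mirror the slides'
# notation; the structure itself is our own illustrative choice).
@dataclass
class Agent:
    points: list            # X_i = [x_i1, ..., x_i,m(i)]
    labels: list            # true labels y_ik in {'-', '+'}
    reported: list = None   # possibly manipulated labels; defaults to truthful

    def __post_init__(self):
        if self.reported is None:
            self.reported = list(self.labels)

    @property
    def examples(self):     # S_i = [(x_ik, y_ik)], with reported labels
        return list(zip(self.points, self.reported))

# The joint dataset S = <S_1, ..., S_n>; m = |S|
agents = [
    Agent(points=[0.1, 0.2], labels=['-', '-']),
    Agent(points=[0.7, 0.9, 0.8], labels=['+', '+', '+']),
]
S = [ex for a in agents for ex in a.examples]
print(len(S))  # m = 5
```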

13 Input: Example
Three agents, each holding points X_i ∈ X^{m_i} with labels Y_i ∈ {−,+}^{m_i}
S = ⟨S_1, S_2, …, S_n⟩ = ⟨(X_1, Y_1), …, (X_n, Y_n)⟩

14 Mechanisms
A mechanism M receives a labeled dataset S′ and outputs c ∈ C
Private risk of agent i: R_i(c, S) = |{k : c(x_ik) ≠ y_ik}| / m_i
Global risk: R(c, S) = |{(i,k) : c(x_ik) ≠ y_ik}| / m
We allow non-deterministic mechanisms
– The outcome is a random variable
– We measure the expected risk
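The two risk measures on slide 14 translate directly into code. The toy data is ours; the key point is that private risk is measured against an agent's true labels even though the mechanism only sees reports:

```python
# Private and global risk from slide 14, written out directly.
def private_risk(c, points, true_labels):
    """R_i(c, S) = |{k : c(x_ik) != y_ik}| / m_i"""
    errors = sum(1 for x, y in zip(points, true_labels) if c(x) != y)
    return errors / len(points)

def global_risk(c, all_agents):
    """R(c, S): error fraction over the union of all agents' examples."""
    errors = total = 0
    for points, labels in all_agents:
        errors += sum(1 for x, y in zip(points, labels) if c(x) != y)
        total += len(points)
    return errors / total

# Toy check with the constant classifier "all positive".
all_pos = lambda x: '+'
agents = [([0.1, 0.2], ['-', '-']),            # agent 1: fully negative
          ([0.7, 0.9, 0.8], ['+', '+', '+'])]  # agent 2: fully positive
print(private_risk(all_pos, *agents[0]))  # 1.0
print(global_risk(all_pos, agents))       # 2/5 = 0.4
```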

15 ERM
We compare the outcome of M to the ERM:
c* = ERM(S) = argmin_{c ∈ C} R(c, S)
r* = R(c*, S)
Can our mechanism simply compute and return the ERM?
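For a finite concept class, computing the ERM is a brute-force minimization. Here we use the two constant classifiers from the ECB example; any finite C works the same way:

```python
# Brute-force ERM over a small finite concept class C.
def erm(concepts, sample):
    """c* = argmin_{c in C} R(c, S); returns (name of c*, r*)."""
    def risk(c):
        return sum(1 for x, y in sample if c(x) != y) / len(sample)
    name = min(concepts, key=lambda n: risk(concepts[n]))
    return name, risk(concepts[name])

concepts = {'all positive': lambda x: '+',
            'all negative': lambda x: '-'}
sample = [(0, '+'), (1, '+'), (2, '+'), (3, '-')]
print(erm(concepts, sample))  # ('all positive', 0.25)
```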

16 Requirements
1. Good approximation: ∀S, R(M(S), S) ≤ β·r*
2. Strategy-proofness: ∀i, S, S_i′: R_i(M(S), S) ≤ R_i(M(S_{−i}, S_i′), S)
ERM(S) is 1-approximating but not SP
ERM(S_1) is SP but gives a bad approximation
Are there mechanisms that guarantee both SP and good approximation?
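The claim that ERM is not SP can be checked concretely, in the spirit of the strategic-labeling example on slides 5–7. The numbers below are our own toy construction over the two constant classifiers:

```python
# A concrete manipulation showing ERM is not strategy-proof, even for
# C = {all '+', all '-'}.  (Our own toy numbers, in the spirit of slides 5-7.)
def erm_constant(labels):
    """ERM over the two constant classifiers: the majority reported label."""
    return '+' if labels.count('+') > labels.count('-') else '-'

truth_A = ['+'] * 4 + ['-'] * 2   # agent A: majority '+'
truth_B = ['-'] * 5               # agent B: all '-'

def private_risk_A(c):
    """Agent A's risk, always measured against her TRUE labels."""
    return sum(1 for y in truth_A if c != y) / len(truth_A)

# Truthful reporting: 4 '+' vs. 7 '-', so ERM returns '-' and A suffers 4/6.
c_truthful = erm_constant(truth_A + truth_B)
# A exaggerates, reporting all '+': 6 '+' vs. 5 '-' flips the ERM to '+'.
c_lie = erm_constant(['+'] * 6 + truth_B)
print(c_truthful, round(private_risk_A(c_truthful), 2))  # - 0.67
print(c_lie, round(private_risk_A(c_lie), 2))            # + 0.33
```

Agent A pays nothing for the lie here (her private risk drops from 2/3 to 1/3), which is exactly the incentive problem the mechanisms below are designed to remove.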

17 Suppose |C| = 2
Like in the ECB example
There is a trivial deterministic SP 3-approximation mechanism
Theorem: there is no deterministic SP α-approximation mechanism, for any α < 3
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification under Constant Hypotheses: A Tale of Two Functions, AAAI 2008
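One natural deterministic SP rule for two constant functions is "majority of agent majorities". This is our own sketch of an SP mechanism in this setting; we do not claim it is exactly the paper's 3-approximation mechanism:

```python
# A deterministic SP mechanism sketch for C = {all '+', all '-'}:
# each agent is collapsed to her majority label, then the agent majority wins.
# (Our illustration of an SP rule; not necessarily the mechanism in the paper.)
def sp_constant_mechanism(reported):
    """reported: list of per-agent label lists; returns '+' or '-'."""
    votes = ['+' if lab.count('+') > lab.count('-') else '-'
             for lab in reported]
    return '+' if votes.count('+') > votes.count('-') else '-'

# Why this is SP: agent i influences the outcome only through her single
# vote, which truthful reporting already sets to her preferred constant
# label; any misreport can only leave that vote unchanged or flip it
# AGAINST her own preference.
agents = [['+', '+', '-'], ['-', '-', '-'], ['+', '+', '+']]
print(sp_constant_mechanism(agents))  # '+' (two of three agents lean '+')
```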

18 Proof
C = {"all positive", "all negative"}
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification under Constant Hypotheses: A Tale of Two Functions, AAAI 2008

19 Randomization comes to the rescue
There is a randomized SP 2-approximation mechanism (when |C| = 2)
– The randomization is non-trivial
Once again, no better SP mechanism exists
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification under Constant Hypotheses: A Tale of Two Functions, AAAI 2008

20 Negative results
Theorem: there are concept classes (including linear separators) for which no SP mechanism achieves a constant approximation ratio
Proof idea:
– First construct a classification problem that is equivalent to a voting problem
– Then use impossibility results from social choice theory to prove that there must be a dictator
R. Meir, A. D. Procaccia and J. S. Rosenschein, On the Power of Dictatorial Classification, in submission.

21 Classification as Voting
Only 2 errors vs. (m−2) errors

22 More positive results
Suppose all agents control the same data points, i.e. X_1 = X_2 = … = X_n
Theorem: selecting a dictator at random is SP and guarantees a 3-approximation
– True for any concept class C
– A 2-approximation when each S_i is separable
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification with Shared Inputs, in submission.
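The random-dictator mechanism itself is short: pick one agent uniformly and return the ERM on her reported labels alone. She gets her own optimum when chosen and has no influence otherwise, so no misreport can help her. The concept class and data below are our own toy choices:

```python
import random

# Random dictator over shared inputs (X_1 = ... = X_n), as on slide 22:
# pick an agent uniformly at random and return the ERM on her labels alone.
def random_dictator(concepts, points, reported_labels, rng=random):
    i = rng.randrange(len(reported_labels))        # the dictator
    sample = list(zip(points, reported_labels[i]))
    def errors(c):
        return sum(1 for x, y in sample if c(x) != y)
    return min(concepts, key=lambda n: errors(concepts[n]))

concepts = {'all positive': lambda x: '+',
            'all negative': lambda x: '-'}
points = [1, 2, 3]                      # shared by all agents
reports = [['+', '+', '-'], ['-', '-', '-']]
random.seed(1)
result = random_dictator(concepts, points, reports)
print(result)
```

The mechanism is SP because an agent's report only matters in the event that she is the dictator, and in that event truthful reporting already yields her best achievable classifier.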

23 Proof idea
The average pairwise distance between the green dots cannot be much higher than the average distance to the star.

24 Generalization
So far we only compared our results to the ERM, i.e. to the data at hand
We want learning algorithms that generalize well from sampled data
– With minimal strategic bias
– Can we ask for SP algorithms?

25 Generalization (cont.)
There is a fixed distribution D_X on X
Each agent holds a private function Y_i: X → {+,−}
– Possibly non-deterministic
The algorithm is allowed to sample from D_X and ask agents for their labels
We evaluate the result against the optimal risk, averaged over all agents

26 Generalization (cont.)
[Figure: the agents' labeling functions Y_1, Y_2, Y_3 over the distribution D_X]

27 Generalization Mechanisms
Our mechanism is used as follows:
1. Sample m data points i.i.d.
2. Ask agents for their labels
3. Use the SP mechanism on the labeled data, and return the result
Does it work?
– It depends on our game-theoretic and learning-theoretic assumptions
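The three steps above can be sketched as a pipeline. The distribution, labeling functions, and the stand-in SP mechanism (a dictator over the two constant classifiers) are our own toy choices:

```python
import random

# Sketch of slide 27's pipeline: sample from D_X, query agents' labels,
# then run an SP mechanism on the labeled sample.
def generalization_mechanism(sample_dx, agent_label_fns, sp_mechanism, m, rng):
    xs = [sample_dx(rng) for _ in range(m)]                     # 1. sample m points i.i.d.
    reports = [[fn(x) for x in xs] for fn in agent_label_fns]   # 2. query labels
    return sp_mechanism(xs, reports, rng)                       # 3. run SP mechanism

def dictator_sp(xs, reports, rng):
    """Stand-in SP mechanism: a random dictator over the two constants."""
    i = rng.randrange(len(reports))
    pos = reports[i].count('+')
    return 'all positive' if pos * 2 > len(xs) else 'all negative'

rng = random.Random(0)
agents = [lambda x: '+' if x > 0.3 else '-',   # toy deterministic Y_i
          lambda x: '+' if x > 0.7 else '-']
result = generalization_mechanism(lambda r: r.random(), agents,
                                  dictator_sp, 100, rng)
print(result)
```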

28 The "truthful approach"
Assumption A: agents do not lie unless they gain at least ε
Theorem: with high probability the following holds:
– There is no ε-beneficial lie
– The approximation ratio (if no one lies) is close to 3
Corollary: with enough samples, the expected approximation ratio is close to 3
The number of required samples is polynomial in n and 1/ε
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification with Shared Inputs, in submission.

29 The "rational approach"
Assumption B: agents always pick a dominant strategy, if one exists
Theorem: with enough samples, the expected approximation ratio is close to 3
The number of required samples is polynomial in 1/ε (and does not depend on n)
R. Meir, A. D. Procaccia and J. S. Rosenschein, Incentive Compatible Classification with Shared Inputs, in submission.

30 Previous and future work
A study of SP mechanisms in regression learning [1]
No SP mechanisms exist for clustering [2]
Future directions:
– Other concept classes
– Other loss functions
– Alternative assumptions on the structure of the data
[1] O. Dekel, F. Fischer and A. D. Procaccia, Incentive Compatible Regression Learning, SODA 2008
[2] J. Perote-Peña and J. Perote, The Impossibility of Strategy-Proof Clustering, Economics Bulletin, 2003


