
1 A Theory of Learning and Clustering via Similarity Functions Maria-Florina Balcan 09/17/2007 Joint work with Avrim Blum and Santosh Vempala Carnegie Mellon University

2 2-Minute Version Generic classification problem: learn to distinguish men from women. Problem: the pixel representation is not so good. Powerful technique: use a kernel, a special kind of similarity function K(·,·). Kernels come with a nice SLT theory, but that theory is stated in terms of implicit mappings. Can we develop a theory that views K directly as a measure of similarity? What are general sufficient conditions for K to be useful for learning?

3 2-Minute Version Generic classification problem: learn to distinguish men from women. Problem: the pixel representation is not so good. Powerful technique: use a kernel, a special kind of similarity function K(·,·). What if we don't have any labeled data (i.e., clustering)? Can we develop a theory of conditions sufficient for K to be useful now?

4 Part I: On Similarity Functions for Classification

5 Kernel Functions and Learning E.g., given images labeled by gender, learn a rule to distinguish men from women. [Goal: do well on new data.] Problem: our best algorithms learn linear separators, which are not well suited to data in its natural representation. Old approach: learn a more complex class of functions. New approach: use a kernel.

6 Kernels, Kernelizable Algorithms K is a kernel if there exists an implicit mapping φ such that K(x,y) = φ(x)·φ(y). Point: many algorithms interact with data only via dot products. If we replace x·y with K(x,y), the algorithm acts implicitly as if the data were in the higher-dimensional φ-space. If the data is linearly separable by a large margin in φ-space, we don't have to pay in terms of sample complexity or computation time: if the margin is γ in φ-space, only about 1/γ² examples are needed to learn well. [Figure: a large-margin linear separator w in φ-space.]
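
To make the implicit mapping concrete, here is a minimal Python sketch (my own illustration, not from the talk): the quadratic kernel K(x,y) = (x·y)² on 2-dimensional inputs equals φ(x)·φ(y) for the explicit mapping φ(x) = (x1², x2², √2·x1·x2).

import numpy as np

def K(x, y):
    # quadratic kernel: the implicit dot product in a 3-dimensional phi-space
    return np.dot(x, y) ** 2

def phi(x):
    # the explicit feature map corresponding to K for 2-dimensional x
    return np.array([x[0] ** 2, x[1] ** 2, np.sqrt(2) * x[0] * x[1]])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose(K(x, y), np.dot(phi(x), phi(y)))  # both equal (x . y)^2 = 1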

7 Kernels and Similarity Functions Kernels: useful for many kinds of data, elegant SLT. Our work: analyze more general similarity functions. We want a characterization of good similarity functions that: 1) is stated in terms of natural, direct properties (no implicit high-dimensional spaces, no requirement of positive semi-definiteness); 2) guarantees that if K satisfies these properties, it can be used for learning; 3) is broad: it includes the usual notion of a "good kernel" (one with a large-margin separator in φ-space).

8 A First Attempt: Definition Satisfying (1) and (2) Let P be the distribution over labeled examples (x, l(x)). K: (x,y) → [-1,1] is an (ε,γ)-good similarity for P if at least a 1-ε probability mass of x satisfy: E_{y~P}[K(x,y) | l(y)=l(x)] ≥ E_{y~P}[K(x,y) | l(y)≠l(x)] + γ. Note: such a K might not be a legal kernel. E.g., K(x,y) ≥ 0.2 whenever l(x) = l(y) and K(x,y) uniformly random in [-1,1] whenever l(x) ≠ l(y).

9 A First Attempt: Definition Satisfying (1) and (2). How to use it? K: (x,y) → [-1,1] is an (ε,γ)-good similarity for P if at least a 1-ε probability mass of x satisfy: E_{y~P}[K(x,y) | l(y)=l(x)] ≥ E_{y~P}[K(x,y) | l(y)≠l(x)] + γ. Algorithm: Draw a set S+ of O((1/γ²) ln(1/δ²)) positive examples and a set S- of O((1/γ²) ln(1/δ²)) negative examples. Classify x based on which set gives the higher average similarity score. Guarantee: with probability ≥ 1-δ, error ≤ ε + δ.
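
A minimal Python sketch of this classification rule (my own illustration; K, S_plus, and S_minus are assumed to be given):

import numpy as np

def similarity_classifier(K, S_plus, S_minus):
    # Label x by comparing its average similarity to the drawn positives
    # against its average similarity to the drawn negatives.
    def predict(x):
        score_plus = np.mean([K(x, y) for y in S_plus])
        score_minus = np.mean([K(x, y) for y in S_minus])
        return +1 if score_plus >= score_minus else -1
    return predict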

10 A First Attempt: Definition Satisfying (1) and (2). How to use it? K: (x,y) → [-1,1] is an (ε,γ)-good similarity for P if at least a 1-ε probability mass of x satisfy: E_{y~P}[K(x,y) | l(y)=l(x)] ≥ E_{y~P}[K(x,y) | l(y)≠l(x)] + γ. Hoeffding: for any given "good" x, the probability of error on x (over the draw of S+, S-) is ≤ δ². So by Markov's inequality there is at most a δ chance that the error rate over the good points is ≥ δ, and adding the ≤ ε mass of bad points gives overall error ≤ ε + δ. Guarantee: with probability ≥ 1-δ, error ≤ ε + δ.
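
A worked version of the Hoeffding step (a sketch with illustrative constants; K takes values in [-1,1], so each of the d averaged similarities has range 2):

% For a good x, the empirical average over S^+ concentrates around its
% conditional expectation, which exceeds the one for S^- by at least \gamma.
\[
  \Pr\Big[\tfrac{1}{d}\sum_{y \in S^+} K(x,y)
        \;\le\; \mathbb{E}_{y}\big[K(x,y)\,\big|\,\ell(y)=\ell(x)\big] - \tfrac{\gamma}{2}\Big]
  \;\le\; e^{-d\gamma^{2}/8},
\]
% and symmetrically for S^-. With d = O((1/\gamma^2)\ln(1/\delta^2)), both tails
% are at most \delta^2/2, so a good x is misclassified (over the draw of
% S^+, S^-) with probability at most \delta^2.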

11 A First Attempt: Not Broad Enough K: (x,y) → [-1,1] is an (ε,γ)-good similarity for P if at least a 1-ε probability mass of x satisfy: E_{y~P}[K(x,y) | l(y)=l(x)] ≥ E_{y~P}[K(x,y) | l(y)≠l(x)] + γ. Problem: K(x,y) = x·y can have a large-margin separator and yet not satisfy our definition. [Figure: positive and negative examples arranged so that points are more similar to the + examples than to a typical - example.]

12 A First Attempt: Not Broad Enough K: (x,y) → [-1,1] is an (ε,γ)-good similarity for P if at least a 1-ε probability mass of x satisfy: E_{y~P}[K(x,y) | l(y)=l(x)] ≥ E_{y~P}[K(x,y) | l(y)≠l(x)] + γ. Broaden: it is OK if there exists a non-negligible set R such that most x are on average more similar to the y ∈ R of the same label than to the y ∈ R of the other label. [Figure: the same example with a representative set R marked.]

13 Broader/Main Definition K: (x,y) → [-1,1] is an (ε,γ)-good similarity for P if there exists a weighting function w(y) ∈ [0,1] such that at least a 1-ε probability mass of x satisfy: E_{y~P}[w(y)K(x,y) | l(y)=l(x)] ≥ E_{y~P}[w(y)K(x,y) | l(y)≠l(x)] + γ. Algorithm: Draw S+ = {y_1, ..., y_d} and S- = {z_1, ..., z_d}, with d = O((1/γ²) ln(1/δ²)). "Triangulate" the data: F(x) = [K(x,y_1), ..., K(x,y_d), K(x,z_1), ..., K(x,z_d)]. Take a new set of labeled examples, project them into this space, and run any algorithm for learning linear separators (a code sketch follows below). Theorem: with probability ≥ 1-δ, there exists a linear separator of error ≤ ε + δ at margin γ/4.
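
A minimal sketch of the triangulation step in Python (my own illustration; using scikit-learn's LinearSVC as the linear-separator learner is just one possible choice, and the data names in the usage comments are hypothetical):

import numpy as np
from sklearn.svm import LinearSVC

def triangulate(K, S_plus, S_minus, X):
    # Map each example x to its vector of similarities to the drawn positive
    # and negative landmark points: F(x) = [K(x,y_1), ..., K(x,z_d)].
    landmarks = list(S_plus) + list(S_minus)
    return np.array([[K(x, y) for y in landmarks] for x in X])

# Hypothetical usage, assuming fresh labeled data X_train, y_train, X_test:
# F_train = triangulate(K, S_plus, S_minus, X_train)
# clf = LinearSVC().fit(F_train, y_train)
# predictions = clf.predict(triangulate(K, S_plus, S_minus, X_test))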

14 Main Definition & Algorithm, Implications S+ = {y_1, ..., y_d}, S- = {z_1, ..., z_d}, d = O((1/γ²) ln(1/δ²)). "Triangulate" the data: F(x) = [K(x,y_1), ..., K(x,y_d), K(x,z_1), ..., K(x,z_d)]. Theorem: with probability ≥ 1-δ, there exists a linear separator of error ≤ ε + δ at margin γ/4. [Diagram: relating legal kernels, arbitrary similarity functions, (ε,γ)-good similarity functions, and good kernel functions.] Theorem: any (ε,γ)-good kernel is an (ε',γ')-good similarity function, with some penalty: ε' = ε + ε_extra, γ' = γ²·ε_extra.

15 Similarity Functions for Classification, Summary Formal way of understanding kernels as similarity functions. Algorithms and guarantees for general similarity functions that aren’t necessarily PSD.

16 Part II: Can we use this angle to help think about Clustering?

17 What if only unlabeled examples are available? S is a set of n objects (e.g., documents or images). There is some (unknown) "ground truth" clustering (e.g., by topic: [sports], [fashion]); each object has a true label l(x) in {1, ..., t}. Problem: we only have unlabeled data, but we do have a similarity function! Goal: output a hypothesis h of low error up to isomorphism of the label names: Err(h) = min_σ Pr_{x~S}[σ(h(x)) ≠ l(x)], where σ ranges over permutations of the label names.
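
A small Python sketch (my own illustration) of this error measure on a finite sample, brute-forcing the minimum over label permutations:

from itertools import permutations
import numpy as np

def clustering_error(h_labels, true_labels, t):
    # Err(h) = min over permutations sigma of Pr_{x in S}[sigma(h(x)) != l(x)],
    # with labels encoded as {0, ..., t-1}.
    h_labels = np.asarray(h_labels)
    true_labels = np.asarray(true_labels)
    best = 1.0
    for sigma in permutations(range(t)):        # t! candidates; fine for small t
        relabeled = np.array([sigma[c] for c in h_labels])
        best = min(best, float(np.mean(relabeled != true_labels)))
    return best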

18 Same setup: S is a set of n objects with an unknown "ground truth" clustering, each object has a true label l(x) in {1, ..., t}, we only have unlabeled data plus a similarity function, and Err(h) = min_σ Pr_{x~S}[σ(h(x)) ≠ l(x)]. Central question: what conditions on a similarity function would be enough to allow one to cluster well?

19 Contrast with the "Standard" Approach Traditional approach: the input is a graph or an embedding of points into R^d, and one analyzes algorithms that optimize various criteria, asks which criterion produces "better-looking" results, or works closer to learning mixtures of Gaussians. We flip this perspective around: our view is discriminative, not generative. This is more natural, since the input graph/similarity is merely based on some heuristic.

20 What conditions on a similarity function would be enough to allow one to cluster well? A condition that trivially works: K(x,y) > 0 for all x,y with l(x) = l(y), and K(x,y) < 0 for all x,y with l(x) ≠ l(y).

21 What conditions on a similarity function would be enough to allow one to cluster well? Strict Ordering Property: K is such that all x are more similar to points y in their own cluster than to any y' in other clusters. Still too strong: the same K can satisfy this property for two very different clusterings of the same data! [Figure: the same objects clustered as {sports, fashion} and, alternatively, as {soccer, tennis, Lacoste, Coco Chanel}.] And unlike learning, you can't even test your hypotheses!

22–27 Relax Our Goals 1. Produce a hierarchical clustering such that the correct answer is approximately some pruning of it. [Figure, animated over these slides: a hierarchy over {soccer, tennis, Lacoste, Coco Chanel}, grouped under {sports} and {fashion} beneath {All topics}, with example prunings highlighted.]

28 Relax Our Goals 1. Produce a hierarchical clustering such that the correct answer is approximately some pruning of it. 2. Produce a list of clusterings such that at least one has low error. Trade off the strength of the assumption against the size of the list. [Figure: one pruning of the topic hierarchy.]

29 Start Getting Nice Algorithms/Properties Strict Ordering Property: K is such that all x are more similar to points y in their own cluster than to any y' in other clusters. Sufficient for hierarchical clustering. Weak Stability Property: for all clusters C, C' and for all A in C, A' in C', at least one of A, A' is more attracted to its own cluster than to the other. Also sufficient for hierarchical clustering.

30 Example Analysis for the Strong Stability Property Strong stability: K is such that for all clusters C, C' and all A in C, A' in C', K(A, C-A) > K(A, A'), where K(A, A') denotes the average attraction (average similarity) between A and A'. Algorithm: average single-linkage, i.e., repeatedly merge the two current "parts" whose average similarity is highest. Analysis: all "parts" produced are laminar with respect to the target clustering. A failure would mean merging some P_1, P_2 with P_1 ⊂ C and P_2 ∩ C = ∅. But then there must exist P_3 ⊂ C with K(P_1, P_3) ≥ K(P_1, C-P_1), and by stability K(P_1, C-P_1) > K(P_1, P_2), so the algorithm would have preferred merging P_1 with P_3. Contradiction.
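
A minimal Python sketch (my own illustration) of the average single-linkage procedure: keep merging the pair of current parts with the highest average pairwise similarity, recording the merge tree whose prunings are the candidate clusterings.

import numpy as np

def average_linkage_tree(S, K):
    # Bottom-up clustering: repeatedly merge the two parts with the highest
    # average similarity K(A, A').  Returns the sequence of merges, which
    # defines the hierarchy.
    parts = [[x] for x in S]                       # start from singletons
    merges = []

    def avg_sim(A, B):
        return np.mean([K(a, b) for a in A for b in B])

    while len(parts) > 1:
        i, j = max(((i, j) for i in range(len(parts))
                           for j in range(i + 1, len(parts))),
                   key=lambda ij: avg_sim(parts[ij[0]], parts[ij[1]]))
        merges.append((parts[i], parts[j]))
        merged = parts[i] + parts[j]
        parts = [p for k, p in enumerate(parts) if k not in (i, j)] + [merged]
    return merges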

31 Strong Stability Property, Inductive Setting Assume for all C, C' and all A ⊂ C, A' ⊆ C': K(A, C-A) > K(A, A') + γ. Inductive setting: draw a sample S, hierarchically partition S, then insert new points as they arrive. Need to argue that sampling preserves stability: a sample-complexity-type argument using regularity-type results of [AFKK].

32 Weaker Conditions Average Attraction Property: E_{x' ∈ C(x)}[K(x,x')] > E_{x' ∈ C'}[K(x,x')] + γ for all C' ≠ C(x). Not sufficient for a hierarchy, but it does allow producing a small list of clusterings: upper bound t^{O(t/γ²)} (which doesn't depend on n), lower bound roughly t^{Ω(1/γ)}. Stability of Large Subsets Property: the stability requirement is imposed only on sufficiently large subsets A, A'. It might cause bottom-up algorithms to fail, but one can find the hierarchy using a learning-based algorithm. Sufficient for a hierarchy (running time t^{O(t/γ²)}).
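
One natural way to produce such a list, sketched in Python under the assumption that the average attraction property holds (my own illustration; the sampling strategy and parameter names are hypothetical): guess a labeling of a small random sample and extend each guess to all of S by average attraction, which yields at most t**sample_size candidate clusterings.

import numpy as np
from itertools import product

def candidate_clusterings(S, K, sample_size, t, seed=0):
    # Enumerate all labelings of a small random sample and extend each one to
    # the whole data set by assigning every point to the sampled group it is
    # most attracted to on average.
    rng = np.random.default_rng(seed)
    sample = rng.choice(len(S), size=sample_size, replace=False)
    clusterings = []
    for labeling in product(range(t), repeat=sample_size):   # t**sample_size guesses
        groups = [[S[i] for i, lab in zip(sample, labeling) if lab == c]
                  for c in range(t)]
        assignment = []
        for x in S:
            scores = [np.mean([K(x, y) for y in g]) if g else -np.inf for g in groups]
            assignment.append(int(np.argmax(scores)))
        clusterings.append(assignment)
    return clusterings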

33 Similarity Functions for Clustering, Summary Minimal conditions on K to be useful for clustering: list clustering and hierarchical clustering. A discriminative/SLT-style model for clustering with non-interactive feedback. Our notion of a property is the analogue of a data-dependent concept class in classification.
