1 Learning with General Similarity Functions Maria-Florina Balcan

2 2-Minute Version. Generic classification problem; problem: the pixel representation is not so good. Powerful technique: use a kernel, a special kind of similarity function K(·,·). But the standard theory is stated in terms of implicit mappings. Our work [Balcan-Blum, ICML 2006; Balcan-Blum-Srebro, MLJ 2008; Balcan-Blum-Srebro, COLT 2008]: develop a theory that views K directly as a measure of similarity, giving general sufficient conditions for K to be useful for learning.

3 Kernel Methods. What is a kernel? A kernel K is a legal definition of a dot product: i.e., there exists an implicit mapping φ such that K(x,y) = φ(x)·φ(y). E.g., K(x,y) = (x·y + 1)^d: φ maps the n-dimensional space into an n^d-dimensional space. Why do kernels matter? Many algorithms interact with data only via dot products, so if we replace x·y with K(x,y), they act implicitly as if the data were in the higher-dimensional φ-space. Kernel methods are a prominent method for supervised classification today: the learning algorithm interacts with the data via a similarity function.
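To make "many algorithms interact with data only via dot products" concrete, here is a minimal sketch (my own illustration, not code from the talk) of a kernel perceptron: the algorithm is written entirely in terms of K(x,y), so swapping the plain dot product for the polynomial kernel implicitly moves it into the higher-dimensional φ-space.

import numpy as np

def poly_kernel(x, y, d=2):
    # (x.y + 1)^d -- a legal kernel: a dot product in an implicit ~n^d-dimensional space.
    return (np.dot(x, y) + 1) ** d

def kernel_perceptron(X, labels, kernel=poly_kernel, epochs=10):
    # A perceptron written purely in terms of kernel evaluations: changing the kernel
    # changes the implicit phi-space without ever computing phi explicitly.
    n = len(X)
    alpha = np.zeros(n)                        # dual (mistake-count) weights
    for _ in range(epochs):
        for i in range(n):
            score = sum(alpha[j] * labels[j] * kernel(X[j], X[i]) for j in range(n))
            if labels[i] * score <= 0:         # mistake -> remember this example
                alpha[i] += 1
    # The learned classifier also touches the data only through the kernel.
    return lambda x: np.sign(sum(alpha[j] * labels[j] * kernel(X[j], x) for j in range(n)))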

4 Example. E.g., for n=2, d=2, the kernel K(x,y) = (x·y)^d corresponds to mapping the original space (coordinates x1, x2) into the φ-space (coordinates z1, z2, z3). [Figure: O/X data that is not linearly separable in (x1, x2) becomes linearly separable in (z1, z2, z3).]
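As a concrete check of this n=2, d=2 example: K(x,y) = (x·y)² equals a dot product after the explicit map φ(x) = (x1², √2·x1·x2, x2²). A small sketch (the map φ is the standard one for this kernel, stated here as an assumption since the slide's figure only shows the z-coordinates):

import numpy as np

def phi(x):
    # Explicit feature map whose dot product reproduces K(x, y) = (x . y)^2 for n = 2:
    # (z1, z2, z3) = (x1^2, sqrt(2) * x1 * x2, x2^2).
    return np.array([x[0] ** 2, np.sqrt(2) * x[0] * x[1], x[1] ** 2])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(np.dot(x, y) ** 2, np.dot(phi(x), phi(y)))  # both print 1.0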

5 Generalize Well if Good Margin. If the data is linearly separable with margin γ in the φ-space (with |φ(x)| ≤ 1), then the sample complexity is good: a sample of size only Õ(1/γ²) suffices for confident generalization. [Figure: a linear separator with margin γ between + and - points.]

6 Kernel Methods. A prominent method for supervised classification today: a significant percentage of ICML, NIPS, and COLT papers, and very useful in practice for dealing with many different types of data.

7 Limitations of the Current Theory. Existing theory: stated in terms of margins in implicit spaces, which is difficult to think about and not great for intuition. In practice: kernels are constructed by viewing them as measures of similarity. Moreover, the kernel requirement rules out many natural similarity functions. Can we give a better theoretical explanation?

8 Better Theoretical Framework. Can we give a better theoretical explanation? Yes: we provide a more general and intuitive theory that formalizes the intuition that a good kernel is a good measure of similarity, addressing the limitations above (implicit-space margins, kernel requirement ruling out natural similarity functions). [Balcan-Blum, ICML 2006; Balcan-Blum-Srebro, MLJ 2008; Balcan-Blum-Srebro, COLT 2008]

9 More General Similarity Functions. We provide a notion of a good similarity function that: 1) is simpler, in terms of natural direct quantities (no implicit high-dimensional spaces, no requirement that K(x,y) = φ(x)·φ(y)); 2) is broad: it includes the usual notion of a good kernel (large-margin separator in φ-space) as a sufficient condition for K to be usable for learning; 3) allows one to learn classes that have no good kernels. [Diagram relating good kernels, the first attempt, and the main notion.]

10 A First Attempt. Let P be a distribution over labeled examples (x, ℓ(x)); the goal is to output a classification rule that is good for P. Definition: K is (ε, γ)-good for P if a 1-ε probability mass of x satisfy E_{y~P}[K(x,y) | ℓ(y)=ℓ(x)] ≥ E_{y~P}[K(x,y) | ℓ(y)≠ℓ(x)] + γ, i.e., the average similarity to points of the same label exceeds the average similarity to points of the opposite label by a gap γ. In words: K is good if most x are on average more similar to points y of their own type than to points y of the other type.
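A minimal sketch (my own, not from the talk) of checking this definition on a finite labeled sample standing in for P: for each x, compare its average similarity to same-label points against opposite-label points, and report the fraction that clear the gap γ.

import numpy as np

def fraction_gamma_good(X, labels, K, gamma):
    # Empirical version of the "first attempt" definition: the fraction of points x
    # whose average within-label similarity beats the cross-label average by >= gamma.
    good = 0
    for i, x in enumerate(X):
        same = [K(x, y) for j, y in enumerate(X) if j != i and labels[j] == labels[i]]
        diff = [K(x, y) for j, y in enumerate(X) if labels[j] != labels[i]]
        if np.mean(same) >= np.mean(diff) + gamma:
            good += 1
    return good / len(X)   # K is (eps, gamma)-good on this sample if this is >= 1 - eps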

11 A First Attempt (example). Recall: K is (ε, γ)-good for P if a 1-ε probability mass of x satisfy E_{y~P}[K(x,y) | ℓ(y)=ℓ(x)] ≥ E_{y~P}[K(x,y) | ℓ(y)≠ℓ(x)] + γ. Example: if K(x,y) ≥ 0.2 whenever ℓ(x) = ℓ(y), and K(x,y) is random in {-1, 1} whenever ℓ(x) ≠ ℓ(y), then the average similarity to same-label points is at least 0.2 while the average similarity to opposite-label points is 0, so the gap is γ = 0.2. [Figure: sample points with pairwise similarity values 1, 0.5, 0.4, 0.3.]

12 A First Attempt (algorithm). Algorithm: draw sets S+ and S- of positive and negative examples; classify x as positive iff its average similarity to S+ exceeds its average similarity to S-. (Recall: K is (ε, γ)-good for P if a 1-ε probability mass of x satisfy E_{y~P}[K(x,y) | ℓ(y)=ℓ(x)] ≥ E_{y~P}[K(x,y) | ℓ(y)≠ℓ(x)] + γ.) [Figure: a new point x compared to the samples S+ and S-.]
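The slide's rule as a short sketch (K, S_plus, and S_minus are placeholders for whatever similarity function and sampled sets one has):

import numpy as np

def classify(x, S_plus, S_minus, K):
    # Label x by whichever sample it is more similar to on average.
    avg_plus = np.mean([K(x, y) for y in S_plus])
    avg_minus = np.mean([K(x, y) for y in S_minus])
    return 1 if avg_plus >= avg_minus else -1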

13 A First Attempt (guarantee). Theorem: if K is (ε, γ)-good and |S+| and |S-| are Ω((1/γ²) ln(1/ε')), then with probability ≥ 1-δ the rule above has error ≤ ε + ε'. Proof sketch: for any fixed good x, the probability (over the draw of S+, S-) that x is misclassified is small [Hoeffding]; there is then at most a δ chance that the error rate over the good x's exceeds ε', and since at most an ε fraction of x are not good, the overall error rate is at most ε + ε'.
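A worked instance of the sample-size condition (the slide omits constants, so the constant factor of 1 below is an assumption; treat the output as order-of-magnitude only):

import math

def per_class_sample_size(gamma, eps_prime):
    # |S+|, |S-| on the order of (1/gamma^2) * ln(1/eps'); constant factor assumed to be 1.
    return math.ceil((1.0 / gamma ** 2) * math.log(1.0 / eps_prime))

print(per_class_sample_size(0.2, 0.01))  # gamma = 0.2 (as in slide 11's example) -> 116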

14 A First Attempt: Not Broad Enough. Consider a distribution that has a large-margin separator (positives and negatives arranged with a 30° angle, as in the figure), yet the similarity function K(x,y) = x·y does not satisfy our definition: a typical positive point is more similar to the negatives than to a typical positive — its average similarity to same-label points is ½·1 + ½·(-½) = ¼, versus ½ to the opposite-label points. [Figure: + and - clusters at 30°.]

15 A First Attempt: Not Broad Enough (fix). Broaden the definition: require that there exists a non-negligible set R of points such that most x are on average more similar to the y ∈ R of the same label than to the y ∈ R of the other label — even if we do not know R in advance. [Figure: the same 30° example with a highlighted region R.]

16 Broader Definition. K is (ε, γ, τ)-good if there exists a set R of "reasonable" points y (possibly probabilistic) such that (1) at least a τ probability mass of both positives and negatives is reasonable, and (2) a 1-ε fraction of x satisfy E_{y~P}[K(x,y) | ℓ(y)=ℓ(x), R(y)] ≥ E_{y~P}[K(x,y) | ℓ(y)≠ℓ(x), R(y)] + γ. Property: draw a set S = {y_1, ..., y_d} of landmarks and re-represent the data via the map F(x) = [K(x,y_1), ..., K(x,y_d)] ∈ R^d. If there are enough landmarks (d = Õ(1/(γ²τ))), then with high probability F(P) has a good large-L1-margin linear separator, e.g. w = [0, 0, 1/n+, 1/n+, 0, 0, 0, -1/n-, 0, 0]: weight 1/n+ on the reasonable positive landmarks and -1/n- on the reasonable negative landmarks.
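A sketch of the re-representation step F(x) = [K(x,y_1), ..., K(x,y_d)] (how the landmark pool is obtained is left open here; the slide only requires that landmarks be drawn from P):

import numpy as np

def landmark_features(X, landmarks, K):
    # F(x) = [K(x, y_1), ..., K(x, y_d)]: one coordinate per landmark.
    return np.array([[K(x, y) for y in landmarks] for x in X])

# Usage sketch: pick d landmarks from an (unlabeled) sample, then map everything.
# landmarks = X_unlabeled[np.random.default_rng(0).choice(len(X_unlabeled), size=d, replace=False)]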

17 Broader Definition (algorithm). Given K that is (ε, γ, τ)-good as on the previous slide: draw a set S = {y_1, ..., y_d} of d_u = Õ(1/(γ²τ)) landmarks (unlabeled examples suffice) and re-represent the data via F(x) = [K(x,y_1), ..., K(x,y_d)] ∈ R^d; then take a fresh set of d_l = O((1/(γ²ε_acc)) ln d_u) labeled examples, project them to this space, and run a good L1 linear separator algorithm. [Figure: the O/X data becomes linearly separable after the map F.]
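Putting slides 16-17 together as a runnable sketch. The slides do not name a specific "good L1 linear separator algorithm"; L1-regularized logistic regression from scikit-learn is used here as a stand-in, and the data and similarity function are synthetic placeholders.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
K = lambda a, b: float(np.dot(a, b))          # any similarity function (need not be PSD)

# Placeholder data: two blobs in 5 dimensions.
X = np.vstack([rng.normal(+1.0, 1.0, (200, 5)), rng.normal(-1.0, 1.0, (200, 5))])
y = np.array([1] * 200 + [-1] * 200)

# Step 1: draw d_u landmarks (unlabeled points suffice) and re-represent the data.
d_u = 50
landmarks = X[rng.choice(len(X), size=d_u, replace=False)]
F = np.array([[K(x, l) for l in landmarks] for x in X])

# Step 2: run an L1 (sparse) linear separator in the landmark space.
clf = LogisticRegression(penalty="l1", solver="liblinear").fit(F, y)
print("training accuracy:", clf.score(F, y))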

18 Kernels versus Similarity Functions. Main technical contribution (relating good kernels to good similarities): Theorem: if K is a good kernel, i.e., has margin γ in the implicit φ-space, then K is also a good similarity function in our sense — but the margin gets squared (γ becomes γ²). [Diagram: good kernels contained in good similarities.]

19 Kernels versus Similarity Functions (strict separation). We can also show a strict separation: good similarity functions are strictly more general than good kernels. Theorem: for any class C of n pairwise uncorrelated functions, there exists a similarity function good for all f in C, but no such good kernel function exists. (As before, a good kernel is also a good similarity function, with γ squared.)

20 Kernels versus Similarity Functions (strict separation, details). Theorem: for any class C of n pairwise uncorrelated functions, there exists a similarity function good for all f in C, but no such good kernel function exists. In principle, one should be able to learn any f in C from O(ε⁻¹ log(|C|/δ)) labeled examples. Claim 1: one can define a generic (0, 1, 1/|C|)-good similarity function achieving this bound (assuming D is not too concentrated). Claim 2: there is no (ε, γ)-good kernel in hinge loss, even for ε = 1/2 and γ = |C|^{-1/2}; so the margin-based sample complexity is d = Ω(1/γ²) = Ω(|C|).

21 Learning with Multiple Similarity Functions. Let K_1, ..., K_r be similarity functions such that some (unknown) convex combination of them is (ε, γ)-good. Algorithm: draw a set S = {y_1, ..., y_d} of landmarks and concatenate the features: F(x) = [K_1(x,y_1), ..., K_r(x,y_1), ..., K_1(x,y_d), ..., K_r(x,y_d)]. Guarantee: with high probability the induced distribution F(P) in R^{2dr} has a separator of error ≤ ε + δ at L1 margin at least [bound not transcribed on the slide]. The sample complexity only increases by a log(r) factor!
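A sketch of the concatenated re-representation for multiple similarity functions (function and variable names are mine):

import numpy as np

def multi_landmark_features(X, landmarks, similarities):
    # One coordinate per (landmark, similarity function) pair:
    # F(x) = [K_1(x, y_1), ..., K_r(x, y_1), ..., K_1(x, y_d), ..., K_r(x, y_d)].
    return np.array([[K(x, y) for y in landmarks for K in similarities] for x in X])

# e.g. similarities = [np.dot, lambda a, b: float(np.exp(-np.sum((a - b) ** 2)))]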

22 Conclusions. A theory of learning with similarity functions that provides a formal way of understanding good kernels as good similarity functions. Our algorithms work for similarity functions that aren't necessarily PSD (or even symmetric). Algorithmic implication: non-PSD similarities can be used directly, with no need to "transform" them into PSD functions and plug them into an SVM (e.g., as done by Liao and Noble, Journal of Computational Biology).

23 Conclusions and Open Questions. A theory of learning with similarity functions that provides a formal way of understanding good kernels as good similarity functions; the algorithms work for similarity functions that aren't necessarily PSD (or even symmetric). Open question: analyze other notions of good similarity functions. [Diagram: good kernels as a special case of our work.]


25 Similarity Functions for Classification. Algorithmic implications: non-PSD similarities can be used directly, with no need to "transform" them into PSD functions and plug them into an SVM (e.g., Liao and Noble, Journal of Computational Biology). This gives justification to the following rule: [rule shown on the slide, not transcribed]. We also show that anything learnable with an SVM is learnable this way!

