Presentation transcript: "A Geometric Perspective on Machine Learning" by 何晓飞 (Xiaofei He), College of Computer Science, Zhejiang University.

1 A Geometric Perspective on Machine Learning. 何晓飞 (Xiaofei He), College of Computer Science, Zhejiang University.

2 Machine Learning: the problem. Given information (training data), learn a function f: X → Y. X and Y are usually considered to be Euclidean spaces.

3 Manifold Learning: geometric perspective. The data space may not be a Euclidean space but a nonlinear manifold. Instead of Euclidean distance, a function f defined on Euclidean space, and the ambient dimension, we should use geodesic distance, a function f defined on the nonlinear manifold, and the manifold dimension.
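
Note: a minimal sketch of the geometric idea on this slide, assuming a k-nearest-neighbor graph as a discrete stand-in for the manifold (the dataset, neighborhood size, and library calls below are illustrative, not from the slides). Shortest-path distance along the graph approximates geodesic distance and can differ sharply from the ambient Euclidean distance.

```python
# Illustrative sketch: approximating geodesic distance on a sampled manifold
# by shortest-path distance in a k-nearest-neighbor graph.
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

# Sample points from a "Swiss roll"-like 2D manifold embedded in 3D.
rng = np.random.default_rng(0)
t = 3 * np.pi / 2 * (1 + 2 * rng.random(500))
X = np.column_stack([t * np.cos(t), 21 * rng.random(500), t * np.sin(t)])

# Connect each point to its k nearest neighbors, weighted by Euclidean edge length.
G = kneighbors_graph(X, n_neighbors=10, mode='distance')

# Shortest-path distances along the graph approximate geodesic distances on the manifold.
geodesic = shortest_path(G, method='D', directed=False)

# Compare straight-line (ambient) distance with the graph approximation for one pair.
i, j = 0, 250
print("Euclidean:", np.linalg.norm(X[i] - X[j]), " geodesic (approx.):", geodesic[i, j])
```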

4 Manifold Learning: the challenges. The manifold M is unknown; we have only samples. How do we know whether M is a sphere, a torus, or something else? How do we compute distances on M? What we have are sample points; the underlying manifold is unknown. These questions draw on topology, geometry, and functional analysis.

5 Manifold Learning: current solution. Find a Euclidean embedding, and then run traditional learning algorithms in the Euclidean space.
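
Note: an illustrative sketch of this two-step recipe (embed, then learn), using Isomap as one possible embedding and a k-NN classifier as the traditional learner; the slides do not prescribe these particular choices.

```python
# Illustrative sketch: find a Euclidean embedding of manifold-structured data,
# then run an ordinary learning algorithm in the embedding space.
import numpy as np
from sklearn.datasets import make_s_curve
from sklearn.manifold import Isomap
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

X, t = make_s_curve(n_samples=1000, random_state=0)   # 3D points on a 2D manifold
y = (t > t.mean()).astype(int)                        # a simple synthetic label

Z = Isomap(n_neighbors=10, n_components=2).fit_transform(X)  # Euclidean embedding

Z_tr, Z_te, y_tr, y_te = train_test_split(Z, y, random_state=0)
clf = KNeighborsClassifier().fit(Z_tr, y_tr)          # traditional learner in the embedding
print("accuracy in the embedded space:", clf.score(Z_te, y_te))
```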

6 Simplicity

7 (figure slide, no text)

8 Simplicity is relative

9 Manifold-based Dimensionality Reduction. Given high-dimensional data sampled from a low-dimensional manifold, how do we compute a faithful embedding? How do we find the mapping function? How do we efficiently find the projective function?

10 A Good Mapping Function. If x_i and x_j are close to each other, we hope f(x_i) and f(x_j) preserve the local structure (distance, similarity, ...). Ingredients: a k-nearest-neighbor graph and an objective function; different algorithms have different concerns. (A sketch of common choices follows.)
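
Note: the weight and objective formulas on this slide are images and not in the transcript. The sketch below uses one common choice, heat-kernel weights on a k-NN graph, and evaluates the locality-preserving objective sum_ij W_ij (y_i - y_j)^2 through the graph Laplacian.

```python
# Illustrative sketch: heat-kernel weights on a k-NN graph and the
# locality-preserving objective  sum_ij W_ij (y_i - y_j)^2 = 2 * y^T L y.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_heat_kernel_weights(X, k=5, t=1.0):
    n = X.shape[0]
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X)
    dist, idx = nbrs.kneighbors(X)                 # idx[:, 0] is the point itself
    W = np.zeros((n, n))
    for i in range(n):
        for d, j in zip(dist[i, 1:], idx[i, 1:]):
            w = np.exp(-d**2 / t)                  # heat-kernel weight
            W[i, j] = W[j, i] = max(W[i, j], w)    # symmetrize
    return W

def locality_objective(y, W):
    D = np.diag(W.sum(axis=1))
    L = D - W                                      # graph Laplacian
    return y @ L @ y                               # = 0.5 * sum_ij W_ij (y_i - y_j)^2

X = np.random.default_rng(0).normal(size=(100, 3))
W = knn_heat_kernel_weights(X, k=5, t=1.0)
print(locality_objective(X[:, 0], W))
```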

11 Locality Preserving Projections. Principle: if x_i and x_j are close, then their maps y_i and y_j are also close.

12 Locality Preserving Projections. Principle: if x_i and x_j are close, then their maps y_i and y_j are also close. Mathematical formulation: minimize the integral of the gradient of f.

13 Locality Preserving Projections. Principle: if x_i and x_j are close, then their maps y_i and y_j are also close. Mathematical formulation: minimize the integral of the gradient of f. Stokes' Theorem relates this integral to f applied to its Laplacian, which on the samples is approximated by the graph Laplacian.

14 Locality Preserving Projections. Principle: if x_i and x_j are close, then their maps y_i and y_j are also close. Mathematical formulation: minimize the integral of the gradient of f, which by Stokes' Theorem reduces to a graph Laplacian criterion. LPP finds a linear approximation to the nonlinear manifold while preserving the local geometric structure.
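
Note: a minimal sketch of how the LPP projection is typically computed, following the published formulation (minimize sum_ij W_ij (a^T x_i - a^T x_j)^2 subject to a normalization involving the degree matrix D); the exact constraint used on the slides is not shown in the transcript.

```python
# Illustrative sketch of LPP: find projection directions a that minimize
# sum_ij W_ij (a^T x_i - a^T x_j)^2 subject to a^T X^T D X a = 1, i.e. the
# generalized eigenvalue problem  X^T L X a = lambda X^T D X a  (smallest lambdas).
import numpy as np
from scipy.linalg import eigh

def lpp(X, W, n_components=2, reg=1e-6):
    # X: (n_samples, n_features) data matrix, W: (n, n) symmetric affinity matrix.
    D = np.diag(W.sum(axis=1))
    L = D - W                                     # graph Laplacian
    A = X.T @ L @ X                               # "numerator" matrix
    B = X.T @ D @ X + reg * np.eye(X.shape[1])    # "denominator" matrix (regularized)
    vals, vecs = eigh(A, B)                       # generalized symmetric eigenproblem
    return vecs[:, :n_components]                 # directions with smallest eigenvalues

# Usage with the heat-kernel weights from the previous sketch:
# P = lpp(X, W, n_components=2); Y = X @ P
```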

15 Manifold of Face Images (figure). Axes: Expression (Sad >>> Happy); Pose (Right >>> Left).

16 Manifold of Handwritten Digits (figure). Axes: Thickness; Slant.

17 Active and Semi-Supervised Learning: A Geometric Perspective. Learning target, training examples, and the linear regression model (formulas on slide).

18 Generalization Error. Goal of regression: obtain a learned function that minimizes the generalization error (the expected error on unseen test inputs). The parameters are fit by maximum likelihood estimation.
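
Note: an illustrative sketch, not from the slides: under Gaussian noise the maximum likelihood estimate of the linear model is the ordinary least-squares solution, and a held-out set gives a simple estimate of the generalization error.

```python
# Illustrative sketch: ML estimate of a linear regression model under Gaussian noise
# is the least-squares solution; held-out data estimates the generalization error.
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(200, 3))
y = X @ w_true + 0.1 * rng.normal(size=200)

X_tr, y_tr = X[:150], y[:150]
X_te, y_te = X[150:], y[150:]

w_hat, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)   # MLE = least squares
mse = np.mean((X_te @ w_hat - y_te) ** 2)             # estimate of generalization error
print("estimated w:", w_hat, " held-out MSE:", mse)
```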

19 Gauss-Markov Theorem. For a given x, the expected prediction error is given by the formula on the slide (not reproduced in the transcript).

20 Gauss-Markov Theorem. For a given x, the expected prediction error is given by the same formula; the slide annotates which of its terms are good (small) and which are bad (large).
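
Note: the slide's formula is not reproduced in the transcript. Under the standard Gauss-Markov and Gaussian-noise assumptions, the expected prediction error of the least-squares estimator at a point x is usually written as below; this is an assumed reconstruction of the standard form, not necessarily the exact expression on the slide.

```latex
% Assumed standard form (the slide's own formula is not in the transcript).
% Model: y = w^{\top}x + \epsilon, \quad \epsilon \sim \mathcal{N}(0, \sigma^2),
% \hat{w}: least-squares estimate from the design matrix X.
\mathbb{E}\big[(y - \hat{w}^{\top}x)^2\big]
  \;=\; \underbrace{\sigma^2}_{\text{noise}}
  \;+\; \underbrace{\big(\mathbb{E}[\hat{w}^{\top}x] - w^{\top}x\big)^2}_{\text{bias}^2 \,=\, 0 \text{ for OLS}}
  \;+\; \underbrace{\sigma^2\, x^{\top}(X^{\top}X)^{-1}x}_{\text{variance}}
```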

21 Experimental Design Methods. Three most common scalar measures of the size of the parameter (w) covariance matrix: A-optimal design: trace of Cov(w). D-optimal design: determinant of Cov(w). E-optimal design: maximum eigenvalue of Cov(w). Disadvantage: these methods fail to take unmeasured (unlabeled) data points into account.
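
Note: a small sketch of the three criteria applied to the least-squares parameter covariance Cov(w) = sigma^2 (X^T X)^{-1}; sigma^2 is set to 1 and the candidate design X is synthetic.

```python
# Illustrative sketch: the three classical optimality criteria applied to the
# least-squares parameter covariance, Cov(w) = sigma^2 (X^T X)^{-1} (sigma^2 = 1 here).
import numpy as np

def design_criteria(X):
    cov = np.linalg.inv(X.T @ X)        # parameter covariance (up to sigma^2)
    return {
        "A-optimal (trace)": np.trace(cov),
        "D-optimal (determinant)": np.linalg.det(cov),
        "E-optimal (max eigenvalue)": np.linalg.eigvalsh(cov).max(),
    }

X_design = np.random.default_rng(0).normal(size=(20, 3))  # a candidate set of measured points
print(design_criteria(X_design))
```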

22 Manifold Regularization: Semi-Supervised Setting. Measured (labeled) points: discriminant structure. Unmeasured (unlabeled) points: geometrical structure.

23 Manifold Regularization: Semi-Supervised Setting. Measured (labeled) points: discriminant structure. Unmeasured (unlabeled) points: geometrical structure. (Figure: random labeling.)

24 Manifold Regularization: Semi-Supervised Setting. Measured (labeled) points: discriminant structure. Unmeasured (unlabeled) points: geometrical structure. (Figures: random labeling; active learning; active learning + semi-supervised learning.)

25 Unlabeled Data to Estimate Geometry. Measured (labeled) points: discriminant structure.

26 Unlabeled Data to Estimate Geometry. Measured (labeled) points: discriminant structure. Unmeasured (unlabeled) points: geometrical structure.

27-31 Unlabeled Data to Estimate Geometry (animation build-up). Measured (labeled) points: discriminant structure; unmeasured (unlabeled) points: geometrical structure. Compute the nearest neighbor graph G.

32 Laplacian Regularized Least Squares (Belkin and Niyogi, 2006). Linear objective function and its closed-form solution (formulas on slide).
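
Note: the objective and solution on this slide are images. The sketch below implements one standard linear form of Laplacian Regularized Least Squares; the exact constants and normalizations vary between papers, so treat it as an assumed, representative version.

```python
# Illustrative sketch of a linear Laplacian Regularized Least Squares fit:
#   min_w  ||y_l - X_l w||^2 + lam1 ||w||^2 + lam2 w^T (X^T L X) w
# where X stacks labeled and unlabeled points and L is the graph Laplacian.
# Closed form: w = (X_l^T X_l + lam1 I + lam2 X^T L X)^{-1} X_l^T y_l
# (one standard form; constants and normalizations differ across presentations).
import numpy as np

def laprls_linear(X_labeled, y_labeled, X_all, L, lam1=1e-2, lam2=1e-1):
    d = X_all.shape[1]
    A = (X_labeled.T @ X_labeled
         + lam1 * np.eye(d)
         + lam2 * X_all.T @ L @ X_all)
    return np.linalg.solve(A, X_labeled.T @ y_labeled)

# Usage: build L from a k-NN graph over all (labeled + unlabeled) points,
# then w = laprls_linear(X_l, y_l, X_all, L); predictions are X_new @ w.
```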

33 Active Learning. How to find the most representative points on the manifold?

34 Active Learning. Objective: guide the selection of the subset of data points that gives the most information. Experimental design: select which samples to label. Manifold Regularized Experimental Design shares the same objective function as Laplacian Regularized Least Squares: simultaneously minimize the least-squares error on the measured samples and preserve the local geometrical structure of the data space.

35 Analysis of Bias and Variance. To make the estimator as stable as possible, the size of the covariance matrix should be as small as possible. D-optimality: minimize the determinant of the covariance matrix.

36 Manifold Regularized Experimental Design: the algorithm. Select the first data point so that the selection criterion (given on the slide) is maximized. Suppose k points have been selected; choose the (k+1)-th point that maximizes the same criterion, and update the associated matrices. The selected points are drawn from the pool of candidate data points. (A generic sketch of such a greedy loop follows.)
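
Note: the selection criterion on this slide is an image and not reproduced here. The sketch below shows a generic greedy sequential design loop with a common surrogate score (the direction of largest remaining predictive variance under a graph-regularized information matrix); the function and parameter names are illustrative.

```python
# Illustrative sketch of greedy sequential design on a graph-regularized information
# matrix. Score: pick the candidate x maximizing x^T M^{-1} x (largest remaining
# predictive variance), then update M += x x^T once the point is "measured".
import numpy as np

def greedy_design(X_candidates, L, n_select, lam1=1e-2, lam2=1e-1):
    d = X_candidates.shape[1]
    M = lam1 * np.eye(d) + lam2 * X_candidates.T @ L @ X_candidates  # prior information
    selected = []
    remaining = list(range(len(X_candidates)))
    for _ in range(n_select):
        M_inv = np.linalg.inv(M)
        scores = [X_candidates[i] @ M_inv @ X_candidates[i] for i in remaining]
        best = remaining[int(np.argmax(scores))]
        selected.append(best)
        remaining.remove(best)
        x = X_candidates[best]
        M = M + np.outer(x, x)               # update after selecting the point
    return selected                          # indices of points to label
```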

37 Nonlinear Generalization in RKHS. Consider the feature space F induced by some nonlinear mapping φ, with inner products given by the kernel K(x_i, x_j). K(·, ·) is a positive semi-definite kernel function. The regression model and the objective function are written in the RKHS (formulas on slide).
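
Note: an assumed, representative kernelization rather than the slide's exact formulas: with an RBF kernel and a graph Laplacian L over the samples, one standard graph-regularized RKHS regressor has the closed form shown in the comments.

```python
# Illustrative sketch of a kernelized (RKHS) model with graph regularization:
#   f(x) = sum_i alpha_i K(x_i, x),   alpha = (K + lam1*I + lam2*L*K)^{-1} y
# (a standard form when every point is labeled; the slide's exact objective is an image).
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_graph_regularized_fit(X, y, L, lam1=1e-2, lam2=1e-1, gamma=1.0):
    K = rbf_kernel(X, X, gamma)
    alpha = np.linalg.solve(K + lam1 * np.eye(len(X)) + lam2 * L @ K, y)
    return alpha

def predict(X_train, alpha, X_new, gamma=1.0):
    return rbf_kernel(X_new, X_train, gamma) @ alpha
```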

38 Nonlinear Generalization in RKHS: Kernel Graph Regularized Experimental Design. The greedy procedure is the same as before, now expressed through the kernel: select the first data point that maximizes the (kernelized) criterion; given k selected points, choose the (k+1)-th point that maximizes it and update. The selected points are drawn from the pool of candidate data points.

39 A Synthetic Example (figure): A-optimal Design vs. Laplacian Regularized Optimal Design.

40 A Synthetic Example (figure, continued): A-optimal Design vs. Laplacian Regularized Optimal Design.

41 Application to image/video compression

42 Video compression

43 Topology. Can we always map a manifold to a Euclidean space without changing its topology?

44 Topology (concept map): sample points, good cover, simplicial complex, homology group, Betti numbers, Euler characteristic, homotopy; number of components, dimension, ...

45 Topology. The Euler characteristic is a topological invariant: a number that describes one aspect of a topological space's shape or structure. (Figure: example spaces and their Euler characteristics.) The Euler characteristic of Euclidean space is 1!
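
Note: a small self-contained sketch of the invariant mentioned here: the Euler characteristic of a simplicial complex is the alternating sum #vertices - #edges + #faces - ..., giving 1 for a point, 0 for a circle, and 2 for a sphere.

```python
# Illustrative sketch: Euler characteristic chi = #vertices - #edges + #faces - ...
# of a simplicial complex, computed from its maximal simplices.
from itertools import combinations

def euler_characteristic(maximal_simplices):
    simplices = set()
    for s in maximal_simplices:               # enumerate all faces of every simplex
        for k in range(1, len(s) + 1):
            simplices.update(combinations(sorted(s), k))
    chi = 0
    for s in simplices:
        chi += (-1) ** (len(s) - 1)           # +vertices, -edges, +triangles, ...
    return chi

point = [(0,)]
circle = [(0, 1), (1, 2), (0, 2)]                      # boundary of a triangle
sphere = [(0, 2, 4), (0, 2, 5), (0, 3, 4), (0, 3, 5),  # octahedron surface
          (1, 2, 4), (1, 2, 5), (1, 3, 4), (1, 3, 5)]
print(euler_characteristic(point),   # 1
      euler_characteristic(circle),  # 0
      euler_characteristic(sphere))  # 2
```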

46 Challenges. Insufficient sample points. Choosing a suitable radius. How to identify noisy holes (user interaction?). (Figure labels: noisy hole, homotopy, homeomorphism.)

47 Q & A

