1
Feature Selection and Dimensionality Reduction
2
“Curse of dimensionality”
– The higher the dimensionality of the data, the more data is needed to learn a good model.
– The learned model may be overly complex and overfitted, especially if some dimensions are irrelevant to the classification task.
How to deal with the curse of dimensionality?
– Assume a strong “prior”: e.g., the data follows a Gaussian distribution (so all we have to do is estimate the parameters).
– Reduce the dimensionality by:
   – selecting only the most relevant / useful features, or
   – projecting the data to a lower-dimensional space (e.g., principal components analysis).
3
Feature Selection Goal: Select a subset of d features (d < n) in order to maximize classification performance.
4
Types of Feature Selection Methods
– Filter
– Wrapper
– Embedded
5
Filter Methods
– Independent of the classification algorithm.
– Apply a filtering function to the features before applying the classifier.
– Examples of filtering functions:
   – Information gain of individual features
   – Statistical variance of individual features
   – Statistical correlation among features
Pipeline: Training data → Filter → Selected features → Classifier
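As a concrete illustration, here is a minimal sketch of a filter method in scikit-learn, using mutual information (an information-gain-style score) to rank features individually; the breast-cancer data set and k = 10 are arbitrary choices, not from the slides.

```python
# Minimal sketch of a filter method: score each feature independently,
# then keep the k highest-scoring ones before training any classifier.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X, y = load_breast_cancer(return_X_y=True)

# Mutual information between each individual feature and the class label
# plays the role of the "information gain" filter described above.
selector = SelectKBest(score_func=mutual_info_classif, k=10)  # k chosen arbitrarily
X_selected = selector.fit_transform(X, y)

print("Original number of features:", X.shape[1])
print("Selected number of features:", X_selected.shape[1])
print("Kept feature indices:", selector.get_support(indices=True))
```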
6
Information Gain
Measures, from training data, how much information about the classification task an individual feature f provides.
Let S denote the training set. Then
   InfoGain(S, f) = Entropy(S) − Entropy_f(S)
where Entropy_f(S) is the entropy of S with respect to f (defined on a later slide).
7
Intuitive idea of entropy
Entropy (or “information”) = “degree of disorder”. Entropy measures the degree of uniformity or non-uniformity in a collection.
Example: {1, 1, 1, 1, 1, 1, 1} vs. {1, −1, −1, −1, 1, 1, −1}
Note: entropy depends only on the proportions of each value, not their order, so this is the same comparison as {1, 1, 1, 1, 1, 1, 1} vs. {−1, −1, −1, −1, 1, 1, 1}.
8
Intuitive idea of entropy of a training set
Task: classify as Female or Male.
Training set 1: Barbara, Elizabeth, Susan, Lisa, Ellen, Kristi, Mike, John (mostly one class: lower entropy)
Training set 2: Jane, Mary, Alice, Jenny, Bob, Allen, Doug, Steve (evenly split: higher entropy)
9
Intuitive idea of entropy with respect to a feature
Task: classify as Female or Male.
Instances: Jane, Mary, Alice, Jenny, Bob, Allen, Doug, Steve
Features: each instance has two binary features, “wears lipstick” and “has long hair”.
Wears lipstick: T → Jane, Mary, Alice, Jenny; F → Bob, Allen, Doug, Steve. Pure split (feature has low entropy).
Has long hair: T → Jane, Mary, Bob, Steve; F → Alice, Jenny, Allen, Doug. Impure split (feature has high entropy).
10
Entropy of a training set
Let S be a set of training examples, with
   p+ = proportion of positive examples
   p− = proportion of negative examples
   Entropy(S) = − p+ log2(p+) − p− log2(p−)   (taking 0 log2 0 = 0)
Entropy roughly measures how predictable the collection is, based only on the distribution of + and − examples. Examples?
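A small sketch of this formula in Python, applied to the two collections from the earlier intuition slide (the label encoding and helper function are illustrative):

```python
import math

def entropy(labels):
    """Binary entropy of a collection of +1 / -1 labels:
    Entropy(S) = -p_plus * log2(p_plus) - p_minus * log2(p_minus)."""
    n = len(labels)
    p_plus = sum(1 for l in labels if l == 1) / n
    p_minus = 1.0 - p_plus
    terms = [p * math.log2(p) for p in (p_plus, p_minus) if p > 0]
    return -sum(terms)

print(entropy([1, 1, 1, 1, 1, 1, 1]))       # 0.0  -- perfectly uniform collection
print(entropy([1, -1, -1, -1, 1, 1, -1]))   # ~0.985 -- close to maximally mixed
```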
11
Entropy with respect to a feature
Let f be a numeric feature and let t be a threshold. Split the training set S into two subsets:
   S1 = { x in S : f(x) ≤ t }   and   S2 = { x in S : f(x) > t }
Then define the entropy of S with respect to f as the size-weighted average of the subset entropies:
   Entropy_f(S) = (|S1| / |S|) Entropy(S1) + (|S2| / |S|) Entropy(S2)
12
Example
Training set S: Jane, Mary, Alice, Jenny, Bob, Allen, Doug, Steve
Wears lipstick: T → Jane, Mary, Alice, Jenny; F → Bob, Allen, Doug, Steve
Has long hair: T → Jane, Mary, Bob, Steve; F → Alice, Jenny, Allen, Doug
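The example can be worked out in code. The sketch below encodes the eight instances with hypothetical labels (female = 1, male = −1) and boolean feature values matching the splits above, and confirms that the pure “wears lipstick” split has higher information gain than the impure “has long hair” split:

```python
import math

def entropy(labels):
    n = len(labels)
    p = sum(1 for l in labels if l == 1) / n
    return -sum(q * math.log2(q) for q in (p, 1 - p) if q > 0)

def info_gain(labels, feature_values):
    """Information gain of a binary feature:
    Entropy(S) minus the size-weighted entropy of the two subsets it induces."""
    s_true  = [l for l, f in zip(labels, feature_values) if f]
    s_false = [l for l, f in zip(labels, feature_values) if not f]
    weighted = (len(s_true) / len(labels)) * entropy(s_true) + \
               (len(s_false) / len(labels)) * entropy(s_false)
    return entropy(labels) - weighted

# Female = 1, Male = -1, in the order:
# Jane, Mary, Alice, Jenny, Bob, Allen, Doug, Steve
labels         = [1, 1, 1, 1, -1, -1, -1, -1]
wears_lipstick = [True, True, True, True, False, False, False, False]  # pure split
has_long_hair  = [True, True, False, False, True, False, False, True]  # impure split

print(info_gain(labels, wears_lipstick))  # 1.0 -- feature fully determines the class
print(info_gain(labels, has_long_hair))   # 0.0 -- each subset is still half/half
```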
13
Wrapper Methods
Cross-validation loop: Training data → choose a subset of features → train classifier (cross-validation) → accuracy on validation set → try the next subset.
After cross-validation: train the classifier on the entire training set using the best subset of features, then evaluate the classifier on the test set.
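A hedged sketch of a wrapper method using scikit-learn’s greedy forward selection (SequentialFeatureSelector, available in scikit-learn 0.24+); the classifier, data set, and number of features are illustrative choices:

```python
# Sketch of a wrapper method: candidate feature subsets are evaluated by
# actually training the classifier and scoring it with cross-validation.
from sklearn.datasets import load_breast_cancer
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
clf = KNeighborsClassifier(n_neighbors=3)

# Greedily add the feature whose inclusion gives the best cross-validated
# accuracy, until 5 features have been chosen (5 is illustrative).
sfs = SequentialFeatureSelector(clf, n_features_to_select=5,
                                direction="forward", cv=5)
sfs.fit(X, y)

print("Chosen feature indices:", sfs.get_support(indices=True))

# After selection, retrain on the full training set with the chosen subset
# and evaluate on a held-out test set (not shown here).
clf.fit(sfs.transform(X), y)
```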
14
Filter Methods
Pros:
– Fast
Cons:
– The chosen filter might not be relevant for a specific kind of classifier.
– Doesn’t take into account interactions among features.
– Often hard to know how many features to select.
15
Wrapper Methods
Pros:
– Features are evaluated in the context of classification.
– The wrapper method selects the number of features to use.
Cons:
– Slow
16
Intermediate method, often used with SVMs:
1. Train an SVM using all features.
2. Select the features f_i with the highest |w_i|.
3. Retrain the SVM with the selected features.
4. Test the SVM on the test set.
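A possible implementation of this intermediate method with scikit-learn’s LinearSVC; the data set, the choice of the top 10 features, and the use of standardization (so that weight magnitudes are comparable) are assumptions made for the sketch:

```python
# Possible sketch of the intermediate method: train a linear SVM on all
# features, keep the features with the largest |w_i|, then retrain on them.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Standardize features so that the learned weight magnitudes are comparable.
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 1. Train an SVM using all features.
svm_all = LinearSVC(C=1.0, max_iter=10000).fit(X_train, y_train)

# 2. Select the features f_i with the highest |w_i| (top 10 is arbitrary).
top = np.argsort(np.abs(svm_all.coef_[0]))[::-1][:10]

# 3. Retrain the SVM with only the selected features.
svm_sel = LinearSVC(C=1.0, max_iter=10000).fit(X_train[:, top], y_train)

# 4. Test on the held-out test set.
print("Accuracy, all features:     ", svm_all.score(X_test, y_test))
print("Accuracy, selected features:", svm_sel.score(X_test[:, top], y_test))
```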
17
[Figure from M. Thomure, The Role of Prototype Learning in Hierarchical Models of Vision; x-axis: number of features used.]
18
Embedded Methods
Feature selection is an intrinsic part of the learning algorithm.
One example: the L1 SVM. Instead of minimizing the usual (squared) L2 norm of the weight vector, ‖w‖₂² = Σᵢ wᵢ², we minimize its L1 norm, ‖w‖₁ = Σᵢ |wᵢ|.
The result is that most of the weights go to zero, leaving a small subset of the weights (and hence of the features).
Cf. the field of “sparse coding”.
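A minimal sketch of an L1-regularized linear SVM in scikit-learn; the data set and the regularization strength C are illustrative (a smaller C gives a sparser weight vector):

```python
# Sketch of an embedded method: an L1-regularized linear SVM drives most
# weights to exactly zero, so feature selection happens during training.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# penalty="l1" penalizes the L1 norm of w instead of the usual L2 norm;
# in scikit-learn this requires squared-hinge loss and dual=False.
l1_svm = LinearSVC(penalty="l1", loss="squared_hinge", dual=False,
                   C=0.05, max_iter=10000).fit(X, y)

w = l1_svm.coef_[0]
print("Nonzero weights:", np.count_nonzero(w), "out of", w.size)
print("Surviving feature indices:", np.flatnonzero(w))
```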
19
Dimensionality Reduction: Principal Components Analysis Reading: Smith, A Tutorial on Principal Components Analysis (linked to class webpage)
20
http://www.youtube.com/watch?v=BfTMmoDFXyE
21
[Figure: data points plotted in the (x1, x2) plane.]
22
[Figure: the first principal component, which gives the direction of largest variation of the data.]
23
[Figure: the first principal component gives the direction of largest variation; the second principal component gives the direction of second-largest variation.]
24
[Figure: rotation of the axes (x1, x2) to align with the principal components.]
25
[Figure: dimensionality reduction by projecting the data onto the first principal component.]
26
[Figure: classification (+ versus −) in the reduced-dimensionality space.]
27
Note: PCA can be used for labeled or unlabeled data.
28
Principal Components Analysis (PCA)
Summary:
– PCA finds new orthogonal axes in the directions of largest variation in the data.
– PCA is used to create high-level features in order to improve classification and to reduce the dimensionality of the data without much loss of information.
– It is used in machine learning, signal processing, and image compression (among other things).
29
Background for PCA
Suppose the features are A1 and A2, and we have n training examples. The x’s denote the values of A1 and the y’s denote the values of A2 over the training examples.
Variance of a feature:
   var(X) = Σᵢ (xᵢ − x̄)² / (n − 1)
30
Covariance of two features:
   cov(X, Y) = Σᵢ (xᵢ − x̄)(yᵢ − ȳ) / (n − 1)
If the covariance is positive, the two features tend to increase together. If it is negative, as one increases the other decreases. If it is zero, the features are uncorrelated (no linear relationship).
31
Covariance matrix
– Suppose we have n features, A1, ..., An.
– The covariance matrix C is the n × n matrix whose (i, j) entry is C(i, j) = cov(Aᵢ, Aⱼ).
– C is symmetric, and its diagonal entries are the variances of the individual features.
32
Covariance matrix (example)
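For instance, a small NumPy sketch that computes a covariance matrix both from the formula above and with np.cov (the toy data values are illustrative):

```python
import numpy as np

# Toy data: rows are training examples, columns are the features A1, A2.
# (Values are illustrative.)
data = np.array([[2.5, 2.4],
                 [0.5, 0.7],
                 [2.2, 2.9],
                 [1.9, 2.2],
                 [3.1, 3.0]])

# Sample covariance from the formula: cov(X, Y) = sum((x - x_bar)(y - y_bar)) / (n - 1)
centered = data - data.mean(axis=0)
cov_manual = centered.T @ centered / (len(data) - 1)

# NumPy's built-in version (rowvar=False: columns are the variables/features).
cov_numpy = np.cov(data, rowvar=False)

print(cov_manual)
print(np.allclose(cov_manual, cov_numpy))  # True
```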
33
Review of Matrix Algebra
Eigenvectors:
– Let M be an n × n matrix. A nonzero vector v is an eigenvector of M if M v = λ v for some scalar λ; λ is called the eigenvalue associated with v.
– For any eigenvector v of M and any nonzero scalar a, M(av) = λ(av), so av is also an eigenvector with the same eigenvalue.
– Thus you can always choose eigenvectors of length 1: ‖v‖ = 1.
– If M is symmetric with real entries, it has n eigenvectors, and they are orthogonal to one another.
– Thus the eigenvectors can be used as a new basis for an n-dimensional vector space.
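A quick NumPy check of these facts on an arbitrary symmetric matrix (the matrix values are illustrative):

```python
import numpy as np

# A symmetric matrix with real entries (e.g., a covariance matrix).
M = np.array([[2.0, 1.0],
              [1.0, 3.0]])

# eigh is specialized for symmetric matrices: it returns real eigenvalues
# and orthonormal (unit-length, mutually orthogonal) eigenvectors.
eigvals, eigvecs = np.linalg.eigh(M)   # eigenvectors are the columns of eigvecs

v, lam = eigvecs[:, 0], eigvals[0]
print(np.allclose(M @ v, lam * v))                   # M v = lambda v
print(np.allclose(np.linalg.norm(v), 1.0))           # unit length
print(np.allclose(eigvecs.T @ eigvecs, np.eye(2)))   # columns are orthonormal
```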
34
Principal Components Analysis (PCA)
1. Given the original data set S = {x1, ..., xk}, produce a new data set by subtracting the mean of each feature Aᵢ from every example.
   (In the running example: mean of the original data = (1.81, 1.91); mean of the adjusted data = (0, 0).)
36
2. Calculate the covariance matrix of the mean-adjusted data.
3. Calculate the (unit) eigenvectors and eigenvalues of the covariance matrix.
   [The slide shows the 2 × 2 covariance matrix for the example features x and y, along with its eigenvalues and unit eigenvectors.]
37
[Figure: the eigenvector with the largest eigenvalue traces the main linear pattern in the data.]
38
4. Order the eigenvectors by eigenvalue, from highest to lowest. In general you get n components. To reduce the dimensionality to p, ignore the n − p components at the bottom of the list.
39
Construct a new “feature vector” (assuming each vᵢ is a column vector):
   FeatureVector = (v1, v2, ..., vp)
40
5. Derive the new data set:
   TransformedData = RowFeatureVector × RowDataAdjust
where RowFeatureVector is the matrix whose rows are the chosen eigenvectors and RowDataAdjust is the transpose of the mean-adjusted data (one example per column).
This gives the original data in terms of the chosen components (eigenvectors), that is, along these axes.
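Putting steps 1–5 together, here is a NumPy sketch that follows the RowFeatureVector × RowDataAdjust convention used above; the generated 2-D data and the choice p = 1 are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 2-D data with a strong linear trend (rows = examples).
data = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.6], [0.6, 0.5]])

# 1. Subtract the mean of each feature.
mean = data.mean(axis=0)
adjusted = data - mean

# 2. Covariance matrix of the mean-adjusted data.
cov = np.cov(adjusted, rowvar=False)

# 3. Unit eigenvectors and eigenvalues of the covariance matrix.
eigvals, eigvecs = np.linalg.eigh(cov)

# 4. Order eigenvectors by eigenvalue, highest first, and keep the top p.
order = np.argsort(eigvals)[::-1]
p = 1
feature_vector = eigvecs[:, order[:p]]        # columns v_1, ..., v_p

# 5. TransformedData = RowFeatureVector x RowDataAdjust
row_feature_vector = feature_vector.T          # eigenvectors as rows
row_data_adjust = adjusted.T                   # mean-adjusted data, one example per column
transformed = row_feature_vector @ row_data_adjust

print("Transformed data shape:", transformed.shape)   # (p, number of examples)
```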
42
Intuition: we projected the data onto new axes that capture the strongest linear trends in the data set. Each transformed data point tells us how far it lies above or below those trend lines.
44
Reconstructing the original data
We did: TransformedData = RowFeatureVector × RowDataAdjust, so we can recover
   RowDataAdjust = RowFeatureVector⁻¹ × TransformedData = RowFeatureVectorᵀ × TransformedData
(the inverse equals the transpose because the eigenvectors are orthonormal), and then
   RowDataOriginal = RowDataAdjust + OriginalMean
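A self-contained sketch of this reconstruction identity, again on illustrative data: when all components are kept, multiplying by the transpose of RowFeatureVector undoes the transformation exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(50, 2)) @ np.array([[1.5, 0.4], [0.4, 0.3]])

mean = data.mean(axis=0)
adjusted = (data - mean).T                         # RowDataAdjust: one example per column
eigvals, eigvecs = np.linalg.eigh(np.cov(adjusted))
row_feature_vector = eigvecs[:, np.argsort(eigvals)[::-1]].T  # all components, as rows

transformed = row_feature_vector @ adjusted

# Because the eigenvectors are orthonormal, the inverse is just the transpose:
row_data_adjust = row_feature_vector.T @ transformed
row_data_original = row_data_adjust.T + mean

print(np.allclose(row_data_original, data))        # True: all components => exact reconstruction
# Keeping only the first p rows of row_feature_vector instead gives an
# approximate reconstruction that lies along the top-p principal axes.
```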
46
What you need to remember
General idea of what PCA does:
– Finds a new, rotated set of orthogonal axes that capture the directions of largest variation.
– Allows some axes to be dropped, so the data can be represented in a lower-dimensional space.
– This can improve classification performance and avoid overfitting due to a large number of dimensions.
47
Example: linear discrimination using PCA for face recognition (“Eigenfaces”)
1. Preprocessing: “normalize” the faces
   – Make the images the same size
   – Line them up with respect to the eyes
   – Normalize the intensities
49
2. The raw features are the pixel intensity values (2061 features).
3. Each image is encoded as a vector of these features.
4. Compute the “mean” face in the training set (the average of all training face vectors).
50
– Subtract the mean face from each face vector.
– Compute the covariance matrix C.
– Compute the (unit) eigenvectors vᵢ of C.
– Keep only the first K principal components (eigenvectors).
From W. Zhao et al., Discriminant analysis of principal components for face recognition.
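A sketch of this eigenface pipeline using scikit-learn’s PCA; the random pixel vectors stand in for normalized face images, and K = 20 is an arbitrary choice:

```python
# Sketch of the eigenface pipeline on a stand-in data set: each "face" is a
# flattened vector of pixel intensities (here random data as a placeholder).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_faces, n_pixels = 200, 32 * 32          # stand-in for normalized face images
faces = rng.random((n_faces, n_pixels))   # one row per face vector

K = 20                                    # number of principal components kept
pca = PCA(n_components=K)                 # PCA subtracts the mean face internally
weights = pca.fit_transform(faces)        # each face as K eigenface coefficients

eigenfaces = pca.components_              # shape (K, n_pixels): the "basis" faces
mean_face = pca.mean_
print("Eigenfaces shape:", eigenfaces.shape)

# Represent a new face in eigenface space and find its nearest known face
# (Euclidean distance in the reduced space).
new_face = rng.random(n_pixels)
new_weights = pca.transform(new_face.reshape(1, -1))[0]
nearest = np.argmin(np.linalg.norm(weights - new_weights, axis=1))
print("Closest training face index:", nearest)
```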
52
Interpreting and Using Eigenfaces
The eigenfaces encode the principal sources of variation in the data set (e.g., absence/presence of facial hair, skin tone, glasses, etc.). We can represent any face as a linear combination of these “basis” faces.
Use this representation for:
– Face recognition (e.g., Euclidean distance from known faces)
– Linear discrimination (e.g., “glasses” versus “no glasses”, or “male” versus “female”)
53
Eigenfaces Demo: http://demonstrations.wolfram.com/FaceRecognitionUsingTheEigenfaceAlgorithm/
54
Kernel PCA
PCA assumes the directions of variation are all straight lines. Kernel PCA implicitly maps the data to a higher-dimensional space and performs PCA there, so it can capture nonlinear structure.
55
Kernel PCA
Note that the covariance terms can be expressed in terms of dot products between data points: for mean-centered data they involve terms of the form xᵢ · xⱼ.
Use a kernel function K(x, y) = φ(x) · φ(y) to implicitly project the data into a higher-dimensional space: every dot product xᵢ · xⱼ is replaced by K(xᵢ, xⱼ).
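A sketch of kernel PCA with scikit-learn’s KernelPCA on the classic two-circles toy data (the RBF kernel and the gamma value are illustrative choices):

```python
# Sketch of kernel PCA with scikit-learn: an RBF kernel replaces the dot
# products, so the "principal directions" can follow curved structure.
from sklearn.datasets import make_circles
from sklearn.decomposition import PCA, KernelPCA

# Two concentric circles: no straight axis separates them.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.05, random_state=0)

linear_pca = PCA(n_components=2).fit_transform(X)
kernel_pca = KernelPCA(n_components=2, kernel="rbf", gamma=10).fit_transform(X)

# After kernel PCA the leading components tend to separate the two circles,
# which ordinary PCA (a rotation of the original axes) cannot do.
print("Linear PCA output shape:", linear_pca.shape)
print("Kernel PCA output shape:", kernel_pca.shape)
```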
56
[Figure from Wikipedia: the original data and the data after kernel PCA.]