Facial Recognition as a Pattern Recognition Problem

Presentation transcript:

1 Facial Recognition as a Pattern Recognition Problem
Hongcheng Wang, Beckman Institute, UIUC, 2019/5/1

2 Contents
Why Face Recognition?
Face Recognition Approaches: Eigenface, Fisherface, SVM, LLE, Isomap, NN, …
Experimental Results
Conclusions

3 Why Face Recognition?
Non-intrusive.
Growing interest in biometric authentication.
The required data is easily obtained and readily available.

4 Face Recognition Approaches
Eigenface: identifies the principal components that best describe the variation among face images.
Fisherface: uses 'within-class' information to maximise class separation.
Kernel Eigenface vs. Kernel Fisherface: take higher-order correlations into account.
Others: LLE, Isomap, charting, SVM, neural networks.

5 Eigenface (Turk & Pentland, 1991) -1
Use Principal Component Analysis (PCA) to determine the features that best capture the variation between images of faces.

6 Eigenface -2 Create an image subspace (face space) which best discriminates between faces. Similar faces occupy nearby points in face space. Two faces are compared by projecting the images into face space and measuring the Euclidean distance between them.

7 Eigenface -3 For example, 1x2 pixel images can be converted into 2-dimensional vectors, so that each image occupies a different point in image space. Similar images are near each other in image space; different images are far from each other.

8 Eigenface -4 A 256x256 pixel image of a face occupies a single point in 65,536-dimensional image space. Images of faces occupy only a small region of this large image space, and different faces should occupy different areas of that region. We can identify a face by finding the nearest 'known' face in image space.
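The image-as-a-point idea above can be sketched in a few lines. This is an illustrative NumPy sketch, not the presentation's own code; the tiny 4x4 "images" are made-up stand-ins:

```python
import numpy as np

# Three tiny "images" (4x4 grayscale), flattened into points in 16-D image space.
rng = np.random.default_rng(0)
face_a = rng.random((4, 4))
face_a_noisy = face_a + 0.01 * rng.random((4, 4))   # a similar image
face_b = rng.random((4, 4))                         # a different image

points = [img.reshape(-1) for img in (face_a, face_a_noisy, face_b)]

# Similar images lie near each other in image space;
# different images lie far apart.
d_similar = np.linalg.norm(points[0] - points[1])
d_different = np.linalg.norm(points[0] - points[2])
print(d_similar < d_different)
```

The nearest-'known'-face rule is then just a nearest-neighbour search over these flattened vectors.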

9 Eigenface -5 However, even tiny changes in lighting, expression or head orientation cause the location in image space to change dramatically, and large amounts of storage are required. What should we do? Dimensionality reduction!

10 Eigenface -6 Training Step
Align a set of face images (the training set T1, T2, …, TM).
Compute the average face image: ψ = (1/M) Σn Tn.
Compute the difference image for each image in the training set: φi = Ti − ψ.
Compute the covariance matrix of this set of difference images: C = (1/M) Σn φn φnᵀ.
Compute the eigenvectors of the covariance matrix.
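The training steps above can be sketched as follows. This is a hedged NumPy illustration, not the presentation's Matlab; the random matrix T stands in for M aligned training faces of p pixels:

```python
import numpy as np

# Stand-ins for the training set: M face images of p pixels each, as columns.
rng = np.random.default_rng(1)
M, p = 10, 64                       # 10 training images, 8x8 pixels flattened
T = rng.random((p, M))              # columns T1..TM are the training faces

psi = T.mean(axis=1, keepdims=True)   # average face: psi = (1/M) sum_n T_n
Phi = T - psi                         # difference images: phi_i = T_i - psi
C = (Phi @ Phi.T) / M                 # covariance matrix of the differences

evals, evecs = np.linalg.eigh(C)      # eigenvectors of C are the eigenfaces
order = np.argsort(evals)[::-1]       # largest eigenvalues first
eigenfaces = evecs[:, order]

# C has rank at most M, so at most M eigenvalues are non-zero.
print(int(np.sum(evals > 1e-10)))
```

In practice one computes the eigenvectors of the much smaller M x M matrix ΦᵀΦ instead of the p x p covariance, but the p x p form above matches the slide.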

11 Eigenface -7 Here is an example visualization of the eigenvectors of the covariance matrix (which are called EIGENFACEs): the first 4 eigenvectors from a training set of 23 images. Selecting only the top eigenfaces reduces the dimensionality, but too few eigenfaces results in too much information loss, and hence less discrimination between faces.

12 Eigenface -8 Recognition Step
Project the new image into face space: ωi = wiᵀ(T − ψ), where the wi are the eigenfaces (the columns of W).
Compare the projections with those of the known faces.
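The recognition step can be sketched as below. This is an illustrative NumPy sketch, not the presentation's code; the orthonormal matrix W and the known faces are random stand-ins for a trained eigenspace:

```python
import numpy as np

# Stand-ins for a trained eigenspace: p-pixel images, k eigenfaces.
rng = np.random.default_rng(2)
p, k = 64, 4
W, _ = np.linalg.qr(rng.random((p, k)))   # p x k orthonormal "eigenfaces"
psi = rng.random((p, 1))                  # average face from training

known = rng.random((p, 3))                # three known (training) faces
omega_known = W.T @ (known - psi)         # k-D projections of the known faces

# A new image close to known face 1 (small added noise).
T = known[:, [1]] + 0.01 * rng.random((p, 1))
omega = W.T @ (T - psi)                   # project the new image into face space

# Identify the face by the nearest known projection (Euclidean distance).
dists = np.linalg.norm(omega_known - omega, axis=0)
match = int(np.argmin(dists))
print(match)
```

A threshold on the minimum distance is usually added so that genuinely unknown faces can be rejected rather than matched to the nearest training image.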

13 Eigenface -9 Matlab code

% Training: build the eigenspace from the training set
load faces.mat                      % c holds the P training images as columns
mn = mean(c')';                     % mean image of the training set
for i = 1:P
    msc(:,i) = c(:,i) - mn;         % mean-subtracted images
end
% Find an orthonormal basis using K-L (PCA)
cov = msc * msc';                   % covariance matrix
[V,D] = eig(cov);
vects = V;
% Order the eigenvectors according to the eigenvalues
for i = 1:N
    evals(i) = D(i,i);
end
[a,b] = sort(evals);
for i = 1:N
    ind = b(N-i+1);
    if (a(N-i+1) > tol)             % keep non-zero eigenvalues only
        u(:,i) = vects(:,ind);
        ev(i) = D(ind,ind);
    end
end
proj = u' * msc;                    % project the training set

% Recognition: project a test image and find the nearest training image
test = reshape(double(imread('images/r3.jpg')), N1*N2, 1);
mst = test - mn;                    % subtract the mean image
projt = u' * mst;                   % project into the same eigenspace defined by u
% similarity measure: the squared L2 norm (Euclidean distance)
proj_train = u' * (c - [mn mn mn mn]);
diff = proj_train - [projt projt projt projt];
L = zeros(1,4);
for j = 1:4
    for i = 1:3
        L(j) = L(j) + (diff(i,j))^2;
    end
end
[a,b] = sort(L);
fprintf('the image we found is: ')
display(b(1))

14 Eigenface -10
Advantages:
Discovers structure in data lying near a linear subspace of the input space.
Non-iterative, globally optimal solution.
Disadvantages:
Not capable of discovering nonlinear degrees of freedom.
Registration and scaling issues.
Very sensitive to changes in lighting conditions.
The scatter being maximized includes not only the between-class scatter, which is useful for classification, but also the within-class scatter, which is unwanted information for classification purposes.
So the PCA projection is optimal for reconstruction from a low-dimensional basis, but may not be optimal for discrimination.

15 Fisherface -1 Uses Linear Discriminant Analysis (LDA), also known as Fisher's Linear Discriminant (FLD). Eigenfaces attempt to maximise the total scatter of the training images in face space, while Fisherfaces attempt to maximise the between-class scatter while minimising the within-class scatter. In other words, it moves images of the same face closer together, while moving images of different faces further apart. See "Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection" by Belhumeur, Hespanha & Kriegman, 1997.

16 Fisherface -2 [Figure: a poor projection mixes the two classes; a good projection separates them]

17 Fisherface -3
N sample images: {x1, …, xN}
C classes: {X1, …, XC}
Average of each class: μi = (1/Ni) Σ_{xk ∈ Xi} xk
Total average: μ = (1/N) Σ_{k=1..N} xk

18 Fisherface -4
Scatter of class i: Si = Σ_{xk ∈ Xi} (xk − μi)(xk − μi)ᵀ
Within-class scatter: SW = Σ_{i=1..C} Si
Between-class scatter: SB = Σ_{i=1..C} Ni (μi − μ)(μi − μ)ᵀ
Total scatter: ST = SW + SB
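The scatter matrices above can be computed directly, and the decomposition ST = SW + SB checked numerically. This is an illustrative NumPy sketch with made-up 2-D samples standing in for face vectors:

```python
import numpy as np

# Two classes of 2-D samples (rows), stand-ins for face vectors.
X1 = np.array([[1.0, 2.0], [1.5, 1.8], [0.8, 2.2]])   # class 1
X2 = np.array([[4.0, 0.5], [4.2, 0.7], [3.8, 0.4]])   # class 2

mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)           # class means mu_i
mu = np.vstack([X1, X2]).mean(axis=0)                 # total mean mu

def class_scatter(X, m):
    D = X - m
    return D.T @ D          # S_i = sum_k (x_k - mu_i)(x_k - mu_i)^T

Sw = class_scatter(X1, mu1) + class_scatter(X2, mu2)  # within-class scatter
Sb = (len(X1) * np.outer(mu1 - mu, mu1 - mu)
      + len(X2) * np.outer(mu2 - mu, mu2 - mu))       # between-class scatter
St = class_scatter(np.vstack([X1, X2]), mu)           # total scatter

print(np.allclose(St, Sw + Sb))                       # S_T = S_W + S_B
```

The decomposition holds exactly, which is why maximising total scatter (PCA) can inflate the unwanted within-class part.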

19 Fisherface -5 After projection yk = Wᵀ xk:
Between-class scatter (of the y's): S̃B = Wᵀ SB W
Within-class scatter (of the y's): S̃W = Wᵀ SW W

20 Fisherface -6 [Figure: the projected classes show good separation]

21 Fisherface -7 The wanted projection: Wopt = argmax_W |Wᵀ SB W| / |Wᵀ SW W|. How is it found? The columns of Wopt are the generalized eigenvectors solving SB wi = λi SW wi.
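When SW is invertible, the generalized eigenproblem SB w = λ SW w can be solved via the ordinary eigendecomposition of SW⁻¹ SB. A hedged NumPy sketch with made-up scatter matrices (here SW is deliberately well-conditioned; the singular case is what the next slide addresses):

```python
import numpy as np

# Made-up scatter matrices: Sw symmetric positive definite, Sb low rank.
rng = np.random.default_rng(3)
A = rng.random((5, 5))
Sw = A @ A.T + 5 * np.eye(5)        # within-class scatter, invertible
B = rng.random((5, 2))
Sb = B @ B.T                        # between-class scatter, rank 2

# Solve Sb w = lambda Sw w via eig(inv(Sw) @ Sb).
evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
order = np.argsort(evals.real)[::-1]
W = evecs[:, order[:2]].real        # top generalized eigenvectors

# Each column satisfies Sb w = lambda Sw w.
w, lam = W[:, 0], evals.real[order[0]]
print(np.allclose(Sb @ w, lam * (Sw @ w), atol=1e-8))
```

Since SB has rank at most C − 1, at most C − 1 generalized eigenvalues are non-zero, which is why the Fisher projection has C − 1 columns.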

22 Fisherface -8 The data dimension is much larger than the number of samples, so the matrix SW is singular (its rank is at most N − C).

23 Fisherface -9 PCA + FLD
Project with PCA to an (N − C)-dimensional space, so that SW becomes non-singular.
Then project with FLD to a (C − 1)-dimensional space.
Woptᵀ = Wfldᵀ Wpcaᵀ
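The two-stage PCA + FLD projection can be sketched end to end. This is an illustrative NumPy sketch with made-up clustered data, not the presentation's code; the sizes p, C, and images-per-class are arbitrary:

```python
import numpy as np

# Made-up data: C classes of clustered p-dimensional "face vectors".
rng = np.random.default_rng(4)
p, C, per = 100, 3, 4                  # pixels, classes, images per class
N = C * per                            # N = 12 samples, far fewer than p
X = np.hstack([rng.random((p, 1)) + 0.05 * rng.random((p, per))
               for _ in range(C)])     # each class clusters around its mean
mu = X.mean(axis=1, keepdims=True)

# Stage 1: PCA down to N - C dimensions (makes Sw non-singular).
U, s, _ = np.linalg.svd(X - mu, full_matrices=False)
W_pca = U[:, :N - C]                   # p x (N - C)
Z = W_pca.T @ (X - mu)                 # reduced data

# Stage 2: FLD in the reduced space, down to C - 1 dimensions.
Sw = np.zeros((N - C, N - C)); Sb = np.zeros((N - C, N - C))
zmu = Z.mean(axis=1)
for i in range(C):
    Zi = Z[:, i * per:(i + 1) * per]
    mi = Zi.mean(axis=1)
    Sw += (Zi - mi[:, None]) @ (Zi - mi[:, None]).T   # within-class scatter
    Sb += per * np.outer(mi - zmu, mi - zmu)          # between-class scatter
evals, evecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
W_fld = evecs[:, np.argsort(evals.real)[::-1][:C - 1]].real

W_opt = W_pca @ W_fld                  # overall p x (C - 1) projection
print(W_opt.shape)
```

The combined W_opt maps a raw p-pixel image directly to the (C − 1)-dimensional Fisher space, matching Woptᵀ = Wfldᵀ Wpcaᵀ.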

24 Fisherface -10 As with the Eigenfaces, the vectors of Wopt are of dimension d, and can thus be thought of as images, or "Fisherfaces".

25 Fisherface -11 Matlab code

N = 4;   % Input how many classes there are
M = 6;   % Input how many images are in each class
p = **;  % Input how many pixels are in one image
load face                            % read in the training images (columns of X)

% calculate the mean for each class and the total mean
m = zeros(p,N);
Sw = zeros(p,p);
Sb = zeros(p,p);
total_m = mean(X')';                 % total mean of all training images
for i = 1:N
    m(:,i) = mean(X(:,((i-1)*M+1):i*M)')';
    % calculate the within-class scatter matrix
    S = zeros(p,p);
    for j = ((i-1)*M+1):i*M
        S = S + (X(:,j)-m(:,i))*(X(:,j)-m(:,i))';
    end
    Sw = Sw + S;
    % calculate the between-class scatter matrix
    Sb = Sb + (m(:,i)-total_m)*(m(:,i)-total_m)';
end

% calculate the generalized eigenvectors and eigenvalues: Sb*v = lambda*Sw*v
[evec,eval] = eig(Sb,Sw);
eval = real(diag(eval));
evec = real(evec);

% ignore the zero eigenvalues and sort in descending order
nz_eval_ind = find(eval > 0.0001);
nz_eval = eval(nz_eval_ind);
for i = 1:length(nz_eval_ind)
    nz_evec(:,i) = evec(:,nz_eval_ind(i));
end
[seval,Ind] = sort(nz_eval);
Ind = flipud(Ind);

% build the eigenspace (at most N-1 non-zero generalized eigenvalues)
v = ones(p,N-1);
for i = 1:N-1
    v(:,i) = nz_evec(:,Ind(i));
end

% project the images onto the eigenspace
Y = zeros(N-1,N*M);
for i = 1:N*M
    Y(:,i) = v'*X(:,i);
end

26 Experimental Results -1
Variation in facial expression, eyewear, and lighting.
Input: 160 images of 16 people.
Training: 159 images. Test: 1 image.
With glasses / without glasses; 3 lighting conditions; 5 expressions.

27 Experimental Results - 2

28 Conclusion
All methods perform well when conditions do not change much.
Fisherfaces gives the best results.

29 The End Thank You!

