FACE RECOGNITION USING PRINCIPAL COMPONENT ANALYSIS (PCA) BY SANGEETHA LAXMAN MYSORE
FACE RECOGNITION?
- Identifying a person using facial features
- A machine learning technique is used
- Challenges: results should not be influenced by external parameters such as brightness, noise, visibility, etc.
- A few of the many applications: identity security, unlocking a phone, etc.
PCA?
- PCA is a mathematical procedure that converts a set of correlated images into a set of eigenfaces (or principal components)
- It can be thought of as revealing the internal structure of the data in a way that best explains the major features / directions of variance
- Used for building predictive models
- Reduces the dimensionality of the data, which reduces the computation
- A multivariate set of images can be visualized as a set of coordinates in a high-dimensional space
- PCA provides a low-dimensional view of the images (like a shadow), seen from the most informative viewpoint
- Each incoming image can be expressed as a weighted sum of the eigenfaces
- The goal is to calculate the weight vector
Understanding PCA …
- Each image's variation is determined relative to the average of all given images
- The number of computations is reduced by computing a smaller covariance matrix
- The eigenvectors of the covariance matrix yield the eigenfaces, which are used to determine the weight vectors
Algorithm for the training model
1. Calculate the average of all the training data
2. Subtract the average face from each face to form the difference matrix A
3. Calculate the covariance matrix – the product of A and its transpose
4. Calculate the eigenvectors – the eigenvectors of the covariance matrix, mapped back through the training data, give the eigenfaces
5. Calculate the weight vectors – the product of the eigenfaces and A transpose (i.e., project each mean-subtracted face onto the eigenfaces)
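The training steps above can be sketched in NumPy (this is not the app's original MATLAB/Android code; the array layout, function name, and the small-covariance trick of diagonalizing A·Aᵀ instead of the full pixel-space covariance are assumptions):

```python
import numpy as np

def train_eigenfaces(faces, num_components=10):
    """Train an eigenfaces model.

    faces: (n_images, n_pixels) array, one flattened grayscale image per row.
    Returns the average face, the eigenfaces, and the weight vectors.
    """
    # Step 1: average of all the training data.
    mean_face = faces.mean(axis=0)
    # Step 2: subtract the average face from each face.
    A = faces - mean_face
    # Step 3: small covariance matrix (n_images x n_images) instead of the
    # much larger pixel-space covariance.
    cov = A @ A.T
    # Step 4: eigenvectors of the covariance, mapped back through A
    # to obtain pixel-space eigenfaces.
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:num_components]
    eigenfaces = A.T @ eigvecs[:, order]            # (n_pixels, k)
    eigenfaces /= np.linalg.norm(eigenfaces, axis=0)
    # Step 5: weight vectors - projection of each mean-subtracted face
    # onto the eigenfaces.
    weights = A @ eigenfaces                        # (n_images, k)
    return mean_face, eigenfaces, weights
```

Each row of `weights` is one training face expressed in eigenface coordinates; these rows are what the recognition step compares against.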
Algorithm for the recognition model
1. For the test image, calculate its weight vector using the training eigenfaces
2. Calculate the distance between this weight vector and each training weight vector, and normalize
3. If any of these distances is less than a threshold, the face is recognized; otherwise it is not
4. The threshold is determined heuristically from the model training
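The recognition steps can likewise be sketched in NumPy (a minimal sketch, assuming the Euclidean distance as the similarity measure and the data layout from the training sketch; the threshold value is application-specific):

```python
import numpy as np

def recognize(test_face, mean_face, eigenfaces, train_weights, threshold):
    """Classify a flattened test face against trained weight vectors.

    Returns the index of the best-matching training image, or None if the
    smallest distance exceeds the (heuristically chosen) threshold.
    """
    # Project the mean-subtracted test image onto the eigenfaces.
    w = (test_face - mean_face) @ eigenfaces
    # Euclidean distance to every training weight vector.
    distances = np.linalg.norm(train_weights - w, axis=1)
    best = int(np.argmin(distances))
    return best if distances[best] < threshold else None
```

For example, with two training weight vectors [1, 0] and [0, 1], a test face that projects to [1, 0] matches index 0, while a face projecting far from both (with a small threshold) is rejected as unrecognized.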
Training data is created dynamically using the SAVE button; any image can then be tested using the VERIFY button
Source code for training data
- Verified using MATLAB
- Data is stored in the device's internal memory
Source code for testing incoming data
- The training data stored in the device's internal memory is used here
Source code to capture click events from the SAVE & VERIFY buttons
SHORTCOMINGS
- No pre-processing is done on the incoming image, so shadows, image brightness, image contrast, image resolution, face tilt, etc., affect the results. This can be addressed by an enhancement incorporating an RGB2YUV conversion.
- All images need to be at a constant distance from the camera, roughly 10 cm to 2 feet. This can be addressed by incorporating LDHM (long-distance and high-magnification) techniques.
- Because the trained data is stored in place and can grow dynamically, the app can crash or degrade in performance as storage fills. This can be handled by providing a clear-data option that retains some trained data, OR by using external storage such as an SD card.
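The RGB2YUV enhancement mentioned above would let recognition run on the brightness (Y, or luma) channel alone. A minimal sketch of extracting luma with the standard BT.601 weights (a generic conversion, not the app's actual pre-processing code):

```python
import numpy as np

def rgb_to_luma(rgb):
    """Convert an (H, W, 3) RGB image to its luma (Y) channel using the
    BT.601 weights, discarding chrominance so that later lighting
    normalization and PCA operate on a single grayscale plane."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return 0.299 * r + 0.587 * g + 0.114 * b
```

Histogram equalization or mean/variance normalization on this Y channel would then reduce the sensitivity to shadows and brightness noted above.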
REFERENCES
- https://onionesquereality.wordpress.com/2009/02/11/face-recognition-using-eigenfaces-and-distance-classifiers-a-tutorial/
- https://github.com/Lauszus/FaceRecognitionApp
- https://github.com/Lauszus/Eigenfaces
- http://www.iosrjournals.org/iosr-jece/papers/Vol9-Issue1/Version-6/A09160105.pdf
- http://roadsafellc.com/NCHRP22-24/Literature/Papers/Metrics/Matching%20Patterns%20from%20Historical%20Data%20Using%20PCA%20and%20Distance%20Similarity%20Factors.pdf
- http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.301.437&rep=rep1&type=pdf
- https://pdfs.semanticscholar.org/d3b0/c0d2958bd1692c96f426e110446328d8248d.pdf
Q & A
THANK YOU