1 Ajay Kumar, Member, IEEE, and David Zhang, Senior Member, IEEE

2 Introduction This paper proposes a new bimodal biometric system using feature-level fusion of hand shape and palm texture. Common biometric traits include the fingerprint, iris, palmprint, and voice; this work further employs feature subset selection.

3 Proposed Work Feature subset selection helps to identify and remove much of the irrelevant and redundant features. The proposed work focuses on: improving the performance of hand-shape recognition by exploring new features, and investigating palmprint recognition in the frequency domain using the popular discrete cosine transform coefficients.
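The slide does not name a specific selection algorithm; as a hedged illustration, the following sketch uses scikit-learn's greedy sequential forward selection with a simple k-NN estimator (both choices are assumptions, not taken from the paper):

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.neighbors import KNeighborsClassifier

# Illustrative data: 23 hand-shape features (f1-f23) for 100 hand images,
# 10 images per subject. Real features would come from the extraction steps
# described on the later slides.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 23))
y = np.repeat(np.arange(10), 10)

# Greedy forward selection: keep the feature subset that best supports a
# simple k-NN classifier, discarding irrelevant and redundant features.
selector = SequentialFeatureSelector(
    KNeighborsClassifier(n_neighbors=1),
    n_features_to_select=10,
    direction="forward",
)
selector.fit(X, y)
print("selected feature indices:", np.flatnonzero(selector.get_support()))
```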

4 Proposed Work Evaluating the performance gain from feature subset selection and feature combination with several classifiers: naive Bayes, support vector machines (SVM), feed-forward neural networks (FFN), k-nearest neighbor (k-NN), and decision trees.

5 Proposed Work

6 AUTOMATED EXTRACTION OF HAND-SHAPE AND PALMPRINT IMAGES Two images are extracted from each acquired hand image: 1. a binary image depicting the hand shape, and 2. a gray-level image containing the palmprint texture.

7 AUTOMATED EXTRACTION OF HAND-SHAPE AND PALMPRINT IMAGES The thresholding limit η is computed by maximizing the objective function Jop(η), where P1(η) and P2(η) are the numbers of pixels in classes 1 and 2, and µ1(η) and µ2(η) are the corresponding sample means.
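The transcript omits the equation itself; assuming the standard Otsu-style between-class variance criterion that these definitions imply, the objective function would take the form:

```latex
% Assumed Otsu-style between-class variance criterion, reconstructed from
% the definitions of P_1(\eta), P_2(\eta), \mu_1(\eta), \mu_2(\eta) above.
J_{op}(\eta) = P_1(\eta)\, P_2(\eta)\, \bigl[\mu_1(\eta) - \mu_2(\eta)\bigr]^{2}
```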

8 AUTOMATED EXTRACTION OF HAND-SHAPE AND PALMPRINT IMAGES The orientation of each binarized image P(x,y) is estimated from the parameters of the best-fitting ellipse. The counterclockwise rotation of the major axis relative to the normal axis is used to approximate the orientation θ.

9 AUTOMATED EXTRACTION OF HAND-SHAPE AND PALMPRINT IMAGES Here ρ11, ρ22, and ρ12 are the normalized second-order moments of the pixels in the image P(x,y), and (cx, cy) denotes the location of its centroid.
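The slide's equation is not reproduced in the transcript; under the usual moment-based definition of the best-fitting ellipse (an assumption, not taken from the slides), the orientation would be:

```latex
% Standard orientation of the best-fitting ellipse from normalized
% second-order moments (assumed form).
\theta = \frac{1}{2}\,\tan^{-1}\!\left(\frac{2\,\rho_{12}}{\rho_{11}-\rho_{22}}\right)
```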

10 AUTOMATED EXTRACTION OF HAND-SHAPE AND PALMPRINT IMAGES Isolated foreground blobs and holes are removed by morphological preprocessing. The distance transform of every pixel in the hand-shape image is used to estimate the center of the palmprint: the location (u,v) of the pixel with the highest distance-transform value is obtained. All the gray-level pixels from the original hand image in a fixed-size square region, centered at (u,v) and oriented along θ, are used as the palmprint image.
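A minimal sketch of this palm-center localization and cropping step (the function name, crop size, and the axis-aligned crop are illustrative assumptions; the paper additionally orients the crop along the estimated hand direction):

```python
import numpy as np
from scipy import ndimage

def extract_palmprint(hand_mask, gray_image, half_size=64):
    """Locate the palm center with the distance transform and crop a square
    palmprint region around it (axis-aligned sketch)."""
    # Morphological preprocessing: fill holes and remove isolated blobs.
    mask = ndimage.binary_fill_holes(hand_mask)
    mask = ndimage.binary_opening(mask, iterations=2)

    # Distance transform: each foreground pixel's distance to the nearest
    # background pixel; its maximum lies near the palm center (u, v).
    dist = ndimage.distance_transform_edt(mask)
    u, v = np.unravel_index(np.argmax(dist), dist.shape)

    # Fixed-size square region centered at (u, v); assumes the center is far
    # enough from the image border.
    return gray_image[u - half_size:u + half_size, v - half_size:v + half_size]
```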

11 AUTOMATED EXTRACTION OF HAND-SHAPE AND PALMPRINT IMAGES

12 Palmprint Features The discrete cosine transform (DCT) maps a Q × R spatial image block Ω to its values in the frequency domain (Fig. 4). The feature vector for every palmprint image is formed by computing the standard deviation of the significant DCT coefficients in each of these blocks.
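A hedged sketch of these block-wise DCT features (the block size and the definition of the "significant" low-frequency coefficients are illustrative choices, not the paper's exact parameters):

```python
import numpy as np
from scipy.fft import dctn

def palmprint_dct_features(palm, block=24, keep=10):
    """Split the palmprint into non-overlapping blocks, take each block's
    2-D DCT, and record the standard deviation of its low-frequency
    coefficients as one feature per block."""
    rows, cols = palm.shape
    feats = []
    for r in range(0, rows - block + 1, block):
        for c in range(0, cols - block + 1, block):
            coeffs = dctn(palm[r:r + block, c:c + block].astype(float), norm="ortho")
            # Treat the top-left (low-frequency) sub-block as the significant coefficients.
            feats.append(np.std(coeffs[:keep, :keep]))
    return np.array(feats)
```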

13 Palmprint Features

14 Hand-Shape Features We investigated seven such shape properties to improve on the success of prior methods, i.e., perimeter (f1), solidity (f2), extent (f3), eccentricity (f4), the x-y position of the centroid relative to the shape boundary (f5 - f6), and convex area (f7). In addition, 16 geometrical features from the hand shape, as proposed in prior work, were also obtained: four finger lengths (f8 - f11), eight finger widths (f12 - f19), palm width (f20), palm length (f21), hand area (f22), and hand length (f23).
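A sketch of how the seven shape properties (f1-f7) could be measured from the binary hand mask with scikit-image's regionprops; this is a stand-in for the paper's own measurements, and computing the relative centroid position (f5-f6) against the bounding box is an assumption:

```python
import numpy as np
from skimage import measure

def hand_shape_properties(hand_mask):
    """Return f1-f7 for the largest connected component of the hand mask."""
    labeled = measure.label(hand_mask.astype(np.uint8))
    region = max(measure.regionprops(labeled), key=lambda r: r.area)
    cy, cx = region.centroid
    min_r, min_c, max_r, max_c = region.bbox
    return np.array([
        region.perimeter,                 # f1: perimeter
        region.solidity,                  # f2: solidity (area / convex area)
        region.extent,                    # f3: extent (area / bounding-box area)
        region.eccentricity,              # f4: eccentricity of the fitted ellipse
        (cx - min_c) / (max_c - min_c),   # f5: relative x position of the centroid
        (cy - min_r) / (max_r - min_r),   # f6: relative y position of the centroid
        region.convex_area,               # f7: convex area ("area_convex" in newer scikit-image)
    ])
```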

15 CLASSIFICATION SCHEMES Naive Bayes (normal): traditionally assumes that the feature values are normally distributed. Naive Bayes (kernel density estimation): the distribution of the features was also estimated using nonparametric kernel density estimation. Naive Bayes (multinomial). K-nearest neighbor (k-NN): minimum Euclidean distance between the query feature vector and all the prototype training data. Support vector machine (SVM).
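A minimal scikit-learn sketch of these classifiers on synthetic data (hyperparameters and data are illustrative; the kernel-density naive Bayes variant has no direct scikit-learn equivalent and is omitted):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB, MultinomialNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Illustrative non-negative feature matrix (so MultinomialNB also applies)
# for 10 subjects with 10 samples each.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(100, 23)))
y = np.repeat(np.arange(10), 10)

classifiers = {
    "naive Bayes (normal)": GaussianNB(),          # assumes normally distributed features
    "naive Bayes (multinomial)": MultinomialNB(),  # for count-like, non-negative features
    "k-NN": KNeighborsClassifier(n_neighbors=1, metric="euclidean"),
    "SVM": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```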

16 CLASSIFICATION SCHEMES Feed-forward neural network (FFN): a linear activation function for the last layer, while the sigmoid activation function was employed for the other layers. C4.5 decision tree. Logistic model tree (LMT).
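A hedged scikit-learn sketch of the remaining classifiers; note that MLPClassifier applies a softmax output rather than the slide's linear last layer, scikit-learn's tree is CART rather than C4.5, and a logistic model tree is not available in scikit-learn:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Feed-forward network with sigmoid (logistic) hidden units, roughly
# mirroring the slide's FFN (output layer differs, as noted above).
ffn = MLPClassifier(hidden_layer_sizes=(50,), activation="logistic", max_iter=2000)

# Entropy-based decision tree as a stand-in for the C4.5 tree.
tree = DecisionTreeClassifier(criterion="entropy")
```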

17 EXPERIMENTS AND RESULTS

18 - 20 (Experimental results: figures and tables only.)

21 CONCLUSION Hand-shape and palmprint image segmentation, and the combination of features from these two images, have been shown to be useful in achieving higher performance. Our experimental results in Section V suggested the usefulness of shape properties (e.g., perimeter, extent, convex area), which can be effectively used to enhance the performance of hand-shape recognition. Although more work remains to be done, our results to date indicate that the combination of hand-shape and palmprint features constitutes a promising addition to biometrics-based personal recognition systems.

22 THE END Thank you for listening.

