
Slide 1: Fusion of HMM's Likelihood and Viterbi Path for On-line Signature Verification
Bao Ly Van, Sonia Garcia-Salicetti, Bernadette Dorizzi
Institut National des Télécommunications
Prague, May 2004. Presented by Bao Ly Van.

Slide 2: Overview
– HMM for online signature
– Likelihood approach: exploit the normalized log-likelihood information given by the HMM
  – Comparison with Dolfing's system on the Philips database
    [Ref] J.G.A. Dolfing, "Handwriting recognition and verification, a hidden Markov approach", Ph.D. thesis, Philips Electronics N.V., 1998.
– Viterbi path approach (new): exploit the Viterbi path information given by the HMM
  – Motivation of the Viterbi path approach
  – Fusion of likelihood and Viterbi path
– Experiments and results

Slide 3: Introduction to Online Signature
Captured by a digitizing tablet, a signature is a sequence of sampled points.
Raw data at each point:
– Coordinates: x(t), y(t)
– Pressure: p(t)
– Pen inclination angles: altitude (0°–90°) and azimuth (0°–359°)

Slide 4: HMM Architecture
– Continuous, left-to-right HMM
– Mixture of 4 Gaussians per state
– Personalized number of states, based on the rule of ~30 points to estimate a Gaussian
– Example: with 5 training signatures, the personalized number of states for one signer is 10
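The state-sizing rule above can be sketched as follows. The exact formula is an assumption here, derived only from the slide's rule of thumb (~30 points to estimate a Gaussian, 4 Gaussians per state):

```python
def personalized_state_count(total_points, gaussians_per_state=4,
                             points_per_gaussian=30, max_states=None):
    """Assumed sizing rule: each Gaussian needs ~30 training points, so a
    state (a 4-Gaussian mixture) needs ~120 points of training data."""
    n = max(1, total_points // (gaussians_per_state * points_per_gaussian))
    if max_states is not None:
        n = min(n, max_states)
    return n

# e.g. 5 training signatures of ~240 points each -> 10 states
n_states = personalized_state_count(5 * 240)
```

With this formula, a signer whose 5 training signatures total 1200 sampled points gets a 10-state model, matching the example on the slide.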

Slide 5: Feature Extraction
Features extracted from the coordinates:
– Velocity
– Acceleration
– Curvature radius
– Coordinates normalized by the gravity center
– Length-to-width ratio
– ...
25 features at each point of the signature: a signature becomes a sequence of feature vectors.
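The dynamic features can be estimated from the raw coordinates by finite differences. This is a minimal sketch covering only speed and curvature radius; the authors' exact 25-feature set is not reproduced here:

```python
import math

def point_features(xs, ys, i, dt=1.0):
    """Assumed finite-difference estimates at sample i (needs 0 < i < len-1)."""
    vx = (xs[i + 1] - xs[i - 1]) / (2 * dt)           # velocity (central diff.)
    vy = (ys[i + 1] - ys[i - 1]) / (2 * dt)
    ax = (xs[i + 1] - 2 * xs[i] + xs[i - 1]) / dt**2  # acceleration
    ay = (ys[i + 1] - 2 * ys[i] + ys[i - 1]) / dt**2
    speed = math.hypot(vx, vy)
    # curvature radius = |v|^3 / |vx*ay - vy*ax| (infinite on straight segments)
    cross = vx * ay - vy * ax
    radius = speed**3 / abs(cross) if cross else float('inf')
    return speed, radius
```

On a straight stroke the cross term vanishes, so the curvature radius is infinite, as expected.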

Slide 6: Personalized Feature Normalization
Goals:
– Same variance for all features, so every feature carries the same importance
– A good choice of this variance leads to faster convergence
– Avoids overflow problems in the training phase
Implementation:
– The normalization factors (one per feature) of each signer are stored with his/her signature model (HMM)
– A test signature is normalized according to these factors
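A minimal sketch of the per-signer normalization described above, assuming the stored factors simply rescale each feature to unit variance (the actual target variance used by the authors is not stated here):

```python
import statistics

def fit_norm_factors(train_vectors):
    """One factor per feature, computed from a signer's training signatures
    and stored alongside that signer's HMM."""
    n_feat = len(train_vectors[0])
    factors = []
    for j in range(n_feat):
        col = [v[j] for v in train_vectors]
        sd = statistics.pstdev(col)
        factors.append(1.0 / sd if sd else 1.0)
    return factors

def normalize(vector, factors):
    """Apply the signer's stored factors to one feature vector of a test signature."""
    return [x * f for x, f in zip(vector, factors)]
```

After normalization every feature has the same variance, so no single feature dominates the HMM's Gaussians.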

Slide 7: HMM Likelihood Approach
– Log-likelihood of a signature, normalized by the signature length (LLN)
– Score based on the distance between the LLN of the test signature and the average LLN of the training signatures: |LLN − LLN_mean|
– This distance is converted to a similarity in [0, 1] (the likelihood score)
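The likelihood score can be sketched as below. The exp(−d) mapping from distance to similarity is an assumption, since the slide only states that |LLN − LLN_mean| is converted to a similarity in [0, 1]:

```python
import math

def likelihood_score(log_likelihood, length, lln_mean, scale=1.0):
    """LLN = log-likelihood normalized by signature length; the score maps
    the distance |LLN - LLN_mean| into (0, 1]. The exp(-d) conversion and
    the scale parameter are assumptions, not the authors' stated formula."""
    lln = log_likelihood / length
    return math.exp(-scale * abs(lln - lln_mean))
```

A test signature whose normalized log-likelihood equals the training average scores exactly 1; the score decays toward 0 as the distance grows.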

Slide 8: What Is the Viterbi Path Approach? (new)
Given a signature as input, the HMM (via the Viterbi algorithm) outputs both the normalized log-likelihood and the Viterbi path (VP).
The VP is the sequence of states that maximizes the likelihood of the test signature.

Slide 9: Representation of the Viterbi Path
– The VP generated by an N-state HMM is represented by an N-component segmentation vector (SV)
– Each component of the SV contains the number of points modeled by the corresponding state
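Building the segmentation vector from a decoded Viterbi path is a simple count of points per state:

```python
def segmentation_vector(viterbi_path, n_states):
    """Count how many sampled points the Viterbi path assigns to each of
    the N HMM states; `viterbi_path` is a sequence of state indices."""
    sv = [0] * n_states
    for state in viterbi_path:
        sv[state] += 1
    return sv
```

For a 3-state model, a path that spends 21, 30 and 27 points in states 0, 1 and 2 yields SV = (21, 30, 27), the genuine-signature example on the next slide.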

Slide 10: Complementarity between VP and LL
Genuine example: LL = −1166.10, LLN = −14.95, SV = (21, 30, 27)
Forgery example: LL = −296.46, LLN = −16.47, SV = (18, 0, 0)
Genuine and forged signatures can have very close normalized log-likelihoods even though their VPs (SVs) are quite different.
It is easier to forge the system when the score is based only on the normalized likelihood.

Slide 11: How to Use the VP (SV) Information
– The SVs of the HMM's training signatures (SV_1, SV_2, ..., SV_K) are saved as references
– The test signature's SV is compared to the references with a Hamming-style distance, and the average distance is computed
– The average distance is converted to a similarity in [0, 1] (the Viterbi score)
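A hedged sketch of the Viterbi score: the city-block distance and the 1/(1 + d) similarity mapping below are assumptions, since the slide names a Hamming-style distance and an average over the K references without giving exact formulas:

```python
def viterbi_score(test_sv, reference_svs):
    """Average distance between the test signature's segmentation vector
    and the stored reference SVs, mapped to a similarity in [0, 1].
    City-block distance and the 1/(1+d) mapping are illustrative choices."""
    dists = [sum(abs(a - b) for a, b in zip(test_sv, ref))
             for ref in reference_svs]
    avg = sum(dists) / len(dists)
    return 1.0 / (1.0 + avg)
```

A test SV identical to the references scores 1; an SV like (18, 0, 0) against references near (21, 30, 27) scores close to 0, which is what makes the VP information discriminative even when the likelihoods are close.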

Slide 12: Viterbi Score vs. Likelihood Score
– Important overlap between genuine and forged signatures when using only one score
– The Viterbi and likelihood scores are complementary
– A simple arithmetic mean is used for fusion (no extra training)
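Since both scores already lie in [0, 1], the fusion is just their arithmetic mean:

```python
def fused_score(likelihood_score, viterbi_score):
    """Fusion by simple arithmetic mean of the two [0, 1] scores --
    no extra training or learned weights needed."""
    return 0.5 * (likelihood_score + viterbi_score)
```

An accept/reject decision is then taken by thresholding the fused score.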

Slide 13: Experiments Overview
– Protocol P1: exploits only the likelihood score on the Philips database (with the same protocol as Dolfing)
  [Ref] J.G.A. Dolfing, "Handwriting recognition and verification, a hidden Markov approach", Ph.D. thesis, Philips Electronics N.V., 1998.
– Protocol P2: performs fusion of the 2 scores on the Philips database
– Protocol P3: performs fusion of the 2 scores on the BIOMET database

Slide 14: P1 — Likelihood Score on the Philips Database
– 15 signatures to train the HMM
– Repeated 10 times for robust results
– Our result is 0.95% EER, compared to 2.2% EER for Dolfing (1998)

N            0.7   1     1.3   1.6   2     2.5   3.2   6     10
TE min (%)   1.32  1.59  0.97  0.92  0.88  0.97  1.10  1.23  1.98
EER (%)      1.35  2.04  1.02  0.96  0.95  1.03  1.13  1.24  1.99  2.02

Slide 15: P2 — Fusion on the Philips Database
– Only 5 signatures to train the HMM
– Repeated 50 times for robust results
– Fusion lowers the error rate by 15% (compared to likelihood alone)

            Likelihood  Viterbi Path  Fusion
TE min (%)  3.73        7.66          3.26
EER (%)     4.18        8.12          3.54

Slide 16: P3 — Fusion on the BIOMET Database
– 5 signatures to train the HMM
– Genuine tests on two sessions
– Repeated 50 times for robust results
– Fusion lowers the error rate by a factor of 2 (compared to likelihood alone)

Genuine test data                Likelihood  Viterbi Path  Fusion
No time variability  TE min (%)  5.27        3.71          2.47
                     EER (%)     6.45        4.07          2.84
Time variability     TE min (%)  14.30       7.44          6.95
(5 months before)    EER (%)     16.70       9.21          8.57

Slide 17: P3 — Confidence Level on 50 Trials

Slide 18: Conclusions
– We have built an HMM-based system and introduced 2 measures of information: the likelihood score and the Viterbi score
– We have compared both scores on two databases: Philips and BIOMET
– The new approach using VP information can give better results than the LL approach (on BIOMET)
– Fusion of both scores improves results, which shows their complementarity

Slide 19: Thank you for your attention!

Slide 20: Protocol 1 — Only Likelihood
Philips database:
– 51 signers, 30 genuine signatures and about 70 forgeries per signer
– High-quality forgeries
Dolfing's protocol:
– 15 genuine signatures to train the HMM
– 15 other genuine signatures plus the forgeries to test the HMM (~4000 signatures)
– Fixed partition of training and testing genuine signatures
Our result is 0.95% EER, compared to 2.2% EER for Dolfing (1998). Mean result of 10 trials:

N            0.7   1     1.3   1.6   2     2.5   3.2   6     10
TE min (%)   1.32  1.59  0.97  0.92  0.88  0.97  1.10  1.23  1.98
EER (%)      1.35  2.04  1.02  0.96  0.95  1.03  1.13  1.24  1.99  2.02

Slide 21: Protocol 2 — Fusion on the Philips Database
Protocol:
– Only 5 signatures to train the HMM, randomly selected from 30
– Test on the remaining 25 genuine signatures and the forgeries
– Repeated 50 times for robust results
Fusion lowers the error rate by 15% (compared to likelihood alone):

            Likelihood  Viterbi Path  Fusion
TE min (%)  3.73        7.66          3.26
EER (%)     4.18        8.12          3.54

Slide 22: Protocol 3 — Fusion on BIOMET
BIOMET database:
– 87 signers
– Two sessions spaced 5 months apart: 5 + 10 genuine signatures, 12 forgeries per signer
Protocol:
– 5 signatures (2nd session) to train the HMM, randomly selected from 10
– Test on the remaining 5 genuine signatures of the 2nd session, on the 5 genuine signatures of the 1st session, and on the forgeries
– Repeated 50 times for robust results
Fusion lowers the error rate by a factor of 2 (compared to likelihood alone):

Genuine test data                Likelihood  Viterbi Path  Fusion
2nd session          TE min (%)  5.27        3.71          2.47
                     EER (%)     6.45        4.07          2.84
1st session          TE min (%)  14.30       7.44          6.95
(5 months before)    EER (%)     16.70       9.21          8.57
