Chapter 14 Speaker Recognition 14.1 Introduction to speaker recognition 14.2 The basic problems for speaker recognition 14.3 Approaches and systems 14.4 Language Identification

14.1 Introduction to speaker recognition (1) Speaker recognition tries to extract the personal (speaker-specific) factors of speech, while speech recognition extracts the factors common to all speakers. Speaker Verification: is the speaker who he or she claims to be? Speaker Identification: which speaker in a given list is talking?

Introduction to speaker recognition (2) Text-dependent and text-independent speaker recognition. Applications of speaker recognition: business systems, legal systems, military systems, security systems. Hard problem: which features are effective and reliable?

14.2 The basic problems of speaker recognition (1) System diagram. Training: collect utterances and estimate model parameters for all speakers. Verification: compare the measured parameters with those of the claimed speaker; if the difference is below some threshold the speaker is accepted, otherwise rejected.

The basic problems of speaker recognition (2) Recognition: compare the extracted parameters with the reference parameters of all speakers and choose the speaker with the minimum distance. Three basic problems: selecting the parameters; specifying a similarity measure that keeps the computation simple and reliable; updating the reference parameters to adapt to the users.
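The minimum-distance decision rule above can be sketched as follows (a toy NumPy illustration; real systems compare frame-level feature sequences with more refined distance or likelihood measures than plain Euclidean distance between single vectors):

```python
import numpy as np

def identify_speaker(test_params, reference_params):
    """Return the index of the reference speaker whose parameter
    vector is closest (Euclidean distance) to the test parameters."""
    distances = [np.linalg.norm(test_params - ref) for ref in reference_params]
    return int(np.argmin(distances))

# Hypothetical 2-dimensional reference parameters for two speakers.
refs = [np.array([0.0, 0.0]), np.array([5.0, 5.0])]
print(identify_speaker(np.array([4.8, 5.1]), refs))  # closest to speaker 1
```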

The basic problems of speaker recognition (3) Design trade-offs. For a speaker verification system, two important measures are the False Rejection rate (FR) and the False Acceptance rate (FA). Both depend on the acceptance threshold: tightening the threshold lowers FA but raises FR, and vice versa, and the relative cost of the two errors differs between applications. Performance also varies with the number of speakers.
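The dependence of FR and FA on the threshold can be made concrete with a small sketch (hypothetical similarity scores, where higher means more like the claimed speaker; the score lists and threshold are illustrative, not from the text):

```python
import numpy as np

def fr_fa_rates(genuine_scores, impostor_scores, threshold):
    """False rejection: genuine trials scoring below the threshold.
    False acceptance: impostor trials scoring at or above it."""
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    fr = np.mean(genuine < threshold)
    fa = np.mean(impostor >= threshold)
    return fr, fa

# Toy scores: raising the threshold pushes FR up and FA down.
fr, fa = fr_fa_rates([0.9, 0.8, 0.4], [0.3, 0.5, 0.2], threshold=0.45)
```

Sweeping the threshold and plotting FR against FA traces the system's operating curve; the point where FR = FA is the commonly reported equal error rate.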

The basic problems of speaker recognition (4) Updating of reference templates. Performance evaluation. Basic characteristics of speakers: ideally the features should effectively distinguish different speakers while remaining relatively stable as the speech varies; they should be easy to extract and hard to mimic.

The basic problems of speaker recognition (5) Evaluating a parameter's effectiveness (F-ratio): F = <(μi − μ)²>i / <(xk(i) − μi)²>k,i where xk(i) is the parameter for the k-th utterance of the i-th speaker, <·>i denotes averaging over speakers, <·>k denotes averaging over the utterances of one speaker, μi = <xk(i)>k is the estimated mean of the i-th speaker, and μ = <μi>i is the global mean. For multi-dimensional parameters, the between-speaker scatter is B = <(μi − μ)(μi − μ)T>i and the within-speaker scatter is W = <(xk(i) − μi)(xk(i) − μi)T>k,i; the divergence, averaged over speaker pairs i, j, is D = Tr(W⁻¹B).
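A minimal NumPy sketch of these scatter matrices and the divergence, assuming a toy data layout where every speaker has the same number of utterances (array of shape speakers × utterances × dimensions; the layout is an assumption for illustration):

```python
import numpy as np

def scatter_matrices(X):
    """X[i, k, :] is the feature vector of the k-th utterance of
    speaker i. Returns between-speaker scatter B and within-speaker
    scatter W, averaged as in the definitions above."""
    mu_i = X.mean(axis=1)                  # per-speaker means, shape (S, n)
    mu = mu_i.mean(axis=0)                 # global mean, shape (n,)
    d = mu_i - mu
    B = d.T @ d / X.shape[0]               # average over speakers
    r = (X - mu_i[:, None, :]).reshape(-1, X.shape[2])
    W = r.T @ r / r.shape[0]               # average over speakers and utterances
    return B, W

def divergence(B, W):
    """D = Tr(W^-1 B): large when speakers are well separated
    relative to their within-speaker variation."""
    return float(np.trace(np.linalg.inv(W) @ B))
```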

The basic problems of speaker recognition (6) Feature examples: (1) LPC and its derived parameters; (2) parameters derived from the speech spectrum; (3) mixed parameter sets. Approaches to speaker recognition: (1) template matching.

The basic problems of speaker recognition (7) (2) Probabilistic model methods. (3) Text-independent speaker recognition based on VQ. (4) Neural networks: we have built a text-dependent speaker recognition system using a BP (back-propagation) network.

14.3 Approaches and systems (1) GMM (Gaussian Mixture Model). A GMM is a probabilistic model; every speaker corresponds to one GMM. P(x|λ) = Σ Pi bi(x), i = 1~M, i.e. P(x|λ) is the weighted sum of M normal densities bi, where x is an n-dimensional observation vector, the Pi are the mixture weights, and the bi are n-dimensional Gaussian density functions.

Approaches and systems (2) bi(x) = {1/[(2π)^(n/2) |Ci|^(1/2)]} exp{−(x − μi)T Ci⁻¹ (x − μi)/2} where μi is the mean vector and Ci is the covariance matrix, so the model parameters are λ = {Pi, μi, Ci}, i = 1~M. MLE of the GMM parameters: assume X = {xt}, t = 1~T, are the training feature vectors; the likelihood of model λ is P(X|λ) = Π P(xt|λ), t = 1~T.
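The density bi(x) and the log of the likelihood P(X|λ) can be sketched directly from these formulas (a didactic NumPy version with full covariances; practical implementations work in the log domain throughout and often use diagonal covariances):

```python
import numpy as np

def gmm_log_likelihood(X, weights, means, covs):
    """log P(X|λ) = Σ_t log Σ_i P_i b_i(x_t), with b_i the Gaussian
    density defined above. X has shape (T, n)."""
    n = X.shape[1]
    total = 0.0
    for x in X:
        p = 0.0
        for P_i, mu_i, C_i in zip(weights, means, covs):
            diff = x - mu_i
            norm = 1.0 / ((2 * np.pi) ** (n / 2) * np.sqrt(np.linalg.det(C_i)))
            p += P_i * norm * np.exp(-0.5 * diff @ np.linalg.inv(C_i) @ diff)
        total += np.log(p)
    return total
```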

Approaches and systems (3) The goal of training is to find λ0 such that P(X|λ0) is maximal: λ0 = argmaxλ P(X|λ). P(X|λ) is a nonlinear function of λ, so EM is used to find the optimal λ. Define Q(λ, λ') = Σ P(X, i|λ) log P(X, i|λ'), i = 1~M, where i indexes the Gaussian components. Then Q(λ, λ') = Σ Σ γt(i) log P'i b'i(xt), i = 1~M, t = 1~T.

Approaches and systems (4) where γt(i) = P(it = i|xt, λ) = Pi bi(xt) / Σ Pm bm(xt), m = 1~M. Setting the partial derivatives of Q with respect to Pi, μi, Ci (i = 1~M) to zero yields the reestimation formulas: P'i = (1/T) Σ γt(i), t = 1~T; μ'i = Σ γt(i) xt / Σ γt(i), i = 1~M.

Approaches and systems (5) and, for diagonal covariances, σ'i² = Σ γt(i) xt² / Σ γt(i) − μ'i², i = 1~M. Recognition algorithm: once models have been trained for all speakers, the speaker with the maximum log-likelihood score is chosen: S = argmaxk Σ log P(xt|λk), t = 1~T, where k ranges over the enrolled speakers.
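The reestimation formulas and the recognition rule can be sketched for the diagonal-covariance case (an illustrative NumPy implementation, not the exact system described; note the −μ'i² term in the variance update):

```python
import numpy as np

def em_step(X, weights, means, variances):
    """One EM reestimation pass for a diagonal-covariance GMM.
    X: (T, n); weights: (M,); means, variances: (M, n)."""
    T, n = X.shape
    M = len(weights)
    # E-step: responsibilities γ_t(i) = P_i b_i(x_t) / Σ_m P_m b_m(x_t)
    resp = np.zeros((T, M))
    for i in range(M):
        diff = X - means[i]
        norm = 1.0 / np.sqrt((2 * np.pi) ** n * np.prod(variances[i]))
        resp[:, i] = weights[i] * norm * np.exp(
            -0.5 * np.sum(diff ** 2 / variances[i], axis=1))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: the reestimation formulas above
    Ni = resp.sum(axis=0)                                    # Σ_t γ_t(i)
    new_w = Ni / T                                           # P'_i
    new_mu = (resp.T @ X) / Ni[:, None]                      # μ'_i
    new_var = (resp.T @ X ** 2) / Ni[:, None] - new_mu ** 2  # σ'_i²
    return new_w, new_mu, new_var

def identify(X, speaker_models, log_likelihood):
    """Recognition rule: argmax_k Σ_t log P(x_t | λ_k)."""
    scores = [log_likelihood(X, *model) for model in speaker_models]
    return int(np.argmax(scores))
```

In practice the EM step is iterated until the likelihood stops improving, and each enrolled speaker's GMM is trained on that speaker's own utterances.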

14.4 Language Identification (1) Principles. Languages can be recognized, and their cues exploited, at different levels: phonemes, syllable structure, prosodic features, lexical categories, syntax, and semantic networks. Structure of a language identification system. Different system types: HMM-based, phoneme-based.

Language Identification (2) Our experimental system uses the OGI corpus and currently covers four languages: English, Chinese, Spanish, and Japanese. It is phoneme-based: every language has a set of HMM models for its phonemes, connected so that every phoneme can be followed by any other phoneme. These models are trained on the corpus using the label files.

Language Identification (3) The system structure is similar to the one above. Every language has a network of HMM models. An incoming utterance (of arbitrary length) is fed into every language network, and each network outputs a probability value for the utterance. Comparing these values yields the decision: the language with the highest score is chosen. In our experiments the accuracy exceeded 95%.

Language Identification (4) (continued) Since the only decision is which language it is, testing with two or three utterances will give the correct answer with very high probability. This approach is simpler than a large-vocabulary word-based HMM system.