Conditional Random Fields for ASR
Jeremy Morris
July 25, 2006

Overview
► Problem Statement (Motivation)
► Conditional Random Fields
► Experiments
  • Attribute Selection
  • Experimental Setup
► Results
► Future Work

Problem Statement
► Developed as part of the ASAT Project (Automatic Speech Attribute Transcription)
► Goal: Develop a system for bottom-up speech recognition using 'speech attributes'

Speech Attributes?
► Any information that could be useful for recognizing the spoken language
  • Phonetic attributes
  • Speaker attributes (gender, age, etc.)
  • Any other useful attributes that could be used for speech recognition
  • Note that there is no guarantee that attributes will be independent of each other
► One part of this project is to explore ways to create a framework for easily combining new features for experimental purposes
► Examples:
  • /d/  (manner: stop, place of articulation: dental, voicing: voiced)
  • /t/  (manner: stop, place of articulation: dental, voicing: unvoiced)
  • /iy/ (height: high, backness: front, roundness: nonround)
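
To make the attribute examples concrete, here is a minimal sketch of one way such a table could be represented; the names and values mirror the examples above, but the structure is illustrative, not the project's actual feature set:

```python
# Hypothetical phonological attribute table for the example phones above.
# Attribute names/values are illustrative; the actual ASAT feature set may differ.
ATTRIBUTES = {
    "/d/":  {"manner": "stop", "place": "dental", "voicing": "voiced"},
    "/t/":  {"manner": "stop", "place": "dental", "voicing": "unvoiced"},
    "/iy/": {"height": "high", "backness": "front", "roundness": "nonround"},
}

# Attributes are not independent: knowing manner == "stop" already
# constrains which other attribute values can co-occur.
print(ATTRIBUTES["/d/"]["voicing"])  # -> "voiced"
```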

Evidence Combination
► Two basic ways to build hypotheses:
  • Top Down: generate a hypothesis, then see if the data fits the hypothesis
  • Bottom Up: examine the data, then search for a hypothesis that fits

Top Down
► Traditional Automatic Speech Recognition (ASR) systems use a top-down approach (HMMs)
  • Hypothesis is the phone we are predicting, e.g. /iy/
  • Data is some encoding X of the acoustic speech signal
  • A likelihood of the signal given the phone label, P(X|/iy/), is learned from the data
  • A prior probability for the phone label, P(/iy/), is learned from the data
  • These are combined through Bayes' rule to give us the posterior probability (see the sketch below)
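
A minimal sketch of the Bayes' rule combination described above, with made-up numbers standing in for learned likelihoods and priors:

```python
import numpy as np

# Toy top-down combination: posterior is proportional to likelihood * prior.
# All numbers are illustrative, not learned values.
phones = ["/iy/", "/ih/", "/eh/"]
prior = np.array([0.4, 0.35, 0.25])          # P(phone), learned from data
likelihood = np.array([0.02, 0.005, 0.001])  # P(X | phone) for one frame X

posterior = likelihood * prior
posterior /= posterior.sum()                 # Bayes' rule: P(phone | X)

for p, post in zip(phones, posterior):
    print(f"P({p} | X) = {post:.3f}")
```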

Bottom Up
► Bottom-up models have the same high-level goal: determine the label from the observation
  • But instead of a likelihood, the posterior probability P(/iy/|X) is learned directly from the data
► Neural networks have been used to learn these probabilities (see the sketch below)
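
As a minimal sketch of how a network outputs the posterior directly, here is the final softmax layer of such a classifier; the weights are random stand-ins for trained parameters, and the dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=39)            # one frame of acoustic features (e.g. 39-dim)
W = rng.normal(size=(3, 39))       # stand-in for trained network weights
b = np.zeros(3)                    # stand-in for trained biases

logits = W @ x + b
posterior = np.exp(logits - logits.max())
posterior /= posterior.sum()       # softmax: directly models P(phone | X)
print(posterior)                   # one probability per candidate phone
```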

Speech is a Sequence
► Speech is not a single, independent event
  • It is a combination of multiple events over time (e.g. /k/ followed by /iy/)
► A model that recognizes spoken language should take dependencies across time into account

Speech is a Sequence
► A top-down (generative) model can be extended into a time sequence as a Hidden Markov Model (HMM)
  • Now the likelihood of the data is over the entire sequence instead of a single phone (see the forward-algorithm sketch below)
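
A sketch of the sequence likelihood an HMM computes, via the standard forward algorithm; the two-state, two-symbol parameters here are toy values:

```python
import numpy as np

# Toy HMM: 2 phone states, 2 discrete observation symbols.
A  = np.array([[0.7, 0.3],     # A[i, j] = P(state j | state i)
               [0.0, 1.0]])
pi = np.array([1.0, 0.0])      # initial state distribution
B  = np.array([[0.6, 0.4],     # B[i, o] = P(symbol o | state i)
               [0.2, 0.8]])

obs = [0, 0, 1, 1, 1]          # an observation sequence X

# Forward algorithm: alpha[j] accumulates P(x_1..x_t, s_t = j)
alpha = pi * B[:, obs[0]]
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]

print("P(X) =", alpha.sum())   # likelihood of the whole sequence
```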

Speech is a Sequence
► Tandem is a method for using evidence bottom up (discriminative)
  • Hypothesis output of a neural network is used to train an HMM
  • Not a pure discriminative method, but a combination of generative and discriminative methods (see the sketch below)
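
The Tandem recipe is commonly described as log-compressing the network posteriors and decorrelating them (PCA/KLT) before Gaussian HMM modelling; a minimal sketch under that assumption, where `tandem_features` and all shapes are illustrative rather than the actual system here:

```python
import numpy as np

def tandem_features(posteriors, eps=1e-8):
    """Turn per-frame NN posteriors into HMM-friendly features:
    log-compress, then decorrelate with PCA (the usual Tandem recipe)."""
    logp = np.log(posteriors + eps)
    logp -= logp.mean(axis=0)                    # center each dimension
    _, _, vt = np.linalg.svd(logp, full_matrices=False)
    return logp @ vt.T                           # PCA-rotated features

rng = np.random.default_rng(0)
fake_posteriors = rng.dirichlet(np.ones(40), size=100)  # 100 frames, 40 phones
feats = tandem_features(fake_posteriors)
print(feats.shape)  # (100, 40) -> modeled by Gaussian-mixture HMM states
```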

Bottom-up Modelling
► The idea is to have a system that combines evidence layer by layer
  • Speech attributes contribute to phone attribute detection
  • Phone attributes contribute to "syllable" attribute detection, and so on
► Each layer combines information from previous layers to form its hypotheses
  • We want to do this probabilistically – no hard decisions
  • Note that there is no guarantee of independence among the observed speech features – in fact, they are often very dependent

Conditional Random Fields
► A form of discriminative modelling
  • Has been used successfully in various domains such as part-of-speech tagging and other natural language processing tasks
► Processes evidence bottom-up
  • Combines multiple features of the data
  • Builds the probability P(sequence | data)

Conditional Random Fields
► CRFs are based on the idea of Markov Random Fields
  • Modelled as an undirected graph connecting labels with observations
  • Observations in a CRF are not random variables
► Transition functions add associations between transitions from one label to another
► State functions help determine the identity of the state

Conditional Random Fields
► Example state feature function: f([x is stop], /t/), one possible state feature function for our attributes and labels
► Example state feature weight: λ = 10, one possible (strong) weight value for this state feature
► Example transition feature function: g(x, /iy/, /k/), one possible transition feature function, indicating /k/ followed by /iy/
► Example transition feature weight: μ = 4, one possible weight value for this transition feature
► The Hammersley-Clifford theorem states that a random field is an MRF iff it can be described in the exponential form below
  • The exponential is the sum of the clique potentials of the undirected graph
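
In standard linear-chain form, with λ the state-feature weights and μ the transition-feature weights from the examples above, the distribution is:

```latex
P(\mathbf{y} \mid \mathbf{x}) =
  \frac{1}{Z(\mathbf{x})}
  \exp\Bigg( \sum_{t} \bigg[ \sum_{i} \lambda_i\, f_i(y_t, \mathbf{x}, t)
    + \sum_{j} \mu_j\, g_j(y_{t-1}, y_t, \mathbf{x}, t) \bigg] \Bigg)
```

Here Z(x) sums the same exponential over all possible label sequences, making P(y|x) a proper distribution.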

Conditional Random Fields
► Conceptual overview
  • Each attribute of the data we are trying to model fits into a feature function that associates the attribute and a possible label (sketched in code below)
    ◦ A positive value if the attribute appears in the data
    ◦ A zero value if the attribute is not in the data
  • Each feature function carries a weight that gives the strength of that feature function for the proposed label
    ◦ High positive weights indicate a good association between the feature and the proposed label
    ◦ High negative weights indicate a negative association between the feature and the proposed label
    ◦ Weights close to zero indicate the feature has little or no impact on the identity of the label
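
A literal, toy reading of the slide's example feature functions and weights; function and variable names are hypothetical:

```python
def f_stop_t(x, y):
    """State feature: fires when the frame is a stop and the label is /t/."""
    return 1.0 if x.get("manner") == "stop" and y == "/t/" else 0.0

def g_k_iy(y_prev, y):
    """Transition feature: fires on a /k/ -> /iy/ label transition."""
    return 1.0 if (y_prev, y) == ("/k/", "/iy/") else 0.0

lam, mu = 10.0, 4.0   # the example weights from the slide

frame = {"manner": "stop", "voicing": "unvoiced"}
score = lam * f_stop_t(frame, "/t/") + mu * g_k_iy("/k/", "/iy/")
print(score)  # 14.0: this clique's contribution inside the CRF exponential
```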

Experiments
► Goal: Implement a Conditional Random Field model on ASAT-style data
  • Perform phone recognition
  • Compare results to those obtained via a Tandem system
► Experimental data
  • TIMIT read-speech corpus
  • Moderate-sized corpus of clean, prompted speech, complete with phonetic-level transcriptions

Attribute Selection
► Attribute detectors
  • ICSI QuickNet neural networks
► Two different types of attributes
  • Phonological feature detectors
    ◦ Place, manner, voicing, vowel height, backness, etc.
    ◦ Features are grouped into eight classes, with each class having a variable number of possible values based on the IPA phonetic chart
  • Phone detectors
    ◦ Neural network outputs based on the phone labels, one output per label
► Classifiers were applied to 2960 utterances from the TIMIT training set

Experimental Setup
► Code built on the Java CRF toolkit on SourceForge
  • Performs training to maximize the log-likelihood of the training set with respect to the model
  • Uses the limited-memory BFGS (L-BFGS) algorithm, driven by the gradient of the log-likelihood, to perform the optimization (see the training-loop sketch below)
► For CRF models, maximizing the log-likelihood of the empirical distribution of the data as predicted by the model is the same as maximizing the entropy (Berger et al.)
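
As a sketch of the training-loop pattern (not the Java toolkit's actual code): an L-BFGS driver minimizing a negative log-likelihood from its gradient, shown here with SciPy on a simple maxent/logistic objective standing in for the full CRF objective:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))              # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy binary labels

def nll_and_grad(w):
    """Negative log-likelihood and gradient of a maxent (logistic) model;
    a CRF trainer has the same shape, with Z(x) from forward recursions."""
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    nll = -np.sum(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    grad = X.T @ (p - y)
    return nll, grad

res = minimize(nll_and_grad, np.zeros(5), jac=True, method="L-BFGS-B")
print(res.x)  # learned feature weights
```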

Experimental Setup
► Outputs from the neural nets are themselves treated as feature functions for the observed sequence: each attribute/label combination gives us a value for one feature function
  • Note that this makes the feature functions non-binary (see the sketch below)
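
One way to picture these real-valued features; the exact indexing scheme here is an assumption for illustration, not the toolkit's actual layout:

```python
import numpy as np

def state_feature_values(nn_posteriors, label_index, n_labels):
    """For one frame: the feature for (attribute a, label l) takes the
    detector's posterior for a when the hypothesized label is l, else 0.
    nn_posteriors: vector of detector outputs for this frame."""
    values = np.zeros((n_labels, len(nn_posteriors)))
    values[label_index] = nn_posteriors      # real-valued, not 0/1
    return values.ravel()

frame_posteriors = np.array([0.85, 0.10, 0.03])  # e.g. stop / fricative / vowel
print(state_feature_values(frame_posteriors, label_index=2, n_labels=4))
```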

Results

Model                        Phone Accuracy   Phone Correct
Tandem [1] (phones)          60.48%           63.30%
Tandem [3] (phones)          67.32%           73.81%
CRF [1] (phones)             66.89%           68.49%
Tandem [1] (features)        61.48%           63.50%
Tandem [3] (features)        66.69%           72.52%
CRF [1] (features)           65.29%           66.81%
Tandem [1] (phones/feas)     61.78%           63.68%
Tandem [3] (phones/feas)     67.96%           73.40%
CRF (phones/feas)            68.00%           69.58%

Future Work
► More features
  • What kinds of features can we add to improve our transitions?
► Tuning
  • The HMM model has parameters that can be tuned for better performance; can we tweak the CRF to do something similar?
► Word recognition
  • How does this model do at the full word-recognition level, instead of just phones?
► Other corpora
  • Can we extend this method beyond TIMIT to different types of corpora (e.g., WSJ)?