CS 188: Artificial Intelligence Spring 2007 Speech Recognition 03/20/2007 Srini Narayanan – ICSI and UC Berkeley

Announcements. Midterm graded: median 78, mean 75; 25% above 90. HW 5 up today: BN inference, due 4/9 (2 weeks + Spring break).

Hidden Markov Models. Hidden Markov models (HMMs) have an underlying Markov chain over states X; you observe outputs (effects) E at each time step. As a Bayes' net: X1 → X2 → X3 → X4 → X5, with each state Xt emitting an observation Et. Several questions you can answer for HMMs. Last time: filtering, to track belief about the current X given evidence. Last time: Viterbi estimation, to compute the most likely state sequence.
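To make the filtering question concrete, here is a minimal sketch of the forward (filtering) update for a two-state discrete HMM; the transition and emission tables are hypothetical, chosen only for illustration.

```python
import numpy as np

# Minimal filtering sketch for a discrete HMM.
# Hypothetical tables: T[i, j] = P(X_t = j | X_{t-1} = i), E[j, o] = P(e = o | X = j).
T = np.array([[0.7, 0.3],
              [0.4, 0.6]])
E = np.array([[0.9, 0.1],
              [0.2, 0.8]])
prior = np.array([0.5, 0.5])

def filter_belief(observations):
    """Track the belief P(X_t | e_{1:t}) as each observation arrives."""
    belief = prior.copy()
    for obs in observations:
        belief = T.T @ belief        # time update: sum over previous states
        belief = belief * E[:, obs]  # observation update
        belief /= belief.sum()       # normalize
    return belief

print(filter_belief([0, 0, 1]))      # belief about X_3 after three observations
```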

Real HMM Examples  Speech recognition HMMs:  Observations are acoustic signals (continuous valued)  States are specific positions in specific words (so, tens of thousands)  Machine translation HMMs:  Observations are words (tens of thousands)  States are translation positions (dozens)  Robot tracking:  Observations are range readings (continuous)  States are positions on a map (continuous)

The Speech Recognition Problem  We want to predict a sentence given an acoustic sequence: w* = argmax_w P(w | a). The noisy channel approach: build a generative model of production (encoding), giving P(a | w). To decode, we use Bayes' rule to write w* = argmax_w P(w | a) = argmax_w P(a | w) P(w) / P(a). Now we have to find a sentence maximizing this product.

The noisy channel model  Ignoring the denominator leaves us with two factors: P(Source) and P(Signal|Source)
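In code, decoding with these two factors is just an argmax over candidate sentences in log space. A minimal sketch, with hypothetical stand-in scores for the acoustic model and language model:

```python
# Noisy-channel decoding sketch: choose w maximizing log P(signal | w) + log P(w).
def decode(candidates, acoustic_logprob, language_logprob):
    return max(candidates,
               key=lambda w: acoustic_logprob(w) + language_logprob(w))

# Toy usage with made-up log-probabilities: the two candidates sound alike,
# so the language model does the disambiguation.
am = {"recognize speech": -12.0, "wreck a nice beach": -11.5}
lm = {"recognize speech": -4.0, "wreck a nice beach": -9.0}
print(decode(am.keys(), am.get, lm.get))   # -> "recognize speech"
```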

Speech Recognition Knowledge Sources. Acoustic model: describes the sounds that make up speech. Lexicon: describes which sequences of speech sounds make up valid words. Language model: describes the likelihood of various sequences of words being spoken.

Digitizing Speech

Speech in an Hour  Speech input is an acoustic waveform. [Waveform figures for "s p ee ch l a b" and the "l" to "a" transition, from Simon Arnfield's web tutorial on speech, Sheffield.]

She just had a baby  What can we learn from a wavefile? Vowels are voiced (the vocal cords vibrate), long, and loud. Length in time = length in space in the waveform picture. Voicing shows up as regular peaks in amplitude; when stops are closed there are no peaks, just silence. Peaks = voicing: 0.46 to 0.58 (vowel [i]), 0.65 to 0.74 (vowel [u]), and so on. Silence of stop closure: 1.06 to 1.08 for the first [b], 1.26 to 1.28 for the second [b]. Fricatives like [ʃ] show an intense, irregular pattern; see 0.33 to 0.46.

Spectral Analysis  Frequency gives pitch; amplitude gives volume. Sampling is at ~8 kHz for telephone speech, ~16 kHz for microphone speech (kHz = 1000 cycles/sec). The Fourier transform of the wave can be displayed as a spectrogram, where darkness indicates the energy at each frequency. [Spectrogram figure for "s p ee ch l a b", with frequency and amplitude axes.]
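For concreteness, a sketch of how such a spectrogram might be computed with SciPy; the input file name and sampling rate are assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

# Sketch: plot a spectrogram of a hypothetical 16 kHz mono file "speech.wav".
# 400-sample windows = 25 ms at 16 kHz; 60% overlap smooths the time axis.
rate, samples = wavfile.read("speech.wav")
freqs, times, Sxx = spectrogram(samples, fs=rate, nperseg=400, noverlap=240)

plt.pcolormesh(times, freqs, 10 * np.log10(Sxx + 1e-12))  # darkness ~ energy (dB)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.show()
```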

Adding 100 Hz + 1000 Hz Waves

Spectrum  [Figure: the spectrum of the summed wave, with frequency in Hz on the x-axis and amplitude on the y-axis, showing the two frequency components at 100 and 1000 Hz.]
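This slide's experiment is easy to reproduce; a short sketch that builds the 100 Hz + 1000 Hz signal and confirms its spectrum has exactly those two peaks (sampling rate and duration are arbitrary choices):

```python
import numpy as np

fs = 8000                          # sampling rate (Hz), arbitrary for this demo
t = np.arange(0, 0.5, 1 / fs)      # half a second of samples
wave = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(wave))
freqs = np.fft.rfftfreq(len(wave), d=1 / fs)
print(freqs[spectrum > 0.5 * spectrum.max()])   # -> [ 100. 1000.]
```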

Part of [ae] from "had"  Note the complex wave repeating nine times in the figure, plus a smaller wave that repeats 4 times for every large pattern. The large wave has a frequency of 250 Hz (9 cycles in 0.036 seconds); the small wave is roughly 4 times that, or roughly 1000 Hz. There are also two tiny waves on top of each peak of the 1000 Hz wave.

Back to Spectra  A spectrum represents these frequency components. It is computed by the Fourier transform, an algorithm that separates out each frequency component of a wave. The x-axis shows frequency and the y-axis shows magnitude (in decibels, a log measure of amplitude). Peaks at 930 Hz, 1860 Hz, and 3020 Hz.

Vowel Formants

Resonances of the vocal tract  Model the human vocal tract as an open tube: closed at the glottal end, open at the lip end, length about 17.5 cm. Air in a tube of a given length will tend to vibrate at the resonance frequencies of the tube. Constraint: the pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end. [Figure from W. Barry's Speech Science slides.]
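The closed/open constraint pins down the resonances: the tube must fit an odd number of quarter wavelengths. A short math sketch (the standard quarter-wavelength resonator formula, consistent with the schwa computation in the extra slides):

```latex
L = (2n-1)\,\frac{\lambda_n}{4}
\quad\Longrightarrow\quad
f_n = \frac{c}{\lambda_n} = \frac{(2n-1)\,c}{4L},
\qquad n = 1, 2, 3, \ldots
```

With c = 35,000 cm/sec and L = 17.5 cm, this gives resonances at 500, 1500, and 2500 Hz.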

[Figure from Mark Liberman's website.]

Articulation Process  Articulatory facts: vocal cord vibrations create harmonics; the mouth is a selective amplifier; depending on the shape of the mouth, some harmonics are amplified more than others.

Vowel [i] sung at successively higher pitches. [Figures from Ratree Wayland's slides.]

How to read spectrograms  "bab": closure of the lips lowers all formants, so there is a rapid increase in all formants at the beginning of "bab". "dad": the first formant increases, but F2 and F3 fall slightly. "gag": F2 and F3 come together; this is a characteristic of velars, and formant transitions take longer in velars than in alveolars or labials. From Ladefoged, "A Course in Phonetics".

Final Feature Vector  39 (real) features per 10 ms frame: 12 MFCC features, 12 Delta MFCC features, 12 Delta-Delta MFCC features, 1 (log) frame energy, 1 Delta (log) frame energy, and 1 Delta-Delta (log) frame energy. So each frame is represented by a 39-dimensional vector. (You don't have to know these details!)
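A sketch of how such a 39-dimensional feature matrix might be assembled, assuming the librosa library and a hypothetical input file "speech.wav"; the exact windowing and energy definition vary between systems.

```python
import numpy as np
import librosa

# 400/160 samples at 16 kHz = 25 ms windows with a 10 ms hop, as in the slide.
y, sr = librosa.load("speech.wav", sr=16000)
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=12, n_fft=400, hop_length=160)
energy = np.log(librosa.feature.rms(y=y, frame_length=400, hop_length=160) ** 2
                + 1e-10)                        # (1, T) log frame energy
base = np.vstack([mfcc, energy])                # 13 static features per frame
feats = np.vstack([base,
                   librosa.feature.delta(base),             # deltas
                   librosa.feature.delta(base, order=2)])   # delta-deltas
print(feats.shape)                              # (39, number_of_frames)
```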

Acoustic Feature Sequence  Time slices of the spectrogram are translated into acoustic feature vectors (~39 real numbers per slice). These are the observations; now we need the hidden states X. [Figure: spectrogram slices mapped to observations ..., e12, e13, e14, e15, e16, ...]

State Space  P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound). P(X|X') encodes how sounds can be strung together. We will have one state for each sound in each word. From some state x, we can only: stay in the same state (e.g. speaking slowly), move to the next position in the word, or, at the end of the word, move to the start of the next word. We build a little state graph for each word and chain the graphs together to form our state space X, as sketched below.
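A toy version of that per-word state graph; the phone inventory and self-loop probability are hypothetical.

```python
# Each sound (phone) in a word gets one state, which can either loop
# (speaking slowly) or advance to the next position in the word.
def word_hmm(phones, self_loop=0.6):
    """Return {state: [(next_state, prob), ...]} for one word."""
    arcs = {}
    for i, phone in enumerate(phones):
        state = (phone, i)
        nxt = (phones[i + 1], i + 1) if i + 1 < len(phones) else "WORD_END"
        arcs[state] = [(state, self_loop), (nxt, 1 - self_loop)]
    return arcs

print(word_hmm(["s", "ih", "k", "s"]))   # a toy model of the word "six"
```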

HMMs for Speech

Phones are not homogeneous!

Each phone has 3 subphones

Resulting HMM word model for “six”

ASR Lexicon: Markov Models

Speech Architecture meets Noisy Channel

Search space with bigrams

Markov Process with Bigrams  [Figure from Huang et al., page 618.]

Decoding  While there are some practical issues, finding the words given the acoustics is an HMM inference problem. Here the state sequence is the sequence of phones and the observations are the acoustic vectors. We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}: x*_{1:T} = argmax_{x_{1:T}} P(x_{1:T} | e_{1:T}).

Viterbi Algorithm  Question: what is the most likely state sequence given the observations?  Slow answer: enumerate all possibilities  Better answer: cached incremental version
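A minimal sketch of that cached incremental version (the tables here are hypothetical; a real recognizer runs this over the phone-state graph with continuous emission scores):

```python
import numpy as np

def viterbi(obs, prior, trans, emit):
    """Most likely state sequence given observations, via backpointers."""
    logp = np.log(prior) + np.log(emit[:, obs[0]])    # best score ending in each state
    back = []
    for o in obs[1:]:
        scores = logp[:, None] + np.log(trans)        # scores[i, j]: leave i, enter j
        back.append(scores.argmax(axis=0))            # best predecessor of each state
        logp = scores.max(axis=0) + np.log(emit[:, o])
    path = [int(logp.argmax())]                       # best final state
    for bp in reversed(back):                         # follow backpointers
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy 2-state example with hypothetical tables.
prior = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])   # trans[i, j] = P(j | i)
emit = np.array([[0.9, 0.1], [0.2, 0.8]])    # emit[j, o] = P(o | j)
print(viterbi([0, 0, 1, 1], prior, trans, emit))   # -> [0, 0, 1, 1]
```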

Viterbi with 2 Words + Uniform LM  [Figure from Huang et al., page 612.]

It's not easy to wreck a nice beach. It's not easy to recognize speech.

Continual Progress in Speech Recognition  [Chart: increasingly difficult tasks, steadily declining word error rates (%). Tasks range from read speech (1000-word, 5000-word, and 20,000-word vocabularies, up to unlimited vocabulary) through broadcast news (standard microphone, varied microphones, noisy environment) to conversational speech (English and non-English); all results are speaker-independent. Data: NSA/Wayne/Doddington.] © 2002 Michael G. Christel and Alexander G. Hauptmann, Carnegie Mellon.

Current error rates

Task | Vocabulary | Error rate (%)
Digits | 11 | 0.5
WSJ read speech | 5K | 3.0
WSJ read speech | 20K | <6.6
Broadcast news | 64,000+ | 9.9
Conversational telephone | 64,000+ | 20.7

Example

Dynamic Bayes Nets  DBN = multiple hidden state variables; each state is a BN.

Structured Probabilistic Inference

Next Class  Next part of the course: machine learning  We’ll start talking about how to learn model parameters (like probabilities) from data  One of the most heavily used technologies in all of AI

Extra Slides

Examples from Ladefoged: "bad", "pad", "spat". [Waveform figures.]

Simple Periodic Sound Waves  Y-axis: amplitude = the amount of air pressure at that point in time; zero is normal air pressure, negative is rarefaction. X-axis: time. Frequency = the number of cycles per second = 1/period. 20 cycles in 0.02 seconds = 1000 cycles/second = 1000 Hz.

Deriving Schwa  Reminder of basic facts about sound waves: f = c/λ, where c is the speed of sound (approx 35,000 cm/sec) and λ is the wavelength. A sound with λ = 10 meters: f = 35 Hz (35,000/1,000). A sound with λ = 2 centimeters: f = 17,500 Hz (35,000/2).

[Figure from Sundberg.]

Computing the 3 Formants of Schwa  Let the length of the tube be L = 17.5 cm. F1 = c/λ1 = c/(4L) = 35,000/(4 × 17.5) = 500 Hz. F2 = c/λ2 = c/(4L/3) = 3c/(4L) = 3 × 35,000/(4 × 17.5) = 1500 Hz. F3 = c/λ3 = c/(4L/5) = 5c/(4L) = 5 × 35,000/(4 × 17.5) = 2500 Hz. So we expect a neutral vowel to have 3 resonances, at 500, 1500, and 2500 Hz. These vowel resonances are called formants.
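A quick numeric check of the computation above:

```python
# Quarter-wavelength resonator: f_n = (2n - 1) * c / (4L).
c, L = 35_000.0, 17.5                                    # cm/sec and cm
print([(2 * n - 1) * c / (4 * L) for n in (1, 2, 3)])    # -> [500.0, 1500.0, 2500.0]
```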

HMMs for Continuous Observations?  Before: a discrete, finite set of observations. Now: spectral feature vectors are real-valued! Solution 1: discretization. Solution 2: continuous emission models (Gaussians, multivariate Gaussians, mixtures of multivariate Gaussians), as sketched below. A state is, progressively: a context-independent subphone (~3 per phone), then a context-dependent phone (= triphones), then state-tying of CD phones.
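A sketch of Solution 2, scoring a 39-dimensional feature vector under a mixture of multivariate Gaussians; all parameters here are hypothetical toy values.

```python
import numpy as np
from scipy.stats import multivariate_normal

def mixture_loglik(x, weights, means, covs):
    """log P(x | state) under a Gaussian-mixture emission model."""
    comps = [np.log(w) + multivariate_normal.logpdf(x, mean=m, cov=c)
             for w, m, c in zip(weights, means, covs)]
    return np.logaddexp.reduce(comps)

# Toy 2-component mixture in 39 dimensions.
d = 39
print(mixture_loglik(np.zeros(d),
                     weights=[0.5, 0.5],
                     means=[np.zeros(d), np.ones(d)],
                     covs=[np.eye(d), 2 * np.eye(d)]))
```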

Viterbi Decoding