
1 CS 188: Artificial Intelligence Fall 2007 Lecture 22: Viterbi 11/13/2007 Dan Klein – UC Berkeley

2 Announcements  Review session tonight:  ???

3 Hidden Markov Models  An HMM is defined by:  Initial distribution: P(X_1)  Transitions: P(X_t | X_{t-1})  Emissions: P(E_t | X_t)  [Figure: HMM graphical model with hidden states X_1 ... X_5 and evidence variables E_1 ... E_5]
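For concreteness, the weather HMM used on the next slide could be written down like this (a minimal sketch in Python; all the probability values are hypothetical placeholders, not numbers from the lecture):

```python
# A minimal HMM for the sun/rain weather example.
# All probabilities below are hypothetical placeholders.
initial = {'sun': 0.5, 'rain': 0.5}                        # P(X_1)
transition = {'sun':  {'sun': 0.9, 'rain': 0.1},           # P(X_t | X_{t-1})
              'rain': {'sun': 0.3, 'rain': 0.7}}
emission = {'sun':  {'umbrella': 0.2, 'no_umbrella': 0.8},  # P(E_t | X_t)
            'rain': {'umbrella': 0.9, 'no_umbrella': 0.1}}
```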

4 Most Likely Explanation  Remember: weather Markov chain  Tracking: compute the distribution over the current state  Viterbi: compute the most likely state sequence  [Figure: sun/rain trellis over four time steps]

5 Most Likely Explanation  Question: most likely sequence ending in x at t?  E.g. if sun on day 4, what’s the most likely sequence?  Intuitively: probably sun all four days  Slow answer: enumerate and score …

6 Mini-Viterbi Algorithm  Better answer: cached incremental updates  Define: m_t[x] = probability of the best sequence ending in state x at time t, with backpointers a_t[x]  Read best sequence off of m and a vectors  [Figure: sun/rain trellis over four time steps]
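The cached update being defined is the standard recursion (written out here since the slide's equations did not survive extraction); no emissions appear yet because this mini version runs on the plain Markov chain:

```latex
m_1[x] = P(X_1 = x), \qquad
m_t[x] = \max_{x'} P(x \mid x')\, m_{t-1}[x'], \qquad
a_t[x] = \operatorname*{arg\,max}_{x'} P(x \mid x')\, m_{t-1}[x']
```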

7 Mini-Viterbi  [Figure: sun/rain trellis with the best-path probabilities and backpointers filled in]

8 Viterbi Algorithm  Question: what is the most likely state sequence x_{1:T} given the observations e_{1:T}?  Slow answer: enumerate all possibilities  Better answer: incremental updates
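A sketch of the full algorithm in Python, reusing the hypothetical weather tables from the earlier sketch and now folding in the emission probabilities:

```python
def viterbi(obs, states, initial, transition, emission):
    """Return the most likely hidden-state sequence for a list of observations."""
    # m[t][x]: probability of the best sequence ending in state x at time t
    # a[t][x]: best predecessor of x at time t (backpointer)
    m = [{x: initial[x] * emission[x][obs[0]] for x in states}]
    a = [{}]
    for t in range(1, len(obs)):
        m.append({}); a.append({})
        for x in states:
            best = max(states, key=lambda xp: m[t-1][xp] * transition[xp][x])
            a[t][x] = best
            m[t][x] = m[t-1][best] * transition[best][x] * emission[x][obs[t]]
    # Read the best sequence off of the m and a vectors.
    last = max(states, key=lambda x: m[-1][x])
    path = [last]
    for t in range(len(obs) - 1, 0, -1):
        path.append(a[t][path[-1]])
    return list(reversed(path))

print(viterbi(['umbrella', 'umbrella', 'no_umbrella'],
              ['sun', 'rain'], initial, transition, emission))
```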

9 Example

10 Digitizing Speech

11 Speech in an Hour  Speech input is an acoustic waveform  [Figure: waveforms for "speech" (s p ee ch) and "lab" (l a b), with the "l" to "a" transition shown]  Graphs from Simon Arnfield’s web tutorial on speech, Sheffield: http://www.psyc.leeds.ac.uk/research/cogn/speech/tutorial/

12 Spectral Analysis  Frequency gives pitch; amplitude gives volume  Sampling at ~8 kHz for phone, ~16 kHz for mic (kHz = 1000 cycles/sec)  Fourier transform of the wave displayed as a spectrogram  Darkness indicates energy at each frequency  [Figure: waveform and spectrogram of "speech lab", with frequency and amplitude axes]

13 Adding 100 Hz + 1000 Hz Waves

14 Spectrum  [Figure: spectrum of the summed wave, with frequency components at 100 and 1000 Hz on the x-axis and amplitude on the y-axis]
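The spectrum can be reproduced numerically (a numpy sketch; the sample rate and duration are arbitrary choices):

```python
import numpy as np

fs = 8000                        # sample rate in Hz (arbitrary choice)
t = np.arange(0, 1.0, 1 / fs)    # one second of time samples
wave = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 1000 * t)

spectrum = np.abs(np.fft.rfft(wave))        # magnitude spectrum
freqs = np.fft.rfftfreq(len(wave), 1 / fs)  # frequency axis in Hz
peaks = freqs[spectrum > 0.5 * spectrum.max()]
print(peaks)   # expect peaks at (approximately) 100 Hz and 1000 Hz
```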

15 Part of [ae] from “lab”  Note the complex wave repeating nine times in the figure  Plus a smaller wave which repeats 4 times for every large pattern  The large wave has a frequency of 250 Hz (9 cycles in 0.036 seconds)  The small wave is roughly 4 times this, or roughly 1000 Hz  Two tiny waves ride on top of each peak of the 1000 Hz wave
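The arithmetic behind those two frequencies:

```latex
f_{\text{large}} = \frac{9\ \text{cycles}}{0.036\ \text{s}} = 250\ \text{Hz},
\qquad
f_{\text{small}} \approx 4 \times 250\ \text{Hz} = 1000\ \text{Hz}
```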

16 Back to Spectra  The spectrum represents these frequency components  Computed by the Fourier transform, an algorithm which separates out each frequency component of a wave  x-axis shows frequency, y-axis shows magnitude (in decibels, a log measure of amplitude)  Peaks at 930 Hz, 1860 Hz, and 3020 Hz

17 Resonances of the vocal tract  The human vocal tract as an open tube  Air in a tube of a given length will tend to vibrate at the resonance frequency of the tube  Constraint: the pressure differential should be maximal at the (closed) glottal end and minimal at the (open) lip end  [Figure: tube closed at one end and open at the other, length 17.5 cm; from W. Barry’s Speech Science slides]
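A quarter-wavelength calculation (a standard acoustics estimate, not shown on the slide) predicts where those resonances fall for a 17.5 cm tube closed at one end, taking the speed of sound to be about 350 m/s:

```latex
f_n = \frac{(2n - 1)\, c}{4L}, \qquad
f_1 = \frac{350\ \text{m/s}}{4 \times 0.175\ \text{m}} = 500\ \text{Hz}, \quad
f_2 = 1500\ \text{Hz}, \quad f_3 = 2500\ \text{Hz}
```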

18 From Mark Liberman’s website

19 Acoustic Feature Sequence  Time slices are translated into acoustic feature vectors (~39 real numbers per slice)  These are the observations; now we need the hidden states X  [Figure: spectrogram sliced into frames, each frame yielding an evidence variable e_12, e_13, e_14, ...]
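A toy version of the slicing step (a numpy sketch; real recognizers use ~39 MFCC-style coefficients per frame rather than the raw log-spectrum computed here, and the frame sizes below are arbitrary choices):

```python
import numpy as np

def acoustic_features(signal, fs, frame_ms=25, step_ms=10):
    """Slice a waveform into overlapping frames and return one
    log-magnitude-spectrum feature vector per time slice."""
    frame = int(fs * frame_ms / 1000)
    step = int(fs * step_ms / 1000)
    feats = []
    for start in range(0, len(signal) - frame + 1, step):
        window = signal[start:start + frame] * np.hamming(frame)
        feats.append(np.log(np.abs(np.fft.rfft(window)) + 1e-10))
    return np.array(feats)   # shape: (num_slices, frame // 2 + 1)
```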

20 State Space  P(E|X) encodes which acoustic vectors are appropriate for each phoneme (each kind of sound)  P(X|X’) encodes how sounds can be strung together  We will have one state for each sound in each word  From some state x, we can only:  Stay in the same state (e.g. speaking slowly)  Move to the next position in the word  At the end of the word, move to the start of the next word  We build a little state graph for each word and chain them together to form our state space X (see the sketch below)
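The word graphs mentioned above might be built like this (a sketch; the phone sequences and the self-loop probability are illustrative placeholders, not a real pronunciation dictionary):

```python
def word_transitions(words, p_stay=0.5):
    """Build left-to-right transitions: each (word, phone-position) state
    either stays put or advances; the last state of a word jumps to the
    first state of the next word. p_stay is an arbitrary placeholder."""
    states = [(w, i) for w, phones in words.items() for i in range(len(phones))]
    trans = {}
    word_list = list(words)
    for w, phones in words.items():
        for i in range(len(phones)):
            if i + 1 < len(phones):
                trans[(w, i)] = {(w, i): p_stay, (w, i + 1): 1 - p_stay}
            else:   # end of word: move to the start of some next word
                nxt = {(w2, 0): (1 - p_stay) / len(word_list) for w2 in word_list}
                nxt[(w, i)] = p_stay
                trans[(w, i)] = nxt
    return states, trans

# Illustrative mini-lexicon (hypothetical phone sequences):
states, trans = word_transitions({'speech': ['s', 'p', 'iy', 'ch'],
                                  'lab': ['l', 'ae', 'b']})
```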

21 HMMs for Speech

22 Markov Process with Bigrams  Figure from Huang et al., page 618

23 Decoding  While there are some practical issues, finding the words given the acoustics is an HMM inference problem  We want to know which state sequence x_{1:T} is most likely given the evidence e_{1:T}: x*_{1:T} = arg max_{x_{1:T}} P(x_{1:T} | e_{1:T})  From the sequence x, we can simply read off the words

24

25 POMDPs  Up until now:  MDPs: decision making when the world is fully observable (even if the actions are non-deterministic)  Probabilistic reasoning: computing beliefs in a static world  What about acting under uncertainty?  In general, the formalization of the problem is the partially observable Markov decision process (POMDP)  A simple case: value of information

26 POMDPs  MDPs have:  States S  Actions A  Transition fn P(s’|s,a) (or T(s,a,s’))  Rewards R(s,a,s’)  POMDPs add:  Observations O  Observation function P(o|s,a) (or O(s,a,o))  POMDPs are MDPs over belief states b (distributions over S)  [Figure: expectimax tree for an MDP over states s and actions a, and for a POMDP over beliefs b, actions a, and observations o]
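A sketch of the belief-state update that makes POMDPs "MDPs over belief states" (the model dictionaries are hypothetical interfaces): after taking action a and observing o, push the old belief through the transition model, then condition on the observation.

```python
def belief_update(b, a, o, T, O):
    """b: dict state -> probability.  T[s][a][s2] = P(s2 | s, a).
    O[s2][a][o] = P(o | s2, a).  Returns the updated belief b'."""
    b_new = {}
    for s2 in b:
        # predict: sum over where we might have been
        pred = sum(b[s] * T[s][a].get(s2, 0.0) for s in b)
        # correct: weight by how likely the observation is
        b_new[s2] = O[s2][a].get(o, 0.0) * pred
    z = sum(b_new.values())   # normalizer; equals P(o | b, a)
    return {s: p / z for s, p in b_new.items()}
```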

27 Example: Battleship  In (static) battleship:  Belief state determined by the evidence to date {e}  Tree really over evidence sets  Probabilistic reasoning needed to predict new evidence given past evidence  Solving POMDPs:  One way: use truncated expectimax to compute approximate values of actions  What if you only considered bombing, or one sense followed by one bomb?  You get the VPI agent from project 4!  [Figure: truncated expectimax tree over evidence sets, comparing U(a_bomb, {e}) with sensing e’ first and then bombing, U(a_bomb, {e, e’})]
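The "one sense followed by one bomb" comparison can be written as a two-line truncated expectimax (a sketch with hypothetical interfaces, not the actual project-4 code):

```python
def sense_gain(b, U, obs_probs, updated_beliefs):
    """b: current belief.  U(belief) = value of bombing under that belief.
    obs_probs[o] = P(o | b, sense); updated_beliefs[o] = belief after seeing o.
    All four arguments are hypothetical interfaces."""
    bomb_now = U(b)
    sense_then_bomb = sum(obs_probs[o] * U(updated_beliefs[o]) for o in obs_probs)
    # The difference is the value of information of the sense action.
    return sense_then_bomb - bomb_now
```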

28 More Generally  General solutions map belief functions to actions  Can divide regions of belief space (set of belief functions) into policy regions (gets complex quickly)  Can build approximate policies using discretization methods  Can factor belief functions in various ways  Overall, POMDPs are very (in fact PSPACE-) hard  We’ll talk more about POMDPs at the end of the course!

29

30 Machine Learning  Up till now: how to reason or make decisions using a model  Machine learning: how to select a model on the basis of data / experience  Learning parameters (e.g. probabilities)  Learning structure (e.g. BN graphs)  Learning hidden concepts (e.g. clustering)

31 Classification  In classification, we learn to predict labels (classes) for inputs  Examples:  Spam detection (input: document, classes: spam / ham)  OCR (input: images, classes: characters)  Medical diagnosis (input: symptoms, classes: diseases)  Automatic essay grader (input: document, classes: grades)  Fraud detection (input: account activity, classes: fraud / no fraud)  Customer service email routing  … many more  Classification is an important commercial technology!

32 Classification  Data:  Inputs x, class labels y  We imagine that x is something that has a lot of structure, like an image or document  In the basic case, y is a simple N-way choice  Basic Setup:  Training data: D = collection of (x, y) pairs  Feature extractors: functions f_i which provide attributes of an example x  Test data: more x’s, and we must predict the y’s  During development, we actually know the y’s, so we can check how well we’re doing; but when we deploy the system, we don’t

33 Bayes Nets for Classification  One method of classification:  Features are values for observed variables  Y is a query variable  Use probabilistic inference to compute most likely Y  You already know how to do this inference

34 Simple Classification  Simple example: a class M with two binary features, S and F  This is a naïve Bayes model  [Figure: small Bayes nets over M, S, F comparing the direct estimate, the Bayes estimate (no assumptions), and the conditional independence version]
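Reading the variable names M, S, F from the figure: the direct estimate uses counts of whole (m, s, f) triples, the Bayes estimate with no assumptions still needs the full table P(s, f | M), while the naïve Bayes (conditional independence) version factors it:

```latex
P(M \mid s, f) \;\propto\; P(M)\, P(s \mid M)\, P(f \mid M)
\qquad \text{(assuming } S \perp F \mid M\text{)}
```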

35 General Naïve Bayes  A general naïve Bayes model:  We only specify how each feature depends on the class  Total number of parameters is linear in n  [Figure: class C with children E_1, E_2, ..., E_n]  P(C) has |C| parameters; the tables P(E_i|C) have n × |E| × |C| parameters in total, versus |C| × |E|^n for an unrestricted joint

36 Inference for Naïve Bayes  Goal: compute the posterior over causes, P(C | e_1, ..., e_n)  Step 1: get the joint probability of causes and evidence  Step 2: get the probability of the evidence  Step 3: renormalize
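The three steps as a sketch (the prior and likelihood tables are assumed inputs in the format shown):

```python
def naive_bayes_posterior(evidence, prior, likelihood):
    """evidence: dict feature -> observed value.
    prior[c] = P(C = c); likelihood[f][c][v] = P(F_f = v | C = c)."""
    # Step 1: joint probability of each cause with the evidence
    joint = {c: prior[c] for c in prior}
    for f, v in evidence.items():
        for c in joint:
            joint[c] *= likelihood[f][c][v]
    # Step 2: probability of the evidence
    p_e = sum(joint.values())
    # Step 3: renormalize
    return {c: p / p_e for c, p in joint.items()}
```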

37 General Naïve Bayes  What do we need in order to use naïve Bayes?  Some code to do the inference (you know this part)  For fixed evidence, build P(C,e)  Sum out C to get P(e)  Divide to get P(C|e)  Estimates of local conditional probability tables  P(C), the prior over causes  P(E|C) for each evidence variable  These probabilities are collectively called the parameters of the model and denoted by θ  These typically come from observed data: we’ll look at this now
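"Come from observed data" typically means relative-frequency (maximum-likelihood) estimates; a minimal counting sketch, with smoothing omitted:

```python
from collections import Counter

def estimate_parameters(data):
    """data: list of (label, feature_dict) pairs.
    Returns MLE estimates of P(C) and P(F_f = v | C)."""
    label_counts = Counter(label for label, _ in data)
    n = len(data)
    prior = {c: k / n for c, k in label_counts.items()}
    pair_counts = Counter((f, label, v)
                          for label, feats in data for f, v in feats.items())
    likelihood = {}
    for (f, c, v), k in pair_counts.items():
        likelihood.setdefault(f, {}).setdefault(c, {})[v] = k / label_counts[c]
    return prior, likelihood
```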

