
1 CS 224S / LINGUIST 285 Spoken Language Processing Dan Jurafsky Stanford University Spring 2014 Lecture 4: ASR: Learning: EM (Baum-Welch)

2 Outline for Today
Baum-Welch = EM = Forward-Backward
How this fits into the ASR component of the course:
April 8: HMMs, Forward, Viterbi Decoding
On your own: N-grams and Language Modeling
Apr 10: Training: Baum-Welch (Forward-Backward)
Apr 10: Advanced Decoding
Apr 15: Acoustic Modeling and GMMs
Apr 17: Feature Extraction, MFCCs
May 27: Deep Neural Net Acoustic Models

3 The Learning Problem Baum-Welch = Forward-Backward Algorithm (Baum 1972) is a special case of the EM (Expectation-Maximization) algorithm (Dempster, Laird, Rubin). The algorithm lets us train the transition probabilities A = {a_ij} and the emission probabilities B = {b_i(o_t)} of the HMM.

4 Input to Baum-Welch
O: unlabeled sequence of observations
Q: vocabulary of hidden states
For the ice-cream task: O = {1,3,2,...}, Q = {H,C}

5 Starting out with Observable Markov Models How to train? Run the model on observation sequence O. Since nothing is hidden, we know which states we went through, hence which transitions and observations were used. Given that information, training is just counting: B = {b_k(o_t)}: since every state can only generate one observation symbol, the observation likelihoods B are all 1.0. A = {a_ij}: count the transitions and normalize, a_ij = C(i → j) / Σ_q C(i → q).
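
As a concrete illustration of this counting idea (not from the slides), here is a minimal Python sketch that estimates a_ij for a fully observable Markov chain by counting transitions in a state sequence; the function name and the toy H/C sequence are made up for the example.

```python
from collections import defaultdict

def estimate_transitions(state_sequence):
    """MLE for a visible Markov chain: a_ij = C(i -> j) / sum_q C(i -> q)."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(state_sequence, state_sequence[1:]):
        counts[prev][nxt] += 1          # count each observed transition
    A = {}
    for i, outgoing in counts.items():
        total = sum(outgoing.values())  # all transitions leaving state i
        A[i] = {j: c / total for j, c in outgoing.items()}
    return A

# Toy example with the state labels made visible: H(ot) and C(old).
print(estimate_transitions(["H", "H", "C", "H", "C", "C", "H"]))
```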

6 Extending Intuition to HMMs For an HMM, we cannot compute these counts directly from observed sequences. Baum-Welch intuitions: iteratively estimate the counts. Start with an estimate for a_ij and b_k and iteratively improve the estimates. Get estimated probabilities by: computing the forward probability for an observation, then dividing that probability mass among all the different paths that contributed to this forward probability.

7 The Backward algorithm We define the backward probability as follows:
β_t(i) = P(o_{t+1}, o_{t+2}, ..., o_T | q_t = i, λ)
This is the probability of generating the partial observation sequence from time t+1 to the end, given that the HMM is in state i at time t, and of course given the model λ.

8 The Backward algorithm We compute the backward probability by induction:
Initialization: β_T(i) = 1, 1 ≤ i ≤ N (or a_{i,F} if the HMM has an explicit final state)
Induction: β_t(i) = Σ_{j=1..N} a_ij b_j(o_{t+1}) β_{t+1}(j), 1 ≤ i ≤ N, 1 ≤ t < T

9 Inductive step of the backward algorithm Computation of β_t(i) as a weighted sum of all successive values β_{t+1}(j)
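
A minimal numpy sketch of this backward recursion (my own illustration, not the course code), assuming a discrete-output HMM with transition matrix A (N x N), emission matrix B (N x V), integer observations, and β_T(i) = 1 rather than an explicit final state:

```python
import numpy as np

def backward(A, B, obs):
    """Backward pass: beta[t, i] = P(o_{t+1}..o_T | q_t = i, lambda)."""
    N, T = A.shape[0], len(obs)
    beta = np.zeros((T, N))
    beta[T - 1, :] = 1.0                      # initialization: no explicit final state
    for t in range(T - 2, -1, -1):            # induction, moving backwards in time
        # beta_t(i) = sum_j a_ij * b_j(o_{t+1}) * beta_{t+1}(j)
        beta[t, :] = A @ (B[:, obs[t + 1]] * beta[t + 1, :])
    return beta

# Toy 2-state, 3-symbol HMM in the spirit of the ice-cream task.
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])
print(backward(A, B, [2, 0, 2]))
```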

10 Intuition for re-estimation of a_ij We will estimate â_ij via this intuition: Numerator intuition: assume we had some estimate of the probability that a given transition i → j was taken at time t in the observation sequence. If we knew this probability for each time t, we could sum over all t to get the expected value (count) for i → j.

11 Re-estimation of a_ij Let ξ_t(i,j) be the probability of being in state i at time t and state j at time t+1, given the observation sequence O = o_1...o_T and the model λ:
ξ_t(i,j) = P(q_t = i, q_{t+1} = j | O, λ)
We can compute ξ from not-quite-ξ, which is:
not-quite-ξ_t(i,j) = P(q_t = i, q_{t+1} = j, O | λ)

12 Computing not-quite-ξ
not-quite-ξ_t(i,j) = α_t(i) a_ij b_j(o_{t+1}) β_{t+1}(j)

13 From not-quite-ξ to ξ We want: ξ_t(i,j) = P(q_t = i, q_{t+1} = j | O, λ). We've got: not-quite-ξ_t(i,j) = P(q_t = i, q_{t+1} = j, O | λ). Which we compute as follows: not-quite-ξ_t(i,j) = α_t(i) a_ij b_j(o_{t+1}) β_{t+1}(j)

14 From not-quite-ξ to ξ We want: P(q_t = i, q_{t+1} = j | O, λ). We've got: P(q_t = i, q_{t+1} = j, O | λ). Since P(X | Y, Z) = P(X, Y | Z) / P(Y | Z), we need: P(O | λ)

15 From not-quite-ξ to ξ P(O | λ) = Σ_{j=1..N} α_t(j) β_t(j), so:
ξ_t(i,j) = α_t(i) a_ij b_j(o_{t+1}) β_{t+1}(j) / Σ_{j=1..N} α_t(j) β_t(j)

16 From ξ to a_ij The expected number of transitions from state i to state j is the sum over all t of ξ_t(i,j). The total expected number of transitions out of state i is the sum over all transitions out of state i. Final formula for the re-estimated a_ij:
â_ij = Σ_{t=1..T-1} ξ_t(i,j) / Σ_{t=1..T-1} Σ_{k=1..N} ξ_t(i,k)
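
A sketch of these two steps (my own names and layout), assuming the forward probabilities alpha and backward probabilities beta have already been computed with shape (T, N), and A, B, obs as in the backward sketch above:

```python
import numpy as np

def reestimate_A(A, B, obs, alpha, beta):
    """xi_t(i,j), then a_ij = sum_t xi_t(i,j) / sum_t sum_k xi_t(i,k)."""
    T, N = alpha.shape
    prob_O = alpha[T - 1, :].sum()            # P(O | lambda), since beta_T = 1
    xi = np.zeros((T - 1, N, N))
    for t in range(T - 1):
        # alpha_t(i) * a_ij * b_j(o_{t+1}) * beta_{t+1}(j), normalized by P(O | lambda)
        xi[t] = (alpha[t, :, None] * A * B[:, obs[t + 1]] * beta[t + 1, :]) / prob_O
    expected_ij = xi.sum(axis=0)                              # expected count of i -> j
    expected_from_i = expected_ij.sum(axis=1, keepdims=True)  # all transitions out of i
    return expected_ij / expected_from_i
```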

17 Re-estimating the observation likelihood b We'll need to know γ_t(j), the probability of being in state j at time t:
γ_t(j) = P(q_t = j | O, λ)

18 Computing γ
γ_t(j) = α_t(j) β_t(j) / P(O | λ)

19 Summary
â_ij: the ratio between the expected number of transitions from state i to j and the expected number of all transitions from state i.
b̂_j(v_k): the ratio between the expected number of times the observation emitted from state j is v_k and the expected number of times any observation is emitted from state j:
b̂_j(v_k) = Σ_{t s.t. o_t = v_k} γ_t(j) / Σ_{t=1..T} γ_t(j)
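
A companion sketch for the emission re-estimation (again my own illustration), assuming precomputed alpha and beta and a discrete vocabulary of V symbols:

```python
import numpy as np

def reestimate_B(alpha, beta, obs, V):
    """gamma_t(j) = alpha_t(j) * beta_t(j) / P(O | lambda);
    b_j(v_k) = sum of gamma_t(j) over frames where o_t = v_k, normalized."""
    T, N = alpha.shape
    prob_O = alpha[T - 1, :].sum()
    gamma = alpha * beta / prob_O                      # shape (T, N)
    obs = np.asarray(obs)
    B_new = np.zeros((N, V))
    for k in range(V):
        B_new[:, k] = gamma[obs == k, :].sum(axis=0)   # frames emitting symbol v_k
    return B_new / gamma.sum(axis=0)[:, None]          # expected emissions from each state
```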

20 The Forward-Backward Algorithm

21 Summary: Forward-Backward Algorithm
1. Initialize λ = (A, B)
2. Compute α, β, ξ
3. Estimate new λ' = (A, B)
4. Replace λ with λ'
5. If not converged, go to step 2
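
A compact sketch of this loop, reusing the hypothetical backward, reestimate_A, and reestimate_B helpers from the sketches above; a real implementation would train on many utterances, check log P(O|λ) for convergence, and work in log space or with scaling to avoid underflow.

```python
import numpy as np

def forward(A, B, pi, obs):
    """alpha[t, i] = P(o_1..o_t, q_t = i | lambda)."""
    alpha = np.zeros((len(obs), A.shape[0]))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, len(obs)):
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    return alpha

def baum_welch(A, B, pi, obs, n_iter=20):
    """E-step: alpha, beta, xi, gamma.  M-step: re-estimate A and B.  Repeat."""
    for _ in range(n_iter):                        # or: until log P(O|lambda) stops improving
        alpha = forward(A, B, pi, obs)
        beta = backward(A, B, obs)                 # from the backward sketch above
        new_A = reestimate_A(A, B, obs, alpha, beta)
        new_B = reestimate_B(alpha, beta, obs, B.shape[1])
        pi = alpha[0] * beta[0] / alpha[-1].sum()  # gamma_0 as the new start distribution
        A, B = new_A, new_B                        # replace lambda with lambda'
    return A, B, pi
```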

22 Applying FB to speech: Caveats The network structure of the HMM is always created by hand: no algorithm for double-induction of optimal structure and probabilities has been able to beat simple hand-built structures. Always a Bakis network = links go forward in time. Subcase of a Bakis net: the beads-on-a-string net. Baum-Welch is only guaranteed to return a local max, rather than the global optimum. At the end, we throw away A and only keep B.

23 CS 224S / LINGUIST 285 Spoken Language Processing Dan Jurafsky Stanford University Spring 2014 Lecture 4b: Advanced Decoding

24 Outline for Today
Advanced Decoding
How this fits into the ASR component of the course:
April 8: HMMs, Forward, Viterbi Decoding
On your own: N-grams and Language Modeling
Apr 10: Training: Baum-Welch (Forward-Backward)
Apr 10: Advanced Decoding
Apr 15: Acoustic Modeling and GMMs
Apr 17: Feature Extraction, MFCCs
May 27: Deep Neural Net Acoustic Models

25 Advanced Search (= Decoding)
How to weight the AM and LM
Speeding things up: Viterbi beam decoding
Multipass decoding
  N-best lists
  Lattices
  Word graphs
  Meshes/confusion networks
Finite State Methods

26 What we are searching for Given the Acoustic Model (AM) and Language Model (LM):
Ŵ = argmax_W P(O | W) P(W)    (1)
where P(O | W) is the AM (likelihood) and P(W) is the LM (prior).

27 Combining Acoustic and Language Models We don't actually use equation (1): the AM underestimates the acoustic probability. Why? Bad independence assumptions. Intuition: we compute (independent) AM probability estimates; if we could look at context, we would assign a much higher probability, so we are underestimating. And we do this every 10 ms, but the LM applies only once per word. Besides: the AM isn't a true probability, and the AM and LM have vastly different dynamic ranges.

28 Language Model Scaling Factor Solution: add a language model weight (also called the language weight LW or language model scaling factor LMSF):
Ŵ = argmax_W P(O | W) P(W)^LMSF
The value is determined empirically and is positive (why?), often in the range 10 ± 5.

29 Language Model Scaling Factor As LMSF is increased: more deletion errors (since the penalty for transitioning between words increases), fewer insertion errors, need a wider search beam (since path scores are larger), less influence of the acoustic model observation probabilities. Slide from Bryan Pellom

30 Word Insertion Penalty But the LM probability P(W) also functions as a penalty for inserting words. Intuition: when a uniform language model (every word has an equal probability) is used, the LM probability is a 1/V penalty multiplier taken for each word, so a sentence of N words has penalty (1/V)^N. If the penalty is large (smaller LM prob), the decoder will prefer fewer, longer words. If the penalty is small (larger LM prob), the decoder will prefer more, shorter words. When we tune the LM weight to balance the AM, a side effect is modifying this penalty, so we add a separate word insertion penalty to offset it.

31 Word Insertion Penalty Controls trade-off between insertion and deletion errors As penalty becomes larger (more negative) More deletion errors Fewer insertion errors Acts as a model of effect of length on probability But probably not a good model (geometric assumption probably bad for short sentences)

32 Log domain We do everything in the log domain, so the final equation is:
Ŵ = argmax_W [ log P(O | W) + LMSF · log P(W) + N · log WIP ]
where N is the number of words in the hypothesis and WIP is the word insertion penalty.
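
A small sketch of this combined score with made-up numbers; the lmsf and wip values here are illustrative tuning parameters, not ones given in the lecture.

```python
import math

def combined_log_score(log_p_o_given_w, log_p_w, n_words, lmsf=12.0, wip=0.5):
    """log P(O|W) + LMSF * log P(W) + N * log(WIP), all in natural log."""
    return log_p_o_given_w + lmsf * log_p_w + n_words * math.log(wip)

# Two hypothetical word strings: the acoustically better one can still lose
# once the scaled LM score and the per-word insertion penalty are added in.
h1 = combined_log_score(log_p_o_given_w=-120.0, log_p_w=-9.0, n_words=4)
h2 = combined_log_score(log_p_o_given_w=-123.0, log_p_w=-6.5, n_words=3)
print("pick hypothesis 1" if h1 > h2 else "pick hypothesis 2")
```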

33 Speeding things up Viterbi is O(N²T), where N is the total number of HMM states and T is the length. This is too large for real-time search. A ton of work in ASR search is just to make search faster: beam search (pruning), fast match, tree-based lexicons.

34 Beam search Instead of retaining all candidates (cells) at every time frame, use a threshold T to keep only a subset: at each time t, identify the state with the lowest cost D_min; each state with cost > D_min + T is discarded ("pruned") before moving on to time t+1. The unpruned states are called the active states.
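
A minimal sketch of the pruning step, assuming costs are negative log probabilities (lower is better) stored in a dict from state to cost; the beam width used here is a made-up value.

```python
def prune(active_costs, beam):
    """Keep only states whose cost is within `beam` of the best cost at this frame."""
    d_min = min(active_costs.values())              # best (lowest) cost, Dmin
    return {state: cost for state, cost in active_costs.items()
            if cost <= d_min + beam}                # discard everything worse than Dmin + T

# Hypothetical frame: with beam=10 only s1, s2, s4 stay active; s3 is pruned.
print(prune({"s1": 42.0, "s2": 45.5, "s3": 60.2, "s4": 51.9}, beam=10.0))
```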

35 Viterbi Beam Search [Trellis figure: states A, B, C over frames t=0 to t=4, with observation likelihoods b_A(t), b_B(t), b_C(t) at each frame and the allowed transitions between states.] Slide from John-Paul Hosom

36 Viterbi Beam search The most common search algorithm for LVCSR. Time-synchronous: comparing paths of equal length. For two different word sequences W1 and W2, we are comparing P(W1 | O_{0..t}) and P(W2 | O_{0..t}), based on the same partial observation sequence O_{0..t}, so the denominator is the same and can be ignored. Time-asynchronous search (A*) is harder.

37 Viterbi Beam Search Empirically, beam size of 5-10% of search space Thus 90-95% of HMM states don’t have to be considered at each time t Vast savings in time.

38 On-line processing Problem with Viterbi search: it doesn't return the best sequence until the final frame. This delay is unreasonable for many applications. On-line processing usually gives a smaller delay in determining the answer, at the cost of increased processing time.

39 On-line processing At every time interval I (e.g. 1000 msec or 100 frames):
At the current time tcurr, for each active state qtcurr, find the best path P(qtcurr) that goes from t0 to tcurr (using the backtrace pointers ψ).
Compare the set of best paths P and find the last time tmatch at which all paths P have the same state value at that time.
If tmatch exists {
  Output the result from t0 to tmatch
  Reset/remove ψ values up to tmatch
  Set t0 to tmatch+1
}
Efficiency depends on the interval I, the beam threshold, and how well the observations match the HMM. Slide from John-Paul Hosom
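
A sketch of just the "find tmatch" step, assuming each active state's best path has already been recovered by backtrace into a list of states from t0 to tcurr (one list per active state); the toy paths are the ones from the next slide.

```python
def find_tmatch(paths, t0):
    """Return the last time at which all backtraced paths share the same state, or None."""
    tmatch = None
    for offset in range(min(len(p) for p in paths)):   # walk forward from t0
        if len({p[offset] for p in paths}) == 1:       # every path is in the same state here
            tmatch = t0 + offset
    return tmatch

# Toy example (see next slide): best paths ending in states A, B, C at t=4.
paths = [list("BBAA"), list("BBBB"), list("BBBC")]
print(find_tmatch(paths, t0=1))   # -> 2: all paths share state B at times 1 and 2
```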

40 On-line processing Example (Interval = 4 frames): at time 4, the best paths for all states A, B, and C have state B in common at time 2, so tmatch = 2. Output states BB for times 1 and 2, because no matter what happens in the future, this will not change. Set t0 to 3. [Trellis figure: best paths BBAA, BBBB, BBBC ending in states A, B, C at t=4, with t0=1, tcurr=4.] Slide from John-Paul Hosom

41 On-line processing Now tmatch = 7, so output from t=3 to t=7: BBABB, then set t0 to 8. If T=8, then output the state with the best score at t=8, for example C. The final result (obtained piece-by-piece) is then BBBBABBC. [Trellis figure: best paths BBABBA, BBABBB, BBABBC ending in states A, B, C, with t0=3, tcurr=8, Interval=4.] Slide from John-Paul Hosom

42 Problems with Viterbi It's hard to integrate sophisticated knowledge sources: trigram grammars, parser-based LMs (long-distance dependencies that violate dynamic programming assumptions), knowledge that isn't left-to-right (following words can help predict preceding words). Solutions: return multiple hypotheses and use smart knowledge to rescore them, or use a different search algorithm, A* decoding (= stack decoding).

43 Multipass Search

44 Ways to represent multiple hypotheses N-best list Instead of single best sentence (word string), return ordered list of N sentence hypotheses Word lattice Compact representation of word hypotheses and their times and scores Word graph FSA representation of lattice in which times are represented by topology

45 Another Problem with Viterbi We want the forward probability of the observations given the word string, P(O|W). But the Viterbi algorithm makes the "Viterbi approximation": it approximates P(O|W) with P(O | best state sequence).

46 Solving the best-path-not-best-words problem Viterbi returns the best path (state sequence), not the best word sequence. The best path can be very different from the best word string if words have many possible pronunciations. Two solutions: (1) Modify Viterbi to sum over different paths that share the same word string; do this as part of N-best computation, computing N-best word strings, not N-best phone paths. (2) Use a different decoding algorithm (A*) that computes the true forward probability.

47 Sample N-best list

48 N-best lists Again, we don't want the N best paths: that would be trivial (store N values in each state cell of the Viterbi trellis instead of 1). But most of the N best paths will have the same word string, which is useless, and it turns out that a factor of N is too much to pay.

49 Computing N-best lists In the worst case, an admissible algorithm for finding the N most likely hypotheses is exponential in the length of the utterance. S. Young. 1984. “Generating Multiple Solutions from Connected Word DP Recognition Algorithms”. Proc. of the Institute of Acoustics, 6:4, 351-354. For example, if AM and LM score were nearly identical for all word sequences, we must consider all permutations of word sequences for whole sentence (all with the same scores). But of course if this is true, can’t do ASR at all!

50 Computing N-best lists Instead, various non-admissible algorithms: (Viterbi) Exact N-best (Viterbi) Word Dependent N-best And one admissible A* N-best

51 Exact N-best for time-synchronous Viterbi Due to Schwartz and Chow; also called “sentence- dependent N-best” Idea: each state stores multiple paths Idea: maintain separate records for paths with distinct word histories History: whole word sequence up to current time t and word w When 2 or more paths come to the same state at the same time, merge paths w/same history and sum their probabilities. i.e. compute the forward probability within words Otherwise, retain only N-best paths for each state

52 Exact N-best for time-synchronous Viterbi Efficiency: Typical HMM state has 2 or 3 predecessor states within word HMM So for each time frame and state, need to compare/merge 2 or 3 sets of N paths into N new paths. At end of search, N paths in final state of trellis give N- best word sequences Complexity is O(N) Still too slow for practical systems N is 100 to 1000 More efficient versions: word-dependent N-best

53 Word-dependent ('bigram') N-best Intuition: instead of each state merging all paths from the start of the sentence, we merge all paths that share the same previous word. Details: this will require a more complex traceback at the end of the sentence to generate the N-best list.

54 Word-dependent ('bigram') N-best At each state, preserve the total probability for each of k << N previous words (k is 3 to 6; N is 100 to 1000). At the end of each word, record the score for each previous-word hypothesis and the name of the previous word. So at each word ending we store alternatives, but, like normal Viterbi, pass on just the best hypothesis. At the end of the sentence, do a traceback: follow backpointers to get the 1-best, but as we follow pointers, put on a queue the alternate words ending at the same point; on the next iteration, pop the next best.

55 Word Lattice Each arc annotated with AM and LM logprobs

56 Word Graph Timing information removed Overlapping copies of words merged AM information removed Result is a WFST Natural extension to N-gram language model

57 Converting word lattice to word graph A word lattice can have a range of possible end frames for a word. Create an edge from (w_i, t_i) to (w_j, t_j) if t_j − 1 is one of the end times of w_i. Slide from Bryan Pellom
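
A small sketch of this edge-creation rule with a made-up lattice representation, where each hypothesis carries a word label, a start frame, and a set of possible end frames:

```python
from collections import namedtuple

Hyp = namedtuple("Hyp", ["word", "start", "ends"])   # ends: set of possible end frames

def build_word_graph(hyps):
    """Connect (w_i, t_i) -> (w_j, t_j) whenever w_j starts right after an end of w_i."""
    edges = []
    for a in hyps:
        for b in hyps:
            if b.start - 1 in a.ends:                # t_j - 1 is one of w_i's end times
                edges.append((a.word, a.start, b.word, b.start))
    return edges

# Hypothetical lattice fragment.
hyps = [Hyp("the", 0, {7, 8}), Hyp("cat", 8, {20}), Hyp("cap", 9, {21})]
print(build_word_graph(hyps))    # the(0)->cat(8) and the(0)->cap(9)
```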

58 Lattices Some researchers are careful to distinguish between word graphs and word lattices But we’ll follow convention in using “lattice” to mean both word graphs and word lattices. Two facts about lattices: Density: the number of word hypotheses or word arcs per uttered word Lattice error rate (also called “lower bound error rate”): the lowest word error rate for any word sequence in lattice Lattice error rate is the “oracle” error rate, the best possible error rate you could get from rescoring the lattice. We can use this as an upper bound

59 Posterior lattices We don't actually compute posteriors. Why do we want posteriors? Without a posterior, we can choose the best hypothesis, but we can't know how good it is! In order to compute the posterior, we need to: normalize over all the different word hypotheses at a time; align all the hypotheses and sum over all paths passing through a word.

60 Mesh = Sausage = pinched lattice

61 Summary: one-pass vs. multipass Potential problems with multipass: can't use for real-time (need the end of the sentence), but successive passes can be kept really fast; each pass can introduce inadmissible pruning, but one-pass does the same with beam pruning and fast match. Why multipass: very expensive knowledge sources (NL parsing, higher-order n-grams, etc.); spoken language understanding, where N-best is a perfect interface; research, where N-best lists are very powerful offline tools for algorithm development; N-best lists are needed for discriminative training (MMIE, MCE) to get rival hypotheses.

62 Weighted Finite State Transducers for ASR An alternative paradigm for ASR Used by Kaldi Weighted finite state automaton that transduces an input sequence to an output sequence Mohri, Mehryar, Fernando Pereira, and Michael Riley. "Speech recognition with weighted finite- state transducers." In Springer Handbook of Speech Processing, pp. 559-584. Springer Berlin Heidelberg, 2008. http://www.cs.nyu.edu/~mohri/pub/hbka.pdf

63 Weighted Finite State Acceptors

64 Weighted Finite State Transducers

65 WFST Algorithms Composition: combine transducers at different levels. If G is a finite state grammar and P is a pronunciation dictionary, P ◦ G transduces a phone string to word strings allowed by the grammar Determinization: Ensures each state has no more than one output transition for a given input label Minimization: transforms a transducer to an equivalent transducer with the fewest possible states and transitions slide from Steve Renals

66 WFST-based decoding Represent the following components as WFSTs Context-dependent acoustic models (C) Pronunciation dictionary (D) n-gram language model (L) The decoding network is defined by their composition: C ◦ D ◦ L Successively determinize and combine the component transducers, then minimize the final network slide from Steve Renals
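
To make the composition operation concrete, here is a toy epsilon-free composition in the tropical semiring (weights are costs that add along a path). This is only an illustrative sketch: real systems use OpenFst, and this ignores epsilon transitions, determinization, and minimization.

```python
from collections import defaultdict

class WFST:
    """arcs[state] = list of (input_label, output_label, weight, next_state);
    finals maps final states to their final weights."""
    def __init__(self, start, arcs, finals):
        self.start, self.arcs, self.finals = start, arcs, finals

def compose(t1, t2):
    """Epsilon-free composition: match t1's output labels against t2's input labels."""
    start = (t1.start, t2.start)
    arcs = defaultdict(list)
    stack, seen = [start], {start}
    while stack:
        state = stack.pop()
        q1, q2 = state
        for (i1, o1, w1, n1) in t1.arcs.get(q1, []):
            for (i2, o2, w2, n2) in t2.arcs.get(q2, []):
                if o1 == i2:                              # labels must match to compose
                    nxt = (n1, n2)
                    arcs[state].append((i1, o2, w1 + w2, nxt))   # tropical: add weights
                    if nxt not in seen:
                        seen.add(nxt)
                        stack.append(nxt)
    finals = {(f1, f2): w1 + w2
              for f1, w1 in t1.finals.items() for f2, w2 in t2.finals.items()}
    return WFST(start, arcs, finals)

# Toy example: T1 maps phone-like symbols to words one-to-one; T2 is a tiny grammar.
T1 = WFST(0, {0: [("d", "DATA", 1.0, 0), ("k", "CAT", 2.0, 0)]}, {0: 0.0})
T2 = WFST(0, {0: [("DATA", "DATA", 0.5, 1)], 1: [("CAT", "CAT", 0.3, 1)]}, {1: 0.0})
print(compose(T1, T2).arcs[(0, 0)])   # [('d', 'DATA', 1.5, (0, 1))]
```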

67 G

68 L

69 G o L

70 min(det(L o G))

71 Advanced Search (= Decoding)
How to weight the AM and LM
Speeding things up: Viterbi beam decoding
Multipass decoding
  N-best lists
  Lattices
  Word graphs
  Meshes/confusion networks
Finite State Methods

