
1 An Introduction to Statistical Machine Translation Dept. of CSIE, NCKU Yao-Sheng Chang Date: 2011.04.12

2 Outline
 Introduction
 Peter F. Brown et al., "The Mathematics of Statistical Machine Translation: Parameter Estimation," Computational Linguistics, vol. 19, no. 2, 1993, pp. 263-311
 Model 1

3 Introduction (1)
 Machine translation is now practical
 Statistical methods, information theory
 Faster computers, large storage
 Machine-readable corpora
 Statistical methods have proven their value
 Automatic speech recognition
 Lexicography, natural language processing

4 Introduction (2)
 Translation involves many cultural aspects
 We consider only the translation of individual sentences, and merely acceptable sentences
 Every sentence in one language is a possible translation of any sentence in the other
 Assign to each pair (S, T) a probability Pr(T|S): the probability that a translator will produce T in the target language when presented with S in the source language

5 Statistical Machine Translation (SMT)
 Noisy-channel problem

6 Fundamentals of SMT
 Given a string of French f, the job of our translation system is to find the string e that the speaker had in mind when producing f (Bayes' theorem)
 Since the denominator Pr(f) is a constant, the best e is the one with the greatest probability
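The slide's equation is not in the transcript; the noisy-channel decomposition it refers to is the standard one:

```latex
\hat{e} = \arg\max_{e} \Pr(e \mid f)
        = \arg\max_{e} \frac{\Pr(e)\,\Pr(f \mid e)}{\Pr(f)}
        = \arg\max_{e} \Pr(e)\,\Pr(f \mid e)
```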

7 Practical Challenges
 Computation of the translation model Pr(f|e)
 Computation of the language model Pr(e)
 Decoding (i.e., searching for the e that maximizes Pr(f|e)  Pr(e))

8 Alignment of case 1

9 Alignment of case 2

10 Alignment of case 3

11 Formulation of Alignment (1)
 Let e = e_1^l = e_1 e_2 ... e_l and f = f_1^m = f_1 f_2 ... f_m
 An alignment between a pair of strings e and f is a mapping of every word f_j to some word e_i
 In other words, an alignment a between e and f tells us that the word f_j, 1 ≤ j ≤ m, is generated by the word e_{a_j}, a_j ∈ {0, 1, ..., l}; a_j = i means f_j is aligned to e_i
 There are (l+1)^m different alignments between e and f (including the null word, i.e., no mapping); see the small enumeration sketch after this slide
e = e_1 e_2 ... e_i ... e_l
f = f_1 f_2 ... f_j ... f_m
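A minimal Python sketch, assuming toy two-word sentences, that enumerates every alignment a = (a_1, ..., a_m) with each a_j in {0, ..., l} and confirms the (l+1)^m count:

```python
from itertools import product

e_words = ["the", "house"]   # l = 2 English words; position 0 is the NULL word
f_words = ["la", "maison"]   # m = 2 French words

l, m = len(e_words), len(f_words)
# Every alignment assigns each French position j a value a_j in {0, ..., l}.
alignments = list(product(range(l + 1), repeat=m))
print(len(alignments), (l + 1) ** m)   # prints: 9 9
```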

12 Formulation of Alignment (2)
 Probability of an alignment
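The formula itself is not in the transcript; in Brown et al.'s formulation, the alignment enters the translation model through

```latex
\Pr(f \mid e) = \sum_{a} \Pr(f, a \mid e),
\qquad
\Pr(a \mid f, e) = \frac{\Pr(f, a \mid e)}{\Pr(f \mid e)}
```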

13 Translation Model
 The alignment a can be represented by a series a_1^m = a_1 a_2 ... a_m of m values, each between 0 and l, such that if the word in position j of the French string is connected to the word in position i of the English string then a_j = i, and if it is not connected to any English word then a_j = 0 (null).

14 IBM Model I (1)
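The Model 1 equation on this slide is an image; the standard form from Brown et al. (1993) is

```latex
\Pr(f \mid e)
  = \frac{\epsilon}{(l+1)^m}
    \sum_{a_1=0}^{l} \cdots \sum_{a_m=0}^{l} \prod_{j=1}^{m} t(f_j \mid e_{a_j})
  = \frac{\epsilon}{(l+1)^m} \prod_{j=1}^{m} \sum_{i=0}^{l} t(f_j \mid e_i)
```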

15 IBM Model I (2)
 The alignment is determined by specifying the values of a_j for j from 1 to m, each of which can take any value from 0 to l.

16 Constrained Maximization
 We wish to adjust the translation probabilities so as to maximize Pr(f|e) subject to the constraint that, for each e,
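The constraint itself is not in the transcript; it is the normalization of the translation probabilities:

```latex
\sum_{f} t(f \mid e) = 1
```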

17 Lagrange Multipliers (1)
 Method of Lagrange multipliers: Lagrange multipliers with one constraint
 If there is a maximum or minimum subject to the constraint g(x, y) = 0, then it will occur at one of the critical numbers of the function F defined below
 f(x, y) is called the objective function
 g(x, y) = 0 is called the constraint equation
 F(x, y, λ) is called the Lagrange function
 λ is called the Lagrange multiplier
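The definition of F is not in the transcript; the standard form is

```latex
F(x, y, \lambda) = f(x, y) + \lambda\, g(x, y)
```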

18 Lagrange Multipliers (2)
 Example 1: Maximize the objective subject to the constraint
 Let F be the Lagrange function and set its partial derivatives to zero
 Substituting into (2) and (3), we obtain (5) and (6)
 Substituting (5) and (6) into (4), we obtain the critical point, from which the maximum value follows
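The example's formulas are not recoverable from the transcript; as an illustration of the technique only (not the slide's original numbers), a minimal worked example:

```latex
\text{Maximize } f(x, y) = xy \ \text{ subject to } \ g(x, y) = x + y - 1 = 0.\\
F(x, y, \lambda) = xy + \lambda (x + y - 1):\quad
\frac{\partial F}{\partial x} = y + \lambda = 0,\quad
\frac{\partial F}{\partial y} = x + \lambda = 0,\quad
\frac{\partial F}{\partial \lambda} = x + y - 1 = 0\\
\Rightarrow\ x = y = \tfrac{1}{2},\qquad f_{\max} = \tfrac{1}{4}
```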

19 Lagrange Multipliers (3)
 Following standard practice for constrained maximization, we introduce Lagrange multipliers λ_e and seek an unconstrained extremum of the auxiliary function
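The auxiliary function appears on the slide as an image; in Brown et al.'s notation it is

```latex
h(t, \lambda)
  = \Pr(f \mid e)
    - \sum_{e} \lambda_e \Bigl( \sum_{f} t(f \mid e) - 1 \Bigr)
```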

20 Derivation (1)
 The partial derivative of h with respect to t(f|e) is given below, where δ is the Kronecker delta function, equal to one when both of its arguments are the same and equal to zero otherwise
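The derivative itself is an image on the slide; for Model 1 it works out to (Brown et al., 1993)

```latex
\frac{\partial h}{\partial t(f \mid e)}
  = \frac{\epsilon}{(l+1)^m}
    \sum_{a_1=0}^{l} \cdots \sum_{a_m=0}^{l}
    \sum_{j=1}^{m} \delta(f, f_j)\, \delta(e, e_{a_j})
    \prod_{k \neq j} t(f_k \mid e_{a_k})
    \;-\; \lambda_e
```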

21 Derivation (2)
 We call the expected number of times that e connects to f in the translation (f|e) the count of f given e for (f|e) and denote it by c(f|e; f, e). By definition,
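The definition is an image on the slide; writing boldface f, e for the sentences and plain f, e for individual words, it is

```latex
c(f \mid e;\, \mathbf{f}, \mathbf{e})
  = \sum_{a} \Pr(a \mid \mathbf{f}, \mathbf{e})
    \sum_{j=1}^{m} \delta(f, f_j)\, \delta(e, e_{a_j})
```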

22 Derivation (3)
 Replacing λ_e by λ_e Pr(f|e), Equation (11) can be written very compactly
 In practice, our training data consists of a set of translations (f^(1)|e^(1)), (f^(2)|e^(2)), ..., (f^(S)|e^(S)), so this equation becomes a sum over the corpus (both forms are shown below)
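The compact forms are images on the slide; they are

```latex
t(f \mid e) = \lambda_e^{-1}\, c(f \mid e;\, \mathbf{f}, \mathbf{e}),
\qquad
t(f \mid e) = \lambda_e^{-1} \sum_{s=1}^{S} c(f \mid e;\, \mathbf{f}^{(s)}, \mathbf{e}^{(s)})
```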

23 Derivation (4)
 The sums over alignments can be rearranged to give an expression that can be evaluated efficiently.
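The identities on the slide are images; for Model 1 they are

```latex
\sum_{a_1=0}^{l} \cdots \sum_{a_m=0}^{l} \prod_{j=1}^{m} t(f_j \mid e_{a_j})
  = \prod_{j=1}^{m} \sum_{i=0}^{l} t(f_j \mid e_i),
\qquad
c(f \mid e;\, \mathbf{f}, \mathbf{e})
  = \frac{t(f \mid e)}{t(f \mid e_0) + \cdots + t(f \mid e_l)}
    \sum_{j=1}^{m} \delta(f, f_j) \sum_{i=0}^{l} \delta(e, e_i)
```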

24 Derivation (5)
 Thus, the number of operations necessary to calculate a count is proportional to l + m rather than to (l + 1)^m, as Equation (12) might suggest
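As a concrete illustration of Derivations (1) through (5), here is a minimal Python sketch of Model 1 EM training on a toy corpus; the toy sentence pairs, the function name train_ibm_model1, and the uniform initialization are assumptions for the example, not part of the slides.

```python
from collections import defaultdict

def train_ibm_model1(corpus, iterations=10):
    """EM training of t(f|e) for IBM Model 1 on (french, english) word lists."""
    # Add the NULL word at English position 0, as in the alignment slides.
    corpus = [(f_words, ["NULL"] + e_words) for f_words, e_words in corpus]
    f_vocab = {f for f_words, _ in corpus for f in f_words}
    t = defaultdict(lambda: 1.0 / len(f_vocab))  # uniform initial t(f|e)

    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f|e) summed over the corpus
        total = defaultdict(float)  # sum of count[(f, e)] over f, for each e
        # E-step: the efficient form -- each (f_j, e_i) co-occurrence gets the
        # fractional count t(f_j|e_i) / sum_i t(f_j|e_i), so the cost per pair
        # is proportional to l * m rather than (l + 1)^m.
        for f_words, e_words in corpus:
            for f in f_words:
                norm = sum(t[(f, e)] for e in e_words)
                for e in e_words:
                    c = t[(f, e)] / norm
                    count[(f, e)] += c
                    total[e] += c
        # M-step: renormalize so that sum_f t(f|e) = 1 for every e.
        for (f, e), c in count.items():
            t[(f, e)] = c / total[e]
    return t

# Toy usage: after a few iterations t("maison"|"house") grows toward 1.
pairs = [(["la", "maison"], ["the", "house"]),
         (["la", "fleur"], ["the", "flower"])]
t = train_ibm_model1(pairs)
print(round(t[("maison", "house")], 3))
```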

25 EM Algorithm

26 EM Algorithm

27 Introduction (1)
In statistical computing, an expectation-maximization (EM) algorithm is an algorithm for finding maximum likelihood estimates of parameters in probabilistic models, where the model depends on unobserved latent variables. EM is frequently used for data clustering in machine learning and computer vision.

28 Introduction (2)
EM alternates between performing an expectation (E) step, which computes an expectation of the likelihood by including the latent variables as if they were observed, and a maximization (M) step, which computes the maximum likelihood estimates of the parameters by maximizing the expected likelihood found on the E step. The parameters found on the M step are then used to begin another E step, and the process is repeated. (From: http://en.wikipedia.org/wiki/Expectation-maximization_algorithm)

29
 The EM algorithm is a soft version of K-means clustering.
 The idea is that the observed data are generated by several underlying causes.
 Each cause contributes independently to the generation process, but we only see the final mixture, without information about which cause contributed what.

30
 Observable data: each x_i is an observed data vector
 Unobservable / hidden data: each z_ij can be interpreted as a cluster-membership probability
 The component z_ij is 1 if object i is a member of cluster j

31 Initial Assumption
At first, suppose we have a data set X = {x_1, ..., x_n}, where each x_i is the vector that corresponds to the i-th data point. Further, assume the samples are drawn from a mixture of k Gaussians. Recall the p.d.f. of the multivariate normal distribution, and that a normal distribution in a variate x with mean μ and variance σ² is a statistical distribution with the probability function shown below.
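The density formulas on the slide are images; the standard forms are

```latex
\mathcal{N}(x \mid \mu, \Sigma)
  = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}}
    \exp\!\Bigl(-\tfrac{1}{2}\,(x - \mu)^{\top} \Sigma^{-1} (x - \mu)\Bigr),
\qquad
\mathcal{N}(x \mid \mu, \sigma^2)
  = \frac{1}{\sqrt{2\pi\sigma^2}}\,
    \exp\!\Bigl(-\frac{(x - \mu)^2}{2\sigma^2}\Bigr)
```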

32 E-step
Let Z be an n-by-k matrix, where z_ij is the responsibility of cluster j for data point x_i. Notice that, if we set the mixture weight and parameters for each cluster j, then by Bayes' formula we have the expression below.
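The slide's own notation is not in the transcript; a standard form of the E-step responsibilities is

```latex
z_{ij} = \Pr(\text{cluster } j \mid x_i)
       = \frac{\pi_j\, \mathcal{N}(x_i \mid \mu_j, \Sigma_j)}
              {\sum_{l=1}^{k} \pi_l\, \mathcal{N}(x_i \mid \mu_l, \Sigma_l)}
```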

33 M-step
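The update equations on this slide are images; the standard M-step re-estimates are

```latex
n_j = \sum_{i=1}^{n} z_{ij},\qquad
\pi_j = \frac{n_j}{n},\qquad
\mu_j = \frac{1}{n_j} \sum_{i=1}^{n} z_{ij}\, x_i,\qquad
\Sigma_j = \frac{1}{n_j} \sum_{i=1}^{n} z_{ij}\, (x_i - \mu_j)(x_i - \mu_j)^{\top}
```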

34 Log likelihood
The log likelihood of the data set X given the parameters is shown below, where π_j is the weight of cluster j. Notice that the weights sum to one.
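The formula on the slide is an image; the standard mixture log likelihood is

```latex
\log L(X \mid \Theta)
  = \sum_{i=1}^{n} \log \sum_{j=1}^{k} \pi_j\, \mathcal{N}(x_i \mid \mu_j, \Sigma_j),
\qquad
\sum_{j=1}^{k} \pi_j = 1
```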

35 Worked Example (1): Suppose ..., then ...

36 Worked Example (2)
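The numbers on the two demonstration slides are not in the transcript; as a stand-in, here is a minimal one-dimensional Gaussian-mixture EM sketch in Python. The data points, the two-cluster setup, and the names normal_pdf / em_gmm are assumptions for illustration only, not the slides' actual example.

```python
import math

def normal_pdf(x, mu, var):
    """Density of a 1-D normal distribution with mean mu and variance var."""
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def em_gmm(xs, mus, variances, weights, iterations=20):
    """Alternate E and M steps for a k-component 1-D Gaussian mixture."""
    n, k = len(xs), len(mus)
    for _ in range(iterations):
        # E-step: responsibilities z[i][j] = P(cluster j | x_i) via Bayes' formula.
        z = []
        for x in xs:
            p = [weights[j] * normal_pdf(x, mus[j], variances[j]) for j in range(k)]
            s = sum(p)
            z.append([pj / s for pj in p])
        # M-step: re-estimate weights, means, and variances from the soft counts.
        for j in range(k):
            nj = sum(z[i][j] for i in range(n))
            weights[j] = nj / n
            mus[j] = sum(z[i][j] * xs[i] for i in range(n)) / nj
            variances[j] = sum(z[i][j] * (xs[i] - mus[j]) ** 2 for i in range(n)) / nj
    return mus, variances, weights

# Toy usage: two well-separated groups of points recover two distinct means.
data = [1.0, 1.2, 0.8, 4.9, 5.1, 5.3]
print(em_gmm(data, mus=[0.0, 6.0], variances=[1.0, 1.0], weights=[0.5, 0.5]))
```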

