
1 Chinese Word Segmentation and Statistical Machine Translation
Presenter: Wu, Jia-Hao
Authors: Ruiqiang Zhang, Keiji Yasuda, Eiichiro Sumita (TOSLP, 2008)
Intelligent Database Systems Lab, National Yunlin University of Science and Technology (N.Y.U.S.T. I. M.)

2 Outline
- Motivation
- Objective
- Methodology
  - Dictionary-based
  - CRF-based
- Experiments
- Conclusion
- Personal Comments

3 Motivation
Chinese word segmentation (CWS) is a necessary step in Chinese-English statistical machine translation. However, there are many choices involved in creating a CWS system, such as the various segmentation specifications and CWS methods. For example, the sentence 我們要發展中國家用電器 can be segmented in two ways with very different translations:
- 我們 / 要 / 發展 / 中國 / 家用電器 → "We want to develop China's home electrical appliances."
- 我們 / 要 / 發展中國家 / 用 / 電器 → "We want developing countries to use electrical appliances."
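To make the ambiguity concrete, here is a minimal sketch (not from the paper) of greedy forward maximum matching: with everything else fixed, the dictionary contents alone decide which of the two readings the segmenter produces. The function name fmm_segment and both dictionaries are illustrative assumptions.

```python
def fmm_segment(sentence, dictionary, max_len=5):
    """Greedy longest-match (forward maximum matching) segmentation, left to right."""
    words, i = [], 0
    while i < len(sentence):
        # Try the longest possible dictionary word starting at position i;
        # fall back to a single character if nothing matches.
        for l in range(min(max_len, len(sentence) - i), 0, -1):
            cand = sentence[i:i + l]
            if l == 1 or cand in dictionary:
                words.append(cand)
                i += l
                break
    return words

sent = "我們要發展中國家用電器"
dict_a = {"我們", "發展", "中國", "家用電器"}        # gives the "China's home appliances" reading
dict_b = {"我們", "發展中國家", "電器"}              # gives the "developing countries" reading
print(" / ".join(fmm_segment(sent, dict_a)))        # 我們 / 要 / 發展 / 中國 / 家用電器
print(" / ".join(fmm_segment(sent, dict_b)))        # 我們 / 要 / 發展中國家 / 用 / 電器
```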

4 Motivation
Chinese word segmentation is a necessary step in Chinese-English statistical machine translation. However, there are many choices involved in creating a CWS system, such as the various specifications and CWS methods.
(Slide diagram: Chinese word segmentation → statistical machine translation; Chinese names are rendered by romanized phonetic transcription.)

5 Objective
They created 16 CWS schemes under different settings to examine the relationship between CWS and SMT. The authors also tested two CWS methods: dictionary-based and CRF-based approaches. They propose two approaches for combining the advantages of different specifications:
- A simple concatenation of training data.
- Linear interpolation of multiple translation models.

6 Methodology - Dictionary-based
The pure dictionary-based CWS does not recognize OOV (out-of-vocabulary) words, so the authors combined an N-gram language model with dictionary-based word segmentation.
- For a given Chinese character sequence C = c_0 c_1 c_2 ... c_N,
- find the word sequence W = w_t0 w_t1 w_t2 ... w_tM (whose words concatenate back to C) that maximizes the language-model probability,
- where δ(u, v) equals 1 if both arguments are the same and 0 otherwise, and is used to handle out-of-vocabulary words.
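A simplified sketch of the dictionary-plus-language-model idea, assuming a unigram language model instead of the paper's full N-gram model and δ-based OOV handling; the helper name viterbi_segment, the word list, and the log-probabilities are illustrative assumptions.

```python
import math

def viterbi_segment(chars, unigram_logprob, max_word_len=5, oov_logprob=-20.0):
    """Dynamic programming over prefixes: pick the word sequence W (covering the
    character sequence exactly) that maximizes the sum of word log-probabilities."""
    n = len(chars)
    best = [(-math.inf, 0) for _ in range(n + 1)]   # best[i] = (score, backpointer)
    best[0] = (0.0, 0)
    for i in range(1, n + 1):
        for l in range(1, min(max_word_len, i) + 1):
            word = chars[i - l:i]
            # In-vocabulary words get their LM score; unknown single characters get
            # a flat OOV penalty so that every sentence still has a segmentation.
            lp = unigram_logprob.get(word, oov_logprob if l == 1 else -math.inf)
            score = best[i - l][0] + lp
            if score > best[i][0]:
                best[i] = (score, i - l)
    words, i = [], n                                 # follow backpointers to recover W
    while i > 0:
        j = best[i][1]
        words.append(chars[j:i])
        i = j
    return list(reversed(words))

lm = {"我們": -3.0, "要": -4.0, "發展": -3.5, "中國": -3.2, "家用電器": -6.0,
      "發展中國家": -7.0, "用": -4.0, "電器": -4.5}
print(" / ".join(viterbi_segment("我們要發展中國家用電器", lm)))
```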

7 Methodology - CRF-based
IOB tagging: each character of a word is labeled
- B if it is the first character of a multiple-character word,
- O if the character functions as an independent (single-character) word,
- I otherwise.
Example: 全北京市 is labeled 全/O 北/B 京/I 市/I.
The probability of an IOB tag sequence T = t_0 t_1 ... t_M, given the observed character sequence W = w_0 w_1 ... w_M, is modeled with a standard linear-chain CRF: P(T | W) = exp(Σ_i Σ_k λ_k f_k(t_{i-1}, t_i, W, i)) / Z(W).
Unigram features: w_0, w_-1, w_1, w_-2, w_2, w_0w_-1, w_0w_1, w_-1w_1, w_-2w_-1, w_2w_0.
Bigram features were also used. For feature selection, they simply used absolute counts for each feature in the training data and defined a cutoff value for each feature type.
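A small illustration (not the authors' implementation) of the IOB conversion and the unigram feature templates listed above; the function names to_iob and unigram_features and the boundary symbol <S> are assumptions for the sketch.

```python
def to_iob(words):
    """Label each character: O = independent single-character word,
    B = first character of a multi-character word, I = the rest."""
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append((w, "O"))
        else:
            tags.append((w[0], "B"))
            tags.extend((c, "I") for c in w[1:])
    return tags

def unigram_features(chars, i):
    """Feature templates w0, w-1, w1, w-2, w2, w0w-1, w0w1, w-1w1, w-2w-1, w2w0
    extracted at character position i."""
    def c(k):  # character at relative offset k, or a boundary symbol
        j = i + k
        return chars[j] if 0 <= j < len(chars) else "<S>"
    return {
        "w0": c(0), "w-1": c(-1), "w1": c(1), "w-2": c(-2), "w2": c(2),
        "w0w-1": c(0) + c(-1), "w0w1": c(0) + c(1), "w-1w1": c(-1) + c(1),
        "w-2w-1": c(-2) + c(-1), "w2w0": c(2) + c(0),
    }

print(to_iob(["全", "北京市"]))                 # [('全', 'O'), ('北', 'B'), ('京', 'I'), ('市', 'I')]
print(unigram_features(list("全北京市"), 2))    # features at the character 京
```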

8 Methodology - Achilles
Achilles is an in-house CWS that includes both the dictionary-based and the CRF-based approaches.
- Dictionary-based: zero OOV recognition rate, but a higher in-vocabulary (IV) recognition rate.
- CRF-based: OOV recognition rate higher than the dictionary-based approach; the best F-scores.

9 Methodology - Phrase-Based SMT
The method uses a framework of log-linear models to integrate multiple features:
P(E | F) = exp(Σ_i λ_i f_i(F, E)) / Σ_{E'} exp(Σ_i λ_i f_i(F, E')),
where f_i(F, E) is the logarithmic value of the i-th feature and λ_i is the weight of the i-th feature. The target sentence candidate that maximizes P(E | F) is the solution.
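A minimal sketch of the log-linear scoring step, with made-up feature names, weights, and feature values: since the normalizer of P(E | F) is constant for a fixed source sentence F, the candidate that maximizes the weighted feature sum also maximizes P(E | F).

```python
def loglinear_score(features, weights):
    """Weighted sum sum_i lambda_i * f_i(F, E) over the (log-valued) features."""
    return sum(weights[name] * value for name, value in features.items())

# Hypothetical weights and feature values for two translation candidates.
weights = {"lm": 0.5, "phrase_tm": 0.3, "lex_tm": 0.1, "word_penalty": -0.1}
candidates = {
    "we want to develop china's home electrical appliances":
        {"lm": -12.1, "phrase_tm": -4.2, "lex_tm": -6.0, "word_penalty": 8.0},
    "we want developing countries to use electrical appliances":
        {"lm": -13.5, "phrase_tm": -5.0, "lex_tm": -7.1, "word_penalty": 8.0},
}
best = max(candidates, key=lambda e: loglinear_score(candidates[e], weights))
print(best)
```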

10 Experiments
The data used in the experiments were provided by LDC; the English sentences of this data plus the Xinhua news portion of the LDC English Gigaword corpus were also used.
Implementation of CWS schemes: the statistics reported for each scheme are
- Tokens: the total number of words in the training data.
- Unique words: the lexicon size of the segmented training data.
- OOVs: the unknown words in the test data.
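A small sketch of how these three statistics could be computed for a CWS scheme from whitespace-segmented files; the file names are placeholders, not the paper's actual data paths.

```python
def corpus_stats(train_path, test_path):
    """Tokens and lexicon size of the segmented training data, OOVs of the test data."""
    with open(train_path, encoding="utf-8") as f:
        train_words = [w for line in f for w in line.split()]
    with open(test_path, encoding="utf-8") as f:
        test_words = [w for line in f for w in line.split()]
    lexicon = set(train_words)
    tokens = len(train_words)                            # total number of words in training data
    unique_words = len(lexicon)                          # lexicon size of segmented training data
    oovs = {w for w in test_words if w not in lexicon}   # unknown words in the test data
    return tokens, unique_words, len(oovs)

print(corpus_stats("train.seg.zh", "test.seg.zh"))       # placeholder file names
```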

11 Experiment
The effect of CWS specifications on SMT.

12 Experiment

13 Experiment - Combining multiple CWS schemes
Effect of combining training data from multiple CWS specifications:
- They create a new CWS scheme, called dict-hybrid, by combining the AS, CITYU, MSR, and PKU specifications.
- The resulting training data has 49,546,231 tokens and 112,072 unique words; the test data has 693 OOVs.

14 Experiment
Effect of feature interpolation of translation models:
- The authors generated multiple translation models by using different word segmenters.
- The phrase translation model p(e|f) can be linearly interpolated as p(e|f) = Σ_{i=1..S} α_i · p_i(e|f),
- where p_i(e|f) is the phrase translation model corresponding to the i-th CWS, α_i is its weight, and S is the total number of models.
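A minimal sketch of the interpolation itself over toy phrase tables; the phrase-table representation, the entries, and the weights are illustrative assumptions, not the paper's actual models.

```python
from collections import defaultdict

def interpolate_phrase_tables(tables, alphas):
    """p(e|f) = sum_i alpha_i * p_i(e|f); entries missing from a table contribute 0."""
    assert abs(sum(alphas) - 1.0) < 1e-9
    combined = defaultdict(float)
    for alpha, table in zip(alphas, tables):
        for (f, e), p in table.items():
            combined[(f, e)] += alpha * p
    return dict(combined)

# Toy phrase tables from two hypothetical segmenters.
table_1 = {("家用電器", "home electrical appliances"): 0.6}
table_2 = {("家用電器", "home electrical appliances"): 0.4,
           ("電器", "electrical appliances"): 0.5}
print(interpolate_phrase_tables([table_1, table_2], [0.5, 0.5]))
```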

15 Conclusion
The authors analyzed multiple CWS specifications and built a CWS for each one to examine how they affect translation. They proposed a new approach, linear interpolation of translation features, which improved translation quality and achieved a better BLEU score than any single CWS scheme.

16 Comments
Advantage
- Many experiments are provided to evaluate performance.
Drawback
- Some of the experimental analysis is complex to interpret.
Application
- Chinese word segmentation.
- Statistical machine translation.

