AUTOMATIC PHONETIC ANNOTATION OF AN ORTHOGRAPHICALLY TRANSCRIBED SPEECH CORPUS Rui Amaral, Pedro Carvalho, Diamantino Caseiro, Isabel Trancoso, Luís Oliveira IST, Instituto Superior Técnico INESC, Instituto de Engenharia de Sistemas e Computadores
Summary
– Motivation
– System Architecture
  » Module 1: Grapheme-to-phone converter (G2P)
  » Module 2: Alternative transcriptions generator (ATG)
  » Module 3: Acoustic signal processor
  » Module 4: Phonetic decoder and aligner
– Training and Test Corpora
– Results
  » Transcription and alignment (development phase)
  » Test corpus annotation (evaluation phase)
– Conclusions and Future Work
Motivation
– Manual annotation is a time-consuming, repetitive task (over 60 x real time)
– Large corpora to process
– No expert intervention required
  » No widely adopted standard procedures exist
  » Error-prone
  » Inconsistencies among human annotators
System Architecture
– Input: orthographically transcribed speech corpus
– Grapheme-to-Phone Converter (uses rules and a lexicon)
– Alternative Transcriptions Generator
– Acoustic Signal Processor
– Phonetic Decoder/Aligner
– Output: phonetically annotated speech corpus
- Module 1 - Grapheme-to-Phone Converter
– Modules reused from the Portuguese TTS system (DIXI)
– Text normalisation
  » Special symbols, numerals, abbreviations and acronyms
– Broad phonetic transcription
  » Careful pronunciation of each word
  » Set of 200 rules
  » Small exceptions dictionary (364 entries)
  » SAMPA phonetic alphabet
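As a sketch of how a rule-plus-exceptions G2P converter of this kind operates, the fragment below checks an exceptions dictionary first and otherwise applies longest-match letter-to-phone rules. The rules and dictionary entries are illustrative placeholders, not DIXI's actual 200 rules or 364-entry lexicon.

```python
# Minimal sketch of a rule-based grapheme-to-phone converter with an
# exceptions dictionary, in the spirit of Module 1. The rule set and
# exceptions below are toy examples with SAMPA-like symbols.

# Hypothetical exceptions dictionary: words looked up directly.
EXCEPTIONS = {"oito": '"ojtu'}

# Hypothetical letter-to-phone rules (digraphs and single letters).
RULES = {"nh": "J", "lh": "L", "ch": "S", "o": "u", "a": "6"}

def g2p(word: str) -> str:
    """Convert an orthographic word to a broad phonetic transcription."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    phones, i = [], 0
    while i < len(word):
        # Prefer the longest-matching grapheme (digraphs before letters).
        if word[i:i + 2] in RULES:
            phones.append(RULES[word[i:i + 2]])
            i += 2
        else:
            phones.append(RULES.get(word[i], word[i]))
            i += 1
    return "".join(phones)

print(g2p("oito"))   # exception lookup -> "ojtu
print(g2p("manha"))  # rule application -> m6J6
```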
- Module 2 - Alternative Transcriptions Generator
– Transformation of phone sequences into lattices
– Based on optional rules, which account for:
  » Sandhi
  » Vowel reduction
– Rules specified using finite-state grammars and simple transduction operators, e.g. A (B C) D
Examples:
– Vowel reduction:
  » oito: ["ojtu] → ["ojt]
  » restaurante: [R@Stawr"6~t] → [R@StOr"6~t]
– Alternative pronunciations:
  » viagens: [vj"aZ6~j~S] → [vj"aZe~S]
Example (rule application):
– Phrase: "vou para a praia."
– Canonical phonetic transcription: [v"o p6r6 6 pr"aj6]
– Narrow phonetic transcription (most frequent): [v"o pr"a pr"ai6] = sandhi + vowel reduction
– Rules:
  » DEF_RULE 6a, ( (6 NULL) (sil NULL) (6 a) )
  » DEF_RULE pra, ( p ("6 NULL) r 6 )
– (The slide shows the resulting lattice as a diagram.)
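The optional rules above expand one canonical transcription into a set of alternative paths through a lattice. A minimal sketch, assuming rules are simple optional string rewrites (the real system uses finite-state grammars and transduction operators); the lattice is represented here simply as the set of paths through it:

```python
# Sketch: expand a canonical phone string into the set of variants
# produced by optional rewrite rules. Phone strings follow the slide's
# vowel-reduction example; the expansion code itself is an assumption.

def expand(phones: str, rules: list[tuple[str, str]]) -> set[str]:
    """Apply each optional rule zero or more times, collecting variants."""
    variants = {phones}
    changed = True
    while changed:
        changed = False
        for seq in list(variants):
            for pat, rep in rules:
                idx = seq.find(pat)
                while idx != -1:
                    alt = seq[:idx] + rep + seq[idx + len(pat):]
                    if alt not in variants:
                        variants.add(alt)
                        changed = True
                    idx = seq.find(pat, idx + 1)
    return variants

# Vowel-reduction rule from the slides: word-final [u] may be deleted.
rules = [('"ojtu', '"ojt')]
print(expand('"ojtu', rules))  # both the canonical and reduced forms
```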
- Module 3 - Acoustic Signal Processor
– Extraction of acoustic signal features
– Sampling: 16 kHz, 16 bits
– Parameterisation: MFCC (Mel-Frequency Cepstral Coefficients)
  » Decoding: 14 coefficients, energy, 1st- and 2nd-order differences, 25 ms Hamming windows, updated every 10 ms
  » Alignment: 14 coefficients, energy, 1st- and 2nd-order differences, 16 ms Hamming windows, updated every 5 ms
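The windowing configuration on this slide can be illustrated with the framing step that precedes MFCC computation; the filterbank and cepstral stages are omitted, and this is a sketch, not the system's actual front end:

```python
# Framing/windowing sketch for the decoding configuration: 16 kHz signal,
# 25 ms Hamming windows advanced every 10 ms. MFCC stages are omitted.
import numpy as np

def frame_signal(x, fs=16000, win_ms=25, hop_ms=10):
    win = int(fs * win_ms / 1000)   # 400 samples per window
    hop = int(fs * hop_ms / 1000)   # 160-sample frame advance
    n_frames = 1 + (len(x) - win) // hop
    frames = np.stack([x[i * hop: i * hop + win] for i in range(n_frames)])
    return frames * np.hamming(win)  # taper each frame

x = np.random.randn(16000)           # one second of audio
frames = frame_signal(x)
print(frames.shape)                  # (98, 400)
```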
- Module 4 - Phonetic Decoder and Aligner
– Selects the phonetic transcription closest to the utterance
– Viterbi algorithm
– 2 × 60 HMM models
  » Architecture: left-to-right, 3-state, 3-mixture
NOTE: modules 3 and 4 use the Hidden Markov Model Toolkit, HTK (Entropic Research Labs)
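A minimal illustration of the Viterbi search, here with discrete observations and log probabilities; the real system uses HTK's continuous-density, 3-state left-to-right phone models, so all model parameters below are toy values:

```python
# Toy Viterbi decoder sketch (discrete observations, log domain).
import math

def viterbi(obs, states, log_pi, log_a, log_b):
    """Return the most likely state sequence for the observation list."""
    # log_pi[s]: initial, log_a[p][s]: transition, log_b[s][o]: emission
    V = [{s: log_pi[s] + log_b[s][obs[0]] for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            best = max(states, key=lambda p: V[-1][p] + log_a[p][s])
            col[s] = V[-1][best] + log_a[best][s] + log_b[s][o]
            ptr[s] = best
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):      # trace back-pointers
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy 2-state model with hypothetical probabilities.
states = ["A", "B"]
log_pi = {"A": math.log(0.9), "B": math.log(0.1)}
log_a = {"A": {"A": math.log(0.7), "B": math.log(0.3)},
         "B": {"A": math.log(0.4), "B": math.log(0.6)}}
log_b = {"A": {"x": math.log(0.9), "y": math.log(0.1)},
         "B": {"x": math.log(0.2), "y": math.log(0.8)}}
print(viterbi(["x", "y", "y"], states, log_pi, log_a, log_b))
# -> ['A', 'B', 'B']
```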
Training and Test Corpora
– Subset of the EUROM 1 multilingual corpus
  » European Portuguese
  » Collected in an anechoic room, 16 kHz, 16 bits
  » 5 male + 5 female speakers (few talkers)
  » Prompt texts:
    - Passages: paragraphs of 5 related sentences; free translations of the English version of EUROM 1, or adapted from books and newspaper text
    - Filler sentences: 50 sentences grouped in blocks of 5 sentences each, built to increase the number of different diphones in the corpus
  » Manually annotated
Training and Test Corpora (cont.)
– Corpora: training corpus, test corpus 1, test corpus 2
– Material:
  » Passages O0-O9, P0-P9: translations from English
  » Passages Q0-Q9, R0-R9: books and newspaper text
  » Filler sentences: F0-F9
Transcription and alignment results
– Transcription:
  » Precision = ((correct − inserted) / total) × 100%
– Alignment:
  » % of cases in which the absolute error is < 10 ms
  » average absolute error including 90% of cases
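The transcription precision measure can be written as a one-line function; the phone counts in the example call are hypothetical:

```python
# Transcription precision as defined above: correct phones net of
# insertions, relative to the total number of reference phones.
def precision(correct: int, inserted: int, total: int) -> float:
    return (correct - inserted) / total * 100.0

# Hypothetical counts: 880 correct phones, 30 insertions, 1000 total.
print(precision(correct=880, inserted=30, total=1000))  # 85.0
```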
Annotation strategies and Results
NOTE: alignment evaluated only in places where the decoded sequence matched the manual sequence
– Strategy 1: HMM alignment
– Strategy 2: HMM recognition
– Strategy 3: HMM recognition (transcription) + HMM alignment (alignment)
Annotation results - Transcription -
– Precision by rule set:
  » Canonical: 74% (Test 1), 76.9% (Test 2)
  » Sandhi: 77.1% (Test 1), 79.4% (Test 2)
  » Vowel reduction and alternative pronunciation: 85.1% (Test 1), 84.5% (Test 2)
– Comments:
  » Better precision achieved for canonical transcriptions of Test 2
  » Highest global precision achieved in Test 1
  » Successive application of the rules improves precision
Annotation results - Alignment -
– Comments:
  » Better alignment obtained with the best decoder
  » Some problematic transitions: vowels, nasal vowels and liquids
Conclusions
– Better annotation results with:
  » Alternative transcriptions (compared to canonical)
  » Different models for alignment and recognition
– About 84% precision in transcription and a 22 ms maximum alignment error for 90% of the cases
Future Work
– Automatic rule inference
  » 1st phase: comparison and selection of rules
  » 2nd phase: validation or phonetic-linguistic interpretation
– Annotation of other speech corpora to build better acoustic models
– Assignment of probabilistic information to the alternative pronunciations generated by rule
TOPIC ANNOTATION IN BROADCAST NEWS Rui Amaral, Isabel Trancoso IST, Instituto Superior Técnico INESC, Instituto de Engenharia de Sistemas e Computadores
Preliminary work
– System architecture:
  » Two-stage unsupervised clustering algorithm
    - nearest-neighbour search method
    - Kullback-Leibler distance measure
  » Topic language models: smoothed unigram statistics
  » Topic decoder: based on Hidden Markov Models (HMMs)
NOTE: topic models created with the CMU-Cambridge Statistical Language Modelling Toolkit
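A sketch of the Kullback-Leibler distance between smoothed unigram story models, as used in the clustering stage; the add-alpha smoothing and the symmetrisation shown here are assumptions, since the slide does not specify them:

```python
# Smoothed unigram story models compared with a symmetrised KL distance.
# Smoothing scheme and symmetrisation are illustrative assumptions.
import math
from collections import Counter

def unigram_model(words, vocab, alpha=1.0):
    """Add-alpha smoothed unigram distribution over a fixed vocabulary."""
    counts = Counter(words)
    denom = len(words) + alpha * len(vocab)
    return {w: (counts[w] + alpha) / denom for w in vocab}

def kl(p, q):
    """Kullback-Leibler divergence D(p || q)."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p)

def kl_distance(p, q):
    """Symmetrised KL, usable as a clustering distance."""
    return kl(p, q) + kl(q, p)

# Two toy "stories" over a tiny shared vocabulary.
docs = [["rain", "storm", "rain"], ["vote", "storm", "vote"]]
vocab = {"rain", "storm", "vote"}
p, q = (unigram_model(d, vocab) for d in docs)
print(kl_distance(p, p))      # 0.0: a model is at distance zero from itself
print(kl_distance(p, q) > 0)  # True: different stories are separated
```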
Training and Test Corpora
– Subset of the BD_PUBLICO newspaper text corpus
  » 20000 stories
  » 6-month period (September 1995 to February 1996)
  » Topic-annotated
  » Story length between 100 and 2000 words
  » Normalised text