2 Linking transcriptions to spoken audio John Coleman and Sergio Grau Oxford University Phonetics Laboratory http://www.phon.ox.ac.uk/SpokenBNC

3 Many thanks to:
Lou Burnard (re XML)
Jiahong Yuan, UPenn (for the P2FA aligner)
Dave de Roure & Kevin Page (for discussions re linked data)
John Pybus & Amir Nettler (for experiments with streamed audio fragments)
and to our funders, for £££

4 Outline of our talk:
Large audio corpora and their challenges
Mining a Year of Speech
Random access to audio snippets

5 Multimedia dominates the internet
2005: YouTube launched
2008: YouTube surpasses Yahoo as the world’s No. 2 search engine
2011: video/audio dominates peak-time bandwidth in North America

6 Some browsable audio corpora
www.oyez.org (US Supreme Court recordings)
whitehousetapes.net (1940-1973)
www.scottishcorpus.ac.uk (Scottish Corpus of Texts and Speech)
http://sounds.bl.uk/ (British Library Archival Sound Recordings)


8 Challenges of very large audio collections of spoken language
How does a researcher find audio segments of interest?
How do audio corpus providers mark them up to facilitate searching and browsing?
How can very large-scale audio collections be made accessible?

9 Server-side challenges
Amount of material
Storage:
CD-quality audio: 635 MB/hour
Uncompressed .wav files: 115 MB/hour, i.e. 1.02 TB/year (checked in the sketch below)
Library/archive .wav files: 1 GB/hr, 9 TB/yr
1 TB (1000 GB) hard drive: c. £65 (now £39.95!)
Spoken audio is roughly 250 times the size of its XML transcription
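The TB/year figure follows directly from the hourly rate. A quick back-of-the-envelope check in Python (a sketch; decimal units assumed, and the slide's exact rounding is not stated):

```python
# Sanity-check of the storage figures above.
mb_per_hour = 115                    # uncompressed .wav, per the slide
hours_per_year = 24 * 365            # continuous audio for a whole year
tb_per_year = mb_per_hour * hours_per_year / 1_000_000   # decimal TB
print(f"{tb_per_year:.2f} TB/year")  # prints 1.01 TB/year, close to the slide's 1.02
```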

10 Server-side challenges
Audio format issues:
Uncompressed .wav files: 115 MB/hour
Temptation to use compressed formats
For speech analysis, low-bitrate compression (40 kb/s) is pretty disastrous
Spectral centre-of-gravity measures are unreliable even at higher compression rates, but pitch and formant estimation is OK
van Son (2005) Acta Acustica united with Acustica 91: 771-778

11 Challenges
Amount of material
Computing:
distance measures, etc.
alignment of labels
searching and browsing
Just reading or copying 9 TB takes >1 day
Download time: days or weeks

12 How large? Some biggish transcribed corpora:
Switchboard corpus: 13 days (included in MYS)
Spoken Dutch: 1 month, only a fraction transcribed
Spoken Spanish: 110 hours
OSU Buckeye Corpus: 2 days
Wellington Corpus, NZ: 3 days
Mining a Year of Speech: 218 days so far, on track towards 3.6 years (>1200 days)

13 The “Year of Speech”
A grove of corpora, held at various sites with a common indexing scheme and search tools:
US English:
2,240 hours of telephone conversations
1,255 hours of broadcast news
Talk show conversations (1,000 hrs), Supreme Court oral arguments (5,000 hrs), political speeches and debates
British English:
Spoken audio part of the British National Corpus
>7.4 million words of transcribed speech, 1,400 hours
Digitized in collaboration with the British Library

14 Analogue audio in libraries
British Library: >1m disks and tapes, 5% digitized
Library of Congress Recorded Sound Reference Center: >2m items, including …
International Storytelling Foundation: >8,000 hrs of audio and video
European broadcast archives: >20m hrs (2,283 years), cf. the Large Hadron Collider; media breakdown: 74% on ¼” tape, 19% shellac and vinyl, 7% digital

15 Analogue audio in libraries
Worldwide: ~100m hours (11,415 yrs) of analogue audio, i.e. 4-5 Large Hadron Colliders!
Cost of professional digitization and cataloguing: ~£20/$32 per tape (e.g. a C-90 cassette)
Using speech recognition and natural language technologies (e.g. summarization) could provide more detailed cataloguing/indexing without time-consuming human listening

16 Why so large? Lopsided sparsity
The top ten words (I, you, it, the, 's, and, n't, a, that, yeah) each occur 58,000 times; 12,400 words (23%) occur only once.

17 Why so large? Lopsided sparsity

18 A rule of thumb
To catch:
most English sounds, you need minutes of audio
common words of English … a few hours
a typical person's vocabulary … >100 hrs
pairs of common words … >1000 hrs
arbitrary word-pairs … >100 years

19 Main problem in large corpora: finding needles in the haystack
To address that challenge, we think there are two “killer apps”:
Forced alignment
Data linking, or at least open exposure of digital material, coupled with cross-searching

20 Practicalities
To be of much practical use, such very large corpora must be indexed at word and segment level
All included speech corpora must therefore have associated text transcriptions
We’re using P2FA, the Penn Phonetics Laboratory Forced Aligner, to associate each word and segment with the corresponding start and end points in the sound files (a usage sketch follows below)
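For concreteness, here is a minimal sketch of driving P2FA from Python. The command-line shape (align.py taking a wav file, a transcript, and an output TextGrid) matches the P2FA distribution as we understand it, but the paths and file names are placeholders and options vary between versions:

```python
# Minimal sketch: run the Penn Phonetics Lab Forced Aligner (P2FA) on one file.
# Assumptions: HTK and P2FA are installed, "p2fa/align.py" is the path to the
# aligner script, and the input file names below are placeholders.
import subprocess

subprocess.run(
    ["python", "p2fa/align.py",
     "recording.wav",          # audio to align
     "transcript.txt",         # plain-text word transcription of that audio
     "recording.TextGrid"],    # output: word and phone intervals (Praat format)
    check=True,                # raise if the aligner exits with an error
)
```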

21 Mining (indexing by forced alignment) x 21 million

22 Mining (indexing by forced alignment)

23 Mining (a needle in a haystack)

24 Mining (a diamond in the rough)

25 Challenges for alignments
Problems with documentation and records
Transcription errors
Long untranscribed portions
Some transcribed regions with no audio (lost in copying)

26 Challenges for alignments
Broadcast recordings may include untranscribed commercials
Transcripts generally edit out dysfluencies
Political speeches may extemporize, departing from the published script

27 Challenges for alignments
Overlapping speakers
Background noise/music/babble
Variable signal loudness
Reverberation
Distortion
Poor speaker vocal health/voice quality
Unexpected accents: need a multidialect pronouncing dictionary

28 Issues we’re still grappling with
No standards for adding phonemic transcriptions and timing information to XML transcriptions
Many different possible schemes
How to decide?

29 Enabling other corpora to be brought in in future
Promoting common standards for audio with linked transcription?

30 Automatic Speech-to-Phoneme alignment

31 Aligner output to extended XML
HTK example: HTK output + XML -> extended XML
How to represent the obtained time information within the existing TEI-XML structure? (see the parsing sketch below)
Phone level: 0.5625 0.6125 "IH1", 0.6125 0.8225 "T"
Word level: 0.5625 0.8225 "IT"
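As a sketch of the first half of that step, the HTK-style label lines above (start time, end time, label) can be parsed and the word interval derived from its phones. We assume times are already in seconds, as shown on the slide (raw HTK label times are in 100 ns units):

```python
# Sketch: turn HTK-style "start end label" phone lines into timed records,
# then derive the containing word's interval (slide 31's example: IH1 + T = "IT").
phone_lines = """\
0.5625 0.6125 IH1
0.6125 0.8225 T"""

phones = []
for line in phone_lines.splitlines():
    start, end, label = line.split()
    phones.append((float(start), float(end), label))

# The word runs from its first phone's start to its last phone's end:
word = ("IT", phones[0][0], phones[-1][1])
print(word)   # ('IT', 0.5625, 0.8225), matching the word-level line above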

32 Integrating alignment information in the TEI-XML structure
Time information
Word level
Phoneme level
Phonemic representation of each word
Timeline
(one possible encoding is sketched below)
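Since no standard exists (slide 28), the following is only one possible encoding, sketched in Python: TEI's stock timeline/when anchors with synch pointers on the word, plus an invented seg-based convention for the phone tier. Everything beyond the TEI timeline mechanism is our own illustration, not a standard:

```python
# Sketch: emit slide 31's word ("it", phones IH1 + T) as extended TEI-style
# XML. <timeline>/<when>/@synch are standard TEI; the phoneme <seg> encoding
# and the @phon attribute are invented here for illustration only.
phones = [(0.5625, 0.6125, "IH1"), (0.6125, 0.8225, "T")]

times = sorted({t for (s, e, _) in phones for t in (s, e)})
anchor = {t: f"T{i}" for i, t in enumerate(times)}        # time -> xml:id

whens = "\n  ".join(f'<when xml:id="{anchor[t]}" absolute="{t}"/>' for t in times)
segs = "\n  ".join(
    f'<seg type="phoneme" synch="#{anchor[s]} #{anchor[e]}">{p}</seg>'
    for (s, e, p) in phones)

print(f'''<timeline unit="s">
  {whens}
</timeline>
<w synch="#{anchor[phones[0][0]]} #{anchor[phones[-1][1]]}" phon="IT">
  it
  {segs}
</w>''')
```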

33 Other representations: EXMARaLDA
EXMARaLDA: “Extensible Markup Language for Discourse Annotation” http://www.exmaralda.org/
Example transcription content: "Good evening. I have with me tonight Ann Elk Mistress Ann Elk."

34 Other representations: Voices of the Holocaust
http://voices.iit.edu/xml/voth_project_tei_example.xml
Example transcription content: "This is the first utterance of the interviewer. This is the first utterance of the interviewee."

35 Other representations: IFA Dialog Video corpus, Phonetic Sciences, University of Amsterdam
van Son, R., Wesseling, W., Sanders, E., and van den Heuvel, H., "The IFADV corpus: A free dialog video corpus", LREC’08, Marrakech, 2008
Example transcription content: "... beginnen we weer opnieuw?" ("... shall we start over again?")

36 Other representations: LaBB-CAT (ONZE Miner)
http://onzeminer.sourceforge.net
Transcriber or Praat representation

37 Other representations: Transcriber
http://trans.sourceforge.net
Example transcription content: "so what do you know of your family ’s history like do you know when and why they came to Oxford"

38 Other representations: COLT Corpus
http://www.hd.uib.no/colt/
Sentence level: But I must see Mr [smile again.] [ spoiled again?]...
Word level: But I must see Mr...

39-40 Other representations: Summary
Mostly sentence/word-level time information representation
No phoneme analysis
No phoneme time information
Timeline representation
TEI standard?
Our answer: extended TEI-XML with time and phoneme information

41 Extended TEI-XML example, utterance level: "Wanted me to."

43 Extended TEI-XML example, word level: "wanted"

45 Q. When you have an indexing scheme and a big database, what do you want to do with it? A. Random access to audio snippets

46 Random access to audio snippets
Timing of fragments in the URL, e.g. Gaudi (Google Labs), everyzing.com (now ramp.com):
http://audio.weei.com/search?q=something
http://audio.weei.com/a/42828235/red-sox-pregame-show.htm#q=something&seek=311.989


49 Random access to audio snippets
Audio objects in HTML5 (in the browser), e.g. http://www.phon.ox.ac.uk/jcoleman/useful_test.html
W3C media fragments protocol, e.g. http://www.w3.org/2008/WebVideo/Fragments/
Demo: http://ninsuna.elis.ugent.be/MediaFragmentsPlayer
(a sketch of building such fragment URLs follows below)
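Tying the alignment index back to playback: given a word's start and end times, a W3C temporal media fragment (#t=start,end) can be appended to the audio URL. A sketch; the audio URL below is a made-up placeholder:

```python
# Sketch: build a W3C Media Fragments temporal URI for an aligned word.
# "#t=start,end" (seconds, Normal Play Time) is the spec's default form.
def snippet_url(audio_url: str, start: float, end: float) -> str:
    return f"{audio_url}#t={start},{end}"

# Placeholder URL; times are the word "it" from slide 31:
print(snippet_url("http://example.org/audio/some_recording.wav", 0.5625, 0.8225))
# -> http://example.org/audio/some_recording.wav#t=0.5625,0.8225
```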

50 URNs for audio snippets
Linked data/semantic web approach: refer to each specific word, phoneme, etc. as a specific audio object, not just a time range inside an audio file
Challenge: need an ontology for sounds and sound timelines in audio recordings
Some progress in music ontologies

51 Conclusion
Sound and multimedia corpora/collections are getting very big
In fact multimedia, not text, dominates the internet
So we need standard ways of representing audio structure and accessing its parts
Forced alignment allows us to map transcriptions to audio reasonably accurately
For searching, there are several “demonstration” possibilities, but this is still work in progress

52 Thank you very much!

