Were we or are we? Perception of Reduced Function Words in Spontaneous Conversations
Natasha Warner*, Dan Brenner*, Anna Woods*, Benjamin V. Tucker**, Mirjam Ernestus‡
*U. Arizona, Tucson; **U. Alberta, Edmonton; ‡Radboud U. Nijmegen & MPI for Psycholinguistics

Introduction: Questions
Function words are often reduced or even deleted in casual conversation (Fig. 1). Pairs may neutralize: he’s/he was, we’re/we were. What sources of information do listeners use in interpreting reduced function words? What affects whether listeners extract information from the signal, or simply rely on biases (perhaps based on phrasal frequency)?

Methods
Casual phone conversations by 22 native English-speaking American students, recorded in a sound booth.
Extracted 188 utterances/phrases containing (s)he is, (s)he’s, (s)he was; we are, we’re, we were; they are, they’re, they were; other X is/was/are/were.
3 context conditions (Fig. 1): Full phrase, Limited (through the surrounding vowels), Isolation (target he’s, we were, etc. only).
Isolation supplies acoustic cues within the target word(s); Limited adds speech rate and coarticulation information; Full adds syntactic/semantic information.
62 listeners (42 English monolinguals, all native speakers) heard each item in each context and responded is vs. was or are vs. were.

Results
There is a small overall tendency toward present-tense responses: for is/was pairs, 61.8% “is” responses overall; for are/were pairs, 56.2% “are” responses.
Figure 2 (all data except quotative “like”): Listeners extract significantly more information from the full utterance than from the isolated target. They rely primarily on syntactic/semantic information if the word is actually the shorter option (is, are), but mostly on speech rate if it is the longer option (was, were): a varying effect of the Limited context.
Figure 3 (representative sample items): Items vary in intelligibility. If are/were items are difficult (reduced), utterance context helps greatly, but if they are clear (less reduced), context adds little. For is/was, more context rarely helps even when the items are difficult (reduced).
Figure 4 (Isolation condition only; “like” analyzed for singular targets only, too few tokens in the plural): Young speakers use quotative “like” frequently (e.g. “And she’s like, ‘Yay! I’m so excited for you!’”). In targets without “like,” listeners make significantly better use of acoustic information in is/was than in are/were. When the target is followed by “like,” listeners rely more on a bias toward is, even if they do not hear the following “like” (Isolation).

Discussion
In perceiving reduced, potentially neutralized function words, listeners use acoustic information within the word itself (Isolation), and additionally use cues outside the word (syntax, semantics, speech rate). Speech rate is more helpful for the longer forms (was, were), while syntax/semantics matters most for the shorter forms (’s, ’re). Reduction creates more ambiguity between are/were than between is/was, perhaps because pairs like “we’re” vs. “were” differ less in vowel quality than “he’s” vs. “he was.” Quotative “like” greatly increases ambiguity in the preceding is/was verb, even if the “like” is not heard; this suggests there is more reduction in the high-frequency phrases “he’s like,” “she was like,” etc. Listeners use a variety of information sources, as well as overall bias, perhaps based on frequency, to interpret reduced function words.

Figure 1: Example item, “Er, uh, Tuesday night, uh when we were chillin’ in the spa, but”, with the spans played in the Full, Limited, and Isolation conditions marked.

References
Arai, T. (1999). A case study of spontaneous speech in Japanese. Proceedings of the International Congress of Phonetic Sciences (ICPhS), San Francisco, 1.
Ernestus, M., Baayen, R. H., & Schreuder, R. (2002). The recognition of reduced word forms. Brain and Language, 81, 162–173.
Greenberg, S. (1999). Speaking in shorthand: A syllable-centric perspective for understanding pronunciation variation. Speech Communication, 29.
van de Ven, M., Ernestus, M., & Schreuder, R. (submitted). Contextual influences in the understanding of spontaneous speech.
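Note on tabulation: the percentages and condition effects above are reported only in aggregate; the poster does not show the underlying tabulation or statistical analysis. The following is a minimal illustrative sketch (not the authors' analysis) of how present-tense response rates and the rate of responses matching the spoken form could be computed by verb pair and context condition. The file name responses.csv and its columns (pair, context, target, response) are assumptions made for the example.

# Minimal sketch, assuming a trial-level file "responses.csv" with columns:
#   pair     = "is/was" or "are/were"
#   context  = "Full", "Limited", or "Isolation"
#   target   = the form actually spoken ("is", "was", "are", "were")
#   response = the listener's choice
import csv
from collections import defaultdict

PRESENT = {"is", "are"}  # present-tense response options
# (pair, context) -> [present-tense responses, responses matching the spoken form, total trials]
counts = defaultdict(lambda: [0, 0, 0])

with open("responses.csv", newline="") as f:
    for row in csv.DictReader(f):
        key = (row["pair"], row["context"])
        counts[key][0] += row["response"] in PRESENT
        counts[key][1] += row["response"] == row["target"]
        counts[key][2] += 1

for (pair, context), (n_present, n_match, n_total) in sorted(counts.items()):
    print(f"{pair:8s} {context:9s} "
          f"present-tense responses: {100 * n_present / n_total:5.1f}%  "
          f"match spoken form: {100 * n_match / n_total:5.1f}%")

The significance statements attached to Figures 2-4 would additionally require inferential statistics over listeners and items, which this sketch does not attempt.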