
How do Adults Perceive the Speech of Children with Cochlear Implants?
Sara R. Bernstein, Ann E. Todd, Jan R. Edwards
University of Wisconsin-Madison, USA; Waisman Center
Undergraduate Research Symposium, April 18, 2013, Poster #109

INTRODUCTION
A cochlear implant (CI) can provide an auditory signal to deaf individuals, but with less frequency information than what listeners with normal hearing (NH) can access. Perhaps due to device limitations, the speech of children with CIs is somewhat less intelligible than that of peers with NH. Frequency information helps listeners tell speech sounds apart. Spectral peak refers to the frequency in a sound with the most energy; the sound 's' has a higher-frequency spectral peak than the sound 'sh'.

Children with CIs produce 's' with a lower spectral peak than their peers with NH, but both groups of children produce 'sh' similarly [1]. This effect is present even for productions that are judged as correct by a trained transcriber: judged as correct does not mean acoustically identical.
[Figure: energy-by-frequency (kHz) spectra of a 'sh' token and an 's' token, with the lower spectral peak of 'sh' and the higher spectral peak of 's' marked.]

Question: Is the subtle acoustic difference in 's' between the NH and CI groups perceptible to adults?
Motivation: Begin to understand whether the acoustic difference affects listeners' ease of understanding.

HYPOTHESES
1. Adults will rate the 's' and 'sh' productions by children with CIs closer together on a scale from 's' to 'sh' than those by children with NH. Previous acoustic analyses found that both groups produced 'sh' similarly, but children with CIs produced 's' with less contrast from 'sh' than children with NH [1].
2. Adults will respond most slowly to 's' productions by children with CIs. Children with CIs produced less acoustic contrast between 's' and 'sh' than peers with NH [1], and listeners respond more quickly when sound categories are more distinct from each other [2].

PARTICIPANTS
Talkers: 21 children with NH and 21 children with CIs, 4-7 years old, age-matched within 4 months. Children with CIs were implanted before 2.5 years old. All were native English speakers with typical development (except hearing loss for the CI group).
Listeners: 36 adults (19 female, 17 male), native English speakers with no training in phonetic transcription.

METHODS
Stimuli
- 220 speech samples (tokens) by hearing type and sound: 55 NH-'s', 55 CI-'s', 55 NH-'sh', 55 CI-'sh'
- Subset taken from the acoustic analysis in [1]
- Word-initial consonant-vowel sequences
- Judged as correct by a trained transcriber
- Balanced by vowel context
- Amplitude normalized

Testing Setup
- Tested in a quiet room
- Equipment: laptop and headphones
- Listeners responded using the number keys and the mouse

Design
Three perception tasks were presented in varied order, plus the acoustic analysis from [1]. All 220 tokens were presented in random order in each task.
- Respond to "Is it 's'?" (yes or no) -- measures: token judgment, reaction time
- Respond to "Is it 'sh'?" (yes or no) -- measures: token judgment, reaction time
- Visual analog scale (click along a line from 's' to 'sh') -- measure: scale rating
- Acoustic analysis (as in [1]) -- measure: spectral peak
Because listeners judged each token in both yes/no tasks, we could assess listener consistency: a token judged "yes" for "Is it 's'?" and "no" for "Is it 'sh'?" counts as a consistent 's', the reverse pattern as a consistent 'sh', "yes" to both as both, and "no" to both as neither.

Analysis
- Removed reaction-time and scale-rating outliers
- Took the log of reaction time
- Normalized each listener's scale ratings
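To make the Design and Analysis steps concrete, the sketch below shows one way they could be implemented in Python with numpy and pandas. It is an illustration only: the wide table layout (one row per listener and token) and its column names are hypothetical, and the +/-2.5 SD outlier rule, the within-listener z-scoring, and the simple FFT peak pick are assumptions, not the procedures actually used in this study or in [1].

    # Illustrative sketch only; table layout, column names, and thresholds are assumptions.
    import numpy as np
    import pandas as pd

    def spectral_peak_hz(signal: np.ndarray, sr: int) -> float:
        """Frequency (Hz) carrying the most energy in a token's magnitude spectrum.
        A simple windowed-FFT peak pick; the estimator used in [1] may differ."""
        spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
        return float(freqs[np.argmax(spectrum)])

    def zscore(x: pd.Series) -> pd.Series:
        return (x - x.mean()) / x.std()

    def preprocess(df: pd.DataFrame) -> pd.DataFrame:
        """df: one row per listener x token, with hypothetical columns
        'listener', 'token', 'rt_is_s', 'rt_is_sh' (ms),
        'resp_is_s', 'resp_is_sh' (bool), and 'rating' (0-1 scale click)."""
        out = df.copy()

        # Take the log of reaction time in each timed task.
        out["log_rt_is_s"] = np.log(out["rt_is_s"])
        out["log_rt_is_sh"] = np.log(out["rt_is_sh"])

        # Remove reaction-time and scale-rating outliers; here, values more than
        # 2.5 SD from the listener's own mean (an assumed criterion).
        for col in ["log_rt_is_s", "log_rt_is_sh", "rating"]:
            z = out.groupby("listener")[col].transform(zscore)
            out.loc[z.abs() > 2.5, col] = np.nan

        # Normalize each listener's scale ratings (here, z-scored within listener).
        out["rating_norm"] = out.groupby("listener")["rating"].transform(zscore)

        # Code cross-task consistency from the two yes/no judgments:
        # yes to "Is it 's'?" and no to "Is it 'sh'?" -> consistent 's';
        # no / yes -> consistent 'sh'; yes / yes -> both; no / no -> neither.
        def consistency(row) -> str:
            if row["resp_is_s"] and not row["resp_is_sh"]:
                return "consistent s"
            if row["resp_is_sh"] and not row["resp_is_s"]:
                return "consistent sh"
            return "both" if row["resp_is_s"] else "neither"
        out["consistency"] = out.apply(consistency, axis=1)
        return out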
RESULTS
A. Visual Analog Scale Ratings: Listeners rated CI-'s' tokens closer to the midline than NH-'s' tokens (p < .01); there was no difference in ratings for 'sh' tokens. This mirrors the acoustic findings. Listeners also rated CI-'s' tokens with greater variability than NH-'s' tokens.
B. Linking Acoustics and Scale Rating: Spectral peak predicted scale rating for 'sh' only (p = .017), not for 's', for either hearing type.
C. Reaction Times: For the question "Is it 'sh'?", listeners responded more slowly to 's' tokens from both hearing-type groups (p < .01). This result was not significant for "Is it 's'?". Listeners did not respond slowest to CI-'s' tokens.
D. Linking Acoustics and Reaction Time: Listeners responded faster to 's' tokens with higher spectral peaks (more characteristic of an 's'; p < .01), and they showed this pattern for both questions, "Is it 's'?" and "Is it 'sh'?". Spectral peak did not predict reaction time for 'sh' tokens.
E. Listener Response Consistency Between Tasks: Listeners responded less consistently to CI tokens than to NH tokens (p < .01). Figures E1 and E2 suggest that CI tokens were less perceptually distinct than NH tokens.

SUMMARY
Addressing the hypotheses:
- Adults rated the CI-'s' productions closer to the middle of the scale than NH-'s' productions, mirroring the acoustic findings as predicted.
- Adults did NOT respond slowest to CI-'s'. Listeners did respond more slowly to 's' than to 'sh' in one task, and more slowly to 's' tokens with lower spectral peaks in both timed tasks. This discrepancy may suggest a lack of adequate power.
Additional findings:
- Listeners identified CI tokens less accurately and less consistently than NH tokens, suggesting that CI tokens may be less perceptually distinct.
- Reaction time and scale rating both had some relationship to spectral peak: spectral peak predicted scale rating for 'sh' and reaction time for 's'.

DISCUSSION
Scale ratings showed that the subtle acoustic differences between correct 's' productions by children with NH and children with CIs were perceptible to adults. For children with CIs, it is possible that clinicians should not rely only on transcription accuracy: a production may be correct, yet the child may not have a robust contrast. Using additional measures in the clinic, such as a visual analog scale, may be important. Future research is needed to determine whether this finding can explain the reduced intelligibility of the speech of children with CIs relative to that of children with NH.

REFERENCES
[1] Todd, A. E., Edwards, J. R., & Litovsky, R. Y. (2011). Production of contrast between sibilant fricatives by children with cochlear implants. J. Acoust. Soc. Am., 130.
[2] Newman, R. S., Clouse, S. A., & Burnham, J. L. (2001). The perceptual consequences of within-talker variability in fricative production. J. Acoust. Soc. Am., 109.

ACKNOWLEDGEMENTS
We would like to thank the listeners who participated in this study, as well as the children who produced the stimuli and their families. We would also like to thank the members of the Learning To Talk lab for their support and input. This work was supported by a Hilldale Undergraduate Research Fellowship to Sara Bernstein and by NIDCD grants to Jan Edwards and to Ruth Litovsky.