SPOKEN LANGUAGE COMPREHENSION Anne Cutler Addendum: How to study issues in spoken language comprehension.


The psycholinguist's problem. We want to know HOW spoken language is comprehended, but comprehension is a mental operation, invisible to direct inspection, so we have to devise ways of looking at the process indirectly (mostly in the laboratory). These laboratory methods often involve measuring reaction time (RT): the speed with which a decision is made or a target detected. It is important to know how the laboratory task relates to the processes one wants to study (a "linking hypothesis"), and also to make sure that the task reflects what we want it to reflect, i.e. that there are no uncontrolled artifacts which might allow alternative interpretations.

The lectures so far. Lecture 1: speech is fast and continuous, but listeners somehow have to identify the words in it, because words are known entities, and identifying those known entities and putting them together is the only way to reach the (unknown) goal, i.e. the speaker's message. But words themselves are not unique: they resemble one another and can be embedded within one another, so speech can contain many spurious words that are not part of the message. Do listeners process only the words which the speaker uttered, or do other words become activated and have to be eliminated?
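As an illustration of how many spurious words can lurk in a stretch of speech, here is a minimal sketch in Python; the mini-lexicon is invented and ordinary spelling stands in for a phonemic transcription.

```python
# Minimal sketch (toy lexicon, spelling standing in for phonemes): list every
# lexicon word embedded anywhere in an utterance, i.e. the spurious candidates
# a listener might momentarily activate alongside the intended words.
LEXICON = {"ship", "in", "inquiry", "quiry", "win", "ink"}

def embedded_words(utterance: str, lexicon=LEXICON):
    """Return (start, word) for every lexicon word contained in the utterance."""
    hits = []
    for start in range(len(utterance)):
        for end in range(start + 1, len(utterance) + 1):
            if utterance[start:end] in lexicon:
                hits.append((start, utterance[start:end]))
    return hits

print(embedded_words("shipinquiry"))
# [(0, 'ship'), (4, 'in'), (4, 'inquiry'), (6, 'quiry')]
```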

The lectures, continued. Lecture 2: all the words which are simultaneously activated compete with one another. Lecture 3: although competition alone could explain how segmentation occurs, there are also additional segmentation procedures, and they differ across languages. Lecture 4: the process of activating words involves continuous evaluation of information in the speech signal; there is no necessary intermediate stage in which representations such as the syllable or mora are extracted. Lecture 5: the lawful phonological processes which result in, say, lean being spoken as leam, or petit being spoken sometimes with a final vowel and sometimes with a final consonant, neither disrupt nor facilitate processing.

LEXICAL DECISION (hear spoken items and decide: is that a real word?) Lexical decision is the simplest word-processing task. For instance, it can tell us whether words are recognised, or nonwords rejected, as soon as enough of the item has been heard that no other words are possible (the "uniqueness point"). Auditory lexical decision depends on item length: no response is possible before the end, because the item could always turn into a nonword after all!
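A minimal sketch of how a uniqueness point could be computed against a toy, invented lexicon; a real calculation would use phonemic transcriptions and a full lexicon.

```python
# Minimal sketch (hypothetical toy lexicon): the uniqueness point is the first
# segment at which a word's onset no longer matches the onset of any other word.
LEXICON = ["captain", "captive", "capital", "cat", "cap"]

def uniqueness_point(word: str, lexicon=LEXICON) -> int:
    """1-based index of the segment at which `word` diverges from all competitors."""
    competitors = [w for w in lexicon if w != word]
    for i in range(1, len(word) + 1):
        prefix = word[:i]
        if not any(w.startswith(prefix) for w in competitors):
            return i
    return len(word)  # not unique before its end (e.g. embedded in a longer word)

print(uniqueness_point("captain"))  # 5: 'capta' rules out captive, capital, ...
```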

CROSS-MODAL PRIMING (hear a spoken prime, see a visual target; decide: is the target a real word?) Cross-modal priming (CMP) is a way of looking at what is activated when a listener hears speech: we measure the RT to decide whether the visual target is a real word, and if that RT varies when the spoken prime varies, we have observed an effect of the spoken prime. Prime and target may be identical (e.g. give–GIVE) or related by association (give–TAKE); hearing "give" makes recognising TAKE easier.
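A minimal sketch of how the priming effect itself is computed; the RT values and the names rt_related / rt_unrelated below are fabricated for illustration.

```python
# Minimal sketch with made-up RTs (ms): the priming effect is simply the RT
# difference between the unrelated-prime and related-prime conditions.
from statistics import mean

rt_related   = [512, 498, 530, 505, 521]   # e.g. hear "give", see TAKE
rt_unrelated = [561, 549, 572, 555, 568]   # e.g. hear "lamp", see TAKE

priming_effect = mean(rt_unrelated) - mean(rt_related)
print(f"Priming effect: {priming_effect:.0f} ms")  # positive = facilitation
```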

CROSS-MODAL FRAGMENT PRIMING (decision: is the visual target a real word?) Likewise, hearing "octo-" makes recognising OCTOPUS easier. Whether the prime and target are identical or related by association is one factor; whether the prime is presented as a whole word or just as a fragment is a separate factor.

Eye-tracking experiments. Eye-tracking also looks at activation: listeners who hear "ha-" look at the picture of the ham and at the picture of the hamster, because both are potential candidates.
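A minimal sketch of the kind of measure such an experiment yields, the proportion of looks to each picture within a time bin; the fixation codes are fabricated.

```python
# Minimal sketch (fabricated fixation codes): in a visual-world experiment the
# dependent measure is the proportion of trials fixating each picture per time bin.
from collections import Counter

# one fixation code per trial, in a time bin shortly after "ha-" is heard
bin_fixations = ["ham", "hamster", "ham", "distractor", "hamster", "ham"]

counts = Counter(bin_fixations)
n = len(bin_fixations)
for picture in ("ham", "hamster", "distractor"):
    print(picture, round(counts[picture] / n, 2))
```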

GATING (hear a word in fragments of increasing size, and at each fragment guess what the word is) E.g. p- pr- pra- prac- pract- practi- practic- … The "gated" fragments can be of constant size (e.g. 50 ms, 100 ms, 150 ms, etc.), or they can systematically add more phonetic information (e.g. each fragment adds another phoneme transition: Fragment 1 runs to the middle of the 1st phoneme, Fragment 2 to the middle of the 2nd phoneme, Fragment 3 to the middle of the 3rd phoneme, and so on). Gating tells us what information can be used at a particular point, but a problem is that listeners sometimes stick with bad guesses…
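A minimal sketch of how constant-size gates could be cut from a recording; it assumes a mono waveform given as a list or array of samples, and word_waveform in the usage comment is a hypothetical variable.

```python
# Minimal sketch (assumes a mono waveform as a list/array of samples): cut
# constant-size gates of 50 ms, 100 ms, 150 ms ... from a recorded word.
def gate_fragments(samples, sample_rate=16000, step_ms=50):
    """Yield successively longer onsets of the waveform, in steps of `step_ms`."""
    step = int(sample_rate * step_ms / 1000)
    for end in range(step, len(samples) + step, step):
        yield samples[:end]

# usage: fragments = list(gate_fragments(word_waveform)); play them one at a time
```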

PHONEME DETECTION AND FRAGMENT DETECTION (hear speech, listen for target phoneme or fragment) These are also very simple tasks. They probably don’t directly reflect prelexical processing, but they can reflect how easy it is to extract information (below the word level) at a given point. Thus they might reflect segmentation, or how easy a preceding word or phoneme was to process, etc.

WORD SPOTTING (hear a nonsense item: is there a real word in it?) Word spotting is especially good for looking at word recognition in context: by minimising the context, we can look at the local effect of a given context on how hard or easy it is to find a word.
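A minimal sketch of the stimulus logic: checking whether a nonsense item contains a real word from a toy lexicon. The items fapple and vuffapple are purely illustrative, and spelling again stands in for phonemes.

```python
# Minimal sketch (toy lexicon): decide whether a nonsense item contains a real
# word, as participants must in a word-spotting experiment; the surrounding
# context can then be varied to make the word harder or easier to find.
LEXICON = {"apple", "mess", "sea"}

def spot_word(nonsense: str, lexicon=LEXICON):
    """Return the first (longest-at-each-position) embedded lexicon word, or None."""
    for start in range(len(nonsense)):
        for end in range(len(nonsense), start, -1):
            if nonsense[start:end] in lexicon:
                return nonsense[start:end]
    return None

print(spot_word("fapple"))     # 'apple' in a minimal consonant context
print(spot_word("vuffapple"))  # 'apple' in a syllabic context
```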

WORD RECONSTRUCTION (change a nonword into a real word by altering a single sound) This task was used to look at the processing of vowels versus consonants: which type of phoneme constrains word identity more strongly? It has also been used to look at whether rhythmic categories like the mora are used in recognising words.
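A minimal sketch of the reconstruction space: every real word reachable from a nonword by changing exactly one segment, with the changed segment classed as vowel or consonant. The lexicon and nonword are invented, and spelling stands in for phonemes.

```python
# Minimal sketch (toy lexicon, spelling for phonemes): find every real word
# reachable from a nonword by changing exactly one segment, and note whether
# the changed segment was a vowel or a consonant.
import string

LEXICON = {"kettle", "cattle", "battle", "bottle"}
VOWELS = set("aeiou")

def reconstructions(nonword: str, lexicon=LEXICON):
    hits = []
    for i, seg in enumerate(nonword):
        for repl in string.ascii_lowercase:
            candidate = nonword[:i] + repl + nonword[i + 1:]
            if repl != seg and candidate in lexicon:
                kind = "vowel" if seg in VOWELS else "consonant"
                hits.append((candidate, kind))
    return hits

print(reconstructions("kittle"))  # [('kettle', 'vowel')]
```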

PHONETIC CATEGORIZATION (hear artificially constructed sounds and decide what they are) Listeners normally hear speech sounds which are more or less good exemplars of their categories. It is possible to make an artificial continuum from one sound to another and present these sounds to listeners. What they report in the middle of the continuum is not some new, in-between sound, but a sudden switch from tokens of one category to tokens of the other: "categorical perception". The phonetic categorization task (developed for phonetic research) has also been useful in psycholinguistics; e.g. categorization functions can shift if one decision would make a word but the other would make a nonword (the Ganong effect)…
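A minimal sketch of how a category boundary, and a Ganong-style shift, can be read off identification data; the response proportions below are fabricated for illustration.

```python
# Minimal sketch with fabricated identification data: proportion of "g" responses
# at each step of a g-k continuum, in a context where "g" makes a word (gift-kift)
# vs. one where "k" makes a word (giss-kiss). The category boundary is where
# responses cross 50%; it should shift toward the lexically supported end.
def boundary(proportions_g):
    """First continuum step at which 'g' responses fall below 0.5."""
    for step, p in enumerate(proportions_g, start=1):
        if p < 0.5:
            return step
    return len(proportions_g)

gift_kift = [0.98, 0.95, 0.85, 0.60, 0.30, 0.10, 0.02]   # "g" end makes a word
giss_kiss = [0.97, 0.90, 0.70, 0.40, 0.15, 0.05, 0.01]   # "k" end makes a word

print(boundary(gift_kift), boundary(giss_kiss))  # boundary later when g- makes a word
```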

Other tasks. Blending task: construct a blend of two pseudo-names, using the first part of the first name and the second part of the second name. This has been used to look at what information in the signal is more vs. less important (e.g. is place of articulation underspecified?). Reversal task: a sort of language game in which the parts of a word (e.g. syllables, phonemes…) are reversed. This has been used to look at people's internal representations of words (e.g. are syllable boundaries clear? Are intervocalic consonants ambisyllabic?). Artificial language learning: hear nonsense input and try to learn the "words" (and other structures) it consists of. This has been used to look at how easily different sequences can be segmented, and whether listeners have expectations about what words will be like.
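One cue such artificial-language experiments often manipulate is how predictable adjacent syllables are. As a hedged illustration (the syllable stream below is fabricated and very short, so edge effects remain), here is a minimal sketch computing transitional probabilities; low-probability transitions are candidate word boundaries.

```python
# Minimal sketch (fabricated syllable stream): compute transitional probabilities
# between adjacent syllables; dips in probability are candidate word boundaries,
# one way a learner could segment an artificial language.
from collections import Counter

stream = "bidaku padoti golabu bidaku golabu padoti".replace(" ", "")
syllables = [stream[i:i + 2] for i in range(0, len(stream), 2)]

pair_counts = Counter(zip(syllables, syllables[1:]))
first_counts = Counter(syllables[:-1])

for (a, b), c in sorted(pair_counts.items()):
    print(f"{a}->{b}: {c / first_counts[a]:.2f}")  # within-word pairs score higher
```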

What the tasks told us. For instance: there is converging evidence of many kinds for multiple concurrently active words; after capt-, BOTH captain and captive are facilitated, and so on (N.B. different tasks can look at the same aspect of processing). For instance: segmentation relies on language-specific information. Here, again, different tasks look at the same aspect of processing: with word spotting we discover language-specific segmentation procedures; we can then predict that listeners will use these procedures in learning new languages as well, and test this with an artificial-vocabulary learning experiment.

Summary. Studying spoken language comprehension can't be done directly (we can't look into the brain), only indirectly, with the help of laboratory methods. This means that we have to translate the bigger questions we are interested in into questions which can be answered using our laboratory methods. For instance: can Finnish listeners use vowel harmony to help them find word boundaries? We turn that into a smaller question: is a boundary easier to find if the vowels on either side of it are disharmonious rather than harmonious? Then we can use word spotting: a real word, abutted to a nonsense context (harmonious or disharmonious).
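A minimal sketch of the harmony manipulation itself: classifying a context-plus-word item as harmonious or disharmonious using the standard Finnish vowel classes. The example items are illustrative, not taken from the actual experiment.

```python
# Minimal sketch: classify a word-spotting item as harmonious or disharmonious
# across the boundary, using the Finnish harmony classes (back a/o/u, front ä/ö/y,
# with e and i neutral). Context syllable and target word are example items.
BACK, FRONT, NEUTRAL = set("aou"), set("äöy"), set("ei")

def harmony_class(part: str):
    vowels = [c for c in part if c in BACK | FRONT | NEUTRAL]
    if any(v in BACK for v in vowels):
        return "back"
    if any(v in FRONT for v in vowels):
        return "front"
    return "neutral"

def harmonious(context: str, word: str) -> bool:
    c, w = harmony_class(context), harmony_class(word)
    return "neutral" in (c, w) or c == w

print(harmonious("ku", "pöly"))  # False: back-vowel context, front-harmony word
print(harmonious("ky", "pöly"))  # True:  front-vowel context, front-harmony word
```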