Associations of behavioral parameters of speech emotional prosody perception with EI measures in adult listeners
Elena Dmitrieva, Kira Zaitseva, Alexandr Orlov (Institute of Evolutionary Physiology RAS); Victor Gelman (Medical Academy for Postgraduate Studies)
2 Introduction
Human behavior depends, to a certain extent, on the ability to express and comprehend speech emotional intonation. The cerebral mechanisms involved in speech production and perception process two interacting levels of information: emotional and semantic. Human emotion is communicated by nonverbal cues: facial expression and speech affective prosody (also called speech emotional intonation).
3 Introduction
Emotional intelligence (EI) is important when studying the mechanisms of the working brain (Vygotsky: a fuller understanding of all forms of cognitive and affective behaviour is possible only through the combined study of intellect and affect). The emotional intonation of a message may override or nullify its linguistic content. Scenario: "it is not what was said but how it was said".
4 Introduction
Although EI has received many-sided consideration in recent years, some aspects, such as the possible cerebral mechanisms of EI, are still not fully understood. Apparently, further progress requires studying the associations of EI components (measured by self-report methods) with the mental processes that cause individual differences in the perception and management of emotions.
5 The Aim of the Research
To further examine the associations between EI assessment data and behavioral parameters of speech emotional prosody perception in different acoustic (noise) environments. Well-developed psychophysical procedures (P. Simonov, V. Morozov, K. Scherer) for studying the ability to recognize emotional prosody in speech signals were applied.
6 Subjects and Procedure
Sample: 42 listeners of … years old. Recognition of emotional intonation was compared:
- across stimuli: utterances of neutral semantic meaning spoken with emotional (anger and joy) and neutral intonations (corpus: emotional portrayals);
- across acoustic backgrounds: noiseless and noisy ("white" noise applied to both ears).
7 Stimulus materials
(Spectrogram of a test sentence)
The stimuli: utterances of neutral semantic meaning spoken with emotional (anger and joy) and neutral intonations (corpus: emotional portrayals).
8 Procedure
The prepared test stimuli were presented to each listener in random order (48 trials: 4 speakers × 2 sentences × 3 emotional intonations × 2 signal-to-noise ratios). Test stimuli were presented through headphones.
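The 48-trial factorial design above can be sketched as a randomized trial list. This is a minimal illustration only; the speaker, sentence, intonation and noise-condition labels are placeholders, not the actual experimental materials.

```python
import itertools
import random

# Hypothetical factor levels standing in for the real experimental materials.
speakers = ["spk1", "spk2", "spk3", "spk4"]        # 4 speakers
sentences = ["sentence_A", "sentence_B"]           # 2 neutral-meaning sentences
intonations = ["anger", "joy", "neutral"]          # 3 emotional intonations
snr_conditions = ["noiseless", "white_noise"]      # 2 signal-to-noise conditions

# Full factorial crossing: 4 x 2 x 3 x 2 = 48 unique trials.
trials = list(itertools.product(speakers, sentences, intonations, snr_conditions))
random.shuffle(trials)  # present the trials in random order, as in the procedure
```

Using `itertools.product` guarantees every combination occurs exactly once before shuffling.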
9 Procedure and Flow Block of the Experiment
(Diagram: test signals are played to the subject through the left and right headphones; the subject's responses pass from the console to the researcher's computer.)
10 Procedure
The subject's task was to identify the emotional tone and press the corresponding button on the console (three response variants). The answer and the subject's reaction time were recorded in a protocol by a computer program. The psychophysiological features of emotional prosody perception were assessed by comparing the accuracy of recognition (AR) and the reaction time (RT). The listeners also completed a questionnaire (the EmIn test) assessing EI components.
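Logging the answer and reaction time for a single trial can be sketched as below. This is a hedged sketch: `present_stimulus` and `get_button_press` are hypothetical stand-ins for the actual playback and console-polling routines, which the slides do not describe.

```python
import time

def run_trial(present_stimulus, get_button_press):
    """Present one stimulus, then log the chosen button and the reaction time."""
    present_stimulus()                     # play the utterance through the headphones
    t0 = time.monotonic()                  # monotonic clock is robust to clock changes
    answer = get_button_press()            # blocks until one of the 3 buttons is pressed
    rt = time.monotonic() - t0             # reaction time (RT) in seconds
    return {"answer": answer, "rt_s": rt}
```

In practice the returned record would be appended to the protocol file for later AR/RT analysis.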
11 Data analysis
The obtained AR and RT data and the set of experimental parameters were submitted to ANOVA to reveal the main factors influencing AR and RT values. Pearson correlation coefficients were computed for the associations between AR, RT and the EI measures obtained by self-report with Lyusin's EmIn questionnaire.
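The Pearson correlation used for the AR/RT-versus-EI associations can be computed with a short pure-Python helper. The per-listener sample values below are invented for illustration; they are not the study's data.

```python
def pearson(x, y):
    """Pearson product-moment correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))      # co-deviation sum
    var_x = sum((a - mx) ** 2 for a in x)                     # sum of squared deviations
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-listener values: recognition accuracy (%) and one EI score.
ar_percent = [72, 85, 63, 90, 78, 69]
ei_score = [41, 52, 37, 55, 47, 40]
r = pearson(ar_percent, ei_score)
```

The same helper applied to each (behavioral measure, EI component) pair yields a matrix of coefficients like the one shown in the results.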
12 Results
The ANOVA on accuracy of recognition and reaction time revealed "signal-to-noise ratio", "type of emotion", "gender", "age" and their interactions to be significant (p < 0.01) factors influencing AR and RT values, and this held invariantly with regard to the linguistic information of the speech stimuli.
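As a hedged illustration of the kind of test reported above, the F statistic for a single factor (e.g. signal-to-noise ratio) can be computed by hand. The two groups of AR values below are invented, and the actual analysis was a multi-way ANOVA over all factors, not this one-way sketch.

```python
def one_way_anova_f(*groups):
    """F statistic of a one-way ANOVA: between-group vs. within-group variance."""
    all_values = [v for g in groups for v in g]
    n_total, k = len(all_values), len(groups)
    grand_mean = sum(all_values) / n_total
    # Between-group sum of squares, weighted by group size (df = k - 1).
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = n_total - k).
    ss_within = sum(sum((v - sum(g) / len(g)) ** 2 for v in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Hypothetical AR values (%) under the two acoustic backgrounds.
ar_noiseless = [88, 92, 85, 90]
ar_noise = [70, 75, 68, 73]
f_stat = one_way_anova_f(ar_noiseless, ar_noise)
```

A large F relative to the F distribution with (k-1, n-k) degrees of freedom indicates a significant factor, which is how the p < 0.01 criterion above would be evaluated.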
13 Age-related changes in perception of emotions of different valences (AR)
Under noise, the AR of the positive emotional intonation ("joy") decreases most strongly compared with the other intonations for listeners of all ages, whereas the recognition accuracy of "anger" does not change for the eldest listeners (65-79 years old).
(Figure panels: noiseless conditions; noise background.)
14 Age-related changes in perception of emotions (RT)
The changes in recognition time (RT) have been shown to depend on the age and gender of the listeners, likewise regardless of the signal's semantic content.
15 Matrices of linear correlation coefficients between EI components and emotional recognition
The analysis revealed certain relationships between listeners' AR and RT for emotional intonations and the components of EI obtained from the EmIn test.

           IrEI    IaEI    EP      EM      POE     EMO     PSE     EMOS    CEEO
           (мэи)   (вэи)   (пэ)    (уэ)    (мп)    (му)    (вп)    (ву)    (вэ)
AR, %       0.153   0.154   0.342  -0.008   0.339  -0.099   0.278  -0.151   0.271
RT, s       0.136   0.149   0.465  -0.125   0.401  -0.194   0.426  -0.113   0.001
AR w/n      0.303   0.348   0.524   0.156   0.456   0.037   0.476   0.055   0.315
RT w/n      0.066   0.110   0.392  -0.153   0.325  -0.234   0.369  -0.111  -0.039
AR noise   -0.059  -0.107   0.045  -0.193   0.118  -0.228  -0.020  -0.347   0.151
RT noise    0.199   0.173   0.515  -0.098   0.457  -0.146   0.460  -0.114   0.027
Age        -0.142  -0.355  -0.123  -0.385  -0.013  -0.233  -0.184  -0.417  -0.277
16 Conclusions
Individual changes in the behavioral parameters of emotional speech prosody perception found throughout the life span are linked with the valence of the emotional intonation, regardless of the semantic content of the stimuli, and depend on the acoustic background.
Correlations were found between EI components measured by the questionnaire and behavioral parameters of speech emotional prosody recognition in listeners of all ages.
The results obtained suggest that self-report tests reflect the features of EI rather adequately.
The small sample size prevents drawing final conclusions; further investigation of this point is necessary.