Overview: What is in a speech signal?


Lecture 4: Acoustics of Speech
Spoken Language Processing, Prof. Andrew Rosenberg

Overview
What is in a speech signal?
- Defining cues to phonetic segments and intonation
- Techniques to extract these cues

Phone Recognition
Goal: distinguishing one phoneme from another, automatically.
- ASR: Did the caller say "I want to fly to Newark" or "I want to fly to New York"?
- Forensic linguistics: Did that person say "Kill him" or "Bill him"?
Key questions: What evidence is available in the speech signal? How accurately and reliably can we extract it? What qualities make this difficult, and what makes it easy?

Prosody and Intonation
How things are said is sometimes critical, and often useful, for understanding.
- Forensic linguistics: "Kill him!" vs. "Kill him?"
- TTS: "Travelling from Boston?" vs. "Travelling from Boston."
What information do we need to extract from (or generate in) the speech signal? What tools do we have to do this?

Speech Features
What cues are important?
- Spectral features
- Fundamental frequency (pitch)
- Amplitude/energy (loudness)
- Timing (pauses, rate)
- Voice quality
How do we extract these? With digital signal processing tools and algorithms: Praat, Wavesurfer, Xwaves.

Sound Production
- Sound is pressure fluctuation in the air, caused by a voice, a musical instrument, a car horn, etc.
- Sound waves propagate through a medium: usually air, but also solids and liquids.
- They cause the eardrum (tympanum) to vibrate; the auditory system translates the vibration into neural impulses, which the brain interprets as sound.
- We represent sounds as change in pressure over time.

How "loud" are sounds?

Event              Pressure (µPa)   dB
Absolute silence   20               0
Whisper            200              20
Quiet office       2,000            40
Conversation       20,000           60
Bus                200,000          80
Subway             2,000,000        100
Thunder            20,000,000       120
*Hearing damage*   200,000,000      140

Voiced Sounds are (mostly) Periodic
Simple periodic waves (sine waves) are defined by:
- Cycle: one repetition of the pattern
- Period: the duration of one cycle
- Frequency: the number of cycles per time unit (usually per second); frequency in Hertz (Hz) = cycles per second = 1 / period
  e.g. a period of 0.0025 seconds gives 1/0.0025 = 400 Hz (400 cycles complete in a second)
- Zero crossing: a point where the waveform crosses the x-axis
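
The period-frequency relationship above fits in a few lines; a minimal sketch (Python here is an illustrative choice, not part of the original slides):

```python
# Frequency (in Hz) and period (in seconds) are reciprocals of each other.

def frequency_hz(period_seconds):
    """Cycles per second for a wave with the given period."""
    return 1.0 / period_seconds

def period_seconds(freq_hz):
    """Duration of one cycle for a wave at the given frequency."""
    return 1.0 / freq_hz

# The slide's example: a period of 0.0025 s corresponds to 400 Hz.
print(frequency_hz(0.0025))  # 400.0
```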

Voiced Sounds are (mostly) Periodic
Simple periodic waves (sine waves) are also defined by:
- Amplitude: the peak deviation of pressure from normal atmospheric pressure
- Phase: the timing of a waveform relative to a reference point

Phase Differences

Complex Periodic Waves
- Cyclic, but composed of multiple sine waves.
- Fundamental frequency (F0): the rate at which the largest pattern repeats; also the GCD of the component frequencies.
- Harmonics: the rates of the shorter component patterns (multiples of F0).
- Any complex waveform can be analyzed into its component sine waves, with their frequencies, amplitudes, and phases (Fourier theorem, in 2 lectures).
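
The "F0 is the GCD of the component frequencies" point can be checked directly; a small sketch (whole-Hz frequencies chosen for illustration):

```python
from functools import reduce
from math import gcd

def fundamental_frequency(component_freqs_hz):
    """F0 of a complex periodic wave: the GCD of its (integer) component frequencies."""
    return reduce(gcd, component_freqs_hz)

# Components at 400, 600, and 1000 Hz all realign every 1/200 s,
# so the largest pattern repeats 200 times per second.
print(fundamental_frequency([400, 600, 1000]))  # 200
```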

2 sine waves -> 1 complex wave

4 sine waves -> 1 complex wave
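
The combination shown on these two slides is just a sample-by-sample sum of the component sine waves; a minimal sketch (the component tuple format is an assumption for illustration):

```python
import math

def complex_wave_sample(components, t):
    """Sum sine components at time t; each component is (freq_hz, amplitude, phase_rad)."""
    return sum(amp * math.sin(2 * math.pi * freq * t + phase)
               for freq, amp, phase in components)

# Two components: a 100 Hz sine plus a quieter 200 Hz sine,
# sampled at 8 kHz for 10 ms (80 samples).
components = [(100, 1.0, 0.0), (200, 0.5, 0.0)]
samples = [complex_wave_sample(components, t / 8000) for t in range(80)]
```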

Power Spectra and Spectrograms
- The frequency components of a complex waveform are represented in the power spectrum, which plots the frequency and amplitude of each component sine wave.
- Adding a temporal dimension gives a spectrogram.
- Obtained via the Fast Fourier Transform (FFT), Linear Predictive Coding (LPC), ...
- Useful for analysis, coding, and synthesis.
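
Tools use an FFT for speed, but the underlying idea can be illustrated with a naive discrete Fourier transform (a sketch of the concept, not how any particular tool implements it):

```python
import cmath
import math

def power_spectrum(samples):
    """Naive DFT: magnitude of each frequency bin (an FFT computes the same result faster)."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) / n
            for k in range(n)]

# A 1 kHz sine sampled at 8 kHz for 8 samples: the energy lands in
# frequency bin 1 (1 kHz) and its mirror image, bin 7.
sr = 8000
samples = [math.sin(2 * math.pi * 1000 * t / sr) for t in range(8)]
spectrum = power_spectrum(samples)
```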

Example Power Spectrum
http://clas.mq.edu.au/acoustics/speech_spectra/fft_lpc_settings.html
Australian male /i:/ from "heed"; FFT analysis window 12.8 ms

Example Spectrogram (from Praat)

Terms
- Spectral slice: plots the amplitude at each frequency
- Spectrogram: plots amplitude and frequency over time
- Harmonics: components of a complex waveform that are multiples of the fundamental frequency (F0)
- Formants: the frequency bands that are most amplified in speech

Aperiodic Waveforms
Waveforms with random or non-repeating patterns:
- Random aperiodic waveforms: white noise, which has a flat spectrum (equal amplitude for all frequency components)
- Transients: sudden bursts of pressure (clicks, pops, lip smacks, door slams, etc.); a single impulse also has a flat spectrum
- In speech: voiceless consonants

Speech Waveforms
- Airflow from the lungs, plus vocal fold vibration, filtered by the resonances of the vocal tract, produces complex periodic waveforms.
- Pitch (range, mean, max): cycles per second of the lowest-frequency periodic component of the signal, the fundamental frequency (F0).
- Loudness: RMS amplitude; intensity in dB relative to a reference atmospheric pressure P0.
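
The slide's intensity formula did not survive transcription; the standard sound-pressure-level definition in dB, with reference pressure P0 = 20 µPa, together with RMS amplitude, can be sketched as:

```python
import math

P0 = 2e-5  # reference pressure, 20 micropascals (approximate threshold of hearing)

def rms_amplitude(samples):
    """Root-mean-square amplitude of a list of samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def intensity_db(pressure_pa):
    """Sound pressure level in dB relative to P0: 20 * log10(P / P0)."""
    return 20.0 * math.log10(pressure_pa / P0)

# Conversation-level pressure (about 0.02 Pa) comes out near 60 dB,
# matching the loudness table earlier in the lecture.
print(round(intensity_db(0.02)))  # 60
```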

Collecting Speech for Analysis
- Recording conditions: a quiet office, a sound booth, an anechoic chamber.
- Microphones convert sound into electrical current: oscillations of air pressure become oscillations of current.
- Analog devices (e.g. tape recorders) store these as a continuous signal.
- Digital devices (e.g. DAT recorders, computers) convert them to a digital signal (digitizing).

Digital Sound Representation
- A microphone is a mechanical eardrum, capable of measuring change in air pressure over time.
- Digital recording converts analog (smoothly continuous) changes in air pressure over time into a digital signal.
- The digital representation measures the pressure at a fixed time interval (the sampling rate) and represents each pressure measurement as an integer value (the bit depth).
- The analog-to-digital conversion results in a loss of information.
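
A sketch of those two steps, sampling and integer quantization (the signed-integer convention and function names are illustrative assumptions):

```python
def quantize(amplitude, bit_depth):
    """Map an analog amplitude in [-1.0, 1.0] to the nearest signed integer level."""
    max_level = 2 ** (bit_depth - 1) - 1   # e.g. 32767 for 16-bit audio
    return round(amplitude * max_level)

def sample_and_quantize(analog_signal, sampling_rate_hz, duration_s, bit_depth):
    """Evaluate a continuous signal at fixed intervals, then quantize each sample."""
    n = int(sampling_rate_hz * duration_s)
    return [quantize(analog_signal(t / sampling_rate_hz), bit_depth)
            for t in range(n)]

# Both steps lose information: the times between samples, and the
# amplitudes that fall between integer levels.
print(quantize(1.0, 16))  # 32767
```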

Waveform – “Name”

Analog to Digital Conversion
"Quantization" or "Discretization", with an example waveform (drawn by hand).

Analog to Digital Conversion
- Bit depth impact: 16-bit sound (CD quality) vs. 8-bit sound.
- Sampling rate impact: 44.1 kHz, 16 kHz, 8 kHz, 4 kHz.
[Examples]

Nyquist Rate
- At least 2 samples per cycle are necessary to capture the periodicity of a waveform at a given frequency; e.g. a 100 Hz wave needs 200 samples per second.
- Nyquist frequency: the highest frequency that can be captured with a given sampling rate (half the sampling rate).
- e.g. an 8 kHz sampling rate (telephone speech) can capture frequencies up to 4 kHz.
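
The two relationships on this slide are simple enough to state directly in code (a trivial sketch):

```python
def nyquist_frequency(sampling_rate_hz):
    """Highest frequency a given sampling rate can capture: half the sampling rate."""
    return sampling_rate_hz / 2.0

def min_sampling_rate(highest_freq_hz):
    """Minimum sampling rate for a given frequency: at least 2 samples per cycle."""
    return 2.0 * highest_freq_hz

print(nyquist_frequency(8000))  # 4000.0 (telephone speech tops out at 4 kHz)
print(min_sampling_rate(100))   # 200.0  (a 100 Hz wave needs 200 samples/second)
```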

Sampling/Storage Trade-off
- Human hearing tops out around 20 kHz. Should we store 40 kHz samples?
- Telephone speech covers 300 Hz to 4 kHz (8 kHz sampling), but some speech sounds (e.g. fricatives, stops) have energy above 4 kHz: Peter, Teeter, Dieter.
- 44 kHz (CD quality) vs. 16-22 kHz: the latter is usually good enough to study speech (amplitude, duration, pitch, etc.). Golden ears.

Filtering
Acoustic filters block out certain frequencies of sounds:
- Low-pass filter: blocks high-frequency components.
- High-pass filter: blocks low frequencies.
- Band-pass filter: blocks both high and low frequencies around a band.
- Reject band (what to block) vs. pass band (what to let through).
What if the frequencies of two sounds overlap? Source separation.
[Include examples for filtering]
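
Real filter design is more involved (e.g. Butterworth or FIR designs in DSP toolkits), but as a crude stand-in, a moving average acts as a low-pass filter: averaging smooths out rapid (high-frequency) fluctuation while passing slow changes through. A sketch:

```python
def moving_average_lowpass(samples, window):
    """Crude low-pass filter: each output sample averages the last `window` inputs."""
    out = []
    for i in range(len(samples)):
        start = max(0, i - window + 1)
        out.append(sum(samples[start:i + 1]) / (i + 1 - start))
    return out

# A fast +1/-1 alternation (high frequency) is flattened;
# a constant signal (0 Hz) passes through unchanged.
print(moving_average_lowpass([1.0, -1.0, 1.0, -1.0], 2))  # [1.0, 0.0, 0.0, 0.0]
print(moving_average_lowpass([5.0, 5.0, 5.0, 5.0], 2))    # [5.0, 5.0, 5.0, 5.0]
```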

Estimating Pitch
Pitch tracking: estimate F0 over time, as a function of vocal fold vibration. How?
The autocorrelation approach:
- A periodic waveform is correlated with itself, since one period looks like another.
- Find the period by finding the "lag" (offset) between two windows of the signal at which the correlation of the windows is highest.
- The lag duration T is one period of the waveform; F0 is its inverse, 1/T.
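
The autocorrelation steps above can be sketched directly (the lag bounds and test signal are illustrative assumptions; real pitch trackers add windowing, normalization, and voicing decisions):

```python
import math

def estimate_f0(samples, sampling_rate_hz, min_lag=20, max_lag=400):
    """Find the lag with the highest self-correlation; F0 is the inverse of that period."""
    best_lag, best_corr = min_lag, float("-inf")
    for lag in range(min_lag, min(max_lag, len(samples) // 2)):
        corr = sum(samples[i] * samples[i + lag]
                   for i in range(len(samples) - lag))
        if corr > best_corr:
            best_lag, best_corr = lag, corr
    return sampling_rate_hz / best_lag  # F0 = 1 / T, with T = best_lag / sampling rate

# A 200 Hz sine at an 8 kHz sampling rate repeats every 40 samples,
# so the best lag is 40 and the estimate is 8000 / 40 = 200 Hz.
sr = 8000
signal = [math.sin(2 * math.pi * 200 * t / sr) for t in range(800)]
print(estimate_f0(signal, sr))  # 200.0
```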

Pitch Issues
- Microprosody effects of consonants (e.g. /v/).
- Creaky voice: no pitch track, or a noisy estimate.
Errors to watch for:
- Halving: the estimated lag is too long by one or more cycles, so the pitch estimate is too low (underestimation).
- Doubling: the estimated lag is too short, because the second half of the cycle is similar to the first; too many cycles are counted per second, so the pitch estimate is too high (overestimation).

Pitch Doubling and Halving
[Figure: a doubling error and a halving error]

Next Class Speech Recognition Overview Reading: J&M 9.1, 9.2, 5.5