Digital Signal Processing

September 16, 2014

Analog and Digital In “reality”, sound is analog.
Variations in air pressure are continuous: the signal has an amplitude value at every point in time, and there are infinitely many possible air pressure values. Back in the bad old days, acoustic phonetics was strictly an analog endeavor.

Analog and Digital In the good new days, we can represent sound digitally in a computer. In a computer, sounds must be discrete: everything is a 1 or a 0. Computers represent sounds as sequences of discrete pressure values at separate points in time: a finite number of pressure values, at a finite number of points in time.

Analog-to-Digital Conversion
Recording sounds onto a computer requires an analog-to-digital (A-to-D) conversion. When computers record sound, they need to digitize analog readings in two dimensions: X: Time (this is called sampling) Y: Amplitude (this is called quantization)
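The two digitization steps can be sketched in a few lines of code (Python here; the function name and the 8000 Hz / 8-bit settings are illustrative choices, not anything prescribed by the slides):

```python
import math

def digitize(signal, duration, sample_rate, bits):
    """Digitize a continuous signal (a function of time) in two dimensions:
    X (time): take samples at a fixed sampling rate.
    Y (amplitude): round each sample to one of 2**bits discrete levels."""
    n_samples = int(duration * sample_rate)
    levels = 2 ** bits
    samples = []
    for n in range(n_samples):
        t = n / sample_rate                # sampling: discrete points in time
        amplitude = signal(t)              # continuous value in [-1, 1]
        # quantization: round to the nearest of 2**bits discrete levels
        level = round((amplitude + 1) / 2 * (levels - 1))
        samples.append(level / (levels - 1) * 2 - 1)
    return samples

# A 100 Hz sine tone, digitized at 8000 Hz with 8-bit quantization
tone = lambda t: math.sin(2 * math.pi * 100 * t)
digital = digitize(tone, duration=0.01, sample_rate=8000, bits=8)
print(len(digital))   # 80 samples for 10 ms at 8000 Hz
```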

Thanks to Chilin Shih for making these materials available.
Sampling Example

Sampling Rate Sampling rate = frequency at which samples are taken.
What’s a good sampling rate for speech? Typical options include 11025 Hz, 22050 Hz, and 44100 Hz, and sometimes even higher rates. A higher sampling rate preserves sound quality; a lower sampling rate saves disk space (which is no longer much of an issue). Young, healthy human ears are sensitive to sounds from 20 Hz to 20,000 Hz.

One Consideration The Nyquist Frequency
= highest frequency component that can be captured with a given sampling rate = one-half the sampling rate. Harry Nyquist (1889-1976) Problematic example: a 100 Hz sound sampled at a 100 Hz sampling rate.

Nyquist’s Implication
An adequate sampling rate has to be at least twice as high as any frequency component in the signal that you’d like to capture. Example: a 100 Hz sound needs at least a 200 Hz sampling rate.
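A quick sketch of the problematic and adequate cases (using a cosine so the too-slow samples all land on the same point in the cycle; the helper name is mine):

```python
import math

def sample_tone(freq, sample_rate, n_samples):
    """Sample a cosine tone of the given frequency at the given rate."""
    return [math.cos(2 * math.pi * freq * n / sample_rate)
            for n in range(n_samples)]

# 100 Hz tone sampled AT 100 Hz: every sample lands at the same point
# in the cycle, so the digital signal looks like a flat line.
too_slow = sample_tone(100, sample_rate=100, n_samples=8)

# 100 Hz tone sampled at 200 Hz (twice the frequency): the samples
# alternate, so the oscillation survives digitization.
adequate = sample_tone(100, sample_rate=200, n_samples=8)

print(too_slow)   # all 1.0 -- the 100 Hz component is lost
print(adequate)   # alternating +1.0, -1.0
```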

Sampling Rate Demo 44100 Hz 22050 Hz 11025 Hz (watch out for [s])
Speech should be sampled at at least Hz (although there is little frequency information in speech above 10,000 Hz) 44100 Hz 22050 Hz 11025 Hz (watch out for [s]) 8000 Hz 5000 Hz

Another Problem When the continuous sound signal completes more than one cycle in between samples, a phenomenon called aliasing occurs: the digital signal contains a low-frequency component that is not in the analog signal.
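The apparent frequency of an undersampled tone can be computed by "folding" it back into the range below the Nyquist frequency. A small sketch (the 900 Hz example and the function name are mine, not from the slides):

```python
import math

def alias_frequency(freq, sample_rate):
    """Frequency that a pure tone appears to have after sampling:
    it folds down into the range [0, sample_rate / 2]."""
    f = freq % sample_rate
    return min(f, sample_rate - f)

# A 900 Hz tone sampled at 1000 Hz completes almost a full cycle
# between samples, so it masquerades as a low 100 Hz tone:
print(alias_frequency(900, 1000))   # 100 -- a component not in the analog signal

# Demonstration: the sampled 900 Hz cosine equals a sampled 100 Hz cosine.
fs = 1000
for n in range(10):
    high = math.cos(2 * math.pi * 900 * n / fs)
    low = math.cos(2 * math.pi * 100 * n / fs)
    assert abs(high - low) < 1e-9
```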

The Aliasing Solution: Filtering
Whenever sound is digitized, frequencies above the Nyquist frequency need to be filtered out of the end product. E.g., CDs digitize at a 44,100 Hz sampling rate… and filter out any components over 22,050 Hz. “Low-pass” filters allow low frequencies to pass through the filter and remove high frequencies from the signal. Cf. “high-pass” filters, which allow high frequencies to pass through the filter.

Low-Pass Filter in Action
Power spectrum of a 100 Hz + 1000 Hz combo: the filter passes the 100 Hz component, but not the 1000 Hz component.
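A minimal low-pass filter sketch, assuming a simple moving-average filter (not the filter any particular device actually uses): at an assumed 8000 Hz sampling rate, an 8-point average spans exactly one 1000 Hz cycle, so it nulls the 1000 Hz component while leaving 100 Hz nearly untouched:

```python
import math

def moving_average(samples, width):
    """A crude low-pass filter: replace each sample with the average of
    `width` consecutive samples. Slow variations (low frequencies)
    survive averaging; fast wiggles (high frequencies) cancel out."""
    return [sum(samples[i:i + width]) / width
            for i in range(len(samples) - width + 1)]

fs = 8000   # assumed sampling rate for the demo
n = 800
low = [math.sin(2 * math.pi * 100 * t / fs) for t in range(n)]
high = [math.sin(2 * math.pi * 1000 * t / fs) for t in range(n)]
combo = [a + b for a, b in zip(low, high)]

# An 8-point average spans exactly one 1000 Hz cycle at 8000 Hz, so the
# 1000 Hz component sums to zero; the 100 Hz component barely changes.
filtered = moving_average(combo, width=8)

rms = lambda xs: math.sqrt(sum(x * x for x in xs) / len(xs))
print(rms(combo), rms(filtered))   # filtered level is close to the 100 Hz tone alone
```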

Digital Dimension #2: Quantization
Each sample that is taken can record only a limited range of pressure values. This range is determined by the number of bits allotted to each sample. Remember: in computers, numbers are stored in binary format (sequences of ones and zeroes). Ex: 89 = 01011001 in 8-bit encoding. Typical sample sizes: 8 bits = 2^8 = 256 values; 12 bits = 2^12 = 4,096 values; 16 bits = 2^16 = 65,536 values.
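The bits-to-values arithmetic can be checked directly in code (purely illustrative):

```python
# Number of distinct amplitude values for common sample sizes
for bits in (8, 12, 16):
    print(f"{bits:2d} bits -> {2 ** bits:,} values")

# Samples are stored in binary; e.g., 89 in 8-bit encoding:
print(format(89, '08b'))   # 01011001
```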

Samples Go Small We lose information when the sample size is too small, given the same sampling rate. Sample size here = 2 bits = 2^2 = 4 values.

Quantization

Quantization Noise

Sample Size Demo 11k 16 bits; 11k 8 bits; 8k 16 bits; 8k 8 bits (telephone)
Note: CDs sample at 44,100 Hz and have 16-bit quantization. Also check out the bad and actedout examples in Praat.

Quantization Range With 16-bit quantization, we can encode 2^16 = 65,536 different possible amplitude values. Remember that I(dB) = 10 * log10 (A^2/r^2). Substitute the max and min amplitude values for A and r, respectively, and we get: I(dB) = 10 * log10 (65536^2/1^2) = 96.3 dB. Some newer machines have 24-bit quantization: 2^24 = 16,777,216 possible amplitude values. I(dB) = 10 * log10 (16,777,216^2/1^2) = 144.5 dB. This is bigger than the range of sounds we can listen to without damaging our hearing.
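The dynamic-range arithmetic above, as a small function (the function name is mine):

```python
import math

def dynamic_range_db(bits):
    """Intensity range between the largest (A = 2**bits) and smallest
    (r = 1) encodable amplitude values: I(dB) = 10 * log10(A**2 / r**2)."""
    return 10 * math.log10((2 ** bits) ** 2 / 1 ** 2)

print(round(dynamic_range_db(16), 1))   # 96.3
print(round(dynamic_range_db(24), 1))   # 144.5
```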

Problem: Clipping Clipping occurs when the pressure in the analog signal exceeds the quantization range during digitization. Check out the sylvester and normal examples in Praat.

A Note on Formats Digitized sound files come in different formats…
.wav, .aiff, .au, etc. Lossless formats digitize sound in the way I’ve just described; they only differ in terms of “header” information, specified limits on file size, etc. Lossy formats use algorithms to condense the size of sound files… and the sound file loses information in the process. For instance, the .mp3 format primarily saves space by eliminating some very high frequency information (which is hard for people to hear).

AIFF vs. MP3 .aiff format vs. .mp3 format (encoded at 128 kbps)
This trick can work pretty well…

MP3 vs. MP3 .mp3 format (encoded at 128 kbps) vs. .mp3 format
.mp3 conversion can induce reverb artifacts, and also cut down on temporal resolution (among other things).

Sound Digitization Summary
Samples are taken of an analog sound’s pressure value at a recurring sampling rate. This digitizes the time dimension in a waveform. The sampling rate needs to be twice as high as any frequency component you want to capture in the signal, e.g., 20,000 Hz for speech. Quantization converts the amplitude value of each sample into a binary number in the computer. This digitizes the amplitude dimension in a waveform. Rounding-off errors can lead to quantization noise. Excessive amplitude can lead to clipping errors.

The Digitization of Pitch
Praat can give us a representation of speech in which a blue line represents the fundamental frequency (F0) of the speaker’s voice, also known as a pitch track. How can we automatically “track” F0 in a sample of speech?

Pitch Tracking Voicing: Air flow through vocal folds
Rapid opening and closing due to the Bernoulli effect. Each cycle sends an acoustic shockwave through the vocal tract… which takes the form of a complex wave. The rate at which the vocal folds open and close becomes the fundamental frequency (F0) of a voiced sound.

Voicing Bars

Individual glottal pulses

Voicing = Complex Wave Note: voicing is not perfectly periodic.
There is always some random variation from one cycle to the next. How can we measure the fundamental frequency of a complex wave?

The basic idea: figure out the period between successive cycles of the complex wave. Fundamental frequency = 1 / period.

Measuring F0 To figure out where one cycle ends and the next begins…
The basic idea is to find how well successive “chunks” of a waveform match up with each other. One period = the length of the chunk that matches up best with the next chunk. Automatic Pitch Tracking parameters to think about: Window size (i.e., chunk size) Step size Frequency range (= period range)

Window (Chunk) Size Here’s an example of a small window

Window (Chunk) Size Here’s an example of a large(r) window

The initial window of the waveform is compared to another window (of the same duration) at a later point in the waveform.

Matching The waveforms in the two windows are compared to see how well they match up. Correlation = a measure of how well the two windows match.

Autocorrelation The measure of correlation =
the sum of the point-by-point products of the two chunks. The technical name for this is autocorrelation, because two parts of the same wave are being matched up against each other (“auto” = self).

Autocorrelation Example
Ex: consider window x, with n samples. What’s its correlation with window y? (Note: window y must also have n samples.) x1 = first sample of window x; x2 = second sample of window x; … xn = nth (final) sample of window x; y1 = first sample of window y, etc. Correlation (R) = x1*y1 + x2*y2 + … + xn*yn. The larger R is, the better the correlation.
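The formula as code (the window values below are illustrative, not taken from the slides):

```python
def correlation(x, y):
    """Autocorrelation measure from the slides: the sum of the
    point-by-point products of two equal-length windows."""
    assert len(x) == len(y)
    return sum(xi * yi for xi, yi in zip(x, y))

# A window matched against itself (a perfect match) scores highest;
# matched against its negation, it scores equally large but negative.
w = [0.8, 0.3, -0.2, -0.5, 0.4, 0.8]
print(correlation(w, w))                  # large positive
print(correlation(w, [-v for v in w]))    # large negative
```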

By the Numbers
Sample: 1 2 3 4 5 6
x: .8 .3 -.2 -.5 .4 .8
Sum of the point-by-point products of x and window y = -.48
These two chunks are poorly correlated with each other.

By the Numbers, part 2
Sample: 1 2 3 4 5 6
x: .8 .3 -.2 -.5 .4 .8
Sum of the point-by-point products of x and window z = 1.26
These two chunks are well correlated with each other (or at least better than the previous pair). Note: matching peaks count for more than matches close to 0.

Back to (Digital) Reality
These two windows are poorly correlated. The waveforms in the two windows are compared to see how well they match up. Correlation = a measure of how well the two windows match.

Next: the pitch tracking algorithm moves further down the waveform and grabs a new window

“step” The distance the algorithm moves forward in the waveform is called the step size

Matching, again The next window gets compared to the original.

Matching, again These two windows are also poorly correlated.

another “step” The algorithm keeps chugging and, eventually…

Matching, again These two windows are highly correlated.
The best match is found.

The fundamental period can be determined by calculating the length of time between the start of window 1 and the start of the (well-correlated) window 2.

Mopping Up Frequency = 1 / period
Q: How many possible periods does the algorithm need to check? Only those within the frequency range (default in Praat: 75 to 600 Hz).
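Putting the whole recipe together, a minimal sketch of an autocorrelation pitch tracker under the assumptions above (plain sum-of-products matching and a Praat-style 75-600 Hz search range; the parameter names and the synthetic pulse train are mine, and real trackers such as Praat's are considerably more refined):

```python
def track_f0(samples, sample_rate, fmin=75, fmax=600):
    """Minimal autocorrelation pitch tracker: compare an initial window
    against later windows at every candidate period (lag), and pick the
    lag whose chunk matches best. fmin/fmax bound the search, as in
    Praat's default 75-600 Hz range."""
    min_lag = int(sample_rate / fmax)     # shortest period to consider
    max_lag = int(sample_rate / fmin)     # longest period to consider
    window = max_lag                      # chunk size = one longest period
    base = samples[:window]
    best_lag, best_r = min_lag, float('-inf')
    for lag in range(min_lag, max_lag + 1):     # the "steps"
        chunk = samples[lag:lag + window]
        # autocorrelation: sum of the point-by-point products
        r = sum(a * b for a, b in zip(base, chunk))
        if r > best_r:
            best_lag, best_r = lag, r
    return sample_rate / best_lag         # F0 = 1 / period

# Idealized voicing: one (slowly fading) glottal pulse every 40 samples
# at an 8000 Hz sampling rate = a 200 Hz fundamental.
fs = 8000
wave = [0.99 ** t if t % 40 == 0 else 0.0 for t in range(400)]
print(track_f0(wave, fs))   # 200.0
```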
