Presentation on theme: "Compression No. 1  Seattle Pacific University Data Compression Kevin Bolding Electrical Engineering Seattle Pacific University."— Presentation transcript:

1 Data Compression
- Kevin Bolding, Electrical Engineering, Seattle Pacific University

2 Compression
- Reducing the size of a message by removing redundant data
- Compression relies on some understanding of the structure of a message
- Only works on structured messages; random messages cannot be effectively compressed
- Because wireless systems (especially mobile phones) have limited capacity, compression is critical
  - For example, an average compression rate of 50% doubles system capacity

3 Lossless Compression
- Lossless compression transmits enough information to recreate the original with no defects
- Huffman coding:
  - Calculate statistics for the occurrence rates of the various symbols
  - Assign short codes to common symbols, long codes to uncommon symbols

4 Huffman Encoding
- Example: alphabet of {A,B,C,D,E,F,G,H}
- We could pick fixed-length codes: A=000, B=001, C=010, etc.
- Encoding of the string BACADAEAFABBAAAGAH is then 54 bits:
  001000010000011000100000101000001001000000000110000111
- What if we know that some symbols occur more frequently than others (A, followed by B, in this example)?
- Variable-length codes: A=0; B=100; C=1010; D=1011; E=1100; F=1101; G=1110; H=1111
- Encoding of the same string is now only 42 bits:
  100010100101101100011010100100000111001111
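The variable-length code on this slide can be derived automatically from the symbol counts. A minimal sketch of Huffman's algorithm in Python (not from the original slides; the exact codewords it produces may differ from the slide's, but the total length is the same because Huffman codes are optimal):

```python
import heapq
from collections import Counter

def huffman_code(text):
    """Build a Huffman code from the symbol frequencies observed in text."""
    freq = Counter(text)
    # Heap entries: (weight, unique tiebreak, {symbol: partial code}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two lightest subtrees, prepending one bit.
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        tiebreak += 1
        heapq.heappush(heap, (w1 + w2, tiebreak, merged))
    return heap[0][2]

msg = "BACADAEAFABBAAAGAH"
code = huffman_code(msg)
encoded = "".join(code[s] for s in msg)
print(len(msg) * 3, "bits fixed-length vs", len(encoded), "bits Huffman")
```

Running this on the slide's string reproduces the 54-bit versus 42-bit comparison.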

5 General Lossless Compression
- If the source type is known (e.g., English text), a pre-defined dictionary/index can be used
- If the source type is not known, the algorithm can build a dictionary with statistics on the fly
  - The dictionary must be transmitted with the message
  - ZIP file compression works this way
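Python's standard library exposes the DEFLATE algorithm used inside ZIP files (a sliding-window dictionary built on the fly, combined with Huffman coding) through the `zlib` module. A quick sketch showing the lossless round trip:

```python
import zlib

# Repetitive text compresses well because the dictionary keeps
# finding earlier copies of the same phrases.
text = b"the quick brown fox jumps over the lazy dog " * 50
packed = zlib.compress(text, level=9)
restored = zlib.decompress(packed)

print(len(text), "bytes ->", len(packed), "bytes")
assert restored == text  # lossless: the original is recovered exactly
```

Note the compressed ratio here is far better than typical English prose would achieve, because the input is artificially repetitive.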

6 Lossy Compression
- If the only purpose of the message is to be interpreted by a human, we can remove data not perceived by human senses
- Music: limits on hearing
  - Cannot hear frequencies above 20 kHz
  - Cannot detect low-amplitude frequencies in the presence of high-amplitude frequencies
- Voice: purpose is communication, not fidelity
  - Intelligibility matters most
  - Pleasing sound also matters
- Images and video: models of how humans see can help

7 Voice Compression
- Tremendous opportunity for compression:
  - We know the model for voice production
  - We know the model for reception
  - We probably know the rules of speech (language-specific)
  - Language has tremendous redundancy
- Speech is time-limited, with communication as its primary purpose; it is not generally saved and studied in more detail later

8 Voice Compression Techniques
- Companding
  - Use non-linear encoding to decrease dynamic range
  - Reduces the number of bits needed per sample
- Linear Predictive Coding
  - Use a model of the human vocal tract
  - Send parameters that reflect operations on the components of the vocal tract
- Codebooks
  - Form a book of "codes" that correspond to types of sounds commonly used in speech
  - Send the number of each code rather than the sound itself

9 Nonlinear Coding
- Linear coding samples using a linear scale (equally spaced levels, e.g., 0-15 for 4 bits)
  - Large amplitude: waveform crosses many levels (good)
  - Small amplitude: waveform crosses few levels (bad)
- Nonlinear encoding
  - More levels concentrated near the center level
  - Wider spacing at the edges
  - Compresses the dynamic range
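The "few levels for quiet signals" problem is easy to demonstrate. A small sketch (the 16-level quantizer and signal choices are my own illustration, matching the slide's 0-15 scale):

```python
import math

def uniform_quantize(x, levels=16):
    """Map x in [-1, 1] to one of `levels` equal-width bins (0..levels-1)."""
    return min(levels - 1, int((x + 1.0) / 2.0 * levels))

# A full-scale sine sweeps across many quantizer levels;
# a quiet sine (5% amplitude) touches only the two center levels.
big   = {uniform_quantize(math.sin(t / 100)) for t in range(628)}
small = {uniform_quantize(0.05 * math.sin(t / 100)) for t in range(628)}
print(len(big), "levels used at full scale vs", len(small), "when quiet")
```

The quiet waveform is effectively digitized with one bit of resolution, which is why nonlinear spacing (or companding, next slide) is used instead.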

10 Companding
- A form of non-linear encoding commonly used in telephone systems
- Before quantizing, reshape the signal by increasing the amplitude of low-amplitude signals
- In the US and Japan, mu-law is used; in Europe, A-law is used
- [Figure: original (top) and companded (bottom) waveforms]
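The standard mu-law curve (mu = 255 in North American and Japanese telephony) and its inverse can be sketched in a few lines; the function and variable names here are my own:

```python
import math

MU = 255.0  # mu-law constant used in US/Japanese telephony

def compand(x):
    """Boost low amplitudes before quantizing: sgn(x)*ln(1+mu|x|)/ln(1+mu)."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def expand(y):
    """Inverse companding, applied at the receiver."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

for x in (0.01, 0.1, 1.0):
    y = compand(x)
    print(f"{x:5.2f} -> companded {y:.3f} -> restored {expand(y):.3f}")
```

A 0.01-amplitude input is boosted above 0.2 before quantization, so it crosses many more quantizer levels, while full-scale inputs are left at full scale.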

11 Linear Predictive Coding
- The human vocal tract can be modeled by:
  - A buzzer with varying pitch and volume (the vocal cords)
  - A tube with resonances at certain frequencies (the vocal tract); the resonant frequencies are called formants
  - A filter (lips, tongue, etc.)
- LPC encoding:
  - Estimate the frequency and intensity of the original buzz
  - Find the closest match to the overall tone by selecting formants
  - Remove these from the signal; what is left over is the residue
  - Transmit the buzz frequency and intensity, the formants selected, and the residue
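The key idea, predicting each sample from the previous few and sending only what the predictor misses, can be shown on a toy signal. Real LPC estimates its coefficients from the signal's autocorrelation (e.g., via the Levinson-Durbin recursion); this sketch instead uses the analytically known coefficients for a pure tone, just to show how small the residue becomes when the model fits:

```python
import math

# A sampled pure tone obeys x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly,
# so an order-2 linear predictor captures it almost perfectly.
w = 0.3  # tone frequency in radians/sample (illustrative choice)
signal = [math.sin(w * n) for n in range(200)]

a1, a2 = 2 * math.cos(w), -1.0  # predictor coefficients for this tone
residue = [signal[n] - (a1 * signal[n - 1] + a2 * signal[n - 2])
           for n in range(2, len(signal))]
print("max residue magnitude:", max(abs(r) for r in residue))
```

The residue is near machine precision here; for real speech it is small but nonzero, which is what the transmitted residue (and, in CELP, the residue codebooks) carries.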

12 Codebooks
- Since speech is built from a limited number of basic sounds, it can be described in terms of only these basic sounds
- Make a codebook containing examples of all types of speech sounds
- Encoding: find the best-matching sound and send the code for that
- Decoding: look up the code and produce the sound
- Drawbacks:
  - The codebook may reflect "standard" speech, not this speaker in particular
  - The codebook may be excessively large
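Codebook encoding is nearest-neighbor matching: transmit an index instead of a waveform. A toy sketch with a hypothetical four-entry codebook (real speech codebooks hold hundreds of trained entries, not hand-picked snippets like these):

```python
# Hypothetical codebook of short waveform frames (illustrative values).
codebook = [
    (0.0, 0.0, 0.0, 0.0),     # silence
    (0.5, 1.0, 0.5, 0.0),     # rising-falling pulse
    (1.0, -1.0, 1.0, -1.0),   # high-frequency buzz
    (-0.5, -1.0, -0.5, 0.0),  # inverted pulse
]

def encode(frame):
    """Send only the index of the closest codeword (squared-error match)."""
    return min(range(len(codebook)),
               key=lambda i: sum((a - b) ** 2
                                 for a, b in zip(codebook[i], frame)))

def decode(index):
    """Receiver looks the index up and plays the stored sound."""
    return codebook[index]

frame = (0.4, 0.9, 0.6, 0.1)
idx = encode(frame)
print("sent index", idx, "-> reconstructed", decode(idx))
```

Both drawbacks on the slide are visible even here: the reconstruction is the codebook's version of the sound, not the speaker's, and finer matches require a larger book.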

13 Codebook Excited Linear Predictive Coders
- CELP coders use two steps:
  1. Encode the speech using an LPC. This gives codes for the frequency, intensity, and formants, plus a residue.
  2. Encode the residue using codebook techniques.
- Normally, two codebooks are used:
  - A fixed codebook of common residue signals
  - An adaptive codebook of recently used residue signals, which reduces the data rate for repetitive signals

