
1
Sampling and Pulse Code Modulation (ECE460, Spring 2012)

2
How to Measure Information Flow
Signals of interest are band-limited, which lends itself to sampling (i.e., discrete values). Study the simplest model: the Discrete Memoryless Source (DMS).
Alphabet: A = {a_1, a_2, ..., a_N}
Probability Mass Function: p_j = P(a_j), with Σ_{j=1}^{N} p_j = 1
The DMS is fully defined given its alphabet and PMF.
How to measure information flow, where a_1 is the most likely symbol and a_N the least likely:
1. The information content of output a_j depends only on the probability of a_j and not on its value. Denote this as I(p_j), called self-information.
2. Self-information is a continuous function of p_j.
3. Self-information increases as p_j decreases.
4. If p_j = p_{j1} p_{j2}, then I(p_j) = I(p_{j1}) + I(p_{j2}).
The only function that satisfies all these properties is
I(p_j) = -log p_j = log(1/p_j)
Unit of measure: log base 2 gives bits (b); log base e gives nats.

3
Entropy
The mean value of I(x_i) over the alphabet of source X with N different symbols is given by
H(X) = E[I(x_i)] = -Σ_{i=1}^{N} p_i log2 p_i
Note: 0 log 0 = 0.
H(X) is called entropy; it is a measure of the average information content per source symbol and is measured in b/symbol.
Source entropy H(X) is bounded: 0 ≤ H(X) ≤ log2 N.
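The definition above can be sketched directly in code; this is a minimal illustration, not course material, and the function name is mine:

```python
import math

def entropy(pmf):
    """Entropy in bits/symbol; terms with p = 0 contribute 0 (0 log 0 = 0)."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# The upper bound is attained by the uniform PMF: H = log2(N).
H_uniform = entropy([0.25] * 4)   # log2(4) = 2 bits/symbol
H_certain = entropy([1.0])        # a deterministic source carries no information
```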

4
Entropy Example
A DMS X has an alphabet of four symbols, {x_1, x_2, x_3, x_4}, with probabilities P(x_1) = 0.4, P(x_2) = 0.3, P(x_3) = 0.2, and P(x_4) = 0.1.
a) Calculate the average bits/symbol for this system.
b) Find the amount of information contained in the messages x_1, x_2, x_1, x_3 and x_4, x_3, x_3, x_2 and compare it to the average.
c) If the source has a bandwidth of 4000 Hz and is sampled at the Nyquist rate, determine the average rate of the source in bits/sec.
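The three parts can be worked numerically as follows; this is a sketch of the arithmetic, assuming Nyquist sampling at 2 × 4000 = 8000 samples/s for part (c):

```python
import math

p = {"x1": 0.4, "x2": 0.3, "x3": 0.2, "x4": 0.1}
I = {s: -math.log2(q) for s, q in p.items()}     # self-information, bits

# (a) average bits/symbol
H = sum(q * I[s] for s, q in p.items())

# (b) information in the two four-symbol messages, vs. the average 4*H
msg1 = sum(I[s] for s in ["x1", "x2", "x1", "x3"])
msg2 = sum(I[s] for s in ["x4", "x3", "x3", "x2"])

# (c) Nyquist rate for 4000 Hz bandwidth is 8000 symbols/s
rate = 8000 * H   # bits/sec
```

Message 1 carries slightly less than the four-symbol average 4H, message 2 slightly more, since it uses the rarer symbols.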

5
Joint & Conditional Entropy
Definitions
Joint entropy of two discrete random variables (X, Y):
H(X, Y) = -Σ_x Σ_y P(x, y) log2 P(x, y)
Conditional entropy of the random variable X given Y:
H(X|Y) = -Σ_x Σ_y P(x, y) log2 P(x|y)
Relationships:
H(X, Y) = H(X) + H(Y|X) = H(Y) + H(X|Y)
The entropy rate of a stationary discrete-time random process is defined by
H = lim_{n→∞} (1/n) H(X_1, X_2, ..., X_n)
Memoryless: H = H(X)
Memory: H = lim_{n→∞} H(X_n | X_1, ..., X_{n-1})

6
Example
Two binary random variables X and Y are distributed according to a joint distribution P(X, Y) (given in a table on the slide; not preserved in this transcription). Compute:
H(X)
H(Y)
H(X, Y)
H(X|Y)
H(Y|X)
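The slide's joint table did not survive transcription, so the sketch below assumes a hypothetical joint PMF purely to illustrate the mechanics and the chain-rule relationships:

```python
import math

# Hypothetical joint PMF P(X=x, Y=y) -- NOT the distribution from the slide.
P = {(0, 0): 0.5, (0, 1): 0.25, (1, 0): 0.125, (1, 1): 0.125}

def H(pmf):
    return -sum(p * math.log2(p) for p in pmf if p > 0)

# marginals
px = {x: sum(p for (a, b), p in P.items() if a == x) for x in (0, 1)}
py = {y: sum(p for (a, b), p in P.items() if b == y) for y in (0, 1)}

HX = H(px.values())
HY = H(py.values())
HXY = H(P.values())
HX_given_Y = HXY - HY     # chain rule: H(X,Y) = H(Y) + H(X|Y)
HY_given_X = HXY - HX     # chain rule: H(X,Y) = H(X) + H(Y|X)
```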

7
(Figure-only slide; no text transcribed.)

8
Source Coding
Which of these codes are viable? Which are uniquely decodable? Which are prefix-free?
A sufficient (but not necessary) condition for a code to be uniquely decodable is that no code word be the prefix of any other code word.
Which are instantaneously decodable? Those for which the boundary of the present code word can be identified by the end of the present code word, rather than by the beginning of the next code word.
Theorem: A source with entropy H can be encoded with arbitrarily small error probability at any rate R (bits/source output) as long as R > H. Conversely, if R < H, the error probability will be bounded away from zero, independent of the complexity of the encoder and the decoder employed.
R: the average code word length per source symbol.
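The prefix condition above is mechanical to check; a minimal sketch (the sorting trick works because, lexicographically, a prefix sorts immediately before its extensions):

```python
def is_prefix_free(code):
    """True if no code word is a prefix of another; such codes are
    uniquely and instantaneously decodable."""
    words = sorted(code)
    # after sorting, any prefix relation appears between adjacent words
    return all(not b.startswith(a) for a, b in zip(words, words[1:]))

ok = is_prefix_free(["0", "10", "110", "111"])   # a prefix-free code
bad = is_prefix_free(["0", "01", "011"])         # "0" is a prefix of "01"
```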

9
Huffman Coding
Steps to Huffman codes:
1. List the source symbols in order of decreasing probability.
2. Combine the probabilities of the two symbols having the lowest probabilities, and reorder the resultant probabilities; this step is called reduction. Repeat the procedure until there are two ordered probabilities remaining.
3. Start encoding with the last reduction, which consists of exactly two ordered probabilities. Assign 0 as the first digit in the code words for all the source symbols associated with the first probability; assign 1 to the second probability.
4. Now go back and assign 0 and 1 to the second digit for the two probabilities that were combined in the previous reduction step, retaining all assignments made in step 3.
5. Keep regressing this way until the first column is reached.
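The reduction steps above can be sketched with a priority queue; bits are prepended as groups merge, which reproduces the "encode from the last reduction backwards" procedure. The 0/1 choice within each merge is arbitrary, so code words (but not lengths) may differ from a hand-worked table:

```python
import heapq
from itertools import count

def huffman(pmf):
    """Huffman code words for {symbol: probability}."""
    tiebreak = count()   # breaks probability ties without comparing symbols
    heap = [(p, next(tiebreak), (sym,)) for sym, p in pmf.items()]
    heapq.heapify(heap)
    codes = {sym: "" for sym in pmf}
    while len(heap) > 1:
        p1, _, g1 = heapq.heappop(heap)   # lowest-probability group
        p2, _, g2 = heapq.heappop(heap)   # second lowest
        for s in g1:
            codes[s] = "0" + codes[s]     # prepend: later merges add earlier digits
        for s in g2:
            codes[s] = "1" + codes[s]
        heapq.heappush(heap, (p1 + p2, next(tiebreak), g1 + g2))
    return codes

# probabilities borrowed from the earlier entropy example (an assumption here);
# average length 1.9 b/symbol, slightly above the entropy of about 1.846
codes = huffman({"a": 0.4, "b": 0.3, "c": 0.2, "d": 0.1})
```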

10
Example
(Table of symbols, probabilities, and code words from the slide; values not transcribed.)

11
Quantization
Scalar quantization. For example: (quantizer figure not transcribed)

12
Quantization
Define a quantization function Q as Q(x) = x̂_i for x ∈ R_i, where the R_i are the quantization regions and the x̂_i the quantized values.
Is the quantization function invertible? No: every point in a region maps to the same quantized value, so information is lost.
Define the squared-error distortion for a single measurement:
d(x, x̂) = (x - x̂)²
Since X is a random variable, so are Q(X) and d(X, Q(X)), and the distortion D for the source is the expected value of this random variable:
D = E[d(X, Q(X))] = E[(X - Q(X))²]
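The expectation defining D is easy to estimate by simulation. The sketch below uses a hypothetical 8-level midrise quantizer with step Δ = 0.5 on a unit Gaussian; none of these parameter values come from the slides:

```python
import random

random.seed(0)

def quantize(x, delta=0.5, levels=8):
    """Sketch of a symmetric midrise quantizer: clamp to the outermost
    regions, then snap to the midpoint of the region containing x."""
    half = levels // 2
    i = max(-half, min(half - 1, int(x // delta)))
    return (i + 0.5) * delta

# Monte Carlo estimate of D = E[(X - Q(X))^2] for X ~ N(0, 1)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
D = sum((x - quantize(x)) ** 2 for x in xs) / len(xs)
```

The estimate combines granular noise (about Δ²/12 inside the covered range) with overload noise from samples beyond the outermost levels.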

13
Example
The source X(t) is a stationary Gaussian source with mean zero and the power-spectral density given on the slide. The source is sampled at the Nyquist rate, and each sample is quantized using the 8-level quantizer shown below. What is the rate R and the distortion D?

14
Signal-to-Quantization-Noise Ratio
In the example, we used the mean-squared distortion, or quantization noise, as the measure of performance. SQNR provides a better measure.
Definition: If the random variable X is quantized to Q(X), the signal-to-quantization-noise ratio (SQNR) is defined by
SQNR = E[X²] / E[(X - Q(X))²]
For signals, the quantization-noise power is
P_n = E[(X(t) - Q(X(t)))²]
and the signal power is
P_s = E[X²(t)]
Therefore:
SQNR = P_s / P_n
If X(t) is a strictly stationary random process, how is the SQNR related to the autocorrelation functions of X(t) and Q(X(t))?

15
Uniform Quantization
The example before was a uniform quantization where the first and last ranges were (-∞, -70] and (70, ∞); the remaining ranges went in steps of 20 (e.g., (-70, -50], ..., (50, 70]).
We can generalize for a zero-mean, unit-variance Gaussian by breaking the range into N symmetric segments of width Δ about the origin. The distortion is then given by
D = Σ_{i=1}^{N} ∫_{a_{i-1}}^{a_i} (x - x̂_i)² f_X(x) dx
with the boundaries a_i spaced Δ apart (a_0 = -∞, a_N = ∞).
Even N: boundaries fall at 0, ±Δ, ±2Δ, ..., and the quantized values at ±Δ/2, ±3Δ/2, ...
Odd N: quantized values fall at 0, ±Δ, ±2Δ, ..., with boundaries at ±Δ/2, ±3Δ/2, ...
How do you select the optimized value of Δ given N?
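One way to answer the closing question numerically: estimate D by Monte Carlo and grid-search Δ. The sketch below assumes an even N = 8 (the slide does not fix N) and a midrise layout; values and names are mine:

```python
import random

random.seed(1)
samples = [random.gauss(0.0, 1.0) for _ in range(50_000)]

def distortion(delta, n_levels, xs):
    """Monte Carlo estimate of D for a symmetric uniform quantizer with
    an even number of levels (outputs at ±Δ/2, ±3Δ/2, ...)."""
    half = n_levels // 2
    total = 0.0
    for x in xs:
        i = max(-half, min(half - 1, int(x // delta)))
        total += (x - (i + 0.5) * delta) ** 2
    return total / len(xs)

# crude grid search over Δ in [0.20, 1.19] for N = 8
best_D, best_delta = min(
    (distortion(d / 100, 8, samples), d / 100) for d in range(20, 120)
)
```

For a unit Gaussian with N = 8, the optimum is near Δ ≈ 0.59; a finer grid or a proper optimizer would sharpen the estimate.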

16
Optimal Uniform Quantizer
(Table of optimal Δ and resulting distortion versus N from the slide; values not transcribed.)

17
Non-Uniform Quantizers
The Lloyd-Max conditions for optimal quantization:
1. The boundaries of the quantization regions are the midpoints of the corresponding quantized values (nearest neighbors).
2. The quantized values are the centroids of the quantization regions.
Typically determined by trial and error! Note the optimal non-uniform quantizer table for a zero-mean, unit-variance Gaussian source.
Example (continued): How would the results of the previous example change if, instead of the uniform quantizer, we used an optimal non-uniform quantizer with the same number of levels?
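The two conditions can be iterated on a batch of samples (Lloyd's algorithm): alternate setting boundaries to midpoints and levels to region centroids until they stop moving. A sketch for an assumed N = 4 and a unit Gaussian source:

```python
import random

random.seed(2)
xs = sorted(random.gauss(0.0, 1.0) for _ in range(20_000))

levels = [-1.5, -0.5, 0.5, 1.5]   # arbitrary initial guess, N = 4
for _ in range(50):
    # condition 1: boundaries at midpoints of adjacent quantized values
    bounds = [(a + b) / 2 for a, b in zip(levels, levels[1:])]
    # partition the sorted samples into regions
    regions = [[] for _ in levels]
    j = 0
    for x in xs:
        while j < len(bounds) and x > bounds[j]:
            j += 1
        regions[j].append(x)
    # condition 2: quantized values at region centroids (sample means)
    levels = [sum(r) / len(r) for r in regions]
```

For this source the iteration should settle near the known optimal 4-level values, roughly ±0.45 and ±1.51, up to Monte Carlo noise.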

18
Optimized Non-Uniform Quantizer
(Table from the slide; values not transcribed.)

19
Waveform Coding
Our focus will be on Pulse-Code Modulation (PCM).
Uniform PCM:
Input range for x(t): [-x_max, x_max]
Length of each quantization region, with N = 2^v levels: Δ = 2 x_max / N
The quantized value x̂_i is chosen as the midpoint of each quantization region.
The quantization error X̃ = X - Q(X) is a random variable in the interval [-Δ/2, Δ/2].

20
Uniform PCM
Typically, N is very high and the variations of the input low, so that the error can be approximated as uniformly distributed on the interval [-Δ/2, Δ/2]. This gives the quantization noise as
E[X̃²] = Δ²/12 = x_max² / (3N²)
Signal-to-quantization-noise ratio (SQNR):
SQNR = E[X²] / E[X̃²] = 3N² E[X²] / x_max²
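A quick numerical check of the Δ²/12 approximation, assuming a hypothetical 8-bit uniform PCM on [-1, 1] with a uniformly distributed input:

```python
import random

random.seed(3)
x_max, v = 1.0, 8                 # full-scale range [-1, 1], 8 bits/sample
N = 2 ** v
delta = 2 * x_max / N

def q(x):
    i = min(N - 1, int((x + x_max) / delta))   # region index 0..N-1
    return -x_max + (i + 0.5) * delta          # midpoint of that region

xs = [random.uniform(-1, 1) for _ in range(100_000)]
noise = sum((x - q(x)) ** 2 for x in xs) / len(xs)
# noise should land close to delta**2 / 12
```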

21
Uniform PCM Example
What is the resulting SQNR for a signal uniformly distributed on [-1, 1] when uniform PCM with 256 levels is employed? What is the minimum bandwidth required?
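The SQNR part follows directly from the formula on the previous slide; a sketch of the arithmetic (the bandwidth part depends on the source bandwidth, which the example leaves symbolic):

```python
import math

# Uniform source on [-1, 1]: signal power E[X^2] = 1/3, x_max = 1
N = 256                               # i.e., v = 8 bits/sample
signal_power = 1 / 3
noise_power = (2 / N) ** 2 / 12       # delta^2 / 12
sqnr = signal_power / noise_power     # reduces to N^2 for this source
sqnr_db = 10 * math.log10(sqnr)       # about 48.2 dB
```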

22
How to Improve Results
A more effective method would be to make use of non-uniform PCM. Speech coding in the U.S. uses the µ-law compander, with a typical value of µ = 255:
g(x) = sgn(x) · ln(1 + µ|x|) / ln(1 + µ), for |x| ≤ 1
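A minimal sketch of the µ-law compressor and its inverse (the expander), following the standard formula with µ = 255; the function names are mine, not from the slides:

```python
import math

MU = 255.0

def mu_compress(x):
    """mu-law compander g(x) for |x| <= 1."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_expand(y):
    """Inverse of mu_compress."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

# Small amplitudes are boosted before uniform quantization, which is what
# improves SQNR for low-level speech; expansion undoes the mapping.
y = mu_compress(0.01)     # about 0.23: a 1% input uses ~23% of the range
```

Compressing, uniformly quantizing, then expanding is equivalent to a non-uniform quantizer with finer steps near zero.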
