
1 PCM & DPCM & DM

2 Pulse-Code Modulation (PCM) :
In PCM each sample of the signal is quantized to one of 2^B amplitude levels, where B is the number of bits used to represent each sample. If the sampling rate is f_s, the rate from the source is B·f_s bps. The quantized waveform is modeled as x̃(n) = x(n) + q(n), where q(n) represents the quantization error, which we treat as additive noise.
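A minimal sketch of this model (assuming a mid-rise uniform quantizer over [−A_max, A_max]; the function names and test signal are illustrative, not from the slides):

```python
import numpy as np

def uniform_quantize(x, B, A_max=1.0):
    """Quantize samples x to 2**B levels spanning [-A_max, A_max]."""
    delta = 2 * A_max / 2**B                      # step size
    xq = delta * (np.floor(x / delta) + 0.5)      # round to nearest level (mid-rise)
    return np.clip(xq, -A_max + delta / 2, A_max - delta / 2)

fs, B = 8000, 8
n = np.arange(fs)
x = 0.5 * np.sin(2 * np.pi * 440 * n / fs)        # test signal
xq = uniform_quantize(x, B)
q = xq - x                                        # quantization error q(n)
print("bit rate:", B * fs, "bps")                 # B * fs bits per second
```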

3 Pulse-Code Modulation (PCM) :
The quantization noise is characterized as a realization of a stationary random process q in which each of the random variables q(n) has a uniform pdf over (−Δ/2, Δ/2), where the step size of the quantizer is Δ = 2·A_max / 2^B for a signal confined to [−A_max, A_max].

4 Pulse-Code Modulation (PCM) :
If A_max is the maximum amplitude of the signal, the mean square value of the quantization error is E[q²(n)] = Δ²/12 = (A_max²/3)·2^(−2B). Measured in dB, the mean square value of the noise is 10·log10(Δ²/12) = 10·log10(A_max²/3) − 6.02·B.
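The Δ²/12 figure follows directly from the uniform-pdf assumption; a short standard derivation (not reproduced on the slides):

```latex
\sigma_q^2 = E[q^2(n)] = \int_{-\Delta/2}^{\Delta/2} q^2\,\frac{1}{\Delta}\,dq
           = \frac{\Delta^2}{12},
\qquad
\Delta = \frac{2A_{\max}}{2^B}
\;\Rightarrow\;
\sigma_q^2 = \frac{A_{\max}^2}{3}\,2^{-2B}.
```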

5 Pulse-Code Modulation (PCM) :
The quantization noise therefore decreases by about 6 dB for each additional bit. If the headroom factor is h = A_max/σ_x, the ratio of the maximum amplitude to the rms signal level σ_x, then the signal-to-noise (S/N) ratio is given by (with A_max = 1): S/N = σ_x² / E[q²(n)] = 3·2^(2B)/h². In dB, this is 10·log10(S/N) ≈ 6.02·B + 4.77 − 20·log10(h).
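A quick numeric check of this relation, under the same assumptions (A_max = 1, signal rms equal to 1/h, noise power Δ²/12); the printed values should rise by about 6 dB per added bit:

```python
import numpy as np

def snr_db(B, h):
    """Theoretical PCM S/N in dB for word length B and headroom factor h."""
    delta = 2.0 / 2**B            # step size with A_max = 1
    noise = delta**2 / 12         # E[q^2(n)]
    signal = (1.0 / h)**2         # signal power with rms = A_max / h
    return 10 * np.log10(signal / noise)

for B in (8, 9, 10):
    print(B, round(snr_db(B, h=4), 1))
```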

6 Pulse-Code Modulation (PCM) :
Example: We require an S/N ratio of 60 dB, and a headroom factor of 4 is acceptable. The required word length then follows from 60 ≈ 6.02·B + 4.77 − 20·log10(4) ≈ 6.02·B − 7.3, giving B ≈ 11.2, so B = 12 bits. If we sample at 8 kHz, then PCM requires 12 × 8000 = 96,000 bps.

7 Pulse-Code Modulation (PCM) :
A nonuniform quantizer characteristic is usually obtained by passing the signal through a nonlinear device that compresses the signal amplitude, followed by a uniform quantizer. [Block diagram: Compressor → A/D → D/A → Expander; the compressor-expander pair is called a compander.]

8 Pulse-Code Modulation (PCM) :
A logarithmic compressor employed in North American telecommunications systems (the μ-law) has an input-output magnitude characteristic of the form |y| = ln(1 + μ|x|) / ln(1 + μ), 0 ≤ |x| ≤ 1, where μ is a parameter that is selected to give the desired compression characteristic.
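A short sketch of this characteristic and its inverse (assuming the commonly used value μ = 255; the function names are illustrative):

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """mu-law compressor: |y| = ln(1 + mu|x|) / ln(1 + mu), sign preserved."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse (expander) characteristic."""
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1, 1, 5)
print(mu_law_expand(mu_law_compress(x)))   # recovers x (up to rounding)
```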

9 Pulse-Code Modulation (PCM) :
The logarithmic compressor used in European telecommunications systems is called the A-law and is defined as |y| = A|x| / (1 + ln A) for 0 ≤ |x| ≤ 1/A, and |y| = (1 + ln(A|x|)) / (1 + ln A) for 1/A ≤ |x| ≤ 1, where A is the compression parameter.
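A corresponding sketch for the A-law characteristic (assuming the standard value A = 87.56; illustrative only):

```python
import numpy as np

def a_law_compress(x, A=87.56):
    """A-law compressor applied to |x| <= 1, sign preserved."""
    ax = np.abs(x)
    y = np.where(ax < 1.0 / A,
                 A * ax / (1 + np.log(A)),
                 (1 + np.log(A * ax)) / (1 + np.log(A)))
    return np.sign(x) * y

print(a_law_compress(np.array([-1.0, -0.01, 0.01, 0.5, 1.0])))
```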

10 DPCM : Consider a sampled sequence u(m), m = 0 to n − 1.
Let u′(m) be the value of the reproduced (decoded) sequence.

11 DPCM: At m = n, when u(n) arrives, a quantity ū(n), an estimate of u(n), is predicted from the previously decoded samples, i.e., ū(n) = Φ(u′(n−1), u′(n−2), …), where Φ(·) is the "prediction rule". Prediction error: e(n) = u(n) − ū(n).

12 DPCM : If e′(n) is the quantized value of e(n), then the reproduced value of u(n) is u′(n) = ū(n) + e′(n). Note: u(n) − u′(n) = e(n) − e′(n) = q(n), the quantization error in e(n).

13 DPCM CODEC: [Block diagram: the coder forms e(n) = u(n) − ū(n) at a summing node, quantizes it, and adds e′(n) back to ū(n) to update its predictor; the decoder, on the other side of the communication channel, uses an identical predictor loop to reconstruct u′(n).]
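A minimal sketch of such a codec (assuming a first-order predictor ū(n) = a·u′(n−1) and a simple uniform quantizer of the prediction error; the coefficient, step size, and test signal are illustrative):

```python
import numpy as np

def dpcm_encode(u, a=0.95, step=0.05):
    """DPCM coder: quantize the prediction error e(n) = u(n) - a*u'(n-1)."""
    e_q = np.zeros_like(u)
    u_rec = 0.0                              # previous decoded sample u'(n-1)
    for n in range(len(u)):
        pred = a * u_rec                     # prediction  ū(n)
        e = u[n] - pred                      # prediction error e(n)
        e_q[n] = step * np.round(e / step)   # quantized error e'(n)
        u_rec = pred + e_q[n]                # decoded sample u'(n), as at the decoder
    return e_q

def dpcm_decode(e_q, a=0.95):
    """DPCM decoder: u'(n) = a*u'(n-1) + e'(n)."""
    u_rec = np.zeros_like(e_q)
    prev = 0.0
    for n in range(len(e_q)):
        u_rec[n] = a * prev + e_q[n]
        prev = u_rec[n]
    return u_rec

u = 0.5 * np.sin(2 * np.pi * np.arange(200) / 50)
u_hat = dpcm_decode(dpcm_encode(u))
print(np.max(np.abs(u - u_hat)))             # stays within half a quantizer step
```

Because the coder predicts from its own decoded samples, the printed pointwise error stays within half a quantizer step, matching the remark on the next slide.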

14 DPCM: Remarks: The pointwise coding error in the input sequence is exactly equal to q(n), the quantization error in e(n). With a reasonable predictor the mean square value of the differential signal e(n) is much smaller than that of u(n).

15 DPCM: Conclusion: For the same mean square quantization error, e(n) requires fewer quantization bits than u(n). The number of bits required for transmission has been reduced while the quantization error is kept the same.

16 Delta Modulation (DM) :
Predictor: one-step delay. Quantizer: 1-bit quantizer.
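A minimal sketch built from these two elements (one-step-delay predictor, 1-bit quantizer; the step size and test signal are illustrative):

```python
import numpy as np

def dm_encode(u, step=0.05):
    """Delta modulation: transmit the sign of u(n) - u'(n-1), one bit per sample."""
    bits = np.zeros(len(u), dtype=int)
    u_rec = 0.0
    for n in range(len(u)):
        bits[n] = 1 if u[n] >= u_rec else 0    # 1-bit quantizer
        u_rec += step if bits[n] else -step    # one-step-delay predictor / integrator
    return bits

def dm_decode(bits, step=0.05):
    """Decoder: accumulate +/- step (an integrator)."""
    return np.cumsum(np.where(bits == 1, step, -step))

u = 0.5 * np.sin(2 * np.pi * np.arange(400) / 100)
print(np.max(np.abs(u - dm_decode(dm_encode(u)))))
```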

17 Delta Modulation (DM) :
Primary limitations of DM: (i) slope overload, which occurs in regions where the signal changes too quickly (the maximum slope the coder can track is (step size) × (sampling frequency)); (ii) granular noise, which occurs in regions where the signal is almost constant; (iii) instability to channel noise.

18 DM: [Block diagram: coder built around a unit-delay integrator feedback loop with a 1-bit quantizer; decoder is a unit-delay integrator.]

19 DM: Step size effect: (i) a step size that is too small relative to the sampling frequency causes slope overload; (ii) a step size that is too large causes granular noise.

20 Adaptive DM: [Block diagram: DM coder in which an adaptive function, fed through a unit delay, controls the step size.] This adaptive approach simultaneously minimizes the effects of both slope overload and granular noise.
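One common way to realize the adaptive function is to enlarge the step size when successive output bits agree (a steep signal segment) and shrink it when they alternate (a nearly constant segment); a sketch of that idea, with multipliers chosen arbitrarily for illustration:

```python
import numpy as np

def adaptive_dm_encode(u, step0=0.01, grow=1.5, shrink=0.66,
                       step_min=1e-3, step_max=0.5):
    """Adaptive DM: step size adapts to fight slope overload and granular noise."""
    bits, step, u_rec, prev_bit = [], step0, 0.0, 1
    for x in u:
        b = 1 if x >= u_rec else 0
        # same bit twice -> signal is outrunning us -> enlarge the step;
        # alternating bits -> signal nearly constant -> reduce the step
        step = np.clip(step * (grow if b == prev_bit else shrink),
                       step_min, step_max)
        u_rec += step if b else -step
        bits.append(b)
        prev_bit = b
    return bits
```

A matching decoder would repeat the same step-size recursion from the received bit stream, so no side information is needed.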

21 Vector Quantization (VQ)

22 Vector Quantization : Quantization is the process of approximating continuous-amplitude signals by discrete symbols. [Figure: partitioning of a two-dimensional space into 16 cells.]

23 Vector Quantization : The LBG algorithm, proposed by Linde, Buzo, and Gray, first computes a 1-vector codebook, then uses a splitting procedure on that codeword to obtain an initial 2-vector codebook, and continues the splitting process until the desired M-vector codebook is obtained.

24 Vector Quantization : The LBG Algorithm :
Step 1: Set M (the number of partitions or cells) = 1. Find the centroid of all the training data.
Step 2: Split each of the M codewords into two by finding two points that are far apart within its partition (using a heuristic method), and use these points as the centroids of the new 2M-entry codebook; set M = 2M.
Step 3: Use an iterative algorithm to reach the best set of centroids for the new codebook.
Step 4: If M equals the required VQ codebook size, STOP; otherwise go to Step 2.
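A compact sketch of these steps (using a small perturbation of each codeword for the split in Step 2, a common variant of the far-apart-points heuristic, and a k-means-style iteration for Step 3; constants and data are illustrative):

```python
import numpy as np

def lbg(train, codebook_size, n_iter=20, eps=0.01):
    """LBG: grow the codebook by splitting, refining with k-means after each split."""
    codebook = train.mean(axis=0, keepdims=True)          # Step 1: single centroid
    while codebook.shape[0] < codebook_size:
        # Step 2: split each codeword into two perturbed copies
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])
        # Step 3: iterative refinement of the centroids
        for _ in range(n_iter):
            d = ((train[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
            nearest = d.argmin(axis=1)                     # nearest-codeword rule
            for k in range(codebook.shape[0]):
                members = train[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)     # centroid update
    return codebook                                        # Step 4: size reached

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 2))      # 2-D training vectors
print(lbg(data, 16).shape)             # (16, 2) codebook
```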

