Lecture 7 Source Coding and Compression Lossy Compression Concepts


1 Lecture 7 Source Coding and Compression Lossy Compression Concepts
Dr.-Ing. Khaled Shawky Hassan, Room: C3-222, ext: 1204

2 Lossy Compression Block Diagram

3 Lossy Compression
If the original source is discrete:
- Lossless coding: bit rate >= entropy rate
- One can further quantize the source samples to reach a lower rate
If the original source is continuous:
- Lossless coding would require an infinite bit rate!
- One must quantize the source samples to reach a finite bit rate
- The lossy coding rate is lower-bounded by the mutual information between the original source and any quantized version of it that satisfies the distortion criterion
Quantization methods:
- Scalar quantization (previously studied; revisited with more advanced terminology)
- Vector quantization (new method!)
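As a concrete instance of the rate bound above, a memoryless Gaussian source under mean-squared-error distortion has the standard closed-form rate-distortion function R(D) = (1/2) log2(sigma^2/D) for D <= sigma^2. A minimal sketch (the Gaussian case is an illustrative assumption; the slide states the bound only in general terms):

```python
import math

def gaussian_rd(variance, D):
    """Rate-distortion function of a memoryless Gaussian source under
    mean-squared-error distortion: R(D) = 0.5*log2(variance/D)."""
    if D >= variance:
        return 0.0                       # distortion target met with zero rate
    return 0.5 * math.log2(variance / D)

# Halving the allowed distortion costs exactly half a bit per sample:
r1 = gaussian_rd(1.0, 0.25)    # 1.0 bit/sample
r2 = gaussian_rd(1.0, 0.125)   # 1.5 bits/sample
```

Note how the rate grows without bound as D goes to 0: this is the "lossless coding of a continuous source needs an infinite bit rate" statement above.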

4 Layering of Source Coding of Analog Signals
Source coding includes:
- Formatting (input data)
- Sampling
- Quantization
- Symbols to bits (encoding)
- Lossless compression
Decoding includes (the mirror of the above):
- Lossless decompression
- Formatting (output)
- Bits to symbols (decoding)
- Symbols to a sequence of numbers (opposite of quantization)
- Sequence to waveform (reconstruction, opposite of sampling)

5 Layering of Source Coding of Analog Signals
Lookup table: changes the symbols back to a sequence of numbers (the opposite of the quantizer). Analog filter: interpolates between the sequence values (output numbers) to generate the waveform.

6 Formatting of Analog Data
To transform an analog waveform into a form compatible with a digital communication system, the following steps are taken:
- Sampling
- Quantization and encoding
- Base-band transmission: pulse-coded modulation (PCM) {send the quantization levels as pulses}

7 First: Sampling of the Continuous Signal
Sampling restricts the analog (continuous) signal to a limited set of samples separated by Ts. Sampling rate: fs = 1/Ts. What will be the frequency response after sampling???

8 Sampling in Frequency Domain
VERY IMPORTANT: sampling in the time domain is equivalent to a repetition in the frequency domain at fs and its multiples.

9 Sampling in Frequency Domain
The goal is to recover the band of the required signal (around "0") without overlap from the higher-frequency components of the repeated bands.

10 High Compression: Under-sampled Signal
If we need to sample the analog signal with fewer samples, we have to reduce the sampling rate fs = 1/Ts, i.e., increase Ts. What will be the frequency response after sampling??? Answer: aliasing will happen, as in the left-hand figure.

11 Aliasing Phenomenon
Aliasing: the phenomenon of a high-frequency component in the spectrum of the signal seemingly taking on the identity of a lower frequency in the spectrum of its sampled version.
To combat the effects of aliasing in practice:
- Before sampling: a low-pass "anti-aliasing filter" is used to attenuate those high-frequency components of the message signal that are not essential to the information being conveyed (filter bandwidth = fa).
- The filtered signal is sampled at a rate slightly higher than the Nyquist rate of the new filter bandwidth (fs >= 2fa, with fa < W, where W is the bandwidth of the original signal).
- Physically realizable reconstruction filter: the reconstruction filter is of a low-pass kind with a passband extending from -fa to fa (where fa is the filter bandwidth); thus the Nyquist formula applies.
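The aliased (apparent) frequency of an under-sampled tone can be computed by folding it back into the baseband [0, fs/2]. A small sketch of that folding rule (the tone frequencies below are illustrative):

```python
def alias_frequency(f, fs):
    """Apparent frequency (Hz) of a tone at f sampled at rate fs:
    fold f back into the baseband [0, fs/2]."""
    return abs(f - fs * round(f / fs))

# Illustrative tones at fs = 8000 Hz:
# 3000 Hz is below fs/2 and survives unchanged;
# 7000 Hz folds down and "takes on the identity" of 1000 Hz.
```

This is exactly why the anti-aliasing filter must remove energy above fs/2 before sampling: once the 7000 Hz tone has folded onto 1000 Hz, no receiver filter can separate the two.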

12 Low Compression: Over-sampled Signal
If we need to sample the analog signal with more samples, we have to increase the sampling rate fs = 1/Ts, i.e., decrease the duration Ts. This introduces more data to store, HENCE LESS COMPRESSION! However, it makes the receive filters easier to design (i.e., they need not be sharp).

13 Sampling Theorem
The sampling theorem for strictly band-limited signals of finite energy can be stated in two equivalent parts:
- Analysis: a band-limited signal of finite energy that has no frequency components higher than B hertz is completely described by specifying the values of the signal at instants of time separated by 1/(2B) seconds.
- Synthesis: a band-limited signal of finite energy that has no frequency components higher than B hertz is completely recovered from knowledge of its samples taken at the rate of 2B samples per second (using a low-pass filter of cutoff frequency B).
Nyquist rate (fs): the sampling rate of 2B samples per second for a signal bandwidth of B hertz.
Nyquist interval (Ts): 1/(2B) (measured in seconds).
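The synthesis part of the theorem can be sketched with Whittaker-Shannon (sinc) interpolation, truncated to a finite number of samples. The 100 Hz test tone, bandwidth B = 200 Hz, and truncation length below are illustrative assumptions:

```python
import math

def sinc(u):
    """Normalized sinc: sin(pi*u)/(pi*u)."""
    return 1.0 if u == 0 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(samples, Ts, t):
    """Whittaker-Shannon synthesis: x(t) = sum_n x(n*Ts) * sinc(t/Ts - n).
    `samples` maps sample index n to the sample value x(n*Ts)."""
    return sum(xn * sinc(t / Ts - n) for n, xn in samples.items())

# Illustrative: a 100 Hz cosine sampled at fs = 2B = 400 Hz,
# then evaluated between two sample instants from a truncated sum.
fs, f = 400.0, 100.0
Ts = 1.0 / fs
samples = {n: math.cos(2 * math.pi * f * n * Ts) for n in range(-2000, 2001)}
x_mid = reconstruct(samples, Ts, 1.25 * Ts)   # close to cos(2*pi*f*1.25*Ts)
```

Because the sum is truncated, the reconstruction is only approximate between samples; the error shrinks as more terms are kept.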

14 Types of Sampling
- Ideal
- Practical:
  - Natural sampling
  - Sample and hold (flat-top)

15 Ideal Sampling (or Impulse Sampling)
Consider the instantaneous sampling of the analog signal x(t): ideal sampling is accomplished by multiplying x(t) by a uniform train of impulses (spacing Ts). The train of impulse functions selects the sample values at regular intervals.

16 Ideal Sampling (or Impulse Sampling)
This is exactly as if we multiplied the continuous signal by a train of impulses separated by a distance Ts; the width of each impulse is ALMOST zero.

17 Practical Sampling
In practice we cannot perform ideal sampling:
- It is not practically possible to create a train of impulses.
- Thus a non-ideal approach to sampling must be used.
- We can approximate a train of impulses using a train of very thin rectangular pulses:

18 Natural Sampling
If we multiply x(t) by a train of rectangular pulses xp(t), we obtain a gated waveform that approximates the ideal sampled waveform, known as natural sampling or "gating".

19 Natural Sampling
- Each pulse in xp(t) has width Ts and amplitude 1/Ts.
- The top of each pulse follows the variation of the signal being sampled.
- Xs(f) is the replication of X(f) periodically every fs Hz.
- Xs(f) is weighted by Cn, the Fourier series coefficient.
- The problem with a naturally sampled waveform is that the tops of the sample pulses are not flat: it is not compatible with a digital system, since the amplitude of each sample has an infinite number of possible values.
- Another technique, known as flat-top sampling, is used to alleviate this problem; here the pulse is held at a constant height for the whole sample period.
- This technique is used to realize the Sample-and-Hold (S/H) operation: in S/H, the input signal is continuously sampled and then the value is held for as long as it takes for the A/D converter to acquire it.
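The S/H operation described above can be sketched as a zero-order hold: each sample value is acquired at a sample instant and held flat until the next one. The ramp test signal and oversampling factor below are illustrative choices:

```python
def sample_and_hold(x, fs, n_samples, oversample=4):
    """Flat-top (zero-order hold) sampling sketch: acquire x at each
    sample instant and hold the value constant until the next instant.
    Returns a dense staircase as (time, value) pairs."""
    Ts = 1.0 / fs
    out = []
    for n in range(n_samples):
        held = x(n * Ts)                     # value acquired at t = n*Ts...
        for k in range(oversample):          # ...held flat until (n+1)*Ts
            out.append((n * Ts + k * Ts / oversample, held))
    return out

# A ramp input makes the staircase obvious: constant within each Ts.
stair = sample_and_hold(lambda t: t, fs=10, n_samples=3)
```

Unlike natural sampling, the held value does not follow the signal within the pulse, which is what makes the output compatible with a quantizer.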

20 Flat-Top Sampling (Sample and Hold)
(Figures: time domain and frequency domain.)

21 Pulse-Amplitude Modulation (PAM)
The output of sampling is known as PAM: the amplitudes of regularly spaced pulses are varied in proportion to the corresponding sample values of a continuous message signal. The number of levels of the quantizer may equal the order M of M-PAM.

22 Coding example: PCM
Transmitted sequence (110, 111, 100, …): a discrete signal.

23 Quantization and Distortion
Dr.-Ing. Khaled Shawky Hassan, Room: C3-222, ext: 1204, Email: khaled.shawky@guc.edu.eg

24 Revisiting Quantizer
4/16/2017
Quantization:
• Reduces the number of distinct output values to a much smaller set.
• Main source of the "loss" in lossy compression.
• Three forms of quantization:
1. Uniform: midrise and midtread quantizers.
2. Nonuniform: companded quantizers.
3. Vector quantization.
From the R-D function: dR/dD = -(1/2)(1/D) log2(e), i.e., as D increases, |dR/dD| decreases.

25 Recall: Express everything in bits 0 and 1
Discrete finite ensemble: a, b, c, d → 00, 01, 10, 11; in general, k binary digits specify 2^k messages, and M messages need log2(M) bits.
Analog signal: example, an ADC with only 2 bits!
1) sample at every Ts
2) represent each sample value in binary (levels 00, 01, 10, 11)
Output: 00, 10, 01, 01, 11

26 Uniform Quantization
(Quantizer definition shown in the figure,) where i = 1, 2, …, 9 and Q(x) is the output of the quantizer with respect to the input x.

27 Quantization
Amplitude quantizing: mapping samples of a continuous-amplitude waveform to a finite set of amplitudes. (Figure: input/output staircase with the quantized values; quantities defined: average quantization noise power, signal peak power, and the ratio of signal power to average quantization noise power.)

28 Uniform Scalar Quantization
• A uniform scalar quantizer partitions the domain of input values into equally spaced intervals, except possibly at the two outer intervals.
– The output (reconstruction) value corresponding to each interval is taken to be the midpoint of the interval.
– The length of each interval is referred to as the step size, denoted by Δ = 2Xmax/M = 2Xmax/2^R.
• Two types of uniform scalar quantizers:
– Midrise quantizers have an even number of output levels.
– Midtread quantizers have an odd number of output levels, including zero as one of them.
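Both quantizer types can be sketched directly from the step size Δ. A minimal sketch (the clipping of the two outer intervals at ±Xmax is omitted for brevity):

```python
import math

def midrise(x, delta):
    """Uniform midrise quantizer: even number of levels, outputs at
    odd multiples of delta/2 (no zero output level)."""
    return delta * (math.floor(x / delta) + 0.5)

def midtread(x, delta):
    """Uniform midtread quantizer: odd number of levels, zero included."""
    return delta * math.floor(x / delta + 0.5)

# With delta = 1 these reduce to the special case used later in the deck:
# Q_midrise(x) = ceil(x) - 0.5 and Q_midtread(x) = floor(x + 0.5)
# (for inputs not exactly on a decision boundary).
```

Note the defining difference: midtread maps small inputs around zero to the output 0, while midrise always outputs at least ±Δ/2.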

29 Revisiting Quantizer
Uniform Scalar Quantizers: (a) Midrise, (b) Midtread.

30 Revisiting Quantizer
• For the special case where Δ (the step size) = 1, we can simply compute the output values for these quantizers as Q_midrise(x) = ceil(x) − 0.5 and Q_midtread(x) = floor(x + 0.5).
• Performance of an M-level quantizer: let B = {b0, b1, …, bM} be the set of decision boundaries and Y = {y1, y2, …, yM} the set of reconstruction (output) values.
• Suppose the input is uniformly distributed in the interval [−Xmax, Xmax]. The rate (in bits!!) of the quantizer is R = log2(M).

31 Quantization Error of a Uniformly Distributed Source
• Granular distortion: quantization error caused by the quantizer for a bounded input.
– To get an overall figure for granular distortion, note that the decision boundaries bi for a midrise quantizer are [(i − 1)Δ, iΔ], i = 1..M/2, covering the positive data X (with a mirrored half for negative X values).
– The output values yi are the midpoints iΔ − Δ/2, i = 1..M/2.
• Since the reconstruction values yi are the midpoints of each interval, the quantization error must lie in [−Δ/2, Δ/2]. For a uniformly distributed source, the graph of the quantization error is shown in the next figure.

32 Quantization Error of a Uniformly Distributed Source
(Figure: quantization error of a uniformly distributed source, with a zoom on the error of one level.)
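The granular error pictured above can be checked numerically: for an input uniform over one bin, the error sweeps [−Δ/2, Δ/2] uniformly and its mean square comes out to Δ²/12. A sketch using a simple midpoint-rule average:

```python
def granular_mse(delta, n=100000):
    """Average squared quantization error for an input uniformly
    distributed over one quantizer bin: the error e = x - Q(x) sweeps
    [-delta/2, delta/2] uniformly (midpoint-rule average)."""
    total = 0.0
    for k in range(n):
        e = -delta / 2 + delta * (k + 0.5) / n   # one point per sub-interval
        total += e * e
    return total / n

# Agrees with the closed form delta**2 / 12 for any step size.
```

Since the error is periodic in the input with period Δ, averaging over a single bin gives the distortion of the whole granular region.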

33 Quantization example
(Figure: amplitude x(t) vs. time t; x(nTs): sampled values; xq(nTs): quantized values; decision boundaries and quantization levels marked; Red − Green = quantization error; Ts: sampling time; below the plot, the PCM codeword and the PCM TX sequence.)

34 Uniform Quantization of a Non-uniformly Distributed Source
Many sources are non-uniformly distributed and even unbounded. This results in two kinds of errors:
- Granular noise: quantization error in the inner bins.
- Overload noise: quantization error in the outermost bins.

35 Uniform Quantization of a Non-uniformly Distributed Source
(Figure.)

36 Example 2: Gaussian PDF Quantization
Let X ~ N(0, σ²). One-bit quantization: each output level is the conditional mean of its half of the PDF. With more bits, the solution is less obvious.
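The one-bit level for the Gaussian case can be estimated by Monte Carlo: the conditional mean E[X | X > 0] has the known closed form σ·sqrt(2/π) ≈ 0.798σ. A sketch (the sample size and seed are arbitrary choices):

```python
import math
import random

def one_bit_level(sigma, n=200000, seed=1):
    """Monte-Carlo estimate of the optimal 1-bit reconstruction level
    for X ~ N(0, sigma^2): the conditional mean E[X | X > 0]."""
    random.seed(seed)
    # |X| is half-normal with the same mean as the positive half of X
    total = sum(abs(random.gauss(0.0, sigma)) for _ in range(n))
    return total / n

# Closed form for comparison: sigma * math.sqrt(2 / math.pi) ~ 0.798*sigma
```

So the 1-bit quantizer outputs ±0.798σ, not ±σ/2 as a naive uniform split of the range would suggest.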

37 How to minimize the Quantization Error?
Let us denote the distortion by the MSE; for a given set of thresholds {bi} it is:
MSE = sum over i of the integral over [b(i-1), bi] of (x − yi)² fX(x) dx
Given M (the number of levels), the optimal bi and yi that minimize the MSE satisfy the Lagrangian condition: yi is the centroid of the interval [b(i-1), bi] (the conditional mean).

38 How to minimize the Quantization Error?
Now, differentiating w.r.t. bi gives: bi = (yi + y(i+1))/2, i.e., each decision boundary is the midpoint of the two neighboring reconstruction values.
Summary of the Lloyd-Max conditions:
- yi is the centroid (conditional mean) of the interval [b(i-1), bi];
- bi is the midpoint of [yi, y(i+1)].

39 A special case of Lloyd-Max Quantizer
If f(x) = c (uniform), the Lloyd-Max quantizer reduces to the uniform quantizer: the centroid of each interval is its midpoint, so yi = (b(i-1) + bi)/2. In general, the coupled conditions must be solved with an iterative method!!

40 How to minimize the Quantization Error?
Scalar quantizer: the Lloyd algorithm gives a pdf-optimized quantizer, assuming the distribution is known. The Lloyd algorithm (SQ):
1. Start with an initial set of reconstruction values. Set k = 0, D(0) = 0 (distortion). Select a stopping threshold ε.
2. Find the decision boundaries.
3. Compute the distortion (MSE) D(k); if ε > |D(k) − D(k−1)|, stop; otherwise set k = k+1 (new iteration) and compute new reconstruction values from the PDF fX(x).
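The Lloyd iteration can be sketched on a discretized pdf, alternating the two Lloyd-Max conditions (nearest-level decision regions, then centroid updates). This version iterates a fixed number of times instead of testing the ε threshold, and the uniform test pdf is an illustrative assumption:

```python
def lloyd(pdf_x, pdf_p, init_levels, iters=50):
    """Lloyd iteration on a discretized pdf (probability mass p at point x):
    alternate (1) nearest-level decision regions and (2) centroid updates,
    i.e. the two Lloyd-Max conditions."""
    y = list(init_levels)
    for _ in range(iters):
        sums = [0.0] * len(y)
        mass = [0.0] * len(y)
        for x, p in zip(pdf_x, pdf_p):
            i = min(range(len(y)), key=lambda j: abs(x - y[j]))
            sums[i] += p * x
            mass[i] += p
        # move each level to the centroid (conditional mean) of its region
        y = [sums[i] / mass[i] if mass[i] > 0 else y[i]
             for i in range(len(y))]
    return sorted(y)

# Sanity check matching slide 39: for a (discretized) uniform pdf on [0, 1],
# a 2-level Lloyd quantizer converges to the uniform levels 0.25 and 0.75.
xs = [(k + 0.5) / 1000 for k in range(1000)]
ps = [1.0 / 1000] * 1000
levels = lloyd(xs, ps, [0.1, 0.5])
```

Even from the deliberately bad starting levels [0.1, 0.5], the iteration converges to the uniform-quantizer solution, illustrating the fixed point of the two conditions.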

41 Non-uniform Quantization

42 Non-uniform Quantization
- Non-uniform quantizers have unequally spaced levels.
- The spacing can be chosen to optimize the signal-to-noise ratio for a particular type of signal.
- Characterized by a variable step size: the step size depends on the signal size, i.e., if the signal amplitude is reduced, the signal still finds more levels in the middle and remains well represented.
- This does not require automatic scaling of the signal as in automatic gain control (AGC)!

43 Non-uniform Quantization (Companding)
Compressor → Uniform Quantizer → Expander

44 Non-uniform Quantization (Companding)
Compressor → Uniform Quantizer → Expander
The three stages combine to give the characteristic of a non-uniform quantizer, which is still a 45° line overall, but with more levels near the origin.

45 Non-Uniform Scalar Quantization (Companded Quantization)
• Companded quantization is nonlinear.
• As shown above, a compander consists of a compressor function G, a uniform quantizer, and an expander function G−1.
• The two commonly used companders are the μ-law and A-law companders.

46 Non-Uniform Scalar Quantization
Non-uniformly distributed source: the compressor function G. (Figure.)

47 Example of a Compressor Function
Fig. (a): signal at compressor input. Fig. (b): signal at compressor output.

48 Types of Companding
- Basically, companding introduces a nonlinearity into the signal: it maps a non-uniform distribution into something that more closely resembles a uniform distribution.
- A standard ADC with uniform spacing between levels can then be used after the compressor.
- The companding operation is inverted at the receiver.
- There are two standard logarithm-based companding techniques: the US standard, called µ-law companding, and the European standard, called A-law companding.

49 μ-Law Companding Standard (North & South America, and Japan)
Compressor (with the expander y⁻¹ as its inverse):
y = ymax · [ln(1 + μ|x|/xmax) / ln(1 + μ)] · sgn(x)
where x and y represent the input and output voltages and μ is a constant determined by experiment.
- In the U.S., telephone lines use companding with μ = 255.
- A 4 kHz speech waveform is sampled at 8,000 samples/sec; each sample is encoded with 8 bits (L = 256 quantizer levels), hence the data rate R = 64 kbit/sec.
- μ = 0 corresponds to uniform quantization.

50 A-Law Companding Standard (Europe, China, Russia, Asia, Africa)
y = ymax · sgn(x) · A|x/xmax| / (1 + ln A),             for 0 <= |x/xmax| <= 1/A
y = ymax · sgn(x) · (1 + ln(A|x/xmax|)) / (1 + ln A),   for 1/A <= |x/xmax| <= 1
where x and y represent the input and output voltages and A is a constant determined by experiment (the standard value is A = 87.6).
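The compressor/expander pair of the previous slides can be sketched for the μ-law case, normalized so that xmax = ymax = 1 (the test amplitudes below are arbitrary):

```python
import math

def mu_compress(x, mu=255.0):
    """mu-law compressor, normalized so |x| <= 1 and |y| <= 1."""
    return math.copysign(math.log(1 + mu * abs(x)) / math.log(1 + mu), x)

def mu_expand(y, mu=255.0):
    """mu-law expander: the exact inverse of mu_compress."""
    return math.copysign(((1 + mu) ** abs(y) - 1) / mu, y)
```

Small amplitudes are strongly boosted before uniform quantization (with μ = 255, an input of 0.01 compresses to about 0.23), which is what improves the SNR for quiet speech; as μ → 0 the map degenerates to the identity, i.e., uniform quantization.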

51 Example on Companding

52 Example on Companding (compressor G and expander G−1)

53 Example on Companding
Example 1: Suppose we had an input of 0.9.
- If we quantize directly with the uniform quantizer, we get an output of 0.5, resulting in a quantization error of 0.4.
- If we use the companded quantizer, we first apply the compressor mapping, taking the input value 0.9 to 1.8. Quantizing this with the same uniform quantizer gives an output of 1.5, with an apparent error of 0.3. The expander then maps this to the final reconstruction value of 0.75, which is 0.15 away from the input (<< 0.4).
- For all values in the interval [−1, 1] (in this case), we get a decrease in the quantization error.
Remember: Q_midr(x) = ceil(x) − 0.5; Q_midt(x) = floor(x + 0.5).

54 Example on Companding
Example 2: Suppose we have an input of 2.7.
- If we quantize directly with the uniform quantizer, we get an output of 2.5, with a corresponding error of 0.2.
- Applying the compressor mapping, the value 2.7 is mapped to 3.13, resulting in a quantized value of 3.5. Mapping this back through the expander, we get a reconstructed value of 3.25, which differs from the input by 0.55 (>> 0.2).
- For all values outside the interval [−1, 1] (in this case), we get an increase in the quantization error!!
Remember: Q_midr(x) = ceil(x) − 0.5; Q_midt(x) = floor(x + 0.5).

55 Example on Companding
Note: what is the effective input-output map of this quantizer?
- All inputs in the interval [0, 0.5] get mapped into the interval [0, 1], for which the quantizer output is 0.5, which in turn corresponds to the reconstruction value (after G−1) of 0.25.
- Similarly, inputs in the interval [0.5, 1] are mapped to [1, 2] (represented by 1.5), which after G−1 becomes 0.75.
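The numbers in Examples 1 and 2 are reproduced by a piecewise-linear compressor G that maps [0, 1] onto [0, 2] with slope 2 and [1, 4] onto [2, 4] with slope 2/3; this particular G is inferred from the example values, not stated explicitly on the slides:

```python
import math

def G(x):
    """Assumed compressor: [0,1] -> [0,2] (slope 2), [1,4] -> [2,4]
    (slope 2/3), extended as an odd function."""
    a = abs(x)
    g = 2 * a if a <= 1 else 2 + 2 * (a - 1) / 3
    return math.copysign(g, x)

def G_inv(y):
    """Expander: exact inverse of G."""
    a = abs(y)
    v = a / 2 if a <= 2 else 1 + 3 * (a - 2) / 2
    return math.copysign(v, y)

def midrise(x):
    """Uniform midrise quantizer with step size 1, as in the examples."""
    return math.floor(x) + 0.5

def companded_q(x):
    """Compressor -> uniform quantizer -> expander."""
    return G_inv(midrise(G(x)))

# Example 1: 0.9 -> G -> 1.8 -> Q -> 1.5 -> G_inv -> 0.75 (error 0.15)
# Example 2: 2.7 -> G -> 3.133... -> Q -> 3.5 -> G_inv -> 3.25 (error 0.55)
```

Running both examples through `companded_q` reproduces the slide's reconstruction values exactly, confirming the error decrease inside [−1, 1] and the increase outside it.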

56 Lossy Compression Example with Lena
Original (8 bits/pixel); 3 bits/pixel; 2 bits/pixel; 1 bit/pixel.

57 The Quantization Problem/Distortion
Consider first one of the simplest cases: quantization of a continuous-valued random variable (analog signal) to a discrete random variable (RV) (digital signal). The quantized version of the RV X is denoted X′(X). With R bits available, the quantized representation can take 2^R different values. The problem: find the optimum set of values (reproduction points, code-points) for the quantized version and the regions associated with them. Optimality is measured by a distortion measure, often the mean squared error E[(X − X′)²].

58 The basic elements of a PCM system
(Communication Systems, Simon Haykin; question in lecture!!)

59 Questions
1. What are the two compressors in a basic continuous-source transmitter, and how do we evaluate their errors?
2. What is the function of the low-pass filter before the sampling at the receiver?
3. What is the output format of each block in the communication chain (analog, discrete, binary)?
4. How can the sampler reduce the overall rate? State what will be problematic in this case. How should the low-pass filter (before the sampler) be selected in this case?
5. What will happen if the quantizer is uniform and the signal dynamic range decreases dramatically?
6. When will the repeaters (regenerators, amplifiers) be safe to use without automatic gain control?
7. What is the optimum quantizer? What is the simpler solution?
8. Describe what the regenerated signal will be if a DSP with memory is used.
9. Assume a Gaussian source ~N(0, σ²); how can you use a 2-bit (4-level) quantizer safely, and what will be the probability of the possible quantization error in your case?
10. Draw the basic lossy compression block diagram showing all compression (lossy/lossless) steps and define the output of each block.


