
1 Source Coding and Compression
Dr.-Ing. Khaled Shawky Hassan, Room: C3-222, ext: 1204, Email: khaled.shawky@guc.edu.eg
Lecture 10: Rate-Distortion

2 Rate Distortion Theory

3 Main Idea in a Less Formal Way
Rate distortion theory deals with the problem of representing information while allowing some distortion.
Idea: a less exact representation needs fewer bits.

4 Rate-Distortion Theory
o Addresses the problem of determining the minimal number of bits per symbol, as measured by the rate R, that should be communicated over a channel,
o so that the source (input signal) can be approximately reconstructed at the receiver (output signal) without exceeding a given maximum distortion D.
o Audio, speech, image, and video compression techniques have transforms, quantization, and bit-rate allocation procedures that capitalize on the general shape of rate-distortion functions.
o Rate-distortion theory was created by Claude Shannon in his foundational work on information theory.

5 Rate-Distortion Theory
o Rate: the number of bits per data sample to be stored or transmitted.
o Distortion: defined as the variance of the difference between input and output.
o Measured by (see the sketch below):
1. Hamming distance
2. Squared error
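As a small illustration (not from the slides; the sample values are made up), both measures can be computed directly:

```python
import numpy as np

x  = np.array([0, 1, 1, 0, 1])   # source samples (made-up example)
xq = np.array([0, 1, 0, 0, 1])   # reconstructed samples

# Hamming distortion: average number of symbol mismatches
d_hamming = np.mean(x != xq)

# Squared-error distortion (MSE): average of (x - x')^2
d_mse = np.mean((x - xq) ** 2)

print(d_hamming, d_mse)          # both 0.2 for this binary example
```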

6 Model
[Block diagram] An analog source Xa feeds a quantizer followed by a lossless encoder:
o fine quantizer: Xa -> X, lossless encoding at H(X) bits/source symbol
o rougher quantizer: Xa -> X', at H(X') < H(X) bits/source symbol
o rough quantizer: Xa -> X'' (less exact), at H(X'') << H(X) bits/source symbol

7 Rate Distortion Theory
Problem: find representations X' of X under a distortion constraint.


10 Distortion Function
A distortion function or distortion measure:
– A measure of the cost or loss in representing symbol x in the form of x': $d(x, x') \ge 0$.
– Bounded if $\max_{x,x'} d(x, x') < \infty$.
Examples:
– Error distortion (Hamming distance, max 1): d = 0 when x = x' and 1 when x != x'.
– Squared-error distortion: $d(x, x') = (x - x')^2$; the most popular one is MSE: simple, with connections to least squares (LS) estimation techniques.
Distortion for sequences: $d(x^n, x'^n) = \frac{1}{n}\sum_{i=1}^{n} d(x_i, x'_i)$.
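A minimal sketch (assuming the two per-symbol measures above; the sequences are made up) of the sequence distortion:

```python
import numpy as np

def sequence_distortion(x, xq, d):
    """Average per-symbol distortion d over a sequence pair."""
    return np.mean([d(a, b) for a, b in zip(x, xq)])

hamming = lambda a, b: 0.0 if a == b else 1.0   # bounded by 1
squared = lambda a, b: (a - b) ** 2

x  = [1.0, 2.0, 3.0, 4.0]
xq = [1.0, 2.5, 3.0, 3.5]                       # made-up reconstruction
print(sequence_distortion(x, xq, hamming))      # 0.5
print(sequence_distortion(x, xq, squared))      # (0 + 0.25 + 0 + 0.25)/4 = 0.125
```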

11 Rate Distortion vs. Distortion Rate Function
The rate distortion function R(D) is the minimum* of all rates R such that the pair (R, D) is in the rate-distortion region of the source for a given distortion D,
– i.e., rate as a function of distortion.
The distortion rate function D(R) is the minimum* of all distortions D such that the pair (R, D) is in the rate-distortion region of the source for a given rate R,
– i.e., distortion as a function of rate.
R(D) and D(R) are equivalent and carry the same information.
– R(D) is usually used instead of D(R).
*Strictly, it is the infimum, i.e., the greatest lower bound.

12 Source representation
Source X -> quantizer -> X'. Properties of a quantizer:
I(X; X') = H(X') – H(X'|X): minimizing I(X; X') lowers H(X') (increases the compression).
I(X; X') = H(X) – H(X|X'): since H(X) is fixed, minimizing I(X; X') raises H(X|X') (increases the distortion, but under control), where H(D) = H(X|X').
Hence: minimize I(X; X') under the distortion constraint D; since X is given, choose the quantizer (the mapping of X to X').
Condition: the average distortion between X and X' is <= D, with D = 0 iff X = X'.
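As an illustration (the joint pmf below is made up, not from the slides), I(X; X') can be evaluated from a joint distribution of source and quantizer output:

```python
import numpy as np

# Joint pmf p(x, x') of source X and quantizer output X' (made-up 2x2 example)
p_joint = np.array([[0.40, 0.10],
                    [0.05, 0.45]])

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

p_x  = p_joint.sum(axis=1)   # marginal of X
p_xq = p_joint.sum(axis=0)   # marginal of X'

# I(X; X') = H(X) + H(X') - H(X, X')
mi = entropy(p_x) + entropy(p_xq) - entropy(p_joint.flatten())
print(mi)                    # bits shared between source and quantized output
```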

13 Source representation
[Venn diagram of H(X) and H(X') overlapping in I(X; X'), with the remainders H(X|X') and H(X'|X)]
Source entropy = H(X).
Distortion amount = amount lost from the source = H(D), where H(D) = H(X|X').
I(X; X') = H(X) – H(X|X') -> the mutual information between source and quantized samples.

14 Source representation
I(X; X') = H(X) – H(X|X') = H(X') – H(X'|X)
H(X') = I(X; X') + H(X'|X), and H(X'|X) >= 0.
Hence, minimizing I(X; X') over all possible transitions X -> X' that give an average distortion <= D yields a lower bound R(D) on the rate needed to represent X.

15 Formal definition
The rate distortion function for X and X' is formally defined as
$R(D) = \min_{p(x'|x):\, E[d(X, X')] \le D} I(X; X')$
where D is the maximum average distortion. Hence, the problem is to find a channel from X to X' and its transition probabilities p(x'|x) such that the average distortion between X and X' is <= D.
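This minimization has no closed form for general sources; a minimal numerical sketch (assuming a discrete source; the Bernoulli test input is illustrative) is the classic Blahut-Arimoto iteration, which traces one (D, R) point per slope parameter s < 0:

```python
import numpy as np

def blahut_arimoto(p_x, d, s, n_iter=500):
    """One (D, R(D)) point of the rate-distortion curve for slope s < 0."""
    m = d.shape[1]
    q = np.full(m, 1.0 / m)                    # output marginal q(x')
    for _ in range(n_iter):
        # optimal test channel for the current q: p(x'|x) ~ q(x') * exp(s*d)
        w = q * np.exp(s * d)                  # shape (n, m)
        p_cond = w / w.sum(axis=1, keepdims=True)
        q = p_x @ p_cond                       # re-estimate output marginal
    D = np.sum(p_x[:, None] * p_cond * d)      # achieved average distortion
    R = np.sum(p_x[:, None] * p_cond * np.log2(p_cond / q))  # I(X; X')
    return D, R

# Bernoulli(0.5) source with Hamming distortion (test values)
p_x = np.array([0.5, 0.5])
d = np.array([[0.0, 1.0],
              [1.0, 0.0]])
print(blahut_arimoto(p_x, d, s=-2.0))          # one point on the R(D) curve
```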

16 R(D) for a Bernoulli Source
For a Bernoulli(p) source, 0 <= p <= 1, with Hamming distortion, the information rate distortion function is
$R(D) = H(p) - H(D)$ for $0 \le D \le \min\{p, 1-p\}$, and $R(D) = 0$ otherwise,
where H(·) is the binary entropy function and D is the maximum average distortion. Again, the problem is to find a channel from X to X' and its transition probabilities such that the average distortion between X and X' is <= D.

17 R(D) for a Bernoulli Source
Proof: Consider a binary source X ~ Bernoulli(p) with the Hamming distortion measure. Without loss of generality, we may assume that p < 1/2. We wish to calculate the rate distortion function $R(D) = \min_{p(x'|x):\, E[d] \le D} I(X; X')$.

18 R(D) for a Bernoulli Source
Let (+) denote modulo-2 addition, so X(+)X' = 1 is equivalent to X ≠ X'. Then, for any test channel with average distortion at most D,
$I(X; X') = H(X) - H(X|X') = H(p) - H(X \oplus X' \mid X') \ge H(p) - H(X \oplus X') \ge H(p) - H(D)$,
since conditioning cannot increase entropy and $\Pr\{X \oplus X' = 1\} = \Pr\{X \ne X'\} \le D$, with H(·) increasing on [0, 1/2]. The sketch below evaluates the resulting curve.
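A small sketch (binary entropy plus the closed form above; p = 0.3 is an arbitrary choice) for evaluating the Bernoulli rate-distortion function:

```python
import numpy as np

def binary_entropy(q):
    """H(q) in bits, with H(0) = H(1) = 0."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

def bernoulli_rd(D, p=0.3):
    """R(D) = H(p) - H(D) for 0 <= D <= min(p, 1-p), else 0."""
    D = np.asarray(D, dtype=float)
    return np.where(D < min(p, 1 - p),
                    binary_entropy(p) - binary_entropy(D),
                    0.0)

print(bernoulli_rd(0.0))   # = H(0.3), about 0.881 bits: the lossless limit
print(bernoulli_rd(0.1))   # about 0.412 bits
print(bernoulli_rd(0.3))   # 0: always guessing X' = 0 already achieves D = p
```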

19 Distortion Measure
Lower the bit rate R by allowing some acceptable distortion D of the signal.
[Plot of R(D) for a Bernoulli(p = 1/2) source: R = 1 bit at D = 0, decreasing to R = 0 at D = 0.5.]

20 Bit Allocation in Lossy Coding

21 Lossy Data Compression Codes
where the error-free capacity is C = (1/2) log2(1 + SNR).
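For illustration (the SNR values are made up), the capacity formula in code:

```python
import numpy as np

def awgn_capacity(snr_linear):
    """C = 0.5 * log2(1 + SNR) bits per (real) channel use."""
    return 0.5 * np.log2(1.0 + snr_linear)

for snr_db in (0, 10, 20):
    snr = 10 ** (snr_db / 10)
    print(snr_db, "dB ->", round(awgn_capacity(snr), 3), "bits/use")
```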

22 Constraint on channel capacity
a) Transmitting a source through a channel with capacity C, which could be less than the source entropy, is challenging.
b) As repeated a few times before: the channel coding theorem always introduces a certain error for rates Re above the channel capacity, i.e., Re > C, and allows almost error-free transmission for Re < C with good modern codes.
c) The amount of decoding error can go beyond control when one wants to convey a source that generates information at rates above the channel capacity C.
d) To keep the transmission error under control, another approach is to first reduce the data rate with a manageable distortion, and then transmit the source data at a rate Re less than the channel capacity.
e) With this approach, the error is introduced only at the (lossy) data compression step, since the error due to transmission over the channel can be made arbitrarily small.

23 The Gaussian Source Lossy Compression
A quantizer achieves compression in a lossy way; e.g., the Lloyd-Max quantizer minimizes the MSE distortion for a given rate (this is lossy!).
We need to know at least how many bits are required to achieve a certain amount of error: rate-distortion theory is the ANSWER.
Rate distortion function: the minimum average rate $R_D$ (bits/sample) required to represent this r.v. while allowing a fixed distortion D.
Example: for a Gaussian r.v. with variance $\sigma^2$ and MSE distortion D,
$R_D = \frac{1}{2}\log_2\frac{\sigma^2}{D}$ for $0 \le D \le \sigma^2$, and $R_D = 0$ otherwise.

24 The Gaussian Source Lossy Compression
Equivalently, the distortion rate function is $D(R) = \sigma^2\, 2^{-2R}$: each additional bit reduces the MSE by a factor of 4 (about 6 dB per bit).
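A minimal sketch of both functions for a Gaussian source (the variance is an arbitrary choice):

```python
import numpy as np

sigma2 = 4.0                  # source variance (made-up)

def gaussian_rd(D, var=sigma2):
    """R(D) = max(0.5 * log2(var / D), 0) bits/sample."""
    return np.maximum(0.5 * np.log2(var / D), 0.0)

def gaussian_dr(R, var=sigma2):
    """D(R) = var * 2^(-2R), the inverse of R(D)."""
    return var * 2.0 ** (-2.0 * R)

print(gaussian_rd(1.0))       # 1 bit/sample for D = var/4
print(gaussian_dr(1.0))       # 1.0 = var/4: one bit quarters the MSE
print(gaussian_dr(2.0))       # 0.25: about 6 dB less distortion per extra bit
```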

25 The Gaussian Source Lossy Compression
[Figure legend] where:
C = log2(1 + S/N): channel capacity
Re: maximum practical rate; Gap = C – Re
Rr: rate required
DG: distortion for the Gaussian quantizer
DQ1, DQ2: quantizer models 1 and 2
Dmax: maximum possible distortion

26 Bit Allocation (Step-by-Step)
Problem formulation: encode a set of independent Gaussian r.v.s $\{X_1, \dots, X_n\}$, $X_k \sim N(0, \sigma_k^2)$, as $\{X_{1q}, \dots, X_{nq}\}$, $X_{kq} \sim N(0, \sigma_k^2 - D_k)$.
Allocate $R_k$ bits to represent each r.v. $X_k$, incurring distortion $D_k$.
Total bit cost: $R = R_1 + \dots + R_n$. Total MSE distortion: $D = D_1 + \dots + D_n$.
– What is the best bit allocation $\{R_1, \dots, R_n\}$ such that R is minimized and the total distortion $D \le D_{req}$?
– What is the best bit allocation $\{R_1, \dots, R_n\}$ such that D is minimized and the total rate $R \le R_{req}$?
Recall $R_k = \max\big(\frac{1}{2}\log_2(\sigma_k^2 / D_k),\, 0\big)$!
Solve the constrained optimization problem using a Lagrange multiplier $\lambda \ge 0$ (see the sketch below).
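A minimal numerical sketch of the resulting allocation (reverse water-filling; the variances and distortion budget are made up):

```python
import numpy as np

def reverse_waterfill(variances, D_total):
    """Pick a water level lam with D_k = min(lam, sigma_k^2) and
    sum(D_k) = D_total; then R_k = max(0.5*log2(sigma_k^2/D_k), 0)."""
    var = np.asarray(variances, dtype=float)
    lo, hi = 0.0, var.max()
    for _ in range(100):                 # bisection on the water level
        lam = 0.5 * (lo + hi)
        if np.minimum(lam, var).sum() < D_total:
            lo = lam
        else:
            hi = lam
    D = np.minimum(lam, var)             # per-component distortions
    R = np.maximum(0.5 * np.log2(var / D), 0.0)
    return R, D

R, D = reverse_waterfill([4.0, 1.0, 0.25], D_total=1.5)
print(R.round(3), D.round(3), R.sum().round(3))
# components with variance below the water level get zero bits
```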

27 Bit Allocation (How Many Bits for Each Coeff.?)
Determined by the variance of the coefficient: more bits for a high variance $\sigma_k^2$ to keep the total MSE small, $D = \sum_k \sigma_k^2\, f(n_k)$,
where $\sigma_k^2$ is the variance of the k-th coefficient v(k), $n_k$ is the number of bits allocated for v(k), and f(·) is the quantization distortion function.
"Inverse water-filling" (from information theory): try to keep the same amount of error in each frequency band.

28 Details on Reverse Water-filling Solution
Construct the objective function using a Lagrange multiplier λ.
Necessary condition: keep the same marginal gain, i.e., the slope ∂R_k/∂D_k is the same for every component.
Necessary condition for choosing λ: the per-component distortions add up to the total budget D.
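Spelled out for the Gaussian case (a standard derivation consistent with the slide, not verbatim from it):

```latex
J(\{D_k\},\lambda) = \sum_{k=1}^{n} \tfrac{1}{2}\log_2\frac{\sigma_k^2}{D_k}
                     + \lambda\Big(\sum_{k=1}^{n} D_k - D\Big),
\qquad
\frac{\partial J}{\partial D_k} = -\frac{1}{2\ln 2\; D_k} + \lambda = 0
\;\Rightarrow\;
D_k = \frac{1}{2\lambda \ln 2} \equiv \lambda'
\quad (\text{wherever } \lambda' < \sigma_k^2).
```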

29 Rate/Distortion Allocation via Reverse Water-filling
How many bits should be allocated for each coefficient? Determined by the variance of the coefficients: more bits for a high variance $\sigma_k^2$ to keep the total MSE small.
Optimal solution for Gaussian sources: "reverse water-filling".
– Idea: keep the same amount of error in each frequency band, and spend no bit resources on coefficients whose variance is smaller than the water level.
– Results follow from the R-D function via Lagrange-multiplier optimization: given D, determine λ and then $R_D$; or vice versa for a given $R_D$.
=> The "equal slope" idea can be extended to other convex (operational) R-D functions.

