
EE465: Introduction to Digital Image Processing

Slide 1: Data Compression: Advanced Topics
- Huffman Coding Algorithm: motivation, procedure, examples
- Unitary Transforms: definition, properties, applications

Slide 2: Recall: Variable Length Codes (VLC)
Assign a long codeword to an event with small probability and a short codeword to an event with large probability. The self-information of an event x is I(x) = -log2 p(x). It follows from this formula that a small-probability event carries much information and is therefore worth many bits to represent; conversely, if an event occurs frequently, it is a good idea to use as few bits as possible to represent it. This observation leads to the idea of varying code lengths according to the events' probabilities.

Slide 3: Two Goals of VLC Design
- Achieve optimal code length (i.e., minimal redundancy): for an event x with probability p(x), the optimal code length is l(x) = ceil(-log2 p(x)), where ceil(x) denotes the smallest integer not less than x (e.g., ceil(3.4) = 4).
- Satisfy the uniquely decodable (prefix) condition.
Code redundancy: r = l_avg - H(X) >= 0, where l_avg is the average code length and H(X) is the source entropy. Unless the probabilities of the events are all powers of 2, we often have r > 0.

Slide 4: "Big Question"
How can we simultaneously achieve minimum redundancy and the uniquely decodable condition? D. Huffman was the first to think about this problem and come up with a systematic solution.

Slide 5: Huffman Coding (Huffman, 1952)
Coding procedure for an N-symbol source:
Source reduction
- List all probabilities in descending order.
- Merge the two symbols with the smallest probabilities into a new compound symbol.
- Repeat the above two steps N-2 times.
Codeword assignment
- Start from the smallest (most reduced) source and work back to the original source.
- Each merging point corresponds to a node in the binary codeword tree.
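The procedure above can be sketched in Python (the course's demos use MATLAB, but this standalone sketch uses Python; the function name huffman_code and the priority-queue implementation are our own illustration, not the slide's):

```python
import heapq
from itertools import count

def huffman_code(probs):
    """Build a Huffman code for a {symbol: probability} dict.

    Mirrors the slide's procedure: repeatedly merge the two
    least-probable (compound) symbols, then read codewords off
    the binary tree by prepending one bit per merge.
    """
    tick = count()  # tie-breaker so the heap never compares symbol tuples
    heap = [(p, next(tick), (sym,)) for sym, p in probs.items()]
    heapq.heapify(heap)
    code = {sym: "" for sym in probs}
    while len(heap) > 1:
        p0, _, group0 = heapq.heappop(heap)  # smallest probability
        p1, _, group1 = heapq.heappop(heap)  # second smallest
        # each merging point is a tree node: one branch gets '0', the other '1'
        for sym in group0:
            code[sym] = "0" + code[sym]
        for sym in group1:
            code[sym] = "1" + code[sym]
        heapq.heappush(heap, (p0 + p1, next(tick), group0 + group1))
    return code

# the four-symbol source of Example-I
code = huffman_code({"S": 0.5, "N": 0.25, "E": 0.125, "W": 0.125})
print(code)  # codeword lengths 1, 2, 3, 3, matching Example-I
```

Because the probabilities here are powers of 2, the average code length (1.75 bits) equals the entropy exactly.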

Slide 6: Example-I, Step 1: Source reduction

symbol x   p(x)    reduction 1   reduction 2
S          0.5     0.5           0.5
N          0.25    0.25          (NEW) 0.5
E          0.125   (EW) 0.25
W          0.125

Compound symbols: (EW) merges E and W; (NEW) merges N and (EW).

Slide 7: Example-I (Con't), Step 2: Codeword assignment

symbol x   p(x)    codeword
S          0.5     0
N          0.25    10
E          0.125   110
W          0.125   111

Working back through the merges, (NEW) receives prefix 1 and (EW) receives prefix 11; at each node one branch is labeled 0 and the other 1.

Slide 8: Example-I (Con't)
The codeword assignment is not unique. At each merging point (node) we can arbitrarily assign "0" and "1" to the two branches; the average code length stays the same. For example, S=0, N=10, E=110, W=111 and S=1, N=01, E=001, W=000 are equally valid assignments.

Slide 9: Example-II, Step 1: Source reduction

symbol x   p(x)
e          0.4
a          0.2
i          0.2
o          0.1
u          0.1

Compound symbols: merge o and u into (ou) with probability 0.2; merge i and (ou) into (iou) with probability 0.4; merge a and (iou) into (aiou) with probability 0.6, leaving e (0.4) and (aiou) (0.6).

Slide 10: Example-II (Con't), Step 2: Codeword assignment

symbol x   p(x)   codeword
e          0.4    1
a          0.2    01
i          0.2    000
o          0.1    0010
u          0.1    0011

Compound codewords: (aiou) = 0, (iou) = 00, (ou) = 001.

Slide 11: Example-II (Con't)
Binary codeword tree representation: the root splits into e (branch 1) and (aiou) (branch 0); (aiou) splits into a (01) and (iou) (00); (iou) splits into i (000) and (ou) (001); (ou) splits into o (0010) and u (0011).

Slide 12: Example-II (Con't)

symbol x   p(x)   codeword   length
e          0.4    1          1
a          0.2    01         2
i          0.2    000        3
o          0.1    0010       4
u          0.1    0011       4

The average Huffman code length is 2.2 bits per sample, against an entropy of 2.122 bits, giving a redundancy of 0.078 bps. If we use fixed-length codes, we have to spend three bits per sample, which gives a code redundancy of 3 - 2.122 = 0.878 bps.
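The numbers on this slide can be checked directly; this short Python sketch recomputes the average code length, the entropy, and both redundancies from the table above:

```python
from math import log2

probs  = {"e": 0.4, "a": 0.2, "i": 0.2, "o": 0.1, "u": 0.1}
length = {"e": 1, "a": 2, "i": 3, "o": 4, "u": 4}  # Huffman lengths from the slide

avg_len = sum(probs[s] * length[s] for s in probs)   # average bits per sample
entropy = -sum(p * log2(p) for p in probs.values())  # source entropy H(X)

print(avg_len)            # 2.2 bits/sample
print(entropy)            # about 2.122 bits/sample
print(avg_len - entropy)  # Huffman redundancy, about 0.078 bps
print(3 - entropy)        # fixed-length (3-bit) redundancy, about 0.878 bps
```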

Slide 13: Example-III, Step 1: Source reduction
(The source-reduction table with compound symbols appeared as a figure and is not preserved in this transcript.)

Slide 14: Example-III (Con't), Step 2: Codeword assignment
(The codeword-assignment table with compound symbols appeared as a figure and is not preserved in this transcript.)

Slide 15: Summary of Huffman Coding Algorithm
- Achieves minimal redundancy subject to the constraint that the source symbols are coded one at a time.
- Sorting symbols in descending order of probability is the key step in source reduction.
- The codeword assignment is not unique: exchanging the "0" and "1" labels at any node of the binary codeword tree produces another solution that works equally well.
- It only works for a source with a finite number of symbols (otherwise there is nowhere to start).

Slide 16: Data Compression: Advanced Topics (outline)
- Huffman Coding Algorithm: motivation, procedure, examples
- Unitary Transforms: definition, properties, applications

Slide 17: An Example of a 1D Transform with Two Variables
y = A x, where x = (x1, x2)^T and y = (y1, y2)^T. With the transform matrix A = (1/sqrt(2)) [1 1; 1 -1], the input point (1, 1) maps to (1.414, 0): the transform rotates the coordinate axes by 45 degrees.

Slide 18: Decorrelating Property of Transforms
x1 and x2 are highly correlated: p(x1, x2) != p(x1)p(x2). After the transform, y1 and y2 are less correlated: p(y1, y2) is approximately p(y1)p(y2). Please use the MATLAB demo program to help you understand why less correlation is desirable for image compression.

Slide 19: Transform = Change of Coordinates
- Intuitively speaking, the transform plays the role of facilitating source modeling: due to the decorrelating property, it is easier to model transform coefficients (Y) than pixel values (X).
- An appropriate choice of transform (transform matrix A) depends on the source statistics P(X). We will only consider the class of transforms corresponding to unitary matrices.

Slide 20: Unitary Matrix
Definition: a matrix A is called unitary if A^{-1} = A^{*T}, where A^{*T} denotes the conjugate transpose of A. For a real matrix A, it is unitary if A^{-1} = A^T.
Note: transpose and conjugation can be exchanged, i.e., A^{*T} = A^{T*}.
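The definition is easy to verify numerically. This Python sketch (NumPy, standing in for the course's MATLAB; the matrices are our own illustrative examples) checks a real and a complex unitary matrix:

```python
import numpy as np

# Real case: unitary reduces to A^{-1} = A^T (an orthogonal matrix).
A = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)
assert np.allclose(np.linalg.inv(A), A.T)

# Complex case: a phase-rotated copy of A still satisfies A^{-1} = A^{*T},
# but the plain transpose is no longer the inverse.
B = A.astype(complex) * np.exp(0.3j)
assert np.allclose(np.linalg.inv(B), B.conj().T)
assert not np.allclose(np.linalg.inv(B), B.T)
print("unitary checks passed")
```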

Slide 21: Example 1: Discrete Fourier Transform (DFT)
DFT: y(k) = (1/sqrt(N)) * sum_{n=0}^{N-1} x(n) W_N^{kn}, k = 0, ..., N-1, where W_N = e^{-j2*pi/N} lies on the unit circle of the complex plane (Re/Im axes).
DFT matrix: A[k, n] = (1/sqrt(N)) W_N^{kn}. (The 1/sqrt(N) normalization makes the matrix unitary, consistent with the next slide.)

Slide 22: Discrete Fourier Transform (Con't)
Properties of the DFT matrix:
- Symmetry: A^T = A.
- Unitary: A^{-1} = A^{*T}.
Proof sketch: if we denote W_N = e^{-j2*pi/N}, then (A A^{*T})[k, l] = (1/N) sum_n W_N^{(k-l)n}, which equals 1 for k = l and 0 otherwise (the identity matrix).
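Both properties can be confirmed numerically. A minimal Python sketch (the helper name dft_matrix is ours) builds the unitary DFT matrix and tests symmetry and unitarity:

```python
import numpy as np

def dft_matrix(N):
    """Unitary DFT matrix: A[k, n] = W_N^{k n} / sqrt(N), W_N = exp(-2j*pi/N)."""
    k = np.arange(N)
    return np.exp(-2j * np.pi * np.outer(k, k) / N) / np.sqrt(N)

A = dft_matrix(8)
assert np.allclose(A, A.T)                     # symmetry: A^T = A
assert np.allclose(A.conj().T @ A, np.eye(8))  # unitary: A^{*T} A = I
print("DFT matrix is symmetric and unitary")
```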

Slide 23: Example 2: Discrete Cosine Transform (DCT)
The DCT matrix is real (you can check this using the MATLAB demo). Its entries are C[k, n] = alpha(k) cos((2n+1)k*pi / (2N)), with alpha(0) = sqrt(1/N) and alpha(k) = sqrt(2/N) for k >= 1.

Slide 24: DCT Examples

N=2:
0.7071  0.7071
0.7071 -0.7071

N=4:
0.5000  0.5000  0.5000  0.5000
0.6533  0.2706 -0.2706 -0.6533
0.5000 -0.5000 -0.5000  0.5000
0.2706 -0.6533  0.6533 -0.2706

Here is a piece of MATLAB code to generate the DCT matrix yourself:

% generate DCT matrix with size of N-by-N
function C = DCT_matrix(N)
for i = 1:N
    x = zeros(N,1); x(i) = 1;  % i-th unit vector
    y = dct(x);                % its DCT
    C(:,i) = y;                % i-th column of the DCT matrix
end
end
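The same matrix can be generated directly from the formula on the previous slide; this Python sketch (the helper name dct_matrix is ours) reproduces the N=4 matrix above and verifies that it is real unitary (orthogonal):

```python
import numpy as np

def dct_matrix(N):
    """Orthonormal DCT-II matrix: C[k, n] = alpha(k) * cos((2n+1)k*pi / (2N))."""
    n = np.arange(N)
    C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
    C[0, :] = np.sqrt(1.0 / N)  # DC row uses alpha(0) = sqrt(1/N)
    return C

C4 = dct_matrix(4)
print(np.round(C4, 4))                    # matches the N=4 matrix on the slide
assert np.allclose(C4 @ C4.T, np.eye(4))  # real unitary
```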

Slide 25: Example 3: Hadamard Transform
Here is a piece of MATLAB code to generate the (normalized) Hadamard matrix yourself:

% generate normalized Hadamard matrix of size N = 2^n
function H = hadamard_n(n)  % renamed here to avoid shadowing MATLAB's built-in hadamard
H = [1 1; 1 -1]/sqrt(2);
i = 1;
while i < n
    H = [H H; H -H]/sqrt(2);
    i = i + 1;
end
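A Python equivalent of the recursion above (the function name hadamard_unitary is ours), with a check that the result is unitary:

```python
import numpy as np

def hadamard_unitary(n):
    """Normalized 2^n-by-2^n Hadamard matrix via the slide's recursion
    H -> [H H; H -H] / sqrt(2)."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    for _ in range(n - 1):
        H = np.block([[H, H], [H, -H]]) / np.sqrt(2)
    return H

H8 = hadamard_unitary(3)
assert H8.shape == (8, 8)
assert np.allclose(H8 @ H8.T, np.eye(8))        # unitary (orthogonal)
assert np.allclose(np.abs(H8), 1 / np.sqrt(8))  # every entry is +-1/sqrt(8)
print("8x8 normalized Hadamard matrix is unitary")
```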

Slide 26: 1D Unitary Transform
When the transform matrix A is unitary, the defined 1D transform is called a unitary transform.
Forward transform: y = A x.
Inverse transform: x = A^{-1} y = A^{*T} y.

Slide 27: Basis Vectors
The basis vectors corresponding to the forward transform are the column vectors of the transform matrix A; the basis vectors corresponding to the inverse transform are the column vectors of A^{*T}.

Slide 28: From 1D to 2D
Do N 1D transforms in parallel, one per column of the image.

Slide 29: Definition of 2D Transform
2D forward transform: Y = A X A^T, i.e., a 1D column transform (left-multiplying by A) combined with a 1D row transform (right-multiplying by A^T).

Slide 30: 2D Transform = Two Sequential 1D Transforms
Y = A X A^T can be computed as (A X) A^T (left matrix multiplication first, i.e., column transform then row transform) or as A (X A^T) (right matrix multiplication first, i.e., row transform then column transform).
Conclusion:
- A 2D separable transform can be decomposed into two sequential 1D transforms.
- The ordering of the 1D transforms does not matter.
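The ordering claim is just matrix associativity, which a two-line Python check makes concrete (the 2x2 matrix and sample "image" are our own illustration):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # sample 2x2 transform matrix
X = np.array([[1.0, 2.0], [3.0, 4.0]])                # sample 2x2 "image"

col_first = (A @ X) @ A.T  # column transform first (left multiplication first)
row_first = A @ (X @ A.T)  # row transform first (right multiplication first)
assert np.allclose(col_first, row_first)  # associativity: ordering is irrelevant
print("both orderings give the same 2D transform")
```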

Slide 31: Basis Images
The basis image B_ij can be viewed as the response of the linear system (the 2D transform) to a delta-function input delta_ij: applying the inverse 2D transform to an N-by-N array that is 1 at position (i, j) and 0 elsewhere yields B_ij, the outer product of the i-th and j-th basis vectors.
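A small Python sketch makes the delta-response view concrete, using the 4-point orthonormal DCT as the (real) transform matrix and the indices (i, j) = (1, 2) as an arbitrary choice:

```python
import numpy as np

# 4-point orthonormal DCT matrix (real, so A^{*T} = A^T)
N = 4
n = np.arange(N)
A = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
A[0, :] = np.sqrt(1.0 / N)

# Basis image B_ij = inverse 2D transform of a delta at (i, j)
i, j = 1, 2
delta = np.zeros((N, N))
delta[i, j] = 1.0
B_ij = A.T @ delta @ A  # inverse 2D transform

# It equals the outer product of the i-th and j-th basis vectors (rows of A)
assert np.allclose(B_ij, np.outer(A[i, :], A[j, :]))
print("B_ij is the outer product of two basis vectors")
```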

Slide 32: Example 1: 8-by-8 Hadamard Transform
(Figure: the 64 basis images B_ij, indexed by i and j, with the DC basis image at the top-left.) In the MATLAB demo you can generate these 64 basis images and display them.

Slide 33: Example 2: 8-by-8 DCT
(Figure: the 64 basis images, indexed by i and j, with the DC basis image at the top-left.) In the MATLAB demo you can generate these 64 basis images and display them.

Slide 34: 2D Unitary Transform
Suppose A is a unitary matrix.
Forward transform: Y = A X A^T.
Inverse transform: X = A^{*T} Y A^*.
Proof: since A is unitary, A^{*T} Y A^* = A^{*T} A X A^T A^* = X, because A^{*T} A = I and A^T A^* = (A^{*T} A)^* = I.

Slide 35: Properties of Unitary Transforms
- Energy compaction: only a small fraction of the transform coefficients have large magnitude. This property is related to the decorrelating capability of unitary transforms.
- Energy conservation: a unitary transform preserves the 2-norm of input vectors. This property essentially comes from the fact that rotating the coordinates does not affect Euclidean distance.

Slide 36: Energy Compaction Property
- How does a unitary transform compact energy? Assumption: the signal is correlated; no energy compaction can be achieved for white noise, even with a unitary transform. Advanced mathematical analysis shows that the DCT basis approximates the eigenvectors of an AR(1) process (a good model for correlated signals such as images).
- A frequency-domain interpretation: images are a mixture of smooth regions and edges, so most transform coefficients are small except those around DC and those corresponding to edges (spatially high-frequency components).
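A quick Python experiment (our own illustration: a random walk stands in for a correlated AR(1)-like signal, with an arbitrary fixed seed) shows the contrast between a correlated signal and white noise under the DCT:

```python
import numpy as np

# Orthonormal DCT-II matrix of size 64
N = 64
n = np.arange(N)
C = np.sqrt(2.0 / N) * np.cos(np.pi * np.outer(n, 2 * n + 1) / (2 * N))
C[0, :] = np.sqrt(1.0 / N)

rng = np.random.default_rng(1)
smooth = np.cumsum(rng.standard_normal(N))  # correlated signal (random walk)
noise = rng.standard_normal(N)              # white noise

def top_k_energy(signal, k=8):
    """Fraction of total energy in the k largest-magnitude DCT coefficients."""
    e = np.sort((C @ signal) ** 2)[::-1]
    return e[:k].sum() / e.sum()

# The correlated signal packs far more of its energy into a few coefficients
print(top_k_energy(smooth), top_k_energy(noise))
```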

Slide 37: Energy Compaction Example in 1D
(Figure: the Hadamard transform of a sample vector, with significant and insignificant coefficients marked for threshold th = 64.) A coefficient is called significant if its magnitude is above a pre-selected threshold th.

Slide 38: Energy Compaction Example in 2D
(Figure: an example 2D transform, with insignificant coefficients marked for threshold th = 64.) A coefficient is called significant if its magnitude is above a pre-selected threshold th.

Slide 39: Image Example
Original cameraman image X and its DCT coefficients Y (2451 significant coefficients at th = 64; low-frequency coefficients at the top-left, high-frequency at the bottom-right). Notice the excellent energy compaction property of the DCT.

Slide 40: Counter Example
Original noise image X and its DCT coefficients Y: no energy compaction can be achieved for white noise.

Slide 41: Energy Conservation Property in 1D
If A is unitary and y = A x, then ||y||^2 = ||x||^2.
Proof (1D case): ||y||^2 = y^{*T} y = (A x)^{*T} (A x) = x^{*T} A^{*T} A x = x^{*T} x = ||x||^2.
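A numerical check in Python, using the unitary 16-point DFT matrix on a random real vector (seed chosen arbitrarily):

```python
import numpy as np

k = np.arange(16)
A = np.exp(-2j * np.pi * np.outer(k, k) / 16) / 4.0  # unitary DFT, sqrt(16) = 4
x = np.random.default_rng(2).standard_normal(16)
y = A @ x

# ||y|| equals ||x||: the unitary transform preserves the 2-norm
assert np.isclose(np.linalg.norm(y), np.linalg.norm(x))
print(np.linalg.norm(x), np.linalg.norm(y))
```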

Slide 42: Numerical Example
(The worked numerical check of 1D energy conservation appeared as a figure and is not preserved in this transcript.)

Slide 43: Implication of Energy Conservation
If A is unitary and the transform coefficients y are quantized by Q to y_hat, then by linearity of the transform (and its inverse T^{-1}) the reconstruction error satisfies ||x - x_hat||^2 = ||y - y_hat||^2: quantization noise has the same energy in the transform domain and in the spatial domain.

Slide 44: Energy Conservation Property in 2D
The 2-norm (Frobenius norm) of a matrix X is ||X||_F^2 = sum_i sum_j |X(i,j)|^2.
Step 1: for A unitary, ||A X||_F = ||X||_F.
Proof: each column of A X is the 1D transform of the corresponding column of X, so using the energy conservation property in 1D, every column keeps its 2-norm, and hence so does the whole matrix.

Slide 45: Energy Conservation Property in 2D (Con't)
Step 2: for A unitary, ||A X A^T||_F = ||X||_F.
Hint: the 2D transform can be decomposed into two sequential 1D transforms (a column transform followed by a row transform). Use the result from Step 1 and note that ||X A^T||_F = ||(X A^T)^T||_F = ||A X^T||_F = ||X^T||_F = ||X||_F.
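A small worked check in Python, with a 2x2 real unitary matrix and a sample "image" (both chosen by us for illustration); the Frobenius norm of X and of Y = A X A^T are both sqrt(30):

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)  # real unitary matrix
X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = A @ X @ A.T                                       # 2D unitary transform

print(Y)  # [[5. -1.] [-2.  0.]]
assert np.isclose(np.linalg.norm(Y, "fro"), np.linalg.norm(X, "fro"))  # sqrt(30)
```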

Slide 46: Numerical Example
(The worked numerical check of 2D energy conservation appeared as a figure and is not preserved in this transcript.)

Slide 47: Implication of Energy Conservation
Similar to the 1D case, quantization noise in the transform domain has the same energy as in the spatial domain: with Y quantized by Q to Y_hat, linearity of the transform (and its inverse T^{-1}) gives ||X - X_hat||_F = ||Y - Y_hat||_F.

Slide 48: Why Energy Conservation?
The transform coding pipeline: the image X passes through the forward transform f to produce coefficients Y; probability estimation drives the entropy coding of Y into a binary bit stream s; the bit stream crosses the (super) channel; entropy decoding recovers Y_hat; and the inverse transform f^{-1} reconstructs the image X_hat. Energy conservation guarantees that the distortion introduced on Y in the transform domain equals the distortion seen on the reconstructed image X_hat.

