
1 Image Compression (Chapter 8)
CS474/674 – Prof. Bebis

2 Goal of Image Compression
The goal of image compression is to reduce the amount of data required to represent a digital image.

3 Approaches
Lossless: information preserving; low compression ratios.
Lossy: not information preserving; high compression ratios.
Trade-off: image quality vs. compression ratio.

4 Data ≠ Information Data and information are not synonymous terms!
Data is the means by which information is conveyed. Data compression aims to reduce the amount of data required to represent a given quantity of information while preserving as much information as possible.

5 Data vs Information (cont’d)
The same amount of information can be represented by various amounts of data, e.g.:
Ex1: "Your wife, Helen, will meet you at Logan Airport in Boston at 5 minutes past 6:00 pm tomorrow night."
Ex2: "Your wife will meet you at Logan Airport at 5 minutes past 6:00 pm tomorrow night."
Ex3: "Helen will meet you at Logan at 6:00 pm tomorrow night."

6 Definitions: Compression Ratio
Compression ratio: C = n1 / n2, where n1 and n2 denote the number of information-carrying units (e.g., bits) in the original and compressed representations, respectively.

7 Definitions: Data Redundancy
Relative data redundancy: R = 1 - 1/C
Example: if C = 10, then R = 0.9, i.e., 90% of the data is redundant.
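These two definitions are easy to check numerically; a minimal sketch (function names are mine), assuming C = n1/n2 and R = 1 - 1/C:

```python
def compression_ratio(n1, n2):
    """C = n1 / n2: information-carrying units before vs. after compression."""
    return n1 / n2

def relative_redundancy(c):
    """R = 1 - 1/C: the fraction of the original data that is redundant."""
    return 1.0 - 1.0 / c

c = compression_ratio(10_000, 1_000)  # e.g., 10:1 compression
r = relative_redundancy(c)            # 0.9 -> 90% of the data is redundant
```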

8 Types of Data Redundancy
(1) Coding Redundancy (2) Interpixel Redundancy (3) Psychovisual Redundancy Compression attempts to reduce one or more of these redundancy types.

9 Coding Redundancy
Code: a list of symbols (letters, numbers, bits, etc.).
Code word: a sequence of symbols used to represent a piece of information or an event (e.g., gray levels).
Code word length: the number of symbols in each code word.

10 Coding Redundancy (cont’d)
N x M image
rk: k-th gray level
P(rk): probability of rk
l(rk): # of bits for rk
Expected value: Lavg = E[l(rk)] = Σk l(rk) P(rk) bits/pixel

11 Coding Redundancy (cont'd)
Case 1: l(rk) = constant length Example:

12 Coding Redundancy (cont’d)
Case 2: l(rk) = variable length

13 Interpixel redundancy
Interpixel redundancy implies that pixel values are correlated (i.e., a pixel value can be reasonably predicted from its neighbors).
(Figure: two images A and B, each with the autocorrelation of one image line, computed with g(x) = f(x).)

14 Interpixel redundancy (cont’d)
To reduce interpixel redundancy, the data must be transformed into another format (i.e., using a transformation), e.g., thresholding, DFT, DWT, etc.
Example (profile of line 100): original → thresholded; each run is coded using (1+10) bits per pair.

15 Psychovisual redundancy
The human eye does not respond with equal sensitivity to all visual information. It is more sensitive to the lower frequencies than to the higher frequencies in the visual spectrum. Idea: discard data that is perceptually insignificant!

16 Psychovisual redundancy (cont’d)
Example: quantization.
256 gray levels → 16 gray levels: C = 8/4 = 2:1.
16 gray levels with random noise: add to each pixel a small pseudo-random number prior to quantization.
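A sketch of both quantization variants, using a random stand-in image (the noise range of ±8, half a quantization bin, is my own choice):

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.int16)  # stand-in image

# Plain quantization: 256 -> 16 gray levels (keep the 4 most significant bits)
q16 = (img // 16) * 16

# With pseudo-random noise added prior to quantization: breaks up false
# contours at the cost of a grainy appearance
noise = rng.integers(-8, 9, size=img.shape)
q16_noisy = (np.clip(img + noise, 0, 255) // 16) * 16
```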

17 Measuring Information
What is the minimum amount of data that is sufficient to describe completely an image without loss of information? How do we measure the information content of a message/image?

18 Modeling Information
We assume that information generation is a probabilistic process. Idea: associate information with probability!
A random event E with probability P(E) contains I(E) = log(1/P(E)) = -log P(E) units of information.
Note: I(E) = 0 when P(E) = 1

19 How much information does a pixel contain?
Suppose that gray-level values are generated by a random process; then rk contains I(rk) = -log P(rk) units of information!

20 How much information does an image contain?
Average information content of an image: E[I(rk)] = Σk I(rk) P(rk) units/pixel (e.g., bits/pixel for log base 2).
Entropy: H = -Σk P(rk) log2 P(rk) (assumes statistically independent random events)
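The entropy formula can be computed directly from the gray-level histogram; a sketch, using a small stand-in image whose histogram happens to give H ≈ 1.81 bits/pixel:

```python
import numpy as np

def entropy(img, base=2.0):
    """H = -sum_k P(r_k) log P(r_k), estimated from the histogram."""
    _, counts = np.unique(np.asarray(img), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(base))

# Stand-in 4x8 image: gray levels 21 and 243 with probability 3/8,
# 95 and 169 with probability 1/8 each
img = np.array([[21, 21, 21, 95, 169, 243, 243, 243]] * 4)
h = entropy(img)   # ~1.81 bits/pixel
```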

21 Redundancy (revisited)
R = 1 - 1/C, where C = Lavg / H (average code length relative to the entropy).
Note: if Lavg = H, then C = 1 and R = 0 (no redundancy)

22 Entropy Estimation
It is not easy to estimate H reliably!

23 Entropy Estimation (cont’d)
First-order estimate of H: use the relative frequencies (histogram) of individual pixel values.

24 Estimating Entropy (cont’d)
Second-order estimate of H: use the relative frequencies of pixel blocks (e.g., pairs of adjacent pixels).

25 Estimating Entropy (cont’d)
The first-order estimate provides only a lower-bound on the compression that can be achieved. Differences between higher-order estimates of entropy and the first-order estimate indicate the presence of interpixel redundancy! Need to apply some transformations to deal with interpixel redundancy!

26 Estimating Entropy (cont’d)
Example: consider the image of pixel differences (each pixel replaced by its difference from the preceding pixel).

27 Estimating Entropy (cont’d)
Entropy of the difference image: better than before (i.e., H = 1.81 bits/pixel for the original image). An even better transformation could be found, since:
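The effect of the difference transform on the first-order estimate can be reproduced; the 4x8 image below is a stand-in (its exact numbers are mine, not the slide's), but the estimate drops the same way:

```python
import numpy as np

def entropy(a, base=2.0):
    """First-order entropy estimate from the histogram."""
    _, counts = np.unique(np.asarray(a), return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum() / np.log(base))

img = np.array([[21, 21, 21, 95, 169, 243, 243, 243]] * 4)
# Difference transform: keep the first pixel of each row, then
# horizontal differences between neighboring pixels
diff = np.column_stack([img[:, 0], np.diff(img, axis=1)])
h_orig = entropy(img)    # ~1.81 bits/pixel
h_diff = entropy(diff)   # ~1.41 bits/pixel: a lower first-order estimate
```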

28 Image Compression Model

29 Image Compression Model (cont’d)
Mapper: transforms input data in a way that facilitates reduction of interpixel redundancies.

30 Image Compression Model (cont’d)
Quantizer: reduces the accuracy of the mapper’s output in accordance with some pre-established fidelity criteria.

31 Image Compression Model (cont’d)
Symbol encoder: assigns the shortest code to the most frequently occurring output values.

32 Image Compression Models (cont’d)
Inverse steps are performed. Note that quantization is irreversible in general.

33 Fidelity Criteria
How close is the reconstructed image f^(x,y) to the original f(x,y)?
Criteria:
Subjective: based on human observers.
Objective: mathematically defined criteria.

34 Subjective Fidelity Criteria

35 Objective Fidelity Criteria
Root-mean-square error (RMSE): e_rms = [ (1/MN) Σx Σy ( f^(x,y) - f(x,y) )² ]^(1/2)
Mean-square signal-to-noise ratio (SNR): SNR_ms = Σx Σy f^(x,y)² / Σx Σy ( f^(x,y) - f(x,y) )²
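Both criteria are one-liners in code; a sketch with tiny hypothetical arrays:

```python
import numpy as np

def rmse(f, fhat):
    """Root-mean-square error between original f and reconstruction fhat."""
    e = np.asarray(fhat, float) - np.asarray(f, float)
    return float(np.sqrt(np.mean(e ** 2)))

def snr_ms(f, fhat):
    """Mean-square signal-to-noise ratio of the reconstruction."""
    fhat = np.asarray(fhat, float)
    e = fhat - np.asarray(f, float)
    return float((fhat ** 2).sum() / (e ** 2).sum())

f    = np.array([[10.0, 20.0], [30.0, 40.0]])   # hypothetical original
fhat = np.array([[12.0, 18.0], [30.0, 44.0]])   # hypothetical reconstruction
rmse(f, fhat)   # sqrt((4 + 4 + 0 + 16) / 4) = sqrt(6)
```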

36 Objective Fidelity Criteria (cont’d)
(Example reconstructions: RMSE = 5.17, RMSE = 15.67, RMSE = 14.17.)

37 Lossless Compression

38 Lossless Methods - Taxonomy

39 Huffman Coding (coding redundancy)
A variable-length coding technique. Symbols are encoded one at a time! There is a one-to-one correspondence between source symbols and code words. Optimal code (i.e., it minimizes the average code word length per source symbol).

40 Huffman Coding (cont’d)
Forward Pass
1. Sort the symbol probabilities
2. Combine the two lowest probabilities
3. Repeat Step 2 until only two probabilities remain.

41 Huffman Coding (cont’d)
Backward Pass Assign code symbols going backwards

42 Huffman Coding (cont’d)
Lavg assuming Huffman coding: Lavg assuming binary codes:

43 Huffman Coding/Decoding
Coding/decoding can be implemented using a look-up table. Decoding can be done unambiguously.
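A compact sketch of Huffman coding and table-based decoding; the heap-of-code-tables construction is one of several equivalent implementations:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build {symbol: bitstring} from a {symbol: frequency} mapping."""
    # Heap entries: (total weight, tie-breaker, {symbol: code-so-far})
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)   # the two lowest probabilities...
        w2, _, t2 = heapq.heappop(heap)   # ...are combined (forward pass)
        merged = {s: "0" + c for s, c in t1.items()}        # prefix bits are
        merged.update({s: "1" + c for s, c in t2.items()})  # the backward pass
        heapq.heappush(heap, (w1 + w2, tie, merged))
        tie += 1
    return heap[0][2]

msg = "aaaabbbccd"
code = huffman_code(Counter(msg))
encoded = "".join(code[s] for s in msg)

# Decoding via an inverse look-up table is unambiguous (prefix-free code)
inv = {c: s for s, c in code.items()}
decoded, buf = [], ""
for bit in encoded:
    buf += bit
    if buf in inv:
        decoded.append(inv[buf])
        buf = ""
assert "".join(decoded) == msg
```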

44 Arithmetic (or Range) Coding (coding redundancy)
Instead of encoding source symbols one at a time, sequences of source symbols are encoded together. There is no one-to-one correspondence between source symbols and code words. Slower than Huffman coding but typically achieves better compression.

45 Arithmetic Coding (cont’d)
A sequence of source symbols is assigned to a sub-interval in [0,1), which corresponds to an arithmetic code, e.g., α1 α2 α3 α3 α4 → [0.06752, 0.0688), coded as 0.068. We start with the interval [0, 1); as the number of symbols in the message increases, the interval used to represent the message becomes smaller.

46 Arithmetic Coding (cont’d)
Encode message: α1 α2 α3 α3 α4
1) Start with the interval [0, 1)
2) Subdivide [0, 1) based on the probabilities of the αi
3) Update the interval by processing source symbols

47 Example
Encode α1 α2 α3 α3 α4 using the sub-intervals α1 [0, 0.2), α2 [0.2, 0.4), α3 [0.4, 0.8), α4 [0.8, 1.0).
The final interval is [0.06752, 0.0688), coded e.g. as 0.068 (must be inside the interval).

48 Example (cont'd)
The message α1 α2 α3 α3 α4 is encoded using 3 decimal digits, or 3/5 = 0.6 decimal digits per source symbol.
The entropy of this message is -(3 × 0.2 log10(0.2) + 0.4 log10(0.4)) ≈ 0.58 digits/symbol.
Note: finite-precision arithmetic might cause problems due to truncations!

49 Arithmetic Decoding
Decode 0.572 (using α1 [0.0, 0.2), α2 [0.2, 0.4), α3 [0.4, 0.8), α4 [0.8, 1.0)):
0.572 ∈ [0.4, 0.8) → α3
0.572 ∈ [0.56, 0.72) → α3
0.572 ∈ [0.56, 0.592) → α1
0.572 ∈ [0.5664, 0.5728) → α2
0.572 ∈ [0.57152, 0.5728) → α4
Decoded message: α3 α3 α1 α2 α4
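The encoding and decoding procedures can be sketched directly; the names a1..a4 stand for α1..α4, with the probabilities from the example (P(α3) = 0.4, the rest 0.2):

```python
# Sub-intervals from the example model
INTERVALS = {"a1": (0.0, 0.2), "a2": (0.2, 0.4),
             "a3": (0.4, 0.8), "a4": (0.8, 1.0)}

def arith_encode(msg):
    """Shrink [0,1) symbol by symbol; any number in the final
    interval encodes the whole message."""
    low, high = 0.0, 1.0
    for s in msg:
        lo, hi = INTERVALS[s]
        low, high = low + (high - low) * lo, low + (high - low) * hi
    return low, high

def arith_decode(x, n):
    """Invert the process: find which sub-interval x falls into, n times."""
    out, low, high = [], 0.0, 1.0
    for _ in range(n):
        for s, (lo, hi) in INTERVALS.items():
            a = low + (high - low) * lo
            b = low + (high - low) * hi
            if a <= x < b:
                out.append(s)
                low, high = a, b
                break
    return out

low, high = arith_encode(["a1", "a2", "a3", "a3", "a4"])
# (low, high) -> [0.06752, 0.0688); 0.068 lies inside, as in the example
arith_decode(0.572, 5)   # -> ['a3', 'a3', 'a1', 'a2', 'a4']
```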

50 LZW Coding (interpixel redundancy)
Requires no a priori knowledge of the pixel probability distribution. Assigns fixed-length code words to variable-length symbol sequences. Included in the GIF, TIFF, and PDF file formats.

51 LZW Coding
A codebook (or dictionary) needs to be constructed. Initially, the first 256 entries of the dictionary are assigned to the gray levels 0, 1, 2, ..., 255 (i.e., assuming 8 bits/pixel).
Initial dictionary:
Dictionary Location | Entry
0 | 0
1 | 1
... | ...
255 | 255
Consider a 4x4, 8-bit image.

52 LZW Coding (cont'd)
As the encoder examines image pixels, gray-level sequences (i.e., blocks) that are not in the dictionary are assigned to a new entry.
- Is 39 in the dictionary? ........ Yes
- What about 39-39? .............. No → add 39-39 at location 256

53 Example
Input sequence: 39 39 126 126 ...
Notation: CS = CR + P (concatenated sequence = currently recognized sequence CR plus new pixel P); CR = empty initially.
If CS is found in the dictionary: (1) no output; (2) CR = CS.
Else: (1) output D(CR); (2) add CS to the dictionary D; (3) CR = P.

54 Decoding LZW
Use the dictionary for decoding the encoded output sequence. The dictionary need not be sent with the encoded output; it can be built on the fly by the decoder as it reads the received code words.
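The encoding rule (CS = CR + P) can be sketched in a few lines; run on a 4x4 image of repeating 39/126 rows, the second occurrence of "39 39" comes out as the single dictionary code 256:

```python
def lzw_encode(pixels):
    """LZW: dictionary seeded with the single gray levels 0..255;
    unseen sequences get fixed-length codes 256, 257, ..."""
    d = {(i,): i for i in range(256)}
    out, cr = [], ()            # cr = currently recognized sequence
    for p in pixels:
        cs = cr + (p,)          # CS = CR + P
        if cs in d:
            cr = cs             # CS found: no output, CR = CS
        else:
            out.append(d[cr])   # output D(CR)
            d[cs] = len(d)      # add CS to the dictionary
            cr = (p,)           # CR = P
    if cr:
        out.append(d[cr])
    return out

# 4x4 image from the example, flattened row by row
img = [39, 39, 126, 126] * 4
codes = lzw_encode(img)   # 16 pixels compress to 10 codes
```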

55 Run-length coding (RLC) (interpixel redundancy)
Used to reduce the size of a repeating string of symbols (i.e., runs). Encodes a run of symbols as a pair (symbol, count):
1 1 1 1 1 0 0 0 0 0 0 1 → (1,5) (0,6) (1,1)
a a a b b b b b b c c → (a,3) (b,6) (c,2)
Can compress any type of data but cannot achieve high compression ratios compared to other compression methods.
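A run-length encoder is a few lines; a sketch using Python's groupby:

```python
from itertools import groupby

def rle(seq):
    """Encode each run as a (symbol, count) pair."""
    return [(s, len(list(g))) for s, g in groupby(seq)]

rle([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 1])   # -> [(1, 5), (0, 6), (1, 1)]
rle("aaabbbbbbcc")                          # -> [('a', 3), ('b', 6), ('c', 2)]
```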

56 Combining Huffman Coding with Run-length Coding
Assuming that a message has been encoded using Huffman coding, additional compression can be achieved using run-length coding. e.g., (0,1)(1,1)(0,1)(1,0)(0,2)(1,4)(0,2)

57 Bit-plane coding (interpixel redundancy)
Process each bit plane individually. (1) Decompose an image into a series of binary images. (2) Compress each binary image (e.g., using run-length coding)
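The two steps above amount to shift-and-mask operations; a sketch (the tiny image is a stand-in), showing that the decomposition is exactly invertible:

```python
import numpy as np

img = np.array([[108, 139], [200, 55]], dtype=np.uint8)  # stand-in image

# (1) Decompose into 8 binary images: plane 0 = LSB, ..., plane 7 = MSB
planes = [(img >> b) & 1 for b in range(8)]

# Each plane is binary, so it can be compressed with e.g. run-length coding.
# Reconstruction: sum the planes back with their bit weights.
recon = sum(p.astype(np.uint8) << b for b, p in enumerate(planes))
```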

58 Lossy Methods - Taxonomy

59 Lossy Compression
Transform the image into a domain where compression can be performed more efficiently (i.e., reduce interpixel redundancies). The image is typically divided into ~(N/n)² subimages of size n x n that are transformed separately.

60 Example: Fourier Transform
The magnitude of the FT decreases as u, v increase! Keep only the K x K lowest-frequency coefficients (indices 0 to K-1), with K << N.

61 Transform Selection T(u,v) can be computed using various transformations, for example: DFT DCT (Discrete Cosine Transform) KLT (Karhunen-Loeve Transformation)

62 DCT (Discrete Cosine Transform)
Forward: C(u,v) = α(u) α(v) Σ(x=0..n-1) Σ(y=0..n-1) f(x,y) cos[(2x+1)uπ / 2n] cos[(2y+1)vπ / 2n]
Inverse: f(x,y) = Σ(u=0..n-1) Σ(v=0..n-1) α(u) α(v) C(u,v) cos[(2x+1)uπ / 2n] cos[(2y+1)vπ / 2n]
where α(u) = sqrt(1/n) if u = 0 and sqrt(2/n) if u > 0 (similarly for α(v))
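The forward transform can be implemented straight from the definition (a naive matrix-product sketch, not the fast factorization real codecs use):

```python
import numpy as np

def dct2(f):
    """2D DCT of an n x n block, straight from the definition:
    C(u,v) = a(u) a(v) sum_{x,y} f(x,y) cos((2x+1)u pi / 2n) cos((2y+1)v pi / 2n)
    """
    n = f.shape[0]
    a = np.full(n, np.sqrt(2.0 / n))
    a[0] = np.sqrt(1.0 / n)
    # cos[u, x] = cos((2x+1) u pi / 2n)
    cos = np.cos(np.pi * np.outer(np.arange(n), 2 * np.arange(n) + 1) / (2 * n))
    return np.outer(a, a) * (cos @ f @ cos.T)

block = np.full((8, 8), 100.0)   # a constant (flat) subimage
F = dct2(block)
# All energy lands in the DC coefficient F[0,0]; the AC terms vanish
```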

63 DCT (cont’d) Set of basis functions for a 4x4 image (i.e., cosines of different frequencies).

64 DCT (cont'd)
Using 8 x 8 subimages, 64 coefficients per subimage, and 50% of the coefficients truncated:
RMS error: DFT 2.32, WHT 1.78, DCT 1.13

65 DCT (cont'd)
DCT minimizes "blocking artifacts" (i.e., boundaries between subimages do not become very visible).
DFT: n-point periodicity gives rise to discontinuities!
DCT: 2n-point periodicity prevents discontinuities!

66 DCT (cont’d) Subimage size selection: original 2 x 2 subimages

67 JPEG Compression Accepted as an international image compression standard in 1992. It uses DCT for handling interpixel redundancy. Modes of operation: (1) Sequential DCT-based encoding (2) Progressive DCT-based encoding (3) Lossless encoding (4) Hierarchical encoding

68 JPEG Compression (Sequential DCT-based encoding)
(Codec diagram: the encoder ends with an entropy encoder; the decoder begins with an entropy decoder.)

69 JPEG Steps
1. Divide the image into 8x8 subimages. For each subimage do:
2. Shift the gray levels to the range [-128, 127] (DCT requires the range to be centered around 0).
3. Apply DCT → 64 coefficients: 1 DC coefficient F(0,0) and 63 AC coefficients F(u,v).

70 Example [-128, 127] (non-centered spectrum)

71 JPEG Steps
4. Quantize the coefficients (i.e., reduce the amplitude of coefficients that do not contribute a lot): F*(u,v) = round( F(u,v) / Q(u,v) ), where Q(u,v) is the quantization table.
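A sketch of quantization and de-quantization on a hypothetical 4x4 coefficient block and table (real JPEG luminance tables are 8x8; all the numbers here are only illustrative):

```python
import numpy as np

# Hypothetical DCT coefficient block and quantization table
F = np.array([[236.0,  -1.0, -12.1,  -5.2],
              [-22.6, -17.5,  -6.2,  -3.2],
              [-10.9,  -9.3,  -1.6,   1.5],
              [ -7.1,  -1.9,   0.2,   1.5]])
Q = np.array([[16.0, 11.0, 10.0, 16.0],
              [12.0, 12.0, 14.0, 19.0],
              [14.0, 13.0, 16.0, 24.0],
              [14.0, 17.0, 22.0, 29.0]])

Fq = np.round(F / Q)   # quantization: most small coefficients become 0
Fd = Fq * Q            # de-quantization: the rounding loss is irreversible
```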

72 Example Quantization Table Q[i][j]

73 Example (cont'd): Quantization

74 JPEG Steps (cont’d) 5. Order the coefficients using zig-zag ordering
- Places non-zero coefficients first
- Creates long runs of zeros (i.e., ideal for run-length encoding)
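The zig-zag order itself can be generated by sorting indices by anti-diagonal, with the traversal direction alternating per diagonal; a sketch:

```python
def zigzag_indices(n=8):
    """Zig-zag scan order for an n x n coefficient block: traverse
    anti-diagonals u+v = 0, 1, 2, ..., alternating direction each time."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],                    # which diagonal
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

order = zigzag_indices(4)
# order[:6] -> [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```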

75 Example

76 JPEG Steps (cont’d) 6. Encode coefficients:
6.1 Form “intermediate” symbol sequence 6.2 Encode “intermediate” symbol sequences into a binary sequence.

77 Intermediate Symbol Sequence
DC coefficient → symbol_1 (SIZE), symbol_2 (AMPLITUDE); e.g., DC → (6)(61).
SIZE: # of bits needed to encode the amplitude of the coefficient.

78 DC Coefficients Encoding
symbol_1 (SIZE), symbol_2 (AMPLITUDE); predictive coding: the encoded amplitude is the difference between the DC coefficients of the current and previous blocks.

79 Intermediate Symbol Sequence (cont’d)
AC coefficient → symbol_1 (RUN-LENGTH, SIZE), symbol_2 (AMPLITUDE); e.g., AC → (0, 2)(-3); a special end-of-block symbol terminates the block.
SIZE: # of bits needed to encode the amplitude of the coefficient.
RUN-LENGTH: run of zeros preceding the coefficient. If RUN-LENGTH > 15, use the symbol (15,0), i.e., RUN-LENGTH = 16.

80 Example: AC Coefficients Encoding
Symbol_1 is encoded with a Variable-Length Code (VLC); symbol_2 with a Variable-Length Integer (VLI).
e.g., (1,4)(12) → the VLC for (1,4) followed by the 4-bit VLI for amplitude 12.

81 Final Symbol Sequence 81

82 Effect of "Quality"
File sizes of 8k, 21k, and 58k bytes: lower "quality" settings yield higher compression (smaller files); higher settings yield lower compression.

83 Effect of “Quality” (cont’d)

84 Effect of Quantization: homogeneous 8 x 8 block

85 Effect of Quantization: homogeneous 8 x 8 block (cont’d)
Quantized De-quantized

86 Effect of Quantization: homogeneous 8 x 8 block (cont’d)
Reconstructed Error Original

87 Effect of Quantization: non-homogeneous 8 x 8 block

88 Effect of Quantization: non-homogeneous 8 x 8 block (cont’d)
Quantized De-quantized

89 Effect of Quantization: non-homogeneous 8 x 8 block (cont’d)
Reconstructed Error Original:

90 WSQ Fingerprint Compression
An image coding standard for digitized fingerprints employing the discrete wavelet transform (Wavelet/Scalar Quantization or WSQ). Developed and maintained by: FBI Los Alamos National Lab (LANL) National Institute for Standards and Technology (NIST)

91 Memory Requirements FBI is digitizing fingerprints at 500 dots per inch with 8 bits of grayscale resolution. A single fingerprint card turns into about 10 MB of data! A sample fingerprint image 768 x 768 pixels =589,824 bytes

92 Preserving Fingerprint Details
The "white" spots in the middle of the black ridges are sweat pores. They are admissible points of identification in court, as are the little black flesh "islands" in the grooves between the ridges. These details are just a couple of pixels wide!

93 What compression scheme should be used?
Ideally, one would use a lossless method to preserve every pixel perfectly. Unfortunately, in practice lossless methods haven't done better than 2:1 on fingerprints! Does JPEG work well for fingerprint compression?

94 Results using JPEG compression
file size bytes compression ratio: 12.9 The fine details are pretty much history, and the whole image has this artificial ‘‘blocky’’ pattern superimposed on it. The blocking artifacts affect the performance of manual or automated systems!

95 Results using WSQ compression
file size bytes compression ratio: 12.9 The fine details are preserved better than they are with JPEG. NO blocking artifacts!

96 WSQ Algorithm 96

97 Varying compression ratio
FBI’s target bit rate is around 0.75 bits per pixel (bpp) i.e., corresponds to a target compression ratio of 10.7 (assuming 8-bit images) This target bit rate is set via a ‘‘knob’’ on the WSQ algorithm. i.e., similar to the "quality" parameter in many JPEG implementations.

98 Varying compression ratio (cont’d)
Original image 768 x 768 pixels ( bytes)

99 Varying compression ratio (cont’d) 0.9 bpp compression
WSQ image, file size bytes, compression ratio 12.4 JPEG image, file size bytes, compression ratio 11.9

100 Varying compression ratio (cont’d) 0.75 bpp compression
WSQ image, file size bytes compression ratio 15.0 JPEG image, file size bytes, compression ratio 14.5

101 Varying compression ratio (cont’d) 0.6 bpp compression
JPEG image, file size bytes, compression ratio 19.6 WSQ image, file size bytes, compression ratio 19.0

102 JPEG Modes JPEG supports several different modes
Sequential Mode Progressive Mode Hierarchical Mode Lossless Mode Sequential is the default mode Each image component is encoded in a single left-to-right, top-to-bottom scan. This is the mode we have just described!

103 Progressive JPEG The image is encoded in multiple scans, in order to produce a quick, rough decoded image when transmission time is long. Sequential Progressive

104 Progressive JPEG (cont’d)
Each scan encodes a subset of the DCT coefficients:
(1) Progressive spectral selection algorithm
(2) Progressive successive approximation algorithm
(3) Combined progressive algorithm

105 Progressive JPEG (cont’d)
(1) Progressive spectral selection algorithm Group DCT coefficients into several spectral bands Send low-frequency DCT coefficients first Send higher-frequency DCT coefficients next

106 Example

107 Progressive JPEG (cont’d)
(2) Progressive successive approximation algorithm Send all DCT coefficients but with lower precision. Refine DCT coefficients in later scans.

108 Example

109 Example after 0.9s after 1.6s after 3.6s after 7.0s

110 Progressive JPEG (cont’d)
(3) Combined progressive algorithm Combines spectral selection and successive approximation.

111 Hierarchical JPEG Hierarchical mode encodes the image at different resolutions. Image is transmitted in multiple passes with increased resolution at each pass.

112 Hierarchical JPEG (cont’d)
N/4 x N/4 N/2 x N/2 N x N

113 More Methods … See “Image Compression Techniques”, IEEE Potentials, February/March 2001

