Chapter 4: Compression (Part 2)

1 Chapter 4: Compression (Part 2)
Image Compression

2 The Scientist and Engineer's Guide to Digital Signal Processing
Acknowledgement: Some figures and pictures are taken from The Scientist and Engineer's Guide to Digital Signal Processing by Steven W. Smith.

3 Lossy compression Motivations:
Uncompressed images, video and audio data are huge; e.g., in HDTV, the bit rate easily exceeds 1 Gbps. Lossless methods (Huffman, arithmetic, LZW) are inadequate for images and video because the spatial and/or temporal redundancy of pixel values is not exploited. Special characteristics of human perception (e.g., greater sensitivity to low spatial frequencies) should be taken advantage of to achieve a higher compression ratio.

4 Spatial sensitivity A higher spatial frequency requires a larger contrast to be perceived.

5 Vector quantization (VQ)
A general lossy compression technique. Scalar quantization rounds a single number, e.g., 3,200,134 ≈ 3M. VQ is a generalization of scalar quantization: the subjects to be quantized are vectors. VQ can be viewed as a form of pattern recognition where an input pattern (a vector) is approximated by one of a predetermined set of standard patterns. “Doesn’t quantization mean round the figure? So how can people get slim with it?” Benny.

6 Vector quantization (Def’n)
A vector quantizer Q of dimension k and size N is a mapping from a vector in a k-dimensional Euclidean space into a finite set C containing N output or reproduction points, called code vectors. C is the codebook (with N vectors).

7 Vector quantization (Def’n)
The rate of Q is r = (log2 N) / k = the number of bits per vector component used to represent the input vector. Two issues: how to match a vector to a code vector (pattern recognition), and how to set up the codebook.

8 Searching the codebook
Given a vector, we need to search the codebook (finding an index) for the code vector that gives the minimum distortion. Squared error distortion: d(x, y) = Σi (xi − yi)², where x is the vector to be coded, y is a code vector, and xi is the i-th component of vector x.
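As a sketch, the distortion measure and the codebook search can be written in a few lines (the function names are ours, not from a library; the codebook values are the initial ones used in the illustration slides):

```python
# Sketch of the codebook search with squared-error distortion.

def sq_distortion(x, y):
    """Squared-error distortion: sum of (x_i - y_i)^2 over components."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

def nearest_code(x, codebook):
    """Index of the code vector with minimum distortion to x."""
    return min(range(len(codebook)), key=lambda i: sq_distortion(x, codebook[i]))

# Initial codebook from the illustration slides.
codebook = [[25, 33, 40], [13, 53, 61], [20, 88, 30], [21, 10, 24]]
print(nearest_code([25, 10, 24], codebook))  # 3, i.e. code 11 (distortion 16)
```

The distortions computed here (785, 3362, 6145, 16) match the worked example on the illustration slides.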

9 Codebook training Get a large sample of data (the training set).
Pick an initial set of code vectors. Partition the training set into cells. Use the cells (their centroids) to tune the codebook. Repeat.

10 Codebook training Step 1 Given a training set, X, with M vectors
Let d = the mean square distortion measure. Let the iteration index be j and set j = 1. Select an initial codebook C0. Set the initial distortion d0 = infinity. Pick a convergence threshold E.

11 Codebook training Step 2
Optimally encode each vector x in X using Cj-1: assign x to cell Pi,j-1 if x is quantized as yi,j-1, where yi,j-1 is the i-th code vector in Cj-1. Compute dj = the sum of all vector distortions. If (dj-1 − dj) / dj < E, then quit with codebook Cj-1; otherwise go to Step 3.

12 Codebook training Step 3
Update the code vectors: yi,j = the average of all vectors assigned to cell Pi,j-1 (i.e., the centroid). j++; go to Step 2.
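Steps 1–3 can be sketched as a training loop (this generalized Lloyd / LBG-style loop is a minimal illustrative implementation on plain Python lists; the function name is ours):

```python
# Minimal sketch of the Steps 1-3 training loop.

def train_codebook(training_set, codebook, eps=0.01, max_iter=100):
    d_prev = float("inf")                      # Step 1: d0 = infinity
    for _ in range(max_iter):
        # Step 2: optimally encode every training vector with the
        # current codebook and accumulate the total distortion.
        cells = [[] for _ in codebook]
        d = 0.0
        for x in training_set:
            dists = [sum((a - b) ** 2 for a, b in zip(x, y)) for y in codebook]
            i = dists.index(min(dists))
            cells[i].append(x)
            d += dists[i]
        if d == 0 or (d_prev - d) / d < eps:   # converged: quit
            break
        d_prev = d
        # Step 3: move each code vector to the centroid of its cell.
        codebook = [[sum(c) / len(cell) for c in zip(*cell)] if cell else y
                    for cell, y in zip(cells, codebook)]
    return codebook

# The data from the illustration slides.
training = [[25, 10, 24], [30, 30, 30], [11, 91, 11], [28, 28, 29],
            [20, 81, 11], [15, 42, 52], [24, 11, 24], [28, 29, 28],
            [25, 12, 25], [10, 89, 12]]
initial = [[25, 33, 40], [13, 53, 61], [20, 88, 30], [21, 10, 24]]
print([[round(v) for v in y] for y in train_codebook(training, initial)])
```

The resulting centroids agree with the slides' updated codebook up to rounding (the slides truncate the averages, e.g. 86/3 → 28).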

13 Codebook training (illustration)
Training vectors: 1 [25,10,24]; 2 [30,30,30]; 3 [11,91,11]; 4 [28,28,29]; 5 [20,81,11]; 6 [15,42,52]; 7 [24,11,24]; 8 [28,29,28]; 9 [25,12,25]; 10 [10,89,12]. Initial codebook: code 00 → [25,33,40]; 01 → [13,53,61]; 10 → [20,88,30]; 11 → [21,10,24].

14 Codebook training (illustration)
(Training vectors and codebook as on the previous slide.) Distortions for training vector 1, [25,10,24]: d([25,10,24],[25,33,40]) = 785; d([25,10,24],[13,53,61]) = 3362; d([25,10,24],[20,88,30]) = 6145; d([25,10,24],[21,10,24]) = 16.

15 Codebook training (illustration)
Vector 1 gives the minimum distortion (16) with code vector 11, so it is assigned to cell 11. Codebook: 00 → [25,33,40]; 01 → [13,53,61]; 10 → [20,88,30]; 11 (vector 1) → [21,10,24].

16 Codebook training (illustration)
With all training vectors assigned: 00 (vectors 2, 4, 8) → [25,33,40]; 01 (vector 6) → [13,53,61]; 10 (vectors 3, 5, 10) → [20,88,30]; 11 (vectors 1, 7, 9) → [21,10,24].

17 Codebook training (illustration)
(Assignments as on the previous slide.) Centroid of cell 00: ([30,30,30] + [28,28,29] + [28,29,28]) / 3 ≈ [28,29,29].

18 Codebook training (illustration)
Updated codebook (cell centroids): 00 (vectors 2, 4, 8) → [28,29,29]; 01 (vector 6) → [15,42,52]; 10 (vectors 3, 5, 10) → [13,87,11]; 11 (vectors 1, 7, 9) → [24,11,24].

19 Codebook training (illustration)
Final encoding: vectors 1, 7, 9 → code vector [24,11,24], code 11; vectors 2, 4, 8 → [28,29,29], code 00; vectors 3, 5, 10 → [13,87,11], code 10; vector 6 → [15,42,52], code 01.

20 VQ and image compression
A simple way of applying VQ to image compression is to decompose an image into a number of (say) 2 × 2 blocks. Each block then gives a 4-element vector. Instead of encoding the pixel values of a block, one trains a codebook and encodes a block by an index into the codebook. To train a codebook, a number of images of a similar nature are used; e.g., facial images are used to train a codebook for compressing facial images.

21 [Figure: sample 2 × 2 image blocks written as 4-element vectors, e.g., [154,154,154,147], [175,182,168,154], [189,168,168,168], [217,175,196,175], [175,154,175,168], [203,175,168,168]]

22 Image & video compression
JPEG: spatial redundancy removal in intra-frame coding. H.261 and MPEG: both spatial and temporal redundancy removal in intra-frame and inter-frame coding.

23 Sub-sampling techniques
Sub-sample to compress; interpolation techniques are used upon reconstruction of the original data. Sub-sampling results in information loss; however, the loss is acceptable by virtue of the physiological characteristics of human eyes. Chromatic sub-sampling: the human eye is more sensitive to changes in brightness than to color changes. Very often, RGB values are transformed to Y’CBCR values. The chroma components are then sub-sampled to reduce the data requirement.

24 Chromatic sub-sampling
4:2:2: sub-sample the color signals horizontally by a factor of 2 (CCIR 601 standard). 4:1:1: sub-sample horizontally by a factor of 4. 4:2:0: sub-sample in both dimensions by a factor of 2. 4:2:0 is often used in JPEG and MPEG.

25 Chromatic sub-sampling (notation)
In the notation (e.g., 4:2:2): the first digit is the luma horizontal sampling reference; the second digit is the chroma horizontal sampling; the third digit is either the same as the 2nd digit, or 0, indicating that CB and CR are additionally sub-sampled vertically by a factor of 2.

26 Example: a frame with pixel dimensions of 720 × 480:
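Assuming 4:2:0 sub-sampling for this frame size and 8 bits per sample, a quick arithmetic check of the data saving:

```python
# Sample counts for a 720 x 480 frame under 4:2:0: the luma plane is
# kept at full resolution, each chroma plane is halved in both
# dimensions.

w, h = 720, 480
luma_samples = w * h                       # Y' samples
chroma_samples = 2 * (w // 2) * (h // 2)   # CB + CR samples after 4:2:0
total = luma_samples + chroma_samples
print(luma_samples, chroma_samples, total / (3 * w * h))
# 345600 172800 0.5  (half the data of unsub-sampled 4:4:4)
```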

27 JPEG compression JPEG stands for “Joint Photographic Experts Group”.
JPEG is commonly used to refer to a standard for compressing and encoding continuous-tone still images. adjustable compression/quality 4 modes of operations: Sequential (line-by-line) (baseline implementation) Progressive (blur-to-clear) Lossless (pixel-for-pixel) Hierarchical (multiple resolutions)

28 JPEG (steps) 1. Preparation 2. Processing
[Diagram: uncompressed picture → preparation → processing → quantization → entropy encoding → compressed picture.] 1. Preparation includes analog-to-digital conversion. The image can be separated into Y’CBCR components to facilitate sub-sampling of the chrominance components. The image is segmented into 8 × 8 blocks. 2. Processing: sophisticated algorithms, such as the transformation from the spatial to the frequency domain using the DCT.

29 [Figure: sub-sampling/interpolation weights 1/4, 1/2, 1/2, 1/2, 1/4]

30 JPEG (steps) 3. Quantization
Map the real-number values from the previous step to integers. This process results in a loss of precision but achieves data compression. Quantization specifies the granularity of the mapping, allowing control of the precision carried in the compressed data. Different levels of quantization are applied to the luminance and chrominance components, exploiting the differing sensitivity of human perception.

31 JPEG (steps) 4. Entropy encoding
It compresses a sequential data stream without loss. Steps: a zig-zag scan to linearize the data; predictive encoding and RLE to encode the DC and AC components; finally, a Huffman scheme to encode the data. Arithmetic coding instead of Huffman is possible, but not required.

32 JPEG (schematic diagram)
[Schematic: the Y’, CB and CR components each pass through the DCT, quantization and entropy-encoding stages.]

33 Image preparation Each image consists of a number of components (e.g., RGB, Y’CBCR). Divide each component into 8 × 8 blocks. Each block is a “data unit” subject to DCT transformation. The values in a block are shifted from unsigned integers with range [0, 2^p − 1] to signed integers with range [−2^(p−1), 2^(p−1) − 1]; e.g., in 8-bit mode, the range [0,255] is shifted to [−128,127]. Baseline mode: 8-bit pixel depth; 12-bit is possible, but not all applications implement this mode.
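The preparation step can be sketched as follows (an illustrative helper, assuming the component is given as a list of pixel rows; the function name is ours):

```python
# Split one component into 8 x 8 blocks and level-shift the samples
# from [0, 2^p - 1] to the signed range [-2^(p-1), 2^(p-1) - 1].

def prepare_blocks(component, width, height, p=8):
    shift = 1 << (p - 1)                 # 128 in 8-bit mode
    blocks = []
    for by in range(0, height, 8):       # walk the blocks row by row
        for bx in range(0, width, 8):
            blocks.append([[component[y][x] - shift
                            for x in range(bx, bx + 8)]
                           for y in range(by, by + 8)])
    return blocks

gray = [[200] * 16 for _ in range(8)]    # a 16 x 8 all-gray component
blocks = prepare_blocks(gray, 16, 8)
print(len(blocks), blocks[0][0][0])      # 2 72  (200 - 128 = 72)
```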


35 DCT (Discrete Cosine Transform)
An 8 × 8 image block is a 2D function f(x,y) (0 ≤ x, y ≤ 7) in the spatial domain. [Figure: an 8 × 8 block of pixel values (231, 224, 217, 203, 189, 196, 210, 182, 175, 154, 140, 168, 161, 126, 119, 112, 105, 84, 98, 63, ...) plotted over the x and y axes.]

36 DCT (Discrete Cosine Transform)
We define 64 basis functions for frequency variables u, v (0 ≤ u, v ≤ 7) in a 2-dimensional space; the (u,v) basis function is cos((2x+1)uπ/16) · cos((2y+1)vπ/16).

37 DCT (Discrete Cosine Transform)
These are wave functions of successively increasing frequencies. (Imagine them as undulating surfaces with increasingly frequent ups and downs.) Given a 2D function (imagine it as a 2D surface), one can decompose it into a linear combination of these wave functions. So the DCT is a frequency (u,v coordinates) representation of a spatial (x,y coordinates) function.

38 A 1-D example

39 [Figure: some 2-D basis functions plotted over x and y: u0v0, u1v0, u0v1, u1v1, u2v2, u5v1, u6v3]

40 [Figure: some 2-D basis functions with quantized values, plotted over x and y]

41 DCT The 64 (8 × 8) DCT basis functions (top view) are:

42 DCT coefficients - example -
[Figure: an 8 × 8 block in x,y coordinates and its DCT coefficients after transformation in u,v coordinates.]


44 DCT From the original spatial function f(x,y), extract the frequency components by multiplying f(x,y) with these basis functions and summing over x and y: F(u,v) = (1/4) C(u) C(v) Σx Σy f(x,y) cos((2x+1)uπ/16) cos((2y+1)vπ/16), where C(0) = 1/√2 and C(k) = 1 for k > 0.

45 DCT The result is a function F(u,v) in the frequency domain: 64 (8 × 8) coefficients representing the 64 frequency components of the original image function. Of the 64 coefficients, F(0,0) is due to the basis function with u, v = 0, a flat wave function. F(0,0) is also known as the DC coefficient. The other coefficients are called the AC coefficients.

46 DCT The DC component determines the fundamental gray (color) intensity of the 8 × 8 pixels. The AC components add the intensity variation to the pixel values to give the original image function. A typical image consists of large regions of a single intensity and color; the DCT thus concentrates most of the signal in the lower spatial frequencies, and many of the high-frequency coefficients have very low values. Entropy encoding applied to the DCT output would normally achieve high data reduction.

47 IDCT The inverse DCT (IDCT) takes the 64 DCT coefficients and reconstructs a 64-point output image by summing the basis signals. The result is a summation of all the frequency components, yielding a reconstruction of the original image. (Imagine adding up the respective undulating surfaces to yield the original surface.)
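The DCT/IDCT pair can be sketched directly from the definitions (a plain, unoptimized Python implementation; real codecs use fast DCT algorithms):

```python
# 8 x 8 DCT-II and its inverse, written straight from the standard
# JPEG definitions.

import math

def C(k):
    return 1 / math.sqrt(2) if k == 0 else 1.0

def dct2(f):
    """F(u,v) = (1/4) C(u) C(v) sum_x sum_y f(x,y) cos(..) cos(..)."""
    return [[0.25 * C(u) * C(v) * sum(
                 f[y][x] * math.cos((2 * x + 1) * u * math.pi / 16)
                         * math.cos((2 * y + 1) * v * math.pi / 16)
                 for x in range(8) for y in range(8))
             for u in range(8)] for v in range(8)]

def idct2(F):
    """f(x,y) = (1/4) sum_u sum_v C(u) C(v) F(u,v) cos(..) cos(..)."""
    return [[0.25 * sum(
                 C(u) * C(v) * F[v][u]
                 * math.cos((2 * x + 1) * u * math.pi / 16)
                 * math.cos((2 * y + 1) * v * math.pi / 16)
                 for u in range(8) for v in range(8))
             for x in range(8)] for y in range(8)]

flat = [[80] * 8 for _ in range(8)]  # a constant block: only DC survives
F = dct2(flat)
print(round(F[0][0]))                # 640 (= 8 * 80); all ACs are ~0
```

A constant block illustrates the DC/AC split: F(0,0) carries the whole signal and every AC coefficient vanishes, and the IDCT recovers the block exactly.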


49 [Figure: the DCT coefficients computed for the “eye” block]

50 DCT A 1-D example to illustrate the decomposition and reconstruction.
[Figure: the 8 input samples 8, 16, 24, 32, 40, 48, 56, 64 are transformed by the DCT (coefficients such as 100, −52, −5, −2, 0.4), the small coefficients are truncated, and the IDCT reconstructs an approximation of the input.]

51 Quantization The 64 DCT coefficients are real numbers (i.e., not integers). These coefficients are quantized to throw away bits, and that is the main source of lossiness.

52 Quantization
Uniform quantization: DCT coefficients are divided by a constant N and the result is rounded; all DCT coefficients get equal treatment. Quantization tables: each of the 64 coefficients can be adjusted separately, so specific frequencies can be given more importance than others according to the characteristics of the original image.

53 Quantization Quantization tables
In JPEG, each F(u,v) is divided by a different quantizer step size Q(u,v) given in a quantization table, and the result is rounded: FQ(u,v) = round(F(u,v) / Q(u,v)).

54 Quantization The eye is most sensitive to low frequencies (upper left corner of the table), less sensitive to high frequencies (lower right corner). The JPEG standard defines 2 default quantization tables: one for luma, one for chroma. Quality factor: how would scaling the quantization numbers affect the image, say if I double them all? In most implementations, the quality factor is the scaling factor for the default quantization table.
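The quantization step itself is just an element-wise divide-and-round, and dequantization multiplies back. A sketch with made-up 2 × 2 values (not a real JPEG table):

```python
# Quantization: divide each coefficient by its table entry and round;
# dequantization multiplies back (the rounding loss stays).

def quantize(F, Q):
    return [[round(f / q) for f, q in zip(Fr, Qr)] for Fr, Qr in zip(F, Q)]

def dequantize(Fq, Q):
    return [[f * q for f, q in zip(Fr, Qr)] for Fr, Qr in zip(Fq, Q)]

F = [[640.0, -52.4], [12.1, 3.0]]   # illustrative coefficients
Q = [[16, 11], [12, 14]]            # illustrative step sizes
print(quantize(F, Q))               # [[40, -5], [1, 0]]
```

Note how the small high-frequency value 3.0 quantizes to 0: this is what later makes the run-length encoding effective.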

55 Zig-Zag scan This step linearizes the 8 × 8 block of DCT coefficients, mapping an 8 × 8 block to a 64-element stream. RLE and entropy encoding methods are then applied to the stream. Why zig-zag? It groups the coefficients from low to high frequencies, so that the zeros in the high frequencies are grouped together: consecutive zeros are effectively compressed using RLE, and the high frequencies can be truncated easily.

56 Zig-Zag scan
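The zig-zag order can be generated programmatically by walking the anti-diagonals (constant row + column) and alternating direction (a sketch; coordinates are (row, column)):

```python
# Generate the zig-zag scan order for an 8 x 8 block: traverse the
# anti-diagonals, alternating direction, so coefficients come out
# ordered from low to high spatial frequency.

def zigzag_order(n=8):
    order = []
    for s in range(2 * n - 1):                     # s = row + col
        diag = [(s - c, c) for c in range(n) if 0 <= s - c < n]
        order.extend(reversed(diag) if s % 2 else diag)
    return order

order = zigzag_order()
print(order[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```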

57 Entropy encoding DC component encoded using predictive encoding
DC coefficient determines the average color (or intensity) of the 8  8 block. Between adjacent blocks, the variation is fairly small. Encode the difference between the current DC coefficient and the one of the previous block. AC components encoded with RLE The 63-number stream has lots of zeros in it. Encode as (skip,value) pairs, where skip is the number of zeros and value is the next non-zero component. 39

58 Entropy encoding Convert the DCT coefficients after quantization into a compact binary sequence in 2 steps: forming an intermediate symbol sequence, then converting the sequence into binary using Huffman tables. In the intermediate symbol sequence, each AC coefficient is represented by a pair of symbols: Symbol-1 = (Runlength, Size) and Symbol-2 = (Amplitude).

59 AC encoding Runlength is the # of consecutive 0-valued AC coefficients preceding a nonzero AC coefficient. Runlength is in the range 0 to 15. Size is the # of bits used to encode the magnitude of Amplitude. Amplitude can use up to 10 bits. Amplitude is the amplitude of the nonzero AC coefficient in the range of [-1024,+1023]  10 bits.

60 AC encoding e.g., given the sequence ..., 0, 0, 0, 0, 0, 0, 476, ... → (6,9)(476) (2 symbols). If Runlength > 15, then Symbol-1 = (15,0), which stands for 16 zeros. e.g., what is the sequence represented by (15,0) (15,0) (7,4) (12)? (0,0) is the End-of-Block symbol: all remaining coefficients are 0's.
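The symbol formation above can be sketched as follows (an illustrative helper, not the full JPEG entropy coder; it stops at the Symbol-1/Symbol-2 pairs, before Huffman coding):

```python
# Form Symbol-1/Symbol-2 pairs for the AC coefficients: runs of zeros
# become (Runlength, Size) pairs, runs longer than 15 emit (15, 0),
# and a trailing run of zeros becomes the (0, 0) end-of-block symbol.

def size_of(v):
    return abs(v).bit_length()        # bits needed for the magnitude

def ac_symbols(ac):                   # ac: the 63 zig-zagged AC values
    symbols, run = [], 0
    for v in ac:
        if v == 0:
            run += 1
            continue
        while run > 15:               # (15, 0) stands for 16 zeros
            symbols.append(((15, 0), None))
            run -= 16
        symbols.append(((run, size_of(v)), v))
        run = 0
    if run:                           # all-zero tail: end of block
        symbols.append(((0, 0), None))
    return symbols

print(ac_symbols([0] * 6 + [476] + [0] * 56))
# [((6, 9), 476), ((0, 0), None)]
```

Running it on the second example in the slide (39 zeros, then 12, then zeros) yields (15,0) (15,0) (7,4)(12) followed by the end-of-block symbol.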

61 DC encoding Categorize DC values into Size (the number of bits needed to represent the value; Symbol-1) and Amplitude (the actual bits; Symbol-2). If the DC difference is 4, 3 bits are needed: encode Size as a Huffman symbol, followed by the actual 3 bits. Since DC values are differentially encoded, their range is [−2048, 2047].


63 JPEG example Nelson suggested the following program to generate the quantization table:

for (i = 0; i < n; i++)
    for (j = 0; j < n; j++)
        Q[i][j] = 1 + (1 + i + j) * quality;

The JPEG standard proposes Huffman encoding tables. One example (partial): cr = 64*8/98 = 5.22; nb = 98/64 = 1.53.

64 Compression measures Compression Ratio (CR):
CR = original data size / compressed data size; a higher CR means lower picture quality. Wallace suggested the measure Nb = the number of bits per pixel in the compressed image. An observation: 0.25–0.5 bits/pixel: moderate to good quality; 0.5–0.75 bits/pixel: good to very good quality; 0.75–1.5 bits/pixel: excellent quality; 1.5–2.0 bits/pixel: usually indistinguishable from the original.

65 Compression and picture quality
[Figure: original image; DC only, 0.19 bpp; DC + a few ACs, 0.43 bpp; DC + more ACs, 0.96 bpp]

66 Lossless mode of JPEG compression
A special case of JPEG in which there is no loss in the encoding process. In this mode, image processing and quantization use a predictive technique instead of transformation encoding. Neighboring pixels are taken as predictors, and the difference between the predicted and actual values is encoded using Huffman methods.

67 Lossless JPEG
[Diagram: Source Data → Predictor → Entropy Encoder → Compressed Data, with Table Specifications feeding the encoder.]

68 Lossless JPEG Normally, pixel values do not vary by much except at intensity (color) edges, so the differences have small values in most regions of the image and effective entropy compression is possible.

69 Lossless JPEG For each pixel, the predictor uses a linear combination of previously encoded neighbors. With A the left neighbor, B the neighbor above, and C the upper-left neighbor, the standard predictor functions are: P1 = A; P2 = B; P3 = C; P4 = A + B − C; P5 = A + (B − C)/2; P6 = B + (A − C)/2; P7 = (A + B)/2. A user can pick a predictor function.

70 Lossless JPEG 2-D predictors (4–7) usually do better than 1-D predictors (P0 is “no prediction”). The typical compression ratio achieved is about 2:1.
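The seven predictor functions can be written out as a small table (a sketch; A, B, C follow the usual convention of left, above, and upper-left neighbors, shown here with integer division):

```python
# The seven standard lossless-JPEG predictors: A = left neighbor,
# B = neighbor above, C = upper-left neighbor.

PREDICTORS = {
    1: lambda A, B, C: A,                 # 1-D, left
    2: lambda A, B, C: B,                 # 1-D, above
    3: lambda A, B, C: C,                 # 1-D, upper-left
    4: lambda A, B, C: A + B - C,         # 2-D
    5: lambda A, B, C: A + (B - C) // 2,  # 2-D
    6: lambda A, B, C: B + (A - C) // 2,  # 2-D
    7: lambda A, B, C: (A + B) // 2,      # 2-D
}

A, B, C = 100, 104, 96                    # made-up neighbor values
print([PREDICTORS[i](A, B, C) for i in range(1, 8)])
# [100, 104, 96, 108, 104, 106, 102]
```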

71 Sequential encoding In sequential encoding, the whole image is encoded and decoded in a single run. It allows decoding with immediate presentation, but in top-to-bottom sequence.

72 Progressive encoding Progressive mode encodes and reconstructs the image with a very rough representation, and refines it during successive steps. Also known as layered coding. Typically, most of the image features are recognizable after 5–10% of the data has been decompressed.

73 Successive refinement
2 ways to achieve successive refinement: Spectral selection: send the DC component for entropy encoding, then the first few ACs, then some more ACs, etc. Successive approximation: send all DCT coefficients in each run, but individual bits within a coefficient are processed in different runs; the most significant bits are encoded first, followed by the less significant bits.


75 Successive refinement
[Figure: original; 7 MSBs of DC, 0.15 bpp; + 5 MSBs of AC, 0.3 bpp; + 7 MSBs of AC, 0.8 bpp]

76 Hierarchical mode Down-sample by factors of 2 in both directions; e.g., reduce 640 × 480 to 320 × 240 to 160 × 120, etc. Repeat the following process recursively until the full-resolution image is compressed: initially, encode the smallest image; then at each level, decode and up-sample the smaller image, and encode the difference between the up-sampled and the original images.

77 Hierarchical mode
[Diagram: the original image is repeatedly down-sampled by 2 in each dimension; the smallest image is JPEG-encoded; at each larger resolution, the smaller level is decompressed and up-sampled, and the difference from the image at that resolution is JPEG-encoded.]

78 Hierarchical mode Since the original image is encoded at different resolutions, it requires more storage for the multiple resolutions. Advantage: the picture is immediately available at different resolutions, and scaling is cheap when the display system works only with a lower resolution.

79 Wavelet coding used in JPEG 2000
Consider a one-dimensional array of values: 101, 102, 103, 104, 105, 106, 107, 108. We can represent these values by pair-wise averages and differences: pair-wise averages: (101+102)/2, (103+104)/2, (105+106)/2, (107+108)/2; pair-wise diffs: (101−102)/2, (103−104)/2, (105−106)/2, (107−108)/2. Put these averages and differences into a sequence: 101.5, 103.5, 105.5, 107.5, −1/2, −1/2, −1/2, −1/2. Problem with DCT-based transforms: blocking artifacts at low bit rates. Up to a CR of about 25:1, JPEG performs better than the DWT; for higher CRs, JPEG degrades rapidly.

80 Wavelet transform Note that the original values can be reconstructed by the sums and differences. 101 ½ 103 ½ 105 ½ 107 ½ -1/2 -1/2 -1/2 -1/2 101 102 103 104 105 106 107 108 addition subtraction

81 Wavelet transform Note that if we replace the four –1/2’s by 0’s, the recovered sequence is not too far off from the original: Hence, quantization and RLE could be applied to effectively reduce the size of the sequence. 101 ½ 103 ½ 105 ½ 107 ½ 101 ½ 101 ½ 103 ½ 103 ½ 105 ½ 105 ½ 107 ½ 107 ½

82 Wavelet transform recursively apply the idea to the averages …
101, 102, 103, 104, 105, 106, 107, 108
101.5, 103.5, 105.5, 107.5, −1/2, −1/2, −1/2, −1/2
102.5, 106.5, −1, −1, −1/2, −1/2, −1/2, −1/2
104.5, −2, −1, −1, −1/2, −1/2, −1/2, −1/2

83 Wavelet transform recursively apply the idea to the averages …
101, 102, 103, 104, 105, 106, 107, 108
101.5, 103.5, 105.5, 107.5, −1/2, −1/2, −1/2, −1/2
102.5, 106.5, −1, −1, −1/2, −1/2, −1/2, −1/2
104.5, −2, −1, −1, −1/2, −1/2, −1/2, −1/2
2nd-level details: the average values of the first half and the second half

84 Wavelet transform recursively apply the idea to the averages …
101, 102, 103, 104, 105, 106, 107, 108
101.5, 103.5, 105.5, 107.5, −1/2, −1/2, −1/2, −1/2
102.5, 106.5, −1, −1, −1/2, −1/2, −1/2, −1/2
104.5, −2, −1, −1, −1/2, −1/2, −1/2, −1/2
3rd-level details: the average values of the first pair, second pair, third pair, and final pair

85 Wavelet transform recursively apply the idea to the averages …
101, 102, 103, 104, 105, 106, 107, 108
101.5, 103.5, 105.5, 107.5, −1/2, −1/2, −1/2, −1/2
102.5, 106.5, −1, −1, −1/2, −1/2, −1/2, −1/2
104.5, −2, −1, −1, −1/2, −1/2, −1/2, −1/2
full details: all values
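The recursion above can be sketched as a short Haar-style transform on a plain list (an illustrative implementation; the function names are ours):

```python
# The slides' averages-and-differences transform, applied recursively
# to the averages.

def haar_step(seq):
    avgs = [(a + b) / 2 for a, b in zip(seq[0::2], seq[1::2])]
    diffs = [(a - b) / 2 for a, b in zip(seq[0::2], seq[1::2])]
    return avgs, diffs

def haar(seq):
    details = []
    while len(seq) > 1:
        seq, diffs = haar_step(seq)
        details = diffs + details     # finer details sit further right
    return seq + details

print(haar([101, 102, 103, 104, 105, 106, 107, 108]))
# [104.5, -2.0, -1.0, -1.0, -0.5, -0.5, -0.5, -0.5]
```

The output reproduces the last row of the worked example: the overall average, then coarse-to-fine detail coefficients.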

86 apply wavelet transform to each row of pixels

87 [Figure: after the row transform, the left half of each row holds the averages and the right half the diffs]

88 apply wavelet transform to each column of pixels


91 [Figure: after both the row and column transforms, the more important data (averages) gather in one corner and the less important data (details) elsewhere]

92 JPEG vs. JPEG 2000 [Figure: original; JPEG at 0.27 bpp; JPEG 2000 at 0.27 bpp. Images by Christopher M. Brislawn.]

93 JPEG vs. JPEG 2000 [Figure: original; JPEG at 1 bpp; JPEG 2000 at 1 bpp]

94 JPEG vs. JPEG 2000 [Figure: original; JPEG at 0.5 bpp; JPEG 2000 at 0.5 bpp]

