Lempel–Ziv–Welch (LZW): a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch.


1 Lempel–Ziv–Welch (LZW) is a universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is designed to be fast to implement but is not usually optimal because it performs only limited analysis of the data.

2 Fixed Length: LZW Coding
–Error-free (lossless) compression technique
–Removes inter-pixel redundancy
–Requires no a priori knowledge of the probability distribution of pixels
–Assigns fixed-length code words to variable-length sequences
–Patented algorithm (US 4,558,302)
–Included in the GIF, TIFF and PDF file formats

3 The scenario described in Welch's 1984 paper [1] encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent 1-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage in compression, input bytes are gathered into a sequence until the next character would make a sequence for which there is no code yet in the dictionary. The code for the sequence (without that character) is emitted, and a new code (for the sequence with that character) is added to the dictionary.
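
A minimal Python sketch of the encoding loop described above, assuming byte input; codes 0–255 are the single-byte sequences and new codes start at 256, capped at 4095 to match the 12-bit scheme. Function and variable names are illustrative, not taken from the paper.

```python
def lzw_encode(data: bytes, max_code: int = 4095) -> list[int]:
    """Encode a byte string with LZW; returns a list of integer codes."""
    # Codes 0..255 are the single-byte sequences.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    codes = []
    current = b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate                   # keep growing the sequence
        else:
            codes.append(dictionary[current])     # emit code for the known prefix
            if next_code <= max_code:             # add the new sequence (12-bit cap)
                dictionary[candidate] = next_code
                next_code += 1
            current = bytes([byte])
    if current:
        codes.append(dictionary[current])         # flush the last sequence
    return codes

# Example: repeated patterns compress into fewer codes than input bytes.
print(lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT"))
```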

4 LZW Coding: Coding Technique
–A codebook or dictionary has to be constructed.
–For an 8-bit monochrome image, the first 256 entries are assigned to the gray levels 0, 1, 2, …, 255.
–As the encoder examines image pixels, gray-level sequences that are not in the dictionary are assigned to a new entry.
–For instance, the sequence 39-39 can be assigned to entry 256, the address following the locations reserved for gray levels 0 to 255.

5 LZW Coding Example: consider the 4 × 4, 8-bit image shown on the slide together with the initial dictionary (columns: Location, Entry), whose locations 0–255 hold the gray levels 0–255.

6 LZW Coding: Is 39 in the dictionary? Yes. What about the sequence 39-39? No. Then add 39-39 as dictionary entry 256 and output the code of the last recognized symbol: 39.

7 Exercise: code the following image using LZW codes. How can we decode the compressed sequence to obtain the original image?
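
To answer the decoding question: the decoder can rebuild the same dictionary while it reads the codes. A hedged sketch, paired with the `lzw_encode` example above; the only subtlety is a code that refers to the entry currently being defined, handled by the `prev + prev[:1]` case.

```python
def lzw_decode(codes: list[int], max_code: int = 4095) -> bytes:
    """Decode codes produced by lzw_encode, rebuilding the dictionary on the fly."""
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = dictionary[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in dictionary:
            entry = dictionary[code]
        elif code == next_code:
            # Special case: the code refers to the entry being defined right now.
            entry = prev + prev[:1]
        else:
            raise ValueError("corrupt LZW stream")
        out.append(entry)
        if next_code <= max_code:
            dictionary[next_code] = prev + entry[:1]   # mirror the encoder's new entry
            next_code += 1
        prev = entry
    return b"".join(out)

data = b"TOBEORNOTTOBEORTOBEORNOT"
assert lzw_decode(lzw_encode(data)) == data            # round-trip check
```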

8 LZW Coding

9 Vector Quantization: Definitions
–X = {x_1, x_2, …, x_N} is the set of input vectors in d-dimensional space.
–c = {c_1, c_2, …, c_M} is the set of code vectors in the same space.
–P is a partition of the space into M code cells (clusters) C = {C_1, C_2, …, C_M}.

10 Structure of vector quantizer: code cells (clusters) are non-overlapping regions that completely cover the input space. A code vector c_j is assigned to each code cell (cluster) C_j. Vector quantization maps each input vector x_i in code cell (cluster) C_j to that code vector: x_i → c_j. The set of code vectors is the codebook.

12 Encoding/Decoding with the codebook: the encoder maps an input vector x_i to the index j of the code cell containing it (x_i ∈ C_j); the decoder maps the index j back to the code vector c_j, which serves as the reconstruction of x_i. A distortion measure quantifies the error of this mapping.

13 Distortion with the L2 measure: the distortion (quantization error) D for a cluster C_j, the distortion D for the data set X, and the optimal partition are defined below.
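
A reconstruction of these definitions in standard squared-error form (the slide's own equations are not reproduced in the transcript, so take these as the usual L2 definitions rather than a verbatim copy):

```latex
% Distortion (quantization error) of one code cell C_j with code vector c_j:
\[ D(C_j) \;=\; \sum_{x_i \in C_j} \lVert x_i - c_j \rVert_2^{\,2} \]
% Total distortion for the data set X (sometimes normalized by N to give an MSE):
\[ D(X) \;=\; \sum_{j=1}^{M} D(C_j) \;=\; \sum_{j=1}^{M} \sum_{x_i \in C_j} \lVert x_i - c_j \rVert_2^{\,2} \]
% Optimal partition and codebook: the pair minimizing the total distortion.
\[ (P^{*}, C^{*}) \;=\; \arg\min_{P,\;C}\; D(X) \]
```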

14 The optimal partition into clusters (Voronoi cells):
1. Nearest-neighbor condition: each point is assigned to the nearest code vector.
2. Centroid condition: the code vector c_j is the centroid of its cluster C_j.

15 Codebook generation
–Random codebook
–GLA (LBG, k-means, c-means) algorithm
–Merge approach: Pairwise Nearest Neighbor method
–Split approach
–Split-and-Merge approach
–Stochastic relaxation, simulated annealing
–Genetic algorithm
–Structured quantizers
–Lattice quantizers

16 GLA: Generalized Lloyd algorithm (other names: LBG, k-means, or c-means algorithm)

GLA(X, P, C): returns (P, C)
REPEAT
  FOR i := 1 TO N DO            // for each point x_i
    j ← Find_Nearest_Centroid(x_i, C)
  FOR j := 1 TO M DO            // for each cluster C_j
    c_j ← Calculate_Centroid(X, C_j)
UNTIL no improvement.
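
A compact NumPy version of the GLA loop above, offered as a sketch: the name `gla`, the random codebook initialization, and the relative-improvement stopping test are my choices for the slide's "UNTIL no improvement".

```python
import numpy as np

def gla(X: np.ndarray, M: int, tol: float = 1e-6, seed: int = 0):
    """Generalized Lloyd / k-means sketch: returns (labels, codebook)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=M, replace=False)].astype(float)  # random initial codebook
    prev_D = np.inf
    while True:
        # Partition step: assign each x_i to its nearest code vector.
        dists = np.linalg.norm(X[:, None, :] - C[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Codebook step: move each code vector to the centroid of its cell.
        for j in range(M):
            members = X[labels == j]
            if len(members):                      # leave empty cells unchanged
                C[j] = members.mean(axis=0)
        D = (dists[np.arange(len(X)), labels] ** 2).sum()   # total squared distortion
        if prev_D - D <= tol * max(D, 1.0):                 # "UNTIL no improvement"
            return labels, C
        prev_D = D

# Example: quantize 200 random 2-D points with M = 4 code vectors.
X = np.random.default_rng(1).normal(size=(200, 2))
labels, codebook = gla(X, M=4)
```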

17 Complexity of GLA
1) For each point x_i, find the nearest centroid c_j: O(NM) operations.
2) For each cluster C_j, calculate the centroid c_j: O(N) operations.
In total: O(kMN), where k is the number of iterations.

Data format: use the (d+1)-th coordinate x_i,d+1 to store the number n_i of the cluster to which the point x_i belongs (x_i ∈ C_{n_i}):
x_1: x_1,1, x_1,2, …, x_1,d, x_1,d+1 = n_1
x_2: x_2,1, x_2,2, …, x_2,d, x_2,d+1 = n_2
…
x_N: x_N,1, x_N,2, …, x_N,d, x_N,d+1 = n_N

18 Wavelet transform: the mother wavelet function. A set of wavelet functions is obtained from one mother wavelet ψ(t) by translation (shift τ) and dilation (scaling s); the standard family is ψ_{s,τ}(t) = ψ((t − τ)/s)/√s.

19 Scaling (stretching or compressing): examples at s = 1, s = 0.5, and s = 0.25.

20 Translation (shift)

21 Examples of mother wavelets

22 Discrete Wavelet Transform: Subband Coding
–The spectrum of the input data is decomposed into a set of band-limited components called subbands.
–Ideally, the subbands can be reassembled to reconstruct the original spectrum without any error.
–The input signal is filtered into lowpass and highpass components through analysis filters.
–The human perception system has different sensitivity to different frequency bands: the human eye is less sensitive to high-frequency color components, and the human ear is less sensitive to very low frequencies (below roughly 20 Hz) and to frequencies above 20 kHz.

23 Subband Transform: separate the high frequencies and the low frequencies by subband decomposition.

24 Subband Transform
–Filter each row and downsample the filter output to obtain two N × M/2 images.
–Filter each column and downsample the filter output to obtain four N/2 × M/2 images.

25 Haar wavelet transform: average → resolution (approximation); difference → detail. Example for one dimension follows.

26 Haar wavelet transform example: data = (5 7 6 5 3 4 6 9)
–averages: (5+7)/2, (6+5)/2, (3+4)/2, (6+9)/2
–detail coefficients: (5−7)/2, (6−5)/2, (3−4)/2, (6−9)/2
n′ = (6 5.5 3.5 7.5 | −1 0.5 −0.5 −1.5)
n′′ = (23/4 22/4 | 1/4 −2 −1 0.5 −0.5 −1.5)
n′′′ = (45/8 | 1/8 1/4 −2 −1 0.5 −0.5 −1.5)
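
A short sketch of this repeated averaging/differencing pass, assuming a power-of-two signal length; run on the data above it reproduces the numbers on the slide, ending with the overall average 45/8 = 5.625.

```python
def haar_1d(data):
    """Full 1-D Haar transform: repeatedly average/difference the low band."""
    out = [float(x) for x in data]
    n = len(out)
    while n > 1:
        avg = [(out[2 * i] + out[2 * i + 1]) / 2 for i in range(n // 2)]   # resolution
        det = [(out[2 * i] - out[2 * i + 1]) / 2 for i in range(n // 2)]   # detail
        out[:n] = avg + det      # earlier detail coefficients stay in place
        n //= 2
    return out

# First pass gives (6, 5.5, 3.5, 7.5 | -1, 0.5, -0.5, -1.5);
# the final result is [5.625, 0.125, 0.25, -2, -1, 0.5, -0.5, -1.5], i.e. (45/8 | 1/8 1/4 ...).
print(haar_1d([5, 7, 6, 5, 3, 4, 6, 9]))
```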

27 Haar wavelet transform

28 Subband Transform: the standard image wavelet transform vs. the pyramid image wavelet transform.

29 Subband Transform

30 Wavelet Transform and Filter Banks: h0(n) is the scaling function, a low-pass filter (LPF); h1(n) is the wavelet function, a high-pass filter (HPF); ↓2 denotes subsampling (decimation).
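
A sketch of one analysis stage, using orthonormal Haar filters as a concrete (assumed) choice of h0(n) and h1(n); the outputs are the pairwise averages and differences of the earlier Haar example, up to the √2 normalization and a sign convention.

```python
import numpy as np

def analysis_stage(x, h0, h1):
    """One filter-bank stage: filter with h0/h1, then downsample by 2."""
    low = np.convolve(x, h0, mode="full")[1::2]    # approximation (LPF + down-sampling by 2)
    high = np.convolve(x, h1, mode="full")[1::2]   # detail        (HPF + down-sampling by 2)
    return low, high

# Orthonormal Haar filters.
h0 = np.array([1.0, 1.0]) / np.sqrt(2)     # scaling function -> low pass
h1 = np.array([1.0, -1.0]) / np.sqrt(2)    # wavelet function -> high pass

x = np.array([5, 7, 6, 5, 3, 4, 6, 9], dtype=float)
low, high = analysis_stage(x, h0, h1)
print(low)    # sqrt(2) * pairwise averages: (5+7)/sqrt(2), (6+5)/sqrt(2), ...
print(high)   # proportional to pairwise differences (sign/scale differ from the /2 convention)
```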

31 2-D Wavelet transform: horizontal filtering followed by vertical filtering.

32 Subband Transform

33 2-D wavelet transform: subband layout (figure) with subbands labelled HH1, LH2, HL1, HL2, LH1, HH2, HH3, HH4, LL3.

34 Wavelet Coding

35 Wavelet Transform: put a pixel in each quadrant → no size change.

36 Wavelet Transform: now let
–a = (x1+x2+x3+x4)/4
–b = (x1+x2−x3−x4)/4
–c = (x1+x3−x2−x4)/4
–d = (x1+x4−x2−x3)/4
and place the outputs a, b, c, d one in each quadrant.
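
A sketch of this 2 × 2 block transform applied to a whole image, assuming x1..x4 are the top-left, top-right, bottom-left, and bottom-right pixels of each block and that a, b, c, d are gathered into the four quadrants (the "no size change" layout of the previous slide); the quadrant assignment is my assumption.

```python
import numpy as np

def block_wavelet(img: np.ndarray) -> np.ndarray:
    """One level of the 2x2 transform a,b,c,d with quadrant packing."""
    x1 = img[0::2, 0::2]               # top-left pixel of each 2x2 block
    x2 = img[0::2, 1::2]               # top-right
    x3 = img[1::2, 0::2]               # bottom-left
    x4 = img[1::2, 1::2]               # bottom-right
    a = (x1 + x2 + x3 + x4) / 4        # block average
    b = (x1 + x2 - x3 - x4) / 4        # top-minus-bottom detail
    c = (x1 + x3 - x2 - x4) / 4        # left-minus-right detail
    d = (x1 + x4 - x2 - x3) / 4        # diagonal detail
    out = np.empty_like(img, dtype=float)
    h, w = a.shape
    out[:h, :w] = a                    # one output pixel per quadrant: no size change
    out[:h, w:] = c
    out[h:, :w] = b
    out[h:, w:] = d
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
print(block_wavelet(img))
```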

37

38

39

40 2-D Wavelet Transform via Separable Filters (from the MATLAB Wavelet Toolbox documentation)

41 2-D Example

42 Wavelet Transform

43 JPEG 2000 Image Compression Standard

44 JPEG 2000 is a new still image compression standard, a "one-for-all" image codec:
–Different image types: binary, grey-scale, color, multi-component
–Different applications: natural images, scientific, medical, remote sensing, text, rendered graphics
–Different imaging models: client/server, consumer electronics, image library archival, limited buffer and resources

45 History
–Call for Contributions in 1996
–The 1st Committee Draft (CD) in Dec. 1999
–Final Committee Draft (FCD) in March 2000
–Accepted as Draft International Standard in Aug. 2000
–Published as ISO Standard in Jan. 2002

46 Key components
–Transform: wavelet, wavelet packet, wavelet in tiles
–Quantization: scalar
–Entropy coding (EBCOT): code once, truncate anywhere; rate-distortion optimization; context modeling; optimized coding order

47 Key components: visual weighting, masking, region of interest (ROI), lossless color transform, error resilience. The codestream obtained after compression of an image with JPEG 2000 is scalable in nature, meaning that it can be decoded in a number of ways; for instance, by truncating the codestream at any point, one may obtain a representation of the image at a lower resolution or signal-to-noise ratio.

48 JPEG 2000: A Wavelet-Based New Standard
Targets and features:
–Excellent low-bit-rate performance without sacrificing performance at higher bit rates
–Progressive decoding, from lossy to lossless
For details:
–David Taubman: "High Performance Scalable Image Compression with EBCOT", IEEE Trans. on Image Processing, vol. 9(7), July 2000.
–JPEG 2000 tutorial in IEEE Signal Processing Magazine, Sept. 2001
–Taubman's book on JPEG 2000 (on library reserve)

49 Superior compression performance: at high bit rates, where artifacts become nearly imperceptible, JPEG 2000 has a small machine-measured fidelity advantage over JPEG. At lower bit rates (e.g., less than 0.25 bits/pixel for grayscale images), JPEG 2000 has a much more significant advantage over certain modes of JPEG: artifacts are less visible and there is almost no blocking Multiple resolution representation: JPEG 2000 decomposes the image into a multiple resolution representation in the course of its compression process. This representation can be put to use for other image presentation purposes beyond compression as such.

50 Progressive transmission by pixel and resolution accuracy, commonly referred to as progressive decoding and signal-to-noise ratio (SNR) scalability: JPEG 2000 provides efficient code-stream organizations which are progressive by pixel accuracy and by image resolution (or by image size). This way, after a smaller part of the whole file has been received, the viewer can see a lower quality version of the final picture. The quality then improves progressively through downloading more data bits from the source.

51

52

53 2-D wavelet transform
Original pixels: 128, 129, 125, 64, 65, …
Transform coefficients: 4123, −12.4, −96.7, 4.5, …

54 Quantization of wavelet coefficients
Transform coefficients: 4123, −12.4, −96.7, 4.5, …
Quantized coefficients (Q = 64): 64, 0, −1, 0, …
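
A minimal sketch of a uniform scalar quantizer with a deadzone (truncation toward zero), which is one way to reproduce the slide's numbers for Q = 64; the midpoint reconstruction rule in `dequantize` is an assumption, not taken from the slide.

```python
import numpy as np

def quantize(coeffs, Q):
    """Uniform deadzone quantizer: truncate |c|/Q toward zero, keep the sign."""
    coeffs = np.asarray(coeffs, dtype=float)
    return (np.sign(coeffs) * np.floor(np.abs(coeffs) / Q)).astype(int)

def dequantize(indices, Q):
    """Coarse decoder-side reconstruction (midpoint of each nonzero bin)."""
    indices = np.asarray(indices, dtype=float)
    return np.sign(indices) * (np.abs(indices) + 0.5) * Q * (indices != 0)

coeffs = [4123, -12.4, -96.7, 4.5]
q = quantize(coeffs, Q=64)
print(q)                      # [64, 0, -1, 0], as on the slide
print(dequantize(q, Q=64))    # [4128, 0, -96, 0]: coarse reconstruction
```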

55 Entropy coding
Quantized coefficients (Q = 64): 64, 0, −1, 0, … → coded bitstream

56 EBCOT: Embedded Block Coding with Optimized Truncation. Key features:
–Low memory requirement in coding and decoding
–Easy rate control
–High compression performance
–Region of interest (ROI) access
–Error resilience
–Modest complexity

57 Block structure in EBCOT: encode each block separately and record the bitstream of each block. The block size is 64 × 64.

58 Progressive encoding

59 ROI: Region of interest. Scale down the coefficients outside the ROI so that they sit in lower bit-planes; the ROI bits are then decoded or refined before the rest of the image.

60 ROI: Region of interest
Sequence-based mode:
–ROI coefficients are coded as independent sequences
–Allows random access to the ROI without fully decoding
–Can specify the exact quality/bitrate for the ROI and the background (BG)
Scaling-based mode (see the sketch below):
–Scale the ROI mask coefficients up (the decoder scales them down)
–During encoding, the ROI mask coefficients are found significant at early stages of the coding
–The ROI is always coded with better quality than the BG
–Cannot specify separate rates for the BG and the ROI
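
A hedged sketch of the scaling-based idea: shift quantized ROI coefficients up by a fixed number of bit-planes so an embedded bit-plane coder finds them significant first, and let the decoder undo the shift for any coefficient large enough to be an ROI sample. This mirrors the mechanism listed above, not the exact JPEG 2000 Maxshift procedure; all names are illustrative.

```python
import numpy as np

def scale_roi(q_coeffs: np.ndarray, roi_mask: np.ndarray, shift: int) -> np.ndarray:
    """Encoder side: lift ROI coefficients into higher bit-planes."""
    out = q_coeffs.astype(np.int64)
    out[roi_mask] <<= shift                  # ROI bits now dominate the early bit-planes
    return out

def unscale_roi(coded: np.ndarray, shift: int) -> np.ndarray:
    """Decoder side: coefficients at or above 2**shift must belong to the ROI."""
    out = coded.copy()
    roi = np.abs(out) >= (1 << shift)
    out[roi] >>= shift
    return out

coeffs = np.array([[3, -1], [12, 0]])
mask = np.array([[True, False], [True, False]])
shifted = scale_roi(coeffs, mask, shift=8)
print(shifted)                  # ROI values scaled by 256
print(unscale_roi(shifted, 8))  # original coefficients recovered
```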

61 Tiling: Image → Component → Tile → Subband → Code-Block → Bit-Planes

62 JPEG 2000 vs. JPEG: DCT (JPEG) vs. wavelet transform (JPEG 2000)

63 JPEG 2000 vs. JPEG: quantization (JPEG vs. JPEG 2000, figure)

64 JPEG 2000 vs. JPEG at a bitrate of 0.3 bpp: JPEG MSE = 150, PSNR = 26.2 dB; JPEG 2000 MSE = 73, PSNR = 29.5 dB.

65 JPEG 2000 vs. JPEG at a bitrate of 0.2 bpp: JPEG MSE = 320, PSNR = 23.1 dB; JPEG 2000 MSE = 113, PSNR = 27.6 dB.
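
The PSNR figures on these two slides follow from the MSE values for 8-bit images via PSNR = 10·log10(255²/MSE); a quick check:

```python
import math

def psnr(mse: float, peak: int = 255) -> float:
    """Peak signal-to-noise ratio in dB for a given mean squared error."""
    return 10 * math.log10(peak ** 2 / mse)

for mse in (150, 73, 320, 113):
    print(f"MSE={mse:4d}  PSNR={psnr(mse):.1f} dB")
# Prints about 26.4, 29.5, 23.1 and 27.6 dB, matching the slide values
# (the first to within rounding of the reported MSE).
```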

66 Examples: JPEG 2000 vs. JPEG

67

68

Sensor characteristics:
–17 Mbps data rate through 1994, 20.4 Mbps from 1995 to 2004, 16-bit from 2005 onward
–10-bit data encoding through 1994, 12-bit from 1995
–Silicon (Si) detectors for the visible range, indium gallium arsenide (InGaAs) for the NIR, and indium antimonide (InSb) detectors for the SWIR
–"Whisk broom" scanning
–12 Hz scanning rate
–Liquid nitrogen (LN2) cooled detectors
–10 nm nominal channel bandwidth, calibrated to within 1 nm
–34 degrees total field of view (full 677 samples)
–1 milliradian instantaneous field of view (IFOV, one sample), calibrated to within 0.1 mrad
–76 GB hard disk recording medium

70

71

72

73

74

75

76 The End

