Lempel–Ziv–Welch (LZW)


6 Lempel–Ziv–Welch (LZW)
A universal lossless data compression algorithm created by Abraham Lempel, Jacob Ziv, and Terry Welch. It was published by Welch in 1984 as an improved implementation of the LZ78 algorithm published by Lempel and Ziv in 1978. The algorithm is designed to be fast to implement, but it is not usually optimal because it performs only limited analysis of the data.

7 Fixed Length: LZW Coding
An error-free compression technique that removes inter-pixel redundancy. It requires no a priori knowledge of the probability distribution of pixels, and it assigns fixed-length code words to variable-length sequences. The algorithm was patented (US Patent 4,558,302) and is included in the GIF, TIFF, and PDF file formats.

8 The scenario described in Welch's 1984 paper[1] encodes sequences of 8-bit data as fixed-length 12-bit codes. The codes from 0 to 255 represent one-character sequences consisting of the corresponding 8-bit character, and the codes 256 through 4095 are created in a dictionary for sequences encountered in the data as it is encoded. At each stage of compression, input bytes are gathered into a sequence until the next character would make a sequence for which there is no code yet in the dictionary. The code for the sequence (without that character) is emitted, and a new code (for the sequence with that character) is added to the dictionary.
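The gather-emit-extend loop described above can be sketched in Python. This is a minimal sketch of the scheme, not the paper's implementation; the function and variable names are illustrative:

```python
def lzw_encode(data):
    """Encode a byte sequence with fixed-width LZW codes (12-bit table)."""
    # Codes 0-255 are the single-byte sequences.
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    seq = b""
    out = []
    for byte in data:
        candidate = seq + bytes([byte])
        if candidate in table:
            # Keep extending the current sequence.
            seq = candidate
        else:
            # Emit the code for the longest known sequence...
            out.append(table[seq])
            # ...and add the extended sequence to the dictionary, if room.
            if next_code < 4096:
                table[candidate] = next_code
                next_code += 1
            seq = bytes([byte])
    if seq:
        out.append(table[seq])
    return out
```

For example, `lzw_encode(b"ABABABA")` yields `[65, 66, 256, 258]`: the two literal bytes, then the dictionary codes for "AB" and "ABA".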

9 LZW Coding Technique
A codebook or dictionary has to be constructed. For an 8-bit monochrome image, the first 256 entries are assigned to the gray levels 0, 1, 2, ..., 255. As the encoder examines image pixels, gray-level sequences that are not in the dictionary are assigned to new entries. For instance, the sequence 39-39 can be assigned to entry 256, the address following the locations reserved for gray levels 0 to 255.

10 LZW Coding Example
Consider the following 4 x 4, 8-bit image. Initial dictionary: location 0 holds entry 0, location 1 holds entry 1, and so on through location 255.

11 LZW Coding
Is 39 in the dictionary? Yes. What about 39-39? No. Then add 39-39 at dictionary location 256 and output the last recognized symbol, 39.

12 Workshop
Code the following image using LZW codes: 39 39 126 126. How can we decode the compressed sequence to obtain the original image? Software Research
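One answer to the decoding question: the decoder can rebuild the encoder's dictionary on the fly, staying one step behind it. A hedged sketch (the special case handles a code that refers to the dictionary entry currently being defined):

```python
def lzw_decode(codes):
    """Decode fixed-width LZW codes back to bytes (mirrors the encoder)."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        if code in table:
            entry = table[code]
        elif code == next_code:
            # Code not yet in the table: it must be prev + prev's first byte.
            entry = prev + prev[:1]
        else:
            raise ValueError("bad LZW code")
        out.append(entry)
        # Add prev + first byte of entry, exactly as the encoder did.
        if next_code < 4096:
            table[next_code] = prev + entry[:1]
            next_code += 1
        prev = entry
    return b"".join(out)
```

Running `lzw_decode([65, 66, 256, 258])` recovers `b"ABABABA"`, so no dictionary needs to be transmitted alongside the codes.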

13 LZW Coding

20 JPEG standard The name "JPEG" stands for Joint Photographic Experts Group, the name of the committee that created the JPEG standard as well as other standards. The group was organized in 1986,[4] and issued the first JPEG standard in 1992, which was approved in September 1992 as ITU-T Recommendation T.81[5] and in 1994 as ISO/IEC 10918-1. The JPEG standard specifies the codec, which defines how an image is compressed into a stream of bytes and decompressed back into an image, but not the file format used to contain that stream.[6] The Exif and JFIF standards define the commonly used formats for interchange of JPEG-compressed images.

21 JPEG standard The JPEG compression algorithm is at its best on photographs and paintings of realistic scenes with smooth variations of tone and color. For web usage, where the amount of data used for an image is important, JPEG is very popular. JPEG/Exif is also the most common format saved by digital cameras. JPEG is not as well suited for line drawings and other textual or iconic graphics, where the sharp contrasts between adjacent pixels can cause noticeable artifacts. Such images are better saved in a lossless graphics format such as TIFF, GIF, PNG, or a raw image format. The JPEG standard actually includes a lossless coding mode, but that mode is not supported in most products. Because JPEG is typically used as a lossy compression method, which somewhat reduces image fidelity, it should not be used in scenarios where exact reproduction of the data is required (such as some scientific and medical imaging applications and certain technical image processing work).


23 Introduction
JPEG defines four modes of operation:
1. Sequential lossless mode: compress the image in a single scan; the decoded image is an exact replica of the original image.
2. Sequential DCT-based mode: compress the image in a single scan using a DCT-based lossy compression technique. As a result, the decoded image is not an exact replica but an approximation of the original image.
3. Progressive DCT-based mode: compress the image in multiple scans and also decompress it in multiple scans, with each successive scan producing a better-quality image.
4. Hierarchical mode: compress the image at multiple resolutions for display on different devices.

24 Introduction
Modes 2, 3, and 4 are lossy; mode 1 uses predictive coding with no quantization and is hence lossless. The sequential DCT-based JPEG algorithm in its simplest form is called the baseline JPEG algorithm, which uses Huffman coding for entropy encoding; the other form uses arithmetic coding for entropy encoding.

25 The JPEG Lossless Coding Algorithm
The value of the pixel at location X is predicted using one or more of its three neighbors, as shown in Fig. (a). The prediction error (prediction residual) is then coded using Huffman coding or arithmetic coding. Fig. (a): 3-pixel neighborhood for pixel X. Fig. (b): encoder diagram for lossless mode.

26 The JPEG Lossless Coding Algorithm
There are eight possible prediction options. Option 0 is available only for JPEG compression in hierarchical mode; options 1-3 are one-dimensional predictors and options 4-7 are two-dimensional predictors. The chosen predictor function is recorded in the header of the compressed file.
Table-1: Prediction functions for lossless JPEG
Option 0: no prediction (differential coding)
Option 1: Xp = A (1-D horizontal prediction)
Option 2: Xp = B (1-D vertical prediction)
Option 3: Xp = C (1-D diagonal prediction)
Option 4: Xp = A + B - C (2-D prediction)
Option 5: Xp = A + (B - C)/2 (2-D prediction)
Option 6: Xp = B + (A - C)/2 (2-D prediction)
Option 7: Xp = (A + B)/2 (2-D prediction)
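The predictor table translates directly into code. A sketch, with A the left neighbor, B the neighbor above, and C the neighbor above-left; integer floor division stands in for the standard's shift-based arithmetic:

```python
def predict(a, b, c, option):
    """Lossless-JPEG predicted value Xp for pixel X (options 1-7)."""
    if option == 1:
        return a                  # 1-D horizontal
    if option == 2:
        return b                  # 1-D vertical
    if option == 3:
        return c                  # 1-D diagonal
    if option == 4:
        return a + b - c          # 2-D
    if option == 5:
        return a + (b - c) // 2   # 2-D
    if option == 6:
        return b + (a - c) // 2   # 2-D
    if option == 7:
        return (a + b) // 2       # 2-D
    raise ValueError("option 0 (no prediction) applies to hierarchical mode only")
```

The encoder then codes the residual `x - predict(a, b, c, option)` with Huffman or arithmetic coding; since the decoder sees the same neighbors, it can invert this exactly.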

29 Discrete Cosine Transform (2/2)
[Figure: the 8-by-8 DCT basis functions, indexed by frequencies u and v.]
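The basis images in the figure come from the 2-D DCT-II. A direct, unoptimized evaluation from the definition, for illustration only (real codecs use fast factorizations):

```python
import math

def dct2_8x8(block):
    """2-D DCT-II of an 8x8 block, the transform behind baseline JPEG."""
    N = 8

    def c(k):
        # Normalization factor making the basis orthonormal.
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)

    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out
```

A constant block maps entirely to the DC coefficient (u = v = 0), with all AC coefficients essentially zero, which is why smooth image regions compress so well after quantization.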

32 Progressive DCT-Based Mode
The image is coded in multiple scans. In sequential mode one must wait until decoding is over to view the image; progressive mode instead transmits a coarser version of the image in the first scan and then progressively improves the reconstructed quality at the receiver by transmitting more compressed bits in successive scans. The computation of the DCT coefficients is completed before entropy coding starts, and the encoder performs selective encoding of the DCT coefficients before transmission. There are two methods: spectral selection and successive approximation.

33 Progressive DCT-Based Mode[2]
Spectral selection: encode sets of DCT coefficients starting from the lower frequencies and moving progressively to the higher frequencies. For example, encode all the DC coefficients of all the 8 x 8 DCT blocks in the first scan; encode and transmit the first three AC coefficients of the zig-zag sequence of all the DCT blocks in the second scan; transmit the next three AC coefficients in the third scan, and so on, until the last three AC coefficients are transmitted in the 21st scan. The number of coefficients in each scan may differ and is user selectable.
Successive approximation: start by encoding a certain number of most significant bits (say N1) of all the DCT coefficients of all the blocks and transmit them in the first scan. In the second scan, the following N2 most significant bits are encoded and transmitted, and so on, until the least significant bit of every coefficient has been encoded. Successive approximation usually offers better reconstructed quality in the earlier scans than the spectral selection method.
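Spectral selection depends on the zig-zag ordering of the 64 coefficients, which sorts them roughly from low to high frequency. A sketch that generates that order; the grouping into 1 + 21 scans at the end mirrors the example above and is illustrative, not mandated:

```python
def zigzag_order(n=8):
    """Zig-zag scan order for an n x n DCT block: coefficients sorted by
    anti-diagonal (u + v), alternating traversal direction on each one."""
    return sorted(((u, v) for u in range(n) for v in range(n)),
                  key=lambda p: (p[0] + p[1],
                                 # odd diagonals run top-to-bottom (by u),
                                 # even diagonals bottom-to-top (by v)
                                 p[0] if (p[0] + p[1]) % 2 else p[1]))

# Example scan plan from the slide: DC first, then 3 AC coefficients per scan.
order = zigzag_order()
scans = [order[0:1]] + [order[1 + 3 * k: 4 + 3 * k] for k in range(21)]
```

The first scan holds position (0, 0) (the DC coefficient); the second scan holds the first three AC positions of the zig-zag sequence, (0, 1), (1, 0), (2, 0); and the 22nd entry holds the last three, ending at (7, 7).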

36 Compression ratio vs. visible artifacts:
2.6 : 1  Extremely minor artifacts
15 : 1   Initial signs of subimage artifacts
23 : 1   Stronger artifacts; loss of high-resolution information
46 : 1   Severe high-frequency loss; artifacts on subimage boundaries ("macroblocking") are obvious
144 : 1  Extreme loss of color and detail; the leaves are nearly unrecognizable

46 Wavelet transform: the mother wavelet
How do we obtain a set of wavelet functions? By translation (shifting) and dilation (scaling, s) of a single mother wavelet.

47 Scaling (stretching or compressing)

48 Translation (shift)

49 Examples of mother wavelets

50 Discrete Wavelet Transform
Subband coding: the spectrum of the input data is decomposed into a set of band-limited components called subbands. Ideally, the subbands can be reassembled to reconstruct the original spectrum without any error. The input signal is filtered into lowpass and highpass components through analysis filters. The human perceptual system has different sensitivities in different frequency bands: the eyes are less sensitive to high-frequency color components, and the ears are less sensitive to frequencies below about 20 Hz and above 20 kHz.

51 Subband Transform Separate the high-frequency and low-frequency content by subband decomposition.

52 Subband Transform Filter each row and downsample the filter output to obtain two N x M/2 images. Filter each column and downsample the filter output to obtain four N/2 x M/2 images

53 Haar wavelet transform
Average: resolution. Difference: detail. Example for one dimension.

54 Haar wavelet transform
Example: data = (5 7 6 5 3 4 6 9)
averages: (5+7)/2, (6+5)/2, (3+4)/2, (6+9)/2
detail coefficients: (5-7)/2, (6-5)/2, (3-4)/2, (6-9)/2
n' = (6 11/2 7/2 15/2 | -1 1/2 -1/2 -3/2)
n'' = (23/4 22/4 | 1/4 -2 | -1 1/2 -1/2 -3/2)
n''' = (45/8 | 1/8 | 1/4 -2 | -1 1/2 -1/2 -3/2)
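The averaging/differencing steps can be coded directly; running one step on the example data (5, 7, 6, 5, 3, 4, 6, 9) and then recursing on the averages reproduces the coefficient sequences on this slide. A minimal sketch:

```python
def haar_step(data):
    """One Haar analysis step: pairwise averages (resolution)
    followed by pairwise half-differences (detail)."""
    avg = [(data[i] + data[i + 1]) / 2 for i in range(0, len(data), 2)]
    det = [(data[i] - data[i + 1]) / 2 for i in range(0, len(data), 2)]
    return avg, det

def haar_transform(data):
    """Repeat the step on the averages until one coefficient remains."""
    out = []
    while len(data) > 1:
        data, det = haar_step(data)
        out = det + out  # details accumulate right-to-left, as in n', n'', n'''
    return data + out
```

For the example, `haar_transform([5, 7, 6, 5, 3, 4, 6, 9])` returns `[5.625, 0.125, 0.25, -2.0, -1.0, 0.5, -0.5, -1.5]`, i.e. (45/8 | 1/8 | 1/4 -2 | -1 1/2 -1/2 -3/2).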

55 Haar wavelet transform

56 Subband Transform The standard image wavelet transform
The Pyramid image wavelet transform

57 Subband Transform

58 Wavelet Transform and Filter Banks
h0(n) is scaling function, low pass filter (LPF) h1(n) is wavelet function, high pass filter (HPF) is subsampling (decimation)

59 2-D Wavelet transform Horizontal filtering Vertical filtering

60 Subband Transform

61 2-D wavelet transform
[Figure: multi-level subband layout, with the LL subband of the deepest level in the corner and the HL, LH, and HH subbands of each level (HL1, LH1, HH1, HL2, LH2, HH2, ...) around it.]

62 Wavelet Coding

63 Wavelet Transform
Quadrants 1, 2, 3, 4: put a pixel in each quadrant; there is no change in size.

64 Wavelet Transform
Now let a = (x1+x2+x3+x4)/4, b = (x1+x2-x3-x4)/4, c = (x1+x3-x2-x4)/4, d = (x1+x4-x2-x3)/4.

65 Wavelet Transform

66 Wavelet Transform

67 Wavelet Transform

68 2-D Wavelet Transform via Separable Filters
From Matlab Wavelet Toolbox Documentation

69 2-D Example

70 Wavelet Transform

71 JPEG 2000 Image Compression Standard

72 JPEG 2000
JPEG 2000 is a new still image compression standard, a "one-for-all" image codec:
* Different image types: binary, grey-scale, color, multi-component
* Different applications: natural images, scientific, medical, remote sensing, text, rendered graphics
* Different imaging models: client/server, consumer electronics, image library archival, limited buffer and resources

73 History
Call for Contributions in 1996. First Committee Draft (CD) in Dec. 1999. Final Committee Draft (FCD) in March 2000. Accepted as a Draft International Standard in Aug. 2000. Published as an ISO standard in Jan. 2002.

74 Key components
Transform: wavelet, wavelet packet, wavelet in tiles. Quantization: scalar. Entropy coding (EBCOT): code once, truncate anywhere; rate-distortion optimization; context modeling; optimized coding order.

75 Key components
Visual weighting, masking, region of interest (ROI), lossless color transform, error resilience. The codestream obtained after compression of an image with JPEG 2000 is scalable in nature, meaning that it can be decoded in a number of ways; for instance, by truncating the codestream at any point, one may obtain a representation of the image at a lower resolution or signal-to-noise ratio.

76 JPEG 2000: A Wavelet-Based New Standard
Targets and features: excellent low bit-rate performance without sacrificing performance at higher bit rates; progressive decoding, from lossy to lossless. For details see David Taubman, "High Performance Scalable Image Compression with EBCOT," IEEE Trans. on Image Processing, vol. 9(7), July 2000; the JPEG 2000 tutorial in IEEE Signal Processing Magazine, Sept. 2001; Taubman's book on JPEG 2000 (on library reserve); and the JPEG 2000 tutorial by Christopoulos et al., IEEE Trans. on Consumer Electronics, Nov. 2000.

77 Superior compression performance: at high bit rates, where artifacts become nearly imperceptible, JPEG 2000 has a small machine-measured fidelity advantage over JPEG. At lower bit rates (e.g., less than 0.25 bits/pixel for grayscale images), JPEG 2000 has a much more significant advantage over certain modes of JPEG: artifacts are less visible and there is almost no blocking Multiple resolution representation: JPEG 2000 decomposes the image into a multiple resolution representation in the course of its compression process. This representation can be put to use for other image presentation purposes beyond compression as such.

78 Progressive transmission by pixel and resolution accuracy, commonly referred to as progressive decoding and signal-to-noise ratio (SNR) scalability: JPEG 2000 provides efficient code-stream organizations which are progressive by pixel accuracy and by image resolution (or by image size). This way, after a smaller part of the whole file has been received, the viewer can see a lower quality version of the final picture. The quality then improves progressively through downloading more data bits from the source.


80 2-D wavelet transform
Original samples: 128, 129, 125, 64, 65, …
Transform coefficients: 4123, -12.4, -96.7, 4.5, …

81 Quantization of wavelet coefficients
Transform coefficients: 4123, -12.4, -96.7, 4.5, …
Quantized coefficients (Q = 64): 64, 0, -1, 0, …
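The mapping from transform coefficients to quantized coefficients is uniform scalar quantization. A sketch matching the Q = 64 example; the example values suggest truncation toward zero (deadzone-style), which is what `int()` does here:

```python
def quantize(coeffs, q=64):
    """Uniform scalar quantization: divide by the step size q and
    truncate toward zero (small coefficients collapse to 0)."""
    return [int(c / q) for c in coeffs]
```

For instance, `quantize([4123, -12.4, -96.7, 4.5])` gives `[64, 0, -1, 0]`: the many near-zero coefficients are what the entropy coder then compresses so effectively.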

82 Entropy coding
Quantized coefficients (Q = 64): 64, 0, -1, 0, …
Coded bitstream: 0 1 1 0 1 1 0 1 0 1 …

83 Tiling Image  Component  Tile  Subband  Code-Block  Bit-Planes

84 JPEG 2000 vs JPEG: DCT vs. wavelet transform (WT)

85 JPEG 2000 vs JPEG: Quantization

86 JPEG 2000 vs JPEG: bitrate = 0.3 bpp
JPEG: PSNR = 26.2 dB. JPEG 2000: MSE = 73, PSNR = 29.5 dB.

87 JPEG 2000 vs JPEG: bitrate = 0.2 bpp
JPEG: PSNR = 23.1 dB. JPEG 2000: MSE = 113, PSNR = 27.6 dB.

88 Examples JPEG2K vs. JPEG


90
- 17 Mbps data rate through 1994, 20.4 Mbps from 1995 to 2004, 16 bit from 2005 onward
- 10-bit data encoding through 1994, 12-bit from 1995
- Silicon (Si) detectors for the visible range, indium gallium arsenide (InGaAs) for the NIR, and indium antimonide (InSb) detectors for the SWIR
- "Whisk broom" scanning
- 12 Hz scanning rate
- Liquid nitrogen (LN2) cooled detectors
- 10 nm nominal channel bandwidth, calibrated to within 1 nm
- 34 degrees total field of view (full 677 samples)
- 1 milliradian instantaneous field of view (IFOV, one sample), calibrated to within 0.1 mrad
- 76 GB hard disk recording medium

94 The End

