
1 Adaptive Linear Prediction Lossless Image Coding (DCC '99)
Giovanni Motta, James A. Storer
Brandeis University, Volen Center for Complex Systems, Computer Science Department, Waltham, MA 02454, USA
{gim, storer}@cs.brandeis.edu
Bruno Carpentieri
Universita' di Salerno, Dip. di Informatica ed Applicazioni "R.M. Capocelli", I-84081 Baronissi (SA), Italy
bc@dia.unisa.it

2 Problem: Gray-level lossless image compression, addressed from the point of view of the achievable compression ratio.

3 Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

4 Past Results / Related Works: Until TMW, the best existing lossless image compressors (CALIC, LOCO-I, etc.) seemed unable to improve compression by using image-by-image optimization techniques or more sophisticated and complex algorithms. A year ago, B. Meyer and P. Tischer, with their TMW, improved on some of the best current results by using global optimization techniques and multiple blended linear predictors.

5 Past Results / Related Works: In spite of its high computational complexity, TMW's results are in any case surprising, because: linear predictors are not effective in capturing image edges; global optimization seemed to be ineffective; and CALIC was thought to achieve a data rate close to the entropy of the image.

6 Motivations: Investigation of an algorithm that uses multiple adaptive linear predictors, pixel-by-pixel optimization, and local image statistics.

7 Main Idea Explicit use of local statistics to: Classify the context of the current pixel; Select a Linear Predictor; Refine it.

8 Prediction Window: Statistics are collected inside the window W_{x,y}(R_p). Not all the samples in W_{x,y}(R_p) are used to refine the predictor. [Figure: the causal window W_{x,y}(R_p), R_p+1 rows high and 2R_p+1 columns wide, around the current pixel I(x,y); it contains only already-encoded pixels and the current context.]
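
As an illustration only (not the authors' code), a minimal sketch of gathering the causal window, assuming the image is a 2-D array indexed img[y][x]:

    def causal_window(img, x, y, rp):
        # Coordinates of already-encoded samples in W_{x,y}(R_p): the rp
        # full rows above the current pixel (the window is 2*rp+1 columns
        # wide, rp+1 rows tall), plus the pixels to its left on the
        # current row.
        width = len(img[0])
        samples = []
        for j in range(max(0, y - rp), y):                       # rows above
            for i in range(max(0, x - rp), min(width, x + rp + 1)):
                samples.append((i, j))
        for i in range(max(0, x - rp), x):                       # left of I(x,y)
            samples.append((i, y))
        return samples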

9 Prediction Context: 6 pixels, fixed shape; the weights w_0,...,w_5 change to minimize the error energy inside W_{x,y}(R_p). [Figure: the six causal neighbors of I(x,y) carrying weights w_0,...,w_5.]
Prediction: I'(x,y) = int(w_0*I(x,y-2) + w_1*I(x-1,y-1) + w_2*I(x,y-1) + w_3*I(x+1,y-1) + w_4*I(x-2,y) + w_5*I(x-1,y))
Error: Err(x,y) = I'(x,y) - I(x,y)
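
The slide's formula transcribes directly into code (a sketch; the function name is ours, and border handling is omitted):

    def predict(img, x, y, w):
        # I'(x,y) from the six-pixel causal context, in the weight order
        # given on slide 9: I(x,y-2), I(x-1,y-1), I(x,y-1), I(x+1,y-1),
        # I(x-2,y), I(x-1,y).
        ctx = (img[y-2][x], img[y-1][x-1], img[y-1][x],
               img[y-1][x+1], img[y][x-2], img[y][x-1])
        return int(sum(wi * ci for wi, ci in zip(w, ctx)))

    # Err(x,y) = I'(x,y) - I(x,y):
    # err = predict(img, x, y, weights) - img[y][x]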

10 Predictor Refinement: Gradient descent is used to refine the predictor on the samples in W_{x,y}(R_p). [Figure: the same causal-window diagram as slide 8.]
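
The slides do not give step sizes or iteration counts, so the following is only a plausible sketch of descent on the summed squared prediction error (lr and steps are guesses; the real prediction is rounded with int(), but the descent uses the unrounded error for differentiability):

    def refine(img, w, samples, lr=1e-7, steps=10):
        # A few gradient-descent steps minimizing sum of (I'-I)^2 over the
        # selected window samples.
        for _ in range(steps):
            grad = [0.0] * 6
            for (x, y) in samples:
                ctx = (img[y-2][x], img[y-1][x-1], img[y-1][x],
                       img[y-1][x+1], img[y][x-2], img[y][x-1])
                err = sum(wi * ci for wi, ci in zip(w, ctx)) - img[y][x]
                for k in range(6):
                    grad[k] += 2.0 * err * ctx[k]   # d(err^2)/dw_k
            w = [wi - lr * g for wi, g in zip(w, grad)]
        return w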

11 Algorithm

for every pixel I(x,y) do begin
    /* Classification */
    Collect samples in W_{x,y}(R_p)
    Classify the samples into n clusters (LBG on the contexts)
    Classify the context of the current pixel I(x,y); call its cluster C_k
    Let P_i = {w_0, ..., w_5} be the predictor that achieves the smallest error on the current cluster C_k
    /* Prediction */
    Refine the predictor P_i on the cluster C_k
    Encode and send the prediction error Err(x,y)
end
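
To tie the sketches above together, here is a runnable end-to-end simplification of this loop with n = 1 cluster, i.e. the LBG classification and predictor-selection steps are omitted (so it is not the paper's full algorithm, only its prediction backbone):

    def encode_image(img, rp=6):
        # Per-pixel adaptation: refine the single predictor on the causal
        # window, then emit the prediction error for the entropy coder.
        width = len(img[0])
        w = [0.0, 0.0, 0.5, 0.0, 0.0, 0.5]   # arbitrary initial weights
        errors = []
        for y in range(2, len(img)):
            for x in range(2, width - 1):
                ok = [(i, j) for (i, j) in causal_window(img, x, y, rp)
                      if i >= 2 and j >= 2 and i < width - 1]  # crude borders
                w = refine(img, w, ok, steps=1)
                errors.append(predict(img, x, y, w) - img[y][x])
        return errors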

12 Results Summary: Compression is better when structures and textures are present; compression is worse on high-contrast zones; local adaptive LP seems to capture features not exploited by existing systems.

13 Test Images: downloaded from the ftp site of X. Wu, ftp://ftp.csd.uwo.ca/pub/from_wu/images. Nine "pgm" images, 720x576 pixels, 256 gray levels (8 bits): Balloon, Barb, Barb2, Board, Boats, Girl, Gold, Hotel, Zelda.

14 Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

15 File Size vs. Number of Predictors (R_p = 6), using an adaptive AC. File sizes in bytes:

  # of predictors        1        2        4        6        8
  Balloon           154275   150407   150625   150221   150298
  Barb              227631   223936   224767   225219   225912
  Barb2             250222   250674   254582   256896   258557
  Board             193059   190022   190504   190244   190597
  Boats             210229   208018   209408   209536   210549
  Girl              204001   202004   202326   202390   202605
  Gold              235682   237375   238728   239413   240352
  Hotel             236037   236916   239224   240000   240733
  Zelda             195052   193828   194535   195172   195503
  Total (bytes)    1906188  1893180  1904699  1909091  1915106

16 File Size vs. Window Radius R_p (# of predictors = 2), using an adaptive AC. File sizes in bytes:

  R_p                    6        8       10       12       14
  Balloon           150407   149923   149858   150019   150277
  Barb              223936   223507   224552   225373   226136
  Barb2             250674   249361   246147   247031   246265
  Board             190022   190319   190911   191709   192509
  Boats             208018   206630   206147   206214   206481
  Girl              202004   201189   201085   201410   201728
  Gold              237375   235329   234229   234048   234034
  Hotel             236916   235562   235856   236182   236559
  Zelda             193828   193041   192840   192911   193111
  Total (bytes)    1893180  1884861  1881625  1884897  1887100

17 Prediction Error. [Chart: error entropy in bits per pixel (y-axis roughly 2.50 to 5.50) for each test image (balloon, barb, barb2, board, boats, girl, gold, hotel, zelda); series compared: LOCO-I error entropy after context modeling, LOCO-I entropy of the prediction error, and ours with 2 predictors, R_p = 10, single adaptive AC.]

18 Prediction Error

19 Prediction Error (histogram) Test image “Hotel”

20 Prediction Error (magnitude and sign). Test image "Hotel". [Two image panels: error sign and error magnitude.]

21 Prediction Error (magnitude and sign). Test image "Board". [Two image panels: error magnitude and error sign.]

22 Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

23 Entropy Coding: The AC model is determined in a window W_{x,y}(R_e); two different ACs are used for typical and non-typical symbols (for practical reasons); the cutting point between the two is determined globally.
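
The slide gives no formulas, so the following is only one plausible reading in code: errors whose magnitude falls below a globally chosen cutoff T go to a "typical" model, the rest, via an escape symbol, to a second model (T, ESCAPE, and the Model class are all assumptions, and the windowed estimation over W_{x,y}(R_e) is elided):

    ESCAPE = 999  # hypothetical out-of-band escape symbol

    class Model:
        # Stand-in for an adaptive AC model: it just counts symbol
        # frequencies, which is what the real coder would adapt on.
        def __init__(self):
            self.freq = {}
        def encode(self, s):
            self.freq[s] = self.freq.get(s, 0) + 1

    T = 8  # hypothetical globally-determined "cutting point"

    def encode_error(err, typical, rare):
        # Route small-magnitude ("typical") errors to one model and the
        # rest, after an escape symbol, to the second model.
        if abs(err) < T:
            typical.encode(err)
        else:
            typical.encode(ESCAPE)
            rare.encode(err)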

24 Compressed File Size vs. Error Window Radius R_e (# of predictors = 2, R_p = 10). File sizes in bytes:

  R_e                    8       10       12       14       16       18
  balloon           147227   147235   147341   147479   147620   147780
  barb              216678   216082   215906   215961   216135   216370
  barb2             234714   233303   232696   232455   232399   232473
  board             186351   186171   186187   186303   186467   186646
  boats             202168   201585   201446   201504   201623   201775
  girl              197243   197013   197040   197143   197245   197356
  gold              230619   229706   229284   229111   229026   229012
  hotel             229259   228623   228441   228491   228627   228785
  zelda             189246   188798   188576   188489   188461   188469
  Total (bytes)    1833505  1828516  1826917  1826936  1827603  1828666

25 Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

26 Comparisons. Compression rate in bits per pixel (# of predictors = 2, R_p = 10):

            balloon  barb  barb2  board  boats  girl  gold  hotel  zelda  Avg.
  SUNSET       2.89  4.64   4.71   3.72   3.99  3.90  4.60   4.48   3.79  4.08
  LOCO-I       2.90  4.65   4.66   3.64   3.92  3.90  4.47   4.35   3.87  4.04
  UCM          2.81  4.44   4.57   3.57   3.85  3.81  4.45   4.28   3.80  3.95
  Our          2.84  4.16   4.48   3.59   3.89  3.80  4.42   4.41   3.64  3.91
  CALIC        2.78  4.31   4.46   3.51   3.78  3.72  4.35   4.18   3.69  3.86
  TMW          2.65  4.08   4.38      -   3.61     -     -   4.28      -  3.80

TMW results were available only for a subset of the images; its Avg. is over the five entries shown.
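
For reference, the rates follow directly from the byte counts on these 720x576 images: bpp = bytes * 8 / (720 * 576). For example, the 147227-byte balloon result of slide 24 (R_e = 8) gives 147227 * 8 / 414720 ≈ 2.84 bpp, consistent with the "Our" row above.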

27 Comparisons

28 Conclusion: Compression is better when structures and textures are present; compression is worse on high-contrast zones; local adaptive LP seems to capture features not exploited by existing systems.

29 Future Research.
Compression: better context classification (to improve on high-contrast zones); adaptive windows; MAE minimization (instead of MSE minimization).
Complexity: gradient descent; more efficient entropy coding.
Additional experiments: on different test sets.


