DCC '99 - Adaptive Linear Prediction Lossless Image Coding. Giovanni Motta, James A. Storer, Brandeis University.

DCC '99 - Adaptive Linear Prediction Lossless Image Coding. Giovanni Motta, James A. Storer. Brandeis University, Volen Center for Complex Systems, Computer Science Department, Waltham MA-02454, US {gim, Bruno Carpentieri, Universita' di Salerno, Dip. di Informatica ed Applicazioni "R.M. Capocelli", Baronissi (SA), Italy.

Problem: Gray-level lossless image compression, addressed from the point of view of the achievable compression ratio.

Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

Past Results / Related Works: Until TMW, the best existing lossless digital image compressors (CALIC, LOCO-I, etc.) seemed unable to improve compression by using image-by-image optimization techniques or more sophisticated and complex algorithms. A year ago, B. Meyer and P. Tischer were able, with their TMW, to improve some of the current best results by using global optimization techniques and multiple blended linear predictors.

Past Results / Related Works: In spite of its high computational complexity, TMW's results are in any case surprising because: linear predictors are not effective in capturing image edginess; global optimization seemed to be ineffective; CALIC was thought to achieve a data rate close to the entropy of the image.

Motivations: Investigation of an algorithm that uses: multiple adaptive linear predictors; pixel-by-pixel optimization; local image statistics.

Main Idea Explicit use of local statistics to: Classify the context of the current pixel; Select a Linear Predictor; Refine it.

Prediction Window: Statistics are collected inside the window W_{x,y}(R_p), which is (2 R_p + 1) pixels wide and (R_p + 1) pixels tall and contains only already encoded pixels. Not all the samples in W_{x,y}(R_p) are used to refine the predictor. [Figure: the window W_{x,y}(R_p), the current pixel I(x,y), the encoded pixels, and the current context.]
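As a concrete illustration, the set of already-encoded samples inside W_{x,y}(R_p) can be gathered as follows. This is a minimal Python sketch; the row-major img[y][x] indexing and the skipping of out-of-image coordinates are assumptions, since the slides do not specify border handling.

```python
def window_samples(img, x, y, rp):
    """Coordinates of the already-encoded pixels inside W_{x,y}(R_p):
    the rp rows above the current one, 2*rp + 1 pixels wide and centred
    on x, plus the rp pixels to the left of I(x, y) on the current row.
    Out-of-image coordinates are simply skipped (an assumption)."""
    h, w = len(img), len(img[0])
    coords = []
    for dy in range(-rp, 0):              # the rp rows above the current row
        for dx in range(-rp, rp + 1):
            xx, yy = x + dx, y + dy
            if 0 <= xx < w and 0 <= yy < h:
                coords.append((xx, yy))
    for dx in range(-rp, 0):              # same row, left of the current pixel
        if 0 <= x + dx < w:
            coords.append((x + dx, y))
    return coords
```

For an interior pixel the window holds rp*(2*rp + 1) + rp samples, e.g. 12 samples for R_p = 2.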

Prediction Context: 6 pixels, fixed shape; the weights w_0, ..., w_5 change to minimize the error energy inside W_{x,y}(R_p). Prediction: I'(x,y) = int(w_0 I(x,y-2) + w_1 I(x-1,y-1) + w_2 I(x,y-1) + w_3 I(x+1,y-1) + w_4 I(x-2,y) + w_5 I(x-1,y)). Error: Err(x,y) = I'(x,y) - I(x,y).
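In code, the six-pixel causal prediction above can be sketched as follows (row-major img[y][x] indexing is an assumption; the truncation with int() follows the slide):

```python
def predict(img, x, y, w):
    """I'(x,y) = int(w0*I(x,y-2) + w1*I(x-1,y-1) + w2*I(x,y-1)
                   + w3*I(x+1,y-1) + w4*I(x-2,y) + w5*I(x-1,y))."""
    ctx = [img[y-2][x], img[y-1][x-1], img[y-1][x],
           img[y-1][x+1], img[y][x-2], img[y][x-1]]
    return int(sum(wi * ci for wi, ci in zip(w, ctx)))

def prediction_error(img, x, y, w):
    """Err(x,y) = I'(x,y) - I(x,y), the value sent to the entropy coder."""
    return predict(img, x, y, w) - img[y][x]
```

With w = (0, 0, 1, 0, 0, 0) this degenerates to "predict the north neighbour", a useful sanity check.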

Predictor Refinement: Gradient descent is used to refine the predictor inside the window W_{x,y}(R_p). [Figure: the same window layout, with the current pixel I(x,y), the encoded pixels, and the current context.]
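The refinement step can be sketched as plain batch gradient descent on the squared prediction error over the window samples. The step size and iteration count below are illustrative assumptions; the slides only name the method.

```python
def refine(img, w, samples, lr=1e-4, iters=20):
    """Refine the six weights by gradient descent on the summed squared
    prediction error over the given sample coordinates (x, y)."""
    w = list(w)
    n = len(samples)
    for _ in range(iters):
        grad = [0.0] * 6
        for (x, y) in samples:
            ctx = [img[y-2][x], img[y-1][x-1], img[y-1][x],
                   img[y-1][x+1], img[y][x-2], img[y][x-1]]
            err = sum(wi * ci for wi, ci in zip(w, ctx)) - img[y][x]
            for k in range(6):
                grad[k] += 2.0 * err * ctx[k]     # d(err^2)/dw_k
        w = [wi - lr * g / n for wi, g in zip(w, grad)]
    return w
```

Because the objective is a convex quadratic in the weights, a small enough step size guarantees the window error decreases monotonically.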

Algorithm:
for every pixel I(x,y) do begin
  /* Classification */
  Collect samples in W_{x,y}(R_p)
  Classify the samples into n clusters (LBG on the contexts)
  Classify the context of the current pixel I(x,y)
  Let P_i = {w_0, ..., w_5} be the predictor that achieves the smallest error on the current cluster C_k
  /* Prediction */
  Refine the predictor P_i on the cluster C_k
  Encode and send the prediction error Err(x,y)
end
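The classification step (LBG on the contexts) can be illustrated with a small k-means-style quantizer over context vectors. This is a generic sketch only: the exact LBG variant, initialization, and distance used in the paper are not specified on the slide.

```python
def classify_contexts(contexts, n=2, iters=10):
    """Cluster 6-dimensional context vectors into n clusters and return
    (labels, centroids). Initialising the centroids with the first n
    contexts is an arbitrary choice for this example."""
    cents = [list(c) for c in contexts[:n]]
    labels = [0] * len(contexts)
    for _ in range(iters):
        # assignment: nearest centroid in squared Euclidean distance
        for i, c in enumerate(contexts):
            labels[i] = min(range(n),
                            key=lambda k: sum((a - b) ** 2
                                              for a, b in zip(c, cents[k])))
        # update: centroid of each non-empty cluster
        for k in range(n):
            members = [contexts[i] for i in range(len(contexts))
                       if labels[i] == k]
            if members:
                cents[k] = [sum(col) / len(members) for col in zip(*members)]
    return labels, cents
```

The current pixel's context is then assigned to its nearest centroid, and the predictor with the smallest error on that cluster is selected and refined.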

Results Summary: Compression is better when structures and textures are present. Compression is worse on high-contrast zones. Locally adaptive linear prediction seems to capture features not exploited by existing systems.

Test Images: downloaded from the ftp site of X. Wu: ftp://ftp.csd.uwo.ca/pub/from_wu/images. 9 "pgm" images, 720x576 pixels, 256 greylevels (8 bits): Balloon, Barb, Barb2, Board, Boats, Girl, Gold, Hotel, Zelda.

Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

File Size vs. Number of Predictors (R_p = 6), using an adaptive AC. [Table: compressed file size in bytes for Balloon, Barb, Barb2, Board, Boats, Girl, Gold, Hotel, Zelda, and the total, as the number of predictors varies; values not recovered.]

File Size vs. window radius R_p (# of predictors = 2), using an adaptive AC. [Table: compressed file size in bytes per image and total, as R_p varies; values not recovered.]

Prediction Error: [Chart: per-image comparison on balloon, barb, barb2, board, boats, girl, gold, hotel, zelda of LOCO-I (error entropy after context modeling), LOCO-I (entropy of the prediction error), and 2 predictors with R_p = 10 and a single adaptive AC; values not recovered.]

Prediction Error. [Figure not recovered.]

Prediction Error (histogram) Test image “Hotel”

Prediction Error (magnitude and sign). Test image "Hotel". [Figure panels: Sign, Magnitude.]

Prediction Error (magnitude and sign). Test image "Board". [Figure panels: Magnitude, Sign.]

Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

Entropy Coding: AC model determined in a window W_{x,y}(R_e). Two different ACs for typical and non-typical symbols (for practical reasons). Global determination of the cutting point.
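The typical/non-typical split can be illustrated with a toy cost estimate: one adaptive model for errors with |e| <= cutoff, and a second model, reached through an escape symbol, for everything else. The cutoff value and the Laplace-style probability estimates below are assumptions for illustration only; the real coder is an arithmetic coder with models built inside W_{x,y}(R_e).

```python
from math import log2

def code_length(errors, cutoff=8):
    """Estimated cost in bits of coding a sequence of prediction errors
    with two adaptive models: 'typical' symbols (|e| <= cutoff) in the
    first model, everything else escaped into a second model."""
    typical, rare = {}, {}
    t_total, r_total = 1, 0          # the escape symbol starts with count 1
    alpha = 2 * cutoff + 2           # typical alphabet size incl. escape
    bits = 0.0
    for e in errors:
        if abs(e) <= cutoff:
            c = typical.get(e, 0)
            bits += -log2((c + 1) / (t_total + alpha))   # Laplace estimate
            typical[e] = c + 1
        else:
            bits += -log2(1.0 / (t_total + alpha))       # pay for the escape
            c = rare.get(e, 0)
            bits += -log2((c + 1) / (r_total + 512))     # crude rare model
            rare[e] = c + 1
            r_total += 1
        t_total += 1
    return bits
```

Splitting the alphabet keeps the typical model small and fast to adapt, which is the practical reason the slide alludes to.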

Compressed File Size vs. error window radius R_e (# of predictors = 2, R_p = 10). [Table: compressed file size in bytes for balloon, barb, barb2, board, boats, girl, gold, hotel, zelda, and the total, as R_e varies; values not recovered.]

Outline  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion  Motivations  Main Idea  Algorithm  Predictor Assessment  Entropy Coding  Final Experimental Results  Conclusion

Comparisons: Compression rate in bits per pixel (# of predictors = 2, R_p = 10). [Table: SUNSET, LOCO-I, UCM, ours, CALIC, and TMW on balloon, barb, barb2, board, boats, girl, gold, hotel, zelda, with per-method averages; values not recovered.]

Comparisons. [Chart not recovered.]

Conclusion: Compression is better when structures and textures are present. Compression is worse on high-contrast zones. Locally adaptive linear prediction seems to capture features not exploited by existing systems.

Future Research:
Compression: better context classification (to improve on high-contrast zones); adaptive windows; MAE minimization (instead of MSE minimization).
Complexity: gradient descent; more efficient entropy coding.
Additional experiments: on different test sets.