Efficient Huffman Decoding

Efficient Huffman Decoding
Source: Proceedings of the 2000 International Conference on Image Processing, Vol. 1, pp. 936-939, 2000
Authors: Aggarwal, M.; Narayan, A.
Speaker: Hsien-Wen Tseng
Date: 11/15/2001

Outline: Introduction, Lookup Table Method, Proposed Algorithm, Discussion

Introduction
Conventional decoders either use a single lookup table (memory O(2^h)) or walk the Huffman tree bit by bit (memory O(n), complexity O(h) per symbol), where h is the longest codeword length and n the number of symbols.
Proposed algorithm: memory O(n), only a few computations per decoded codeword.
[Slide figure: Huffman tree with leaves A-L]

Lookup Table Method
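The slide itself only shows a table figure; below is a minimal sketch, written for this transcript and not taken from the paper, of the single-table method it refers to: one table with 2^h entries indexed by the next h bits of the stream, where h is the longest codeword length. The toy code (A=0, B=10, C=110, D=111) and the names lut_entry_t, peek are illustrative assumptions.

    #include <stdint.h>
    #include <stdio.h>

    #define H          3              /* longest codeword length (toy code)    */
    #define TABLE_SIZE (1u << H)      /* 2^h entries -> O(2^h) memory          */

    typedef struct {
        char symbol;                  /* decoded symbol                        */
        int  length;                  /* true length of that symbol's codeword */
    } lut_entry_t;

    /* Assumed toy prefix code: A=0, B=10, C=110, D=111.  Every h-bit index
     * whose prefix is a codeword maps to that codeword's entry.             */
    static const lut_entry_t lut[TABLE_SIZE] = {
        {'A',1},{'A',1},{'A',1},{'A',1},   /* 000..011 start with "0"   -> A  */
        {'B',2},{'B',2},                   /* 100, 101 start with "10"  -> B  */
        {'C',3},                           /* 110                       -> C  */
        {'D',3}                            /* 111                       -> D  */
    };

    /* Return the H bits of `bits` starting at bit offset `pos` (MSB first).
     * A real decoder would pad the tail of the stream so this never reads
     * past the end.                                                          */
    static unsigned peek(uint32_t bits, int total, int pos)
    {
        return (bits >> (total - pos - H)) & (TABLE_SIZE - 1);
    }

    int main(void)
    {
        uint32_t stream = 0xB7;       /* "0 10 110 111" = ABCD, 9 bits packed */
        int total = 9, pos = 0;

        while (pos < total) {
            lut_entry_t e = lut[peek(stream, total, pos)];
            putchar(e.symbol);        /* one table access per symbol: O(1)    */
            pos += e.length;          /* consume only the true code length    */
        }
        putchar('\n');                /* prints ABCD                          */
        return 0;
    }

The speed comes from decoding a whole codeword per table access, at the cost of the O(2^h) table, which is exactly the trade-off the Introduction slide contrasts with the proposed method.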

Proposed Algorithm Huffman Table

LMBC Function
lmbc(B): the position of the first bit change in the bitstream B, i.e. the number of leading bits equal to the first bit.
Example: B = 00000000111000111, so lmbc(B) = 8 (eight leading 0s before the first 1).
Decoding procedure: determine the partition from the first bit and lmbc(B), then search for the codeword sequentially within that partition.
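A small illustrative implementation of the lmbc( ) idea; its exact behaviour (0-based position of the first bit that differs from the first bit) is an assumption inferred from the examples on these slides.

    #include <stdio.h>
    #include <string.h>

    /* Position (0-based, from the left) of the first bit that differs from
     * the first bit of the string; returns the string length if all bits
     * are equal.  The bitstream is a '0'/'1' character string for clarity;
     * a real decoder would use machine words and a count-leading-zeros/ones
     * instruction instead.                                                  */
    static int lmbc(const char *b)
    {
        int n = (int)strlen(b);
        for (int i = 1; i < n; i++)
            if (b[i] != b[0])
                return i;
        return n;
    }

    int main(void)
    {
        printf("%d\n", lmbc("0001011001"));        /* 3, as on the next slides */
        printf("%d\n", lmbc("00000000111000111")); /* 8                        */
        return 0;
    }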

Canonical Huffman Code
Canonical codes are a subclass of Huffman codes that have a numerical sequence property: codewords of the same length are binary representations of consecutive integers. This property allows a simple decoding technique.
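To make the consecutive-integer property concrete, the sketch below assigns canonical codewords from a sorted list of code lengths using the usual canonical rule; the lengths {2, 2, 2, 3, 4, 4} are an assumed example, not taken from the paper.

    #include <stdio.h>

    /* Assign canonical codewords to symbols whose code lengths are known
     * and sorted in non-decreasing order.
     * Rule: next_code = (prev_code + 1) << (len[i] - len[i-1]), so all
     * codewords of one length are consecutive integers.                    */
    int main(void)
    {
        const int len[] = {2, 2, 2, 3, 4, 4};   /* assumed example lengths  */
        const int n = sizeof(len) / sizeof(len[0]);
        unsigned code = 0;

        for (int i = 0; i < n; i++) {
            if (i > 0)
                code = (code + 1) << (len[i] - len[i - 1]);
            /* print the codeword MSB first, padded to len[i] bits          */
            for (int b = len[i] - 1; b >= 0; b--)
                putchar(((code >> b) & 1u) ? '1' : '0');
            printf("  (length %d)\n", len[i]);
        }
        return 0;
    }

The output is 00, 01, 10, 110, 1110, 1111: within each length the codewords are consecutive integers, which is what lets a decoder locate a symbol with a subtraction and a table index instead of a tree walk.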

Modified Huffman Table

Decoding Algorithm
1. Determine the partition, using the first bit and the lmbc( ) function.
   Example: B = "0001011001", LMBC = 3
2. Determine the common length Li and the numerically least codeword FCi in partition Pi.
   Example: Li = 7, FCi = "0001000"
3. Let C be the first Li bits of B; then (C - FCi) is used as an index into the lookup table, yielding the decoded symbol and its actual length li.
   Example: C = "0001011", C - FCi = 3, li = 5
4. Left-shift the bitstream by li bits and repeat until the last symbol is decoded.
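The sketch below walks through one iteration of these steps on the slide's example B = "0001011001". The partition parameters Li = 7 and FCi = "0001000" come from the slide; the symbol table row (symbol 'X', li = 5) is an assumption, since the full modified Huffman table is not reproduced in this transcript.

    #include <stdio.h>
    #include <string.h>

    static int lmbc(const char *b, int n)            /* first-bit-change position */
    {
        for (int i = 1; i < n; i++)
            if (b[i] != b[0]) return i;
        return n;
    }

    static unsigned bits_to_int(const char *b, int k)    /* first k bits -> int  */
    {
        unsigned v = 0;
        for (int i = 0; i < k; i++)
            v = (v << 1) | (unsigned)(b[i] - '0');
        return v;
    }

    int main(void)
    {
        const char *B = "0001011001";
        int n = (int)strlen(B);

        /* 1. Determine the partition from the first bit and lmbc(B).            */
        int pos = lmbc(B, n);                        /* 3, as on the slide       */

        /* 2. Common length Li and least codeword FCi of that partition.
         *    Values taken from the slide; a real decoder would index a small
         *    partition table by (first bit, lmbc).                              */
        int Li = 7;
        unsigned FCi = bits_to_int("0001000", 7);

        /* 3. Offset into the partition's symbol table.                          */
        unsigned C = bits_to_int(B, Li);
        unsigned offset = C - FCi;                   /* 3                        */

        /* Assumed table row for this offset: symbol 'X', actual length 5.       */
        char symbol = 'X';
        int  li = 5;

        printf("lmbc = %d, offset = %u, symbol = %c, li = %d\n",
               pos, offset, symbol, li);

        /* 4. Left-shift the stream by li bits and iterate until exhausted.      */
        printf("remaining bits: %s\n", B + li);
        return 0;
    }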

Discussion
The proposed algorithm requires only a few computations to decode each codeword.
The memory requirement can be large in the worst case: the codewords {000100, 00011111} fall in the same partition, whose common length is 8 bits, so the table must cover every 8-bit value from 00010000 to 00011111, i.e. 2^4 = 16 entries for only two codewords.
For the Huffman tables of the MPEG-1, MPEG-2, H.261, and H.263 standards, the additional memory required is only about one-third of n.