269111 - Communication Technology in a Changing World Week 2.


Communication Technology in a Changing World Week 2

Last Week: Introduction; History of Communication Technology This Week: Looking into Digital Communication

Project Step 1: Pick your movie! Choose a futuristic movie that features modern communication technology. Get the film approved by me - only one team per movie.

Project Step 2: Pick some Communication Technology Identify some interesting futuristic communication technology used in the film. Is it possible? How might it work? What other applications could use it? Later in the semester you will give a presentation and produce a report.

To begin with... Let's learn to count! 1, 2, 3, 4... What comes next? What happens when we reach 9? What if we use a different base?

The wonderful world of Binary Binary is Base 2: 0, 1, 10, 11, 100, 101, 110, 111... A '0' or a '1' is a binary digit, or 'bit'. Computers only use binary, where each bit can have 2 states. The state of a bit can be stored as: on/off, the direction of magnetism, different voltages, or different levels of light intensity.
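The counting sequence on this slide can be reproduced in a couple of lines of Python - a small sketch using the standard `format` built-in:

```python
# Count from 0 to 7 and show each number in base 2.
for n in range(8):
    print(n, format(n, "b"))  # format(n, "b") gives n's binary digits
```

Running it prints 0 through 7 alongside 0, 1, 10, 11, 100, 101, 110, 111 - the same sequence as above.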

Every Base is Base 10!

How about Morse Code? 5 Elements: dots, dashes, the intra-character gap, the short gap (between letters), and the long gap (between words). "..." = S, "---" = O, "." = E, "-" = T, "--.-" = Q (not TTET!). Why? The gaps tell us where one letter ends and the next begins.

Back to Binary 2 Elements: 0 and 1. There are no gaps, so 'gaps' would have to be represented by 0s and 1s. In the same way as Morse Code, we can 'encode' each character of the alphabet in 0s and 1s.

Binary Encoding
0000 = A  0001 = B  0010 = C  0011 = D
0100 = E  0101 = F  0110 = G  0111 = H
1000 = I  1001 = J  1010 = K  1011 = L
1100 = M  1101 = N  1110 = O  1111 = P
...uh oh!

A Byte A series of 4 bits (a 'nibble') isn't enough to encode all the capital letters - there are only 16 different variations (which is a single hexadecimal character!). In computing we generally store things in bytes - a sequence of 8 bits. That gives 256 different combinations: enough for every small letter, capital letter, punctuation mark, number...
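The counting argument on this slide is easy to check in Python. A minimal sketch using the standard `ord` and `format` built-ins (the 8-bit rendering of 'A' assumes ASCII):

```python
# n bits give 2**n combinations: 16 for a nibble, 256 for a byte.
print(2 ** 4, 2 ** 8)

# Show a character the way a byte stores it: 8 binary digits.
def to_bits(ch):
    return format(ord(ch), "08b")  # zero-padded to 8 bits

print(to_bits("A"))  # 'A' is 65 in ASCII, i.e. 01000001
```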

It's all binary Everything on a computer is in binary... colour, sound. With colour we can have a picture; with pictures, sound (and time) we can have a movie. If a colour takes 3 bytes, then a picture with X pixels could take 3X bytes. If a movie has Y pictures, we have 3XY bytes! Not quite, but correct in principle - and that is why we need bigger and bigger hard drives (and faster connections!)
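To see why this pushes hard drives, here is the 3XY estimate with some illustrative numbers (the Full HD resolution, 24 fps frame rate and 2-hour running time are assumptions for the sake of the example, not from the slide):

```python
# The slide's 3XY estimate: 3 bytes per pixel, X pixels, Y frames.
pixels_per_frame = 1920 * 1080       # X: one Full HD frame
frames = 24 * 60 * 60 * 2            # Y: 24 fps for 2 hours
total_bytes = 3 * pixels_per_frame * frames
print(total_bytes / 10 ** 12, "TB")  # roughly 1 TB, uncompressed
```

Around a terabyte for a single uncompressed film - which is exactly why compression matters.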

Digitisation Converting 'stuff' to binary is called digitisation. Discussion: what are the effects of digitisation on form? On quality? On value? On ownership?

Remember Morse? E is ".", T is "-", but Q is "--.-". Common letters have a short (quick!) code, while rarer letters have a longer code. All symbols m_1, ..., m_n forming the set M have probabilities of occurrence P(m_i) such that P(m_1) + ... + P(m_n) = 1. Infrequently occurring symbols can be assigned a long code word, while short code words are reserved for frequent symbols.

Encoding Objectives Each codeword corresponds to exactly one symbol. Decoding should not require any look-ahead - this is known as the 'prefix' property.

Prefix Property Symbols: A, B, C. Codes: 1, 2, 12. Message: "12". Is it 'AB'? Is it 'C'? In Morse code, how do we know "--.-" is Q and not "TTET"?

Prefix Property Symbols: A, B, C. Codes: 1, 22, 12. Message: "1222". Read in 1 - is it an A? Read in 2 - was that a C? Read in 2 - or should it be AB? Read in 2 - ah, finally we can conclude it was CB. Because the code 1 is a prefix of the code 12, decoding required look-ahead.
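With a prefix-free code the problem above disappears: read bits until they match a codeword, emit the symbol, start again - no look-ahead. A minimal decoder sketch (the code {A: 0, B: 10, C: 11} is an invented example, not one from the slides):

```python
def decode(bits, codes):
    """Decode a bit string using a prefix-free code, with no look-ahead."""
    inverse = {code: symbol for symbol, code in codes.items()}
    out, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:   # no codeword is a prefix of another,
            out.append(inverse[current])  # so a match is always final
            current = ""
    return "".join(out)

print(decode("01011", {"A": "0", "B": "10", "C": "11"}))  # -> ABC
```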

Code Optimisation The length of the code for one symbol should not exceed the length of the code for a less likely symbol: if P(m_i) ≤ P(m_j) then L(m_i) ≥ L(m_j). There should be no unused short codes, either as stand-alone encodings or as prefixes for longer codes. For example, 01, 000, 001, 100, 101 is not ideal, as 11 is not used.
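One way to check the "no unused short codes" condition is the Kraft sum: for a binary prefix-free code, the sum of 2^(-length) over all codewords equals 1 exactly when the code is complete; anything less means some short code is going unused. A sketch applying it to the slide's example:

```python
def kraft_sum(codes):
    """Sum of 2**-len(c) over all codewords; equals 1.0 exactly when a
    binary prefix-free code is complete (no unused short codes)."""
    return sum(2 ** -len(c) for c in codes)

print(kraft_sum(["01", "000", "001", "100", "101"]))  # 0.75 < 1: 11 is unused
print(kraft_sum(["0", "10", "11"]))                   # 1.0: complete
```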

Huffman Coding Huffman coding is a method for choosing a representation for each symbol, resulting in a prefix-free code - the bit string representing some particular symbol is never a prefix of the bit string representing any other symbol. The most common symbols are expressed using shorter strings of bits than are used for less common symbols.

Huffman Coding Huffman creates a "Heap" based on the frequencies of each symbol. What is a "Heap"? A heap is a special kind of binary tree! Great - what is a "Binary Tree"? It's a tree where each node has at most 2 children... Hmmm - what is a "Tree"? OK, let's simplify!

A Tree

A Binary Tree A tree where each node has 0, 1 or 2 children.

A Heap A binary tree where the root node has the highest value, and every parent's value is greater than (or equal to) its children's.

Huffman Coding Begins by constructing a heap based on the frequencies of each member of the set to be encoded. Each member is a leaf node, and each parent node holds the sum of its children. Take the set (with corresponding occurrence frequencies out of 120): A(10) B(15) C(5) D(15) E(20) F(5) G(15) H(30) I(5)

Huffman's Heap

Huffman Coding Each letter's code is then read based on its position from the root - 0 for left, 1 for right: A = 000, B = 010, C = 0010, D = 011, E = 111, F = 00110, G = 110, H = 10, I = 00111
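The whole construction can be sketched with Python's standard `heapq` priority queue. Ties between equal frequencies are broken arbitrarily, so the exact codes may differ from the slide's, but any optimal tree gives the same total encoded length - here 355 bits for the 120 symbols, versus 480 for a fixed 4-bit code:

```python
import heapq

def huffman_codes(freqs):
    """Build a prefix-free code from a {symbol: frequency} dict."""
    # Each heap entry: (frequency, unique tie-breaker, {symbol: code so far}).
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # pop the two least frequent
        f2, _, right = heapq.heappop(heap)   # subtrees and merge them
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        heapq.heappush(heap, (f1 + f2, count, merged))
        count += 1
    return heap[0][2]

freqs = {"A": 10, "B": 15, "C": 5, "D": 15, "E": 20,
         "F": 5, "G": 15, "H": 30, "I": 5}
codes = huffman_codes(freqs)
total_bits = sum(freqs[s] * len(codes[s]) for s in freqs)
print(codes)
print(total_bits)  # 355 bits, versus 120 * 4 = 480 with a fixed 4-bit code
```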

Creating the Heap? Where do the frequencies come from? Based on frequencies in a reference corpus, such as the British National Corpus? Based on frequencies within the specified text (or image etc.)? - the standard approach to Huffman. What if we don't know the frequencies in advance? - Adaptive Huffman.
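Counting frequencies within the specified text itself - the standard approach above - is one line with Python's standard `collections.Counter` (the sample string here is invented for illustration):

```python
from collections import Counter

text = "this is an example of counting symbol frequencies"
freqs = Counter(text)        # per-text frequency table, {symbol: count}
print(freqs.most_common(3))  # the most frequent symbols first
```

This table is exactly the input the Huffman construction needs; adaptive Huffman instead updates such counts on the fly as symbols arrive.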