
CS 206 Introduction to Computer Science II 04 / 29 / 2009 Instructor: Michael Eckmann.


1 CS 206 Introduction to Computer Science II 04 / 29 / 2009 Instructor: Michael Eckmann

2 Michael Eckmann - Skidmore College - CS 206 - Fall 2008 Today’s Topics Questions/comments? Compression / Huffman Coding Closing Remarks Evaluations

3 Compression is the idea of taking some input with some size and producing a smaller output that represents the input.
–If the input can be regenerated perfectly from the output by some reverse process, it is lossless compression.
–If the input cannot be regenerated perfectly from the output, it is lossy compression.
Example: an image with 656 x 434 color pixels (3 bytes per pixel)
–Uncompressed size = 656*434*3 = 854112 bytes
–High quality lossy JPG compressed size = 106194 bytes
–Low quality lossy JPG compressed size = 14588 bytes
Compression
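The arithmetic on the slide can be checked directly; a minimal sketch (the method names are illustrative, not from the lecture):

```java
// Sketch: verify the slide's example image sizes and compute compression ratios.
public class CompressionRatio {
    // Raw size of an uncompressed image: width * height * bytes per pixel.
    static long uncompressedBytes(int width, int height, int bytesPerPixel) {
        return (long) width * height * bytesPerPixel;
    }

    // How many times smaller the compressed output is than the original.
    static double ratio(long originalBytes, long compressedBytes) {
        return (double) originalBytes / compressedBytes;
    }

    public static void main(String[] args) {
        long raw = uncompressedBytes(656, 434, 3);
        System.out.println(raw);                                          // prints 854112
        System.out.printf("high quality JPG: %.1f:1%n", ratio(raw, 106194)); // about 8:1
        System.out.printf("low quality JPG: %.1f:1%n", ratio(raw, 14588));   // about 58.5:1
    }
}
```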

4 Huffman coding is a lossless compression technique for text. Storage for ASCII is 8 bits per character and UNICODE is 16 bits per character. Given a file of characters, we examine the characters and sort them by frequency. The ones that occur most frequently are represented with fewer bits, and the ones that occur least frequently are represented with more bits. Hence this is a variable-length coding algorithm. How does that result in compressed data? Huffman coding
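A tiny worked example of the savings, using made-up frequencies and a made-up prefix-free code table (e=0, t=10, a=11) rather than anything from the slides:

```java
// Sketch: 7 characters (4 'e', 2 't', 1 'a') at 8 bits each vs. variable-length codes.
public class VariableLength {
    // Fixed-width encoding: every character costs the same number of bits.
    static int fixedBits(int totalChars, int bitsPerChar) {
        return totalChars * bitsPerChar;
    }

    public static void main(String[] args) {
        int ascii = fixedBits(7, 8);           // 56 bits at 8 bits per character
        // Variable-length: frequent 'e' gets 1 bit, rarer 't' and 'a' get 2 bits.
        int variable = 4 * 1 + 2 * 2 + 1 * 2;  // 10 bits total
        System.out.println(ascii + " bits fixed vs " + variable + " bits variable");
    }
}
```

Giving the shortest codes to the most frequent characters is exactly where the compression comes from.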

5 Huffman codes for the characters can be generated by creating a tree where the characters are in leaf nodes and the paths from root to leaves give the Huffman code for the character at each leaf. Each link is a 0 or a 1. Huffman coding

6 1. Start out with a forest of trees, each being a single node. Each node represents a character and the frequency of that character. The weight of a tree is the sum of its nodes' frequencies, and this weight is stored in the root.
2. While there is still more than one tree:
–The two trees (tree1 and tree2) with the smallest weights are joined into one tree whose root has weight equal to the sum of the two subtree weights; tree1 becomes the left subtree and tree2 the right subtree.
3. Assign a 0 to the left link and a 1 to the right link of each node; the tree generated above is an optimal Huffman encoding tree.
Let's see a small example of generating the Huffman codes by creating a Huffman tree:
–Let's examine the text: “Java is a programming language.”
Huffman coding
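The three steps above can be sketched in Java, using a `PriorityQueue` as the forest ordered by weight. Ties in weight are broken arbitrarily, so the individual codes may differ from ones built by hand, but any such tree gives an optimal total length:

```java
import java.util.*;

// Sketch of the slide's forest-merging algorithm (class and method names are ours).
public class Huffman {
    static class Node implements Comparable<Node> {
        int weight; char ch; Node left, right;
        Node(char ch, int weight) { this.ch = ch; this.weight = weight; }    // step 1: leaf
        Node(Node l, Node r) { left = l; right = r; weight = l.weight + r.weight; } // step 2: join
        boolean isLeaf() { return left == null; }
        public int compareTo(Node o) { return Integer.compare(weight, o.weight); }
    }

    static Map<Character, String> codes(String text) {
        // Step 1: one single-node tree per distinct character, weighted by frequency.
        Map<Character, Integer> freq = new HashMap<>();
        for (char c : text.toCharArray()) freq.merge(c, 1, Integer::sum);
        PriorityQueue<Node> forest = new PriorityQueue<>();
        freq.forEach((c, f) -> forest.add(new Node(c, f)));
        // Step 2: repeatedly join the two lightest trees until one tree remains.
        while (forest.size() > 1) forest.add(new Node(forest.poll(), forest.poll()));
        // Step 3: paths from root to leaves (0 = left link, 1 = right link) are the codes.
        Map<Character, String> out = new HashMap<>();
        assign(forest.poll(), "", out);
        return out;
    }

    static void assign(Node n, String path, Map<Character, String> out) {
        if (n.isLeaf()) { out.put(n.ch, path.isEmpty() ? "0" : path); return; }
        assign(n.left, path + "0", out);
        assign(n.right, path + "1", out);
    }

    public static void main(String[] args) {
        codes("Java is a programming language.")
            .forEach((c, code) -> System.out.println("'" + c + "' -> " + code));
    }
}
```

Running this on the slide's sample text prints one code per distinct character; the most frequent characters (such as 'a' and the space) receive the shortest codes.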

7 Note that no character's encoding is a prefix of any other character's encoding. What does that mean? Why is it necessary? Huffman coding
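The prefix property is what makes decoding unambiguous: scanning the bit stream left to right, a decoder can commit to a character the moment the bits read so far match a code, because no longer code could start the same way. A minimal sketch, using a made-up prefix-free table (a=0, b=10, c=11) rather than codes from the slides:

```java
import java.util.*;

// Sketch: decoding a prefix-free code by accumulating bits until they match a code.
public class PrefixDecode {
    static String decode(String bits, Map<String, Character> table) {
        StringBuilder out = new StringBuilder(), cur = new StringBuilder();
        for (char b : bits.toCharArray()) {
            cur.append(b);
            Character c = table.get(cur.toString());
            if (c != null) {            // prefix property: this match is the only one possible
                out.append(c);
                cur.setLength(0);       // start accumulating the next code
            }
        }
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, Character> table = Map.of("0", 'a', "10", 'b', "11", 'c');
        System.out.println(decode("0101100", table)); // prints abcaa
    }
}
```

If some code were a prefix of another (say a=0 and b=01), the decoder could not tell whether a 0 was a complete 'a' or the start of a 'b' without lookahead, which is why the property is necessary.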

8 Huffman coding is an optimal coding for coding each symbol (character) independently. Other coding schemes could supply codes for a group of characters and possibly get better compression. Huffman coding

9 From the syllabus: Course Goals and Objectives
1. To understand, be able to use, and be able to write computer programs utilizing the various data structures that we'll cover.
2. To understand, be able to use, and be able to write computer programs utilizing the various algorithms that we'll cover.
3. To be able, to some degree, to analyze algorithms that solve the same problem, and then determine which is more efficient and why.
4. To become more proficient computer programmers. Learn and practice good techniques for testing and debugging.
Closing Remarks

10 What have we done this semester?
–Learned many common data structures that are used extensively in computer science and how they are implemented:
Linked lists, doubly linked lists, etc.
Trees, Binary Trees, BSTs
Stack
Queue
Priority Queue
Heap
Graph (directed/undirected, weighted/unweighted)
Hash Table
Balanced Trees (AVL, B-Tree)
Closing Remarks

11 What have we done this semester?
–Learned many algorithms that use these data structures, implemented them, and analyzed (to some extent) their running times:
Binary tree traversals – inorder, preorder, postorder
Recursive algorithms, Divide and Conquer algorithms, Dynamic Programming algorithms
Breadth First Search, Depth First Search, Dijkstra's
MergeSort, RadixSort, HeapSort, QuickSort
Selection
Huffman coding algorithm
–Gained more programming experience, especially with larger programs.
Closing Remarks

12 Our final exam is in this classroom on Thursday, May 7th, at 1:30pm. It will be part closed book and part open book/notes. A list of topics we covered in the course is posted on our class notes page to help you study. Final exam

