1 CS 206 Introduction to Computer Science II 12 / 10 / 2008 Instructor: Michael Eckmann

2 Michael Eckmann - Skidmore College - CS 206 - Fall 2008 Today’s Topics Questions/comments? Selection problem Huffman Coding Closing Remarks

3 Let's discuss a similar algorithm that solves the Selection problem. The selection problem is to find the kth smallest item in an unsorted list. (e.g. in a list indexed from 0 to 99, if we wanted to find the 3rd smallest item, k=3, and if our list were sorted (which it isn't) this item would live at index 2) We can use the partition ideas from quicksort. If the pivot happens to end up at index k-1, we're done. If not, then focus only on the side of the pivot that k is on. That is, if the pivot ends up at index j after partitioning, then if k-1 < j work only on the left sublist, if k-1 > j work only on the right sublist, and if k-1 == j, then we're done. Let's see if we can code this now. Selection problem
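A minimal Java sketch of the quickselect idea just described, assuming a simple last-element pivot; the method and helper names (select, partition) are illustrative, not taken from the lecture.

```java
// Quickselect sketch: returns the kth smallest element (k is 1-based).
public class Quickselect {
    // Partition a[lo..hi] around a[hi]; return the pivot's final index.
    private static int partition(int[] a, int lo, int hi) {
        int pivot = a[hi];
        int i = lo;
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int tmp = a[i]; a[i] = a[j]; a[j] = tmp;
                i++;
            }
        }
        int tmp = a[i]; a[i] = a[hi]; a[hi] = tmp;
        return i;
    }

    // Find the kth smallest (k = 1 means smallest) among a[lo..hi].
    public static int select(int[] a, int lo, int hi, int k) {
        int j = partition(a, lo, hi);
        if (k - 1 == j) return a[j];                          // pivot sits at index k-1: done
        else if (k - 1 < j) return select(a, lo, j - 1, k);   // work only on the left side
        else return select(a, j + 1, hi, k);                  // work only on the right side
    }

    public static void main(String[] args) {
        int[] data = {9, 2, 7, 4, 6, 1};
        System.out.println(select(data, 0, data.length - 1, 3)); // prints 4
    }
}
```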

4 Compression is the idea of taking some input with some size and producing a smaller output that represents the input. –If the input can be generated perfectly from the output by some reverse process then it is lossless compression. –If the input cannot be generated perfectly from the output by some reverse process then it is lossy compression. Example: image compression

5 Huffman coding is a lossless compression technique for text. Storage for ASCII is 8 bits per character and Unicode is 16 bits per character. Given a file of characters we examine the characters and sort them by frequency. The characters that occur most frequently are represented with fewer bits and the ones that occur least frequently are represented with more bits. Hence this is a variable length coding algorithm. Huffman codes for the characters can be generated by creating a tree where the characters are in leaf nodes and the paths from root to leaves create the Huffman code for the character at the leaf. Each link is a 0 or 1. Huffman coding

6 1. Start out with a forest of trees, each being a single node. Each node represents a character and the frequency of that character. The weight of a tree is the sum of its nodes' frequencies and this weight is stored in the root. 2. While there's still more than one tree –The two trees (tree1 and tree2) with the smallest weights are joined together into one tree whose root has weight equal to the sum of the two subtree weights, with tree1 as the left subtree of the root and tree2 as the right subtree of the root 3. Assign a 0 to the left link and a 1 to the right link of each node; the tree generated above is an optimal Huffman encoding tree. Let's see a small example of generating the Huffman codes by creating a Huffman tree: –Let's examine the text: “Java is a programming language.” Huffman coding
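A minimal Java sketch of this tree-building procedure, using a priority queue ordered by weight so the two lightest trees are always joined first; the class and method names (HuffNode, buildTree, printCodes) are illustrative, not from the lecture.

```java
import java.util.Map;
import java.util.PriorityQueue;

// Illustrative node type: a leaf holds a character; an internal node holds only a weight.
class HuffNode implements Comparable<HuffNode> {
    char ch;             // meaningful only for leaves
    int weight;          // frequency (leaf) or sum of subtree frequencies (internal)
    HuffNode left, right;

    HuffNode(char ch, int weight) { this.ch = ch; this.weight = weight; }
    HuffNode(HuffNode left, HuffNode right) {
        this.left = left;
        this.right = right;
        this.weight = left.weight + right.weight;
    }
    public int compareTo(HuffNode other) { return Integer.compare(weight, other.weight); }
}

public class HuffmanBuilder {
    // Step 1: one single-node tree per character; step 2: repeatedly join the two lightest.
    static HuffNode buildTree(Map<Character, Integer> freq) {
        PriorityQueue<HuffNode> forest = new PriorityQueue<>();
        for (Map.Entry<Character, Integer> e : freq.entrySet())
            forest.add(new HuffNode(e.getKey(), e.getValue()));
        while (forest.size() > 1) {
            HuffNode tree1 = forest.poll();   // smallest weight -> left subtree (0 link)
            HuffNode tree2 = forest.poll();   // next smallest  -> right subtree (1 link)
            forest.add(new HuffNode(tree1, tree2));
        }
        return forest.poll();                 // root of the Huffman tree
    }

    // Step 3: walk root-to-leaf paths; a left edge appends '0', a right edge appends '1'.
    static void printCodes(HuffNode node, String code) {
        if (node.left == null && node.right == null) {
            System.out.println("'" + node.ch + "' -> " + code);
            return;
        }
        printCodes(node.left, code + "0");
        printCodes(node.right, code + "1");
    }

    public static void main(String[] args) {
        String text = "Java is a programming language.";
        Map<Character, Integer> freq = new java.util.HashMap<>();
        for (char c : text.toCharArray()) freq.merge(c, 1, Integer::sum);
        printCodes(buildTree(freq), "");      // one variable-length code per distinct character
    }
}
```

The priority queue keeps each "find the two smallest trees" step at O(log n), so building the tree for n distinct characters takes O(n log n) overall.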

7 Note that no character encoding is a prefix of any other character encoding. What does that mean? Why is it necessary? Huffman coding
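It means decoding a stream of bits is unambiguous: starting at the root, follow 0 to the left and 1 to the right, and the first leaf reached is a complete character, since no code is the beginning of a longer one. A sketch of such a decoder, reusing the illustrative HuffNode class above:

```java
// Decoding relies on the prefix property: 0 -> left, 1 -> right, emit a character at each leaf.
static String decode(HuffNode root, String bits) {
    StringBuilder out = new StringBuilder();
    HuffNode node = root;
    for (char bit : bits.toCharArray()) {
        node = (bit == '0') ? node.left : node.right;
        if (node.left == null && node.right == null) {  // leaf: one complete code consumed
            out.append(node.ch);
            node = root;                                 // restart for the next character
        }
    }
    return out.toString();
}
```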

8 Huffman coding is an optimal coding when each symbol (character) is coded independently. Other coding schemes could assign codes to groups of characters and possibly get better compression. Huffman coding

9 From the syllabus Course Goals and Objectives 1. To understand, be able to use and be able to write computer programs utilizing the various data structures that we'll cover. 2. To understand, be able to use and be able to write computer programs utilizing the various algorithms that we'll cover. 3. To be able to, to some degree, analyze algorithms that solve the same problem. Then be able to determine which is more efficient and why. 4. To become more proficient computer programmers. Learn and practice good techniques for testing and debugging. Closing Remarks

10 What have we done this semester? –Learned many common data structures that are used extensively in computer science and how they are implemented Linked lists, doubly linked lists, etc. Trees, Binary Trees, BSTs Stack Queue Priority Queue Heap Graph (directed/undirected, weighted/unweighted) Hash Table Balanced Trees (AVL, B-Tree) Closing Remarks

11 What have we done this semester? –Learned many algorithms that use these data structures, implemented them and analyzed (to some extent) their running times Binary tree traversals – inorder, preorder, postorder Recursive algorithms, Divide and Conquer algorithms, Dynamic Programming algorithms Breadth First Search, Depth First Search, Dijkstra's MergeSort, RadixSort, HeapSort, QuickSort Selection Huffman coding algorithm –Gained more programming experience, especially with larger programs. Closing Remarks

