Presentation on theme: "Sorting" — Presentation transcript:

1 Sorting

2 Brute-Force Sorting Algorithm
Selection Sort: Scan the array to find its smallest element and swap it with the first element. Then, starting with the second element, scan the elements to the right of it to find the smallest among them and swap it with the second element. Generally, on pass i (0 ≤ i ≤ n-2), find the smallest element in A[i..n-1] and swap it with A[i]:

A[0] ≤ ... ≤ A[i-1] | A[i], ..., A[min], ..., A[n-1]
(the elements before the bar are already in their final positions)

Example: 17, 16, 3, 14, 7
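The pass structure described above can be sketched in Python; this is a minimal in-place version run on the slide's example list:

```python
def selection_sort(a):
    """In-place selection sort: on pass i, find the smallest element
    in a[i..n-1] and swap it with a[i]."""
    n = len(a)
    for i in range(n - 1):          # passes 0 .. n-2
        min_idx = i
        for j in range(i + 1, n):   # scan the unsorted tail
            if a[j] < a[min_idx]:
                min_idx = j
        a[i], a[min_idx] = a[min_idx], a[i]
    return a

print(selection_sort([17, 16, 3, 14, 7]))  # → [3, 7, 14, 16, 17]
```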

3 Analysis of Selection Sort
Time efficiency: Θ(n²), since it makes n(n-1)/2 comparisons on any input. Space efficiency: Θ(1), so in place.

4 Bubble Sort Repeatedly compare adjacent elements and swap them if they are out of order; after each pass, the largest remaining element "bubbles up" to its final position. Example: 12, 16, 3, 14, 7

5 Time Efficiency: Θ(n²)
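The slides give only the example list for bubble sort; a standard implementation (an assumed textbook version, not taken from the slides) looks like this in Python:

```python
def bubble_sort(a):
    """Repeatedly sweep the array, swapping adjacent out-of-order pairs;
    each sweep moves the largest remaining element to the end."""
    n = len(a)
    for i in range(n - 1):              # n-1 passes
        for j in range(n - 1 - i):      # the last i elements are final
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([12, 16, 3, 14, 7]))  # → [3, 7, 12, 14, 16]
```

The two nested passes make the Θ(n²) bound on the next slide visible directly in the loop structure.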

6 Radix Sort Radix sort is a stable sorting algorithm used mainly for sorting strings of the same length. Description: The fundamental principle of radix sort stems from the definition of a stable sort: a sorting algorithm is stable if it maintains the relative order of equal keys. Radix sort iteratively orders all the strings by their n-th character: in the first iteration, the strings are ordered by their last character; in the second pass, they are ordered by their penultimate character. Because the sort is stable, strings that share the same penultimate character remain sorted by their last characters. After the n-th pass, the strings are sorted with respect to all character positions.

7 RadixSort(A, d)
  for i = 1 to d
    StableSort(A) on digit i

Example: 170, 45, 75, 90, 2, 24, 802, 66
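The pseudocode above can be sketched as an LSD (least-significant-digit-first) radix sort on the slide's integer example; here the stable per-digit pass is done with bucket lists (a full counting sort would work equally well):

```python
def radix_sort(a, base=10):
    """LSD radix sort: stably reorder by each digit,
    least significant digit first."""
    if not a:
        return a
    digits = len(str(max(a)))               # number of passes d
    for d in range(digits):
        buckets = [[] for _ in range(base)]
        for x in a:                          # appending preserves order: stable
            buckets[(x // base ** d) % base].append(x)
        a = [x for b in buckets for x in b]  # concatenate buckets
    return a

print(radix_sort([170, 45, 75, 90, 2, 24, 802, 66]))
# → [2, 24, 45, 66, 75, 90, 170, 802]
```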

8 Asymptotic complexity
The asymptotic complexity of radix sort is O(s · T(n)), where s is the length of the sorted strings and T(n) is the complexity of the inner stable sort.

9 Counting Sort Description
Counting sort utilizes knowledge of the smallest and the largest element in the array. Using this information, it creates a helper array of frequencies of all discrete values in the main array and then recalculates it into an array of occurrences (for every value, the array of occurrences contains the index of its last occurrence in the sorted array). With this information the actual sorting is simple: counting sort iterates over the main array and places each value at the position known from the array of occurrences.
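The three phases just described (frequencies, cumulative last-occurrence positions, direct placement) can be sketched in Python; the input list is an illustrative example, not from the slides:

```python
def counting_sort(a):
    """Counting sort: count frequencies, turn them into cumulative
    last-occurrence positions, then place each element directly."""
    if not a:
        return a
    lo, hi = min(a), max(a)              # smallest and largest element
    count = [0] * (hi - lo + 1)
    for x in a:                          # phase 1: frequencies
        count[x - lo] += 1
    for i in range(1, len(count)):       # phase 2: array of occurrences
        count[i] += count[i - 1]
    out = [0] * len(a)
    for x in reversed(a):                # phase 3; reversed keeps it stable
        count[x - lo] -= 1
        out[count[x - lo]] = x
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # → [1, 2, 2, 3, 3, 4, 8]
```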

10 Advantages and disadvantages
The biggest advantage of counting sort is its O(n + k) complexity, where n is the size of the sorted array and k is the size of the helper array (the range of distinct values). It also has several disadvantages: if non-primitive (object) elements are sorted, another helper array is needed to store the sorted elements. The second and major disadvantage is that counting sort can be used only to sort discrete values (for example, integers), because otherwise the array of frequencies cannot be constructed.

11 Bucket Sort Bucket sort (bin sort) is a stable sorting algorithm based on partitioning the input array into several parts, so-called buckets, and using some other sorting algorithm for the actual sorting of these sub-problems. Description: First, the algorithm divides the input array into buckets. Each bucket contains some range of input elements (the elements should be uniformly distributed to ensure an optimal division among buckets). In the second phase, bucket sort orders each bucket using some other sorting algorithm, or by recursively calling itself; with the bucket count equal to the range of values, bucket sort degenerates to counting sort.

12 Finally, the algorithm merges all the ordered buckets. Because every bucket covers a different range of element values, bucket sort simply copies the elements of each bucket into the output array (concatenates the buckets). The asymptotic complexity of bucket sort is O(n + b · T(n/b)), where n is the size of the input array, b is the number of buckets, and T is the complexity of the inner sorting algorithm.

13 Bucket Sort Example
A = (0.5, 0.1, 0.3, 0.4, 0.3, 0.2, 0.1, 0.1, 0.5, 0.4, 0.5)

Buckets in array:
B[1]: 0.1, 0.1, 0.1
B[2]: 0.2
B[3]: 0.3, 0.3
B[4]: 0.4, 0.4
B[5]: 0.5, 0.5, 0.5

Sorted list: 0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5, 0.5
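The example above can be reproduced with a short Python sketch, assuming values in [0, 1) and using the built-in sort as the inner sorting algorithm:

```python
def bucket_sort(a, k=5):
    """Distribute values in [0, 1) into k buckets, sort each bucket
    with an inner sorting algorithm, then concatenate the buckets."""
    buckets = [[] for _ in range(k)]
    for x in a:
        buckets[min(int(x * k), k - 1)].append(x)  # pick bucket by range
    for b in buckets:
        b.sort()                                   # inner sorting algorithm
    return [x for b in buckets for x in b]         # concatenate

a = [0.5, 0.1, 0.3, 0.4, 0.3, 0.2, 0.1, 0.1, 0.5, 0.4, 0.5]
print(bucket_sort(a))
# → [0.1, 0.1, 0.1, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5, 0.5, 0.5]
```

Note that concatenation alone yields a sorted result only because each bucket covers a disjoint, increasing range of values.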

14 Usage Bucket sort can be used for distributed sorting: each bucket can be ordered by a different thread or even by a different computer. Another use case is sorting huge input data that cannot be loaded into main memory by an ordinary algorithm. This problem can be solved by dividing the data into sufficiently small buckets, sorting them one by one with an appropriate algorithm, while storing the rest of the data in external memory (e.g., a hard drive).

15 Chapter Summary (2)

16 Heaps and Heapsort Definition: A heap is a binary tree with keys at its nodes (one key per node) such that:
- It is essentially complete, i.e., all its levels are full except possibly the last level, where only some rightmost keys may be missing
- The key at each node is ≥ the keys at its children (this is called a max-heap), or the key at each node is ≤ the keys at its children (this is called a min-heap)
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin, "Introduction to the Design & Analysis of Algorithms," 2nd ed., Ch. 6

17 Illustration of the heap’s definition
[Figure: three example trees, labeled "a heap", "not a heap", "not a heap"]
Note: A heap's elements are ordered top down (along any path down from its root), but they are not ordered left to right.

18 Some Important Properties of a Heap
- Given n, there exists a unique binary tree with n nodes that is essentially complete, with height h = ⌊log₂ n⌋
- The root contains the largest key in a max-heap
- The root contains the smallest key in a min-heap
- The sub-tree rooted at any node of a heap is also a heap
- A heap can be represented as an array

19 Heap’s Array Representation
Store the heap's elements in an array (whose elements are indexed, for convenience, from 1 to n) in top-down, left-to-right order. E.g., the heap with keys 9, 5, 3, 1, 4, 2 is stored as the array [9, 5, 3, 1, 4, 2].
- Left child of node j is at 2j
- Right child of node j is at 2j + 1
- Parent of node j is at ⌊j/2⌋
- Parental nodes occupy the first ⌊n/2⌋ locations
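The index arithmetic above can be checked directly in Python, using a 1-indexed array with slot 0 left unused and the slide's keys 9, 5, 3, 1, 4, 2:

```python
# Max-heap stored in a 1-indexed array (index 0 unused for clarity).
heap = [None, 9, 5, 3, 1, 4, 2]

def left(j):   return 2 * j        # left child index
def right(j):  return 2 * j + 1    # right child index
def parent(j): return j // 2       # parent index, i.e. floor(j/2)

assert heap[left(1)] == 5 and heap[right(1)] == 3   # children of root 9
assert heap[parent(6)] == 3                         # parent of key 2 is key 3
print("index checks passed")
```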

20 Heap Operations: Heapify()
Heapify(): maintains the heap property.
Given: a node i in the heap with children l and r, and two sub-trees rooted at l and r, each assumed to be a heap.
Problem: the sub-tree rooted at i may violate the heap property.
Action: let the value of the parent node "float down" so the sub-tree rooted at i satisfies the heap property.
David Luebke

21 Heap Construction (bottom-up) or Build Heap()
Step 0: Initialize the structure with the keys in the order given.
Step 1: Starting with the last (rightmost) parental node, fix the heap rooted at it if it doesn't satisfy the heap condition: keep exchanging it with its larger child until the heap condition holds (Heapify()).
Step 2: Repeat Step 1 for the preceding parental node.

22 Example of Heap Construction or Build Heap()
Construct a heap for the list 2, 9, 7, 6, 5, 8
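The bottom-up construction on the slide's list can be traced with a short Python sketch (1-indexed array, slot 0 unused, matching the array representation above):

```python
def sift_down(a, i, n):
    """Let a[i] float down until the subtree rooted at i is a max-heap
    (1-indexed array, a[0] unused)."""
    while 2 * i <= n:
        j = 2 * i                          # left child
        if j < n and a[j + 1] > a[j]:
            j += 1                         # pick the larger child
        if a[i] >= a[j]:
            break                          # heap condition holds
        a[i], a[j] = a[j], a[i]
        i = j

def build_heap(a, n):
    for i in range(n // 2, 0, -1):         # last parental node first
        sift_down(a, i, n)

a = [None, 2, 9, 7, 6, 5, 8]
build_heap(a, 6)
print(a[1:])  # → [9, 6, 8, 2, 5, 7]
```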

23 Heap-sort Stage 1: Construct a heap for a given list of n keys
Stage 2: Repeat the root-removal operation n-1 times:
- Exchange the keys in the root and in the last (rightmost) leaf
- Decrease the heap size by 1
- If necessary, swap the new root with its larger child until the heap condition holds
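The two stages can be combined into a complete, self-contained heapsort sketch (0-indexed here, as is conventional in Python; this is an assumed standard implementation, not the textbook's code):

```python
def heapsort(a):
    """Stage 1: build a max-heap; Stage 2: repeatedly swap the root with
    the last leaf, shrink the heap by one, and restore the heap condition."""
    def sift_down(i, n):
        while 2 * i + 1 < n:
            j = 2 * i + 1                      # left child
            if j + 1 < n and a[j + 1] > a[j]:
                j += 1                         # larger child
            if a[i] >= a[j]:
                break
            a[i], a[j] = a[j], a[i]
            i = j
    n = len(a)
    for i in range(n // 2 - 1, -1, -1):        # Stage 1: build heap, O(n)
        sift_down(i, n)
    for end in range(n - 1, 0, -1):            # Stage 2: n-1 root removals
        a[0], a[end] = a[end], a[0]
        sift_down(0, end)
    return a

print(heapsort([2, 9, 7, 6, 5, 8]))  # → [2, 5, 6, 7, 8, 9]
```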

24 Analyzing Heapsort The call to BuildHeap() takes O(n) time
Each of the n - 1 calls to Heapify() takes O(lg n) time. Thus the total time taken by HeapSort() = O(n) + (n - 1) · O(lg n) = O(n) + O(n lg n) = O(n lg n). Both worst-case and average-case efficiency: Θ(n log n).

25 Example of Heap Sort

26 Hashing A very efficient method for implementing a dictionary, i.e., a set with the operations:
- find
- insert
- delete
Important applications:
- symbol tables
- databases (extendible hashing)
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin, "Introduction to the Design & Analysis of Algorithms," 2nd ed., Ch. 7

27 Hash tables and hash functions
The idea of hashing is to map keys of a given file of size n into a table of size m, called the hash table, by using a predefined function, called the hash function:
h: K → location (cell) in the hash table
Example: student records, key = SSN. Hash function: h(K) = K mod m, where m is some integer (typically a prime). If m = 1000, the record with a given SSN K is stored at h(K) = K mod 1000.
Generally, a hash function should:
- be easy to compute
- distribute keys about evenly throughout the hash table
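The division hash function above is a one-liner in Python; the 9-digit key below is a hypothetical SSN-style value chosen for illustration:

```python
m = 1000                 # table size (a prime is usually a better choice)

def h(key, m=m):
    """Division-method hash: map an integer key to a cell in 0..m-1."""
    return key % m

# Hypothetical 9-digit key, not from the slides.
print(h(123456789))  # → 789
```

With m = 1000 the hash simply keeps the last three decimal digits of the key, which is why a prime m (rather than a power of 10) usually spreads real-world keys more evenly.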

28 Collisions If h(K1) = h(K2), there is a collision
- Good hash functions result in fewer collisions, but some collisions should be expected (birthday paradox)
- Two principal hashing schemes handle collisions differently:
  - Open hashing: each cell is the header of a linked list of all keys hashed to it
  - Closed hashing: one key per cell; in case of collision, find another cell by
    - linear probing: use the next free cell
    - double hashing: use a second hash function to compute the increment
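The open-hashing (separate-chaining) scheme can be sketched in a few lines of Python; the class name, table size m = 11, and the keys are all illustrative assumptions, and Python lists stand in for the linked lists:

```python
class ChainedHashTable:
    """Open hashing: each cell heads a chain of all keys hashed to it."""
    def __init__(self, m=11):
        self.m = m
        self.cells = [[] for _ in range(m)]   # one chain per cell
    def insert(self, key):
        chain = self.cells[key % self.m]
        if key not in chain:
            chain.append(key)
    def find(self, key):
        return key in self.cells[key % self.m]  # search only one chain
    def delete(self, key):
        chain = self.cells[key % self.m]
        if key in chain:
            chain.remove(key)

t = ChainedHashTable()
t.insert(7)
t.insert(18)                   # 18 mod 11 == 7: a collision, same chain
print(t.find(18), t.find(29))  # → True False
```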

