Data Structures Review Session


Data Structures Review Session. Ramakrishna, PhD student, Grading Assistant for this course. CS 307 Fundamentals of Computer Science

Quick Sort Review: Partitioning

A key step in the Quick sort algorithm is partitioning the array. We choose some (any) number p in the array to use as a pivot. We partition the array into three parts:
- the numbers less than p
- p itself
- the numbers greater than or equal to p
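A minimal sketch of the partition step and the resulting quicksort (Lomuto-style, using the last element as the pivot, the same convention the problems later assume; function names are illustrative):

```python
def partition(a, left, right):
    """Partition a[left..right] around the pivot a[right].

    Rearranges the subarray so elements < pivot come first, then the
    pivot, then elements >= pivot; returns the pivot's final index."""
    pivot = a[right]
    store = left
    for i in range(left, right):
        if a[i] < pivot:
            a[i], a[store] = a[store], a[i]
            store += 1
    a[store], a[right] = a[right], a[store]
    return store

def quicksort(a, left=0, right=None):
    """Sort a[left..right] in place by recursive partitioning."""
    if right is None:
        right = len(a) - 1
    if left < right:
        p = partition(a, left, right)
        quicksort(a, left, p - 1)
        quicksort(a, p + 1, right)
```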

Best case: partitioning at various levels (figure)

Analysis of Quick Sort: Best Case

Suppose each partition operation divides the array almost exactly in half. Then the depth of the recursion is log2 n, because that's how many times we can halve n. However, there are many recursive calls! How can we figure this out? We note that:
- each partition is linear over its subarray
- all the partitions at one level together cover the array

Best case II

So the depth of the recursion is log2 n, and at each level of the recursion, all the partitions at that level together do work that is linear in n: O(log2 n) * O(n) = O(n log2 n). Hence in the best (and, it turns out, the average) case, quicksort has time complexity O(n log2 n). What about the worst case?

Worst case: partitioning at various levels (figure)

Worst case

In the worst case, partitioning always divides the size-n array into these three parts:
- a length-one part, containing the pivot itself
- a length-zero part, and
- a length n-1 part, containing everything else
We don't recurse on the zero-length part. Recursing on the length n-1 part requires (in the worst case) recursing to depth n-1.

Worst case for quick sort

In the worst case, recursion may be n levels deep (for an array of size n), but the partitioning work done at each level is still O(n): O(n) * O(n) = O(n^2). So the worst case for Quick sort is O(n^2). When does this happen? When the array is sorted to begin with!

Typical case for quick sort

If the array is sorted to begin with, Quick sort is terrible: O(n^2). It is possible to construct other bad cases. However, Quick sort is usually O(n log2 n), and the constants are so good that Quick sort is generally the fastest algorithm known. Most real-world sorting is done by Quick sort.

Problems on Quick Sort (1)

What is the running time of QUICKSORT when:
a. all elements of array A have the same value?
b. the array A contains distinct elements and is sorted in descending order?
Assume that you always use the last element in the subarray as the pivot. Answer?

Answer (1)

a) Whatever pivot you choose in each subarray, the result is a worst-case partition, and hence the running time is O(n^2).
b) The same holds. Since you always pick the maximum element in the subarray as the pivot, each partition you do is a worst-case partition, and hence the running time is O(n^2) again.

Merge Sort

Approach:
- partition the list of elements into 2 lists
- recursively sort both lists
- given 2 sorted lists, merge them into 1 sorted list:
  - examine the head of both lists
  - move the smaller to the end of the new list
Performance: O(n log n) average / worst case
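The steps above can be sketched as follows (a minimal top-down version; names are illustrative):

```python
def merge(left, right):
    """Merge two sorted lists into one sorted list by repeatedly
    moving the smaller head element to the output."""
    out = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i])
            i += 1
        else:
            out.append(right[j])
            j += 1
    out.extend(left[i:])   # one of these two is already empty
    out.extend(right[j:])
    return out

def merge_sort(a):
    """Split in half, recursively sort each half, merge the results."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    return merge(merge_sort(a[:mid]), merge_sort(a[mid:]))
```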

Merge Example: the sorted halves are merged step by step into 2 4 5 7 8 (animation frames omitted)

Merge Sort Example: 7 2 8 5 4 is split into halves, each half is sorted recursively, and the sorted halves are merged to give 2 4 5 7 8 (animation frames omitted)

Problems on Merge Sort (1)

2) Let S be a sequence of n elements. An inversion in S is a pair of elements x and y such that x appears before y in S but x > y. Describe an algorithm running in O(n log n) time for determining the number of inversions in S. Solution?

Simple Naïve Algorithm

Pseudo Code:
For I = 1 to n          // n elements in A
  For J = I+1 to n
    Compare A[I] and A[J]. If A[I] > A[J] then inversions++
Time Complexity: O(n^2)
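The pseudocode above, sketched in Python (0-indexed rather than the slide's 1-indexed arrays):

```python
def count_inversions_naive(a):
    """O(n^2): compare every pair (i, j) with i < j and count
    the pairs that are out of order."""
    n = len(a)
    inversions = 0
    for i in range(n):
        for j in range(i + 1, n):
            if a[i] > a[j]:
                inversions += 1
    return inversions
```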

Smart Algorithm

Hint: modify the MERGE subprocedure to solve this problem efficiently.

Original Merge Sort approach:
- partition the list of elements into 2 lists
- recursively sort both lists
- given 2 sorted lists, merge them into 1 sorted list:
  - examine the head of both lists
  - move the smaller to the end of the new list
Performance: O(n log n) average / worst case


Modified Merge Sort

Modified Merge Sort approach:
- partition the list of elements into 2 lists
- recursively sort both lists
- given 2 sorted lists, merge them into 1 sorted list:
  - examine the head of the left list and the head of the right list
  - if the head of the left list is greater than the head of the right list, we have inversions: the right head is smaller than every element still remaining in the left list, so add that remaining count to the inversion total
Performance: O(n log n) average / worst case
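A sketch of the modified merge sort; the key line adds the number of elements remaining in the left list whenever the right head is smaller (names are illustrative):

```python
def count_inversions(a):
    """O(n log n): merge sort that also counts inversions.
    Returns (sorted_list, inversion_count)."""
    if len(a) <= 1:
        return a, 0
    mid = len(a) // 2
    left, inv_left = count_inversions(a[:mid])
    right, inv_right = count_inversions(a[mid:])
    merged = []
    inv = inv_left + inv_right
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            # right[j] precedes every element still in left[i:],
            # and is smaller than all of them: len(left) - i inversions.
            inv += len(left) - i
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged, inv
```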

Problems on Merge Sort (2)

3) Given a set A with n elements and a set B with m elements, describe an efficient algorithm for computing A XOR B, the set of elements that are in A or in B, but not in both. Solution?

Naïve Algorithm

Pseudo Code:
For each element x in A        // n elements in A
  For each element y in B      // m elements in B
    Compare x and y. If both are the same, mark them.
Go through A and B and find the elements that are unmarked. These are the elements of the XOR set.
Time Complexity: O(n*m) + O(n) + O(m) = O(n*m)

Smart Algorithm

Pseudo Code:
Sort array A using an O(n log n) sorting algorithm.   // n elements in A
For each element x in B                               // m elements in B
  Perform a binary search for x in A. If a match is found, mark it in A; else add x to the XOR set.
Go through A and copy the unmarked elements to the XOR set.
Time Complexity: O(n log n) + O(m log n) + O(n) = O((m+n) log n)
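A sketch of this pseudocode, assuming A and B each contain distinct elements (they are sets); the standard library's `bisect_left` provides the binary search:

```python
from bisect import bisect_left

def sym_difference(A, B):
    """Elements in A or B but not both, via sort + binary search.
    Assumes A and B each hold distinct elements."""
    A_sorted = sorted(A)                 # O(n log n)
    in_both = [False] * len(A_sorted)    # "marks" on A's elements
    result = []
    for x in B:                          # m binary searches: O(m log n)
        i = bisect_left(A_sorted, x)
        if i < len(A_sorted) and A_sorted[i] == x:
            in_both[i] = True            # x appears in both sets
        else:
            result.append(x)             # x only in B
    for i, x in enumerate(A_sorted):     # O(n) final pass
        if not in_both[i]:
            result.append(x)             # x only in A
    return result
```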

Problems on Sorting

4) Describe and analyze an efficient method for removing all duplicates from a collection A of n elements. Solution?

Smart Algorithm

Pseudo Code:
Sort array A using an O(n log n) sorting algorithm.   // n elements in A
Since the array is now sorted, any duplicates will be adjacent to one another. Let B be the resulting array without duplicates.
B[1] = A[1]
For I = 2 to n
  If A[I] is the same as the most recently added element of B, skip it; else add A[I] to B.
Time Complexity: O(n log n) + O(n) = O(n log n)
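The same idea in Python (0-indexed; the empty-output check replaces the slide's explicit B[1] = A[1] step):

```python
def remove_duplicates(a):
    """Sort, then keep each element only when it differs from the
    most recently kept one; duplicates are adjacent after sorting."""
    out = []
    for x in sorted(a):          # O(n log n)
        if not out or x != out[-1]:
            out.append(x)        # first copy of each value
    return out
```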

Problems on Sorting

5) What are the worst-case and average-case running times for insertion sort, merge sort and quick sort?
Answer:
- Insertion sort is O(n^2) in both cases.
- Merge sort is O(n log n) in both cases.
- Quick sort is O(n log n) on average, O(n^2) in the worst case.
In instances where the worst case and average case differ, give an example of how the worst case occurs.
Answer: in Quick sort, the worst case occurs when the partition is repeatedly degenerate.

Problems on Sorting

Which of the following take linear execution time?
- Insertion sort
- Merge sort
- Quick sort
- Quick select
- None of the above

Problems on Sorting

When all elements are equal, what is the running time of:
- Insertion sort: O(n) (best case)
- Merge sort: O(n log n)
- Quick sort: O(n^2) (worst case)
When the input has already been sorted, what is the running time of:
- Quick sort: O(n^2) (worst case)

Problems on Sorting

When the input has been reverse sorted, what is the running time of:
- Insertion sort: O(n^2) (worst case)
- Merge sort: O(n log n)
- Quick sort: O(n^2) (worst case)

Searching Problems

6) Write down the time complexities (best, worst and average cases) of performing Linear Search and Binary Search in a sorted array of n elements. Solution?

Linear Search (1)

Method: scan the input list and compare the input element with every element in the list. If a match is found, return.
Worst-case time complexity: the obvious worst case is that the input element is not in the list. In this case we go through the entire list, so the worst-case time complexity is O(n). Note that this search doesn't really exploit the fact that the list is sorted!

Linear Search (2)

Average-case time complexity: given that each element is equally likely to be the one searched for, and that the element searched for is present in the array, a linear search will on average have to examine half the elements: half the time the wanted element is in the first half, and half the time it is in the second half. The average case is therefore Θ(n/2), which (ignoring the constant) is Θ(n), the same as the worst case.
Best-case time complexity: the obvious best case is that the first element of the array equals the input element, so the search terminates immediately. Hence the best case is Θ(1).
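The earlier remark, that a plain scan ignores the sorted order, suggests a small tweak: stop as soon as an element exceeds the target. A minimal sketch (the early-exit test is an addition for illustration, not part of the slide's stated method):

```python
def linear_search(a, target):
    """Linear search in a sorted list a; returns the index of target
    or -1. Stops early once elements exceed the target."""
    for i, x in enumerate(a):
        if x == target:
            return i
        if x > target:      # sorted input: target cannot appear later
            break
    return -1
```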

Binary Search (1)

Binary search can only be performed on a sorted array. We first compare the input element to the middle element; if they are equal, the search is over. If the input element is greater than the middle element, we search the right half recursively; otherwise we search the left half recursively, until the middle element equals the input element (or the range is empty).
Worst- and average-case complexity: Θ(log2 n). Best case (the middle element matches on the first comparison): Θ(1).
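A sketch of the search described above, written iteratively (an implementation choice; the slide phrases it recursively):

```python
def binary_search(a, target):
    """Binary search on a sorted list a; returns the index of target
    or -1 if absent. O(log n) comparisons."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1     # search the right half
        else:
            hi = mid - 1     # search the left half
    return -1
```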

Problem on Complexity

7) Let f(n), g(n) be asymptotically nonnegative. Show that max(f(n), g(n)) = Θ(f(n) + g(n)).
SOLUTION: By definition, f(n) = Θ(g(n)) if and only if there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0.
So to prove this, we need to find positive constants c1, c2 and n0 such that 0 <= c1*(f(n) + g(n)) <= max(f(n), g(n)) <= c2*(f(n) + g(n)) for all n >= n0.
Selecting c2 = 1 establishes the third inequality, since the maximum must be at most the sum. Selecting c1 = 1/2 establishes the second, since the maximum is always at least the average of f(n) and g(n). We can also select n0 = 1.

Master Theorem

For a recurrence of the form T(n) = a*T(n/b) + f(n), with a >= 1 and b > 1:
- if f(n) = O(n^(log_b a - e)) for some e > 0, then T(n) = Θ(n^(log_b a))
- if f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) * log n)
- if f(n) = Ω(n^(log_b a + e)) for some e > 0, and a*f(n/b) <= c*f(n) for some c < 1 and large n, then T(n) = Θ(f(n))

Solving Recurrence Relations

8) Solve the recurrence equation T(n) = 16T(n/4) + n^2.
We solve this recurrence using the Master Theorem. Comparing it with the general form T(n) = aT(n/b) + f(n), we get a = 16, b = 4, and f(n) = n^2, a quadratic function of n. Before applying the Master Theorem, we must compare f(n) with the function n^(log_b a); intuitively, the solution of the recurrence is determined by the larger of the two functions. Here n^(log_b a) = n^(log_4 16) = n^2, since 16 = 4^2 and log_4 4 = 1.

Continued...

Here we can see that n^(log_b a) = f(n) = n^2. According to the Master Theorem, when n^(log_b a) is larger the solution is T(n) = Θ(n^(log_b a)); when f(n) is larger the solution is T(n) = Θ(f(n)); and when the two are equal (which is the case here) we multiply by a logarithmic factor, and the solution is T(n) = Θ(n^(log_b a) * log n) = Θ(f(n) * log n).

We can verify that f(n) = Θ(n^(log_b a)) as follows. By definition, f(n) = Θ(g(n)) when there exist positive constants c1, c2 and n0 such that 0 <= c1*g(n) <= f(n) <= c2*g(n) for all n >= n0. Substituting g(n) = n^(log_b a) = n^2 and taking c1 = c2 = 1, we get 0 <= n^2 <= n^2 <= n^2, which holds for all n >= 1. Hence, with c1 = c2 = 1 and n0 = 1, f(n) = Θ(n^(log_b a)). (In general, when f(n) = g(n) we can certainly say that f(n) = Θ(g(n)).)

Going back to the Master Theorem: since the second case applies, T(n) = Θ(n^(log_b a) * log n). So the solution is T(n) = Θ(n^2 log n).
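As a numerical sanity check (not part of the original solution), we can evaluate the recurrence directly. Assuming the base case T(1) = 1, for n = 4^k the recurrence works out exactly to n^2 * (log_4 n + 1), which is Θ(n^2 log n) as derived above:

```python
def T(n):
    """Evaluate T(n) = 16*T(n/4) + n^2 with the assumed base case
    T(1) = 1, for n a power of 4."""
    if n == 1:
        return 1
    return 16 * T(n // 4) + n * n

# For n = 4^k, unrolling gives T(n) = n^2 * (k + 1) = n^2 * (log4(n) + 1),
# i.e. Θ(n^2 log n), matching the Master Theorem's second case.
for k in range(1, 6):
    n = 4 ** k
    assert T(n) == n * n * (k + 1)
```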