
1 Divide & Conquer Algorithms Part 4

2 Recursion Review
A recursive function calls itself, either directly or indirectly through another function.
Recursive solutions involve:
- Base case: the function returns a solution directly
- Recursive case: divide the problem into one or more smaller or simpler parts, call the function recursively on each part, and combine the solutions of the parts into a solution for the whole problem

3 Designing Algorithms
Incremental design
- Most of the algorithms you have seen and programmed
- Iterative: an algorithmic technique where a function solves a problem by repeatedly working on successive parts of the problem

4 Designing Algorithms (cont)
Divide & conquer design has three steps:
- DIVIDE: the problem is divided into a number of subproblems
- CONQUER: solve the subproblems recursively; base cases are simple enough to solve directly
- COMBINE: the solutions to the subproblems are combined to solve the original problem

5 Analyzing Divide & Conquer Algorithms
Use a recurrence:
- For small subproblems, the solution takes constant time
- The DIVIDE step creates a subproblems, each of size n/b
- D(n): time to divide into subproblems
- C(n): time to combine the subproblem solutions
So for large n, T(n) = aT(n/b) + D(n) + C(n)

6 MergeSort
Requires additional memory space as a function of n
- Unlike Insertion Sort, which sorts in place and requires only a constant amount of additional space
Sorts in Θ(n lg n)

7 MergeSort
DIVIDE the n-element sequence to be sorted into two subsequences of n/2 elements each
CONQUER (sort) the two subsequences recursively using merge sort
- The recursion stops when a subproblem contains only one element
COMBINE (merge) the two sorted subsequences to produce the sorted answer

8 MergeSort (cont)
Dividing:
19 9 62 74 94 13 90 48
19 9 62 74 | 94 13 90 48
19 9 | 62 74 | 94 13 | 90 48
19 | 9 | 62 | 74 | 94 | 13 | 90 | 48

9 MergeSort (cont)
Merging:
19 | 9 | 62 | 74 | 94 | 13 | 90 | 48
9 19 | 62 74 | 13 94 | 48 90
9 19 62 74 | 13 48 90 94
9 13 19 48 62 74 90 94

10 MergeSort (cont)
To sort the entire array: MergeSort(A, 1, length(A))

MergeSort(A, p, r)
1. if p < r
2.   q ← ⌊(p + r) / 2⌋
3.   MergeSort(A, p, q)
4.   MergeSort(A, q+1, r)
5.   Merge(A, p, q, r)

11 MergeSort (cont)
Merge(A, p, q, r)
1.  n1 ← q - p + 1
2.  n2 ← r - q
3.  create arrays L[1..n1+1] and R[1..n2+1]
4.  for i ← 1 to n1
5.    L[i] ← A[p+i-1]
6.  for j ← 1 to n2
7.    R[j] ← A[q+j]
8.  L[n1+1] ← ∞     // sentinel values
9.  R[n2+1] ← ∞     // sentinel values
10. i ← 1
11. j ← 1
12. for k ← p to r
13.   if L[i] ≤ R[j]
14.     A[k] ← L[i]
15.     i ← i + 1
16.   else A[k] ← R[j]
17.     j ← j + 1
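
A runnable Python sketch of the two procedures above (function and variable names are mine, not the slides'); the sentinel trick survives as infinity values appended to each half:

import math

def merge(A, p, q, r):
    # Copy the two sorted halves and append infinity sentinels,
    # so neither copy is exhausted before the loop finishes.
    L = A[p:q+1] + [math.inf]
    R = A[q+1:r+1] + [math.inf]
    i = j = 0
    for k in range(p, r + 1):
        if L[i] <= R[j]:
            A[k] = L[i]
            i += 1
        else:
            A[k] = R[j]
            j += 1

def merge_sort(A, p, r):
    if p < r:
        q = (p + r) // 2          # floor((p + r) / 2)
        merge_sort(A, p, q)
        merge_sort(A, q + 1, r)
        merge(A, p, q, r)

A = [19, 9, 62, 74, 94, 13, 90, 48]   # the deck's example input
merge_sort(A, 0, len(A) - 1)
print(A)   # [9, 13, 19, 48, 62, 74, 90, 94]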

12 Analysis of MergeSort
Merge function:
- Lines 1-2: Θ(1)
- Line 3: Θ(n)
- Lines 4-7: i loop + j loop = Θ(n)
- Lines 8-11: Θ(1)
- Line 12: Θ(n) iterations
- Lines 13-17: Θ(1) per iteration
Total run time = Θ(n)

13 Analysis of MergeSort (cont)
MergeSort function (for simplicity, assume n is a power of 2):
- If n = 1, it takes constant time
- If n > 1:
  DIVIDE – lines 1, 2: Θ(1), so D(n) = Θ(1)
  CONQUER – lines 3, 4: 2T(n/2) (two subproblems, each of size n/2)
  COMBINE – line 5: Θ(n), so C(n) = Θ(n)

14 Analysis of MergeSort (cont)
So the recurrence is:
T(n) = Θ(1) if n = 1
T(n) = 2T(n/2) + Θ(n) if n > 1
Note: D(n) + C(n) = Θ(1) + Θ(n) = Θ(n)
The solution to the recurrence: T(n) = Θ(n lg n)

15 QuickSort
Worst case: Θ(n²)
Expected time: Θ(n lg n)
- The constants hidden in the expected time are small
Sorts in place

16 QuickSort (cont)
DIVIDE – Partition A[p..r] into two subarrays A[p..q-1] and A[q+1..r] such that each element of A[p..q-1] is ≤ A[q] ≤ each element of A[q+1..r]
CONQUER – Sort the two subarrays by recursive calls to QuickSort
COMBINE – Since the subarrays are sorted in place, no work is needed to combine them: the whole array is already sorted

17 QuickSort (cont)
To sort the entire array: QuickSort(A, 1, length(A))

QuickSort(A, p, r)
1. if p < r
2.   q ← Partition(A, p, r)
3.   QuickSort(A, p, q-1)
4.   QuickSort(A, q+1, r)

18 QuickSort (cont)
Partition(A, p, r)
1. x ← A[r]
2. i ← p - 1
3. for j ← p to r-1
4.   if A[j] ≤ x
5.     i ← i + 1
6.     Exchange(A[i], A[j])
7. Exchange(A[i+1], A[r])
8. return i + 1
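
A Python sketch of QuickSort with the Lomuto-style Partition above (names are mine):

def partition(A, p, r):
    x = A[r]                      # last element is the pivot
    i = p - 1
    for j in range(p, r):
        if A[j] <= x:
            i += 1
            A[i], A[j] = A[j], A[i]
    A[i + 1], A[r] = A[r], A[i + 1]
    return i + 1                  # final position of the pivot

def quicksort(A, p, r):
    if p < r:
        q = partition(A, p, r)
        quicksort(A, p, q - 1)
        quicksort(A, q + 1, r)

A = [2, 8, 7, 1, 3, 5, 6, 4]
quicksort(A, 0, len(A) - 1)
print(A)   # [1, 2, 3, 4, 5, 6, 7, 8]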

19 QuickSort (cont)
(figure: step-by-step trace of Partition on an example array)

20 QuickSort (cont)
In the traced example, Partition ends by returning i + 1, which is 4

21 Performance of QuickSort
The Partition function's running time is Θ(n)
The running time of QuickSort depends on the balance of the partitions:
- If balanced, QuickSort is asymptotically as fast as MergeSort
- If unbalanced, it is asymptotically as bad as Insertion Sort

22 Performance of QuickSort (cont)
Worst-case partitioning:
- Partitions are always of size n-1 and 0
- Occurs when the array is already sorted
- Recurrence for the running time: T(n) = T(n-1) + T(0) + Θ(n) = T(n-1) + Θ(n)

23 Performance of QuickSort (cont)
(figure: the worst-case recursion tree has depth n, with per-level costs cn, c(n-1), c(n-2), …, c)
Summing the levels gives Total: Θ(n²)

24 Performance of QuickSort (cont)
Best-case partitioning:
- Partitions are always of size ⌊n/2⌋ and ⌈n/2⌉ - 1
- The recurrence for the running time: T(n) ≤ 2T(n/2) + Θ(n), so T(n) = Θ(n lg n)
  (case 2 of the Master Method)

25 Performance of QuickSort (cont)
Balanced partitioning:
- The average case is closer to the best case than to the worst case
- Any split of constant proportionality (say 99 to 1) still gives a running time of Θ(n lg n)
- The recurrence would be T(n) = T(99n/100) + T(n/100) + cn
- It yields a recursion tree of depth Θ(lg n), where the cost at each level is O(n)
- See page 151 (new book) for the picture, or the next slide

26 Performance of QuickSort (cont)
(figure: recursion tree for T(n) = T(n/100) + T(99n/100) + cn; the levels cost cn, then cn again, down to T(1); the shallowest leaf is at depth log₁₀₀ n and the deepest at depth log₁₀₀/₉₉ n, and every level costs at most cn)
Total: Θ(n lg n)

27 Performance of QuickSort (cont)
Intuition for the average case:
- The behavior depends on the relative ordering of the values, not the values themselves
- We will assume (for now) that all permutations are equally likely
- Some splits will be balanced, and some will be unbalanced

28 Performance of QuickSort (cont)
- In a recursion tree for an average case, the "good" and "bad" splits are distributed randomly throughout the tree
- For our example, suppose:
  bad splits and good splits alternate,
  good splits are best-case splits,
  bad splits are worst-case splits, and
  the boundary case (subarray size 0) has cost 1

29 Performance of QuickSort (cont)
- A bad split followed by a good split turns a subarray of size n into subarrays of sizes 0, (n-1)/2 - 1, and (n-1)/2 at cost Θ(n) + Θ(n-1); a single good split produces subarrays of size (n-1)/2 at cost Θ(n)
- The Θ(n-1) cost of the bad split can be absorbed into the Θ(n) cost of the good split, and the resulting split is good
- Thus the running time is still O(n lg n), but with a slightly larger constant

30 Randomized QuickSort
How do we increase the chance that all permutations are equally likely?
- Random sampling: don't always use the last element of the subarray as the pivot; swap it with a randomly chosen element from the subarray
- The pivot is now equally likely to be any of the r - p + 1 elements
- We can now expect the split to be reasonably well balanced on average

31 Randomized QuickSort (cont)
Randomized-Partition(A, p, r)
1. i ← Random(p, r)
2. Exchange(A[r], A[i])
3. return Partition(A, p, r)
Note that Partition() is the same as before.

32 Randomized QuickSort (cont)
Randomized-QuickSort(A, p, r)
1. if p < r
2.   q ← Randomized-Partition(A, p, r)
3.   Randomized-QuickSort(A, p, q-1)
4.   Randomized-QuickSort(A, q+1, r)
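
The randomized variants translate directly to Python; this sketch reuses partition() from the earlier QuickSort sketch:

import random

def randomized_partition(A, p, r):
    i = random.randint(p, r)      # pivot index chosen uniformly from [p, r]
    A[r], A[i] = A[i], A[r]       # move it to the end; partition() is unchanged
    return partition(A, p, r)

def randomized_quicksort(A, p, r):
    if p < r:
        q = randomized_partition(A, p, r)
        randomized_quicksort(A, p, q - 1)
        randomized_quicksort(A, q + 1, r)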

33 Analysis of QuickSort
A more rigorous analysis. Begin with the worst case:
- We intuited that the worst-case running time is Θ(n²)
- Use the substitution method to show this is true

34 Analysis of QuickSort (cont)
- Guess: T(n) = O(n²), i.e., T(n) ≤ cn² for some c > 0
- Substitute into T(n) = max₀≤q≤n-1 (T(q) + T(n-q-1)) + Θ(n):
  T(n) ≤ c · max₀≤q≤n-1 (q² + (n-q-1)²) + Θ(n)
- q² + (n-q-1)² is maximized at the endpoints of the range, therefore it is ≤ (n-1)² = n² - 2n + 1
- So T(n) ≤ cn² - c(2n - 1) + Θ(n) ≤ cn², provided c is large enough that c(2n - 1) dominates the Θ(n) term

35 Analysis of QuickSort (cont)
- Problem 7.4-1 has you show that the same recurrence also has the lower bound T(n) = Ω(n²)
- Thus the worst-case running time of QuickSort is Θ(n²)

36 Analysis of QuickSort (cont)
We will show that the upper bound on the expected running time is O(n lg n)
We've already shown that the best-case running time is Ω(n lg n)
Combined, these give an expected running time of Θ(n lg n)

37 Analysis of QuickSort (cont)
Expected running time:
- The work done is dominated by Partition
- Each time a pivot is selected, that element is never included in subsequent calls to QuickSort, and the pivot ends up in its correct place in the array
- Therefore, at most n calls to Partition are made
- Each call to Partition involves Θ(1) work plus the work done in its for loop
- If we count the total number of times line 4 (if A[j] ≤ x) is executed, we can bound the total time spent in the for loops

38 Analysis of QuickSort (cont)
- Lemma 7.1: Let X be the number of comparisons performed in line 4 of Partition over the entire execution of QuickSort on an n-element array. Then the running time of QuickSort is O(n + X).
Proof:
- There are at most n calls to Partition, each of which does Θ(1) work and then executes the for loop (which includes line 4) some number of times
- Since the for loop executes line 4 in each iteration, X counts both the comparisons performed and the total number of for-loop iterations
- Therefore T(n) = O(n · 1 + X) = O(n + X)

39 Analysis of QuickSort (cont)
- We need to compute X
- We do this by computing an overall bound on the total number of comparisons, NOT by computing the number of comparisons at each call to Partition
- Definitions:
  z_1, z_2, …, z_n: the elements of the array, where z_i is the i-th smallest element
  Z_ij = {z_i, z_(i+1), …, z_j}

40 Analysis of QuickSort (cont)
- When does the algorithm compare z_i and z_j?
- Note: each pair of elements is compared at most once. Why? (Elements are compared only with the pivot, and a pivot is never involved in any later call.)
- Our analysis uses indicator random variables

41 Indicator Random Variables
They provide a convenient method for converting between probabilities and expectations
These are random variables that take on only the values 0 or 1, so they "indicate" whether or not something has happened
Given a sample space S and an event A, the indicator random variable I{A} is defined as:
I{A} = 1 if A occurs, 0 if A does not occur

42 Indicator Random Variables (cont)
A simple example:
- Determine the expected number of heads when flipping a coin
- Sample space S = {H, T}
- Simple random variable Y (a random variable whose range contains only a finite number of elements); here it takes on the values H and T, each with probability 1/2
- X_H is the indicator associated with the event Y = H:
  X_H = I{Y = H} = 1 if Y = H, 0 if Y = T

43 Indicator Random Variables (cont)
The expected number of heads in one flip is the expected value of the indicator variable X_H:
E[X_H] = E[I{Y = H}] = 1 · Pr{Y = H} + 0 · Pr{Y = T} = 1 · (1/2) + 0 · (1/2) = 1/2
Thus the expected number of heads in one flip is 1/2
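
A quick empirical check of this expectation (an illustration of mine, not from the slides): simulate many flips and average the indicator.

import random

trials = 100_000
# random.randint(0, 1) returns 1 (heads) or 0 (tails) with equal probability
x_h = sum(random.randint(0, 1) for _ in range(trials))
print(x_h / trials)   # ~0.5, matching E[X_H] = Pr{heads} = 1/2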

44 Indicator Random Variables (cont)
Lemma 5.1: Given a sample space S and an event A in the sample space S, let X_A = I{A}. Then E[X_A] = Pr{A}. (See the proof on page 95.)
To compute the number of heads in n coin flips:
- Method 1: compute the probability of getting 0 heads, 1 head, 2 heads, etc.

45 Indicator Random Variables (cont)
- Method 2:
  Let X_i be the indicator random variable associated with the event "the i-th flip is heads"
  Let Y_i be the random variable denoting the outcome of the i-th flip; then X_i = I{Y_i = H}
  Let X be the random variable denoting the total number of heads in the n coin flips:
  X = Σ_{i=1..n} X_i

46 Indicator Random Variables (cont)
Take the expectation of both sides:
E[X] = E[Σ_{i=1..n} X_i] = Σ_{i=1..n} E[X_i] = Σ_{i=1..n} 1/2 = n/2   (by Lemma 5.1)

47 (Back to) Analysis of QuickSort
- We will use indicator random variables:
  X_ij = I{z_i is compared to z_j}
  indicating whether the comparison took place at any time during the execution of QuickSort
- Since each pair is compared at most once:
  X = Σ_{i=1..n-1} Σ_{j=i+1..n} X_ij

48 Analysis of QuickSort (cont)
- Take the expectation of both sides:
  E[X] = E[Σ_{i=1..n-1} Σ_{j=i+1..n} X_ij]
       = Σ_{i=1..n-1} Σ_{j=i+1..n} E[X_ij]
       = Σ_{i=1..n-1} Σ_{j=i+1..n} Pr{z_i is compared to z_j}

49 Analysis of QuickSort (cont)
- We still need to compute Pr{z_i is compared to z_j}
- Start by thinking about when two items are NOT compared:
  once a pivot x is chosen with z_i < x < z_j, z_i and z_j will never be compared
  if z_i is the first pivot chosen from Z_ij, then z_i will be compared to every other element in Z_ij (and similarly for z_j)
- Thus, z_i and z_j are compared iff the first pivot chosen from Z_ij is either z_i or z_j

50 Analysis of QuickSort (cont)
- What is the probability that this event occurs?
  Before a pivot has been chosen from Z_ij, all elements of Z_ij are in the same partition
  Each of the j - i + 1 elements of Z_ij is equally likely to be the first one chosen as a pivot, so the probability that it is z_i (or z_j) is 1/(j - i + 1)

51 Analysis of QuickSort (cont)
- Thus we have:
  Pr{z_i is compared to z_j}
  = Pr{z_i or z_j is the first pivot chosen from Z_ij}
  = 1/(j - i + 1) + 1/(j - i + 1)      (the two events are mutually exclusive)
  = 2/(j - i + 1)

52 Analysis of QuickSort (cont)
Combining the two boxed equations:
E[X] = Σ_{i=1..n-1} Σ_{j=i+1..n} 2/(j - i + 1)
With the change of variables k = j - i (note the changes in the summation variables):
E[X] = Σ_{i=1..n-1} Σ_{k=1..n-i} 2/(k + 1) < Σ_{i=1..n-1} Σ_{k=1..n} 2/k
Using the bound on the harmonic series, Σ_{k=1..n} 1/k = ln n + O(1):
E[X] < Σ_{i=1..n-1} O(lg n) = O(n lg n)

53 Analysis of QuickSort (cont)
Thus, using Randomized-Partition, the expected running time of QuickSort is O(n lg n)

54 Median and Order Statistics
In this section, we study algorithms for finding the i-th smallest element in a set of n elements
We will again use divide-and-conquer algorithms

55 The Selection Problem
Input: a set A of n (distinct) numbers and a number i, with 1 ≤ i ≤ n
Output: the element x ∈ A that is larger than exactly i - 1 other elements of A
- x is the i-th smallest element
- i = 1 → minimum; i = n → maximum

56 The Selection Problem (cont)
A simple solution:
- Sort A
- Return A[i]
- This takes Θ(n lg n)

57 Minimum and Maximum
Finding the minimum (or the maximum):
- Takes n - 1 comparisons (Θ(n))
- This is the best we can do and is optimal with respect to the number of comparisons

Minimum(A)
min ← A[1]
for i ← 2 to length(A)
  if min > A[i]
    min ← A[i]
return min

Maximum(A)
max ← A[1]
for i ← 2 to length(A)
  if max < A[i]
    max ← A[i]
return max

58 Minimum and Maximum (cont)
Simultaneous minimum and maximum:
- The obvious solution is 2(n - 1) comparisons
- But we can do better, namely at most 3⌊n/2⌋ comparisons
- The algorithm (a sketch follows the next slide):
  If n is odd, set max and min to the first element
  If n is even, compare the first two elements and set max, min
  Process the remaining elements in pairs:
    find the larger and the smaller of the pair,
    compare the larger of the pair with the current max,
    and the smaller of the pair with the current min

59 Minimum and Maximum (cont)
- Total number of comparisons:
  If n is odd: 3⌊n/2⌋ = 3(n - 1)/2 comparisons
  If n is even: 1 initial comparison and 3(n - 2)/2 more, for a total of 3n/2 - 2 comparisons
- In either case, the total number of comparisons is at most 3⌊n/2⌋
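
A possible Python rendering of the pairs idea (names and details are mine); it performs 3 comparisons per remaining pair, matching the counts above:

def min_and_max(A):
    n = len(A)
    if n % 2:                          # odd: the first element seeds both
        lo = hi = A[0]
        start = 1
    else:                              # even: one comparison seeds min and max
        lo, hi = (A[0], A[1]) if A[0] < A[1] else (A[1], A[0])
        start = 2
    for i in range(start, n - 1, 2):   # process the remaining elements in pairs
        a, b = (A[i], A[i+1]) if A[i] < A[i+1] else (A[i+1], A[i])
        if a < lo: lo = a              # 3 comparisons per pair in total
        if b > hi: hi = b
    return lo, hi

print(min_and_max([94, 13, 90, 48, 19, 9, 62]))   # (9, 94)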

60 Selection in Expected Linear Time
Goal: select the i-th smallest element from A[p..r]
- Partition into A[p..q-1] and A[q+1..r]
- If the pivot A[q] is the i-th smallest element, return A[q]
- If the i-th smallest element is in A[p..q-1], recurse on A[p..q-1]
- Otherwise, recurse on A[q+1..r]

61 Selection in Expected Linear Time (cont)
Randomized-Select(A, p, r, i)
1.  if p = r
2.    return A[p]
3.  q ← Randomized-Partition(A, p, r)
4.  k ← q - p + 1     // Why this statement? (k is the rank of the pivot within A[p..r])
5.  if i = k
6.    return A[q]
7.  else if i < k
8.    return Randomized-Select(A, p, q-1, i)
9.  else
10.   return Randomized-Select(A, q+1, r, i-k)
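
A Python sketch of Randomized-Select, reusing randomized_partition() from the earlier QuickSort sketch; it takes a 1-based rank i alongside 0-based array bounds:

def randomized_select(A, p, r, i):
    # Returns the i-th smallest element of A[p..r] (i is 1-based).
    if p == r:
        return A[p]
    q = randomized_partition(A, p, r)
    k = q - p + 1                       # rank of the pivot within A[p..r]
    if i == k:
        return A[q]
    elif i < k:
        return randomized_select(A, p, q - 1, i)
    else:
        return randomized_select(A, q + 1, r, i - k)

A = [94, 13, 90, 48, 19, 9, 62, 74]
print(randomized_select(A, 0, len(A) - 1, 3))   # 19, the 3rd smallest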


63 Analysis of Selection Algorithm
Worst-case running time is Θ(n²):
- Partition takes Θ(n)
- If we always partition around the largest remaining element, we reduce the partition size by only one element each time
What is the best case?

64 Analysis of Selection Algorithm (cont)
Average case:
- Average-case running time is Θ(n)
- The time required is the random variable T(n); we want an upper bound on E[T(n)]
- In Randomized-Partition, all elements are equally likely to be the pivot

65 Analysis of Selection Algorithm (cont)
- So, for each k such that 1 ≤ k ≤ n, the subarray A[p..q] has k elements (all ≤ the pivot) with probability 1/n
- For k = 1, 2, …, n we define indicator random variables X_k where
  X_k = I{the subarray A[p..q] has exactly k elements}
- So E[X_k] = 1/n

66 Analysis of Selection Algorithm (cont)
- When we choose the pivot element (which ends up in A[q]), we do not know what will happen next:
  Do we return with the i-th element (k = i)?
  Do we recurse on A[p..q-1]?
  Do we recurse on A[q+1..r]?
- The decision depends on i in relation to k
- We will find an upper bound on the average case by assuming that the i-th element is always in the larger partition

67 Analysis of Selection Algorithm (cont)
- Now, X_k = 1 for exactly one value of k, and 0 for all others
- When X_k = 1, the two subarrays have sizes k - 1 and n - k
- Hence the recurrence:
  T(n) ≤ Σ_{k=1..n} X_k · (T(max(k-1, n-k)) + O(n))
       = Σ_{k=1..n} X_k · T(max(k-1, n-k)) + O(n)

68 Analysis of Selection Algorithm (cont)
- Taking expected values:
  E[T(n)] ≤ Σ_{k=1..n} E[X_k · T(max(k-1, n-k))] + O(n)
          = Σ_{k=1..n} E[X_k] · E[T(max(k-1, n-k))] + O(n)   (X_k is independent of the recursive cost)
          = Σ_{k=1..n} (1/n) · E[T(max(k-1, n-k))] + O(n)

69 Analysis of Selection Algorithm (cont)
- Looking at the expression max(k-1, n-k):
  max(k-1, n-k) = k-1 if k > ⌈n/2⌉, and n-k otherwise
  If n is even, each term from T(⌈n/2⌉) up to T(n-1) appears exactly twice in the summation
  If n is odd, each of these terms appears twice and T(⌊n/2⌋) appears once

70 Analysis of Selection Algorithm (cont)
- Thus we have:
  E[T(n)] ≤ (2/n) Σ_{k=⌊n/2⌋..n-1} E[T(k)] + O(n)
- We use substitution to solve the recurrence
- Note: T(n) = O(1) for n less than some constant
- Assume that T(n) ≤ cn for some constant c that satisfies the initial conditions of the recurrence

71 Analysis of Selection Algorithm (cont)
- Using this inductive hypothesis (and picking a constant a such that the O(n) term is at most an for all n > 0):
  E[T(n)] ≤ (2/n) Σ_{k=⌊n/2⌋..n-1} ck + an
          = (2c/n) (Σ_{k=1..n-1} k - Σ_{k=1..⌊n/2⌋-1} k) + an

72 Analysis of Selection Algorithm (cont)
- Evaluating the arithmetic series and simplifying:
  E[T(n)] ≤ (2c/n) ((n-1)n/2 - (⌊n/2⌋-1)⌊n/2⌋/2) + an
          ≤ cn - (cn/4 - c/2 - an)

73 Analysis of Selection Algorithm (cont)
- To complete the proof, we need to show that for sufficiently large n this last expression is at most cn, i.e. that
  cn/4 - c/2 - an ≥ 0, equivalently n(c/4 - a) ≥ c/2
- As long as we choose the constant c so that c/4 - a > 0 (i.e., c > 4a), we can divide both sides by c/4 - a, giving
  n ≥ (c/2)/(c/4 - a) = 2c/(c - 4a)

74 Analysis of Selection Algorithm (cont)
- Thus, if we assume that T(n) = O(1) for n < 2c/(c - 4a), we have T(n) = O(n)

75 Selection in Worst-Case Linear Time
The "median of medians" algorithm (Select)
It guarantees a good split when the array is partitioned
- Partition is modified so that the pivot is now an input parameter
The algorithm:
- If n = 1, return A[n]

76 Selection in Worst-Case Linear Time (cont)
1. Divide the n elements of the input array into ⌊n/5⌋ groups of 5 elements each and at most one group of the remaining (n mod 5) elements
2. Find the median of each of the ⌈n/5⌉ groups by using insertion sort to sort each group and then picking its middle (3rd) element
3. Use Select recursively to find the median x of the ⌈n/5⌉ medians found in step 2 (if there is an even number of medians, choose the lower median)

77 Selection in Worst-Case Linear Time (cont)
4. Partition the input array around the "median of medians" x using the modified version of Partition. Let k be one more than the number of elements on the low side of the partition, so that x is the k-th smallest element and there are n - k elements on the high side of the partition
5. If i = k, return x. Otherwise, use Select recursively to find the i-th smallest element on the low side if i < k, or the (i - k)-th smallest element on the high side if i > k
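
A compact Python sketch of steps 1-5, assuming distinct elements (as in the problem statement); rather than a modified in-place Partition, it builds the low and high sides as new lists, which keeps the idea visible at the cost of extra space:

def select(A, i):
    # Returns the i-th smallest (1-based) element of list A; worst-case Θ(n).
    if len(A) <= 5:
        return sorted(A)[i - 1]
    # Steps 1-2: medians of the groups of 5 (sorting 5 elements is Θ(1))
    medians = [sorted(A[j:j+5])[len(A[j:j+5]) // 2] for j in range(0, len(A), 5)]
    # Step 3: median of medians, found recursively (lower median if even)
    x = select(medians, (len(medians) + 1) // 2)
    # Step 4: partition around x (distinct elements assumed)
    low  = [a for a in A if a < x]
    high = [a for a in A if a > x]
    k = len(low) + 1
    # Step 5: recurse into the side that contains the answer
    if i == k:
        return x
    elif i < k:
        return select(low, i)
    else:
        return select(high, i - k)

print(select([94, 13, 90, 48, 19, 9, 62, 74], 5))   # 62, the 5th smallest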

78 Selection in Worst-Case Linear Time (cont)
Example of "median of medians" on an input array A[1..100]:
- Step 1: 25 groups of 5
- Step 2: we get 25 medians
- Step 3 (recursive call):
  Step 1: using the 25 medians, we get 5 groups of 5
  Step 2: we get 5 medians
  Step 3 (recursive call):
    Step 1: using the 5 medians, we get 1 group of 5
    Step 2: we get 1 median
- Step 4: partition A around that median

79 Analyzing "Median of Medians"
The following diagram might be helpful:
(figure: the ⌈n/5⌉ groups of 5 drawn as columns with their medians highlighted and x, the median of medians, in the middle)

80 Analyzing "Median of Medians" (cont)
- First, we need a lower bound on how many elements are greater than x
- How many of the medians are greater than x? At least half of the medians from the groups
  Why "at least half"? Because x is the median of the medians: at least ⌈⌈n/5⌉/2⌉ of the medians are ≥ x

81 Analyzing "Median of Medians" (cont)
- Each of these medians contributes at least 3 elements greater than x, except for two discarded groups:
  the group that contains x, which contributes only 2 elements greater than x
  the group that has fewer than 5 elements
- So the total number of elements > x is at least:
  3(⌈⌈n/5⌉/2⌉ - 2) ≥ 3n/10 - 6

82 Analyzing "Median of Medians" (cont)
- Similarly, at least 3n/10 - 6 elements are smaller than x
- Thus, in the worst case, Select is called recursively in step 5 on the larger partition
- The larger partition has at most 7n/10 + 6 elements (the size of the array minus the number of elements in the smaller partition)

83 Analyzing "Median of Medians" (cont)
- Developing the recurrence:
  Step 1 takes Θ(n) time
  Step 2 takes Θ(n) time (Θ(n) calls to Insertion Sort on sets of size Θ(1))
  Step 3 takes T(⌈n/5⌉)
  Step 4 takes Θ(n) time
  Step 5 takes at most T(7n/10 + 6)

84 Analyzing "Median of Medians" (cont)
- So the recurrence is:
  T(n) ≤ T(⌈n/5⌉) + T(7n/10 + 6) + O(n)
- Now use substitution to solve:
  Assume T(n) ≤ cn for some suitably large constant c and all n ≥ ???
  Also pick a constant a such that the function described by the O(n) term above is bounded above by an for all n > 0

85 Analyzing "Median of Medians" (cont)
Substituting:
T(n) ≤ c⌈n/5⌉ + c(7n/10 + 6) + an
     ≤ cn/5 + c + 7cn/10 + 6c + an      (the extra c comes from removing the ⌈⌉)
     = 9cn/10 + 7c + an
     = cn + (-cn/10 + 7c + an)
which is at most cn if -cn/10 + 7c + an ≤ 0, i.e., if c ≥ 10a · n/(n - 70)
If n = 70, this inequality is undefined

86 Analyzing "Median of Medians" (cont)
- We assume that n ≥ 71, so n/(n - 70) ≤ 71
- Choosing c ≥ 710a will then satisfy the inequality on the previous slide
- You could choose any constant greater than 70 as the base-case cutoff
Thus, the selection problem can be solved in worst-case linear time

87 Review of Sorts
Review of sorts seen so far:
- Insertion Sort
  Easy to code
  Fast on small inputs (fewer than ~50 elements)
  Fast on nearly sorted inputs
  Stable
  Θ(n) best case (sorted list)
  Θ(n²) average case
  Θ(n²) worst case (reverse-sorted list)

88 Review of Sorts
Stable means that numbers with the same value appear in the output array in the same order as they do in the input array; that is, ties between two numbers are broken by the rule that whichever number appears first in the input array appears first in the output array. Normally, stability is important only when satellite data are carried around with the element being sorted.

89 Review of Sorts (cont)
- MergeSort
  Divide-and-conquer algorithm
  Doesn't sort in place; requires memory as a function of n
  Stable
  Θ(n lg n) best case
  Θ(n lg n) average case
  Θ(n lg n) worst case

90 Review of Sorts (cont)
- QuickSort
  Divide-and-conquer algorithm (no merge step needed)
  Small constants; fast in practice
  Not stable
  Θ(n lg n) best case
  Θ(n lg n) average case
  Θ(n²) worst case

91 Review of Sorts (cont)
Several of these algorithms sort in Θ(n lg n) time:
- MergeSort in the worst case
- QuickSort on average
On some inputs, each of these algorithms takes Ω(n lg n) time
The sorted order they determine is based only on comparisons between the input elements; they are called comparison sorts

92 Review of Sorts (cont)
Other techniques for sorting exist, such as linear-time sorting, which is not based on comparisons
- Usually with some restrictions or assumptions on the input elements
Linear sorting techniques include:
- Counting Sort
- Radix Sort
- Bucket Sort

93 Lower Bounds for Sorting
In a comparison sort (assume unique inputs), the sorted order is determined only by comparisons between input elements
- The tests a_i < a_j, a_i ≤ a_j, a_i = a_j, a_i ≥ a_j, and a_i > a_j are all equivalent in what they tell us about the order of a_i and a_j
What is the best we can do on the worst-case type of input? That is, what is the best possible worst-case running time?

94 The Decision-Tree Model
n = 3, input ⟨a_1, a_2, a_3⟩; an internal node i:j compares a_i with a_j; # possible outputs = 3! = 6; each possible output (a permutation) is a leaf:

                 1:2
              /       \
           2:3         1:3
          /    \      /    \
   ⟨1,2,3⟩     1:3  ⟨2,1,3⟩  2:3
             /    \        /    \
       ⟨1,3,2⟩ ⟨3,1,2⟩ ⟨2,3,1⟩ ⟨3,2,1⟩

95 Analysis of Decision-Tree Model
The worst-case number of comparisons equals the height of the decision tree
A lower bound on the worst-case running time is therefore a lower bound on the height of the decision tree
Note that the number of leaves in the decision tree is ≥ n!, where n = the number of elements in the input sequence

96 Theorem 8.1
Any comparison sort algorithm requires Ω(n lg n) comparisons in the worst case
Proof:
- Consider a decision tree of height h that sorts n elements
- Since there are n! permutations of n elements, each permutation representing a distinct sorted order, the tree must have at least n! leaves

97 Theorem 8.1 (cont)
- A binary tree of height h has at most 2^h leaves, so
  n! ≤ 2^h, and hence h ≥ lg(n!) = Ω(n lg n)   (by equation 3.18, lg(n!) = Θ(n lg n))
The best possible worst-case running time for comparison sorts is thus Ω(n lg n)
MergeSort, which is O(n lg n) in the worst case, is asymptotically optimal

98 Sorting in Linear Time
But the name of this chapter is "Sorting in Linear Time" – how can we do better than Ω(n lg n)?
- CountingSort
- RadixSort
- BucketSort

99 Counting Sort
No comparisons between elements
But it depends on assumptions about the values being sorted:
- Each of the n input elements is an integer in the range 0 to k
- When k = O(n), the sort runs in Θ(n) time
The algorithm:
- Input: A[1..n], where A[j] ∈ {0, 1, …, k}
- Output: B[1..n], sorted (notice: elements are not sorted in place)
- Also: C[0..k] for auxiliary storage

100 Counting Sort (cont)
Counting-Sort(A, B, k)
1.  for i ← 0 to k                          // Θ(k)
2.    C[i] ← 0
3.  for j ← 1 to Length(A)                  // Θ(n)
4.    C[A[j]] ← C[A[j]] + 1
5.  // C[i] now contains the number of elements = i
6.  for i ← 1 to k                          // Θ(k)
7.    C[i] ← C[i] + C[i-1]
8.  // C[i] now contains the number of elements ≤ i
9.  for j ← Length(A) downto 1              // Θ(n)
10.   B[C[A[j]]] ← A[j]
11.   C[A[j]] ← C[A[j]] - 1
Total: Θ(n + k)
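
A Python sketch of the same procedure with 0-based arrays; the backward final loop preserves stability exactly as in the pseudocode:

def counting_sort(A, k):
    # A holds integers in 0..k; returns a new sorted list B (stable).
    C = [0] * (k + 1)
    for a in A:                      # C[i] = number of elements equal to i
        C[a] += 1
    for i in range(1, k + 1):        # C[i] = number of elements <= i
        C[i] += C[i - 1]
    B = [None] * len(A)
    for a in reversed(A):            # back to front keeps equal keys in order
        C[a] -= 1                    # 0-based version of B[C[A[j]]] <- A[j]
        B[C[a]] = a
    return B

print(counting_sort([2, 5, 3, 0, 2, 3, 0, 3], 5))   # [0, 0, 2, 2, 3, 3, 3, 5]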


102 Analysis of CountingSort
The running time is Θ(n + k), which is Θ(n) when k = O(n)
The Ω(n lg n) lower bound does not apply because CountingSort isn't a comparison sort
CountingSort is stable because the 4th loop goes from n downto 1

103 Radix Sort
This sort was originally used to sort computer punch-card decks
It is currently used for multi-key sorts (for example: year/month/day)
Consider each digit of the number as a separate key

104 Radix Sort (cont)
Idea 1: sort on the most significant digit first, then within each group sort on the next digit, and so on
Problem: for the old card sorters, each sort distributes the cards into 10 bins, but every subsequent recursive sort needs all 10 bins again, so the operator must set aside the other 9 piles
Idea 2: sort on the least significant digit first, then on the next least significant digit, etc., using a stable sort for each pass

105 Radix Sort (cont)
Radix-Sort(A, d)
for i ← 1 to d
  use a stable sort to sort array A on digit i
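
A Python sketch using a stable counting sort on each digit, least significant first; the digit extraction (a // base**i) % base is my choice of representation:

def radix_sort(A, d, base=10):
    # Sort non-negative integers with at most d digits in the given base.
    for exp in (base ** i for i in range(d)):
        C = [0] * base
        for a in A:                       # count occurrences of each digit
            C[(a // exp) % base] += 1
        for i in range(1, base):          # prefix sums: final positions
            C[i] += C[i - 1]
        B = [None] * len(A)
        for a in reversed(A):             # backward scan keeps the pass stable
            digit = (a // exp) % base
            C[digit] -= 1
            B[C[digit]] = a
        A = B                             # next pass sorts the new ordering
    return A

print(radix_sort([329, 457, 657, 839, 436, 720, 355], 3))
# [329, 355, 436, 457, 657, 720, 839]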

106 Radix Sort (cont)
Radix-n: another way of indicating the sort used
- It implies each digit can differentiate among n different symbols
- In the previous case we assumed radix-10
This is why the name Radix Sort is given

107 Analysis of RadixSort
If each digit is in the range 1 to k (or 0 to k-1), use CountingSort for each pass
Each pass over a digit takes Θ(n + k)
For d digits: Θ(d(n + k))
If d is a constant and k = O(n), then T(n) = Θ(n)

108 Proof by induction that RadixSort works
Base case: d = 1
- With only one digit, sorting by that digit sorts the list
Inductive step: assume RadixSort works for d - 1 digits; show that it then works for d digits
- A radix sort of d digits is the same as a radix sort of the low-order d - 1 digits followed by a sort on digit d
- By the inductive hypothesis, the sort on d - 1 digits works, and the digits are in order according to their low-order d - 1 digits

109 Proof by induction that RadixSort works (cont)
The sort on digit d orders the elements by their d-th digit
Consider two elements a and b, with d-th digits a_d and b_d respectively:
- If a_d < b_d, the sort puts a before b, which is correct, since a < b regardless of their low-order digits

110 Proof by induction that RadixSort works (cont)
- If a_d > b_d, the sort puts a after b, which is correct, since a > b regardless of their low-order digits
- If a_d = b_d, the sort leaves a and b in the same order they were in, because it is stable. But that order is already correct, since the correct order of a and b is determined by the low-order d - 1 digits when their d-th digits are equal

111 Radix Sort Example
Show how n integers in the range 1 to n² can be sorted in Θ(n) time
- Subtract 1 from each number, so they're in the range 0 to n² - 1 (we'll add the 1 back after they are sorted)
- Use a radix-n sort: treat the numbers as 2-digit numbers in radix n
- Each digit requires n symbols, and log_n(n²) = 2 digits are needed (d = 2, k = n)
- Each digit ranges from 0 to n - 1

112 Radix Sort Example (cont)
- Sort these 2-digit numbers with radix sort
- There are 2 calls to counting sort, for Θ(2(n + n)) = Θ(n) time
- The passes to subtract 1 and add 1 each take Θ(n) time
- Hence, the total running time is Θ(n)

113 Radix Sort Example (cont)
Take 15 integers in the range 1 to 15² (= 225)
Subtract one from each number to get the range 0 to 224
We will use a radix-15 sort: 2 digits, each drawn from n-many symbols
- Each digit ranges from 0 to n - 1 = 14
See handout

114 Bucket Sort
Bucket Sort assumes that the input values are uniformly distributed over the range [0, 1), i.e., 0 ≤ x < 1
Procedure:
- Divide the range [0, 1) into n equal-sized subintervals (buckets) and distribute the inputs into their buckets
- Sort each bucket and concatenate the buckets
Expected time: T(n) = Θ(n)


116 Bucket Sort (cont)
Bucket-Sort(A)
1. n ← Length(A)                                   // 1
2. for i ← 1 to n                                  // n + 1
3.   insert A[i] into list B[⌊n·A[i]⌋]             // n
4. for i ← 0 to n - 1                              // n + 1
5.   sort list B[i] with InsertionSort             // Σ_i T(n_i)
6. concatenate the lists B[0], B[1], …, B[n-1]     // n
   together in order
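
A Python sketch; list.sort() stands in for InsertionSort on the (expected constant-size) buckets:

def bucket_sort(A):
    # Assumes the inputs are uniformly distributed over [0, 1).
    n = len(A)
    B = [[] for _ in range(n)]
    for x in A:
        B[int(n * x)].append(x)      # bucket i covers [i/n, (i+1)/n)
    for bucket in B:
        bucket.sort()                # each bucket holds O(1) elements on average
    return [x for bucket in B for x in bucket]

print(bucket_sort([0.78, 0.17, 0.39, 0.26, 0.72, 0.94, 0.21, 0.12, 0.23, 0.68]))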

117 Analysis of Bucket Sort
All lines except line 5 take Θ(n) time
What is the cost of the calls to InsertionSort?
- Let n_i be the random variable denoting the number of elements in bucket B[i]
- InsertionSort runs in O(n²) time, so sorting bucket i costs O(n_i²)

118 Analysis of Bucket Sort (cont)
EQ 8.1:  T(n) = Θ(n) + Σ_{i=0..n-1} O(n_i²)
Taking expectations of both sides and using linearity of expectation (equation C.21):
E[T(n)] = Θ(n) + Σ_{i=0..n-1} E[O(n_i²)] = Θ(n) + Σ_{i=0..n-1} O(E[n_i²])
We claim that
EQ 8.2:  E[n_i²] = 2 - 1/n

119 Analysis of Bucket Sort (cont)
- To prove EQ 8.2, define indicator random variables:
  X_ij = I{A[j] falls into bucket i}
  for i = 0, 1, …, n-1 and j = 1, 2, …, n
- Thus n_i = Σ_{j=1..n} X_ij

120 Analysis of Bucket Sort (cont)
EQ 8.3:
E[n_i²] = E[(Σ_{j=1..n} X_ij)²]
        = E[Σ_{j=1..n} Σ_{k=1..n} X_ij X_ik]
        = Σ_{j=1..n} E[X_ij²] + Σ_{1≤j≤n} Σ_{1≤k≤n, k≠j} E[X_ij X_ik]

121 Analysis of Bucket Sort (cont)
Evaluate the two summations separately:
- X_ij is 1 with probability 1/n and 0 otherwise (by Lemma 5.1, E[X_A] = Pr{A}), so
  E[X_ij²] = 1² · (1/n) + 0² · (1 - 1/n) = 1/n
- For j ≠ k, X_ij and X_ik are independent, so
  E[X_ij X_ik] = E[X_ij] · E[X_ik] = (1/n)(1/n) = 1/n²

122 Analysis of Bucket Sort (cont)
Substitute these into EQ 8.3:
E[n_i²] = Σ_{j=1..n} (1/n) + Σ_{j} Σ_{k≠j} (1/n²)
        = n · (1/n) + n(n-1) · (1/n²)
        = 1 + (n-1)/n
        = 2 - 1/n
which proves EQ 8.2

123 Analysis of Bucket Sort (cont)
Use this expected value in EQ 8.1:
E[T(n)] = Θ(n) + Σ_{i=0..n-1} O(2 - 1/n) = Θ(n) + n · O(1) = Θ(n)
Thus, when the input is drawn from a uniform distribution, BucketSort runs in (expected) linear time

124 Strassen's Algorithm for Matrix Multiplication
It is another divide-and-conquer algorithm
The naïve (straightforward) method for multiplying two n × n matrices is Θ(n³):
- n³ multiplications
- n²(n-1) additions
Strassen's algorithm runs in Θ(n^(lg 7)) = Θ(n^2.81)

125 Strassen's Algorithm for Matrix Multiplication (cont)
We wish to compute C = AB
- All are n × n matrices
- Assume n is a power of 2
Divide each of A, B, and C into four n/2 × n/2 submatrices, rewriting C = AB as

  [ r  s ]   [ a  b ] [ e  f ]
  [ t  u ] = [ c  d ] [ g  h ]

126 Strassen's Algorithm for Matrix Multiplication (cont)
To compute each of the submatrices of C:
- r = ae + bg
- s = af + bh
- t = ce + dg
- u = cf + dh
These contain the eight "essential terms" (eight n/2 × n/2 multiplications)

127 Strassen's Algorithm for Matrix Multiplication (cont)
To solve for C this way:
- We need to make 8 recursive calls to "multiply" (with input size n/2)
- And then perform 4 matrix additions; the addition of two n/2 × n/2 matrices takes Θ(n²) time
Therefore T(n) = 8T(n/2) + Θ(n²) = Θ(n³) (by case 1 of the Master Method)
- No better than the naïve method

128 Strassen's Algorithm for Matrix Multiplication (cont)
Strassen was the first to discover an asymptotically faster algorithm (published in 1969)

129 Strassen's Algorithm for Matrix Multiplication (cont)
His method:
- Divide the input matrices into n/2 × n/2 submatrices
- Using Θ(n²) scalar additions and subtractions, compute 14 matrices A_1, B_1, A_2, B_2, …, A_7, B_7, each of which is n/2 × n/2
- Recursively compute the seven matrix products P_i = A_i B_i for i = 1, 2, …, 7

130 Strassen's Algorithm for Matrix Multiplication (cont)
- Compute the desired submatrices r, s, t, u of the result matrix C by adding and/or subtracting various combinations of the P_i matrices, using only Θ(n²) scalar additions and subtractions

131 Strassen's Algorithm for Matrix Multiplication (cont)
The P_i matrices:
P1 = a(f - h)
P2 = (a + b)h
P3 = (c + d)e
P4 = d(g - e)
P5 = (a + d)(e + h)
P6 = (b - d)(g + h)
P7 = (a - c)(e + f)
Notice that each of these 7 P_i matrices requires only ONE multiplication

132 Strassen's Algorithm for Matrix Multiplication (cont)
Use the P_i matrices to compute:
r = P5 + P4 - P2 + P6
s = P1 + P2
t = P3 + P4
u = P5 + P1 - P3 - P7

133 Strassen's Algorithm for Matrix Multiplication (cont)
The resulting recurrence for computing C is:
T(n) = 7T(n/2) + Θ(n²) = Θ(n^(lg 7))
- 7T(n/2): the cost of the 7 recursive calls
- Θ(n²): the cost of the 18 n/2 × n/2 matrix additions and subtractions
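
A Python sketch of the seven-product scheme, assuming n is a power of 2 (quadrant-splitting helpers and names are mine):

def strassen(A, B):
    # A, B: n x n matrices as lists of lists, n a power of 2.
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    m = n // 2
    # Split into quadrants: A = [[a, b], [c, d]], B = [[e, f], [g, h]]
    a = [row[:m] for row in A[:m]]; b = [row[m:] for row in A[:m]]
    c = [row[:m] for row in A[m:]]; d = [row[m:] for row in A[m:]]
    e = [row[:m] for row in B[:m]]; f = [row[m:] for row in B[:m]]
    g = [row[:m] for row in B[m:]]; h = [row[m:] for row in B[m:]]
    add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    sub = lambda X, Y: [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    P1 = strassen(a, sub(f, h))          # the seven recursive products
    P2 = strassen(add(a, b), h)
    P3 = strassen(add(c, d), e)
    P4 = strassen(d, sub(g, e))
    P5 = strassen(add(a, d), add(e, h))
    P6 = strassen(sub(b, d), add(g, h))
    P7 = strassen(sub(a, c), add(e, f))
    r = add(sub(add(P5, P4), P2), P6)    # r = P5 + P4 - P2 + P6
    s = add(P1, P2)                      # s = P1 + P2
    t = add(P3, P4)                      # t = P3 + P4
    u = sub(sub(add(P5, P1), P3), P7)    # u = P5 + P1 - P3 - P7
    # Reassemble C = [[r, s], [t, u]]
    return [r[i] + s[i] for i in range(m)] + [t[i] + u[i] for i in range(m)]

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]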

134 An Example of Strassen's Algorithm
- Let C, A, and B be 2 × 2 matrices, so each of a, b, …, h is a scalar
- Compute the seven products P1, …, P7 and combine them as on the previous slides (the worked numbers from the slide are omitted; the sketch above runs the same computation)



137 Discussion of Strassen's Algorithm
Strassen's algorithm is numerically stable, but not always as accurate as the naïve algorithm
It requires a significant amount of workspace to store intermediate results
The constant factor in its running time is greater than that of the naïve algorithm

138 Discussion of Strassen's Algorithm (cont)
When matrices are sparse, there are faster algorithms designed specifically for sparse matrices
A potential attraction is its natural block decomposition: instead of continuing the recursion until n = 1, one could use a more standard matrix multiply as soon as n is small enough

139 Final Notes on Strassen's Algorithm
What if n is not a power of 2, and you want to compute the product of two n × n matrices A and B?
Set m = 2^⌈lg n⌉ (the smallest power of 2 that is ≥ n)
Consider the following two m × m matrices, padded with zeros:

  A′ = [ A  0 ]      B′ = [ B  0 ]
       [ 0  0 ]           [ 0  0 ]

140 Final Notes on Strassen's Algorithm (cont)
Since m is a power of two, Strassen's algorithm can be used to multiply A′ and B′; the product AB appears as the upper-left n × n block of A′B′
Since m < 2n, the resulting algorithm still uses O(n^(lg 7)) operations to multiply A and B


142 Polynomials and FFT
Adding two polynomials of degree n takes Θ(n) time
Multiplying two polynomials of degree n takes Θ(n²) time by the obvious method
Using the Fast Fourier Transform, we can reduce the time for multiplying to Θ(n lg n)

143 Polynomials
Coefficients: a polynomial in x is A(x) = Σ_{j=0..n-1} a_j x^j, with coefficient vector a = (a_0, a_1, …, a_(n-1))
Degree: k, where a_k is the highest nonzero coefficient
Degree-bound: any integer strictly greater than the degree of a polynomial
- Any polynomial with degree 0 to n-1 has degree-bound n

144 Polynomials (cont)
Polynomial addition:
- If A(x) and B(x) are polynomials of degree-bound n, then C(x) = A(x) + B(x) is also of degree-bound n, with coefficients c_j = a_j + b_j

145 Polynomials (cont)
Polynomial multiplication:
- C(x) = A(x)B(x) has coefficients c_j = Σ_{k=0..j} a_k b_(j-k)
- degree(C) = degree(A) + degree(B), which implies
  degree-bound(C) = degree-bound(A) + degree-bound(B) - 1
                  ≤ degree-bound(A) + degree-bound(B)

146 Review of Complex Numbers
Complex numbers have the form a + bi, where i = √(-1) (so i² = -1)

147 Coefficient Representation of Polynomials
A coefficient representation is a vector of coefficients a = (a_0, a_1, …, a_(n-1))
Evaluating the polynomial at a point x_0 using Horner's rule takes Θ(n) time:
A(x_0) = a_0 + x_0(a_1 + x_0(a_2 + ⋯ + x_0(a_(n-1))⋯))

y ← 0
i ← n - 1
while i ≥ 0
  y ← a_i + x_0 · y
  i ← i - 1
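
In Python, Horner's rule is a short loop over the coefficients from high degree to low:

def evaluate(a, x0):
    # a = [a_0, a_1, ..., a_{n-1}] in coefficient form; Θ(n) time.
    y = 0
    for coeff in reversed(a):   # i runs from n-1 down to 0
        y = coeff + x0 * y
    return y

print(evaluate([1, 2, 3], 2))   # 1 + 2*2 + 3*4 = 17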

148 Coefficient Representation of Polynomials (cont)
Adding two polynomials:
- Simply add the two coefficient vectors a and b representing A(x) and B(x): c_j = a_j + b_j
- Takes Θ(n) time

149 Coefficient Representation of Polynomials (cont)
Multiplying two polynomials:
- Multiply each element of coefficient vector a by each element of coefficient vector b and collect terms, giving coefficient vector c with c_j = Σ_{k=0..j} a_k b_(j-k)
- c is called the convolution of the vectors a and b, denoted c = a ⊗ b
- Takes Θ(n²) time

150 Coefficient Representation of Polynomials (cont)
Multiplying polynomials and computing convolutions are common operations in practice, so we need efficient algorithms for them

151 Point-Value Representation of Polynomials
A point-value representation of a polynomial A(x) of degree-bound n is a set of n point-value pairs
{(x_0, y_0), (x_1, y_1), …, (x_(n-1), y_(n-1))}
such that all of the x_k are distinct and y_k = A(x_k) for k = 0, 1, …, n-1

152 Point-Value Representation of Polynomials (cont)
A polynomial is uniquely specified by knowing A(x) at n distinct values of x
(figure: the curve y = A(x) with a sample point (x_j, y_j) marked)

153 Point-Value Representation of Polynomials (cont)
To compute the point-value representation for a polynomial in coefficient form:
- Select n distinct points x_0, x_1, …, x_(n-1)
- Evaluate A(x_k) for k = 0, 1, …, n-1
- Using Horner's method, this takes Θ(n²) time (Θ(n) per point, for n points)

154 Point-Value Representation of Polynomials (cont)
Interpolation:
- Taking the point-value representation and determining the coefficient form of the polynomial
- It is the inverse of evaluating the polynomial at the chosen points

155 Point-Value Representation of Polynomials (cont)
Theorem 30.1 (Uniqueness of an interpolating polynomial):
- For any set of n point-value pairs {(x_k, y_k)} such that all the x_k values are distinct, there is a unique polynomial A(x) of degree-bound n such that y_k = A(x_k) for k = 0, 1, …, n-1
- Proof to follow, but first a review of some matrix operations

156 Review of Matrix Operations
The n × n identity matrix I_n has 1s on the diagonal and 0s elsewhere
The inverse of an n × n matrix A is the n × n matrix A⁻¹ (if it exists) such that A·A⁻¹ = A⁻¹·A = I_n

157 Review of Matrix Operations (cont)
If a matrix has an inverse, it is called invertible, or nonsingular
If a matrix has no inverse, it is called noninvertible, or singular
When a matrix inverse exists, it is unique

158 Review of Matrix Operations (cont)
Minor:
- The ij-th minor of an n × n matrix A, for n > 1, is the (n-1) × (n-1) matrix A[ij] obtained by deleting the i-th row and j-th column of A

159 Review of Matrix Operations (cont)
The determinant of an n × n matrix A is defined recursively in terms of its minors:
det(A) = a_11                                       if n = 1
det(A) = Σ_{j=1..n} (-1)^(1+j) a_1j det(A[1j])      if n > 1

160 Review of Matrix Operations (cont)
To easily find the determinant of a 2 × 2 matrix:
det [ a  b ; c  d ] = ad - bc

161 Review of Matrix Operations (cont)
To find the determinant of a 3 × 3 matrix, expand along the first row:
det [ a  b  c ; d  e  f ; g  h  i ] = a(ei - fh) - b(di - fg) + c(dh - eg)

162-166 Review of Matrix Operations (cont)
Worked examples: computing the determinants of sample matrices, shown in two notations (the example matrices and their worked solutions were figures and are omitted here)

167 Review of Matrix Operations (cont)
Theorem 28.5:
- An n × n matrix A is singular (noninvertible) if and only if det(A) = 0
- Conversely, an n × n matrix A is nonsingular (invertible) if and only if det(A) ≠ 0

168 Review of Matrix Operations (cont)
Theorem: Square systems with exactly one solution
- If A is an invertible square matrix, then the system of equations AX = B has the unique solution X = A⁻¹B
- Proof: multiply both sides of AX = B on the left by A⁻¹: A⁻¹AX = A⁻¹B, so I·X = A⁻¹B, hence X = A⁻¹B

169 Back to Point-Value Representation of Polynomials
Theorem 30.1 (Uniqueness of an interpolating polynomial):
- For any set of n point-value pairs {(x_k, y_k)} such that all the x_k values are distinct, there is a unique polynomial A(x) of degree-bound n such that y_k = A(x_k) for k = 0, 1, …, n-1

170 Point-Value Representation of Polynomials (cont)
Proof of Theorem 30.1:
- y_k = A(x_k) for k = 0, 1, …, n-1 is equivalent to the matrix equation V·a = y:

  [ 1  x_0      x_0²      …  x_0^(n-1)     ] [ a_0     ]   [ y_0     ]
  [ 1  x_1      x_1²      …  x_1^(n-1)     ] [ a_1     ] = [ y_1     ]
  [ ⋮                                      ] [ ⋮       ]   [ ⋮       ]
  [ 1  x_(n-1)  x_(n-1)²  …  x_(n-1)^(n-1) ] [ a_(n-1) ]   [ y_(n-1) ]

171 Point-Value Representation of Polynomials (cont)
The Vandermonde matrix V (shown on the previous slide) has determinant
Π_{0 ≤ j < k ≤ n-1} (x_k - x_j)
and is therefore invertible if the x_k are distinct. Thus, we can solve for the coefficient vector a: a = V⁻¹y (see Problem 28.1-11)

172 Point-Value Representation of Polynomials (cont)
Interpolating by solving this set of linear equations with the standard method of Gaussian elimination takes Θ(n³) time
A faster algorithm for n-point interpolation is based on Lagrange's formula:
A(x) = Σ_{k=0..n-1} y_k · Π_{j≠k} (x - x_j) / (x_k - x_j)

173 Point-Value Representation of Polynomials (cont)
Using Lagrange's formula, computing the coefficients takes Θ(n²) time
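
A Python sketch that evaluates the Lagrange form at a single point in Θ(n²) time (producing the full coefficient vector is a similar Θ(n²) computation; names are mine):

def interpolate(points, x):
    # points: n (x_k, y_k) pairs with distinct x_k; evaluates the unique
    # interpolating polynomial of degree-bound n at x.
    total = 0.0
    for k, (xk, yk) in enumerate(points):
        term = yk                          # y_k times its Lagrange basis term
        for j, (xj, _) in enumerate(points):
            if j != k:
                term *= (x - xj) / (xk - xj)
        total += term
    return total

pts = [(0, 1), (1, 3), (2, 9)]   # samples of A(x) = 2x^2 + 1
print(interpolate(pts, 3))       # 19.0 = A(3)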

174 Point-Value Representation of Polynomials (cont)
An example using Lagrange's formula (worked on the slide; the numbers are omitted here)
Why is this algorithm only Θ(n²) time?

175 Point-Value Representation of Polynomials (cont)
Adding two polynomials in point-value form:
- Given point-value representations for two polynomials A and B at the same points:
  A: {(x_0, y_0), …, (x_(n-1), y_(n-1))},  B: {(x_0, y′_0), …, (x_(n-1), y′_(n-1))}
- The point-value representation of C = A + B is {(x_0, y_0 + y′_0), …, (x_(n-1), y_(n-1) + y′_(n-1))}
- The time is Θ(n). Note: the x's must be the same!

176 Point-Value Representation of Polynomials (cont)
Multiplying two polynomials in point-value form:
- We just need to multiply the corresponding values of A and B to get the point-value representation of C: C(x_k) = A(x_k) · B(x_k)
- A problem arises because the degree-bound of C (2n) is the sum of the degree-bounds of A and B (n each), and we need 2n point-value pairs to represent C

177 Point-Value Representation of Polynomials (cont)
- So we must extend the representations of A and B to include 2n point-value pairs each
- Then the point-value representation of C is {(x_0, y_0 y′_0), …, (x_(2n-1), y_(2n-1) y′_(2n-1))}
- The time is Θ(n)

178 Best of Both Worlds
Can we get "fast" multiplication and evaluation?

Representation   Multiplication   Evaluation
coefficient      O(n²)            O(n)
point-value      O(n)             O(n²)
FFT              O(n log n)       O(n log n)

179 Best of Both Worlds (cont)
Yes! Convert back and forth between the two representations:
- coefficient vectors → FFT (O(n log n) evaluation) → point-value pairs
- O(n) point-value multiplication
- point-value pairs → inverse FFT (O(n log n) interpolation) → coefficient vector
This replaces the O(n²) coefficient multiplication

180 Best of Both Worlds (cont)
Key idea: choose the evaluation points {x_0, x_1, …, x_(n-1)} to make the computation easier
This will allow us to convert between representations in Θ(n lg n) time
We will use "complex roots of unity" as the evaluation points

181 Complex Roots of Unity
A complex n-th root of unity is a complex number ω such that ω^n = 1
There are exactly n distinct n-th roots of unity

182 Complex Roots of Unity (cont)
The principal n-th root of unity is ω_n = e^(2πi/n)
All of the complex n-th roots of unity are powers of ω_n: ω_n^0, ω_n^1, …, ω_n^(n-1)
For each term, (ω_n^k)^n = (ω_n^n)^k = 1^k = 1

183 Complex Roots of Unity (cont)
Definition of the exponential of a complex number:
e^(iu) = cos(u) + i·sin(u)
And e^(2πi) = cos(2π) + i·sin(2π) = 1
We get an n-th root of unity by taking ω_n = e^(2πi/n)

184 Complex Roots of Unity (cont)
But 1 can be written using different arguments as follows:
1 = e^(0·i) = e^(2πi) = e^(4πi) = ⋯ = e^(2πki) for any integer k

185 Complex Roots of Unity (cont)
Hence dividing the argument in each case by n gives the following n-th roots of unity:
e^0, e^(2πi/n), e^(4πi/n), …, e^(2π(n-1)i/n)

186 Complex Roots of Unity (cont)
An example: find the cube roots of unity
e^0 = 1
e^(2πi/3) = cos(2π/3) + i·sin(2π/3) = -1/2 + (√3/2)i
e^(4πi/3) = cos(4π/3) + i·sin(4π/3) = -1/2 - (√3/2)i

187 Complex Roots of Unity (cont)
Another example: find the fourth roots of unity
e^0 = 1,  e^(2πi/4) = i,  e^(4πi/4) = -1,  e^(6πi/4) = -i

188 Complex Roots of Unity (cont)
The n complex roots of unity are equally spaced around the circle of unit radius in the complex plane
(figure: the cube roots of unity plotted on the unit circle)

189 Properties of Roots of Unity
(Euler's formula: e^(πi) + 1 = 0)
Lemma 30.3 (Cancellation Lemma):
- For any integers n ≥ 0, k ≥ 0, and d > 0, ω_(dn)^(dk) = ω_n^k
- Proof: ω_(dn)^(dk) = (e^(2πi/dn))^(dk) = e^(2πik/n) = ω_n^k
Corollary 30.4:
- For any even integer n > 0, ω_n^(n/2) = ω_2 = -1
- Proof: apply the cancellation lemma with d = n/2 and k = 1: ω_n^(n/2) = ω_(2·(n/2))^((n/2)·1) = ω_2 = -1

190 Properties of Roots of Unity (cont)
Corollary 99:
- Let n > 0 be even, and let ω_n and ω_(n/2) be the principal n-th and (n/2)-th roots of unity. Then (ω_n)² = ω_(n/2)
- Proof: by the cancellation lemma with d = 2: ω_n² = ω_(2·(n/2))^(2·1) = ω_(n/2)

191 Properties of Roots of Unity (cont)
Lemma 30.5 (Halving Lemma):
- Let n > 0 be even. Then the squares of the n complex n-th roots of unity are the n/2 complex (n/2)-th roots of unity
- Proof: if we square all of the n-th roots of unity, each (n/2)-th root is obtained exactly twice, since
  (ω_n^(k+n/2))² = ω_n^(2k+n) = ω_n^(2k) · ω_n^n = ω_n^(2k) = (ω_n^k)²
  and by the cancellation lemma, (ω_n^k)² = ω_(n/2)^k

192 Properties of Roots of Unity (cont)
Lemma 30.6 (Summation Lemma):
- For any integer n ≥ 1 and nonnegative integer k not divisible by n,
  Σ_{j=0..n-1} (ω_n^k)^j = 0
- Proof: this is a geometric series:
  Σ_{j=0..n-1} (ω_n^k)^j = ((ω_n^k)^n - 1)/(ω_n^k - 1) = ((ω_n^n)^k - 1)/(ω_n^k - 1) = (1^k - 1)/(ω_n^k - 1) = 0
  (the denominator is nonzero because k is not divisible by n, so ω_n^k ≠ 1)

193 The DFT
Remember, we want to evaluate a polynomial of degree-bound n at the n complex n-th roots of unity ω_n^0, ω_n^1, …, ω_n^(n-1)
- Actually, since we will eventually multiply, the relevant size is 2n, and we work with the complex (2n)-th roots of unity
- Also assume that n is a power of 2; we can add new high-order zero coefficients as necessary

194 The DFT (cont)
Polynomial A is in coefficient form a = (a_0, a_1, …, a_(n-1))
Define y_k = A(ω_n^k) = Σ_{j=0..n-1} a_j ω_n^(kj) for k = 0, 1, …, n-1
The vector y = (y_0, y_1, …, y_(n-1)) is the Discrete Fourier Transform of the coefficient vector a, written y = DFT_n(a)

195 The FFT
The FFT allows us to compute DFT_n(a) in Θ(n lg n) time
Define "even" and "odd" polynomials:
A^[E](x) = a_0 + a_2 x + a_4 x² + ⋯ + a_(n-2) x^(n/2-1)
A^[O](x) = a_1 + a_3 x + a_5 x² + ⋯ + a_(n-1) x^(n/2-1)
Then A(x) = A^[E](x²) + x · A^[O](x²)

196 The FFT (cont)
This reduces the problem of evaluating the degree-bound n polynomial A(x) at ω^0, ω^1, …, ω^(n-1) to evaluating two degree-bound n/2 polynomials at the points (ω^0)², (ω^1)², …, (ω^(n-1))²
Halving Lemma: those squares are only n/2 distinct values, so A^[E](x) and A^[O](x) are each evaluated only at the n/2 complex (n/2)-th roots of unity

197 The FFT Algorithm
(The recursive FFT pseudocode shown on the slide is a figure; a runnable sketch follows below.)
It uses only O(n) multiplications per level if we pre-compute (or incrementally update) the powers ω^k
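
A Python sketch of the recursive FFT in its standard formulation (the slide's own pseudocode is an image, so this follows the textbook recursion rather than the slide verbatim):

import cmath

def recursive_fft(a):
    # a: coefficient vector whose length n is a power of 2; returns DFT_n(a).
    n = len(a)
    if n == 1:
        return a
    wn = cmath.exp(2j * cmath.pi / n)    # principal n-th root of unity
    w = 1
    y_even = recursive_fft(a[0::2])      # A^[E] at the (n/2)-th roots of unity
    y_odd  = recursive_fft(a[1::2])      # A^[O] likewise
    y = [0] * n
    for k in range(n // 2):
        y[k]          = y_even[k] + w * y_odd[k]   # A(w_n^k)
        y[k + n // 2] = y_even[k] - w * y_odd[k]   # A(w_n^(k+n/2)), using w_n^(k+n/2) = -w_n^k
        w *= wn                                    # incremental update of w_n^k
    return y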

198 The FFT Algorithm (cont)
Recursion tree (figure): each call splits the coefficient vector into its even- and odd-indexed halves, so the tree has lg n levels, each doing Θ(n) work

199 The FFT Algorithm (cont)
The recurrence for this algorithm is
T(n) = 2T(n/2) + Θ(n) = Θ(n lg n)

200 Proof of the FFT Algorithm
We need to show that for each k = 0, …, n-1, y_k = A(ω_n^k)
Base case: n = 1
- y_0 = a_0 = A(ω_1^0) = A(1), since the polynomial is a constant
Induction step: assume the algorithm is correct for n/2

201 Proof of the FFT Algorithm (cont)
Let ω_(n/2) = ω_n² be the principal (n/2)-th root of unity; by the inductive hypothesis,
y^[E]_k = A^[E](ω_(n/2)^k) and y^[O]_k = A^[O](ω_(n/2)^k) for k = 0, 1, …, n/2 - 1

202 Proof of the FFT Algorithm (cont)
For 0 ≤ k < n/2:
y_k = y^[E]_k + ω_n^k · y^[O]_k = A^[E](ω_n^(2k)) + ω_n^k A^[O](ω_n^(2k)) = A(ω_n^k)
y_(k+n/2) = y^[E]_k - ω_n^k · y^[O]_k = A^[E](ω_n^(2k+n)) + ω_n^(k+n/2) A^[O](ω_n^(2k+n)) = A(ω_n^(k+n/2))
(using ω_n^(k+n/2) = -ω_n^k and ω_n^(2k+n) = ω_n^(2k))

203 The Inverse FFT
Recall the pipeline:
- coefficient vectors → FFT (O(n log n) evaluation) → point-value pairs
- O(n) point-value multiplication
- point-value pairs → inverse FFT (O(n log n) interpolation) → coefficient vector
replacing the O(n²) coefficient multiplication

204 The Inverse FFT (cont)
Forward FFT: given {a_0, a_1, …, a_(n-1)}, compute {y_0, y_1, …, y_(n-1)}
In matrix form, y = V_n a, where V_n is the Vandermonde matrix with (k, j) entry ω_n^(kj)

205 The Inverse FFT (cont)
Inverse FFT: given {y_0, y_1, …, y_(n-1)}, compute {a_0, a_1, …, a_(n-1)}, i.e., a = V_n⁻¹ y

206 The Inverse FFT (cont)
It is the same algorithm as the FFT, except that we use ω_n⁻¹ as the "principal" n-th root of unity (and divide each result by n)
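
A Python sketch of the inverse transform, reusing cmath and recursive_fft from the FFT sketch above; the only changes are the inverted root and the final division by n. The usage example multiplies (1 + 2x)(3 + 4x) through the point-value pipeline:

def inverse_fft(y):
    # Same butterfly as recursive_fft, but with w_n^-1; divide by n once at the end.
    def go(y):
        n = len(y)
        if n == 1:
            return y
        wn = cmath.exp(-2j * cmath.pi / n)   # inverse principal root of unity
        w = 1
        a_even, a_odd = go(y[0::2]), go(y[1::2])
        a = [0] * n
        for k in range(n // 2):
            a[k]          = a_even[k] + w * a_odd[k]
            a[k + n // 2] = a_even[k] - w * a_odd[k]
            w *= wn
        return a
    return [v / len(y) for v in go(y)]

# Multiply A(x) = 1 + 2x and B(x) = 3 + 4x via pointwise products:
a, b = [1, 2, 0, 0], [3, 4, 0, 0]            # padded to degree-bound 2n = 4
ya, yb = recursive_fft(a), recursive_fft(b)
c = inverse_fft([p * q for p, q in zip(ya, yb)])
print([round(v.real) for v in c])            # [3, 10, 8, 0] -> 3 + 10x + 8x^2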

207 The Inverse FFT (cont)
Theorem 30.7:
- For j, k = 0, 1, …, n-1, the (j, k) entry of V_n⁻¹ is ω_n^(-kj)/n
- Proof: we show that V_n⁻¹ V_n = I_n. Consider the (j, j′) entry of V_n⁻¹ V_n:
  [V_n⁻¹ V_n]_(jj′) = Σ_{k=0..n-1} (ω_n^(-kj)/n)(ω_n^(kj′)) = (1/n) Σ_{k=0..n-1} ω_n^(k(j′-j))

208 The Inverse FFT (cont)
If j′ = j, the summation equals 1; if j′ ≠ j, it equals 0 by the Summation Lemma (since -(n-1) ≤ j′ - j ≤ n-1 and j′ - j ≠ 0, it is not divisible by n)
Therefore the interpolation a = DFT_n⁻¹(y) is given by
a_j = (1/n) Σ_{k=0..n-1} y_k ω_n^(-kj)

209 The Inverse FFT Algorithm
(Same structure as the recursive FFT, with ω_n⁻¹ in place of ω_n and a final division of each element by n; a runnable sketch appears after slide 206.)

210 Summary Slides
Divide & Conquer Algorithms, Recursion Review, Designing Algorithms, Analyzing Divide & Conquer Algorithms, MergeSort, Analysis of MergeSort, QuickSort, Performance of QuickSort

211 Summary Slides (cont.)
Randomized QuickSort, Analysis of QuickSort, Indicator Random Variables, Median and Order Statistics, The Selection Problem, Minimum and Maximum, Selection in Expected Linear Time, Analysis of Selection Algorithm

212 Summary Slides (cont.)
Selection in Worst-Case Linear Time, Analyzing "Median of Medians", Review of Sorts, Lower Bounds for Sorting, The Decision-Tree Model, Analysis of Decision-Tree Model, Theorem 8.1, Sorting in Linear Time

213 Summary Slides (cont.)
Counting Sort, Analysis of CountingSort, Radix Sort, Analysis of RadixSort, Proof by induction that RadixSort works, Radix Sort Example, Bucket Sort, Analysis of Bucket Sort

214 Summary Slides (cont.)
Strassen's Algorithm for Matrix Multiplication, An Example of Strassen's Algorithm, Discussion of Strassen's Algorithm, Final Notes on Strassen's Algorithm, Polynomials and FFT, Polynomials, Review of Complex Numbers

215 Summary Slides (cont.)
Coefficient Representation of Polynomials, Point-Value Representation of Polynomials, Review of Matrix Operations, Best of Both Worlds, Complex Roots of Unity, Properties of Roots of Unity

216 Summary Slides (cont.)
The DFT, The FFT, The FFT Algorithm, Proof of the FFT Algorithm, The Inverse FFT, The Inverse FFT Algorithm

