Chapter 4 Divide-and-Conquer Copyright © 2007 Pearson Addison-Wesley. All rights reserved.

4-1 Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin, "Introduction to the Design & Analysis of Algorithms," 2nd ed., Ch. 4

Divide-and-Conquer
The most well-known algorithm design strategy:
1. Divide an instance of the problem into two or more smaller instances.
2. Solve the smaller instances recursively.
3. Obtain a solution to the original (larger) instance by combining these solutions.

Divide-and-Conquer Technique (cont.)
[Diagram: a problem of size n (an instance) is divided into subproblem 1 of size n/2 and subproblem 2 of size n/2; the solution to subproblem 1 and the solution to subproblem 2 are combined into a solution to the original problem.]
In general, this leads to a recursive algorithm!

Divide-and-Conquer Examples
- Sorting: mergesort and quicksort
- Binary tree traversals
- Binary search (?)
- Multiplication of large integers
- Matrix multiplication: Strassen's algorithm
- Closest-pair and convex-hull algorithms

General Divide-and-Conquer Recurrence
T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(n^d), d ≥ 0

Master Theorem:
If a < b^d, T(n) ∈ Θ(n^d)
If a = b^d, T(n) ∈ Θ(n^d log n)
If a > b^d, T(n) ∈ Θ(n^(log_b a))
Note: the same results hold with O instead of Θ.

Examples:
T(n) = 4T(n/2) + n    ⇒ T(n) ∈ Θ(n^2)
T(n) = 4T(n/2) + n^2  ⇒ T(n) ∈ Θ(n^2 log n)
T(n) = 4T(n/2) + n^3  ⇒ T(n) ∈ Θ(n^3)
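The three cases can be checked mechanically. A minimal Python sketch (the function name and the returned strings are my own, not from the text):

```python
import math

def master_case(a, b, d):
    """Classify T(n) = a*T(n/b) + Theta(n^d) by the simplified Master Theorem."""
    if a < b ** d:
        return f"Theta(n^{d})"
    if a == b ** d:
        return f"Theta(n^{d} log n)"
    return f"Theta(n^{math.log(a, b):.4g})"  # a > b^d: Theta(n^(log_b a))
```

Running it on the slide's three examples reproduces the answers: `master_case(4, 2, 1)` gives `Theta(n^2)`, `master_case(4, 2, 2)` gives `Theta(n^2 log n)`, and `master_case(4, 2, 3)` gives `Theta(n^3)`.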

Mergesort
- Split array A[0..n-1] into two halves of about equal size and make copies of each half in arrays B and C.
- Sort arrays B and C recursively.
- Merge sorted arrays B and C into array A as follows:
  Repeat the following until no elements remain in one of the arrays:
  - compare the first elements in the remaining unprocessed portions of the arrays
  - copy the smaller of the two into A, while incrementing the index indicating the unprocessed portion of that array
  Once all elements in one of the arrays are processed, copy the remaining unprocessed elements from the other array into A.
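The description above translates almost line for line into Python. This is an illustrative sketch, not the book's pseudocode (which the next slide's figure shows):

```python
def mergesort(A):
    """Sort list A in place by splitting, sorting the halves, and merging."""
    if len(A) > 1:
        mid = len(A) // 2
        B, C = A[:mid], A[mid:]   # copies of the two halves
        mergesort(B)
        mergesort(C)
        merge(B, C, A)

def merge(B, C, A):
    """Merge sorted lists B and C back into A (len(A) == len(B) + len(C))."""
    i = j = k = 0
    while i < len(B) and j < len(C):
        if B[i] <= C[j]:          # copy the smaller first element into A
            A[k] = B[i]
            i += 1
        else:
            A[k] = C[j]
            j += 1
        k += 1
    # one list is exhausted: copy the remaining elements of the other
    A[k:] = B[i:] if i < len(B) else C[j:]
```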

Pseudocode of Mergesort

Pseudocode of Merge
Time complexity: Θ(p+q) = Θ(n) comparisons

Mergesort Example
The non-recursive version of Mergesort starts by merging single elements into sorted pairs.

Analysis of Mergesort
T(n) = 2T(n/2) + Θ(n), T(1) = 0
- All cases have the same efficiency: Θ(n log n)
- The number of comparisons in the worst case is close to the theoretical minimum for comparison-based sorting: ⌈log_2 n!⌉ ≈ n log_2 n - 1.44n
- Space requirement: Θ(n) (not in-place)
- Can be implemented without recursion (bottom-up)

Quicksort
- Select a pivot (partitioning element) – here, the first element.
- Rearrange the list so that all the elements in the first s positions are smaller than or equal to the pivot and all the elements in the remaining n-s positions are larger than or equal to the pivot (see next slide for an algorithm).
- Exchange the pivot with the last element in the first (i.e., ≤) subarray; the pivot is now in its final position.
- Sort the two subarrays recursively.
[Diagram: p | A[i] ≤ p | A[i] ≥ p]
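As an illustration of the scheme above, here is a Python sketch using the first element as the pivot and a Hoare-style two-scan partition (function names are my own):

```python
def quicksort(A, l=0, r=None):
    """Sort A[l..r] in place; the pivot is the first element of the subarray."""
    if r is None:
        r = len(A) - 1
    if l < r:
        s = partition(A, l, r)    # pivot lands in its final position s
        quicksort(A, l, s - 1)
        quicksort(A, s + 1, r)

def partition(A, l, r):
    """Two-scan partition around pivot p = A[l]; returns the pivot's final index."""
    p = A[l]
    i, j = l, r + 1
    while True:
        i += 1
        while i <= r and A[i] < p:   # scan right for an element >= p
            i += 1
        j -= 1
        while A[j] > p:              # scan left for an element <= p
            j -= 1
        if i >= j:
            break
        A[i], A[j] = A[j], A[i]
    A[l], A[j] = A[j], A[l]          # place the pivot between the subarrays
    return j
```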

Partitioning Algorithm
Time complexity: Θ(r-l) comparisons

Quicksort Example

Analysis of Quicksort
- Best case (split in the middle): Θ(n log n)
- Worst case (sorted array!): Θ(n^2), since T(n) = T(n-1) + Θ(n)
- Average case (random arrays): Θ(n log n)
- Improvements:
  - better pivot selection: median-of-three partitioning
  - switch to insertion sort on small subfiles
  - elimination of recursion
  These combine to a 20-25% improvement.
- Considered the method of choice for internal sorting of large files (n ≥ 10000)

Binary Search
A very efficient algorithm for searching in a sorted array:
K vs. A[0] ... A[m] ... A[n-1]
If K = A[m], stop (successful search); otherwise, continue searching by the same method in A[0..m-1] if K < A[m] and in A[m+1..n-1] if K > A[m].

l ← 0; r ← n-1
while l ≤ r do
    m ← ⌊(l+r)/2⌋
    if K = A[m] return m
    else if K < A[m] r ← m-1
    else l ← m+1
return -1
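The pseudocode above renders directly into Python:

```python
def binary_search(A, K):
    """Return an index of K in sorted list A, or -1 if K is absent."""
    l, r = 0, len(A) - 1
    while l <= r:
        m = (l + r) // 2          # floor of the midpoint
        if K == A[m]:
            return m              # successful search
        elif K < A[m]:
            r = m - 1             # continue in A[l..m-1]
        else:
            l = m + 1             # continue in A[m+1..r]
    return -1
```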

Analysis of Binary Search
- Time efficiency: worst-case recurrence C_w(n) = 1 + C_w(⌊n/2⌋), C_w(1) = 1; solution: C_w(n) = ⌈log_2(n+1)⌉. This is VERY fast: e.g., C_w(10^6) = 20.
- Optimal for searching a sorted array
- Limitations: must be a sorted array (not a linked list)
- A bad (degenerate) example of divide-and-conquer, because only one of the sub-instances is solved
- Has a continuous counterpart called the bisection method for solving equations in one unknown f(x) = 0 (see Sec. 12.4)

Binary Tree Algorithms
A binary tree is a divide-and-conquer-ready structure!
Ex. 1: Classic traversals (preorder, inorder, postorder)

Algorithm Inorder(T)
    if T ≠ ∅
        Inorder(T_left)
        print(root of T)
        Inorder(T_right)

[Example tree: root a with children b and c; b has children d and e.]
Efficiency: Θ(n). Why? Each node is visited/printed once.
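For illustration, the inorder traversal in Python (the Node class and function names are my own, not from the text):

```python
class Node:
    """A minimal binary-tree node."""
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def inorder(t, visit):
    """Inorder traversal: left subtree, root, right subtree."""
    if t is not None:
        inorder(t.left, visit)
        visit(t.value)
        inorder(t.right, visit)
```

On the slide's example tree (root a, children b and c, with d and e under b), the visit order is d, b, e, a, c.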

Binary Tree Algorithms (cont.)
Ex. 2: Computing the height of a binary tree
h(T) = max{h(T_L), h(T_R)} + 1 if T ≠ ∅, and h(∅) = -1
Efficiency: Θ(n). Why?
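A direct Python transcription of the height formula, representing a tree as nested (value, left, right) tuples (my own convention, not the book's):

```python
def height(t):
    """Height of a binary tree given as (value, left, right) nested tuples.

    h(empty) = -1, matching the slide's convention, so a single leaf has height 0.
    """
    if t is None:
        return -1
    _value, left, right = t
    return 1 + max(height(left), height(right))
```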

Multiplication of Large Integers
Consider the problem of multiplying two (large) n-digit integers represented by arrays of their digits:
A = a_1 a_2 ... a_n    B = b_1 b_2 ... b_n
The grade-school algorithm computes one row of partial products per digit:
(d_10) d_11 d_12 ... d_1n
(d_20) d_21 d_22 ... d_2n
  ...
(d_n0) d_n1 d_n2 ... d_nn
Efficiency: Θ(n^2) single-digit multiplications

First Divide-and-Conquer Algorithm
A small example: A × B, where A = 2135 and B = 4014.
A = (21·10^2 + 35), B = (40·10^2 + 14)
So, A × B = (21·10^2 + 35) × (40·10^2 + 14)
          = 21×40·10^4 + (21×14 + 35×40)·10^2 + 35×14

In general, if A = A_1 A_2 and B = B_1 B_2 (where A and B are n-digit numbers and A_1, A_2, B_1, B_2 are n/2-digit numbers),
A × B = A_1×B_1·10^n + (A_1×B_2 + A_2×B_1)·10^(n/2) + A_2×B_2

Recurrence for the number of one-digit multiplications M(n):
M(n) = 4M(n/2), M(1) = 1    Solution: M(n) = n^2

Second Divide-and-Conquer Algorithm
A × B = A_1×B_1·10^n + (A_1×B_2 + A_2×B_1)·10^(n/2) + A_2×B_2
The idea is to decrease the number of multiplications from 4 to 3:
(A_1 + A_2) × (B_1 + B_2) = A_1×B_1 + (A_1×B_2 + A_2×B_1) + A_2×B_2,
i.e., (A_1×B_2 + A_2×B_1) = (A_1 + A_2) × (B_1 + B_2) - A_1×B_1 - A_2×B_2,
which requires only 3 multiplications at the expense of (4-1) extra additions/subtractions.

Recurrence for the number of multiplications M(n):
M(n) = 3M(n/2), M(1) = 1    Solution: M(n) = 3^(log_2 n) = n^(log_2 3) ≈ n^1.585
What if we count both multiplications and additions?

Example of Large-Integer Multiplication
2135 × 4014 = (21*10^2 + 35) * (40*10^2 + 14)
            = (21*40)*10^4 + c1*10^2 + 35*14,
where c1 = (21+35)*(40+14) - 21*40 - 35*14, and
21*40 = (2*10 + 1) * (4*10 + 0)
      = (2*4)*10^2 + c2*10 + 1*0,
where c2 = (2+1)*(4+0) - 2*4 - 1*0, etc.
This process requires 9 digit multiplications, as opposed to 16.
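The same process in executable form; a hedged Python sketch of this Karatsuba-style multiplication (function name mine, and Python integers are arbitrary-precision, so this is purely illustrative):

```python
def karatsuba(x, y):
    """Multiply nonnegative integers using 3 recursive multiplications."""
    if x < 10 or y < 10:
        return x * y                     # single-digit base case
    half = max(len(str(x)), len(str(y))) // 2
    p = 10 ** half
    a1, a2 = divmod(x, p)                # x = a1*10^half + a2
    b1, b2 = divmod(y, p)                # y = b1*10^half + b2
    m1 = karatsuba(a1, b1)
    m2 = karatsuba(a2, b2)
    m3 = karatsuba(a1 + a2, b1 + b2)
    # the middle coefficient a1*b2 + a2*b1 is recovered as m3 - m1 - m2
    return m1 * p * p + (m3 - m1 - m2) * p + m2
```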

Conventional Matrix Multiplication
Brute-force algorithm:
c00 c01     a00 a01     b00 b01
         =           *
c10 c11     a10 a11     b10 b11

c00 = a00*b00 + a01*b10    c01 = a00*b01 + a01*b11
c10 = a10*b00 + a11*b10    c11 = a10*b01 + a11*b11

8 multiplications, 4 additions
Efficiency class in general: Θ(n^3)

Strassen's Matrix Multiplication
Strassen's algorithm for two 2×2 matrices (1969):
c00 c01     a00 a01     b00 b01
         =           *
c10 c11     a10 a11     b10 b11

c00 = m1 + m4 - m5 + m7    c01 = m3 + m5
c10 = m2 + m4              c11 = m1 + m3 - m2 + m6

m1 = (a00 + a11) * (b00 + b11)
m2 = (a10 + a11) * b00
m3 = a00 * (b01 - b11)
m4 = a11 * (b10 - b00)
m5 = (a00 + a01) * b11
m6 = (a10 - a00) * (b00 + b01)
m7 = (a01 - a11) * (b10 + b11)

7 multiplications, 18 additions/subtractions

Strassen's Matrix Multiplication (cont.)
Strassen observed [1969] that the product of two matrices can be computed blockwise in general as follows:
C00 C01     A00 A01     B00 B01
         =           *
C10 C11     A10 A11     B10 B11

C00 = M1 + M4 - M5 + M7    C01 = M3 + M5
C10 = M2 + M4              C11 = M1 + M3 - M2 + M6

Formulas for Strassen's Algorithm
M1 = (A00 + A11) × (B00 + B11)
M2 = (A10 + A11) × B00
M3 = A00 × (B01 - B11)
M4 = A11 × (B10 - B00)
M5 = (A00 + A01) × B11
M6 = (A10 - A00) × (B00 + B01)
M7 = (A01 - A11) × (B10 + B11)
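The formulas can be verified in code; a small Python sketch for the 2×2 case (names are my own, and the entries here are plain numbers, whereas the recursive algorithm applies the same formulas to matrix blocks):

```python
def strassen_2x2(A, B):
    """Product of two 2x2 matrices using Strassen's 7 multiplications."""
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    m1 = (a00 + a11) * (b00 + b11)
    m2 = (a10 + a11) * b00
    m3 = a00 * (b01 - b11)
    m4 = a11 * (b10 - b00)
    m5 = (a00 + a01) * b11
    m6 = (a10 - a00) * (b00 + b01)
    m7 = (a01 - a11) * (b10 + b11)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 + m3 - m2 + m6]]
```

For example, `strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]])` returns `[[19, 22], [43, 50]]`, matching the conventional 8-multiplication product.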

Analysis of Strassen's Algorithm
If n is not a power of 2, matrices can be padded with zeros.
Number of multiplications:
M(n) = 7M(n/2), M(1) = 1
Solution: M(n) = 7^(log_2 n) = n^(log_2 7) ≈ n^2.807, vs. n^3 for the brute-force algorithm.
Algorithms with better asymptotic efficiency are known, but they are even more complex and not used in practice.
What if we count both multiplications and additions?

Closest-Pair Problem by Divide-and-Conquer
Step 0: Sort the points by x (list one) and then by y (list two).
Step 1: Divide the given points into two subsets S_1 and S_2 by a vertical line x = c so that half the points lie to the left of or on the line and half lie to the right of or on the line.

Closest Pair by Divide-and-Conquer (cont.)
Step 2: Find recursively the closest pairs for the left and right subsets.
Step 3: Set d = min{d_1, d_2}. We can limit our attention to the points in the symmetric vertical strip of width 2d as possible closest pairs. Let C_1 and C_2 be the subsets of points of S_1 and S_2, respectively, that lie in this vertical strip. The points in C_1 and C_2 are stored in increasing order of their y coordinates, taken from the second list.
Step 4: For every point P(x, y) in C_1, inspect the points in C_2 that may be closer to P than d. There can be no more than 6 such points (because d ≤ d_2)!

Closest Pair by Divide-and-Conquer: Worst Case
The worst-case scenario is depicted below.

Efficiency of the Closest-Pair Algorithm
The running time of the algorithm (without sorting) is:
T(n) = 2T(n/2) + M(n), where M(n) ∈ Θ(n)
By the Master Theorem (with a = 2, b = 2, d = 1): T(n) ∈ Θ(n log n)
So the total time is Θ(n log n).
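Putting Steps 0-4 together, a compact Python sketch (helper names are mine; it assumes at least two distinct points and returns the smallest pairwise distance rather than the pair itself):

```python
import math

def closest_pair(points):
    """Smallest pairwise distance among distinct (x, y) points, in O(n log n)."""
    px = sorted(points)                      # "list one": sorted by x
    py = sorted(points, key=lambda p: p[1])  # "list two": sorted by y
    return _closest(px, py)

def _closest(px, py):
    n = len(px)
    if n <= 3:                               # brute-force small instances
        return min(math.dist(px[i], px[j])
                   for i in range(n) for j in range(i + 1, n))
    mid = n // 2
    xm = px[mid][0]                          # dividing line x = c
    left = set(px[:mid])
    py_l = [p for p in py if p in left]      # keep each half y-sorted
    py_r = [p for p in py if p not in left]
    d = min(_closest(px[:mid], py_l), _closest(px[mid:], py_r))
    # candidates in the vertical strip of width 2d, already y-sorted
    strip = [p for p in py if abs(p[0] - xm) < d]
    for i, p in enumerate(strip):
        for q in strip[i + 1:i + 8]:         # a constant number of neighbors suffices
            d = min(d, math.dist(p, q))
    return d
```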

Quickhull Algorithm
Convex hull: the smallest convex set that includes the given points. An O(n^3) brute-force algorithm is given in Levitin, Ch. 3.
- Assume the points are sorted by x-coordinate values.
- Identify the extreme points P_1 and P_2 (leftmost and rightmost).
- Compute the upper hull recursively:
  - find the point P_max that is farthest away from line P_1 P_2
  - compute the upper hull of the points to the left of line P_1 P_max
  - compute the upper hull of the points to the left of line P_max P_2
- Compute the lower hull in a similar manner.

Efficiency of Quickhull Algorithm
- Finding the point farthest away from line P_1 P_2 can be done in linear time.
- Time efficiency: T(n) = T(x) + T(y) + T(z) + T(v) + O(n), where x + y + z + v ≤ n.
  - worst case: Θ(n^2), since T(n) = T(n-1) + O(n)
  - average case: Θ(n) (under reasonable assumptions about the distribution of the given points)
- If the points are not initially sorted by x-coordinate value, this can be accomplished in O(n log n) time.
- Several O(n log n) algorithms for convex hull are known.
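A Python sketch of quickhull following the scheme above (helper names and the cross-product "to the left" test are my own; the cross product is proportional to a point's distance from the line, so taking its maximum finds P_max):

```python
def cross(o, a, b):
    """z-component of (a-o) x (b-o): positive iff b lies to the left of line o->a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def quickhull(points):
    """Hull vertices of distinct (x, y) points, starting at the leftmost point."""
    pts = sorted(set(points))                # sort by x-coordinate
    if len(pts) <= 2:
        return pts
    p1, p2 = pts[0], pts[-1]                 # leftmost and rightmost extremes
    upper = [p for p in pts if cross(p1, p2, p) > 0]
    lower = [p for p in pts if cross(p1, p2, p) < 0]
    return [p1] + _hull(p1, p2, upper) + [p2] + _hull(p2, p1, lower)

def _hull(a, b, pts):
    """Hull points strictly between a and b, given points to the left of a->b."""
    if not pts:
        return []
    pmax = max(pts, key=lambda p: cross(a, b, p))   # farthest from line a-b
    left1 = [p for p in pts if cross(a, pmax, p) > 0]
    left2 = [p for p in pts if cross(pmax, b, p) > 0]
    return _hull(a, pmax, left1) + [pmax] + _hull(pmax, b, left2)
```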