
Copyright © 2007 Pearson Addison-Wesley. All rights reserved. A. Levitin, "Introduction to the Design & Analysis of Algorithms," 2nd ed., Ch. 2

### Analysis of algorithms

Issues:
- correctness
- time efficiency
- space efficiency
- optimality

Approaches:
- theoretical analysis
- empirical analysis

### Theoretical analysis of time efficiency

Time efficiency is analyzed by determining the number of repetitions of the basic operation as a function of input size.

Basic operation: the operation that contributes the most towards the running time of the algorithm.

T(n) ≈ c_op · C(n)

where T(n) is the running time, c_op is the execution time (cost) of the basic operation, C(n) is the number of times the basic operation is executed, and n is the input size.

Note: different basic operations may cost differently!

### Input size and basic operation examples

| Problem | Input size measure | Basic operation |
|---|---|---|
| Searching for key in a list of n items | number of list's items, i.e. n | key comparison |
| Multiplication of two matrices | matrix dimensions or total number of elements | multiplication of two numbers |
| Checking primality of a given integer n | size of n = number of digits (in binary representation) | division |
| Typical graph problem | #vertices and/or edges | visiting a vertex or traversing an edge |

### Empirical analysis of time efficiency

- Select a specific (typical) sample of inputs
- Use a physical unit of time (e.g., milliseconds), or count the actual number of the basic operation's executions
- Analyze the empirical data

### Best-case, average-case, worst-case

For some algorithms, efficiency depends on the form of input:

- Worst case: C_worst(n), the maximum over inputs of size n
- Best case: C_best(n), the minimum over inputs of size n
- Average case: C_avg(n), the "average" over inputs of size n
  - the number of times the basic operation will be executed on typical input
  - NOT the average of the worst and best cases
  - the expected number of basic operations, treated as a random variable under some assumption about the probability distribution of all possible inputs (so avg = expected under the uniform distribution)

### Example: Sequential search

- Worst case: n key comparisons
- Best case: 1 comparison
- Average case: (n+1)/2 comparisons, assuming K is in A
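The three cases can be checked empirically. The Python sketch below (illustrative, not part of the original slides) counts key comparisons for a list of n items; averaging over all possible positions of the key reproduces (n+1)/2.

```python
def sequential_search(a, key):
    """Return (index, comparisons); index is -1 if key is absent."""
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1  # one key comparison per inspected item
        if x == key:
            return i, comparisons
    return -1, comparisons

# Best case: key in first position -> 1 comparison.
# Worst case: key absent -> n comparisons.
n = 10
a = list(range(n))
avg = sum(sequential_search(a, k)[1] for k in a) / n  # (n + 1) / 2
```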

### Types of formulas for basic operation's count

- Exact formula, e.g., C(n) = n(n-1)/2
- Formula indicating order of growth with a specific multiplicative constant, e.g., C(n) ≈ 0.5·n²
- Formula indicating order of growth with an unknown multiplicative constant, e.g., C(n) ≈ c·n²

### Order of growth

- Most important: order of growth within a constant multiple as n → ∞
- Examples:
  - How much faster will the algorithm run on a computer that is twice as fast?
  - How much longer does it take to solve a problem of double the input size?
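To illustrate the second question: for a quadratic operation count C(n) = n², doubling the input size quadruples the count, independent of any constant factor. A tiny check (illustrative, not from the slides):

```python
def quadratic_count(n):
    # C(n) = n^2, e.g. the comparison count of a double loop over n items
    return n * n

# Doubling the input size quadruples the operation count:
ratio = quadratic_count(2000) / quadratic_count(1000)
```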

### Values of some important functions as n → ∞

(The slide's table of function values is not reproduced in this extraction.)

### Asymptotic order of growth

A way of comparing functions that ignores constant factors and small input sizes (because?):

- O(g(n)): class of functions f(n) that grow no faster than g(n)
- Θ(g(n)): class of functions f(n) that grow at the same rate as g(n)
- Ω(g(n)): class of functions f(n) that grow at least as fast as g(n)

### Establishing order of growth using the definition

Definition: f(n) is in O(g(n)), denoted f(n) ∈ O(g(n)), if the order of growth of f(n) ≤ the order of growth of g(n) (within a constant multiple), i.e., there exist a positive constant c and a non-negative integer n₀ such that

f(n) ≤ c·g(n) for every n ≥ n₀

Examples:
- 10n is in O(n²)
- 5n + 20 is in O(n)
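The definition can be spot-checked numerically. The helper below is hypothetical (not in the slides) and verifies candidate witnesses c and n₀ over a finite range, which supports, but of course does not prove, membership.

```python
def witnesses_hold(f, g, c, n0, n_max=10_000):
    """Spot-check f(n) <= c * g(n) for every n0 <= n <= n_max."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# 5n + 20 in O(n): c = 25, n0 = 1 works, since 25n - (5n + 20) = 20n - 20 >= 0.
in_O_n = witnesses_hold(lambda n: 5 * n + 20, lambda n: n, c=25, n0=1)

# 10n in O(n^2): c = 10, n0 = 1 works, since 10n <= 10n^2 for n >= 1.
in_O_n2 = witnesses_hold(lambda n: 10 * n, lambda n: n * n, c=10, n0=1)
```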

### Ω-notation

Formal definition: a function t(n) is said to be in Ω(g(n)), denoted t(n) ∈ Ω(g(n)), if t(n) is bounded below by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some non-negative integer n₀ such that

t(n) ≥ c·g(n) for all n ≥ n₀

Exercises: prove the following using the above definition
- 10n² ∈ Ω(n²)
- 0.3n² − 2n ∈ Ω(n²)
- 0.1n³ ∈ Ω(n²)

### Θ-notation

Formal definition: a function t(n) is said to be in Θ(g(n)), denoted t(n) ∈ Θ(g(n)), if t(n) is bounded both above and below by some positive constant multiples of g(n) for all large n, i.e., if there exist some positive constants c₁ and c₂ and some non-negative integer n₀ such that

c₂·g(n) ≤ t(n) ≤ c₁·g(n) for all n ≥ n₀

Exercises: prove the following using the above definition
- 10n² ∈ Θ(n²)
- 0.3n² − 2n ∈ Θ(n²)
- (1/2)n(n+1) ∈ Θ(n²)

Relationship among the classes, for a given g(n):

- Ω(g(n)): functions that grow at least as fast as g(n) ("≥")
- Θ(g(n)): functions that grow at the same rate as g(n) ("=")
- O(g(n)): functions that grow no faster than g(n) ("≤")

### Theorem

If t₁(n) ∈ O(g₁(n)) and t₂(n) ∈ O(g₂(n)), then

t₁(n) + t₂(n) ∈ O(max{g₁(n), g₂(n)}).

The analogous assertions are true for the Ω-notation and Θ-notation.

Implication: the algorithm's overall efficiency is determined by the part with the larger order of growth, i.e., its least efficient part. For example, 5n² + 3n·log n ∈ O(n²).

Proof. There exist constants c₁, c₂, n₁, n₂ such that
- t₁(n) ≤ c₁·g₁(n) for all n ≥ n₁
- t₂(n) ≤ c₂·g₂(n) for all n ≥ n₂

Define c₃ = c₁ + c₂ and n₃ = max{n₁, n₂}. Then

t₁(n) + t₂(n) ≤ c₃·max{g₁(n), g₂(n)} for all n ≥ n₃.

### Some properties of asymptotic order of growth

- f(n) ∈ O(f(n))
- f(n) ∈ O(g(n)) iff g(n) ∈ Ω(f(n))
- If f(n) ∈ O(g(n)) and g(n) ∈ O(h(n)), then f(n) ∈ O(h(n)). Note the similarity with a ≤ b.
- If f₁(n) ∈ O(g₁(n)) and f₂(n) ∈ O(g₂(n)), then f₁(n) + f₂(n) ∈ O(max{g₁(n), g₂(n)})
- Also, Σ_{1≤i≤n} Θ(f(i)) = Θ(Σ_{1≤i≤n} f(i))

Exercise: can you prove these properties?

### Establishing order of growth using limits

Compute lim_{n→∞} T(n)/g(n):

- = 0: order of growth of T(n) < order of growth of g(n)
- = c > 0: order of growth of T(n) = order of growth of g(n)
- = ∞: order of growth of T(n) > order of growth of g(n)

Examples: 10n vs. n²; n(n+1)/2 vs. n²

### L'Hôpital's rule and Stirling's formula

L'Hôpital's rule: if lim_{n→∞} f(n) = lim_{n→∞} g(n) = ∞ and the derivatives f′, g′ exist, then

lim_{n→∞} f(n)/g(n) = lim_{n→∞} f′(n)/g′(n)

Example: log n vs. n

Stirling's formula: n! ≈ (2πn)^{1/2} (n/e)ⁿ

Example: 2ⁿ vs. n!
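Stirling's formula is easy to check numerically; its relative error is roughly 1/(12n), so the ratio to n! approaches 1. A small sketch (illustrative, not from the slides):

```python
import math

def stirling(n):
    """Stirling's approximation: sqrt(2*pi*n) * (n/e)^n."""
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

# The approximation also shows why n! dominates 2^n: (n/e)^n
# eventually outgrows any fixed base raised to the n.
```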

### Orders of growth of some important functions

- All logarithmic functions log_a n belong to the same class Θ(log n) no matter what the logarithm's base a > 1 is, because log_a n = log_a b · log_b n, a constant multiple of log_b n.
- All polynomials of the same degree k belong to the same class: a_k·nᵏ + a_{k−1}·nᵏ⁻¹ + … + a₀ ∈ Θ(nᵏ).
- Exponential functions aⁿ have different orders of growth for different values of a.
- order of log n < order of n^α (α > 0) < order of aⁿ (a > 1) < order of n! < order of nⁿ

### Basic asymptotic efficiency classes

| Class | Name |
|---|---|
| 1 | constant |
| log n | logarithmic |
| n | linear |
| n log n | n-log-n |
| n² | quadratic |
| n³ | cubic |
| 2ⁿ | exponential |
| n! | factorial |

### Time efficiency of nonrecursive algorithms

General plan for analysis:
- Decide on a parameter n indicating input size
- Identify the algorithm's basic operation
- Determine the worst, average, and best cases for input of size n
- Set up a sum for the number of times the basic operation is executed
- Simplify the sum using standard formulas and rules (see Appendix A)

### Useful summation formulas and rules

- Σ_{l≤i≤n} 1 = 1 + 1 + … + 1 = n − l + 1; in particular, Σ_{1≤i≤n} 1 = n − 1 + 1 = n ∈ Θ(n)
- Σ_{1≤i≤n} i = 1 + 2 + … + n = n(n+1)/2 ≈ n²/2 ∈ Θ(n²)
- Σ_{1≤i≤n} i² = 1² + 2² + … + n² = n(n+1)(2n+1)/6 ≈ n³/3 ∈ Θ(n³)
- Σ_{0≤i≤n} aⁱ = 1 + a + … + aⁿ = (a^{n+1} − 1)/(a − 1) for any a ≠ 1; in particular, Σ_{0≤i≤n} 2ⁱ = 2⁰ + 2¹ + … + 2ⁿ = 2^{n+1} − 1 ∈ Θ(2ⁿ)
- Σ(aᵢ ± bᵢ) = Σaᵢ ± Σbᵢ;  Σ c·aᵢ = c·Σaᵢ;  Σ_{l≤i≤u} aᵢ = Σ_{l≤i≤m} aᵢ + Σ_{m+1≤i≤u} aᵢ
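These closed forms are easy to sanity-check in Python for a specific n (a spot check for one value, not a proof):

```python
n, a = 30, 3
assert sum(1 for i in range(1, n + 1)) == n
assert sum(range(1, n + 1)) == n * (n + 1) // 2
assert sum(i * i for i in range(1, n + 1)) == n * (n + 1) * (2 * n + 1) // 6
assert sum(a ** i for i in range(n + 1)) == (a ** (n + 1) - 1) // (a - 1)
assert sum(2 ** i for i in range(n + 1)) == 2 ** (n + 1) - 1
```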

### Example 1: Maximum element

T(n) = Σ_{1≤i≤n−1} 1 = n − 1 ∈ Θ(n) comparisons
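The count can be reproduced with a direct implementation (a sketch, not the book's pseudocode verbatim) that tallies element comparisons:

```python
def max_element(a):
    """Return (maximum value, number of comparisons); always n - 1 comparisons."""
    maxval = a[0]
    comparisons = 0
    for x in a[1:]:
        comparisons += 1  # one comparison per remaining element
        if x > maxval:
            maxval = x
    return maxval, comparisons
```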

### Example 2: Element uniqueness problem

T(n) = Σ_{0≤i≤n−2} Σ_{i+1≤j≤n−1} 1 = Σ_{0≤i≤n−2} (n − i − 1) = (n − 1)n/2 ∈ Θ(n²) comparisons
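A direct implementation (illustrative sketch) confirms the worst-case count of n(n−1)/2 comparisons, reached when all elements are distinct:

```python
def unique_elements(a):
    """Return (all-distinct?, comparisons); worst case n(n-1)/2 comparisons."""
    comparisons = 0
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            comparisons += 1  # compare each pair at most once
            if a[i] == a[j]:
                return False, comparisons
    return True, comparisons
```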

### Example 3: Matrix multiplication

T(n) = Σ_{0≤i≤n−1} Σ_{0≤j≤n−1} n = Σ_{0≤i≤n−1} n² = n³ ∈ Θ(n³) multiplications
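The definition-based algorithm, sketched below with a multiplication counter (not the book's pseudocode verbatim), performs exactly n³ multiplications:

```python
def matrix_multiply(A, B):
    """n-by-n matrix product; returns (C, multiplications) with exactly n^3 mults."""
    n = len(A)
    mults = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
                mults += 1  # one multiplication per innermost iteration
    return C, mults
```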

### Example 4: Gaussian elimination

Algorithm GaussianElimination(A[0..n-1, 0..n])
//Implements Gaussian elimination on an n-by-(n+1) matrix A
for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        for k ← i to n do
            A[j,k] ← A[j,k] − A[i,k] * A[j,i] / A[i,i]

Find the efficiency class and a constant-factor improvement:

for i ← 0 to n − 2 do
    for j ← i + 1 to n − 1 do
        B ← A[j,i] / A[i,i]
        for k ← i to n do
            A[j,k] ← A[j,k] − A[i,k] * B
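The improved version can be mirrored in Python. This sketch, like the slide, assumes every pivot A[i][i] is nonzero (no row exchanges), which is a simplification of practical Gaussian elimination:

```python
def forward_eliminate(A):
    """In-place forward elimination on an n-by-(n+1) augmented matrix A.
    Assumes all pivots A[i][i] are nonzero (no partial pivoting)."""
    n = len(A)
    for i in range(n - 1):
        for j in range(i + 1, n):
            b = A[j][i] / A[i][i]  # hoisted out of the k-loop: the improvement
            for k in range(i, n + 1):
                A[j][k] -= A[i][k] * b
    return A
```

Hoisting the division out of the innermost loop does not change the Θ(n³) class, only the constant factor.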

### Example 5: Counting binary digits

It cannot be investigated the way the previous examples are. The halving game: find the number of times i that n can be halved before n/2ⁱ ≤ 1. Answer: i ≈ log₂ n. So T(n) ∈ Θ(log n) divisions.

Another solution: using recurrence relations.
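The halving game can be played directly. The sketch below counts halvings and matches the number of binary digits of n:

```python
def bit_count(n):
    """Number of binary digits of n >= 1: one plus the number of
    times n can be halved (integer division) before reaching 1."""
    count = 1
    while n > 1:
        n //= 2  # the basic operation: one division per loop pass
        count += 1
    return count
```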

### Plan for analysis of recursive algorithms

- Decide on a parameter indicating an input's size.
- Identify the algorithm's basic operation.
- Check whether the number of times the basic operation is executed may vary on different inputs of the same size. (If it may, the worst, average, and best cases must be investigated separately.)
- Set up a recurrence relation, with an appropriate initial condition, expressing the number of times the basic operation is executed.
- Solve the recurrence (or, at the very least, establish its solution's order of growth) by backward substitutions or another method.

### Example 1: Recursive evaluation of n!

Definition: n! = 1 · 2 · … · (n−1) · n for n ≥ 1, and 0! = 1

Recursive definition of n!: F(n) = F(n−1) · n for n ≥ 1, and F(0) = 1

- Size: n
- Basic operation: multiplication
- Recurrence relation: M(n) = M(n−1) + 1, M(0) = 0

### Solving the recurrence for M(n)

M(n) = M(n−1) + 1, M(0) = 0

M(n) = M(n−1) + 1
     = (M(n−2) + 1) + 1 = M(n−2) + 2
     = (M(n−3) + 1) + 2 = M(n−3) + 3
     …
     = M(n−i) + i
     …
     = M(0) + n = n

The method is called backward substitution.
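The solution M(n) = n can be confirmed by instrumenting the recursive algorithm (an illustrative sketch) to return the multiplication count alongside the result:

```python
def factorial_with_count(n):
    """Return (n!, number of multiplications); the count satisfies
    M(n) = M(n - 1) + 1 with M(0) = 0, so it equals n."""
    if n == 0:
        return 1, 0
    f, m = factorial_with_count(n - 1)
    return f * n, m + 1  # one multiplication per recursive level
```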

### Example 2: The Tower of Hanoi puzzle

Recurrence for the number of moves: M(n) = 2M(n−1) + 1

### Solving the recurrence for the number of moves

M(n) = 2M(n−1) + 1, M(1) = 1

M(n) = 2M(n−1) + 1
     = 2(2M(n−2) + 1) + 1 = 2²·M(n−2) + 2¹ + 2⁰
     = 2²(2M(n−3) + 1) + 2¹ + 2⁰ = 2³·M(n−3) + 2² + 2¹ + 2⁰
     = …
     = 2ⁿ⁻¹·M(1) + 2ⁿ⁻² + … + 2¹ + 2⁰
     = 2ⁿ⁻¹ + 2ⁿ⁻² + … + 2¹ + 2⁰
     = 2ⁿ − 1
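The closed form 2ⁿ − 1 can be verified by running the standard recursive solution and counting the moves it actually makes (illustrative sketch):

```python
def hanoi(n, src, aux, dst, moves):
    """Standard recursive solution; records each move as (from, to)."""
    if n == 1:
        moves.append((src, dst))
        return
    hanoi(n - 1, src, dst, aux, moves)  # move top n-1 disks out of the way
    moves.append((src, dst))            # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)  # move the n-1 disks on top of it

moves = []
hanoi(5, 'A', 'B', 'C', moves)  # len(moves) == 2**5 - 1
```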

### Tree of calls for the Tower of Hanoi puzzle

(The call-tree figure is not reproduced in this extraction.)

### Example 3: Counting #bits

A(n) = A(⌊n/2⌋) + 1 for n > 1, A(1) = 0

For n = 2ᵏ (using the smoothness rule):

A(2ᵏ) = A(2ᵏ⁻¹) + 1
      = (A(2ᵏ⁻²) + 1) + 1 = A(2ᵏ⁻²) + 2
      = …
      = A(2ᵏ⁻ⁱ) + i
      = …
      = A(2⁰) + k = 0 + k = k = log₂ n
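The recurrence can be run directly; for all n ≥ 1 (not only powers of 2) it evaluates to ⌊log₂ n⌋, as this sketch checks:

```python
import math

def a(n):
    """A(n) = A(n // 2) + 1 with A(1) = 0; equals floor(log2(n))."""
    if n == 1:
        return 0
    return a(n // 2) + 1
```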

### Smoothness rule

Let f(n) be a nonnegative function defined on the set of natural numbers. f(n) is called smooth if it is eventually nondecreasing and f(2n) ∈ Θ(f(n)).
- Functions that do not grow too fast are smooth, including log n, n, n log n, and n^α where α ≥ 0.

Smoothness rule: let T(n) be an eventually nondecreasing function and f(n) be a smooth function. If T(n) ∈ Θ(f(n)) for values of n that are powers of b, where b ≥ 2, then T(n) ∈ Θ(f(n)) for any n.

### Fibonacci numbers

The Fibonacci numbers: 0, 1, 1, 2, 3, 5, 8, 13, 21, …

The Fibonacci recurrence: F(n) = F(n−1) + F(n−2), F(0) = 0, F(1) = 1

General second-order linear homogeneous recurrence with constant coefficients:

aX(n) + bX(n−1) + cX(n−2) = 0

### Solving aX(n) + bX(n−1) + cX(n−2) = 0

- Set up the characteristic equation (quadratic): ar² + br + c = 0
- Solve to obtain the roots r₁ and r₂
- General solution to the recurrence:
  - if r₁ and r₂ are two distinct real roots: X(n) = α·r₁ⁿ + β·r₂ⁿ
  - if r₁ = r₂ = r is a double real root: X(n) = α·rⁿ + β·n·rⁿ
- A particular solution can be found by using the initial conditions

### Application to the Fibonacci numbers

F(n) = F(n−1) + F(n−2), or F(n) − F(n−1) − F(n−2) = 0

Characteristic equation: r² − r − 1 = 0

Roots of the characteristic equation: r₁,₂ = (1 ± √5)/2

General solution to the recurrence: F(n) = α·r₁ⁿ + β·r₂ⁿ

Particular solution for F(0) = 0, F(1) = 1: α = 1/√5, β = −1/√5
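Substituting the particular solution gives Binet's explicit formula, which can be checked numerically; rounding compensates for the rapidly vanishing r₂ⁿ term (illustrative sketch):

```python
import math

def fib_binet(n):
    """F(n) = (phi^n - psi^n) / sqrt(5), phi and psi the characteristic roots."""
    sqrt5 = math.sqrt(5)
    phi = (1 + sqrt5) / 2
    psi = (1 - sqrt5) / 2
    return round((phi ** n - psi ** n) / sqrt5)
```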

### Computing Fibonacci numbers

1. Definition-based recursive algorithm
2. Nonrecursive definition-based algorithm
3. Explicit formula algorithm
4. Logarithmic algorithm based on the formula

[ F(n−1)  F(n)   ]   [ 0  1 ]ⁿ
[ F(n)    F(n+1) ] = [ 1  1 ]

for n ≥ 1, assuming an efficient way of computing matrix powers.
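The fourth approach can be sketched with repeated squaring, giving Θ(log n) matrix multiplications (illustrative implementation, not from the slides):

```python
def fib_matrix(n):
    """F(n) via fast exponentiation of [[0, 1], [1, 1]]:
    entry (0, 1) of the n-th power is F(n)."""
    def mat_mult(X, Y):
        return [[X[0][0] * Y[0][0] + X[0][1] * Y[1][0],
                 X[0][0] * Y[0][1] + X[0][1] * Y[1][1]],
                [X[1][0] * Y[0][0] + X[1][1] * Y[1][0],
                 X[1][0] * Y[0][1] + X[1][1] * Y[1][1]]]

    result = [[1, 0], [0, 1]]  # identity matrix
    base = [[0, 1], [1, 1]]
    while n > 0:               # square-and-multiply: Theta(log n) iterations
        if n % 2 == 1:
            result = mat_mult(result, base)
        base = mat_mult(base, base)
        n //= 2
    return result[0][1]
```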

### Important recurrence types

Decrease-by-one recurrences
- A decrease-by-one algorithm solves a problem by exploiting a relationship between a given instance of size n and a smaller instance of size n − 1.
- Example: n!
- The recurrence equation for investigating the time efficiency of such algorithms typically has the form T(n) = T(n−1) + f(n)

Decrease-by-a-constant-factor recurrences
- A decrease-by-a-constant-factor algorithm solves a problem by dividing its given instance of size n into several smaller instances of size n/b, solving each of them recursively, and then, if necessary, combining the solutions to the smaller instances into a solution to the given instance.
- Example: binary search
- The recurrence equation for investigating the time efficiency of such algorithms typically has the form T(n) = aT(n/b) + f(n)

### Decrease-by-one recurrences

One (constant-cost) operation reduces the problem size by one:

T(n) = T(n−1) + c, T(1) = d
Solution: T(n) = (n − 1)c + d  (linear)

A pass through the input reduces the problem size by one:

T(n) = T(n−1) + c·n, T(1) = d
Solution: T(n) = [n(n+1)/2 − 1]c + d  (quadratic)
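Both closed forms can be spot-checked by evaluating the recurrences directly (illustrative sketch; the function names are ad hoc):

```python
def t_constant(n, c, d):
    """T(n) = T(n - 1) + c, T(1) = d; closed form (n - 1) * c + d."""
    return d if n == 1 else t_constant(n - 1, c, d) + c

def t_linear(n, c, d):
    """T(n) = T(n - 1) + c * n, T(1) = d; closed form (n(n+1)/2 - 1) * c + d."""
    return d if n == 1 else t_linear(n - 1, c, d) + c * n
```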

### Decrease-by-a-constant-factor recurrences: the Master Theorem

T(n) = aT(n/b) + f(n), where f(n) ∈ Θ(nᵏ), k ≥ 0:

1. a < bᵏ: T(n) ∈ Θ(nᵏ)
2. a = bᵏ: T(n) ∈ Θ(nᵏ log n)
3. a > bᵏ: T(n) ∈ Θ(n^{log_b a})

Examples:
- T(n) = T(n/2) + 1: Θ(log n)
- T(n) = 2T(n/2) + n: Θ(n log n)
- T(n) = 3T(n/2) + n: Θ(n^{log₂ 3})
- T(n) = T(n/2) + n: Θ(n)
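The three cases are mechanical enough to encode. This sketch (a hypothetical helper, not from the slides) returns the exponent of n in the answer and whether a log factor is present:

```python
import math

def master_case(a, b, k):
    """Classify T(n) = a*T(n/b) + Theta(n^k): (exponent of n, log factor?)."""
    if a < b ** k:
        return (k, False)               # Theta(n^k)
    if a == b ** k:
        return (k, True)                # Theta(n^k log n)
    return (math.log(a, b), False)      # Theta(n^(log_b a))
```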
