CS 340, Chapter 2: Algorithm Analysis (presentation transcript)
Slide 2: Time Complexity

The best-, worst-, and average-case complexities of a given algorithm are numerical functions of the size of the instances. It is difficult to work with these functions exactly because they are often very complicated, with many little up-and-down bumps. Thus it is usually cleaner and easier to talk about upper and lower bounds of such functions. This is where Big-O notation comes into the picture.

Slide 3: Time Complexity

(Figure: upper and lower bounds smooth out the behavior of complex functions.)

Slide 4: Time Complexity, Big-O

T(n) = O(f(n)) means that c·f(n) is an upper bound on T(n): there exists some constant c such that T(n) ≤ c·f(n) for all large enough n.

Example: n³ + 3n² + 6n + 5 is O(n³). (Use c = 15 and n₀ = 1.)
Example: n² + n·log n is O(n²). (Use c = 2 and n₀ = 1.)

Slide 5: Demonstrating The Big-O Concept

Each of the algorithms below has O(n³) time complexity. (In fact, the execution time for Algorithm A is n³ + n² + n, and the execution time for Algorithm B is n³ + 101n² + n.)

    Input Size n | Algorithm A               | Algorithm B
    10           | 1,110                     | 11,110
    100          | 1,010,100                 | 2,010,100
    1,000        | 1,001,001,000             | 1,101,001,000
    10,000       | 1,000,100,010,000         | 1,010,100,010,000
    100,000      | 1,000,010,000,100,000     | 1,001,010,000,100,000
    1,000,000    | 1,000,001,000,001,000,000 | 1,000,101,000,001,000,000

Slide 6: A Second Big-O Demonstration

Each of the algorithms below has O(n²) time complexity. (In fact, the execution time for Algorithm C is n² + 2n + 3, and the execution time for Algorithm D is n² + 1002n + 3.)

    Input Size n | Algorithm C       | Algorithm D
    10           | 123               | 10,123
    100          | 10,203            | 110,203
    1,000        | 1,002,003         | 2,002,003
    10,000       | 100,020,003       | 110,020,003
    100,000      | 10,000,200,003    | 10,100,200,003
    1,000,000    | 1,000,002,000,003 | 1,001,002,000,003

Slide 7: One More Big-O Demonstration

Each of the algorithms below has O(n·log n) time complexity. (In fact, the execution time for Algorithm E is n·log n + 5n, and the execution time for Algorithm F is n·log n + 105n. Note that the linear term for Algorithm F will dominate until n reaches 2¹⁰⁵.)

    Input Size n | Algorithm E | Algorithm F
    10           | 83          | 1,083
    100          | 1,164       | 11,164
    1,000        | 14,966      | 114,966
    10,000       | 182,877     | 1,182,877
    100,000      | 2,160,964   | 12,160,964
    1,000,000    | 24,931,569  | 124,931,569

Slide 8: Big-O Represents An Upper Bound

If T(n) is O(f(n)), then f(n) is basically a cap on how badly T(n) will behave when n gets big.

(Figure: six curves labeled g(n), r(n), b(n), p(n), y(n), and v(n).)

Is g(n) O(r(n))? Is r(n) O(g(n))?
Is v(n) O(y(n))? Is y(n) O(v(n))?
Is b(n) O(p(n))? Is p(n) O(b(n))?

Slide 9: Time Complexity Terminology, Big-Omega

Function T(n) is said to be Ω(g(n)) if there are positive constants c and n₀ such that T(n) ≥ c·g(n) for all n ≥ n₀ (i.e., T(n) is ultimately bounded below by c·g(n)).

Example: n³ + 3n² + 6n + 5 is Ω(n³). (Use c = 1 and n₀ = 1.)
Example: n² + n·log n is Ω(n²). (Use c = 1 and n₀ = 1.)

(Figure: r(n) is not Ω(g(n)), since for every positive constant c, c·g(n) ultimately gets bigger than r(n); g(n) is Ω(r(n)), since g(n) exceeds 1·r(n) for all n-values past some point nᵣ.)

Slide 10: Time Complexity Terminology, Big-Theta

Function T(n) is said to be Θ(h(n)) if T(n) is both O(h(n)) and Ω(h(n)).

Example: n³ + 3n² + 6n + 5 is Θ(n³).
Example: n² + n·log n is Θ(n²).

(Figure: r(n) is Θ(g(n)), since r(n) is squeezed between 1·g(n) and 2·g(n) once n exceeds n₀; g(n) is Θ(r(n)), since g(n) is squeezed between ½·r(n) and 1·r(n) once n exceeds n₀.)

Slide 11: Time Complexity Terminology, Little-o

Function T(n) is said to be o(p(n)) if T(n) is O(p(n)) but not Θ(p(n)).

Example: n³ + 3n² + 6n + 5 is O(n⁴). (Use c = 15 and n₀ = 1.) However, n³ + 3n² + 6n + 5 is not Θ(n⁴), and hence is o(n⁴).

Proof (by contradiction): Assume that there are positive constants c and n₀ such that n³ + 3n² + 6n + 5 ≥ c·n⁴ for all n ≥ n₀. Dividing both sides by n⁴ yields (1/n) + (3/n²) + (6/n³) + (5/n⁴) ≥ c for all n ≥ n₀. Since lim as n→∞ of ((1/n) + (3/n²) + (6/n³) + (5/n⁴)) = 0, we must conclude that 0 ≥ c, which contradicts the fact that c must be a positive constant.

Slide 12: Computational Model For Algorithm Analysis

To formally analyze the performance of algorithms, we will use a computational model with a couple of simplifying assumptions:

- Each simple instruction (assignment, comparison, addition, multiplication, memory access, etc.) is assumed to execute in a single time unit.
- Memory is assumed to be limitless, so there is always room to store whatever data is needed.

The size of the input, n, will normally be used as our main variable, and we'll primarily be interested in "worst case" scenarios.

Slide 13: General Rules For Running-Time Calculation

Rule One: Loops. The running time of a loop is at most the running time of the statements inside the loop, multiplied by the number of iterations.

Example:

    for (i = 0; i < n; i++)              // n iterations
        A[i] = (1-t)*X[i] + t*Y[i];      // 12 time units per iteration

(Retrieving X[i] requires one addition and one memory access, as does retrieving Y[i]; the calculation involves a subtraction, two multiplications, and an addition; assigning A[i] requires one addition and one memory access; and each loop iteration requires a comparison and either an assignment or an increment. That totals twelve primitive operations per iteration.)

Thus, the total running time is 12n time units, i.e., this part of the program is O(n).

Slide 14: Rule Two, Nested Loops

The running time of a nested loop is at most the running time of the statements inside the innermost loop, multiplied by the product of the numbers of iterations of all of the loops.

Example:

    for (i = 0; i < n; i++)              // n iterations, 2 ops each
        for (j = 0; j < n; j++)          // n iterations, 2 ops each
            C[i][j] = j*A[i] + i*B[j];   // 10 time units per iteration

(2 for retrieving A[i], 2 for retrieving B[j], 3 for the right-hand-side arithmetic, 3 for assigning C[i][j].) Total running time: ((10+2)n + 2)n = 12n² + 2n time units, which is O(n²).

More complex example (ignoring for-loop overhead):

    for (i = 0; i < n; i++)                       // n iterations
        for (j = i; j < n; j++)                   // n-i iterations
            C[j][i] = C[i][j] = j*A[i] + i*B[j];  // 13 time units per iteration

Total running time: Σ_{i=0..n-1} ( Σ_{j=i..n-1} 13 ) = Σ_{i=0..n-1} 13(n−i) = 13( Σ_{i=0..n-1} n − Σ_{i=0..n-1} i ) = 13(n² − ½n(n−1)) = 6.5n² + 6.5n time units, which is also O(n²).

Slide 15: Rule Three, Consecutive Statements

The running time of a sequence of statements is merely the sum of the running times of the individual statements.

Example:

    for (i = 0; i < n; i++)              // 22n time units
    {                                    // for this
        A[i] = (1-t)*X[i] + t*Y[i];      // entire
        B[i] = (1-s)*X[i] + s*Y[i];      // loop
    }
    for (i = 0; i < n; i++)              // (12n+2)n time units
        for (j = 0; j < n; j++)          // for this
            C[i][j] = j*A[i] + i*B[j];   // nested loop

Total running time: 12n² + 24n time units, i.e., this code is O(n²).

Slide 16: Rule Four, Conditional Statements

The running time of an if-else statement is at most the running time of the conditional test, added to the maximum of the running times of the if and else blocks of statements.

Example:

    if (amt > cost + tax)                  // 2 time units
    {
        count = 0;                         // 1 time unit
        while ((count < n) &&
               (amt > cost + tax))         // 4 TUs per iteration;
        {                                  // at most n iterations
            amt -= (cost + tax);           // 3 time units
            count++;                       // 2 time units
        }
        cout << "CAPACITY:" << count;      // 2 time units
    }
    else
        cout << "INSUFFICIENT FUNDS";      // 1 time unit

Total running time: 2 + max(1 + (4 + 3 + 2)n + 2, 1) = 9n + 5 time units, i.e., this code is O(n).

Slide 17: Complete Analysis Of A Binary Search Function

    int binsrch(const etype A[], const etype x, const int n)
    {
        int low = 0, high = n-1;         // 3 time units
        int middle;                      // 0 time units
        while (low <= high)              // 1 time unit
        {
            middle = (low + high)/2;     // 3 time units
            if (A[middle] < x)           // 2 TU
                low = middle + 1;        // 2 TU
            else if (A[middle] > x)      // 2 TU  <-- worst case
                high = middle - 1;       // 2 TU  <-- worst case
            else
                return middle;           // 1 TU
        }
        return -1;                       // 1 time unit if the search is unsuccessful
    }

In the worst case, the loop will keep dividing the distance between the low and high indices in half until they are equal, iterating at most log n times. Thus, the total running time is 10·log n + 4 time units, which is O(log n).

Slide 18: Analysis Of Another Function, SuperFreq

    etype SuperFreq(const etype A[], const int n)
    {
        etype bestElement = A[0];        // 3 time units
        int bestFreq = 0;                // 1 time unit
        int currFreq;                    // 0 time units
        for (i = 0; i < n; i++)          // n iterations; 2 TUs each
        {
            currFreq = 0;                // 1 time unit
            for (j = i; j < n; j++)      // n-i iterations; 2 TUs each
                if (A[i] == A[j])        // 3 time units
                    currFreq++;          // 2 time units
            if (currFreq > bestFreq)     // 1 time unit
            {
                bestFreq = currFreq;     // 1 time unit
                bestElement = A[i];      // 3 time units
            }
        }
        return bestElement;              // 1 time unit
    }

Note that the function is obviously O(n²) due to its familiar nested-loop structure. Specifically, its worst-case running time is ½(7n² + 23n + 10).

Slide 19: What About Recursion?

    humongInt pow(const humongInt &val, const humongInt &n)
    {
        if (n == 0)
            return humongInt(1);
        if (n == 1)
            return val;
        if (n % 2 == 0)
            return pow(val*val, n/2);
        return pow(val*val, n/2) * val;
    }

The worst-case running time would require all three conditions to be checked and to fail (taking 4 time units). The last return statement requires 3 time units each time it's executed, which happens log n times (since it halves n with each execution, until n reaches a value of 1). When the parameterized n-value finally reaches 1, two last operations are performed. Thus, the worst-case running time is 7·log n + 2.

Slide 20: Recurrence Relations To Evaluate Recursion

    int powerOf2(const int &n)
    {
        if (n == 0)
            return 1;
        return powerOf2(n-1) + powerOf2(n-1);
    }

Assume that there is a function T(n) such that it takes T(k) time to execute powerOf2(k). Examining the code allows us to conclude the following:

    T(0) = 2
    T(k) = 5 + 2T(k-1) for all k > 0

The second fact tells us that:

    T(n) = 5 + 2T(n-1)
         = 5 + 2(5 + 2T(n-2))
         = 5 + 2(5 + 2(5 + 2T(n-3)))
         = …
         = 5(1 + 2 + 2² + 2³ + … + 2ⁿ⁻¹) + 2ⁿT(0)
         = 5(2ⁿ − 1) + 2ⁿ(2)
         = 7(2ⁿ) − 5, which is O(2ⁿ).

Slide 21: Another Recurrence Relation Example

    int alternatePowerOf2(const int &n)
    {
        if (n == 0)
            return 1;
        return 2*alternatePowerOf2(n-1);
    }

Assume that there is a function T(n) such that it takes T(k) time to execute alternatePowerOf2(k). Examining the code allows us to conclude the following:

    T(0) = 2
    T(k) = 4 + T(k-1) for all k > 0

The second fact tells us that:

    T(n) = 4 + T(n-1)
         = 4 + (4 + T(n-2))
         = 4 + (4 + (4 + T(n-3)))
         = …
         = 4n + T(0)
         = 4n + 2, which is O(n).

