
Algorithm Analysis: Running Time, Big O, and Omega (Ω)


1 Algorithm Analysis: Running Time, Big O, and Omega (Ω)

2 Introduction
An algorithm is a step-by-step procedure for accomplishing a task. In order to learn about an algorithm, we need to analyze it. This means we study the specification of the algorithm and draw conclusions about how the implementation of that algorithm (the program) will perform in general.

3 The issues that should be considered in analyzing an algorithm are:
- the running time of the program as a function of its inputs
- the total or maximum memory space needed for program data
- the total size of the program code
- whether the program correctly computes the desired result
- the complexity of the program: for example, how easy it is to read, understand, and modify
- the robustness of the program: for example, how well it deals with unexpected or erroneous inputs

4 In this course, we consider the running time of the algorithm. The main factors that affect the running time are the algorithm itself, the input data, and the computer system used to run the program. The performance of a computer is determined by:
- the hardware
- the programming language used
- the operating system
To calculate the running time of a general C++ program, we first need to define a few rules. In our rules, we assume that the effects of the hardware and system software are fixed, so they can be absorbed into constant costs that do not depend on the particular C++ program.

5 Rule 1: The time required to fetch an integer from memory is a constant, t(fetch), and the time required to store an integer in memory is also a constant, t(store).
For example, the running time of x = y is t(fetch) + t(store), because we need to fetch y from memory and store it into x. Similarly, the running time of x = 1 is also t(fetch) + t(store), because typically a constant is stored in memory before it is fetched.

6 Rule 2: The times required to perform elementary operations on integers, such as addition t(+), subtraction t(-), multiplication t(*), division t(/), and comparison t(cmp), are all constants.
For example, the running time of y = x + 1 is 2t(fetch) + t(+) + t(store), because we need to fetch x and 1 (2t(fetch)), add them together (t(+)), and place the result into y (t(store)).
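To make the bookkeeping concrete, here is a minimal C++ sketch (my illustration, not from the slides; the counter names and temporaries t1, t2, t3 are invented) that decomposes y = x + 1 into the model's primitive steps and tallies them:

#include <iostream>

int main() {
    int fetches = 0, stores = 0, adds = 0;
    int x = 5, y = 0;

    // y = x + 1, decomposed into the model's primitive steps:
    int t1 = x;        ++fetches;  // fetch x
    int t2 = 1;        ++fetches;  // fetch the constant 1
    int t3 = t1 + t2;  ++adds;     // add them: t(+)
    y = t3;            ++stores;   // store the result into y

    std::cout << "y = " << y << ": "
              << fetches << " t(fetch) + " << adds << " t(+) + "
              << stores << " t(store)\n";
    return 0;
}

Running it prints 2 t(fetch) + 1 t(+) + 1 t(store), matching the count above.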

7 Rule 3: The time required to call a function is a constant, t(call), and the time required to return from a function is a constant, t(return).
Rule 4: The time required to pass an integer argument to a function or procedure is the same as the time required to store an integer in memory, t(store).

8 For example, the running time of y = f(x) is t(fetch) + 2t(store) + t(call) + t(f(x)), because we need to:
- fetch the value of x: t(fetch)
- pass x to the function, storing it into the parameter: t(store)
- call the function f(x): t(call)
- run the function: t(f(x))
- store the returned result into y: t(store)

9 Rule 5: The time required for the address calculation implied by an array subscripting operation like a[i] is a constant, t([ ]). This time does not include the time to compute the subscript expression, nor does it include the time to access (fetch or store) the array element.
For example, the running time of y = a[i] is 3t(fetch) + t([ ]) + t(store), because we need to:
- fetch the value of i: t(fetch)
- fetch the base address of a: t(fetch)
- compute the address of a[i]: t([ ])
- fetch the value of a[i]: t(fetch)
- store that value into y: t(store)

10 Rule 6: The time required to allocate a fixed amount of storage from the heap using operator new is a constant, t(new). This time does not include any time required for initialization of the storage (calling a constructor). Similarly, the time required to return a fixed amount of storage to the heap using operator delete is a constant, t(delete). This time does not include any time spent cleaning up the storage before it is returned to the heap (calling a destructor).
For example, the running time of int* ptr = new int; is t(new) + t(store), because we need to allocate the memory (t(new)) and store its address into ptr (t(store)).
Similarly, the running time of delete ptr; is t(fetch) + t(delete), because we need to fetch the address from ptr (t(fetch)) and release the storage at that location (t(delete)).

11 Consider the following function:

1. int Sum (int n)
2. {
3.     int result = 0;
4.     for (int i = 1; i <= n; ++i)
5.         result += i;
6.     return result;
7. }

Statement        | Time
3  result = 0    | t(fetch) + t(store)
4a i = 1         | t(fetch) + t(store)
4b i <= n        | (2t(fetch) + t(cmp)) * (n+1)
4c ++i           | (2t(fetch) + t(+) + t(store)) * n
5  result += i   | (2t(fetch) + t(+) + t(store)) * n
6  return result | t(fetch) + t(return)

Total: [6t(fetch) + 2t(store) + t(cmp) + 2t(+)] * n + [5t(fetch) + 2t(store) + t(cmp) + t(return)]
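As a sanity check, the loop can be instrumented to count operations while it runs. The sketch below (my addition; the global counter ops is invented) charges one unit per t(.) constant, so the total above becomes 11n + 9, and verifies that count for several n:

#include <cassert>
#include <iostream>

long ops = 0;  // unit-cost operation counter

int Sum (int n) {
    ops += 2;            // 3:  result = 0 (fetch + store)
    int result = 0;
    ops += 2;            // 4a: i = 1 (fetch + store)
    int i = 1;
    for (;;) {
        ops += 3;        // 4b: i <= n (2 fetches + compare)
        if (!(i <= n)) break;
        ops += 4;        // 5:  result += i (2 fetches + add + store)
        result += i;
        ops += 4;        // 4c: ++i (2 fetches + add + store)
        ++i;
    }
    ops += 2;            // 6:  return result (fetch + return)
    return result;
}

int main() {
    for (int n : {0, 1, 10, 100}) {
        ops = 0;
        Sum(n);
        assert(ops == 11L * n + 9);
    }
    std::cout << "operation counts match 11n + 9\n";
    return 0;
}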

12 Consider the following function:

1. int func (int a[ ], int n, int x)
2. {
3.     int result = a[n];
4.     for (int i = n-1; i >= 0; --i)
5.         result = result * x + a[i];
6.     return result;
7. }

Statement                   | Time
3  result = a[n]            | 3t(fetch) + t([ ]) + t(store)
4a i = n-1                  | 2t(fetch) + t(-) + t(store)
4b i >= 0                   | (2t(fetch) + t(cmp)) * (n+1)
4c --i                      | (2t(fetch) + t(-) + t(store)) * n
5  result = result*x + a[i] | (5t(fetch) + t([ ]) + t(+) + t(*) + t(store)) * n
6  return result            | t(fetch) + t(return)

Total: [9t(fetch) + 2t(store) + t(cmp) + t([ ]) + t(+) + t(*) + t(-)] * n + [8t(fetch) + 2t(store) + t([ ]) + t(-) + t(cmp) + t(return)]
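Note that with every constant cost set to 1, this total simplifies to 16n + 14, the same figure the simple time model gives for this function on slide 29.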

13 Using constant times such as t(fetch), t(store), t(delete), t(new), t(+), etc. makes our running-time estimates precise. However, to make life simpler, we can give every such constant the same unit cost, 1. For example, the running time of y = x + 1 then becomes 4: two fetches, one addition, and one store, each costing one unit.
For a loop there are two cases:
- If the number of iterations is a constant known in advance, the loop's running time is also a constant.
- If the number of iterations depends on the input, the running time is proportional to n, the number of iterations.
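The two loop cases can be illustrated with a short sketch (my example; the function names are invented):

// A loop with a fixed iteration count takes constant time.
int tenfold (int x) {
    int s = 0;
    for (int i = 0; i < 10; ++i)   // always exactly 10 iterations
        s += x;
    return s;
}

// A loop whose iteration count depends on the input takes time
// proportional to n.
int nfold (int x, int n) {
    int s = 0;
    for (int i = 0; i < n; ++i)    // n iterations
        s += x;
    return s;
}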

14 Consider the following function:

1.  int Geometric (int x, int n)
2.  {
3.      int sum = 0;
4.      for (int i = 0; i <= n; ++i)
5.      {
6.          int prod = 1;
7.          for (int j = 0; j < i; ++j)
8.              prod = prod * x;
9.          sum = sum + prod;
10.     }
11.     return sum;
12. }

Using unit costs, the time charged to each statement is:

Statement            | Time
3   sum = 0          | 2
4a  i = 0            | 2
4b  i <= n           | 3(n+2)
4c  ++i              | 4(n+1)
6   prod = 1         | 2(n+1)
7a  j = 0            | 2(n+1)
7b  j < i            | sum over i = 0..n of 3(i+1)
7c  ++j              | sum over i = 0..n of 4i
8   prod = prod * x  | sum over i = 0..n of 4i
9   sum = sum + prod | 4(n+1)
11  return sum       | 2

Total: (11/2)n^2 + (47/2)n + 27
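Summing the Time column (a step the slide leaves implicit), using the identity that the sum of i from 0 to n equals n(n+1)/2:

$$
\begin{aligned}
T(n) &= 6 + 3(n+2) + (4+2+2+4)(n+1) + \sum_{i=0}^{n} 3(i+1) + 2\sum_{i=0}^{n} 4i \\
     &= 15n + 24 + \frac{3(n+1)(n+2)}{2} + 4n(n+1) \\
     &= \frac{11}{2}n^2 + \frac{47}{2}n + 27 .
\end{aligned}
$$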

15 Consider the following recursive function:

1. int Power (int x, int n)
2. {
3.     if (n == 0)
4.         return 1;
5.     else if (n % 2 == 0)  // n is even
6.         return Power (x*x, n/2);
7.     else                  // n is odd
8.         return x * Power (x*x, n/2);
9. }

Using unit costs, the time charged to each statement in each case is:

Statement | n = 0 | n > 0, n even | n > 0, n odd
3         | 3     | 3             | 3
4         | 2     | -             | -
5         | -     | 5             | 5
6         | -     | 10 + T(n/2)   | -
8         | -     | -             | 12 + T(n/2)
Total     | 5     | 18 + T(n/2)   | 20 + T(n/2)

16 Putting the three cases together gives the recurrence:

T(n) = 5             for n = 0
T(n) = 18 + T(n/2)   for n > 0 and n even
T(n) = 20 + T(n/2)   for n > 0 and n odd

Suppose n = 2^k for some k > 0. Since 2^k is an even number, we get:

T(2^k) = 18 + T(2^(k-1))
       = 18 + 18 + T(2^(k-2))
       = 18 + 18 + 18 + T(2^(k-3))
       = ...
       = 18k + T(2^(k-k))
       = 18k + T(2^0)
       = 18k + T(1)

17 Since 1 is an odd number, the odd case of the recurrence gives:

T(1) = 20 + T(0) = 20 + 5 = 25

Therefore, T(2^k) = 18k + 25. If n = 2^k, then log n = log 2^k = k, so k = log n. Therefore,

T(n) = 18 log n + 25
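A quick sketch (my addition) that evaluates the recurrence directly and confirms the closed form:

#include <cassert>
#include <iostream>

// T(n) exactly as defined by the recurrence on slide 16.
long T (long n) {
    if (n == 0) return 5;
    if (n % 2 == 0) return 18 + T(n / 2);
    return 20 + T(n / 2);
}

int main() {
    long n = 1;
    for (long k = 0; k <= 20; ++k, n *= 2)
        assert(T(n) == 18 * k + 25);
    std::cout << "T(2^k) = 18k + 25 holds for k = 0..20\n";
    return 0;
}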

18 Asymptotic Notation
Suppose the running times of two algorithms A and B are T_A(n) and T_B(n), respectively, where n is the size of the problem. How can we determine whether T_A(n) is better than T_B(n)?
One way is to compare them for some particular size known ahead of time, say n = n_0. Then we may say that algorithm A performs better than algorithm B for n = n_0. But this is a special case; what about n = n_1, or n = n_2? Is A better than B in those cases too?
Unfortunately, there is no easy answer, and we cannot expect the size n to be known ahead of time. But we may be able to say that, under certain conditions, T_A(n) is better than T_B(n) for all n >= n_1.

19 To understand the running times of algorithms, we need some definitions.
Definition: Consider a function f(n) that is non-negative for all integers n >= 0. We say that "f(n) is big oh of g(n)", written f(n) is O(g(n)), if there exist an integer n_0 and a constant c > 0 such that for all integers n >= n_0, f(n) <= c g(n).
Example: Show that f(n) = 8n + 128 is O(n^2).
We need 8n + 128 <= c n^2. Let us set c = 1:
0 <= n^2 - 8n - 128
0 <= (n - 16)(n + 8)
The right-hand side is non-negative for all n >= 16. Thus, for c = 1 and n_0 = 16, we have f(n) <= c n^2 for all n >= n_0, so f(n) is O(n^2).
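At the boundary, 8(16) + 128 = 256 = 16^2, and the bound only gets looser from there; a tiny empirical check (my addition):

#include <cassert>

int main() {
    // For c = 1 and n >= n_0 = 16, f(n) = 8n + 128 never exceeds c g(n) = n^2.
    for (long n = 16; n <= 1000000; ++n)
        assert(8 * n + 128 <= n * n);
    return 0;
}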

20 [Figure: plot of f(n) = 8n + 128 against g_1(n) = 4n^2, g_2(n) = 2n^2, and g_3(n) = n^2 for n up to about 20; each c n^2 curve eventually dominates f(n).]

21 Theorem: If f_1(n) is O(g_1(n)) and f_2(n) is O(g_2(n)), then f_1(n) + f_2(n) is O(max(g_1(n), g_2(n))).
Proof: If f_1(n) is O(g_1(n)), then f_1(n) <= c_1 g_1(n) for all n >= n_1. If f_2(n) is O(g_2(n)), then f_2(n) <= c_2 g_2(n) for all n >= n_2.
Let n_0 = max(n_1, n_2) and c_0 = 2 max(c_1, c_2), and consider the sum f_1(n) + f_2(n) for some n >= n_0:
f_1(n) + f_2(n) <= c_1 g_1(n) + c_2 g_2(n)
                <= c_0 (g_1(n) + g_2(n)) / 2
                <= c_0 max(g_1(n), g_2(n))
Therefore, f_1(n) + f_2(n) is O(max(g_1(n), g_2(n))).
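For example, if one part of a program takes f_1(n) = 8n + 128 steps, which is O(n), and another takes f_2(n) = 5n^2 - 64n + 256 steps, which is O(n^2), the theorem gives f_1(n) + f_2(n) = O(max(n, n^2)) = O(n^2).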

22 Theorem: If f_1(n) is O(g_1(n)) and f_2(n) is O(g_2(n)), then f_1(n) * f_2(n) is O(g_1(n) * g_2(n)).
Proof: If f_1(n) is O(g_1(n)), then f_1(n) <= c_1 g_1(n) for all n >= n_1. If f_2(n) is O(g_2(n)), then f_2(n) <= c_2 g_2(n) for all n >= n_2.
Let n_0 = max(n_1, n_2) and c_0 = c_1 c_2, and consider the product f_1(n) * f_2(n) for some n >= n_0:
f_1(n) * f_2(n) <= c_1 g_1(n) * c_2 g_2(n) = c_0 (g_1(n) * g_2(n))
Therefore, f_1(n) * f_2(n) is O(g_1(n) * g_2(n)).
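For example, a loop body costing O(n) that is executed O(n) times costs O(n * n) = O(n^2) in total; this is exactly the pattern in the PrefixSums example on a later slide.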

23 Theorem: If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).
Proof: If f(n) is O(g(n)), then f(n) <= c_1 g(n) for all n >= n_1. If g(n) is O(h(n)), then g(n) <= c_2 h(n) for all n >= n_2.
Let n_0 = max(n_1, n_2) and c_0 = c_1 c_2. Then for all n >= n_0:
f(n) <= c_1 g(n) <= c_1 c_2 h(n) = c_0 h(n)
Therefore, f(n) is O(h(n)).
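For example, 8n + 128 is O(n), and n is O(n^2); by transitivity, 8n + 128 is O(n^2), which agrees with the direct proof given earlier.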

24 The names of common big oh expressions:

Expression   | Name
O(1)         | constant
O(log n)     | logarithmic
O(log^2 n)   | log squared
O(n)         | linear
O(n log n)   | n log n
O(n^2)       | quadratic
O(n^3)       | cubic
O(2^n)       | exponential

25 Conventions for writing big oh expressions
Certain conventions have evolved for how big oh expressions are normally written:
First, it is common practice to drop all but the most significant term. Thus, instead of O(n^2 + n log n + n), we simply write O(n^2).
Second, it is common practice to drop constant coefficients. Thus, instead of O(3n^2), we write O(n^2). As a special case of this rule, if the function is a constant, then instead of, say, O(1024), we simply write O(1).

26 Asymptotic Lower Bound (Ω)
Definition: Consider a function f(n) that is non-negative for all integers n >= 0. We say that "f(n) is omega of g(n)", written f(n) is Ω(g(n)), if there exist an integer n_0 and a constant c > 0 such that for all integers n >= n_0, f(n) >= c g(n).
Example: Show that f(n) = 5n^2 - 64n + 256 is Ω(n^2).
We need 5n^2 - 64n + 256 >= c n^2. Let c = 1:
5n^2 - 64n + 256 >= n^2
4n^2 - 64n + 256 >= 0
4(n - 8)^2 >= 0
This holds for every n, so with c = 1 and n_0 = 8, we have f(n) >= c n^2 for all n >= n_0; hence f(n) is Ω(n^2).

27 [Figure: plot of f(n) = 5n^2 - 64n + 256 together with n^2 and 2n^2 for n up to about 25; f(n) stays above 1 * n^2.]

28 Other definitions
Definition: Consider a function f(n) that is non-negative for all integers n >= 0. We say that "f(n) is theta of g(n)", written f(n) is Θ(g(n)), if and only if f(n) is O(g(n)) and f(n) is Ω(g(n)).
Definition: Consider a function f(n) that is non-negative for all integers n >= 0. We say that "f(n) is little o of g(n)", written f(n) is o(g(n)), if and only if f(n) is O(g(n)) but f(n) is not Ω(g(n)).
Now let's reconsider some of the previous examples in terms of big oh notation:

29 Consider again the function from slide 12:

1. int func (int a[ ], int n, int x)
2. {
3.     int result = a[n];
4.     for (int i = n-1; i >= 0; --i)
5.         result = result * x + a[i];
6.     return result;
7. }

Statement                   | Simple time model | Big O
3  result = a[n]            | 5                 | O(1)
4a i = n-1                  | 4                 | O(1)
4b i >= 0                   | 3n + 3            | O(n)
4c --i                      | 4n                | O(n)
5  result = result*x + a[i] | 9n                | O(n)
6  return result            | 2                 | O(1)
Total                       | 16n + 14          | O(n)

The total running time is O(16n + 14) = O(max(16n, 14)) = O(16n) = O(n).

30 Consider the following function, which replaces each element of a with the sum of the elements up to and including it:

1.  void PrefixSums (int a[ ], int n)
2.  {
3.      for (int j = n-1; j >= 0; --j)
4.      {
5.          int sum = 0;
6.          for (int i = 0; i <= j; ++i)
7.              sum = sum + a[i];
8.          a[j] = sum;
9.      }
10. }

Statement           | Big O
3a j = n-1          | O(1)
3b j >= 0           | O(1)*O(n)
3c --j              | O(1)*O(n)
5  sum = 0          | O(1)*O(n)
6a i = 0            | O(1)*O(n)
6b i <= j           | O(1)*O(n^2)
6c ++i              | O(1)*O(n^2)
7  sum = sum + a[i] | O(1)*O(n^2)
8  a[j] = sum       | O(1)*O(n)
Total               | O(n^2)
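To see where the O(n^2) comes from: for a given j, the inner loop body executes j + 1 times, so over the whole outer loop it executes sum over j = 0..n-1 of (j+1) = n(n+1)/2 times, which is O(n^2) once constant factors and lower-order terms are dropped.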

