1 Discrete Maths. Objective: to describe the Big-Oh notation for estimating the running time of programs. 242-213, Semester 2, 2013-2014. 10. Running Time of Programs

2 Overview
1. Running Time
2. Big-Oh and Approximate Running Time
3. Big-Oh for Programs
4. Analyzing Function Calls
5. Analyzing Recursive Functions
6. Further Information

3 1. Running Time
What is the running time of this program?

void main() {
    int i, n;
    scanf("%d", &n);
    for (i = 0; i < n; i++)
        printf("%d\n", i);
}

4 There is no single answer! The running time depends on the size of the n value. Instead of a time answer in seconds, we want a time answer which is related to the size of the input.

5 For example: programTime(n) = constant * n. This means that as n gets bigger, so does the program time: the running time is linearly related to the input size n. (The slide graphs running time against n as the straight line constant * n.)

6 Running Time Theory. A program/algorithm has a running time T(n), where n is some measure of the input size and T(n) is the largest amount of time the program takes on any input of size n. Time units are left unspecified.

7 A typical result is: T(n) = c*n, where c is some constant (but often we just ignore c). This means the program has linear running time. T(n) values for different programs can be used to compare their relative running times: selection sort has T(n) = n^2, merge sort has T(n) = n log n, so merge sort is better for larger n sizes.

8 1.1. Different Kinds of Running Time. Usually T(n) is the worst-case running time: the maximum running time on any input of size n. Tavg(n) is the average running time of the program over all inputs of size n: more realistic, but very hard to calculate; not considered by us.

9 1.2. T(n) Example
Loop fragment for finding the index of the smallest value in the A[] array of size n:

(2)  small = 0;
(3)  for (j = 1; j < n; j++)
(4)      if (A[j] < A[small])
(5)          small = j;

Count each assignment and test as 1 time unit.

10 Calculation. The for loop executes n-1 times; each iteration carries out (in the worst case) 4 ops: the j < n test, the if test, the small assignment, and the j increment. Total loop time = 4(n-1), plus 3 ops at the start and end: the small assignment (line 2), the initialization of j (line 3), and the final j < n test. Total time T(n) = 4(n-1) + 3 = 4n - 1, so the running time is linear in the size of the array.

11 1.3. Comparing Different T()s. Consider Ta(n) = 100n and Tb(n) = 2n^2, plotted as T(n) value against input size n. If the input size is < 50, program B is faster. But for large n, which is more common in real code, program B gets worse and worse.
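(A quick numeric check, not from the slides: this small C sketch evaluates both formulas on either side of the crossover at n = 50; the names ta and tb are just illustrative.)

#include <stdio.h>

/* Compare Ta(n) = 100n with Tb(n) = 2n^2 around the crossover:
   Tb is smaller below n = 50 and larger above it. */
int main(void) {
    for (int n = 10; n <= 100; n += 10) {
        long ta = 100L * n;       /* Ta(n) = 100n  */
        long tb = 2L * n * n;     /* Tb(n) = 2n^2  */
        printf("n=%3d  Ta=%6ld  Tb=%6ld  %s\n",
               n, ta, tb, tb < ta ? "B faster" : "A faster");
    }
    return 0;
}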

12 1.4. Common Growth Formulae & Names

Formula (n = input size)   Name
1                          constant
log n                      logarithmic
log log n
n                          linear
n log n
n^2                        quadratic
n^3                        cubic
n^m                        polynomial, e.g. n^10
m^n (m >= 2)               exponential, e.g. 5^n
n!                         factorial

13 1.5. Execution Times. Assume 1 instruction takes 1 microsec (10^-6 secs) to execute. How long will n instructions take?

growth        n (no. of instructions)
formula T()   3     9     50      100          1000           10^6
log n         2     3     6       7            10             20
n             3     9     50      100          1ms            1sec
n^2           9     81    2.5ms   10ms         1sec           12 days
n^3           27    729   125ms   1sec         16.7 min       31,710 yr
2^n           8     512   36 yr   4*10^16 yr   3*10^287 yr    3*10^301016 yr

If n is 50, you will wait 36 years for an answer (on the 2^n row)!
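(A rough sanity check of the 2^n row, not from the slides: this sketch converts 2^n microseconds into years; the microseconds-per-year constant is ordinary calendar arithmetic.)

#include <stdio.h>
#include <math.h>

/* 2^n instructions at 1 microsecond each, converted to years. */
int main(void) {
    const double usPerYear = 1e6 * 60 * 60 * 24 * 365.25;
    const int sizes[] = { 50, 100, 1000 };
    for (int k = 0; k < 3; k++) {
        double years = pow(2.0, sizes[k]) / usPerYear;
        printf("n = %4d : about %.1e years\n", sizes[k], years);
    }
    return 0;   /* prints about 3.6e+01, 4.0e+16, and 3.4e+287 years */
}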

14 Notes. Logarithmic running times are best. Polynomial running times are acceptable, if the power isn't too big: e.g. n^2 is ok, n^100 is terrible. Exponential times mean sloooooooow code: some size problems may take longer to finish than the lifetime of the universe!

15 1.6. Why use T(n)? T() can guide our choice of which algorithm to implement, or program to use: e.g. selection sort or merge sort? T() helps us look for better algorithms in our own code, without expensive implementation, testing, and measurement.

16 2. Big-Oh and Approximate Running Time. Big-Oh mathematical notation simplifies the process of estimating the running time of programs: it uses T(n), but ignores constant factors, which depend on compiler/machine behaviour.

17 The Big-Oh value specifies running time independent of: machine architecture (e.g. don't consider the running speed of individual machine operations); machine load/usage (e.g. time delays due to other users); compiler design effects (e.g. gcc versus Borland C).

18 Example. In the code fragment example on slide 9, we assumed that assignment and testing take 1 time unit. This means: T(n) = 4n - 1. The Big-Oh value, O(), uses the T(n) value but ignores constants (which will actually vary from machine to machine). This means: T(n) is O(n); we say "T(n) is order n".

19 More Examples

T(n) value           Big-Oh value: O()
10n^2 + 50n + 100    O(n^2)
(n+1)^2              O(n^2)
n^10                 O(2^n)   <- hard to understand
5n^3 + 1             O(n^3)

These simplifications have a mathematical reason, which is explained in section 2.2.

20 2.1. Is Big-Oh Useful? O() ignores constant factors, which means it is a more reliable measure across platforms/compilers. A Big-Oh value can also be compared with the Big-Oh values for other algorithms: i.e. linear is better than polynomial and exponential, but worse than logarithmic.

21 2.2. Definition of Big-Oh. The connection between T() and O() is: when T(n) is O( f(n) ), it means that f(n) is the most important thing in T() when n is large. More formally, for some integer n0 and constant c > 0: T(n) is O( f(n) ) if, for all integers n >= n0, T(n) <= c*f(n). n0 and c are called witnesses to the relationship "T(n) is O( f(n) )".
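(The definition can be tested numerically. A minimal sketch of mine, using the witnesses n0 = 1 and c = 160 from Example 1 below:)

#include <stdio.h>

/* Check T(n) <= c*f(n) for all n0 <= n <= 100000, with
   T(n) = 10n^2 + 50n + 100, f(n) = n^2, n0 = 1, c = 160. */
int main(void) {
    const long long n0 = 1, c = 160;
    int holds = 1;
    for (long long n = n0; n <= 100000; n++) {
        long long t = 10*n*n + 50*n + 100;   /* T(n)       */
        long long f = n*n;                   /* f(n) = n^2 */
        if (t > c*f) { holds = 0; break; }   /* witnesses fail */
    }
    printf("T(n) <= 160*n^2 for all 1 <= n <= 100000: %s\n",
           holds ? "yes" : "no");
    return 0;
}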

22 Example 1. T(n) = 10n^2 + 50n + 100, which allows that T(n) is O(n^2). Why? Witnesses: n0 = 1, c = 160. Then T(n) <= 160n^2 for all n >= 1, since 10n^2 + 50n + 100 <= 10n^2 + 50n^2 + 100n^2 = 160n^2 when n >= 1. Informally, the n^2 part is the most important thing in the T() function.

23 Example 2. T(n) = (n+1)^2, which allows that T(n) is O(n^2). Why? Witnesses: n0 = 1, c = 4. Then (n+1)^2 <= 4n^2 for all n >= 1, since n^2 + 2n + 1 <= n^2 + 2n^2 + n^2 = 4n^2 when n >= 1.

24 Example 3. T(n) = n^10, which allows that T(n) is O(2^n). Why? Witnesses: n0 = 64, c = 1. Then n^10 <= 2^n for all n >= 64: taking log2 of both sides, this needs 10*log2 n <= n, and at n = 64 we have 10*log2 64 = 10*6 = 60 <= 64 (and n grows faster than 10*log2 n from there).

25 2.4. Some Observations about O(). When choosing an O() approximation to T(), remember that: constant factors do not matter (e.g. T(n) = (n+1)^2 is O(n^2)); low-order terms do not matter (e.g. T(n) = 10n^2 + 50n + 100 is O(n^2)); there are many possible witnesses.

26 3. Big-Oh for Programs. First decide on a size measure for the data in the program. This will become the n.

Data Type   Possible Size Measure
integer     its value
string      its length
array       its length

27 3.1. Building a Big-Oh Result. The Big-Oh value for a program is built up inductively: 1) calculate the Big-Ohs for all the simple statements in the program (e.g. assignment, arithmetic); 2) then use those values to obtain the Big-Ohs for the complex statements (e.g. blocks, for loops, if-statements).

28 Simple Statements (in C). We assume that simple statements always take a constant amount of time to execute, written as O(1). Kinds of simple statements: assignment, break, continue, return, all library functions (e.g. putchar(), scanf()), arithmetic, boolean tests, array indexing.

29 Complex Statements. The Big-Oh value for a complex statement is a combination of the Big-Oh values of its component simple statements. Kinds of complex statements: blocks { ... }; conditionals: if-then-else, switch; loops: for, while, do-while.

30 3.2. Structure Trees. The easiest way to see how complex statement timings are based on simple statements (and other complex statements) is by drawing a structure tree for the program.

31 Example: binary conversion

void main() {
    int i;
(1)     scanf("%d", &i);
(2)     while (i > 0) {
(3)         putchar('0' + i%2);
(4)         i = i/2;
        }
(5)     putchar('\n');
}

32 Structure Tree for Example

block 1-5
├─ 1
├─ while 2-4
│   └─ block 3-4    (the time for this is the time for (3) + (4))
│       ├─ 3
│       └─ 4
└─ 5

33 3.3. Details for Complex Statements. Blocks: running time bound = summation of the bounds of its parts ("summation" means 'add'). The summation rule means that only the largest Big-Oh value is considered.

34 Block Calculation Graphically. A block whose parts take O(f1(n)), O(f2(n)), ..., O(fk(n)) has the bound O(f1(n) + f2(n) + ... + fk(n)). In other words: O(largest fi(n)). This is the summation rule.

35 Block Summation Rule Example. First block's time: T1(n) = O(n^2). Second block's time: T2(n) = O(n). Total running time = O(n^2 + n) = O(n^2), the largest part.
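(A concrete sketch of the rule, with placeholder loop bodies of my own:)

/* Two sequential blocks: the first is O(n^2), the second is O(n),
   so by the summation rule the whole function is O(n^2 + n) = O(n^2). */
long twoBlocks(const int A[], int n) {
    long count = 0, total = 0;
    for (int i = 0; i < n; i++)      /* first block: O(n^2) */
        for (int j = 0; j < n; j++)
            count++;
    for (int i = 0; i < n; i++)      /* second block: O(n) */
        total += A[i];
    return count + total;
}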

36 Conditionals (e.g. if statements, switches). Running time bound = the cost of the if-test + the larger of the bounds for the if- and else-parts. When the if-test is a simple statement (a boolean test), it is O(1).

37 Conditional Graphically. The test is O(1); the if-part takes O(f1(n)) and the else-part takes O(f2(n)). Altogether this is O( max(f1(n), f2(n)) + 1 ), which is the same as O( max(f1(n), f2(n)) ).

38 If Example. Code fragment:

if (x < y)     // O(1)
    foo(x);    // O(n)
else
    bar(y);    // O(n^2)

Total running time = O( max(n, n^2) + 1 ) = O(n^2 + 1) = O(n^2)

39 Loops. Running time bound is usually = the max. number of times round the loop * the time to execute the loop body once. But we must include O(1) for the increment and test each time around the loop. We must also include the initialization and final test costs (both O(1)).

40 While Graphically. The test is O(1) and the body is O(f(n)), with at most g(n) times around the loop. Altogether this is O( g(n)*(f(n)+1) + 1 ), which can be simplified to O( g(n)*f(n) ).

41 While Loop Example. Code fragment:

x = 0;
while (x < n) {    // O(1) for the test
    foo(x, n);     // O(n^2)
    x++;           // O(1)
}

Total running time of the loop = O( n*(1 + n^2 + 1) + 1 ) = O(n^3 + 2n + 1) = O(n^3)

42 For-loop Graphically. The initialization is O(1); the test is O(1), the body is O(f(n)), and the increment is O(1), with at most g(n) times around the loop. This gives O( g(n)*(f(n)+1+1) + 1 ), which can be simplified to O( g(n)*f(n) ).

43 For Loop Example. Code fragment:

for (i=0; i < n; i++)
    foo(i, n);     // O(n^2)

It helps to rewrite this as a while loop:

i = 0;             // O(1)
while (i < n) {    // O(1) for the test
    foo(i, n);     // O(n^2)
    i++;           // O(1)
}

44 Running time for the for loop: = O( 1 + n*(1 + n^2 + 1) + 1 ) = O( 2 + n^3 + 2n ) = O(n^3)

45 3.4.1. Example: nested loops

(1) for (i=0; i < n; i++)
(2)     for (j = 0; j < n; j++)
(3)         A[i][j] = 0;

Line (3) is a simple op: it takes O(1). Line (2) is a loop carried out n times: it takes O(n * 1) = O(n). Line (1) is a loop carried out n times: it takes O(n * n) = O(n^2).

46 3.4.2. Example: if statement

(1) if (A[0][0] == 0) {
(2)     for (i=0; i < n; i++)
(3)         for (j = 0; j < n; j++)
(4)             A[i][j] = 0;
    }
(5) else {
(6)     for (i=0; i < n; i++)
(7)         A[i][i] = 1;
    }

47 The if-test takes O(1); the if block takes O(n^2); the else block takes O(n). Total running time = O(1) + O( max(n^2, n) ) = O(1) + O(n^2) = O(n^2), using the summation rule.

48 3.4.3. Time for a Binary Conversion

void main() {
    int i;
(1)     scanf("%d", &i);
(2)     while (i > 0) {
(3)         putchar('0' + i%2);
(4)         i = i/2;
        }
(5)     putchar('\n');
}

49 Lines 1, 2, 3, 4, 5: each O(1). The block of 3-4 is O(1) + O(1) = O(1). The while of 2-4 loops at most (log2 i)+1 times (why? see the next slide), so its total running time = O(1 * ((log2 i)+1)) = O(log2 i). The block of 1-5 = O(1) + O(log2 i) + O(1) = O(log2 i).

50 Why (log2 i)+1? Assume i = 2^k. At the start of the 1st iteration, i = 2^k; at the start of the 2nd, i = 2^(k-1); at the start of the 3rd, i = 2^(k-2); ...; at the start of the k-th, i = 2^(k-(k-1)) = 2^1 = 2; and at the start of the (k+1)-th, i = 2^(k-k) = 2^0 = 1. The while will terminate after this iteration. Since 2^k = i, we have k = log2 i, so the number of iterations, k+1, is (log2 i)+1.
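(A small sketch of mine to confirm the count empirically; the helper countLoops() is hypothetical, and floor(log2 i)+1 is the general form of the count derived above for arbitrary i, not just powers of two.)

#include <stdio.h>
#include <math.h>

/* Count the trips around the "i = i/2" loop and compare with
   floor(log2(i)) + 1. */
static int countLoops(int i) {
    int trips = 0;
    while (i > 0) { i = i/2; trips++; }
    return trips;
}

int main(void) {
    const int tests[] = { 1, 2, 7, 8, 1000 };
    for (int k = 0; k < 5; k++) {
        int i = tests[k];
        printf("i=%4d  loop trips=%2d  floor(log2 i)+1=%2d\n",
               i, countLoops(i), (int)floor(log2(i)) + 1);
    }
    return 0;
}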

51 Using a Structure Tree

block 1-5               O(log2 i)
├─ 1                    O(1)
├─ while 2-4            O(log2 i)
│   └─ block 3-4        O(1)
│       ├─ 3
│       └─ 4
└─ 5                    O(1)

52 3.4.4. Time for a Selection Sort

void selectionSort(int A[], int n) {
    int i, j, small, temp;
(1)     for (i=0; i < n-1; i++) {
(2)         small = i;
(3)         for (j = i+1; j < n; j++)
(4)             if (A[j] < A[small])
(5)                 small = j;
(6)         temp = A[small];
(7)         A[small] = A[i];
(8)         A[i] = temp;
        }
}

53 Selection Sort Structure Tree

for 1-8
└─ block 2-8
    ├─ 2
    ├─ for 3-5
    │   └─ if 4-5
    │       └─ 5
    ├─ 6
    ├─ 7
    └─ 8

54 Lines 2, 5, 6, 7, 8: each is O(1). The if of 4-5 is O( max(1, 0) + 1 ) = O(1), where 1 is the if-part and 0 the (absent) else-part. The for of 3-5 is O( (n-(i+1)) * 1 ) = O(n-i-1) = O(n), simplified. The block of 2-8 = O(1) + O(n) + O(1) + O(1) + O(1) = O(n). The for of 1-8 = O( (n-1) * n ) = O(n^2 - n) = O(n^2), simplified.

55 4. Analyzing Function Calls. In this section, we assume that the functions are not recursive; we add recursion in section 5. Size measures for all the functions must be similar, so they can be combined to give the program's Big-Oh value.

56 Example Program

#include <stdio.h>

int bar(int x, int n);
int foo(int x, int n);

void main() {
    int a, n;
(1)     scanf("%d", &n);
(2)     a = foo(0, n);
(3)     printf("%d\n", bar(a, n));
}

57 (continued)

int bar(int x, int n) {
    int i;
(4)     for (i = 1; i <= n; i++)
(5)         x += i;
(6)     return x;
}

int foo(int x, int n) {
    int i;
(7)     for (i = 1; i <= n; i++)
(8)         x += bar(i, n);
(9)     return x;
}

58 Calling Graph

main
├─> foo ─> bar
└─> bar

59 Calculating Times with a Calling Graph
1. Calculate times for Group 0 functions: those that call no other user functions.
2. Calculate times for Group 1 functions: those that call Group 0 functions only.
3. Calculate times for Group 2 functions: those that call Group 0 and Group 1 functions only.
4. Continue until the time for main() is obtained.

60 Example Program Analysis. Group 0: bar() is O(n). Group 1: foo() is O(n * n) = O(n^2), because of the O(n) bar() call in its loop body. Group 2: main() = O(1) + O(n^2) + O(1) + O(n) = O(n^2).

61 5. Analyzing Recursive Functions. Recursive functions call themselves with a smaller-size argument, and terminate by reaching a base case.

int factorial(int n) {
    if (n <= 1)
        return 1;
    else
        return n * factorial(n-1);
}

62 Running Time for a Recursive Function. 1. Develop basis and inductive statements for the running time. 2. Solve the corresponding recurrence relation; this usually requires the Big-Oh notation to be rewritten as constants and multiples of n (e.g. O(1) becomes a, O(n) becomes b*n, O(n^2) becomes c*n^2, etc.).

63 3. Translate the solved relation back into Big-Oh notation: rewrite the remaining constants back into Big-Oh form (e.g. a becomes O(1), b*n becomes O(n)).

64 5.1. Factorial Running Time. Step 1. Basis: T(1) = O(1). Induction: T(n) = O(1) + T(n-1), for n > 1. Step 2. Simplify the relation by replacing the O() notation with constants. Basis: T(1) = a. Induction: T(n) = b + T(n-1), for n > 1.

65 The simplest way to solve T(n) is to calculate it for some values of n, and then guess the general expression.

T(1) = a
T(2) = b + T(1) = b + a
T(3) = b + T(2) = 2b + a
T(4) = b + T(3) = 3b + a

Obviously, the general form is: T(n) = (n-1)*b + a = bn + (a-b)

66 Step 3. Translate back: T(n) = bn + (a-b). Replace the constants by Big-Oh notation: T(n) = O(n) + O(1) = O(n). The running time for recursive factorial is O(n). That is fast.
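(The O(n) behaviour can also be observed directly by counting calls; the counter below is instrumentation I added for illustration, not part of the original function.)

#include <stdio.h>

static long calls = 0;   /* added instrumentation */

long factorial(int n) {
    calls++;                          /* one O(1) activation per call */
    if (n <= 1)
        return 1;
    else
        return n * factorial(n-1);
}

int main(void) {
    long f = factorial(10);
    /* expect exactly 10 calls for n = 10, matching T(n) = O(n) */
    printf("10! = %ld, computed with %ld calls\n", f, calls);
    return 0;
}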

67 5.2. Recursive Selection Sort

void rSSort(int A[], int n) {
    int imax, i;
    if (n == 1)
        return;
    else {
        imax = 0;    /* assume A[0] is biggest */
        for (i = 1; i < n; i++)
            if (A[i] > A[imax])
                imax = i;
        swap(A, n-1, imax);    /* move the biggest to the end */
        rSSort(A, n-1);
    }
}

68 Running Time (n == the size of the array; assume swap() is O(1), so ignore it). Step 1. Basis: T(1) = O(1). Induction: T(n) = O(n-1) + T(n-1), for n > 1, where the O(n-1) is the loop and T(n-1) is the call to rSSort(). Step 2. Basis: T(1) = a. Induction: T(n) = b(n-1) + T(n-1), for n > 1, where b(n-1) is a multiple of n-1.

69 Solve the relation:

T(1) = a
T(2) = b + T(1) = b + a
T(3) = 2b + T(2) = 2b + b + a
T(4) = 3b + T(3) = 3b + 2b + b + a

General form: T(n) = (n-1)b + ... + b + a = a + b(n-1)n/2

70 Step 3. Translate back: T(n) = a + b(n-1)n/2. Replace the constants by Big-Oh notation: T(n) = O(1) + O(n^2) + O(n) = O(n^2). The running time for recursive selection sort is O(n^2). That is slow for large arrays.
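(The quadratic behaviour shows up directly if we count comparisons; a sketch of mine, where the counter and the swap() definition are additions so the fragment runs standalone. Expect (n-1)n/2 comparisons.)

#include <stdio.h>

static long comparisons = 0;   /* added instrumentation */

static void swap(int A[], int i, int j) {
    int t = A[i]; A[i] = A[j]; A[j] = t;   /* O(1), as assumed */
}

void rSSort(int A[], int n) {
    if (n == 1) return;
    int imax = 0;                  /* assume A[0] is biggest */
    for (int i = 1; i < n; i++) {
        comparisons++;             /* the n-1 comparisons per call */
        if (A[i] > A[imax]) imax = i;
    }
    swap(A, n-1, imax);
    rSSort(A, n-1);
}

int main(void) {
    int A[100], n = 100;
    for (int i = 0; i < n; i++) A[i] = n - i;   /* reversed data */
    rSSort(A, n);
    /* expect (n-1)n/2 = 4950 comparisons, matching T(n) = O(n^2) */
    printf("n=%d: %ld comparisons (expected %d)\n",
           n, comparisons, (n-1)*n/2);
    return 0;
}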

71 6. Further Information. Discrete Mathematics and its Applications, Kenneth H. Rosen, McGraw Hill, 2007, 7th edition; chapter 3, sections 3.2 - 3.3.

