Algorithm Analysis: Running Time, Big O and Omega (Ω)


Introduction

An algorithm is a step-by-step procedure for accomplishing a task. In order to learn about an algorithm, we need to analyze it. This means we need to study the specification of the algorithm and draw conclusions about how the implementation of that algorithm (the program) will perform in general.

The issues that should be considered in analyzing an algorithm are:
- The running time of the program as a function of its inputs
- The total or maximum memory space needed for program data
- The total size of the program code
- Whether the program correctly computes the desired result
- The complexity of the program: for example, how easy it is to read, understand, and modify
- The robustness of the program: for example, how well it deals with unexpected or erroneous inputs

In this course, we consider the running time of the algorithm. The main factors that affect the running time are the algorithm itself, the input data, and the computer system used to run the program. The performance of a computer is determined by the hardware, the programming language used, and the operating system. To calculate the running time of a general C++ program, we first need to define a few rules. In these rules, we assume that the effects of the hardware and software system are absorbed into constant factors, so they are independent of the particular C++ program being analyzed.

Rule 1: The time required to fetch an integer from memory is a constant, t(fetch), and the time required to store an integer in memory is also a constant, t(store). For example, the running time of x = y is t(fetch) + t(store), because we need to fetch y from memory and store it into x. Similarly, the running time of x = 1 is also t(fetch) + t(store), because typically any constant is stored in memory before it is fetched.

Rule 2: The times required to perform elementary operations on integers, such as addition t(+), subtraction t(-), multiplication t(*), division t(/), and comparison t(cmp), are all constants. For example, the running time of y = x + 1 is 2t(fetch) + t(+) + t(store), because you need to fetch x and 1 (2t(fetch)), add them together (t(+)), and place the result into y (t(store)).

Rule 3: The time required to call a function is a constant, t(call), and the time required to return from a function is a constant, t(return).

Rule 4: The time required to pass an integer argument to a function or procedure is the same as the time required to store an integer in memory, t(store).

For example, the running time of y = f(x) is t(fetch) + 2t(store) + t(call) + t(f(x)), because you need to:
- fetch the value of x: t(fetch)
- pass x to the function and store it into the parameter: t(store)
- call the function f(x): t(call)
- run the function: t(f(x))
- store the returned result into y: t(store)

Rule 5: The time required for the address calculation implied by an array subscripting operation like a[i] is a constant, t([ ]). This time does not include the time to compute the subscript expression, nor does it include the time to access (fetch or store) the array element. For example, the running time of y = a[i] is 3t(fetch) + t([ ]) + t(store), because you need to:
- fetch the value of i: t(fetch)
- fetch the value of a: t(fetch)
- find the address of a[i]: t([ ])
- fetch the value of a[i]: t(fetch)
- store the value of a[i] into y: t(store)

Rule 6: The time required to allocate a fixed amount of storage from the heap using operator new is a constant, t(new). This time does not include any time required for initialization of the storage (calling a constructor). Similarly, the time required to return a fixed amount of storage to the heap using operator delete is a constant, t(delete). This time does not include any time spent cleaning up the storage before it is returned to the heap (calling a destructor).

For example, the running time of int* ptr = new int; is t(new) + t(store), because you need to allocate memory (t(new)) and store its address into ptr (t(store)). The running time of delete ptr; is t(fetch) + t(delete), because you need to fetch the address from ptr (t(fetch)) and delete the specified location (t(delete)).
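
To summarize Rules 1 through 6, the short sketch below annotates one statement of each kind with its cost under this model. The variable names and the helper function f are hypothetical, introduced only for illustration.

int f(int v) { return v + 1; }   // hypothetical function for the y = f(x) case

int main() {
    int x = 1;                   // t(fetch) + t(store)                    (Rule 1)
    int y = x + 1;               // 2t(fetch) + t(+) + t(store)            (Rules 1-2)
    int z = f(y);                // t(fetch) + 2t(store) + t(call) + t(f)  (Rules 3-4)
    int a[4] = {0, 1, 2, 3};
    int i = 2;
    int w = a[i];                // 3t(fetch) + t([ ]) + t(store)          (Rule 5)
    int* p = new int;            // t(new) + t(store)                      (Rule 6)
    delete p;                    // t(fetch) + t(delete)                   (Rule 6)
    return z + w;                // keeps z and w from being unused
}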

1. int Sum (int n)
2. {
3.     int result = 0;
4.     for (int i = 1; i <= n; ++i)
5.         result += i;
6.     return result;
7. }

Statement   Code             Time
3           result = 0       t(fetch) + t(store)
4a          i = 1            t(fetch) + t(store)
4b          i <= n           (2t(fetch) + t(cmp)) * (n+1)
4c          ++i              (2t(fetch) + t(+) + t(store)) * n
5           result += i      (2t(fetch) + t(+) + t(store)) * n
6           return result    t(fetch) + t(return)

Total: [6t(fetch) + 2t(store) + t(cmp) + 2t(+)] * n + [5t(fetch) + 2t(store) + t(cmp) + t(return)]

1. int func (int a[ ], int n, int x)
2. {
3.     int result = a[n];
4.     for (int i = n-1; i >= 0; --i)
5.         result = result * x + a[i];
6.     return result;
7. }

Statement   Time
3           3t(fetch) + t([ ]) + t(store)
4a          2t(fetch) + t(-) + t(store)
4b          (2t(fetch) + t(cmp)) * (n+1)
4c          (2t(fetch) + t(-) + t(store)) * n
5           (5t(fetch) + t([ ]) + t(+) + t(*) + t(store)) * n
6           t(fetch) + t(return)

Total: [9t(fetch) + 2t(store) + t(cmp) + t([ ]) + t(+) + t(*) + t(-)] * n + [8t(fetch) + 2t(store) + t([ ]) + t(-) + t(cmp) + t(return)]
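
As a quick usage check, here is a minimal sketch; the example values are my own, not from the slides. func evaluates the polynomial a[0] + a[1]x + ... + a[n]x^n by Horner's rule.

#include <cassert>

int func(int a[ ], int n, int x) {
    int result = a[n];
    for (int i = n - 1; i >= 0; --i)
        result = result * x + a[i];   // multiply through and add the next coefficient
    return result;
}

int main() {
    int a[] = {1, 2, 3};              // represents 1 + 2x + 3x^2
    assert(func(a, 2, 10) == 321);    // 1 + 2*10 + 3*100
    return 0;
}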

Using constant times such as t(fetch), t(store), t(delete), t(new), t(+), etc. makes our running time accurate. However, in order to make life simple, we can consider the approximate running time of every constant to be the same, 1. For example, the running time of y = x + 1 then becomes 4, because it involves two fetches, one addition, and one store, each counted as 1. For a loop there are two cases: if we know the exact number of iterations, the running time is a constant, t(1); if we do not know the exact number of iterations, the running time is t(n), where n is the number of iterations.
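
As a sketch under this simplified model, the Sum function from before can be annotated line by line; the counts below simply restate the earlier table with every t(.) replaced by 1.

int Sum(int n)
{
    int result = 0;        // 2        (fetch the constant, store)
    for (int i = 1;        // 2        (fetch the constant, store)
         i <= n;           // 3(n+1)   (2 fetches + compare, tested n+1 times)
         ++i)              // 4n       (2 fetches + add + store, done n times)
        result += i;       // 4n       (2 fetches + add + store, done n times)
    return result;         // 2        (fetch + return)
}
// Total: 11n + 9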

1. int Geometric (int x, int n)
2. {
3.     int sum = 0;
4.     for (int i = 0; i <= n; ++i)
5.     {
6.         int prod = 1;
7.         for (int j = 0; j < i; ++j)
8.             prod = prod * x;
9.         sum = sum + prod;
10.     }
11.     return sum;
12. }

Statement   Time
3           2
4a          2
4b          3(n+2)
4c          4(n+1)
6           2(n+1)
7a          2(n+1)
7b          Σ_{i=0..n} 3(i+1)
7c          Σ_{i=0..n} 4i
8           Σ_{i=0..n} 4i
9           4(n+1)
11          2

Total: (11/2)n^2 + (47/2)n + 27
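
As a quick usage check (the example values are my own, not from the slides), Geometric computes the geometric series 1 + x + x^2 + ... + x^n:

#include <cassert>

int Geometric(int x, int n) {
    int sum = 0;
    for (int i = 0; i <= n; ++i) {
        int prod = 1;
        for (int j = 0; j < i; ++j)
            prod = prod * x;          // prod becomes x^i
        sum = sum + prod;
    }
    return sum;
}

int main() {
    assert(Geometric(2, 4) == 31);    // 1 + 2 + 4 + 8 + 16
    assert(Geometric(3, 2) == 13);    // 1 + 3 + 9
    return 0;
}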

1. int Power (int x, int n)
2. {
3.     if (n == 0)
4.         return 1;
5.     else if (n % 2 == 0)   // n is even
6.         return Power (x*x, n/2);
7.     else                   // n is odd
8.         return x * Power (x*x, n/2);
9. }

Statement   n = 0   n > 0 (n is even)   n > 0 (n is odd)
3           3       3                   3
4           2       -                   -
5           -       5                   5
6           -       10 + T(n/2)         -
8           -       -                   12 + T(n/2)
Total       5       18 + T(n/2)         20 + T(n/2)
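
As a quick usage check (the example values are my own, not from the slides), Power computes x^n by repeated squaring:

#include <cassert>

int Power(int x, int n) {
    if (n == 0)
        return 1;
    else if (n % 2 == 0)               // n is even
        return Power(x * x, n / 2);
    else                               // n is odd
        return x * Power(x * x, n / 2);
}

int main() {
    assert(Power(2, 10) == 1024);
    assert(Power(3, 5) == 243);
    return 0;
}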

T(n) = 5                 for n = 0
T(n) = 18 + T(n/2)       for n > 0 and n even
T(n) = 20 + T(n/2)       for n > 0 and n odd

where n/2 denotes integer division. Suppose n = 2^k for some k > 0. Since 2^k is an even number, we get:

T(2^k) = 18 + T(2^(k-1))
       = 18 + 18 + T(2^(k-2))
       = 18 + 18 + 18 + T(2^(k-3))
       = ...
       = 18k + T(2^(k-k))
       = 18k + T(2^0)
       = 18k + T(1)

Since 1 is an odd number, the running time T(1) is: T(1) = 20 + T(0) = 20 + 5 = 25. Therefore, T(2^k) = 18k + 25. If n = 2^k, then log n = log 2^k = k, so T(n) = 18 log n + 25.
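
The closed form can be spot-checked with a short sketch (my own, not from the slides) that evaluates the recurrence directly:

#include <cassert>

// The recurrence derived above: T(0) = 5, T(n) = 18 + T(n/2) for even n > 0,
// and T(n) = 20 + T(n/2) for odd n > 0.
long T(long n) {
    if (n == 0) return 5;
    return (n % 2 == 0 ? 18 : 20) + T(n / 2);
}

int main() {
    // For n = 2^k the closed form says T(n) = 18k + 25.
    long n = 2;
    for (long k = 1; k <= 20; ++k, n *= 2)
        assert(T(n) == 18 * k + 25);
    return 0;
}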

Asymptotic Notation

Suppose the running times of two algorithms A and B are TA(n) and TB(n), respectively, where n is the size of the problem. How can we determine whether TA(n) is better than TB(n)? One way is to fix the size ahead of time at some n = n0; then we may be able to say that algorithm A performs better than algorithm B for n = n0. But this is only the special case n = n0. What about n = n1, or n = n2? Is A better than B in those cases too? Unfortunately, there is no easy answer, since we cannot expect the size n to be known ahead of time. What we can often do, however, is show that under certain conditions TA(n) is better than TB(n) for all n >= n1.

To understand the running times of algorithms we need to make some definitions:

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that "f(n) is big oh of g(n)", written f(n) is O(g(n)), if there exists an integer n0 and a constant c > 0 such that for all integers n >= n0, f(n) <= c g(n).

Example: Show that f(n) = 8n + 128 is O(n^2). We need 8n + 128 <= c n^2; let's set c = 1:

0 <= n^2 - 8n - 128
0 <= (n - 16)(n + 8)

which holds for all n >= 16. Thus we can say that for the constant c = 1 and n0 = 16, f(n) <= c n^2 for all n >= n0, so f(n) is O(n^2).
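
The witnesses c = 1 and n0 = 16 can also be spot-checked numerically. A finite loop is of course not a proof (the algebra above is); this sketch, my own rather than the slides', is only a sanity check:

#include <cassert>

int main() {
    // f(n) = 8n + 128, g(n) = n^2, witnesses c = 1 and n0 = 16.
    for (long long n = 16; n <= 1000000; ++n)
        assert(8 * n + 128 <= n * n);   // f(n) <= c * g(n) for all n >= n0
    return 0;
}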

[Figure: plot of f(n) = 8n + 128 against g1(n) = 4n^2, g2(n) = 2n^2, and g3(n) = n^2, showing f(n) eventually bounded above by each.]

Theorem: If f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f1(n) + f2(n) is O(max(g1(n), g2(n))).

Proof: If f1(n) is O(g1(n)), then f1(n) <= c1 g1(n) for all n >= n1. If f2(n) is O(g2(n)), then f2(n) <= c2 g2(n) for all n >= n2. Let n0 = max(n1, n2) and c0 = 2 max(c1, c2), and consider the sum f1(n) + f2(n) for some n >= n0:

f1(n) + f2(n) <= c1 g1(n) + c2 g2(n)
              <= c0 (g1(n) + g2(n)) / 2
              <= c0 max(g1(n), g2(n))

Therefore, f1(n) + f2(n) is O(max(g1(n), g2(n))).

Theorem: If f1(n) is O(g1(n)) and f2(n) is O(g2(n)), then f1(n) * f2(n) is O(g1(n) * g2(n)).

Proof: If f1(n) is O(g1(n)), then f1(n) <= c1 g1(n) for all n >= n1. If f2(n) is O(g2(n)), then f2(n) <= c2 g2(n) for all n >= n2. Let n0 = max(n1, n2) and c0 = c1 * c2, and consider the product f1(n) * f2(n) for some n >= n0:

f1(n) * f2(n) <= c1 g1(n) * c2 g2(n) = c0 (g1(n) * g2(n))

Therefore, f1(n) * f2(n) is O(g1(n) * g2(n)).

Theorem: If f(n) is O(g(n)) and g(n) is O(h(n)), then f(n) is O(h(n)).

Proof: If f(n) is O(g(n)), then f(n) <= c1 g(n) for all n >= n1. If g(n) is O(h(n)), then g(n) <= c2 h(n) for all n >= n2. Let n0 = max(n1, n2) and c0 = c1 * c2; then for all n >= n0,

f(n) <= c1 g(n) <= c1 c2 h(n) = c0 h(n)

Therefore, f(n) is O(h(n)).

The names of common big O expressions:

Expression     Name
O(1)           constant
O(log n)       logarithmic
O(log^2 n)     log squared
O(n)           linear
O(n log n)     n log n
O(n^2)         quadratic
O(n^3)         cubic
O(2^n)         exponential
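
To make the table concrete, here is a small sketch (my own, not from the slides) that tabulates a few of these growth rates for doubling input sizes:

#include <cstdio>
#include <cmath>

int main() {
    std::printf("%6s %8s %10s %10s %22s\n", "n", "log n", "n log n", "n^2", "2^n");
    for (int n = 8; n <= 64; n *= 2)
        std::printf("%6d %8.1f %10.1f %10d %22.0f\n",
                    n, std::log2(n), n * std::log2(n),
                    n * n, std::pow(2.0, n));
    return 0;
}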

Conventions for writing big oh expressions

Certain conventions have evolved concerning how big oh expressions are normally written. First, it is common practice to drop all but the most significant terms: instead of O(n^2 + n log n + n) we simply write O(n^2). Second, it is common practice to drop constant coefficients: instead of O(3n^2), we write O(n^2). As a special case of this rule, if the function is a constant, instead of, say, O(1024), we simply write O(1).

Asymptotic Lower Bound (Ω)

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that "f(n) is omega of g(n)", written f(n) is Ω(g(n)), if there exists an integer n0 and a constant c > 0 such that for all integers n >= n0, f(n) >= c g(n).

Example: Show that f(n) = 5n^2 - 64n + 256 is Ω(n^2). We need 5n^2 - 64n + 256 >= c n^2; let c = 1:

5n^2 - 64n + 256 >= n^2
4n^2 - 64n + 256 >= 0
4(n - 8)^2 >= 0

which always holds. Taking c = 1 and n0 = 8, f(n) >= c n^2 for all n >= n0, so f(n) is Ω(n^2).
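
As with the big oh example, the witnesses c = 1 and n0 = 8 can be spot-checked numerically; this is a sanity check, not a proof, and the sketch is my own rather than the slides':

#include <cassert>

int main() {
    // f(n) = 5n^2 - 64n + 256, g(n) = n^2, witnesses c = 1 and n0 = 8.
    for (long long n = 8; n <= 1000000; ++n)
        assert(5 * n * n - 64 * n + 256 >= n * n);   // f(n) >= c * g(n)
    return 0;
}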

[Figure: plot of f(n) = 5n^2 - 64n + 256 against n^2 and 2n^2, showing f(n) eventually bounded below by n^2.]

Other definitions

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that "f(n) is theta of g(n)", written f(n) is Θ(g(n)), if and only if f(n) is O(g(n)) and f(n) is Ω(g(n)).

Definition: Consider a function f(n) which is non-negative for all integers n >= 0. We say that "f(n) is little o of g(n)", written f(n) is o(g(n)), if and only if f(n) is O(g(n)) and f(n) is not Ω(g(n)).

For example, 5n^2 - 64n + 256 is both O(n^2) and Ω(n^2), so it is Θ(n^2); 8n + 128 is O(n^2) but not Ω(n^2), so it is o(n^2).

Now let's consider some of the previous examples in terms of the big O notation:

1. int func (int a[ ], int n, int x)
2. {
3.     int result = a[n];
4.     for (int i = n-1; i >= 0; --i)
5.         result = result * x + a[i];
6.     return result;
7. }

Statement   Simple time model   Big O
3           5                   O(1)
4a          4                   O(1)
4b          3n + 3              O(n)
4c          4n                  O(n)
5           9n                  O(n)
6           2                   O(1)
Total       16n + 14            O(n)

The total running time is O(16n + 14) = O(max(16n, 14)) = O(16n) = O(n).

1. void PrefixSums (int a[ ], int n)
2. {
3.     for (int j = n-1; j >= 0; --j)
4.     {
5.         int sum = 0;
6.         for (int i = 0; i <= j; ++i)
7.             sum = sum + a[i];
8.         a[j] = sum;
9.     }
10. }

Statement   Big O
3a          O(1)
3b          O(1) * O(n)
3c          O(1) * O(n)
5           O(1) * O(n)
6a          O(1) * O(n)
6b          O(1) * O(n^2)
6c          O(1) * O(n^2)
7           O(1) * O(n^2)
8           O(1) * O(n)
Total       O(n^2)
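
The quadratic bound can be made concrete by counting how often the innermost statement runs. The sketch below (my own, not from the slides) checks that statement 7 executes exactly n(n+1)/2 times, which grows as n^2:

#include <cassert>

// Count the executions of the inner-loop body of PrefixSums.
long innerSteps(int n) {
    long steps = 0;
    for (int j = n - 1; j >= 0; --j)
        for (int i = 0; i <= j; ++i)
            ++steps;                  // statement 7 runs once per pass
    return steps;
}

int main() {
    for (int n = 1; n <= 100; ++n)
        assert(innerSteps(n) == (long)n * (n + 1) / 2);
    return 0;
}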