Presentation on theme: "CS210- Lecture 2 Jun 2, 2005 Announcements Questions"— Presentation transcript:

1 CS210 - Lecture 2, Jun 2, 2005: Announcements, Questions
Announcements: the course webpage has been posted, and I will post Assignment 1 this weekend.
Questions?

2 Agenda
Analysis of Algorithms
Asymptotic Notations
Examples

3 Algorithm
An algorithm is a well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. In other words, an algorithm is a sequence of computational steps that transform the input into the output.

4 Motivation to Analyze Algorithms
Understanding the resource requirements of an algorithm: time and space.
Running time analysis estimates the time required by an algorithm as a function of the input size.
Uses: estimating how the running time grows as the input grows, and choosing between alternative algorithms.

5 Three Running Times for an Algorithm
Best case: when the situation is ideal, e.g. already-sorted input for a sorting algorithm. Not an interesting case to study; it gives a lower bound on the running time.
Worst case: when the situation is worst, e.g. reverse-sorted input for a sorting algorithm. Interesting to study, because we can then say that no matter what the input is, the algorithm will never take any longer. It gives an upper bound on the running time for any input.

6 Three Running Times for an Algorithm
Average case: any random input. Often roughly as bad as the worst case. The problem with this case is that it may not be apparent what constitutes an "average" input for a particular problem.

7 Experimental Studies
Write a program implementing the algorithm.
Run the program with inputs of varying size and composition.
Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time (see the sketch below).
Plot the results.
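For concreteness, here is a minimal sketch of such an experiment (not from the slides): it times java.util.Arrays.sort on random arrays of growing size using System.currentTimeMillis() and prints one line per input size that could then be plotted.

    import java.util.Arrays;
    import java.util.Random;

    public class SortTiming {
        public static void main(String[] args) {
            Random rnd = new Random(42);
            // Double the input size each round and measure the elapsed wall-clock time.
            for (int n = 250_000; n <= 4_000_000; n *= 2) {
                int[] a = new int[n];
                for (int i = 0; i < n; i++) a[i] = rnd.nextInt();
                long start = System.currentTimeMillis();
                Arrays.sort(a);                               // the algorithm under test
                long elapsed = System.currentTimeMillis() - start;
                System.out.println("n = " + n + "  time = " + elapsed + " ms");
            }
        }
    }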

8 Limitations of Experiments
It is necessary to implement the algorithm, which may be difficult.
Results may not be indicative of the running time on other inputs not included in the experiment.
In order to compare two algorithms, the same hardware and software environments must be used.

9 Theoretical Analysis
Uses a high-level description of the algorithm instead of an implementation.
Characterizes running time as a function of the input size, n.
Takes into account all possible inputs.
Allows us to evaluate the speed of an algorithm independent of the hardware/software environment.

10 Analysis and Measurements
Performance measurement (execution time) is machine dependent. Performance analysis is machine independent. How do we analyze a program independent of a machine? By counting the number of steps.

11 RAM Model of Computation
In the RAM model of computation, instructions are executed one after another. Assumption: basic operations (steps) take 1 time unit each. What are basic operations? Arithmetic operations, comparisons, assignments, etc. Library routines such as sort should not be considered basic. Use common sense.

12 Pseudocode
A high-level description of an algorithm: more structured than English prose, less detailed than a program, and the preferred notation for describing algorithms. It hides program design issues. By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm as a function of the input size.

13 Example 1
Example: find the sum of the elements of an array.
Input size: n (number of array elements)

Algorithm arraySum(A, n)
  Input: array A of n integers
  Output: sum of the elements of A
                                       # operations
  sum ← 0                              1
  for i ← 0 to n − 1 do                n + 1
    sum ← sum + A[i]                   n
  return sum                           1

Total number of steps: 2n + 3
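As a sketch (not on the slide), the same algorithm as a Java method to be placed in any class, with the operation counts from the pseudocode attached as comments:

    static int arraySum(int[] A, int n) {
        int sum = 0;                        // 1 step
        for (int i = 0; i <= n - 1; i++) {  // the loop test runs n + 1 times
            sum = sum + A[i];               // n additions/assignments
        }
        return sum;                         // 1 step
    }                                       // total: 2n + 3 primitive operations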

14 Example 2
Example: find the maximum element of an array.
Input size: n (number of array elements)

Algorithm arrayMax(A, n)
  Input: array A of n integers
  Output: maximum element of A
                                       # operations
  currentMax ← A[0]                    1
  for i ← 1 to n − 1 do                n
    if A[i] > currentMax then          n − 1
      currentMax ← A[i]                n − 1
  return currentMax                    1

Total number of steps: 3n (at most)
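A corresponding Java sketch of arrayMax (assuming n ≥ 1), again with the counts as comments; note that the assignment inside the if runs n − 1 times only in the worst case:

    static int arrayMax(int[] A, int n) {
        int currentMax = A[0];               // 1 step
        for (int i = 1; i <= n - 1; i++) {   // the loop test runs n times
            if (A[i] > currentMax)           // n - 1 comparisons
                currentMax = A[i];           // at most n - 1 assignments
        }
        return currentMax;                   // 1 step
    }                                        // total: at most 3n primitive operations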

15 Seven Growth Functions
Seven functions that often appear in algorithm analysis:
Constant ≈ 1
Logarithmic ≈ log n
Linear ≈ n
Log-linear ≈ n log n
Quadratic ≈ n^2
Cubic ≈ n^3
Exponential ≈ 2^n

16 The Constant Function
f(n) = c for some fixed constant c. The growth rate is independent of the input size.
The most fundamental constant function is g(n) = 1; any constant function f(n) = c can be written as f(n) = cg(n).
It characterizes the number of steps needed to do a basic operation on a computer.

17 The Linear and Quadratic Functions
Linear function: f(n) = n. For example, comparing a number x to each element of an array of size n requires n comparisons.
Quadratic function: f(n) = n^2. It appears when there are nested loops in algorithms.

18 Example 3

Algorithm Mystery(n)
                                       # operations
  sum ← 0                              1
  for i ← 0 to n − 1 do                n + 1
    for j ← 0 to n − 1 do              n(n + 1)
      sum ← sum + 1                    n · n

Total number of steps: 2n^2 + 2n + 2
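A Java sketch of Mystery (not on the slide); the nested loops are what produce the n^2 term, and sum ends up equal to n^2:

    static int mystery(int n) {
        int sum = 0;
        for (int i = 0; i <= n - 1; i++)       // outer loop test runs n + 1 times
            for (int j = 0; j <= n - 1; j++)   // inner loop test runs n(n + 1) times in total
                sum = sum + 1;                 // executes n * n times
        return sum;                            // the return is extra; the slide counts 2n^2 + 2n + 2 steps
    }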

19 The Cubic Function and Other Polynomials
Cubic function: f(n) = n^3.
Polynomials: f(n) = a_0 + a_1 n + a_2 n^2 + … + a_d n^d, where d is the degree of the polynomial and a_0, a_1, …, a_d are called the coefficients.

20 The Exponential Function
f(n) = b^n, where b is the base and n is the exponent. For example, if we have a loop that starts by performing one operation and then doubles the number of operations performed with each iteration, then the number of operations performed in the nth iteration is 2^n.
Exponent rules:
(b^a)^c = b^(ac)
b^a b^c = b^(a+c)
b^a / b^c = b^(a−c)

21 The Logarithm Function
f(n) = log_b n, where b is the base. x = log_b n if and only if b^x = n.
Logarithm rules:
log_b(ac) = log_b a + log_b c
log_b(a/c) = log_b a − log_b c
log_b(a^c) = c log_b a
log_b a = (log_d a) / (log_d b)
b^(log_d a) = a^(log_d b)
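A quick numerical check of the change-of-base rule (a sketch, not from the slides). Java's Math.log is the natural logarithm, so log_2 n can be computed as Math.log(n) / Math.log(2):

    public class LogDemo {
        public static void main(String[] args) {
            double n = 1024;
            double log2n = Math.log(n) / Math.log(2);      // change of base: ln(n) / ln(2)
            System.out.println("log_2(1024) = " + log2n);  // prints 10.0, since 2^10 = 1024
        }
    }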

22 The N-Log-N Function
f(n) = n log n. This function grows a little faster than the linear function and a lot slower than the quadratic function.

23 Growth Rates Compared

             n=1      n=2      n=4      n=8      n=16      n=32
  1          1        1        1        1        1         1
  log n      0        1        2        3        4         5
  n          1        2        4        8        16        32
  n log n    0        2        8        24       64        160
  n^2        1        4        16       64       256       1024
  n^3        1        8        64       512      4096      32768
  2^n        2        4        16       256      65536     4294967296
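A small program (a sketch, not part of the lecture) that recomputes the non-constant rows of this table for the same values of n:

    public class GrowthTable {
        public static void main(String[] args) {
            System.out.printf("%8s %8s %10s %12s %12s %14s%n",
                    "n", "log n", "n log n", "n^2", "n^3", "2^n");
            for (int n = 1; n <= 32; n *= 2) {
                double log2 = Math.log(n) / Math.log(2);   // log base 2 of n
                System.out.printf("%8d %8.0f %10.0f %12d %12d %14d%n",
                        n, log2, n * log2, (long) n * n, (long) n * n * n, 1L << n);
            }
        }
    }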

24 Asymptotic Notation
Although we can sometimes determine the exact running time of an algorithm, the extra precision is usually not worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself.

25 Asymptotic Notation
When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms.
Three notations:
Big-Oh (O) notation
Big-Omega (Ω) notation
Big-Theta (Θ) notation

26 Big-Oh Notation
A standard for expressing upper bounds.
Definition: f(n) = O(g(n)) if there exist constants c > 0 and n0 ≥ 1 such that f(n) ≤ cg(n) for all n ≥ n0.
We say: f(n) is big-O of g(n), or the time complexity of f(n) is g(n).
Intuitively, an algorithm A being O(g(n)) means that, if the input is of size n, the algorithm will stop within a constant multiple of g(n) time.
The running time of Example 1 is O(n), i.e., we ignore the constant 2 and the value 3 in f(n) = 2n + 3, because f(n) ≤ 3n for n ≥ 10 (c = 3 and n0 = 10).

27 Example 4
The definition does not require the upper bound to be tight, though we would prefer it to be as tight as possible.
What is a big-Oh bound for f(n) = 3n + 3?
Let g(n) = n, c = 6, and n0 = 1; then f(n) = O(g(n)) = O(n), because 3n + 3 ≤ 6g(n) if n ≥ 1.
Let g(n) = n, c = 4, and n0 = 3; then f(n) = O(g(n)) = O(n), because 3n + 3 ≤ 4g(n) if n ≥ 3.
Let g(n) = n^2, c = 1, and n0 = 5; then f(n) = O(g(n)) = O(n^2), because 3n + 3 ≤ g(n) = n^2 if n ≥ 5.
We certainly prefer O(n).

28 Example 5
What is a big-Oh bound for f(n) = n^2 + 5n − 3?
Let g(n) = n^2, c = 2, and n0 = 6. Then f(n) = O(g(n)) = O(n^2), because f(n) ≤ 2g(n) if n ≥ n0, i.e., n^2 + 5n − 3 ≤ 2n^2 if n ≥ 6.

29 More About Big-Oh Notation
Asymptotic: big-Oh is meaningful only when n is sufficiently large; the condition n ≥ n0 means that we only care about large problem sizes.
Growth rate: a program that is O(g(n)) is said to have a growth rate of g(n). It shows how fast the running time grows when n increases.

30 More Nested Loops

    int k = 0;
    for (int i = 0; i < n; i++)
        for (int j = i; j < n; j++)
            k++;

Running time: the statement k++ executes n + (n − 1) + … + 1 = n(n + 1)/2 times, so the running time is O(n^2).

31 Big-Omega Notation
A standard for expressing lower bounds.
Definition (Omega): f(n) = Ω(g(n)) if there exist constants c > 0 and n0 ≥ 1 such that f(n) ≥ cg(n) for all n ≥ n0.

32 Big-Theta Notation
A standard for expressing exact (tight) bounds.
Definition (Theta): f(n) = Θ(g(n)) if and only if f(n) = O(g(n)) and f(n) = Ω(g(n)).
Saying an algorithm is Θ(g(n)) means that g(n) is a tight bound (as good as possible) on its running time: on all inputs of size n, the time is at most a constant multiple of g(n), and on all inputs of size n, the time is at least a constant multiple of g(n).

33 Computing Fibonacci Numbers
We write the following recursive program:

    1  long fib(int n) {
    2      if (n <= 1) return 1;
    3      else return fib(n - 1) + fib(n - 2);
    4  }

Let us analyze the running time.

34 fib(n) Runs in Exponential Time
Let T(n) denote the running time. T(0) = T(1) = c, and T(n) = T(n − 1) + T(n − 2) + 2, where the 2 accounts for the comparison at line 2 plus the addition at line 3.
It can be shown that the running time is Ω((3/2)^n): by induction, if T(n − 1) ≥ a(3/2)^(n−1) and T(n − 2) ≥ a(3/2)^(n−2) for some constant a > 0, then T(n) ≥ a(3/2)^(n−2) (3/2 + 1) ≥ a(3/2)^(n−2) (3/2)^2 = a(3/2)^n, since 5/2 ≥ 9/4.
So the running time grows exponentially.

35 Efficient Fibonacci Numbers
Avoid recomputation. A solution with linear running time:

    int fib(int n) {
        int fibn = 0, fibn1 = 0, fibn2 = 1;
        if (n < 2)
            return n;
        else {
            for (int i = 2; i <= n; i++) {
                fibn = fibn1 + fibn2;   // fib(i) = fib(i-1) + fib(i-2)
                fibn1 = fibn2;
                fibn2 = fibn;
            }
            return fibn;
        }
    }
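For example (a sketch, assuming the method above is declared static in some class), the iterative version follows the 0-indexed convention fib(0) = 0, fib(1) = 1:

    public static void main(String[] args) {
        for (int i = 0; i <= 10; i++)
            System.out.print(fib(i) + " ");   // prints: 0 1 1 2 3 5 8 13 21 34 55
    }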

36 Summary: Lower vs. Upper Bounds
This lecture gives some ideas on how to analyze the complexity of programs. We have focused on worst-case analysis.
An upper bound O(g(n)) means that for sufficiently large inputs, the running time f(n) is bounded above by a multiple of g(n).
A lower bound Ω(g(n)) means that for sufficiently large n, there is at least one input of size n such that the running time is at least a fraction of g(n).
We also touched on the "exact" bound Θ(g(n)).

37 Summary: Algorithms vs. Problems
Running time analysis establishes bounds for individual algorithms.
An upper bound O(g(n)) for a problem: there is some O(g(n)) algorithm that solves the problem.
A lower bound Ω(g(n)) for a problem: every algorithm that solves the problem is Ω(g(n)).

