UNIT-I: FUNDAMENTALS OF THE ANALYSIS OF ALGORITHM EFFICIENCY (Analysis and Design of Algorithms, Chapter 2)


OUTLINE:
Analysis Framework
 - Measuring an Input's Size
 - Units for Measuring Running Time
 - Orders of Growth
 - Worst-Case, Best-Case, and Average-Case Efficiencies
 - Recapitulation of the Analysis Framework
Asymptotic Notations and Basic Efficiency Classes
 - Informal Introduction
 - O-Notation
 - Ω-Notation
 - Θ-Notation
 - Useful Property Involving the Asymptotic Notations
 - Using Limits for Comparing Orders of Growth
 - Basic Efficiency Classes
Mathematical Analysis of Nonrecursive Algorithms
Mathematical Analysis of Recursive Algorithms

Analysis Framework
Analysis of algorithms means investigating an algorithm's efficiency with respect to two resources: running time and memory space. That is, analysis is the task of determining how much computing time and storage an algorithm requires.
The space complexity of an algorithm is the amount of memory it needs to run to completion.
The time complexity of an algorithm is the amount of computer time it needs to run to completion.
Performance evaluation can be divided into two major phases:
 - a priori estimates (performance analysis)
 - a posteriori testing (performance measurement)

Measuring an Input's Size:
The running time of almost every algorithm increases with the input size. Example: it takes longer to sort larger arrays, to multiply larger matrices, and so on.
Therefore, it is logical to investigate an algorithm's efficiency as a function of some parameter n indicating the algorithm's input size.
In most cases, selecting the parameter n is straightforward. Example: for sorting, searching, finding the largest element, etc., the size metric is the number of elements in the list.

Measuring an Input's Size:
For the problem of evaluating a polynomial p(x) = a_n x^n + … + a_0 of degree n, the input size metric is the polynomial's degree or its number of coefficients, which is larger by one than the degree.
For the problem of computing the product of two n x n matrices, two natural size measures are frequently used: the matrix order n and the total number of elements N in the matrices being multiplied. The algorithm's efficiency can look qualitatively different depending on which of the two measures is used.
The choice of the appropriate size metric can also be influenced by the operations of the algorithm. Example: consider a spell-checking algorithm:

Measuring an Input's Size:
Example: Consider a spell-checking algorithm:
 - If the algorithm examines individual characters, the size measure is the number of characters in the input.
 - If the algorithm works by processing words, the size measure is the number of words in the input.
The input size metric for algorithms involving properties of numbers (e.g., checking whether a given integer n is prime) is the number b of bits in n's binary representation: b = ⌊log2 n⌋ + 1.
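As a quick check of the bit-count formula above, here is a small Python sketch (the helper name `bits` is our own):

```python
import math

def bits(n):
    """Number of bits in n's binary representation: floor(log2 n) + 1 for n >= 1."""
    return math.floor(math.log2(n)) + 1

# Python's built-in int.bit_length computes the same quantity.
assert bits(17) == (17).bit_length() == 5
assert bits(1024) == 11
```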

Units for Measuring Running Time:
Standard units of time measurement (a second, a millisecond, and so on) can be used to measure the running time of a program implementing the algorithm.
The obvious drawbacks of this approach are:
 - dependence on the speed of a particular computer,
 - dependence on the quality of the program implementing the algorithm,
 - dependence on the compiler used to generate the machine code,
 - the difficulty of clocking the actual running time of the program.
To measure the algorithm's efficiency, we need a metric that does not depend on these hardware and software factors.

Units for Measuring Running Time:
To measure an algorithm's efficiency:
 - One possible approach is to count the number of times each of the algorithm's operations is executed. This approach is both excessively difficult and usually unnecessary.
 - A better approach is to identify the basic operation (primitive operation), i.e., the operation contributing the most to the total running time, and compute the number of times the basic operation is executed on inputs of size n.
Example: Sorting algorithms work by comparing elements of the list being sorted with each other. For such algorithms, the basic operation is a key comparison.
Matrix multiplication and polynomial evaluation require two arithmetic operations: multiplication and addition. On most computers, multiplying two numbers takes longer than adding them, so the basic operation considered is multiplication.

INPUT SIZE AND BASIC OPERATION EXAMPLES

Problem                         Input size measure               Basic operation
Search for a key in a list      Number of items in the list, n   Key comparison
 of n items
Sort a list of n items          Number of items in the list, n   Key comparison
Add two n-by-n matrices         Dimension of matrices, n         Addition
Polynomial evaluation           Order of the polynomial, n       Multiplication
Multiply two n-by-n matrices    Dimension of matrices, n         Multiplication

Units for Measuring Running Time:
The established framework for analyzing an algorithm's time efficiency suggests measuring it by counting the number of times the algorithm's basic operation is executed on inputs of size n.
Let C_op be the execution time of the algorithm's basic operation on a particular computer, and let C(n) be the number of times this basic operation is executed for this algorithm. Then the running time T(n) can be estimated by
    T(n) ≈ C_op · C(n)
Note: C_op is an approximation, and C(n) does not account for operations that are not basic. Therefore, the formula can give only a reasonable estimate of the algorithm's running time.

Units for Measuring Running Time:
Problem: Assuming C(n) = (1/2) n(n - 1), how much longer will the algorithm run if we double its input size?
Solution: C(n) = (1/2) n(n - 1) = (1/2) n^2 - (1/2) n ≈ (1/2) n^2. Therefore,
    T(2n)/T(n) ≈ [C_op · C(2n)] / [C_op · C(n)] ≈ [(1/2)(2n)^2] / [(1/2) n^2] = 4.
Note: The efficiency analysis framework ignores multiplicative constants and concentrates on the basic operation count's order of growth for large inputs.
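A quick numerical sketch of the answer above: the ratio C(2n)/C(n) approaches 4 as n grows.

```python
# Doubling the input size roughly quadruples C(n) = n(n - 1)/2.
def c(n):
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, c(2 * n) / c(n))   # ratio tends to 4 as n grows
```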

ORDERS OF GROWTH
 - For GCD(m, n) of two small numbers, it is not clear how much more efficient Euclid's algorithm is compared to the other two algorithms discussed earlier.
 - It is only when we have to find the GCD of two large numbers (e.g., GCD(31415, 14142)) that the difference in algorithm efficiencies becomes both clear and important.
 - For large values of n, it is the function's order of growth that counts.

ORDERS OF GROWTH

n       log2 n   n       n log2 n   n^2      n^3      2^n        n!
10      3.3      10^1    3.3*10^1   10^2     10^3     10^3       3.6*10^6
10^2    6.6      10^2    6.6*10^2   10^4     10^6     1.3*10^30  9.3*10^157
10^3    10       10^3    1.0*10^4   10^6     10^9
10^4    13       10^4    1.3*10^5   10^8     10^12
10^5    17       10^5    1.7*10^6   10^10    10^15
10^6    20       10^6    2.0*10^7   10^12    10^18

Table: Values (some approximate) of several functions important for the analysis of algorithms.

ORDERS OF GROWTH
 - The function growing the slowest among these is the logarithmic function.
 - A program implementing an algorithm with a logarithmic basic-operation count runs practically instantaneously on inputs of all realistic sizes.
 - The exponential function 2^n and the factorial function n! grow so fast that their values become astronomically large even for small values of n.
 - Algorithms that require an exponential number of operations are practical only for solving problems of very small sizes.

ORDERS OF GROWTH
Another way to appreciate the qualitative difference among the orders of growth of these functions is to consider how they react to, say, a twofold increase in the value of the argument n:
 - The logarithmic function log2 n increases in value by just 1 (because log2 2n = log2 2 + log2 n = 1 + log2 n).
 - The linear function n increases twofold.
 - The "n-log-n" function n log2 n increases slightly more than twofold.
 - The quadratic function n^2 increases fourfold (because (2n)^2 = 4n^2).
 - The cubic function n^3 increases eightfold (because (2n)^3 = 8n^3).
 - The value of the exponential function 2^n gets squared (because 2^(2n) = (2^n)^2).
 - The factorial function n! increases much faster still.

Worst-Case, Best-Case and Average-Case Efficiencies:
There are many algorithms whose running time depends not only on the input's size but also on the specifics of a particular input.

ALGORITHM SequentialSearch(A[0…n-1], k)
//Searches for a given value in a given array by sequential search
//Input: An array A[0…n-1] and a search key k
//Output: The index of the first element of A that matches k, or -1 if
//        there are no matching elements
i ← 0
while i < n and A[i] ≠ k do
    i ← i + 1
if i < n return i
else return -1

The running time of this algorithm can be quite different for different inputs of the same size n.
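For concreteness, here is a direct Python rendering of the pseudocode above (a sketch, not part of the original slides):

```python
def sequential_search(a, k):
    """Return the index of the first element of a equal to k, or -1 if absent."""
    i = 0
    while i < len(a) and a[i] != k:
        i += 1
    return i if i < len(a) else -1

assert sequential_search([3, 1, 4, 1, 5], 1) == 1   # first match wins
assert sequential_search([3, 1, 4], 9) == -1        # unsuccessful search
```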

Worst-Case, Best-Case and Average-Case Efficiencies:
Worst-Case Efficiency:
 - When there are no matching elements, or the first matching element happens to be the last one in the list, the algorithm makes the largest number of key comparisons: C_worst(n) = n.
 - The worst-case efficiency of an algorithm is its efficiency for the worst-case input of size n, i.e., an input for which the algorithm runs longest among all possible inputs of that size.
 - Analyze the algorithm to see what kinds of inputs yield the largest value of the basic operation's count C(n) among all possible inputs of size n, and then compute C_worst(n).

Worst-Case, Best-Case and Average-Case Efficiencies:
Best-Case Efficiency:
 - The best-case efficiency of an algorithm is its efficiency for the best-case input of size n, i.e., an input for which the algorithm runs fastest among all possible inputs of that size.
 - Analyze the algorithm to see what kinds of inputs yield the smallest value of the basic operation's count C(n) among all possible inputs of size n, and then compute C_best(n).
 - For sequential search, best-case inputs are lists of size n whose first element equals the search key: C_best(n) = 1 for a successful search and C_best(n) = n for an unsuccessful search.

Worst-Case, Best-Case and Average-Case Efficiencies:
Average-Case Efficiency:
 - Neither the worst-case analysis nor its best-case counterpart yields the necessary information about an algorithm's behavior on a typical or random input. Average-case analysis provides this.
Assumptions for sequential search:
 (a) The probability of a successful search is P (0 ≤ P ≤ 1).
 (b) The probability of the first match occurring at the i-th position is the same for every i.
Now find the average number of key comparisons C_avg(n) as follows:
 - In case of a successful search, the probability of the first match occurring at the i-th position of the list is P/n for every i, and the number of comparisons made is i.

Worst-Case, Best-Case and Average-Case Efficiencies:
Average-Case Efficiency:
 - In case of an unsuccessful search, the number of comparisons is n, and the probability of such a search is (1 - P).

C_avg(n) = [1·(P/n) + 2·(P/n) + … + i·(P/n) + … + n·(P/n)] + n·(1 - P)
(the bracketed sum covers successful searches; the last term covers unsuccessful ones)
         = (P/n)[1 + 2 + … + n] + n(1 - P)
         = (P/n) · n(n + 1)/2 + n(1 - P)
         = P(n + 1)/2 + n(1 - P)

If P = 1 (i.e., the search is always successful), then C_avg(n) = (n + 1)/2.
If P = 0 (i.e., the search is always unsuccessful), then C_avg(n) = n.
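A small simulation (our own sketch, not from the slides) agrees with the formula; with n = 100 and P = 0.5, the formula predicts C_avg(n) = 0.5·101/2 + 100·0.5 = 75.25:

```python
import random

def avg_comparisons(n, p, trials=200_000, seed=1):
    """Estimate the average number of key comparisons of sequential search."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        if rng.random() < p:
            total += rng.randrange(n) + 1   # successful: match at a uniform position i
        else:
            total += n                      # unsuccessful: all n comparisons
    return total / trials

estimate = avg_comparisons(n=100, p=0.5)
predicted = 0.5 * (100 + 1) / 2 + 100 * (1 - 0.5)   # 75.25
assert abs(estimate - predicted) < 1.0
```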

RECAPITULATION OF THE ANALYSIS FRAMEWORK
 - Both time and space efficiencies are measured as functions of the algorithm's input size.
 - Time efficiency is measured by counting the number of times the algorithm's basic operation is executed.
 - The efficiencies of some algorithms may differ significantly for inputs of the same size. For such algorithms, we need to distinguish between the worst-case, average-case, and best-case efficiencies.
 - The framework's primary interest lies in the order of growth of the algorithm's running time as its input size goes to infinity.

ASYMPTOTIC NOTATIONS
 - The efficiency analysis framework concentrates on the order of growth of an algorithm's basic operation count as the principal indicator of the algorithm's efficiency.
 - To compare and rank orders of growth, computer scientists use three notations: O (big oh), Ω (big omega), and Θ (big theta).
 - In the following discussion, t(n) and g(n) can be any two nonnegative functions defined on the set of natural numbers.
 - t(n) will be an algorithm's running time (usually indicated by its basic operation count C(n)), and g(n) will be some simple function to compare the count with.

INFORMAL INTRODUCTION TO ASYMPTOTIC NOTATIONS
Informally, O(g(n)) is the set of all functions with a smaller or same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
Examples:
 - n ∈ O(n^2) and 100n + 5 ∈ O(n^2): both functions are linear and hence have a smaller order of growth than g(n) = n^2.
 - (1/2) n(n - 1) ∈ O(n^2): this function is quadratic and hence has the same order of growth as g(n) = n^2.
Note: n^3 ∉ O(n^2) and n^4 + n + 1 ∉ O(n^2). The function n^3 is cubic and hence has a higher order of growth than n^2, and the fourth-degree polynomial n^4 + n + 1 also has a higher order of growth than n^2.

INFORMAL INTRODUCTION TO ASYMPTOTIC NOTATIONS
Informally, Ω(g(n)) stands for the set of all functions with a larger or same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
Examples:
 - n^3 ∈ Ω(n^2): this function is cubic and hence has a larger order of growth than g(n) = n^2.
 - (1/2) n(n - 1) ∈ Ω(n^2): this function is quadratic and hence has the same order of growth as g(n) = n^2.
Note: 100n + 5 ∉ Ω(n^2). The function 100n + 5 is linear and hence has a smaller order of growth than n^2.

INFORMAL INTRODUCTION TO ASYMPTOTIC NOTATIONS
Informally, Θ(g(n)) is the set of all functions that have the same order of growth as g(n) (to within a constant multiple, as n goes to infinity).
Example:
 - Every quadratic function ax^2 + bx + c with a > 0 is in Θ(n^2).
Note:
 - 100n + 5 ∉ Θ(n^2): the function 100n + 5 is linear and hence has a smaller order of growth than n^2.
 - n^3 ∉ Θ(n^2): the function n^3 is cubic and hence has a larger order of growth than n^2.

ASYMPTOTIC NOTATIONS
 - Θ(…) is an asymptotically tight bound: "asymptotically equal".
 - O(…) is an asymptotic upper bound: "asymptotically smaller or equal".
 - Ω(…) is an asymptotic lower bound: "asymptotically greater or equal".
Other asymptotic notations:
 - o(…), little-oh notation: "grows strictly slower than".
 - ω(…), little-omega notation: "grows strictly faster than".

Problem: Using the informal definitions of O, Θ, and Ω, determine whether the following assertions are true or false:
 1) 2n^2 ∈ O(n^3)          True
 2) n^2 ∈ O(n^2)           True
 3) n^3 ∈ O(n log n)       False
 4) n ∈ Ω(log n)           True
 5) n ∈ Ω(n^2)             False
 6) n^2/4 - n/2 ∈ Θ(n^2)   True
 7) n ∈ Θ(n^2)             False
 8) n(n + 1)/2 ∈ O(n^3)    True
 9) n(n + 1)/2 ∈ O(n^2)    True
 10) n(n + 1)/2 ∈ Θ(n^3)   False
 11) n(n + 1)/2 ∈ Ω(n)     True
Note: If the order of growth of algorithm 1's basic operation count is higher than that of algorithm 2's, then algorithm 2 can run faster than algorithm 1. One algorithm is more efficient than another if its worst-case running time has a lower order of growth.

Formal Definition of Asymptotic Notations
O-Notation: A function t(n) is said to be in O(g(n)), denoted t(n) ∈ O(g(n)), if t(n) is bounded above by some constant multiple of g(n) for all large n, i.e., if there exist some positive constant c and some nonnegative integer n0 such that
    t(n) ≤ c·g(n) for all n ≥ n0.
Notation: t(n) = O(g(n)).
[Figure: t(n) stays below c·g(n) for all n ≥ n0; x-axis: input size, y-axis: running time.]

O-Notation:
Example 1: 2n + 10 ∈ O(n).
Proof: We need c and n0 with 2n + 10 ≤ cn for all n ≥ n0. Rearranging, cn - 2n ≥ 10, i.e., (c - 2)n ≥ 10, i.e., n ≥ 10/(c - 2). Pick c = 3 and n0 = 10.
Example 2: n^2 ∉ O(n).
Proof: n^2 ≤ cn would imply n ≤ c for all n ≥ n0. This inequality cannot be satisfied, since c must be a constant.
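A quick numeric spot-check of the two examples (a sketch; `within_bound` is our own helper, and a finite scan can only suggest, not prove, an asymptotic claim):

```python
def within_bound(t, g, c, n0, upto=10_000):
    """Check t(n) <= c * g(n) for every n with n0 <= n < upto."""
    return all(t(n) <= c * g(n) for n in range(n0, upto))

# Example 1: 2n + 10 <= 3n holds for every n >= 10 in the scanned range.
assert within_bound(lambda n: 2 * n + 10, lambda n: n, c=3, n0=10)

# Example 2: n^2 <= c*n fails for n > c; e.g. c = 1000 fails at n = 1001.
assert not within_bound(lambda n: n * n, lambda n: n, c=1000, n0=1)
```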

Ω-Notation: A function f(n) is in Ω(g(n)) if there exist a constant c > 0 and an integer constant n0 ≥ 1 such that
    f(n) ≥ c·g(n) for all n ≥ n0.
Example: 5n^2 is Ω(n^2): let c = 5 and n0 = 1.
Notation: f(n) = Ω(g(n)).
[Figure: f(n) stays above c·g(n) for all n ≥ n0.]

Θ-Notation: A function f(n) is in Θ(g(n)) if there exist constants c1 > 0, c2 > 0 and an integer constant n0 ≥ 1 such that
    c1·g(n) ≤ f(n) ≤ c2·g(n) for all n ≥ n0.
Notation: f(n) = Θ(g(n)).
[Figure: f(n) stays between c1·g(n) and c2·g(n) for all n ≥ n0.]

Useful Property Involving the Asymptotic Notations
The following property is useful in analyzing algorithms that comprise two consecutively executed parts.
Theorem: If t1(n) ∈ O(g1(n)) and t2(n) ∈ O(g2(n)), then t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}).
Proof: First note that for arbitrary real numbers, if a1 ≤ b1 and a2 ≤ b2, then a1 + a2 ≤ 2 max{b1, b2}.
Since t1(n) ∈ O(g1(n)), there exist a positive constant c1 and a nonnegative integer n1 such that t1(n) ≤ c1·g1(n) for all n ≥ n1. Similarly, since t2(n) ∈ O(g2(n)), t2(n) ≤ c2·g2(n) for all n ≥ n2.
Let c3 = max{c1, c2} and consider n ≥ max{n1, n2}:
    t1(n) + t2(n) ≤ c1·g1(n) + c2·g2(n)
                  ≤ c3·g1(n) + c3·g2(n) = c3[g1(n) + g2(n)]
                  ≤ 2·c3·max{g1(n), g2(n)}.
Hence t1(n) + t2(n) ∈ O(max{g1(n), g2(n)}), with c = 2c3 = 2 max{c1, c2} and n0 = max{n1, n2}.
Note: An algorithm's overall efficiency is determined by the part with the larger order of growth, i.e., its least efficient part.

Using Limits for Comparing Orders of Growth
Though the formal definitions can be used to compare orders of growth, a convenient method is to compute the limit of the ratio of the two functions. Three principal cases may arise:

    lim n→∞ t(n)/g(n) = 0     : t(n) has a smaller order of growth than g(n)
    lim n→∞ t(n)/g(n) = c > 0 : t(n) has the same order of growth as g(n)
    lim n→∞ t(n)/g(n) = ∞     : t(n) has a larger order of growth than g(n)

The first two cases mean that t(n) ∈ O(g(n)), the last two mean that t(n) ∈ Ω(g(n)), and the second case means that t(n) ∈ Θ(g(n)).
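Numerically, the first case can be made visible: with t(n) = n log2 n and g(n) = n^2, the ratio simplifies to log2(n)/n and tends to 0 (a small sketch, our own example):

```python
import math

# t(n)/g(n) = (n * log2 n) / n^2 = log2(n) / n, which tends to 0,
# so n log2 n has a smaller order of growth than n^2.
for n in (10**2, 10**4, 10**6):
    print(n, math.log2(n) / n)
```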

Basic Efficiency Classes

Class    Name          Comments
1        constant      Short of best-case efficiencies
log n    logarithmic   Typically a result of cutting a problem's size by a constant
                       factor on each iteration of the algorithm
n        linear        Algorithms that scan a list of size n belong to this class
n log n  "n-log-n"     Many divide-and-conquer algorithms in the average case fall
                       into this category
n^2      quadratic     Typically characterizes efficiency of algorithms with two
                       embedded loops
n^3      cubic         Typically characterizes efficiency of algorithms with three
                       embedded loops
2^n      exponential   Typical for algorithms that generate all subsets of an
                       n-element set
n!       factorial     Typical for algorithms that generate all permutations of an
                       n-element set

[Figure: running time versus input size n for the basic efficiency classes.]

Mathematical Analysis of Nonrecursive Algorithms
General plan for analyzing the time efficiency of nonrecursive algorithms:
 - Decide on a parameter (or parameters) n indicating the input's size.
 - Identify the algorithm's basic operation.
 - Check whether the number of times the basic operation is executed depends only on the input size n. If it also depends on the type of input, investigate the worst-case, average-case, and best-case efficiencies separately.
 - Set up a summation for C(n) reflecting the number of times the algorithm's basic operation is executed.
 - Simplify the summation using standard formulas and establish the count's order of growth.

Two basic rules of sum manipulation:
    Σ_{i=l}^{u} c·a_i = c · Σ_{i=l}^{u} a_i                                (R1)
    Σ_{i=l}^{u} (a_i ± b_i) = Σ_{i=l}^{u} a_i ± Σ_{i=l}^{u} b_i            (R2)
Two summation formulas:
    Σ_{i=l}^{u} 1 = u - l + 1, where l ≤ u are some lower and upper integer limits  (S1)
    Σ_{i=0}^{n} i = Σ_{i=1}^{n} i = 1 + 2 + … + n = n(n + 1)/2 ≈ (1/2) n^2 ∈ Θ(n^2)  (S2)

Example 1: Find the value of the largest element in a list of n numbers.

ALGORITHM MaxElement(A[0…n-1])
//Determines the value of the largest element in a given array
//Input: An array A[0…n-1] of real numbers
//Output: The value of the largest element in A
maxval ← A[0]
for i ← 1 to n-1 do
    if A[i] > maxval
        maxval ← A[i]
return maxval
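Following the plan above, the basic operation is the comparison A[i] > maxval, which executes once per loop iteration, so C(n) = Σ_{i=1}^{n-1} 1 = n - 1 ∈ Θ(n). A Python sketch (our own) that also counts the comparisons:

```python
def max_element(a):
    """Return (largest element of a, number of comparisons performed)."""
    maxval = a[0]
    comparisons = 0
    for x in a[1:]:
        comparisons += 1          # basic operation: one comparison per iteration
        if x > maxval:
            maxval = x
    return maxval, comparisons

assert max_element([3, 9, 2, 7]) == (9, 3)   # n = 4, so C(n) = n - 1 = 3
```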

Example 2: Check whether all the elements in a given array are distinct.

ALGORITHM UniqueElements(A[0…n-1])
//Determines whether all the elements in a given array are distinct
//Input: An array A[0…n-1]
//Output: Returns "true" if all the elements in A are distinct
//        and "false" otherwise
for i ← 0 to n-2 do
    for j ← i+1 to n-1 do
        if A[i] = A[j] return false
return true
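Here the basic operation is the comparison A[i] = A[j]; in the worst case (all elements distinct), it executes C_worst(n) = Σ_{i=0}^{n-2} (n - 1 - i) = n(n - 1)/2 ∈ Θ(n^2) times. A Python sketch (our own):

```python
def unique_elements(a):
    """Return True if all elements of a are distinct, False otherwise."""
    n = len(a)
    for i in range(n - 1):
        for j in range(i + 1, n):
            if a[i] == a[j]:      # basic operation: element comparison
                return False
    return True

assert unique_elements([1, 2, 3]) is True
assert unique_elements([1, 2, 1]) is False
```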

Example 3: Given two n-by-n matrices A and B, compute their product C = AB.

ALGORITHM MatrixMultiplication(A[0…n-1, 0…n-1], B[0…n-1, 0…n-1])
//Multiplies two n-by-n matrices
//Input: Two n-by-n matrices A and B
//Output: Matrix C = AB
for i ← 0 to n-1 do
    for j ← 0 to n-1 do
        C[i, j] ← 0
        for k ← 0 to n-1 do
            C[i, j] ← C[i, j] + A[i, k] * B[k, j]
return C
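The basic operation is the multiplication in the innermost loop; it executes M(n) = Σ_{i} Σ_{j} Σ_{k} 1 = n^3 ∈ Θ(n^3) times. A Python sketch (our own) using nested lists:

```python
def matrix_multiplication(a, b):
    """Multiply two n-by-n matrices given as nested lists."""
    n = len(a)
    c = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]   # basic operation: n^3 multiplications
    return c

assert matrix_multiplication([[1, 0], [0, 1]], [[1, 2], [3, 4]]) == [[1, 2], [3, 4]]
```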

Mathematical Analysis of Recursive Algorithms
General plan for analyzing the time efficiency of recursive algorithms:
 - Decide on a parameter (or parameters) n indicating the input's size.
 - Identify the algorithm's basic operation.
 - Check whether the number of times the basic operation is executed can vary on different inputs of the same size; if it can, the worst-case, average-case, and best-case efficiencies must be investigated separately.
 - Set up a recurrence relation, with an appropriate initial condition, for the number of times the basic operation is executed.
 - Solve the recurrence, or at least ascertain the order of growth of its solution.
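The classic illustration of this plan (our own example, not on the slide) is computing n! recursively: the basic operation is multiplication, and its count satisfies the recurrence M(n) = M(n - 1) + 1 with initial condition M(0) = 0, whose solution is M(n) = n. A Python sketch that verifies the count:

```python
def factorial(n):
    """Return (n!, number of multiplications performed)."""
    if n == 0:                 # initial condition: M(0) = 0
        return 1, 0
    f, m = factorial(n - 1)
    return n * f, m + 1        # recurrence: M(n) = M(n - 1) + 1

assert factorial(5) == (120, 5)   # M(n) = n
```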

End of Chapter 2