
1 Copyright © Cengage Learning. All rights reserved. CHAPTER 11 ANALYSIS OF ALGORITHM EFFICIENCY

2 Copyright © Cengage Learning. All rights reserved. Application: Analysis of Algorithm Efficiency I SECTION 11.3

3 The Sequential Search Algorithm

4 The object of a search algorithm is to hunt through an array of data in an attempt to find a particular item x. In a sequential search, x is compared to the first item in the array, then to the second, then to the third, and so on. The search is stopped if a match is found at any stage. On the other hand, if the entire array is processed without finding a match, then x is not in the array.
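
A minimal Python sketch of this procedure (illustrative only: the book's array is 1-indexed and names no function, and the comparison counter is added here to support the analysis that follows):

    def sequential_search(a, x):
        """Compare x to each item of a in turn; stop at the first match.
        Returns (index, comparisons), with index = None if x is absent."""
        comparisons = 0
        for i, item in enumerate(a):
            comparisons += 1               # one comparison between x and the current item
            if item == x:
                return i, comparisons      # match found: stop the search
        return None, comparisons           # entire array processed without a match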

5 The Sequential Search Algorithm An example of a sequential search is shown diagrammatically in Figure 11.3.1. [Figure 11.3.1: Sequential search of a[1], a[2], ..., a[7] for x, where x = a[5].]

6 Example 4 – Best- and Worst-Case Orders for Sequential Search Find best- and worst-case orders for the sequential search algorithm from among the set of power functions. Solution: Suppose the sequential search algorithm is applied to an input array a[1], a[2], ..., a[n] to find an item x. In the best case, the algorithm requires only one comparison between x and the items in a[1], a[2], ..., a[n].

7 Example 4 – Solution This occurs when x is the first item in the array. Thus in the best case, the sequential search algorithm is Θ(1). (Note that 1 = n^0, so the constant function is a power function.) In the worst case, however, the algorithm requires n comparisons. This occurs when x = a[n] or when x does not appear in the array at all. Thus in the worst case, the sequential search algorithm is Θ(n). cont’d
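
Using the hypothetical sequential_search sketch above, the two extreme cases can be checked directly on a seven-item array like the one in Figure 11.3.1:

    a = [7, 3, 9, 4, 5, 8, 2]           # n = 7 items
    print(sequential_search(a, 7))      # best case: x is the first item, 1 comparison
    print(sequential_search(a, 6))      # worst case: x is absent, n = 7 comparisons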

8 The Insertion Sort Algorithm

9 Insertion sort is an algorithm for arranging the items in an array into ascending order. Initially, the second item is compared to the first. If the second item is less than the first, their values are interchanged, and as a result the first two array items are in ascending order. The idea of the algorithm is gradually to lengthen the section of the array that is known to be in ascending order by inserting each subsequent array item into its correct position relative to the preceding ones. When the last item has been placed, the entire array is in ascending order.

10 The Insertion Sort Algorithm Figure 11.3.2 illustrates the action of step k of insertion sort on an array a[1], a[2], a[3], ..., a[n]. [Figure 11.3.2: Step k of insertion sort.]

11 The Insertion Sort Algorithm Two aspects of algorithm efficiency are important: the amount of time required to execute the algorithm and the amount of memory space needed when it is run. Occasionally, one algorithm may make more efficient use of time but less efficient use of memory space than another, forcing a trade-off based on the resources available to the user.

12 Time Efficiency of an Algorithm

13 Time Efficiency of an Algorithm How can the time efficiency of an algorithm be calculated? The answer depends on several factors. One is the size of the set of data that is input to the algorithm; for example, it takes longer for a sort algorithm to process 1,000,000 items than 100 items. Consequently, the execution time of an algorithm is generally expressed as a function of its input size.

14 Time Efficiency of an Algorithm Another factor that may affect the run time of an algorithm is the nature of the input data. For instance, a program that searches sequentially through a list of length n to find a data item requires only one step if the item is first on the list, but it uses n steps if the item is last on the list. Thus algorithms are frequently analyzed in terms of their “best case,” “worst case,” and “average case” performances for an input of size n.

15 Time Efficiency of an Algorithm Roughly speaking, the analysis of an algorithm for time efficiency begins by trying to count the number of elementary operations that must be performed when the algorithm is executed with an input of size n (in the best case, worst case, or average case). What is classified as an “elementary operation” may vary depending on the nature of the problem the algorithms being compared are designed to solve.

16 Time Efficiency of an Algorithm For instance, to compare two algorithms for evaluating a polynomial, the crucial issue is the number of additions and multiplications that are needed, whereas to compare two algorithms for searching a list to find a particular element, the important distinction is the number of comparisons that are required. As is common, we will classify the following as elementary operations: addition, subtraction, multiplication, division, and comparisons that are indicated explicitly in an if-statement using one of the relational symbols <, >, ≤, ≥, =, or ≠.
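
As an illustration of the first comparison (an assumed example, not one from the text): evaluating a degree-n polynomial term by term uses about 2n multiplications and n additions, while Horner's rule uses n of each. A short Python sketch that counts both kinds of elementary operation:

    def evaluate_naive(coeffs, x):
        """Evaluate c0 + c1*x + ... + cn*x**n term by term, counting operations."""
        adds = mults = 0
        total, power = coeffs[0], 1
        for c in coeffs[1:]:
            power *= x              # one multiplication to update the power of x
            total += c * power      # one multiplication and one addition per term
            mults += 2
            adds += 1
        return total, adds, mults

    def evaluate_horner(coeffs, x):
        """Evaluate the same polynomial by Horner's rule, counting operations."""
        adds = mults = 0
        total = coeffs[-1]
        for c in reversed(coeffs[:-1]):
            total = total * x + c   # one multiplication and one addition per step
            mults += 1
            adds += 1
        return total, adds, mults

    coeffs = [2, -1, 0, 3]                  # the polynomial 2 - x + 3x^3
    print(evaluate_naive(coeffs, 2))        # (24, 3, 6): same value, 6 multiplications
    print(evaluate_horner(coeffs, 2))       # (24, 3, 3): same value, 3 multiplications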

17 Time Efficiency of an Algorithm When algorithms are implemented in a particular programming language and run on a particular computer, some operations are executed faster than others, and, of course, there are differences in execution times from one machine to another. In certain practical situations these factors are taken into account when we decide which algorithm or which machine to use to solve a particular problem. In other cases, however, the machine is fixed, and rough estimates are all that we need to determine the clear superiority of one algorithm over another.

18 Time Efficiency of an Algorithm Since no elementary operation takes longer to execute than the slowest one, the time efficiency of an algorithm is approximately proportional to the number of elementary operations required to execute it.

19 Time Efficiency of an Algorithm Some of the orders most commonly used to describe algorithm efficiencies are shown in Table 11.3.1. As you see from the table, differences between the orders of various types of algorithms are more than astronomical. [Table 11.3.1: Time Comparisons of Some Algorithm Orders]

20 Time Efficiency of an Algorithm The time required for an algorithm of order 2^n to operate on a data set of size 100,000 is approximately 10^30,076 times the estimated 15 billion years since the universe began (according to one theory of cosmology). On the other hand, an algorithm of order log₂ n needs at most a fraction of a second to process the same data set. The next example looks at an algorithm segment that contains a nested loop.
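
A quick check of the orders of magnitude involved (the exact multiple 10^30,076 depends on the machine speed and the universe-age figure assumed in Table 11.3.1, which is not reproduced here):

    import math

    n = 100_000
    print(math.floor(n * math.log10(2)) + 1)   # 2**n has about 30,103 decimal digits
    print(math.log2(n))                        # log2 n is only about 16.6 for the same n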

21 Example 2 – An Order for an Algorithm with a Nested Loop Assume n is a positive integer and consider the following algorithm segment: a. Compute the actual number of additions, subtractions, and multiplications that must be performed when this algorithm segment is executed.
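
The algorithm segment itself appeared as an image on the slide and is not reproduced above. A sketch consistent with the counts used in the solution (an assumed reconstruction, not necessarily the book's exact statement): an outer loop on i from 1 to n, an inner loop on j from 1 to i, and a body that performs two additions, one subtraction, and one multiplication. In Python, for example:

    n = 10                              # any positive integer
    s = 0
    for i in range(1, n + 1):           # for i := 1 to n
        for j in range(1, i + 1):       # for j := 1 to i
            s = s + (i - j) + i * j     # two additions, one subtraction, one multiplication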

22 Example 2 – An Order for an Algorithm with a Nested Loop b. Use the theorem on polynomial orders to find an order for this algorithm segment. Solution: a. There are two additions, one multiplication, and one subtraction for each iteration of the inner loop, so the total number of additions, multiplications, and subtractions is four times the number of iterations of the inner loop. cont’d

23 Example 2 – Solution Now count the number of iterations of the inner loop. You can see this easily if you construct a table that shows the values of i and j for which the statements in the inner loop are executed. cont’d

24 Example 2 – Solution The values of i and j for which the inner loop executes are, column by column,
i: 1 | 2 2 | 3 3 3 | ... | n n ... n
j: 1 | 1 2 | 1 2 3 | ... | 1 2 ... n
There is one iteration for each column in the table. cont’d

25 Example 2 – Solution Hence the total number of iterations of the inner loop is 1 + 2 + 3 + ··· + n = n(n + 1)/2, and so the number of additions, subtractions, and multiplications is 4 · n(n + 1)/2 = 2n^2 + 2n. cont’d

26 Example 2 – Solution An alternative method for computing the number of columns of the table: Observe that the number of columns in the table is the same as the number of ways to place two ×’s in the n categories 1, 2, ..., n, where the locations of the ×’s indicate the values of i and j with j ≤ i. cont’d

27 Example 2 – Solution By Theorem 9.6.1 (the formula for the number of r-combinations with repetition allowed), this number is the number of 2-combinations with repetition from n categories, namely C(n + 1, 2) = n(n + 1)/2, which agrees with the sum computed above. cont’d

28 Example 2 – Solution Although, for this example, the alternative method is more complicated than the one preceding it, it is simpler when the number of loop nestings exceeds two. b. By the theorem on polynomial orders, 2n^2 + 2n is Θ(n^2), and so this algorithm segment is Θ(n^2). cont’d
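
A quick numerical check of the two formulas just derived, using the assumed loop bounds sketched earlier:

    def count_inner_iterations(n):
        """Count how many times the inner loop body runs for a given n."""
        count = 0
        for i in range(1, n + 1):
            for j in range(1, i + 1):
                count += 1
        return count

    for n in (1, 5, 10, 50):
        assert count_inner_iterations(n) == n * (n + 1) // 2        # iterations of the inner loop
        assert 4 * count_inner_iterations(n) == 2 * n**2 + 2 * n    # elementary operations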

29 Time Efficiency of an Algorithm The following is a formal algorithm for insertion sort. Algorithm 11.3.1 Insertion Sort [The aim of this algorithm is to take an array a[1], a[2], a[3], ..., a[n], where n ≥ 1, and reorder it. The output array is also denoted a[1], a[2], a[3], ..., a[n]. It has the same values as the input array, but they are in ascending order. In the kth step, a[1], a[2], a[3], ..., a[k – 1] is in ascending order, and a[k] is inserted into the correct position with respect to it.]

30 Time Efficiency of an Algorithm Input: n [a positive integer], a[1], a[2], a[3], ..., a[n] [an array of data items capable of being ordered] Algorithm Body: for k := 2 to n [Compare a[k] to previous items in the array a[1], a[2], a[3], ..., a[k – 1], starting from the largest and moving downward. Whenever a[k] is less than a preceding array item, increment the index of the preceding item to move it one position to the right. As soon as a[k] is greater than or equal to an array item, insert the value of a[k] to the right of that item. If a[k] is greater than or equal to a[k – 1], then leave the value of a[k] unchanged.]

31 Time Efficiency of an Algorithm Output: a[1], a[2], a[3], ..., a[n] [in ascending order]
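
A Python rendering of Algorithm 11.3.1, offered as a sketch built from the comments above and from the two comparisons named in Example 6 below (the j ≠ 0 test and the a[j] > x test), not as a verbatim transcription of the book's pseudocode. The list is padded at index 0 so that positions 1 through n match the book's indexing, and a counter is included so the comparison counts analyzed below can be checked:

    def insertion_sort(a):
        """Sort a[1..n] in place (a[0] is unused padding) and return the number
        of explicit comparisons, counted as two per attempted iteration of the
        while loop, following the convention used in Example 6."""
        n = len(a) - 1
        comparisons = 0
        for k in range(2, n + 1):
            x = a[k]                       # the item to be inserted
            j = k - 1
            while True:
                comparisons += 2           # the j != 0 test and the a[j] > x test
                if j != 0 and a[j] > x:
                    a[j + 1] = a[j]        # move the larger item one position to the right
                    j -= 1
                else:
                    break
            a[j + 1] = x                   # insert x just after the items that are <= x
        return comparisons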

32 Example 6 – Finding a Worst-Case Order for Insertion Sort a. What is the maximum number of comparisons that are performed when insertion sort is applied to the array a[1], a[2], a[3], ..., a[n]? b. Use the theorem on polynomial orders to find a worst-case order for insertion sort. Solution: a. In each attempted iteration of the while loop, two explicit comparisons are made: one to test whether j ≠ 0 and the other to test whether a[j] > x.

33 Example 6 – Solution During the time that a[k] is put into position relative to a[1], a[2], ..., a[k – 1], the maximum number of attempted iterations of the while loop is k. This happens when a[k] is less than every a[1], a[2], ..., a[k – 1]; on the kth attempted iteration, the condition of the while loop is not satisfied because j = 0. Thus the maximum number of comparisons for a given value of k is 2k. cont’d

34 Example 6 – Solution Because k goes from 2 to n, it follows that the maximum total number of comparisons occurs when the items in the array are in reverse order, and it equals 2 · 2 + 2 · 3 + ··· + 2 · n. cont’d

35 Example 6 – Solution 2 · 2 + 2 · 3 + ··· + 2 · n = 2(2 + 3 + ··· + n) = 2(n(n + 1)/2 − 1) = n^2 + n − 2. cont’d

36 Example 6 – Solution b. By the theorem on polynomial orders, n^2 + n − 2 is Θ(n^2), and so the insertion sort algorithm has worst-case order Θ(n^2). cont’d
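
Using the hypothetical insertion_sort sketch given after Algorithm 11.3.1, the worst-case count can be checked on a reverse-ordered array; for n = 6 the formula n^2 + n − 2 gives 40:

    n = 6
    a = [None] + list(range(n, 0, -1))   # 1-indexed array in reverse order: 6, 5, ..., 1
    print(insertion_sort(a))             # 40, which equals n**2 + n - 2
    print(a[1:])                         # [1, 2, 3, 4, 5, 6], now in ascending order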

