Algorithm Efficiency: Searching and Sorting Algorithms
- Recursion and searching problems
- Execution time of algorithms
- Algorithm growth rates
- Order-of-magnitude analysis and Big O notation
- The efficiency of searching algorithms
- Sorting algorithms and their efficiency
Recursion and Search Problems
Problem: look up the word 'vademecum' in a dictionary.
Solution 1: start at the beginning of the dictionary and look at every word in order until you find 'vademecum' (a sequential search).
Want a faster way to perform the search? Solution 2: open the dictionary to a point near its middle, glance at the page, and determine which "half" of the dictionary contains the desired word (a binary search).
Pseudocode: A binary search of a dictionary
    // Search a dictionary for a word by using a recursive binary search
    if (the dictionary contains only one page) {
        Scan the page for the word
    }
    else {
        Open the dictionary to a point near the middle
        Determine which half of the dictionary contains the word
        if (the word is in the first half of the dictionary) {
            Search the first half of the dictionary for the word
        }
        else {
            Search the second half of the dictionary for the word
        }  // end if
    }  // end if
Missing from our solution …
- How do you scan a page?
- How do you find the middle of the dictionary?
- Once the middle is found, how do you determine which half contains the word?
[Diagram: the recursive solution decomposes "search dictionary" into "search first half of dictionary" and "search second half of dictionary".]
A binary search uses a divide-and-conquer strategy
After you have divided the dictionary so many times that you are left with only a single page, the halving ceases. The problem is now small enough to solve directly by scanning the remaining page for the word; this is the base case. The strategy is one of divide and conquer: you solve the dictionary search problem by first dividing the dictionary into two halves and then conquering the appropriate half.
A more rigorous formulation
    search(theDictionary, aWord)
        if (theDictionary is one page in size) {
            Scan the page for aWord
        }
        else {
            Open theDictionary to a point near the middle
            Determine which half of theDictionary contains aWord
            if (aWord is in the first half of theDictionary) {
                search(first half of theDictionary, aWord)
            }
            else {
                search(second half of theDictionary, aWord)
            }  // end if
        }  // end if
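To make the idea concrete, here is a minimal Java sketch of the same recursive strategy applied to a sorted array rather than a dictionary; the array, the bounds parameters, and the method name binarySearch are illustrative assumptions, not part of the slides:

    // Recursive binary search of a sorted int array.
    // Returns the index of target, or -1 if it is absent.
    public static int binarySearch(int[] sorted, int target, int first, int last) {
        if (first > last) {
            return -1;                             // base case: empty range, item not found
        }
        int mid = first + (last - first) / 2;      // "open the dictionary near the middle"
        if (sorted[mid] == target) {
            return mid;                            // "scan the page": the item is here
        } else if (target < sorted[mid]) {
            return binarySearch(sorted, target, first, mid - 1);  // search first half
        } else {
            return binarySearch(sorted, target, mid + 1, last);   // search second half
        }
    }

For example, binarySearch(new int[]{2, 5, 8, 13}, 8, 0, 3) returns index 2.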
Determining the Efficiency of Algorithms
- Comparing algorithms is a topic that is central to computer science.
- The choice of algorithm for a given application often has a great impact: responsive word processors, automatic teller machines, video games, and life-support systems all depend on efficient algorithms.
- Consider efficiency when selecting algorithms.
- The analysis of algorithms is an area of computer science that provides tools for contrasting the efficiency of different solution methods.
- An analysis should focus on gross differences in the efficiency of algorithms that are likely to dominate the overall cost of a solution.
Determining the Efficiency of Algorithms
- Efficiency in both time and memory is important, but the emphasis here is on time.
- There are three difficulties with comparing programs instead of algorithms: How are the algorithms coded? What computer should you use? What data should the programs use?
- Algorithm analysis should therefore be independent of specific implementations, computers, and data: computer scientists employ mathematical techniques that analyze algorithms independently of all three.
- Begin the analysis by counting the number of significant operations in a particular solution.
The Execution time of Algorithms
An algorithm's execution time is related to the number of operations it requires. Counting an algorithm's operations, when possible, is a way to assess its efficiency.
Traversal of a linked list: displaying the contents of a linked list.

    Node curr = head;                         // 1 assignment
    while (curr != null) {                    // n + 1 comparisons
        System.out.println(curr.getItem());   // n writes
        curr = curr.getNext();                // n assignments
    }  // end while
The Execution time of Algorithms
If each assignment, comparison, and write operation requires a, c, and w time units respectively, the statements require (n + 1) * (a + c) + n * w time units. Displaying the data in a linked list of n nodes therefore requires time proportional to n. Intuitively, it takes longer to display, or traverse, a linked list of 100 items than a linked list of 10 items.
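Here is a self-contained version of the traversal; the Node class below is a hypothetical minimal stand-in for whatever list class the course provides:

    // Minimal singly linked list node (an illustrative stand-in).
    class Node {
        private Object item;
        private Node next;
        Node(Object item, Node next) { this.item = item; this.next = next; }
        Object getItem() { return item; }
        Node getNext()   { return next; }
    }

    public class Traverse {
        public static void main(String[] args) {
            Node head = new Node("a", new Node("b", new Node("c", null)));
            Node curr = head;                        // 1 assignment
            while (curr != null) {                   // n + 1 comparisons
                System.out.println(curr.getItem());  // n writes
                curr = curr.getNext();               // n assignments
            }  // end while
        }
    }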
The Execution time of Algorithms
Nested loops:

    for (i = 1 through n) {
        for (j = 1 through i) {
            for (k = 1 through 5) {
                Task T
            }  // end for
        }  // end for
    }  // end for

If task T requires t time units, the innermost loop on k requires 5*t time units, the loop on j requires 5*t*i time units, and the outermost loop on i requires

    sum over i = 1 to n of (5*t*i) = 5*t*(1 + 2 + ... + n) = 5*t*n*(n+1)/2 time units.
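A quick way to check the closed form is to count the executions of task T directly. This small demo (the class name and the choice n = 10 are arbitrary assumptions) compares the loop count against 5*n*(n+1)/2:

    // Counts how many times task T runs inside the nested loops
    // and checks the total against the closed form 5 * n * (n + 1) / 2.
    public class NestedLoops {
        public static void main(String[] args) {
            int n = 10;                  // problem size, arbitrary for the demo
            long count = 0;
            for (int i = 1; i <= n; i++) {
                for (int j = 1; j <= i; j++) {
                    for (int k = 1; k <= 5; k++) {
                        count++;         // stands in for one execution of task T
                    }
                }
            }
            System.out.println(count);                 // prints 275
            System.out.println(5L * n * (n + 1) / 2);  // also prints 275
        }
    }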
Algorithm Growth Rates
- Measure an algorithm's time requirement as a function of the problem size.
- Problem-size examples: the number of nodes in a linked list, the number of disks in the Towers of Hanoi problem, the size of an array, the number of items in a stack, and so on.
- Thus we reach conclusions such as: Algorithm A requires n^2/5 time units to solve a problem of size n; Algorithm B requires 5*n time units to solve a problem of size n.
- The time units in these statements must be the same before you can compare the efficiency of the two algorithms. Perhaps we should have written: Algorithm A requires n^2/5 seconds to solve a problem of size n.
Algorithm Growth Rates
But the preceding statement raises the following difficulties:
- On what computer does the algorithm require n^2/5 seconds?
- What implementation of the algorithm requires n^2/5 seconds?
- What data caused the algorithm to require n^2/5 seconds?
The most important thing to learn is how quickly the algorithm's time requirement grows as a function of the problem size. Statements such as "Algorithm A requires time proportional to n^2" and "Algorithm B requires time proportional to n" each express an algorithm's proportional time requirement, or growth rate, and enable you to compare algorithm A with algorithm B.
Algorithm Growth Rates
Compare algorithm efficiencies for large problems. Although you cannot determine the exact time requirement for either algorithm A or algorithm B from these statements, you can determine that for large problems B will require significantly less time than A. B's time requirement, as a function of the problem size n, increases at a slower rate than A's, because n increases at a slower rate than n^2. Even if B actually requires 5*n seconds while A requires n^2/5 seconds, B eventually requires significantly less time than A as n increases.
The time requirements as a function of the problem size n
Algorithm A requires n^2/5 seconds; algorithm B requires 5*n seconds.
[Graph: seconds versus n for both algorithms; the two curves cross at n = 25.]
Note: A's time requirement does not exceed B's until n exceeds 25!
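A few printed values make the crossover visible. This sketch simply tabulates the two cost functions from the slide (the class name and the sampled range are assumptions for the demo):

    // Tabulates A's n^2/5 seconds against B's 5*n seconds near the crossover.
    public class Crossover {
        public static void main(String[] args) {
            for (int n = 5; n <= 50; n += 5) {
                double a = n * n / 5.0;   // algorithm A
                double b = 5.0 * n;       // algorithm B
                System.out.printf("n=%2d  A=%6.1f  B=%6.1f%n", n, a, b);
            }
            // Both cost 125 seconds at n = 25; beyond that, A is slower.
        }
    }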
Order-of-Magnitude Analysis and Big O notation
If algorithm A requires time proportional to f(n), algorithm A is said to be order f(n), which is denoted O(f(n)). The function f(n) is called the algorithm's growth-rate function. Because the notation uses the capital letter O to denote order, it is called Big O notation. If a problem of size n requires time directly proportional to n, the problem is O(n), that is, order n. If the time requirement is directly proportional to n^2, the problem is O(n^2), and so on.
Order-of-Magnitude Analysis and Big O notation
Definition of the order of an algorithm: an algorithm A is order f(n), denoted O(f(n)), if constants k and n0 exist such that A requires no more than k*f(n) time units to solve a problem of size n ≥ n0. The requirement n ≥ n0 in the definition of O(f(n)) formalizes the notion of sufficiently large problems. In general, many values of k and n0 can satisfy the definition.
Definition of the Order of an Algorithm: Examples
Suppose that an algorithm requires n^2 - 3*n + 10 seconds to solve a problem of size n. If constants k and n0 exist such that k*n^2 > n^2 - 3*n + 10 for all n ≥ n0, the algorithm is order n^2. In fact, if k is 3 and n0 is 2, then 3*n^2 > n^2 - 3*n + 10 for all n ≥ 2.
[Graph: seconds versus n for 3*n^2 and n^2 - 3*n + 10; when n ≥ 2, 3*n^2 exceeds n^2 - 3*n + 10.]
Thus the algorithm requires no more than k*n^2 time units for n ≥ n0, and so is O(n^2).
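The witness constants are easy to spot-check by brute force. A small sketch (the class name and the cutoff of 1,000,000 are arbitrary assumptions for the demo):

    // Spot-checks the Big O witness from the example: with k = 3 and n0 = 2,
    // k*n^2 must be at least n^2 - 3*n + 10 for every n >= n0.
    public class BigOCheck {
        public static void main(String[] args) {
            for (long n = 2; n <= 1_000_000; n++) {
                long bound  = 3 * n * n;           // k * f(n)
                long actual = n * n - 3 * n + 10;  // the algorithm's actual time
                if (bound < actual) {
                    System.out.println("bound fails at n = " + n);
                    return;
                }
            }
            System.out.println("3*n^2 >= n^2 - 3*n + 10 for all n checked");
        }
    }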
Definition of the Order of an Algorithm: Examples
We found that displaying a linked list's first n items requires (n+1)*(a+c) + n*w time units. Since 2*n ≥ n + 1 for n ≥ 1, we have (2*a + 2*c + w)*n ≥ (n+1)*(a+c) + n*w for n ≥ 1. Thus this task is O(n); here k is 2*a + 2*c + w, and n0 is 1.
Order of growth of some common functions
O(1) < O(log2 n) < O(n) < O(n*log2 n) < O(n^2) < O(n^3) < O(2^n)

    Function  | n = 10 | 100    | 1,000  | 10,000   | 100,000   | 1,000,000
    ----------+--------+--------+--------+----------+-----------+-----------
    1         | 1      | 1      | 1      | 1        | 1         | 1
    log2 n    | 3      | 6      | 9      | 13       | 16        | 19
    n         | 10     | 10^2   | 10^3   | 10^4     | 10^5      | 10^6
    n*log2 n  | 30     | 664    | 9,965  | 10^5     | 10^6      | 10^7
    n^2       | 10^2   | 10^4   | 10^6   | 10^8     | 10^10     | 10^12
    n^3       | 10^3   | 10^6   | 10^9   | 10^12    | 10^15     | 10^18
    2^n       | 10^3   | 10^30  | 10^301 | 10^3,010 | 10^30,103 | 10^301,030
Graphical comparison of growth-rate functions (f(n) = 1 is omitted)
[Graph: value of each growth-rate function versus n, for 1 ≤ n ≤ 20: log2 n, n, n*log2 n, n^2, n^3, and 2^n, from slowest- to fastest-growing.]
Order of Growth of some Common Functions
- The table demonstrates the relative speed at which the values of these functions grow; the growth-rate functions are also depicted graphically above.
- If algorithm A requires time proportional to function f and algorithm B requires time proportional to a slower-growing function g, then B will be significantly more efficient than A for large enough problems.
- For large enough problems, the proportional growth rate dominates all other factors in determining an algorithm's efficiency.
Order of Growth of some Common Functions
Some properties of growth-rate functions (a worked instance follows below):
- You can ignore low-order terms in an algorithm's growth-rate function. For example, if an algorithm is O(n^3 + 4*n^2 + 3), it is also O(n^3).
- You can ignore a multiplicative constant in the high-order term of an algorithm's growth-rate function. For example, if an algorithm is O(5*n^3), it is also O(n^3).
- O(f(n)) + O(g(n)) = O(f(n) + g(n)). For example, if an algorithm is O(n^2) + O(n), it is also O(n^2 + n), which is simply O(n^2).
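As a worked instance of the first two properties (a sketch; k = 8 and n0 = 1 are just one witness pair among many):

    n^3 + 4*n^2 + 3  ≤  n^3 + 4*n^3 + 3*n^3  =  8*n^3   for all n ≥ 1,

so by the definition of order, with k = 8 and n0 = 1, an algorithm that is O(n^3 + 4*n^2 + 3) is also O(n^3); likewise, taking k = 5 shows that an algorithm that is O(5*n^3) is also O(n^3).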
Worst-case and Average-case Analyses
- Algorithms can require different times to solve different problems of the same size. For example, the time that an algorithm requires to search n items might depend on the nature of the items.
- The maximum amount of time that an algorithm can require to solve a problem of size n is its worst case.
- An average-case analysis attempts to determine the average amount of time that an algorithm requires to solve problems of size n.
- Worst-case analysis is easier to calculate and is thus more common.
The efficiency of Searching Algorithms
Order-of-magnitude analysis: the efficiency of sequential search and binary search of an array.
Sequential search: to search an array of n items, you look at each item in turn, starting with the first one, until either you find the desired item or you reach the end of the data collection (a sketch follows below).
- Worst case: O(n)
- Average case: O(n)
- Best case: O(1)
Does the algorithm's order depend on whether or not the initial data are sorted?
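A minimal Java sketch of sequential search; the method name and signature are assumptions for illustration:

    // Sequential search of an (unsorted) array: examine items in turn.
    // Returns the index of target, or -1 if it is absent.
    public static int sequentialSearch(int[] items, int target) {
        for (int i = 0; i < items.length; i++) {
            if (items[i] == target) {
                return i;    // best case: found at i = 0, so O(1)
            }
        }
        return -1;           // worst case: n comparisons, so O(n)
    }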
The efficiency of Searching Algorithms
Binary search: searches a sorted array for a particular item by repeatedly dividing the array in half; the algorithm searches successively smaller subarrays, each approximately one-half the size of the array previously searched. At each division the algorithm makes a comparison, so the number of comparisons equals the number of times the algorithm divides the array in half.
The efficiency of Searching Algorithms – Binary Search
Suppose that n = 2^k for some k. The search requires the following steps:
- Inspect the middle item of an array of size n.
- Inspect the middle item of an array of size n/2.
- Inspect the middle item of an array of size n/2^2, and so on, until only one item remains; you will have performed k divisions (n/2^k = 1).
In the worst case, the algorithm performs k divisions and, therefore, k comparisons. Because n = 2^k, k = log2 n. Thus the algorithm is O(log2 n) in the worst case when n = 2^k.
The efficiency of Searching Algorithms – Binary Search
What if n is not a power of 2? Find the smallest k such that 2^(k-1) < n < 2^k. The algorithm still requires at most k divisions to obtain a subarray with one item. It follows that
    k - 1 < log2 n < k
    k < 1 + log2 n < k + 1
    k = 1 + log2 n, rounded down
Thus the algorithm is still O(log2 n) in the worst case when n ≠ 2^k.
How does binary search compare to sequential search? For example, log2 1,000,000 rounded down is 19, so searching one million sorted items can require one million comparisons with sequential search but at most 20 with binary search (see the sketch below)!
Note: maintaining the array in sorted order requires an overhead cost, which can be substantial!
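An iterative Java sketch that counts the halving steps illustrates the claim; on a sorted array of 1,000,000 items it never performs more than 20 such steps. The class name, array contents, and probe value are arbitrary assumptions for the demo:

    // Iterative binary search that counts its halving steps.
    public class BinarySearchCount {
        public static void main(String[] args) {
            int n = 1_000_000;
            int[] sorted = new int[n];
            for (int i = 0; i < n; i++) sorted[i] = i;  // already sorted

            int target = 999_999;                       // near-worst-case probe
            int first = 0, last = n - 1, steps = 0;
            while (first <= last) {
                steps++;                                // one division/comparison round
                int mid = first + (last - first) / 2;
                if (sorted[mid] == target) {
                    break;
                } else if (target < sorted[mid]) {
                    last = mid - 1;                     // keep the first half
                } else {
                    first = mid + 1;                    // keep the second half
                }
            }
            System.out.println("steps = " + steps);     // at most 20
        }
    }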