# ALG0183 Algorithms & Data Structures, Lecture 3: Algorithm Analysis (8/25/2009) by Dr Andy Brooks. Weiss Chapter 5; Sahni Chapter 2.



## Running time

The running time of a piece of software depends on the speed of the machine, the quality of the compiler, the quality of the program, and the algorithms used (and possibly on the quality of any network and caching mechanisms). Sorting 100,000 records takes longer than sorting 10 records: the running time of an algorithm is a function of the size of its input. Different algorithms that solve the same problem can vary dramatically in running time. The most efficient algorithms are those whose running times grow linearly with the size of the input.
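To make "running time is a function of input size" concrete without depending on machine speed, we can count operations instead of measuring wall-clock time. This is a minimal sketch (not from the slides); the function name is hypothetical:

```python
# Count comparisons in a linear scan: the work grows in direct
# proportion to the input size N, independent of machine speed.
def count_scan_comparisons(n):
    """Scan a list of n items for the smallest, counting one comparison per element."""
    data = list(range(n))
    comparisons = 0
    smallest = data[0]
    for x in data:
        comparisons += 1        # one comparison per element scanned
        if x < smallest:
            smallest = x
    return comparisons

print(count_scan_comparisons(10))      # 10
print(count_scan_comparisons(100000))  # 100000
```

Scaling the input by 10,000x scales the comparison count by exactly 10,000x, which is what "linear" means here.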

Figure 5.1: Running times for small inputs. ©Addison Wesley

Figure 5.1 shows that small values of N are generally not important. For N = 20, the example algorithms all terminate within 5 ms ("less than a blink of the eye"). For N = 25, they all terminate within 10 ms. Notice how the quadratic algorithm beats the O(N log N) algorithm at small N, but loses its advantage once N > 50.

Constants in growth functions can play a significant part when N is small. For example, the function T(N) = N + 2,500 is larger than N² for N up to 50. Weiss: "Consequently, when input sizes are very small, a good rule of thumb is to use the simplest algorithm." A complex algorithm is more likely to be incorrectly implemented.
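The crossover point can be found directly. A minimal sketch (the function name is hypothetical) that searches for the smallest N at which N² overtakes T(N) = N + 2,500:

```python
# Find the smallest N where N**2 exceeds T(N) = N + 2500.
# Below this N, the "slower-growing" quadratic is actually larger
# because of the constant 2500.
def crossover():
    n = 1
    while n + 2500 >= n * n:    # T(N) still at least as large as N**2
        n += 1
    return n

print(crossover())  # 51, so T(N) >= N**2 for all N up to 50
```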

Figure 5.2: Running times for moderate inputs. ©Addison Wesley

Figure 5.2 shows that for an input size of 10,000 records, an O(N log N) algorithm takes roughly 10 times as long as a linear, O(N), algorithm. The actual time difference depends on the constants in the growth functions: log₂(10,000) ≈ 13.2877 and log₂(1,000,000) ≈ 19.9316. Weiss: "Quadratic algorithms are almost always impractical when the input size is more than a few thousand, and cubic algorithms are impractical for input sizes as small as a few hundred." Simple sorting algorithms such as bubble sort are quadratic. Weiss: "... before we waste effort attempting to optimize code, we need to optimize the algorithm." Can a quadratic, O(N²), algorithm be made sub-quadratic?
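Ignoring constants, the slowdown of an O(N log N) algorithm relative to an O(N) one is just the factor log₂(N), which we can check directly:

```python
import math

# N*log2(N) / N = log2(N): the per-element overhead of an
# O(N log N) algorithm relative to a linear one, constants aside.
for n in (10_000, 1_000_000):
    print(n, round(math.log2(n), 4))
# 10000 -> 13.2877, 1000000 -> 19.9316, matching the slide's values
```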

## Dominant terms in growth functions

A cubic function is a function whose dominant term is some constant times N³; 10N³ + N² + 40N + 80 is a cubic function (ignoring the special case N = 1). For N = 1,000 the function has the value 10,001,040,080, of which 10,000,000,000 is due to the 10N³ term; the lower terms contribute only about 0.01 percent of the total. For sufficiently large N, the value of a growth function is determined by its dominant term. Big-Oh notation is used to describe the dominant term in a growth function.
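The slide's arithmetic can be verified with a short sketch evaluating the cubic at N = 1,000 and the share contributed by the lower-order terms:

```python
# Evaluate 10N**3 + N**2 + 40N + 80 at N = 1000 and measure how
# little the non-dominant terms contribute.
def f(n):
    return 10 * n**3 + n**2 + 40 * n + 80

n = 1000
total = f(n)
dominant = 10 * n**3
lower_share = 100 * (total - dominant) / total  # percent from lower terms

print(total)                  # 10001040080
print(round(lower_share, 2))  # 0.01 (percent)
```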

Figure 5.3: Functions in order of increasing growth rates. ©Addison Wesley

## Examples of algorithm running times: minimum element in an array

Given an array of N items, find the smallest item. A variable min stores the minimum; min is initialized to the first item, and a sequential scan through the array updates min when appropriate. A fixed amount of work is performed for each element in the array, so the running time of the algorithm is linear, O(N).
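A minimal sketch of the algorithm as described on the slide:

```python
# Find the minimum element: initialize to the first item, then do a
# sequential scan, updating when a smaller item is seen. O(N).
def find_min(items):
    """Return the smallest item; items must be non-empty."""
    smallest = items[0]          # min starts as the first item
    for x in items[1:]:          # sequential scan over the rest
        if x < smallest:         # update min when appropriate
            smallest = x
    return smallest

print(find_min([7, 3, 9, 1, 4]))  # 1
```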

## Sequential search for an element in an array

Given an array of N items, find the position of a specified item. In the best case, the item searched for is the first item in the array: O(1). In the worst case, the item is not present in the array: O(N).
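A minimal sketch of sequential search, returning the item's index or -1 when it is absent:

```python
# Sequential search: scan left to right until the item is found.
def seq_search(a, x):
    for i, item in enumerate(a):
        if item == x:
            return i             # best case: x is at index 0, O(1)
    return -1                    # worst case: x is not present, O(N)

print(seq_search([5, 8, 2, 9], 2))  # 2
print(seq_search([5, 8, 2, 9], 7))  # -1
```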

Figure 2.9: Best-case step count for sequential search, by Sartaj Sahni. ©McGraw-Hill. Here s/e is the number of steps per execution of a statement, and frequency is how often each statement is executed. The time complexity is estimated as 4 steps.

Figure 2.10: Worst-case step count for sequential search, by Sartaj Sahni. ©McGraw-Hill. Again, s/e is the number of steps per execution of a statement, and frequency is how often each statement is executed. The time complexity is estimated as n + 3 steps. Since i has to be incremented to n before the loop terminates, the frequency of the for statement is n + 1.
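The n + 1 frequency of the for statement can be checked by instrumenting the loop. This is a hypothetical instrumentation, not Sahni's code; the function name is an assumption:

```python
# Count how often the loop test runs during an unsuccessful search.
# The test runs once per element plus one final failing test: n + 1,
# matching the frequency of the for statement in Figure 2.10.
def unsuccessful_search_tests(a, x):
    tests = 0
    i = 0
    while True:
        tests += 1               # one loop test per iteration
        if i >= len(a):          # the (n+1)th test fails and exits
            break
        if a[i] == x:
            return tests
        i += 1
    return tests

print(unsuccessful_search_tests([1, 2, 3, 4], 99))  # 5, i.e. n + 1 with n = 4
```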

Figure 2.11: Typical-case (x = a[j]) step count for sequential search, by Sartaj Sahni. ©McGraw-Hill. As before, s/e is the number of steps per execution of a statement, and frequency is how often each statement is executed. The time complexity is estimated as j + 4 steps.

## Average-case analysis for sequential search (Sahni)

Assume all the values in the array are distinct, and that x, the item being searched for, has an equal probability of being any of the values. The average step count for a successful search is then the sum of the step counts for the n possible searches divided by n: slightly more than half the step count for an unsuccessful search.
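Averaging the typical-case counts j + 4 from Figure 2.11 over all n equally likely successful searches gives (n - 1)/2 + 4, which for large n is a little over half the worst-case n + 3. A sketch (function name hypothetical):

```python
# Average the j + 4 step counts over all n successful searches
# (x equally likely to be at any index j = 0 .. n-1).
def average_successful_steps(n):
    return sum(j + 4 for j in range(n)) / n

n = 1000
print(average_successful_steps(n))  # 503.5
print((n - 1) / 2 + 4)              # 503.5, the closed form
print(n + 3)                        # 1003, the unsuccessful (worst-case) count
```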

## Closest pair of points

Given n points in the x-y plane, which pair of points is closest together? A brute-force solution calculates the distance between every pair of points, updating the current minimum distance as necessary. Each of the n points can be paired with n - 1 others, so the number of unique pairs is n(n-1)/2, and the brute-force solution takes quadratic time, O(N²). (Other algorithms exist which take less than quadratic time.)

In general, the time complexity of a loop is O(N), that of a loop within a loop is O(N²), and that of a loop within a loop within a loop is O(N³). Pseudocode for the brute-force closest-pair algorithm (http://en.wikipedia.org/wiki/Closest_pair_of_points_problem):

```
minDist = infinity
for each p in P:
    for each q in P:
        if p ≠ q and dist(p, q) < minDist:
            minDist = dist(p, q)
            closestPair = (p, q)
return closestPair
```
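A runnable sketch of the pseudocode above (iterating each unordered pair once, so exactly n(n-1)/2 distance calculations rather than the pseudocode's n² with a p ≠ q guard):

```python
import math

# Brute-force closest pair: examine every unordered pair, O(N**2).
def closest_pair(points):
    min_dist = math.inf
    best = None
    for i, p in enumerate(points):
        for q in points[i + 1:]:     # each pair once: n(n-1)/2 pairs
            d = math.dist(p, q)
            if d < min_dist:
                min_dist = d
                best = (p, q)
    return best

print(closest_pair([(0, 0), (5, 5), (1, 1), (9, 0)]))  # ((0, 0), (1, 1))
```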

## Collinear points

Given n points in the x-y plane, do any three form a straight line? A brute-force solution examines all groups of three points; the number of different groups is n(n-1)(n-2)/6, so the brute-force solution takes cubic time, O(N³). (Other algorithms exist which take less time.)
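A sketch of the brute-force approach, using the cross-product test for collinearity (this particular test is my choice, not specified on the slide):

```python
from itertools import combinations

# Brute-force collinearity: check all n(n-1)(n-2)/6 triples, O(N**3).
# Three points are collinear exactly when the cross product of the
# vectors p1->p2 and p1->p3 is zero.
def has_collinear_triple(points):
    for (x1, y1), (x2, y2), (x3, y3) in combinations(points, 3):
        if (x2 - x1) * (y3 - y1) == (x3 - x1) * (y2 - y1):
            return True
    return False

print(has_collinear_triple([(0, 0), (1, 1), (2, 2), (5, 0)]))  # True
print(has_collinear_triple([(0, 0), (1, 0), (0, 1), (2, 3)]))  # False
```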

