
Lecture 4: Running Time (Mon 29 Sep 2014)


1 Lecture 4: Running Time (title slide)

2 Running Time
Performance analysis techniques so far:
- Experimental measurements
- Cost models: counting executions of operations or lines of code, under some assumptions (only some operations count; each counted operation costs 1 time unit)
- Tilde notation: T(n) ~ (5/3)n²
Today: Θ-notation. Examples: insertionSort and binarySearch.

3 InsertionSort – Pseudocode
Algorithm (in pseudocode):
1. for (j = 1; j < A.length; j++) {
2.   // shift A[j] into the sorted A[0..j-1]
3.   i = j-1
4.   while i >= 0 and A[i] > A[i+1] {
5.     swap A[i], A[i+1]
6.     i = i-1
7. }}
8. return A
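For reference, here is a minimal runnable Java version of the same algorithm (my own direct translation of the pseudocode; the class name and test array are illustrative, not from the slides):

    public class InsertionSort {
        // Sorts A in place: swap A[j] down into the sorted prefix A[0..j-1].
        static int[] insertionSort(int[] A) {
            for (int j = 1; j < A.length; j++) {
                int i = j - 1;
                while (i >= 0 && A[i] > A[i + 1]) {
                    int tmp = A[i];          // swap A[i], A[i+1]
                    A[i] = A[i + 1];
                    A[i + 1] = tmp;
                    i = i - 1;
                }
            }
            return A;
        }

        public static void main(String[] args) {
            int[] a = {5, 2, 4, 6, 1, 3};
            System.out.println(java.util.Arrays.toString(insertionSort(a)));  // [1, 2, 3, 4, 5, 6]
        }
    }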

4 Worst Case
                                                cost   no. of times
1. for (j = 1; j < A.length; j++) {             1      n
2.   // shift A[j] into the sorted A[0..j-1]
3.   i = j-1                                    1      n-1
4.   while i >= 0 and A[i] > A[i+1] {           1      2+…+n
5.     swap A[i], A[i+1]                        1      1+…+(n-1)
6.     i = i-1                                  1      1+…+(n-1)
7. }}
8. return A                                     1      1

In the worst case the array is in reverse sorted order.
T(n) = n + (n-1) + (2+…+n) + 2·(1+…+(n-1)) + 1
     = n + (n-1) + (n(n+1)/2 - 1) + 2·(n(n-1)/2) + 1
     = (3/2)n² + (3/2)n - 1
We also saw the best- and average-case analyses.
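As a sanity check, here is a small instrumented sketch (my own illustration, not from the slides) that counts executions of lines 1, 3, 4, 5, 6 and 8 on a reverse-sorted input, exactly as in the cost model above, and compares the total against (3/2)n² + (3/2)n - 1:

    public class CountOps {
        static long count;

        static void insertionSortCounting(int[] A) {
            int j = 1;
            while (true) {
                count++;                               // line 1: for-header test (runs n times)
                if (!(j < A.length)) break;
                int i = j - 1; count++;                // line 3
                while (true) {
                    count++;                           // line 4: while-header test
                    if (!(i >= 0 && A[i] > A[i + 1])) break;
                    int t = A[i]; A[i] = A[i + 1]; A[i + 1] = t;
                    count++;                           // line 5: swap
                    i = i - 1; count++;                // line 6
                }
                j++;                                   // increment folded into line 1's cost
            }
            count++;                                   // line 8: return
        }

        public static void main(String[] args) {
            for (int n : new int[]{3, 10, 100}) {
                int[] a = new int[n];
                for (int k = 0; k < n; k++) a[k] = n - k;    // reverse sorted = worst case
                count = 0;
                insertionSortCounting(a);
                long predicted = (3L * n * n + 3L * n - 2) / 2;  // (3/2)n² + (3/2)n - 1
                System.out.println("n=" + n + ": counted=" + count + ", predicted=" + predicted);
            }
        }
    }

The counted and predicted totals agree, confirming the closed form derived above.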

5 How fast is T(n) = (3/2)n² + (3/2)n - 1?
Fast computer vs. slow computer

6 Fast Computer vs. Smart Programmer
T1(n) = (3/2)n² + (3/2)n - 1  vs.  T2(n) = (3/2)n - 1

7 Fast Computer vs. Smart Programmer (rematch!)
T1(n) = (3/2)n² + (3/2)n - 1  vs.  T2(n) = (3/2)n - 1

8 A smart programmer with a better algorithm always beats a fast computer with a worse algorithm, for sufficiently large inputs.

9 At large enough input sizes, only the rate of growth of an algorithm's running time matters. That is why we dropped the lower-order terms with the tilde notation: when T(n) = (3/2)n² + (3/2)n - 1, we write T(n) ~ (3/2)n².
However, to state the approximation (3/2)n² we first had to derive the complete polynomial (3/2)n² + (3/2)n - 1: the coefficient 3/2 cannot be determined without the full analysis.

10 Simpler approach
It turns out that even the coefficient of the highest-order term is not all that important for large enough inputs. This leads us to the asymptotic running time:
T(n) = (3/2)n² + (3/2)n - 1 = Θ(n²)
We ignore everything except the most significant growth function. Even with such a simplification, we can compare algorithms and discover the best ones. Constants do sometimes matter for the real-world performance of algorithms, but this is rare.

11 Important Growth Functions
From better to worse:
Function f    Name
1             constant
log n         logarithmic
n             linear
n·log n
n²            quadratic
n³            cubic
…
2ⁿ            exponential
…

12 Important Growth Functions
From better to worse:
Function f    Name
1             constant
log n         logarithmic
n             linear
n·log n
n²            quadratic
n³            cubic
…
2ⁿ            exponential
…
The first 4 are practically fast (most commercial programs run in such Θ-time).
Anything less than exponential is theoretically fast (cf. P vs NP).

13 Important Growth Functions
From better to worse:
Function f    Name          Problem size solved in minutes (today)
1             constant      any
log n         logarithmic   any
n             linear        billions
n·log n                     hundreds of millions
n²            quadratic     tens of thousands
n³            cubic         thousands
…
2ⁿ            exponential   about 100
…

14 Growth Functions
From better to worse:
Function f    Name          Example code of Θ(f):
1             constant      swap A[i], A[j]
log n         logarithmic   j=n; while(j>0){ …; j=j/2 }
n             linear        for(j=1; j<n; j++){ … }
n·log n                     [best sorting algorithms]
n²            quadratic     for(j=1; j<n; j++){ for(i=1; i<j; i++){ … }}
n³            cubic         [3 nested for-loops]
…
2ⁿ            exponential   [brute-force password breaking tries all combinations]
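To make two of these patterns concrete, here is a small runnable Java sketch (my own illustration; the input sizes are arbitrary) that counts the iterations of the logarithmic halving loop and the quadratic nested loop from the table:

    public class GrowthDemo {
        public static void main(String[] args) {
            // Θ(log n): halve j until it reaches 0
            int n = 1_000_000;
            int logSteps = 0;
            for (int j = n; j > 0; j = j / 2) logSteps++;
            System.out.println("log-loop steps:       " + logSteps);   // 20, about log2(1e6)

            // Θ(n²): nested loops, the inner bounded by the outer (smaller size here)
            int m = 1000;
            long quadSteps = 0;
            for (int j = 1; j < m; j++)
                for (int i = 1; i < j; i++)
                    quadSteps++;
            System.out.println("quadratic-loop steps: " + quadSteps);  // 498501, about m²/2
        }
    }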

15 Asymptotic Running Time Θ(f(n))
It has useful operations:
Θ(n) + Θ(n²) = Θ(n²)
Θ(n) × Θ(n²) = Θ(n³)
Θ(n) × Θ(log n) = Θ(n·log n)
In general:
Θ(f(n)) + Θ(g(n)) = Θ(g(n))   if Θ(f(n)) ≤ Θ(g(n))
Θ(f(n)) × Θ(g(n)) = Θ(f(n) × g(n))
If f(n) = Θ(g(n)) and g(n) = Θ(h(n)), then f(n) = Θ(h(n))   (transitivity)

16 InsertionSort – asymptotic worst-case analysis
                                                asymptotic cost (LoC model)
1. for j = 1 to A.length {
2.   // shift A[j] into the sorted A[0..j-1]
3.   i = j-1
4.   while i >= 0 and A[i] > A[i+1] {
5.     swap A[i], A[i+1]
6.     i = i-1
7. }}
8. return A

T(n) = ?

17 InsertionSort – asymptotic worst-case analysis
                                                asymptotic cost (LoC model)
1. for j = 1 to A.length {                      Θ(n)
2.   // shift A[j] into the sorted A[0..j-1]
3.   i = j-1                                    Θ(n)
4.   while i >= 0 and A[i] > A[i+1] {           Θ(n²)
5.     swap A[i], A[i+1]                        Θ(n²)
6.     i = i-1                                  Θ(n²)
7. }}
8. return A                                     Θ(1)

T(n) = Θ(n) + Θ(n) + Θ(n²) + Θ(n²) + Θ(n²) + Θ(1) = Θ(n²)

18 More Asymptotic Notation: O, Ω
When we give exact bounds we write:  T(n) = Θ(f(n))
When we give upper bounds we write:  T(n) ≤ Θ(f(n)), or alternatively T(n) = O(f(n))
When we give lower bounds we write:  T(n) ≥ Θ(f(n)), or alternatively T(n) = Ω(f(n))
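For reference, the standard formal definitions behind this notation (not spelled out on the slide) are, in LaTeX:

    T(n) = O(f(n))      \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\; T(n) \le c \cdot f(n)
    T(n) = \Omega(f(n)) \iff \exists\, c > 0,\ n_0 : \forall n \ge n_0,\; T(n) \ge c \cdot f(n)
    T(n) = \Theta(f(n)) \iff T(n) = O(f(n)) \text{ and } T(n) = \Omega(f(n))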

19 Examples: Θ, O, Ω
3n²·log n + n² + 4n - 2 = ?
3n²·log n + n² + 4n - 2 = O(n²·log n)
3n²·log n + n² + 4n - 2 = O(n³)
3n²·log n + n² + 4n - 2 = O(2ⁿ)
3n²·log n + n² + 4n - 2 ≠ O(n²)

20 Examples: Θ, O, Ω
3n²·log n + n² + 4n - 2 = Θ(n²·log n)
3n²·log n + n² + 4n - 2 = Ω(n²·log n)
3n²·log n + n² + 4n - 2 = Ω(n²)
3n²·log n + n² + 4n - 2 = Ω(1)
3n²·log n + n² + 4n - 2 ≠ Ω(n³·log n)

21 Examples (comparisons)
Θ(n·log n) =?= Θ(n)

22 Examples (comparisons)
Θ(n·log n) > Θ(n)
Θ(n² + 3n - 1) =?= Θ(n²)

23 Examples (comparisons)
Θ(n·log n) > Θ(n)
Θ(n² + 3n - 1) = Θ(n²)

24 Examples (comparisons)
Θ(n·log n) > Θ(n)
Θ(n² + 3n - 1) = Θ(n²)
Θ(1) =?= Θ(10)
Θ(5n) =?= Θ(n²)
Θ(n³ + log n) =?= Θ(100n³ + log n)
Exercise: write all of the above in order, putting = or < between them.

25 Principle
Θ bounds are the most precise asymptotic performance bounds we can give. O/Ω bounds may be imprecise.

26 One more example: BinarySearch
Specification:
Input: array a[0..n-1], integer key
Input property: a is sorted
Output: integer pos
Output property: if key == a[i] then pos == i
Trivial?
- First binary search published in 1946
- First bug-free binary search published in 1962
- Bug in Java's Arrays.binarySearch() found in 2006

27 BinarySearch – pseudocode
1. lo = 0, hi = a.length-1
2. while (lo <= hi) {
3.   int mid = lo + (hi - lo) / 2
4.   if (key < a[mid]) then hi = mid - 1
5.   else if (key > a[mid]) then lo = mid + 1
6.   else return mid
7. }
8. return -1
Note: here array indices start from 0 and go up to length-1.
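A runnable Java version is sketched below (my own translation of the pseudocode; the class name and test values are illustrative). Note the midpoint on line 3: the 2006 bug in Java's Arrays.binarySearch() was an integer overflow in the naive (lo + hi) / 2 on very large arrays, which the form lo + (hi - lo) / 2 avoids.

    public class BinarySearch {
        // Returns an index of key in the sorted array a, or -1 if key is absent.
        static int binarySearch(int[] a, int key) {
            int lo = 0, hi = a.length - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;         // overflow-safe midpoint
                if      (key < a[mid]) hi = mid - 1;  // key can only be in the left half
                else if (key > a[mid]) lo = mid + 1;  // key can only be in the right half
                else return mid;                      // found
            }
            return -1;                                // not found
        }

        public static void main(String[] args) {
            int[] a = {1, 3, 5, 7, 9, 11};
            System.out.println(binarySearch(a, 7));   // 3
            System.out.println(binarySearch(a, 4));   // -1
        }
    }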

28 BinarySearch – Loop Invariant
1. lo = 0, hi = a.length-1
2. while (lo <= hi) {
3.   int mid = lo + (hi - lo) / 2
4.   if (key < a[mid]) then hi = mid - 1
5.   else if (key > a[mid]) then lo = mid + 1
6.   else return mid
7. }
8. return -1
Note: here array indices start from 0 and go up to length-1.
Invariant: if key is in a[0..n-1], then key is in a[lo..hi]

29 BinarySearch – Asymptotic Running Time
                                                 Asymptotic cost
1. lo = 0, hi = a.length-1
2. while (lo <= hi) {
3.   int mid = lo + (hi - lo) / 2
4.   if (key < a[mid]) then hi = mid - 1
5.   else if (key > a[mid]) then lo = mid + 1
6.   else return mid
7. }
8. return -1
Note: array indices start from 0 and go up to length-1.

30 BinarySearch – Asymptotic Running Time
                                                 Asymptotic cost
1. lo = 0, hi = a.length-1                       Θ(1)
2. while (lo <= hi) {                            Θ(log n)
3.   int mid = lo + (hi - lo) / 2                Θ(log n)
4.   if (key < a[mid]) then hi = mid - 1         Θ(log n)
5.   else if (key > a[mid]) then lo = mid + 1    Θ(log n)
6.   else return mid                             Θ(log n)
7. }
8. return -1                                     Θ(1)
Note: array indices start from 0 and go up to length-1.
T(n) = Θ(log n)
When a loop throws away half of the remaining input at each iteration, it performs Θ(log n) iterations!
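A one-line justification of that last claim (my own note, in LaTeX): after k iterations the remaining range a[lo..hi] has length at most n / 2^k, and the loop stops once the range is empty, so the number of iterations k satisfies

    \frac{n}{2^k} < 1 \;\iff\; 2^k > n \;\iff\; k > \log_2 n

i.e. the loop runs at most about \log_2 n + 1 times, which is Θ(log n) in the worst case.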

31
- We will use the asymptotic Θ-notation from now on, because it is easier to calculate.
- The book uses the ~ notation (more precise, but similar to Θ).
- We will mostly look at the worst case (sometimes the average case).
- Sometimes we can sacrifice some memory space to improve running time; we will discuss space performance and the space/time tradeoff in the next lecture.
- Don't forget the labs tomorrow, Tuesday, and Thursday; see the website.

