
1
Data Structures CS 310

2
Abstract Data Types (ADTs) An ADT is a formal description of a set of data values and a set of operations that manipulate the values. An ADT does not specify how the type should be implemented.

3
ADT: A concrete example – IEEE floating point 754, an ADT frequently implemented in hardware
Values
–minimum
–maximum
–precision
–infinity
–undefined (NaN)
Operators & Properties
–add
–subtract
–multiply
–divide
–rounding mode (4 modes supported)
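The IEEE 754 "values" above are visible directly in Java; a minimal demo (class name is my own) showing a few of them through the Double wrapper:

```java
// Hypothetical demo: IEEE 754 special values as exposed by java.lang.Double.
public class Ieee754Demo {
    public static void main(String[] args) {
        System.out.println(Double.MAX_VALUE);          // largest finite double (maximum)
        System.out.println(Double.MIN_VALUE);          // smallest positive double (precision limit)
        System.out.println(1.0 / 0.0);                 // positive infinity
        System.out.println(0.0 / 0.0);                 // NaN, the "undefined" value
    }
}
```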

4
ADTs – Why bother?
–Abstraction
–Encapsulation
–Reusability
–Ability to change the implementation without changing programs using the ADT.

5
ADTs and languages An ADT can be implemented in any language, even assembly. ADTs can be particularly elegant in object-oriented languages. –Why?
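One answer, sketched in Java: an interface captures the ADT's operations, while a class supplies one interchangeable implementation. The names here (IntStack, ArrayIntStack) are illustrative, not from the slides.

```java
// The ADT: operations only, no representation.
public interface IntStack {
    void push(int x);
    int pop();          // assumes the stack is non-empty
    boolean isEmpty();
}

// One possible implementation; clients depend only on the interface,
// so the representation can change without changing client code.
class ArrayIntStack implements IntStack {
    private int[] data = new int[16];
    private int size = 0;

    public void push(int x) {
        if (size == data.length)
            data = java.util.Arrays.copyOf(data, 2 * size);  // grow as needed
        data[size++] = x;
    }

    public int pop() { return data[--size]; }

    public boolean isEmpty() { return size == 0; }
}
```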

6
Is one algorithm better than another? How should we judge?

7
Comparing algorithms
–Maintainability
–Readability
–Elegance
–Complexity analysis

8
How long does it take for an algorithm to run?
–Is runtime a good indication?
–How does the algorithm behave as the number of items N increases?
How much memory does the algorithm use?
–Should we measure in bytes?
–Behavior with respect to N?

9
Copyright © 2006 Pearson Addison-Wesley. All rights reserved

12
Sample analysis Maximum contiguous subsequence Given a list of integers, return the subsequence that produces the maximum sum. Examples:
–2, 3, 5, -33, 5, 6, 1, 3, 3 → 5, 6, 1, 3, 3 (sum 18)
–-2, -5, -8, -2, -17, -33 → empty (sum 0)
–3, -7, 13, -3, 10, 20, -50, 6, 3 → 13, -3, 10, 20 (sum 40)

13
Brute force solution
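The brute-force figure on this slide comes from the textbook; a minimal sketch in its spirit (class and method names are my own) tries every (i, j) pair and re-sums the range each time:

```java
// O(N^3) brute force: three nested loops, re-summing a[i..j] from scratch.
public class MaxSubSumCubic {
    public static int maxSubSum(int[] a) {
        int maxSum = 0;                        // the empty subsequence has sum 0
        for (int i = 0; i < a.length; i++)
            for (int j = i; j < a.length; j++) {
                int thisSum = 0;
                for (int k = i; k <= j; k++)   // this inner loop makes it cubic
                    thisSum += a[k];
                if (thisSum > maxSum)
                    maxSum = thisSum;
            }
        return maxSum;
    }
}
```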

14
How many operations? We estimated O(N³) operations. Could we make this a little more precise? We need a helping hand from combinatorics.

15
If we have 3 items, how many ways can we choose two of them?

16
combinatorics In general, the number of ways i items can be chosen from N is given by the binomial coefficient: C(N, i) = N! / (i! (N − i)!)
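A small helper (mine, not from the slides) computes C(n, k) multiplicatively rather than through factorials, which would overflow quickly:

```java
// Binomial coefficient C(n, k), computed as a running product.
public class Binomial {
    public static long choose(long n, long k) {
        if (k < 0 || k > n) return 0;
        long result = 1;
        for (long i = 1; i <= k; i++)
            result = result * (n - k + i) / i;  // product stays integral at each step
        return result;
    }
}
```

For the question on the previous slide: choose(3, 2) gives the 3 ways to pick two of three items.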

17
Our first proof

18
How many times exactly? We know that 1 and N are constants, so we need to think about the number of ways to choose i, j, and k. They vary between 1 and N.

19
A first attempt Suppose we were to choose 3 integers from 1 to N. We know that this would be C(N, 3) = N(N−1)(N−2)/6. This is almost right, but why not?

20
A refinement k can be equal to i or j. Suppose we add two additional choices, low and high, giving us N+2 choices. When we select:
–low: k = i
–high: k = j
What does it mean when we pick both low and high?

21
Number of operations contd Now, when we choose 3 from N+2 choices, we have the exact number of iterations through the loop: C(N+2, 3) = (N+2)(N+1)N / 6.

22
O(N²)
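The quadratic figure here is from the textbook; a sketch in its spirit (names are mine) keeps a running sum as j advances, so no range is ever re-summed:

```java
// O(N^2) version: for each start i, extend the end j and update the sum in O(1).
public class MaxSubSumQuadratic {
    public static int maxSubSum(int[] a) {
        int maxSum = 0;
        for (int i = 0; i < a.length; i++) {
            int thisSum = 0;
            for (int j = i; j < a.length; j++) {
                thisSum += a[j];           // O(1) update replaces the cubic inner loop
                if (thisSum > maxSum)
                    maxSum = thisSum;
            }
        }
        return maxSum;
    }
}
```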

23
O(N) – Linear time For the quadratic algorithm, we simply changed the innermost loop of the cubic algorithm from linear O(N) time to constant O(1) time. To make a linear-time algorithm, we will need to be a bit more clever…

24
Clever observation 1 (Theorem 5.2) Let A i,j denote the sequence from i to j. Let S i,j denote the sum of A i,j. If S i,j < 0, then for any q > j, A i,q is not a maximum contiguous subsequence.

25
Clever observation 2 All contiguous subsequences that border the maximum contiguous subsequence must be negative or zero. Let’s prove it…

26
Clever observation 3 Theorem 5.3 For any i, let A i,j be the first sequence with S i,j < 0. Then for any i ≤ p ≤ j and p ≤ q, A p,q is either:
–not a maximum contiguous subsequence, or
–equal to a previously seen maximum contiguous subsequence.

27
Clever observation 3 Remember – A i,j is the first sequence whose sum S i,j < 0, with i ≤ p ≤ j and p ≤ q. Case 1: p = i. Then S p,q = S i,q, but this is < 0.

28
Clever observation 3 case 2: i<p<q≤j A p,q cannot be a max contiguous subsequence.

29
Clever observation 3 case 3 i<p≤j<q

30
O(N) algorithm
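The linear-time figure is from the textbook; a sketch in its spirit (names mine) applies observation 3: once a running sum goes negative, it can be discarded and the next candidate started at the following position:

```java
// O(N) version: one pass, resetting the running sum whenever it goes negative.
public class MaxSubSumLinear {
    public static int maxSubSum(int[] a) {
        int maxSum = 0, thisSum = 0;
        for (int j = 0; j < a.length; j++) {
            thisSum += a[j];
            if (thisSum > maxSum)
                maxSum = thisSum;
            else if (thisSum < 0)
                thisSum = 0;               // a negative prefix can never help (Theorem 5.3)
        }
        return maxSum;
    }
}
```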

31
Big-Oh, Big-Omega, Big-Theta, and Little-Oh Note: we are usually finding the growth rate of the function with respect to N, which is Θ(F(N)), but we usually abuse the notation and write O(F(N)).

32
Big Oh notation in style… Earlier, we showed that our cubic algorithm performed exactly (N+2)(N+1)N/6 = N³/6 + N²/2 + N/3 operations. Nonetheless, we write that it is O(N³). The lower-order terms are ignored as they are dominated by N³, and we ignore constant scale factors.

33
Review - logarithms For any base B and N > 0: log B N = K if and only if B^K = N. Note that log B (B^K) = K. Which base should we use for complexity analysis?

34
logs and complexity analysis Which base should we use for complexity analysis? It doesn't matter! Logarithms in different bases differ only by a constant factor: log A N = log B N / log B A (see theorem 5.4, p. 181). For convenience, we always use base 2.

35
Fun with logarithms Repeated doubling –Given X = 1, how many times t must X be doubled before X >= N?
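A sketch of the doubling experiment (method name mine): the count t is ⌈log₂ N⌉, since each doubling multiplies X by 2.

```java
// Counts how many doublings take X from 1 to at least N.
public class RepeatedDoubling {
    public static int doublingsToReach(int n) {
        int x = 1, t = 0;
        while (x < n) {
            x *= 2;    // one doubling
            t++;
        }
        return t;      // equals ceil(log2(n)) for n >= 1
    }
}
```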

36
Fun with logarithms Repeated halving
X = N
while X > 1 {
    X = X / 2
}
How many times will the loop execute?
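The loop from the slide, made concrete so the iteration count can be checked (method name mine): with integer division it runs ⌊log₂ N⌋ times.

```java
// Counts iterations of the repeated-halving loop from the slide.
public class RepeatedHalving {
    public static int halvingsToOne(int n) {
        int x = n, count = 0;
        while (x > 1) {
            x = x / 2;   // integer halving, exactly as in the slide's loop
            count++;
        }
        return count;    // equals floor(log2(n)) for n >= 1
    }
}
```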

37
Repeated halving Try to prove this at home…

38
Harmonic numbers The Nth harmonic number is defined as H_N = 1 + 1/2 + 1/3 + … + 1/N. By theorem 5.5 (p. 183, Weiss), H_N = Θ(log N).
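A quick sketch (mine) to see the Θ(log N) behavior numerically: the sum tracks the natural log, staying within a constant of it.

```java
// Computes the Nth harmonic number by direct summation.
public class Harmonic {
    public static double harmonic(int n) {
        double sum = 0.0;
        for (int i = 1; i <= n; i++)
            sum += 1.0 / i;
        return sum;      // approx. ln(n) plus a constant (about 0.5772)
    }
}
```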

39
log N tools We now have three tools at our disposal for recognizing log N phenomena in our programs:
–repeated doubling
–repeated halving
–the harmonic series

40
Static searching A static search occurs when we look for an element of an abstract data type and do not modify the data structure. For now, we will consider searching for integers in arrays.

41
Linear search
found = false; done = false; i = 0;
while (!done && !found) {
    if (a[i] == search_value) {
        found = true;
    } else {
        i = i + 1;
        if (i >= a.length) {
            done = true;
        }
    }
}

42
Complexity analysis of our search Failure case – What does it cost if we don’t find anything? Worst case – What if we do find something, but it is the last item we try? Average case – What happens on average?

43
Binary search
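The binary search figure is from the textbook; a sketch in its spirit (names mine) halves the candidate range each step, giving O(log N) by the repeated-halving argument above:

```java
// Binary search on a sorted int array; returns the index of key, or -1 if absent.
public class BinarySearch {
    public static int search(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;  // avoids overflow of (low + high) / 2
            if (a[mid] < key)
                low = mid + 1;                 // discard the lower half
            else if (a[mid] > key)
                high = mid - 1;                // discard the upper half
            else
                return mid;
        }
        return -1;                             // not found
    }
}
```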

44
Empirical runtime of binary search, or how to check your results…

45
Interpolation search Make a good guess about the next location based on the min and max values of the array. Assumes that the numbers are uniformly distributed (spread evenly between the min and max values). Average case: O(log log N). Worst case: O(N).
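A sketch of interpolation search (names mine): instead of probing the middle, it interpolates the probe position from the key's value relative to a[low] and a[high].

```java
// Interpolation search on a sorted int array of roughly uniform values;
// returns the index of key, or -1 if absent.
public class InterpolationSearch {
    public static int search(int[] a, int key) {
        int low = 0, high = a.length - 1;
        while (low <= high && key >= a[low] && key <= a[high]) {
            if (a[high] == a[low])                    // all equal: avoid division by zero
                return a[low] == key ? low : -1;
            // guess the position proportionally between low and high
            int pos = low + (int) ((long) (key - a[low]) * (high - low)
                                   / (a[high] - a[low]));
            if (a[pos] < key)
                low = pos + 1;
            else if (a[pos] > key)
                high = pos - 1;
            else
                return pos;
        }
        return -1;
    }
}
```

On perfectly uniform data the guess lands near the target immediately, which is where the O(log log N) average comes from; skewed data degrades it toward O(N).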

46
Things to remember Big-oh analysis holds for N > N₀, for some constant N₀. Big-oh analysis doesn't take physical devices into account:
–e.g., disk and RAM access times are dramatically different.
The worst case is usually easier to determine than the average case.

47
Common errors With consecutive loops, only the biggest loop determines the big-oh estimate. Pay attention to how many times a loop iterates: a loop could be log N, N², etc. Remember to drop the low-order terms and constants: 5N³ + 4N² + 8N = O(N³).

48
Common errors continued Remember, big-oh is an upper bound. You cannot say that T(N) > O(N³): by definition, O(N³) means at most order N³ operations. Write T(N) = O(N³).

49
Complexity recap
–O (big-oh) upper bound: order ≤ stated bound
–Ω (big-omega) lower bound: order ≥ stated bound
–Θ (big-theta) exactly of order: order = stated bound
–o (little-oh) less than order: order < stated bound
Remember, we usually compute big-theta but write big-oh. Sigh…
