
1 Discrete Structures CISC 2315
Growth of Functions

2 Introduction Once an algorithm for a problem has been given and shown to be correct, it is important to determine how much of a resource, such as time or space, the algorithm will require. We focus mainly on time in this course. Such an analysis often allows us to improve our algorithms.

3 Running Time Analysis
Diagram: Algorithm 1 receives N data items and returns in time T1(N); Algorithm 2 receives N data items and returns in time T2(N).

4 Analysis of Running Time (which algorithm is better?)
Plot: running time T(N) versus number of input items N for Algorithm 1 and Algorithm 2.

5 Comparing Algorithms We need a theoretical framework in which we can compare algorithms. The idea is to establish a relative order among different algorithms, in terms of their relative rates of growth. The rates of growth are expressed as functions, generally in terms of the number/size of inputs N. We also want to ignore details, e.g., the rate of growth is N^2 rather than 3N^2 - 5N + 2. NOTE: The text uses the variable x. We use N here.

6 Big-O Notation Definition:
T(N) is O(f(N)) if there are positive constants c and n0 such that T(N) ≤ c f(N) whenever N ≥ n0. This says that the function T(N) grows at a rate no faster than f(N); thus c f(N) is an upper bound on T(N). NOTE: We use |T(N)| ≤ c|f(N)| if the functions could be negative. In this class, we will assume they are positive unless stated otherwise.
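A quick numeric spot-check of the definition (a sketch, not a proof; it uses the witness pair c = 4, n0 = 1 that appears with f(N) = N^2 + 2N + 1 on the Same-Order Functions slide below, and the helper name holds is ours):

def holds(T, f, c, n0, limit=10_000):
    # Check T(N) <= c * f(N) for every N with n0 <= N <= limit.
    # Passing is numerical evidence for the big-O claim, not a proof.
    return all(T(N) <= c * f(N) for N in range(n0, limit + 1))

# f(N) = N^2 + 2N + 1 is O(N^2) with witnesses c = 4, n0 = 1:
print(holds(lambda N: N**2 + 2*N + 1, lambda N: N**2, c=4, n0=1))  # True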

7 Big-O Upper Bound
Plot: running time versus number of input items N; beyond n0 the curve c f(N) lies above T(N).

8 Big-O Example Prove that N^2 + 2N is O(N^2).
Since N^2 + 2N ≤ N^2 + 2N^2 = 3N^2 for N ≥ 1, we can take c = 3 and n0 = 1. We could also prove that N^2 + 2N is O(N^3), but the first upper bound is tighter (lower). Note that N^3 is not O(N^2). Why? How do you show something is not O(something)?

9 Another Big-O Example Prove that 2N^3 + 10N is O(N^3).
Since 2N^3 + 10N ≤ 2N^3 + 10N^3 = 12N^3 for N ≥ 1, we can take c = 12 and n0 = 1. We could also prove that 2N^3 + 10N is O(N^4), but the first bound is tighter. Note that N^4 is not O(N^3). Why?

10 Why Big-O? Comparing the time complexity of algorithms gets very complicated when you keep all the details. Big-O discards the details and focuses on the most important part of the comparison. Note that big-O, as we use it here, is a worst-case analysis.

11 Same-Order Functions Let f(N) = N^2 + 2N + 1 and g(N) = N^2.
g(N) = O(f(N)) because N^2 ≤ N^2 + 2N + 1. f(N) = O(g(N)) because N^2 + 2N + 1 ≤ N^2 + 2N^2 + N^2 = 4N^2 for N ≥ 1, which is O(N^2) with c = 4 and n0 = 1. In this case we say that f(N) and g(N) are of the same order.

12 Simplifying the Big-O Theorem 1 (in text):
Let f(N) = a_n N^n + a_(n-1) N^(n-1) + … + a_1 N + a_0, where a_0, …, a_n are real numbers. Then f(N) is O(N^n).
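A sketch of the standard justification, using the triangle inequality: for N ≥ 1,

$$|f(N)| \le |a_n| N^n + |a_{n-1}| N^{n-1} + \cdots + |a_0| \le \left(|a_n| + |a_{n-1}| + \cdots + |a_0|\right) N^n,$$

so the witnesses $c = |a_n| + \cdots + |a_0|$ and $n_0 = 1$ work.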

13 Another Big-O Example Recall that N! = N*(N-1)*…*3*2*1 when N is a positive integer, and 0! = 1. Then N! = N*(N-1)*…*3*2*1 ≤ N*N*N*…*N = N^N. Therefore, N! = O(N^N). But this is not all we know… Taking the log of both sides, log N! ≤ log N^N = N log N. Therefore log N! is O(N log N) (c = 1, n0 = 2). Recall that we assume logs are base 2.
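A numeric spot-check of that bound (a sketch; log2_factorial is our helper name, computing log2(N!) via math.lgamma to avoid building huge integers):

from math import lgamma, log

def log2_factorial(N):
    # log2(N!) = ln(N!) / ln 2, and ln(N!) = lgamma(N + 1).
    return lgamma(N + 1) / log(2)

# log N! <= N log N for 2 <= N <= 9999, evidence for the c = 1, n0 = 2 claim:
print(all(log2_factorial(N) <= N * log(N, 2) for N in range(2, 10_000)))  # True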

14 Another Big-O Example In Section 3.2 it will be shown that N < 2^N.
Taking the log of both sides, log N < N. Therefore log N = O(N) (c = 1 and n0 = 1).

15 Complexity terminology
O(1): constant complexity
O(log N): logarithmic complexity
O(N): linear complexity
O(N log N): N log N complexity
O(N^b): polynomial complexity
O(b^N), b > 1: exponential complexity
O(N!): factorial complexity

16 Growth of Combinations of Functions
Sum Rule: Suppose f1(N) is O(g1(N)) and f2(N) is O(g2(N)). Then (f1 + f2)(N) is O(max(g1(N), g2(N))). Product Rule: Suppose that f1(N) is O(g1(N)) and f2(N) is O(g2(N)). Then (f1 f2)(N) is O(g1(N) g2(N)).
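One way to see the Sum Rule numerically (a sketch; the functions f1(N) = 5N and f2(N) = 3N^2 are hypothetical examples, not from the slides):

def ratio(N):
    # (f1 + f2)(N) divided by max(g1(N), g2(N)); bounded ratios are
    # evidence that f1 + f2 is O(max(g1, g2)).
    f1, f2 = 5 * N, 3 * N**2          # f1 is O(N), f2 is O(N^2)
    return (f1 + f2) / max(N, N**2)   # max(g1, g2) = N^2 for N >= 1

for N in (10, 100, 1_000, 10_000):
    print(N, ratio(N))                # approaches 3, never blows up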

17 Growth of Combinations of Functions
Diagram: Subprocedure 1 runs in O(g1(N)) and is followed by Subprocedure 2, which runs in O(g2(N)). What is the big-O time complexity of running the two subprocedures sequentially? Theorem 2: If f1(N) is O(g1(N)) and f2(N) is O(g2(N)), then (f1 + f2)(N) = O(max(g1(N), g2(N))).

18 Growth of Combinations of Functions
What about M not equal to N? Diagram: Subprocedure 1 runs in O(g1(N)) on N items and is followed by Subprocedure 2, which runs in O(g2(M)) on M items. What is the big-O time complexity of running the two subprocedures sequentially? Theorem 2: If f1(N) is O(g1(N)) and f2(M) is O(g2(M)), then f1(N) + f2(M) = O(g1(N) + g2(M)).

19 Growth of Combinations of Functions
for i = 1 to 2N do          (outer loop: O(g1(N)), here 2N iterations)
    for j = 1 to N do       (inner loop: O(g2(N)), here N iterations)
        aij := 1            (1 step)
N steps * 2N steps = 2N^2 steps. What is the big-O time complexity for nested loops? Theorem 3: If f1(N) is O(g1(N)) and f2(N) is O(g2(N)), then (f1 * f2)(N) = O(g1(N) * g2(N)).
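The same nested loop in Python with an explicit step counter (a sketch; fill_matrix is our name for it):

def fill_matrix(N):
    # Outer loop: 2N iterations; inner loop: N iterations per outer pass.
    # By Theorem 3 the total work is O(N) * O(N) = O(N^2).
    a = [[0] * N for _ in range(2 * N)]
    steps = 0
    for i in range(2 * N):
        for j in range(N):
            a[i][j] = 1
            steps += 1
    return steps

print(fill_matrix(10))  # 200 = 2 * 10^2 steps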

20 Growth of Combinations of Functions: Example 1
Give a big-O estimate for f(N) = 3N log(N!) + (N^2 + 3) log N, where N is a positive integer. Solution: First estimate 3N log(N!). Since we know log(N!) is O(N log N) and 3N is O(N), the Product Rule gives 3N log(N!) = O(N^2 log N). Next estimate (N^2 + 3) log N. Since N^2 + 3 ≤ 2N^2 when N ≥ 2, N^2 + 3 is O(N^2). From the Product Rule, we have (N^2 + 3) log N = O(N^2 log N). Using the Sum Rule to combine these estimates, f(N) = 3N log(N!) + (N^2 + 3) log N is O(N^2 log N).
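Numeric evidence for this estimate (a sketch; a bounded ratio f(N) / (N^2 log N) is consistent with f(N) = O(N^2 log N)):

from math import lgamma, log, log2

def f(N):
    log2_fact = lgamma(N + 1) / log(2)               # log2(N!)
    return 3 * N * log2_fact + (N**2 + 3) * log2(N)

for N in (10, 100, 1_000, 10_000):
    print(N, f(N) / (N**2 * log2(N)))                # stays bounded, below 4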

21 Growth of Combinations of Functions: Example 2
Give a big-O estimate for f(N) = (N + 1) log(N^2 + 1) + 3N^2. Solution: First estimate (N + 1) log(N^2 + 1). We know (N + 1) is O(N). Also, N^2 + 1 ≤ 2N^2 when N ≥ 1. Therefore: log(N^2 + 1) ≤ log(2N^2) = log 2 + log N^2 = log 2 + 2 log N ≤ 3 log N, when N ≥ 2. We conclude log(N^2 + 1) = O(log N). From the Product Rule, we have (N + 1) log(N^2 + 1) = O(N log N). Also, we know 3N^2 = O(N^2). Using the Sum Rule to combine these estimates, f(N) = O(max(N log N, N^2)). Since N log N ≤ N^2 for N ≥ 1, f(N) = O(N^2).

22 Big-Omega Notation Definition:
T(N) is Ω(f(N)) if there are positive constants c and n0 such that T(N) ≥ c f(N) whenever N ≥ n0. This says that the function T(N) grows at a rate no slower than f(N); thus c f(N) is a lower bound on T(N).

23 Big-Omega Lower Bound
Plot: running time versus number of input items N; beyond n0 the curve c f(N) lies below T(N).

24 Big-Omega Example Prove that 3N^2 + 2N is Ω(N^2).
Since 3N^2 + 2N ≥ N^2 for N ≥ 1, we can take c = 1 and n0 = 1. We could also prove that 3N^2 + 2N is Ω(N), but the first lower bound is tighter (higher). Note that N is not Ω(N^2).

25 Another Big-Omega Example
Prove that N^3 + 4N is Ω(N^3). Since N^3 + 4N ≥ N^3 for N ≥ 1, we can take c = 1 and n0 = 1. We could also prove that N^3 + 4N is Ω(N^2), but the first bound is tighter. Note that N^2 is not Ω(N^3).

26 Big-Theta Notation Definition: T(N) is θ(f(N)) if T(N) is O(f(N)) and T(N) is Ω(f(N)). Put another way:
there are positive constants c1, c2, and n0 such that c1 f(N) ≤ T(N) ≤ c2 f(N) whenever N ≥ n0. This says that the function T(N) grows at the same rate as f(N). We say that T(N) is of order f(N).

27 Big-Theta Example Show that 3N^2 + 8N log N is θ(N^2).
Since 0 ≤ 8N log N ≤ 8N^2 for N ≥ 1, we have 3N^2 + 8N log N ≤ 11N^2 for N ≥ 1. Therefore 3N^2 + 8N log N is O(N^2). Also 3N^2 + 8N log N ≥ 3N^2, so it is Ω(N^2). We conclude that 3N^2 + 8N log N is θ(N^2).
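A spot-check of both witnesses at once (a sketch, using base-2 logs as the slides assume):

from math import log2

# 3N^2 <= 3N^2 + 8N log N <= 11N^2 for every N in 1..99,999:
print(all(
    3 * N**2 <= 3 * N**2 + 8 * N * log2(N) <= 11 * N**2
    for N in range(1, 100_000)
))  # True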

28 A Hierarchy of Growth Rates
1 < log N < N < N log N < N^2 < N^3 < … < 2^N < N!
If f(N) is O(x) for x one of the above, then it is O(y) for any y > x in the above ordering. But the higher bounds are not tight, and we prefer tighter bounds. Note that if the hierarchy states that x < y, then obviously for any expression z > 0, z*x < z*y.
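Tabulating a few values makes the ordering vivid (a sketch; note the ordering is asymptotic, so small N can disagree, e.g. N! < 2^N at N = 2):

from math import factorial, log2

funcs = [("log N", lambda n: log2(n)), ("N", lambda n: n),
         ("N log N", lambda n: n * log2(n)), ("N^2", lambda n: n**2),
         ("2^N", lambda n: 2**n), ("N!", lambda n: factorial(n))]

for N in (2, 8, 16):
    print(N, [(name, round(g(N))) for name, g in funcs])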

29 Complexity of Algorithms vs Problems
We have been talking about polynomial, linear, logarithmic, or exponential time complexity of algorithms. But we can also talk about the time complexity of problems. Example decision problems: Let a, b, and c be positive integers. Is there a positive integer x < c such that x^2 ≡ a (mod b)? Does there exist a truth assignment for the variables that satisfies a given logical expression?

30 Complexity of Problems
A problem that can be solved by an algorithm with deterministic polynomial (or better) worst-case time complexity is called tractable. Problems that are not tractable are intractable. P is the set of all problems solvable in deterministic polynomial time (the tractable problems). NP is the set of all problems whose solutions can be checked in polynomial time; for many of these, no deterministic polynomial-time algorithm is known. Whether P = NP is a famous open question.

31 Complexity of Problems (cont’d)
Unsolvable problem: a problem that cannot be solved by any algorithm. Example: the Halting Problem. Given a program and an input, will the program halt on that input? If we try running the program on the input and it keeps running, how do we know whether it will ever stop?

