CS200: Algorithms Analysis
ASYMPTOTIC NOTATION Assume run-time functions are defined over the natural numbers N = {0, 1, 2, ...}. O-notation: f(n) = O(g(n)) gives an estimated upper bound (which may or may not be a tight bound) on the run time of f(n). O(g(n)) is the set of functions: O(g(n)) = {f(n) : ∃ positive constants c, n0 s.t. 0 ≤ f(n) ≤ cg(n), ∀ n ≥ n0}. When we say f(n) = O(g(n)) we really mean f(n) ∈ O(g(n)). Do examples.
Prove n² + 42n + 7 = O(n²): n² + 42n + 7 ≤ n² + 42n² + 7n² = 50n² for n ≥ 1. So n² + 42n + 7 ≤ 50n² for all n ≥ 1, and n² + 42n + 7 = O(n²) [c = 50, n0 = 1].
Prove 5n log₂ n + 8n − 200 = O(n log₂ n)
5n log₂ n + 8n − 200 ≤ 5n log₂ n + 8n ≤ 5n log₂ n + 8n log₂ n = 13n log₂ n for n ≥ 2. So 5n log₂ n + 8n − 200 ≤ 13n log₂ n for all n ≥ 2, and 5n log₂ n + 8n − 200 = O(n log₂ n) [c = 13, n0 = 2].
Why Use Big O Notation? Consider the following (simple) code (a reconstruction is sketched below). The running time is:
1 assignment (int i = 0)
n+1 comparisons (i < n)
n increments (i++)
n array offset calculations (a[i])
n indirect assignments (a[i] = i)
Total = a + b(n+1) + cn + dn + en, where a, b, c, d, and e are constants that depend on the machine running the code. Easier just to say O(n) operations.
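The slide's code is not preserved in this transcript; a minimal C loop consistent with the operation counts above (an assumed reconstruction, not the original) is:

#include <stdio.h>

int main(void) {
    int n = 10;
    int a[10];

    /* 1 assignment, n+1 comparisons, n increments,
       n array offset calculations, n indirect assignments */
    for (int i = 0; i < n; i++)
        a[i] = i;

    /* print the result to show the loop's effect */
    for (int i = 0; i < n; i++)
        printf("%d ", a[i]);
    printf("\n");
    return 0;
}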
O-notation is actually quite sloppy but convenient
It can be used to bound the worst-case run time of an algorithm. Explain using insertion sort. Ω-notation: f(n) = Ω(g(n)) gives an estimated lower bound (may or may not be a tight bound) on the run time of f(n). Ω(g(n)) is the set of functions: Ω(g(n)) = {f(n) : ∃ positive constants c, n0 s.t. 0 ≤ cg(n) ≤ f(n), ∀ n ≥ n0}. Do examples.
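One worked example in the style of the earlier proofs (this transcript's illustration, not necessarily the example done in lecture): prove n² − 10n = Ω(n²). For n ≥ 20 we have 10n ≤ n²/2, so n² − 10n ≥ (1/2)n² ≥ 0. Hence 0 ≤ (1/2)n² ≤ n² − 10n for all n ≥ 20, and n² − 10n = Ω(n²) [c = 1/2, n0 = 20].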
Θ-notation: f(n) = Θ(g(n)) gives an estimated tight bound on the run time of f(n).
Θ(g(n)) is the set of functions: Θ(g(n)) = {f(n) : ∃ positive constants c1, c2, n0 s.t. 0 ≤ c1g(n) ≤ f(n) ≤ c2g(n), ∀ n ≥ n0}. Do examples. Obviously leading constants and lower-order terms don't matter, because we can always choose constants large enough to swamp the other terms.
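A worked example in the same style (again this transcript's illustration): prove 3n² + 5 = Θ(n²). Lower bound: 3n² ≤ 3n² + 5 for all n ≥ 1. Upper bound: 3n² + 5 ≤ 3n² + n² = 4n² once n² ≥ 5, i.e., for n ≥ 3. So 3n² ≤ 3n² + 5 ≤ 4n² for all n ≥ 3, and 3n² + 5 = Θ(n²) [c1 = 3, c2 = 4, n0 = 3].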
Theorem: for any two functions f(n) and g(n), f(n) = Θ(g(n)) iff f(n) = O(g(n)) and f(n) = Ω(g(n)).
This implies that we can show tight bounds from upper/lower bounds.
MERGESORT Input and output are defined as for insertion sort.
Mergesort is a divide-and-conquer algorithm that is recursive in structure. Divide the problem into a set of sub-problems. Conquer by solving the sub-problems recursively; if a sub-problem is small enough, solve it directly. Combine the solutions to the sub-problems to obtain a solution to the original problem.
Show general recurrence for run time of Divide and Conquer algorithms.
MergeSort Divide and Conquer
Mergesort divides an n-element sequence into 2 subsequences of n/2 elements each. It then sorts the 2 subsequences recursively by using Mergesort, and it combines the 2 sorted subsequences via a Merge. For simplicity, assume n is a power of 2.
T(n) = Θ(1) if n ≤ c
T(n) = aT(n/b) + D(n) + C(n) otherwise
a = the number of recursive calls
b = the factor by which n shrinks on each recursive call
D(n) = the work required to divide the problem into sub-problems (Divide time)
C(n) = the work required to combine sub-solutions into the final solution (Combine time)
MergeSort Pseudo Code
Mergesort(A, p, r)
  if p = r then return
  q = ⌊(p + r) / 2⌋
  Mergesort(A, p, q)
  Mergesort(A, q + 1, r)
  Merge the results and return
What is the base case? How do we merge two sorted subsequences? What is the run time of such a merge? Do example.
Idea behind linear-time merging: Think of two piles of cards.
Each pile is sorted and placed face-up on a table with the smallest cards on top. We merge these into a single sorted pile, face-down on the table. A basic step: choose the smaller of the two top cards, remove it from its pile (thereby exposing a new top card), and place it face-down onto the output pile. Repeatedly perform basic steps until one input pile is empty; then take the remaining input pile and place it face-down onto the output pile. Each basic step takes constant time, since we check just the two top cards. There are ≤ n basic steps, since each basic step removes one card from the input piles, and we started with n cards in the input piles. Therefore, this procedure takes Θ(n) time.
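A C sketch of this linear-time merge, together with the Mergesort driver from the pseudocode slide (the function names and the use of temporary arrays are this transcript's illustration, not the original slide's code):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Merge the sorted runs A[p..q] and A[q+1..r] -- the two "piles". */
static void merge(int A[], int p, int q, int r) {
    int n1 = q - p + 1, n2 = r - q;
    int *L = malloc(n1 * sizeof *L);   /* left pile  */
    int *R = malloc(n2 * sizeof *R);   /* right pile */
    memcpy(L, A + p, n1 * sizeof *L);
    memcpy(R, A + q + 1, n2 * sizeof *R);

    int i = 0, j = 0, k = p;
    /* Basic step: take the smaller of the two top cards. */
    while (i < n1 && j < n2)
        A[k++] = (L[i] <= R[j]) ? L[i++] : R[j++];
    /* One pile is empty: place the remaining pile on the output. */
    while (i < n1) A[k++] = L[i++];
    while (j < n2) A[k++] = R[j++];

    free(L);
    free(R);
}

/* Sort A[p..r]; mirrors the pseudocode on the earlier slide. */
void mergesort(int A[], int p, int r) {
    if (p >= r) return;          /* base case: 0 or 1 element */
    int q = (p + r) / 2;         /* floor of the midpoint */
    mergesort(A, p, q);
    mergesort(A, q + 1, r);
    merge(A, p, q, r);
}

int main(void) {
    int A[] = {5, 2, 4, 7, 1, 3, 2, 6};
    int n = sizeof A / sizeof A[0];
    mergesort(A, 0, n - 1);
    for (int i = 0; i < n; i++) printf("%d ", A[i]);  /* 1 2 2 3 4 5 6 7 */
    printf("\n");
    return 0;
}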
T(n) = 2T(n/2) + Θ(1) + Θ(n) = 2T(n/2) + Θ(n)
Discuss the recurrence for the run time of MergeSort. In more depth, do an instance of the problem. What does the block trace (recurrence tree) look like? Show the recurrence tree for MergeSort.
Solving Merge Sort Recurrence?
Formally done in Chapter 4, but using intuition, examine the recurrence tree: find the tree depth and the work performed at each level in the tree.
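A sketch of that intuition (the constant c is this transcript's notation for the per-element work): write T(n) = 2T(n/2) + cn. The root does cn work; level 1 has 2 subproblems of size n/2 doing 2 · c(n/2) = cn work in total; in general, level i has 2^i subproblems of size n/2^i, again cn work per level. The tree bottoms out when n/2^i = 1, i.e., after log₂ n levels, so the total work is cn(log₂ n + 1) = Θ(n log n).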
Binary Search Recursive binary search of a sorted array (assume n is a power of 2):
T(n) = Θ(1) if n = 1
T(n) = T(n/2) + Θ(1) if n > 1
Binary Search cont. Divide runtime (check middle element) = ?
Conquer run time (search upper or lower sub-array) = ? Combine run time (trivial) = ? T(n) = ? = T(n/2) + Θ(1). Confirm the run time using a recurrence tree.
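A recursive C sketch matching this structure (the function name and the -1 not-found convention are this transcript's choice):

#include <stdio.h>

/* Return an index of key in the sorted array A[lo..hi], or -1 if absent. */
int binary_search(const int A[], int lo, int hi, int key) {
    if (lo > hi) return -1;          /* empty sub-array */
    int mid = lo + (hi - lo) / 2;    /* Divide: check middle element, Θ(1) */
    if (A[mid] == key) return mid;
    if (A[mid] < key)                /* Conquer: recurse on one half only */
        return binary_search(A, mid + 1, hi, key);
    return binary_search(A, lo, mid - 1, key);
}                                    /* Combine: trivial, Θ(1) */

int main(void) {
    int A[] = {1, 3, 5, 7, 9, 11, 13, 15};
    printf("%d\n", binary_search(A, 0, 7, 9));  /* prints 4 */
    return 0;
}

Each call does constant work and recurses on one half, so T(n) = T(n/2) + Θ(1) = Θ(log n).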
Summary Definitions of Big O, Theta, and Omega
Theorem for Theta tight bounds Application of above to simple functions MergeSort/Binary Search functionality and run-time recurrences.