Algorithmic Complexity: Complexity Analysis of Time Complexity Complexities Nate the Great.



Presentation transcript:

1 Algorithmic Complexity: Complexity Analysis of Time Complexity Complexities Nate the Great

2 Algorithm Complexity The complexity of an algorithm is determined by the amount of resources it uses. Usually the most important resource we are concerned with is time: difficult problems generally take longer to solve. But how can we measure the amount of time required by an algorithm?

3 Time Complexity We cannot simply use a stopwatch. The time required by an algorithm depends on the particular machine and on how efficient the code is. Example: it usually takes longer to sort 100,000 numbers than to sort, say, 1,000 numbers, but… So… what then?

4 Time Complexity Time is not measured in seconds but rather in basic operations or program steps. The input problem is assigned a size, and the time required by an algorithm is then computed as a function of the size of its input. But algorithms generally take different amounts of time on different inputs, even inputs of the same size. Crap. Now what?
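To make "program steps" concrete, here is a small sketch (my own illustration, not from the slides): counting comparisons in a linear search shows that two inputs of the same size can cost very different numbers of steps.

```python
def linear_search_steps(items, target):
    """Count comparisons (basic operations) instead of seconds."""
    steps = 0
    for x in items:
        steps += 1
        if x == target:
            return steps
    return steps

# Two inputs of the same size (n = 5), very different step counts:
print(linear_search_steps([7, 1, 2, 3, 4], 7))  # target is first: 1 step
print(linear_search_steps([1, 2, 3, 4, 7], 7))  # target is last: 5 steps
```

This is exactly the problem the next slide resolves by defining the complexity function in terms of the worst case.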

5 Time Complexity The time complexity function F of an algorithm A is defined based on the worst-case scenario: F(n) is the greatest possible amount of time required for A to solve a problem instance of size n. We do not worry about time used on small inputs; these can be handled as special cases. Instead, we are concerned only with time complexities on large inputs. Constant factors do not matter: for example, special hardware can always be developed that is faster by a constant factor. Running times that differ only by constant factors are always of the same order.

6 Order Notation Order notation is used to state running-time bounds on algorithms. The 4 most common bounds are: O (big Oh): upper bound (worst-case scenario); Ω (omega): lower bound (best-case scenario); o (little oh): strict upper bound; Θ (theta): exact bound. From this point forward, we will focus only on O, a worst-case upper bound. Precise definition of O (pronounced "Big-Oh"): a function f(n) is O(g(n)) iff there exist a positive constant c and a constant n0 such that f(n) ≤ c*g(n) for all n ≥ n0. We say that f is order g, f has order g, or f is of order g. Two important notes: 1) It is NOT required that f(n) ≤ g(n); we can make the constant c as large as necessary. 2) The condition f(n) ≤ c*g(n) is not required to hold for ALL n, but only for n ≥ n0. Stated differently, time-bound functions give us a measure of how fast or slowly a function grows.
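The definition can be checked numerically. A quick sketch (my own example, not from the slides): f(n) = 3n^2 + 10n is O(n^2), witnessed by c = 4 and n0 = 10, since 3n^2 + 10n ≤ 4n^2 exactly when n ≥ 10.

```python
# f(n) = 3n^2 + 10n is O(n^2) with witnesses c = 4, n0 = 10:
# 3n^2 + 10n <= 4n^2  iff  10n <= n^2  iff  n >= 10.
f = lambda n: 3 * n**2 + 10 * n
g = lambda n: n**2
c, n0 = 4, 10

# The inequality holds for every n >= n0 (spot-checked over a range)...
assert all(f(n) <= c * g(n) for n in range(n0, 10_000))
# ...but note it fails just below n0, which the definition permits:
print(f(9) <= c * g(9))  # False
```

This illustrates both notes on the slide: f(n) exceeds g(n) itself (c absorbs that), and the bound only needs to hold from n0 onward.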

7 Order Notation Examples Two main issues: determining the order of a given function, and determining whether one function grows faster or slower than another, in terms of order notation. Suppose f(n) = 500n and g(n) = 2n. What is the order of f(n)? Is f(n) O(g(n))? Is g(n) O(f(n))? Suppose h(n) = 2^n and p(n) = n^20. What is the order of p(n)? Is h(n) O(p(n))? Is p(n) O(h(n))? Other examples: f(x) = 9x and g(x) = 3x^2. Is f(x) O(g(x))? Is g(x) O(f(x))? Is either f(x) or g(x) O(x^x)? What about h(x) = 4log(x) + 3/x − 16x + 240000x^3 − 0.000001x^5? Moral: consider only large values of n and ignore constant factors.
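A quick numeric sanity check for the h(n) = 2^n versus p(n) = n^20 question (my own spot check, not from the slides): a constant-base exponential eventually dominates any fixed-degree polynomial, so n^20 is O(2^n) but 2^n is not O(n^20), even though the polynomial is far larger for small n.

```python
# 2^n vs n^20: the polynomial wins for small n, the exponential
# dominates once n is large enough (crossover is around n ~ 144).
for n in (10, 100, 1000):
    print(n, 2**n > n**20)
```

The same comparison shows why "consider only large values of n" matters: judging at n = 100 would give exactly the wrong answer.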

8 Order Notation Review So f(n) being Θ(g(n)) means: f(n) is O(g(n)) and f(n) is Ω(g(n)). The most common bound is O, which establishes an upper bound on a function. We use O notation to specify an upper bound on the running time of algorithms. The running time is almost always based on the size of the input, and the variable n is normally used to denote the input size. To construct a bound, we assume that some operation takes a constant amount of time, and base the remaining operations and running time on that metric. Exactly which operation is assumed to take constant time depends on the context of the problem, but it is usually obvious. Example: consider a vector containing n elements, and assume that we can print a single element in constant time. What kind of upper bound can we place on a function that simply prints out each element in the vector? Now suppose instead that it takes time proportional to n to print a single element. What is an upper bound on the overall running time of the function?
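The vector-printing example can be modeled directly by counting steps. A minimal sketch (the step-cost model is my own framing of the slide's question): with a constant-time print the loop does O(n) work; if each print itself costs time proportional to n, the same loop does n · O(n) = O(n^2) work.

```python
def steps_to_print_all(n, cost_per_print):
    """Total basic operations to print n elements when printing one
    element costs cost_per_print(n) operations."""
    return n * cost_per_print(n)

print(steps_to_print_all(1000, lambda n: 1))  # constant-time print: O(n)
print(steps_to_print_all(1000, lambda n: n))  # linear-time print: O(n^2)
```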

9 Key Concepts Experience in Computer Science has shown that: Algorithms tend to be applied to larger and larger problems, so the rate at which the time increases as the problem grows is more important than the time required for small problems. The "value" of an algorithm tends to be determined by its worst-case behavior, so the worst-case behavior is always quoted.

10 Running Times Quick Sort (worst case): t(N) ≈ a + b*N^2. Selection Sort: t(N) ≈ a + b*N^2. Merge Sort: t(N) ≈ a + b*N*log2(N).
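To see why the N^2 and N*log2(N) shapes matter, here is a sketch comparing comparison counts via the standard closed-form formulas (my own illustration; the formulas below are the textbook counts, not taken from the slides): selection sort always performs N(N−1)/2 comparisons, while merge sort performs at most about N*ceil(log2 N).

```python
import math

def selection_sort_comparisons(n):
    # Selection sort always does n*(n-1)/2 comparisons: order N^2.
    return n * (n - 1) // 2

def merge_sort_comparisons_upper(n):
    # Merge sort does at most about n*ceil(log2 n) comparisons:
    # order N*log2(N).
    return n * math.ceil(math.log2(n)) if n > 1 else 0

for n in (1_000, 100_000):
    print(n, selection_sort_comparisons(n), merge_sort_comparisons_upper(n))
```

At N = 100,000 the gap is roughly five billion comparisons versus under two million, which is why the N*log2(N) algorithm wins on large inputs regardless of the constants a and b.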

11 Final Time Complexity Complexities Helpful concepts: logarithms with constant bases; exponentials with constant bases; bounding an uncommon function. Be careful not to over- or under-bound your time bounds (when working with strange bounds). It is relatively easy to bound correctly from above; it is relatively difficult to know whether you have bounded from below. Relative order of time bounds.
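On the "logarithms with constant bases" point, a quick sketch (my own check, not from the slides): by the change-of-base rule, log_b(n) = log2(n) / log2(b), so switching the base only changes a constant factor, and O(log2 n), O(log10 n), and O(ln n) are all the same order. (Constant-base exponentials behave differently: 3^n is not O(2^n).)

```python
import math

# The ratio log10(n) / log2(n) is the same constant for every n,
# namely 1 / log2(10) -- so the base of a logarithm never changes
# the order of a time bound.
n = 1_000_000
ratio = math.log(n, 10) / math.log(n, 2)
print(ratio)                 # the constant factor between the two logs
print(1 / math.log2(10))     # same value, independent of n
```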

