Lecture 2: Math Review and Asymptotic Analysis

Common Math Functions Floors and Ceilings: x − 1 < ⌊x⌋ ≤ x ≤ ⌈x⌉ < x + 1. Modular Arithmetic: a mod n = a − ⌊a/n⌋·n. Factorials: n! = 1 if n = 0; n! = n·(n−1)! if n > 0. Note that n! ≤ n^n.
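These identities are easy to sanity-check numerically; a quick Python sketch (illustrative, not part of the slides):

```python
import math

# Floors and ceilings: x - 1 < floor(x) <= x <= ceil(x) < x + 1
for x in [2.5, -2.5, 3.0, 0.1]:
    assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1

# Modular arithmetic: a mod n = a - floor(a/n) * n
a, n = 17, 5
assert a % n == a - math.floor(a / n) * n

# Factorials: n! <= n^n
for k in range(1, 8):
    assert math.factorial(k) <= k ** k
```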

Exponentials a^0 = 1; a^1 = a; a^(−1) = 1/a; (a^m)^n = a^(mn); a^m · a^n = a^(m+n); 0^0 = 1 ... when convenient.

Logarithm Rules lg(n) = log_2(n) (binary logarithm); ln(n) = log_e(n) (natural logarithm); lg^k(n) = (lg(n))^k (exponentiation); lglg(n) = lg(lg(n)) (composition); log_b(mn) = log_b(m) + log_b(n); log_b(m/n) = log_b(m) − log_b(n); log_b(m^n) = n · log_b(m); log_b(x) = log_d(x) / log_d(b) (change of base rule); ln(x) = log_d(x) / log_d(e). log(1) = 0, lg(1) = 0; log(10) = 1, lg(2) = 1; log(100) = 2, lg(4) = 2.
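A few of these rules checked numerically in Python (a sketch, not part of the slides):

```python
import math

m, n, b = 8.0, 32.0, 2.0

# Product rule: log_b(mn) = log_b(m) + log_b(n)
assert math.isclose(math.log(m * n, b), math.log(m, b) + math.log(n, b))

# Power rule: log_b(m^n) = n * log_b(m)
assert math.isclose(math.log(m ** 3, b), 3 * math.log(m, b))

# Change of base: log_b(x) = log_d(x) / log_d(b), here with d = 10
x = 100.0
assert math.isclose(math.log(x, b), math.log10(x) / math.log10(b))
```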

Summation Formulas Arithmetic series: Σ_{i=1}^{n} i = n(n+1)/2. Geometric series: Σ_{i=0}^{n} x^i = (x^(n+1) − 1)/(x − 1) for x ≠ 1. Special case if |x| < 1: Σ_{i=0}^{∞} x^i = 1/(1 − x). Harmonic series: H_n = Σ_{i=1}^{n} 1/i = ln(n) + O(1).
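The closed forms can be spot-checked against brute-force sums; a Python sketch (illustrative, not part of the slides):

```python
import math

n = 10

# Arithmetic series: 1 + 2 + ... + n = n(n+1)/2
assert sum(range(1, n + 1)) == n * (n + 1) // 2

# Geometric series: sum of x^i for i = 0..n equals (x^(n+1) - 1)/(x - 1)
x = 3
assert sum(x ** i for i in range(n + 1)) == (x ** (n + 1) - 1) // (x - 1)

# Harmonic series grows like ln(n): H_n - ln(n) stays bounded
h = sum(1 / i for i in range(1, n + 1))
assert 0 < h - math.log(n) < 1
```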

Fibonacci Numbers F_0 = 0; F_1 = 1; F_i = F_(i−1) + F_(i−2).
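The recurrence translates directly into code; a minimal iterative sketch in Python (not part of the slides — iteration avoids the exponential blowup of naive recursion):

```python
def fib(i: int) -> int:
    """Return F_i with F_0 = 0, F_1 = 1, F_i = F_(i-1) + F_(i-2)."""
    a, b = 0, 1
    for _ in range(i):
        a, b = b, a + b
    return a

# First few values of the sequence
assert [fib(i) for i in range(8)] == [0, 1, 1, 2, 3, 5, 8, 13]
```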

Proofs A proof is a logical argument that, assuming certain axioms, some statement is necessarily true. A few common proof techniques: –Direct Proof –Proof by Induction –Proof by Contradiction –Proof by Contraposition –Proof by Exhaustion –Proof by Counterexample

Direct Proof A conclusion is established by logically combining earlier definitions, axioms, and theorems.

Direct Proof: Example Pythagorean Theorem: For a right triangle with legs a and b and hypotenuse c, c^2 = a^2 + b^2. Proof (courtesy Legendre): 1. Drop an altitude from the right angle C to the hypotenuse AB, meeting it at X, and let x = BX. Triangles ABC, CBX, and ACX are similar. 2. Corresponding parts of similar triangles are proportional: a/x = c/a → a^2 = cx, and b/(c − x) = c/b → b^2 = c^2 − cx → c^2 = cx + b^2. 3. Substituting a^2 for cx, we find c^2 = a^2 + b^2. (Figure: right triangle ABC with legs a, b, hypotenuse c, and altitude foot X on AB.)

Proof by Induction A base case is proven, and an induction rule is used to prove a (possibly infinite) series of other cases.

Proof by Induction: Example Theorem: 1 + 2 + … + n = n(n+1)/2. Proof (courtesy Gauss): 1. Base Case: If n = 0, 0 = 0(0+1)/2. 2. Inductive Hypothesis: Assume the statement is true for n = m, i.e., 1 + 2 + … + m = m(m+1)/2. 3. Inductive Step: Show that it holds for n = m + 1: 1 + 2 + … + m + (m+1) = (m+1)(m+2)/2. Since the theorem holds for n = m, subtract those terms from each side: m + 1 = (m^2 + 3m + 2)/2 − (m^2 + m)/2 = (2m + 2)/2 = m + 1, which is true.
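Gauss's formula can be checked against a brute-force sum; a Python sketch (an illustration of the claim, not a substitute for the inductive proof):

```python
def gauss(n: int) -> int:
    """Closed form for 1 + 2 + ... + n."""
    return n * (n + 1) // 2

# Compare against the literal sum for a range of n
for n in range(0, 50):
    assert sum(range(1, n + 1)) == gauss(n)
```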

Proof by Contradiction One shows that if a statement were false, a logical contradiction occurs, and thus the statement must be true.

Proof by Contradiction: Example Theorem: There exist infinitely many prime numbers. Proof (courtesy of Euclid): 1. Assume that there are finitely many primes. 2. Then there is a largest prime, p. Consider the number q = (2×3×5×7×...×p) + 1, one more than the product of all primes up to p. Clearly q > p, and q is not divisible by any prime up to p. For example, it is not divisible by 7, as it is one more than a multiple of 7. 3. So either q is itself prime, or q has a prime factor; either way, since no prime up to p divides q, there is a prime greater than p. 4. That is a contradiction, as p was assumed to be the largest prime. So there is no largest prime; in other words, there are infinitely many primes.
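Euclid's construction can be demonstrated for small prime lists; a Python sketch (a demonstration of the key step, not the proof itself):

```python
from math import prod

# One more than the product of the first few primes is divisible by none of them
primes = [2, 3, 5, 7, 11, 13]
q = prod(primes) + 1  # 30031 = 59 * 509: q need not be prime itself,
                      # but all of its prime factors exceed 13
assert all(q % p != 0 for p in primes)
```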

Proof by Contraposition In order to prove A→B, prove ¬B → ¬A.

Proof by Contraposition Theorem: For all natural numbers n: if n^2 is even, then n is even as well. Proof: 1. Contrapositive: If n is odd, then n^2 is odd. 2. Let n = 2q + 1, where q is a natural number. Then n^2 = (2q + 1)^2 = 4q^2 + 4q + 1 = 2(2q^2 + 2q) + 1. 3. Let p = 2q^2 + 2q; then n^2 = 2p + 1. Since 2p is even, n^2 = 2p + 1 is odd, proving the contrapositive.

Proof by Exhaustion The conclusion is established by dividing the problem into a finite number of exhaustive cases and proving each one separately.

Proof by Exhaustion: Example Theorem: Every cube number is either a multiple of 9 or is 1 more or 1 less than a multiple of 9. Proof: Each cube number is the cube of an integer n. n is either a multiple of 3, or is 1 more or 1 less than a multiple of 3. The following 3 cases are exhaustive: 1.Case 1: If n is a multiple of 3 then the cube of n is a multiple of 27, and so certainly a multiple of 9. 2.Case 2: If n is 1 more than a multiple of 3 then the cube of n is 1 more than a multiple of 9. 3.Case 3: If n is 1 less than a multiple of 3 then the cube of n is 1 less than a multiple of 9.
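The three cases can be swept by brute force; a Python sketch (an illustration of the case split, not part of the slides — note that "1 less than a multiple of 9" means residue 8 mod 9):

```python
# Every cube is a multiple of 9, or 1 more / 1 less than one,
# i.e., congruent to 0, 1, or 8 modulo 9.
for n in range(-100, 101):
    assert (n ** 3) % 9 in (0, 1, 8)
```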

Proof by Counterexample Given an assertion of the form "for all x, P(x) is true", disprove it by showing that there is a c such that ¬P(c) is true.

Proof by Counterexample: Example Conjecture: The square root of every integer is irrational. Counterexample: Let n = 4. Clearly √4 = 2, and 2 is rational.

Loop Invariants Another technique for proving the correctness of a loop is to show that these properties hold for a loop invariant (we saw an example last time): Initialization: It is true prior to the first iteration. Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration. Termination: When the loop terminates, the invariant yields a useful property which helps show the algorithm is correct.

Loop Invariants: Example Show that the following pseudocode correctly determines whether x is in the array A: for i = 0 to len(A) − 1: if x == A[i]: return true; return false. Loop Invariant: At the start of each iteration of the for loop, the subsequence A[0]..A[i−1] does not contain x.

Loop Invariants: Example Initialization: The invariant is true before the first iteration because A[0]..A[−1] is the empty subsequence, which by definition does not contain x. Maintenance: Suppose the invariant holds at the start of iteration i. If A[i] == x we return true; otherwise A[i] != x, so at the start of iteration i+1 the subsequence A[0]..A[i] still does not contain x. Termination: The loop terminates when i = len(A) or when x is found. If i = len(A), then by the invariant A[0]..A[len(A)−1] does not contain x, so returning false is correct. Otherwise the loop terminated because x was found, so returning true is correct.
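A runnable version of the linear search, with the invariant checked as an assertion on every iteration (the helper name `contains` is ours, not from the slides):

```python
def contains(A: list, x) -> bool:
    """Linear search; returns True iff x occurs in A."""
    for i in range(len(A)):
        # Loop invariant: A[0..i-1] does not contain x
        assert x not in A[:i]
        if x == A[i]:
            return True
    return False

assert contains([4, 8, 15, 16], 15) is True
assert contains([4, 8, 15, 16], 42) is False
```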

Danger: Proof by Example Conjecture: For arbitrary sets A and B, the sets A \ B, B \ A, and A ∩ B are pairwise disjoint. Proof: Let A = {1,3,5,7} and B = {2,4,6,8}. Then... NO! In general, an example is almost never a proof. * Not to be confused with proof by construction (which you are unlikely to use in this class).

Danger: Proof by Example Conjecture: The Fibonacci sequence, F(x), is always odd. Proof: Consider x = 2; F(2) = 1. Or consider x = 10; F(10) = 55, etc. Clearly the Fibonacci sequence is always odd. NO! Even if the conjecture were true, this would not be a correct proof. (In fact it is false: F(3) = 2 is even.)

Growth of Functions Review 1 — constant; log n — logarithmic; n — linear; n log n — linearithmic; n^2 — quadratic; n^3 — cubic; 2^n — exponential.

Asymptotic Notation How do we describe asymptotic efficiency? O(g(n)): g(n) is an asymptotic upper bound o(g(n)): g(n) is an upper bound that is not asymptotically tight Ω(g(n)): g(n) is an asymptotic lower bound ω(g(n)): g(n) is a lower bound that is not asymptotically tight Θ(g(n)): g(n) is an asymptotically tight bound

O-Notation O(g(n)) is pronounced “Big-oh of g of n” or sometimes just “Oh of g of n”. We use this notation when we can only find an upper bound on a function. O(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ f(n) ≤ c·g(n) for all n > n_0 }.

O-Notation Example Prove: ½n^2 − 3n = O(n^2). Proof: 1. We must choose c and n_0 such that ½n^2 − 3n ≤ c·n^2 for all n > n_0. 2. Dividing by n^2 yields: ½ − 3/n ≤ c. 3. This holds if we choose c = ½ and n_0 = 7: for n > 7 we have 0 ≤ ½ − 3/n < ½.
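The chosen constants can be sanity-checked numerically over a range of n; a Python sketch using c = 1 as one valid witness (illustrative, not part of the slides):

```python
# One valid witness pair for (1/2)n^2 - 3n = O(n^2): c = 1, n_0 = 7.
c, n0 = 1, 7
for n in range(n0 + 1, 1000):
    f = 0.5 * n * n - 3 * n
    assert 0 <= f <= c * n * n
```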

o-Notation o(g(n)) is pronounced “Little-oh of g of n”. O(g(n)) may or may not be tight; we use o(g(n)) to specify that it is not asymptotically tight. o(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n_0 > 0 such that 0 ≤ f(n) < c·g(n) for all n > n_0 }.

o-Notation Example For example, 2n = o(n^2) because n^2 is not an asymptotically tight upper bound for 2n. It is, however, an upper bound. But 2n^2 ≠ o(n^2), because n^2 is an asymptotically tight upper bound for 2n^2.

o-Notation Example Prove: 2n = o(n^2). Proof: 1. Given any c > 0, let n_0 = 3/c. 2. Substituting n_0 for n in 2n < cn^2: 2(3/c) < c(3/c)^2, i.e., 6/c < 9/c, which is true; and since cn^2 grows faster than 2n, the inequality holds for all n > n_0. 3. So the o-notation definition holds.

Ω-Notation Ω(g(n)) is pronounced “Big-omega of g of n” or sometimes just “Omega of g of n”. We use this notation when we can only find a lower bound on a function. Ω(g(n)) = { f(n) : there exist positive constants c and n_0 such that 0 ≤ c·g(n) ≤ f(n) for all n > n_0 }.

Ω-Notation Example Prove: ½n^2 − 3n = Ω(n^2). Proof: 1. We must choose c and n_0 such that 0 ≤ c·n^2 ≤ ½n^2 − 3n for all n > n_0. 2. Dividing by n^2 yields: c ≤ ½ − 3/n. 3. This holds if we choose c = 1/14 and n_0 = 7: for n > 7 we have ½ − 3/n > 1/14.

ω-Notation ω(g(n)) is pronounced “Little-omega of g of n”. Ω(g(n)) may or may not be tight; we use ω(g(n)) to specify that it is not asymptotically tight. ω(g(n)) = { f(n) : for any positive constant c > 0, there exists a constant n_0 > 0 such that 0 ≤ c·g(n) < f(n) for all n > n_0 }.

ω-Notation Example For example, n^2/2 = ω(n) because n is not an asymptotically tight lower bound for n^2/2. It is, however, a lower bound. But n^2/2 ≠ ω(n^2), because n^2 is an asymptotically tight lower bound for n^2/2.

ω-Notation Example Prove: n^2/2 = ω(n). Proof: 1. Given any c > 0, let n_0 = 3c. 2. Substituting n_0 for n in cn < n^2/2: c(3c) < (3c)^2/2, i.e., 6c^2 < 9c^2 (after doubling both sides), which is true; and since n^2/2 grows faster than cn, the inequality holds for all n > n_0. 3. So the ω-notation definition holds.

Θ-Notation Θ(g(n)) is pronounced “Theta of g of n”. We use this notation when g(n) provides both an upper and a lower bound for a function; that is, the bound is asymptotically tight. This is a stronger statement. Θ(g(n)) = { f(n) : there exist positive constants c_1, c_2, and n_0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n > n_0 }.

Θ-Notation You may notice that f(n) = Θ(g(n)) implies f(n) = O(g(n)) and f(n) = Ω(g(n)). The converse is not true of either bound alone, though O and Ω together do imply Θ. f(n) = Θ(g(n)) sandwiches f(n) between an upper and a lower bound.

Θ-Notation Example Prove: ½n^2 − 3n = Θ(n^2). Proof: 1. We must choose c_1, c_2, and n_0 such that c_1·n^2 ≤ ½n^2 − 3n ≤ c_2·n^2 for all n > n_0. 2. Dividing by n^2 yields: c_1 ≤ ½ − 3/n ≤ c_2. 3. This holds if we choose c_1 = 1/14, c_2 = ½, and n_0 = 7.
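The Θ sandwich can be checked numerically with one valid witness triple (a Python sketch, not part of the slides; c1 = 1/14, c2 = 1/2, n_0 = 7 are one workable choice):

```python
# Witnesses for (1/2)n^2 - 3n = Theta(n^2)
c1, c2, n0 = 1 / 14, 0.5, 7
for n in range(n0 + 1, 1000):
    f = 0.5 * n * n - 3 * n
    assert c1 * n * n <= f <= c2 * n * n
```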

Θ-Notation Example Prove: ½n^2 − 3n = Θ(n^2). Alternate Proof: 1. We already proved ½n^2 − 3n = O(n^2) and ½n^2 − 3n = Ω(n^2); therefore the n^2 bound on ½n^2 − 3n is asymptotically tight and ½n^2 − 3n = Θ(n^2).

Asymptotic Notation in Equations and Inequalities 2n^2 + 3n + 1 = 2n^2 + Θ(n) means 2n^2 + 3n + 1 = 2n^2 + f(n), where f(n) is some function in the set Θ(n). We use this to impart meaning and remove clutter: 2n^2 + 3n + 1 = 2n^2 + Θ(n) = Θ(n^2).

Transitivity f(n) = O(g(n)) && g(n) = O(h(n)) → f(n) = O(h(n)). f(n) = o(g(n)) && g(n) = o(h(n)) → f(n) = o(h(n)). f(n) = Ω(g(n)) && g(n) = Ω(h(n)) → f(n) = Ω(h(n)). f(n) = ω(g(n)) && g(n) = ω(h(n)) → f(n) = ω(h(n)). f(n) = Θ(g(n)) && g(n) = Θ(h(n)) → f(n) = Θ(h(n)).

Reflexivity f(n) = O(f(n)), f(n) = Ω(f(n)), f(n) = Θ(f(n)) But NOT: f(n) = o(f(n)), f(n) = ω(f(n)).

Symmetry f(n) = Θ(g(n)) if and only if g(n) = Θ(f(n)). Does not hold for O, o, Ω, or ω but the following transpose symmetries do: f(n) = O(g(n)) if and only if g(n) = Ω(f(n)). f(n) = o(g(n)) if and only if g(n) = ω(f(n)).

An Analogy The comparison of functions roughly analogizes to the comparison of real numbers: f(n) = O(g(n)) … a ≤ b; f(n) = o(g(n)) … a < b; f(n) = Ω(g(n)) … a ≥ b; f(n) = ω(g(n)) … a > b; f(n) = Θ(g(n)) … a = b.

Insertion Sort.. again Last time we came up with this running time for insertion sort: T(n) = (c_1 + c_2 + c_3 + c_4/2 − c_5/2 − c_6/2 + c_7)n + (c_4 + c_5 + c_6)n^2/2 − (c_2 + c_3 + c_4 + c_7). With some hand waving I claimed that T(n) = Θ(n^2). Prove more formally that this is true. Simplification: T(n) = dn^2 + en − f, where d, e, and f are positive constants.

A Warm Up Exercise Show for any real constants a and b > 0 that (n + a)^b = Θ(n^b). Recall: Θ(g(n)) = { f(n) : there exist positive constants c_1, c_2, and n_0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n > n_0 }.

A Warm Up Exercise Show for any real constants a and b > 0 that (n + a)^b = Θ(n^b). Proof: 1. We need c_1·n^b ≤ (n + a)^b ≤ c_2·n^b for all n > n_0. 2. For n ≥ 2|a| we have n/2 ≤ n + a ≤ 2n. 3. Raising to the power b preserves the order, since b > 0: (1/2)^b·n^b ≤ (n + a)^b ≤ 2^b·n^b. 4. So let c_1 = (1/2)^b, c_2 = 2^b, and n_0 = 2|a|.

Insertion Sort.. Again Prove: T(n) = dn^2 + en − f = Θ(n^2). Recall: Θ(g(n)) = { f(n) : there exist positive constants c_1, c_2, and n_0 such that 0 ≤ c_1·g(n) ≤ f(n) ≤ c_2·g(n) for all n > n_0 }.

Insertion Sort.. again Prove: T(n) = dn^2 + en − f = Θ(n^2), where d, e, f > 0. Proof: 1. We need c_1·n^2 ≤ dn^2 + en − f ≤ c_2·n^2 for all n > n_0. 2. Upper bound: for n ≥ 1, dn^2 + en − f ≤ dn^2 + en ≤ (d + e)n^2, so let c_2 = d + e. 3. Lower bound: for n ≥ f/e we have en ≥ f, so dn^2 + en − f ≥ dn^2; let c_1 = d. 4. Both hold if we let n_0 = max(1, f/e).
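The witness constants c_1 = d, c_2 = d + e, n_0 = max(1, f/e) can be spot-checked numerically; a Python sketch where d = 2, e = 3, f = 5 are arbitrary illustrative values (not from the lecture's insertion-sort analysis):

```python
# Check c1*n^2 <= d*n^2 + e*n - f <= c2*n^2 for sample positive constants
d, e, f = 2.0, 3.0, 5.0
c1, c2 = d, d + e
n0 = max(1.0, f / e)
for n in range(int(n0) + 1, 1000):
    T = d * n * n + e * n - f
    assert c1 * n * n <= T <= c2 * n * n
```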

More Examples Prove: lg(n!) = Θ(n lg n)

More Examples Prove: lg(n!) = O(n lg n). Proof: 1. n! ≤ n^n. 2. Taking logarithms: lg(n!) ≤ lg(n^n). 3. lg(n^n) = n lg n. 4. So lg(n!) ≤ n lg n, i.e., lg(n!) = O(n lg n). (The matching lower bound lg(n!) = Ω(n lg n) follows from n! ≥ (n/2)^(n/2).)
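A numeric illustration that lg(n!) is sandwiched by constant multiples of n lg n — a Python sketch (not part of the slides; the lower-bound constant 1/4 comes from the (n/2)^(n/2) estimate and is valid for n ≥ 4):

```python
import math

# Upper bound: lg(n!) <= n lg n (from n! <= n^n);
# lower bound: lg(n!) >= (1/4) n lg n for n >= 4 (from n! >= (n/2)^(n/2))
for n in range(4, 200):
    lg_fact = math.log2(math.factorial(n))
    assert 0.25 * n * math.log2(n) <= lg_fact <= n * math.log2(n)
```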