Introduction to Algorithms Lecture 2 Chapter 3: Growth of Functions

Growth of Functions

The order of growth of the running time of an algorithm gives a simple characterization of the algorithm's efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its Θ(n lg n) worst-case running time, beats insertion sort, whose worst-case running time is Θ(n²). When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs. The asymptotic running time of an algorithm is defined in terms of functions whose domain is the set of natural numbers N = {0, 1, 2, ...}.
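To make the crossover concrete, here is a minimal Python sketch comparing hypothetical cost models for the two sorts; the constant factors 8 and 2 are invented for illustration, not measured costs:

```python
import math

# Hypothetical cost models: the constant factors 8 and 2 are
# illustrative assumptions, not measured costs.
def merge_sort_cost(n):
    return 8 * n * math.log2(n)   # ~ Theta(n lg n)

def insertion_sort_cost(n):
    return 2 * n * n              # ~ Theta(n^2)

for n in [4, 16, 64, 256, 1024]:
    print(n, merge_sort_cost(n), insertion_sort_cost(n))
```

For small n the quadratic cost is lower, but once n grows past the crossover point the n² term dominates and merge sort wins, regardless of the particular constants.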

Θ-notation

In Chapter 2, we found that the worst-case running time of insertion sort is T(n) = Θ(n²). Let us define what this notation means. For a given function g(n), we denote by Θ(g(n)) the set of functions

Θ(g(n)) = {f(n) : there exist positive constants c₁, c₂, and n₀ such that 0 ≤ c₁g(n) ≤ f(n) ≤ c₂g(n) for all n ≥ n₀}.

A function f(n) belongs to the set Θ(g(n)) if there exist positive constants c₁ and c₂ such that it can be "sandwiched" between c₁g(n) and c₂g(n) for sufficiently large n. We say that g(n) is an asymptotically tight bound for f(n). Because Θ(g(n)) is a set, we could write "f(n) ∈ Θ(g(n))" to indicate that f(n) is a member of Θ(g(n)). Instead, we will usually write "f(n) = Θ(g(n))" to express the same notion. This abuse of equality to denote set membership may at first appear confusing, but we shall see later in this section that it has advantages.
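As a quick numeric illustration of the definition, the sketch below checks the sandwich for f(n) = n²/2 − 3n against g(n) = n²; the witnesses c₁ = 1/14, c₂ = 1/2, n₀ = 7 are one choice that happens to work (the example function and constants are our own, and a finite check only illustrates the definition, it does not prove the asymptotic claim):

```python
# Check 0 <= c1*g(n) <= f(n) <= c2*g(n) for all tested n >= n0.
def f(n): return n * n / 2 - 3 * n
def g(n): return n * n

c1, c2, n0 = 1 / 14, 1 / 2, 7

assert all(0 <= c1 * g(n) <= f(n) <= c2 * g(n) for n in range(n0, 10_000))
print("sandwich holds on the tested range, consistent with f(n) = Theta(n^2)")
```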

Θ-notation

Figure 3.1(a) gives an intuitive picture of functions f(n) and g(n), where f(n) = Θ(g(n)). For all values of n to the right of n₀, the value of f(n) lies at or above c₁g(n) and at or below c₂g(n). In other words, for all n ≥ n₀, the function f(n) is equal to g(n) to within a constant factor. We say that g(n) is an asymptotically tight bound for f(n).

[Figure 3.1(a), (b): graphic examples of Θ-notation and O-notation]

The Θ-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound, we use O-notation. For a given function g(n), we denote by O(g(n)) (pronounced "big-oh of g of n" or sometimes just "oh of g of n") the set of functions

O(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n₀}.

We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n to the right of n₀, the value of the function f(n) is on or below cg(n).


We write f(n) = O(g(n)) to indicate that a function f(n) is a member of the set O(g(n)). Note that f(n) = Θ(g(n)) implies f(n) = O(g(n)), since Θ-notation is a stronger notion than O-notation. Written set-theoretically, we have Θ(g(n)) ⊆ O(g(n)). Thus, our proof that any quadratic function an² + bn + c, where a > 0, is in Θ(n²) also shows that any quadratic function is in O(n²). What may be more surprising is that any linear function an + b is in O(n²), which is easily verified by taking c = a + |b| and n₀ = 1.
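That verification is easy to replay numerically. A minimal sketch, using arbitrary sample coefficients a = 3 and b = 5 of our own choosing:

```python
# Check 0 <= a*n + b <= c*n^2 for all tested n >= n0,
# with the witnesses c = a + |b| and n0 = 1 from the text.
a, b = 3, 5
c, n0 = a + abs(b), 1

assert all(0 <= a * n + b <= c * n * n for n in range(n0, 10_000))
print(f"{a}n + {b} <= {c}n^2 for all tested n >= {n0}")
```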

O-notation

Some readers who have seen O-notation before may find it strange that we should write, for example, n = O(n²). In the literature, O-notation is sometimes used informally to describe asymptotically tight bounds, that is, what we have defined using Θ-notation. In this book, however, when we write f(n) = O(g(n)), we are merely claiming that some constant multiple of g(n) is an asymptotic upper bound on f(n), with no claim about how tight an upper bound it is. Distinguishing asymptotic upper bounds from asymptotically tight bounds has now become standard in the algorithms literature.

Ω-notation

Just as O-notation provides an asymptotic upper bound on a function, Ω-notation provides an asymptotic lower bound. For a given function g(n), we denote by Ω(g(n)) (pronounced "big-omega of g of n" or sometimes just "omega of g of n") the set of functions

Ω(g(n)) = {f(n) : there exist positive constants c and n₀ such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n₀}.

The intuition behind Ω-notation is shown in Figure 3.1(c). For all values n to the right of n₀, the value of f(n) is on or above cg(n).
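The same style of check works for a lower bound. A minimal sketch with f(n) = n²/2 − 3n and g(n) = n; the witnesses c = 1 and n₀ = 8 are our own illustrative choice:

```python
# Check 0 <= c*g(n) <= f(n) for all tested n >= n0,
# consistent with f(n) = n^2/2 - 3n being in Omega(n).
def f(n): return n * n / 2 - 3 * n
def g(n): return n

c, n0 = 1, 8

assert all(0 <= c * g(n) <= f(n) for n in range(n0, 10_000))
print("lower bound holds on the tested range")
```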

[Figure 3.1(a)-(c): graphic examples of Θ-notation, O-notation, and Ω-notation]

o-notation

The asymptotic upper bound provided by O-notation may or may not be asymptotically tight. The bound 2n² = O(n²) is asymptotically tight, but the bound 2n = O(n²) is not. We use o-notation to denote an upper bound that is not asymptotically tight. We formally define o(g(n)) ("little-oh of g of n") as the set

o(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ f(n) < cg(n) for all n ≥ n₀}.

For example, 2n = o(n²), but 2n² ≠ o(n²).
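The difference between the tight and non-tight bounds shows up directly in the ratio f(n)/g(n); the short sketch below tabulates it for the two examples above (the sample values of n are arbitrary):

```python
# 2n / n^2 shrinks toward 0, so 2n = o(n^2);
# 2n^2 / n^2 stays fixed at 2, so 2n^2 is O(n^2) but not o(n^2).
for n in [10, 100, 1000, 10_000]:
    print(n, (2 * n) / n**2, (2 * n**2) / n**2)
```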

o-notation

The definitions of O-notation and o-notation are similar. The main difference is that in f(n) = O(g(n)), the bound 0 ≤ f(n) ≤ cg(n) holds for some constant c > 0, but in f(n) = o(g(n)), the bound 0 ≤ f(n) < cg(n) holds for all constants c > 0. Intuitively, in the o-notation, the function f(n) becomes insignificant relative to g(n) as n approaches infinity; that is,

lim_{n→∞} f(n)/g(n) = 0.    (3.1)

Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative.

ω-notation

By analogy, ω-notation is to Ω-notation as o-notation is to O-notation. We use ω-notation to denote a lower bound that is not asymptotically tight. One way to define it is by

f(n) ∈ ω(g(n)) if and only if g(n) ∈ o(f(n)).

Formally, however, we define ω(g(n)) ("little-omega of g of n") as the set

ω(g(n)) = {f(n) : for any positive constant c > 0, there exists a constant n₀ > 0 such that 0 ≤ cg(n) < f(n) for all n ≥ n₀}.

For example, n²/2 = ω(n), but n²/2 ≠ ω(n²). The relation f(n) = ω(g(n)) implies that

lim_{n→∞} f(n)/g(n) = ∞,

if the limit exists. That is, f(n) becomes arbitrarily large relative to g(n) as n approaches infinity.
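Symmetrically, for a ω-bound the ratio grows without limit, as the sketch below shows for the two examples above (sample values of n arbitrary):

```python
# (n^2/2) / n = n/2 grows without bound, so n^2/2 = omega(n);
# (n^2/2) / n^2 is the constant 1/2, so n^2/2 is not omega(n^2).
for n in [10, 100, 1000, 10_000]:
    print(n, (n**2 / 2) / n, (n**2 / 2) / n**2)
```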

Growth of Functions

Intuitively:

- o(·) is like <
- O(·) is like ≤
- ω(·) is like >
- Ω(·) is like ≥
- Θ(·) is like =

Asymptotic notation in equations and inequalities

We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing O-notation, we wrote "n = O(n²)." We might also write 2n² + 3n + 1 = 2n² + Θ(n). How do we interpret such formulas? When the asymptotic notation stands alone on the right-hand side of an equation (or inequality), as in n = O(n²), we have already defined the equal sign to mean set membership: n ∈ O(n²). In general, however, when asymptotic notation appears in a formula, we interpret it as standing for some anonymous function that we do not care to name. For example, the formula 2n² + 3n + 1 = 2n² + Θ(n) means that 2n² + 3n + 1 = 2n² + f(n), where f(n) is some function in the set Θ(n). In this case, f(n) = 3n + 1, which indeed is in Θ(n).

Asymptotic notation in equations and inequalities

Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence

T(n) = 2T(n/2) + Θ(n).

If we are interested only in the asymptotic behavior of T(n), there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term Θ(n).
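One way to see that the hidden lower-order terms do not matter asymptotically: iterate the recurrence numerically and compare against n lg n. In this sketch the Θ(n) term is taken to be exactly n and n is restricted to powers of two, both simplifications of our own:

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1                  # assumed constant-time base case
    return 2 * T(n // 2) + n      # T(n) = 2T(n/2) + n

for k in [4, 8, 12, 16, 20]:
    n = 2 ** k
    print(n, T(n) / (n * math.log2(n)))   # ratio approaches 1
```

The ratio settles toward a constant, which is exactly the claim T(n) = Θ(n lg n).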

Asymptotic notation in equations and inequalities

In some cases, asymptotic notation appears on the left-hand side of an equation, as in

2n² + Θ(n) = Θ(n²).

We interpret such equations using the following rule: No matter how the anonymous functions are chosen on the left of the equal sign, there is a way to choose the anonymous functions on the right of the equal sign to make the equation valid. Thus, the meaning of our example is that for any function f(n) ∈ Θ(n), there is some function g(n) ∈ Θ(n²) such that 2n² + f(n) = g(n) for all n. In other words, the right-hand side of an equation provides a coarser level of detail than the left-hand side.

Asymptotic notation in equations and inequalities

A number of such relationships can be chained together, as in

2n² + 3n + 1 = 2n² + Θ(n) = Θ(n²).

We can interpret each equation separately by the rule above. The first equation says that there is some function f(n) ∈ Θ(n) such that 2n² + 3n + 1 = 2n² + f(n) for all n. The second equation says that for any function g(n) ∈ Θ(n) (such as the f(n) just mentioned), there is some function h(n) ∈ Θ(n²) such that 2n² + g(n) = h(n) for all n. Note that this interpretation implies that 2n² + 3n + 1 = Θ(n²), which is what the chaining of equations intuitively gives us.