Complexity & the O-Notation


Computability So far we have talked about Turing Machines that decide languages (i.e., solve yes-no problems). We only cared about making the machine decide the language; we did not care about the machine's performance.

Time Complexity The time complexity of a machine is the number of transitions it takes on an input x in order to accept or reject x. The time is counted in terms of the length of the input, so the time complexity is a function t: N → N.

Worst and average cases We say that a Turing machine has worst-case time complexity t(n) if for every possible input x of length n the machine needs at most t(n) transitions to compute f(x), i.e., to decide whether x is in the language. In average-case analysis we consider the running times on all possible inputs of length n and take their average.

Space Complexity The space complexity of a Turing machine is defined very much like the time complexity: instead of the number of transitions, we count the number of explored tape cells. In the following examples the unexplored cells are shown in white and the explored ones in blue.

Example: Put a $ before the input. One solution: repeat (erase the first symbol from the input; write it last in the output) until there is no symbol left in the input; then place a $. Successive slides step this procedure through on the example tape 1 1 1, ending with the tape $ 1 1 1.

Example: Put a $ before the input. The number of transitions this solution needs: you repeat the following procedure n times (where n is the input length): erase the first symbol of the input, move across the input and the output (about n transitions), paste this symbol in the last position of the output, and move back across the output and the input (about n more transitions). So the total number of transitions is about 2n².

Example: Put a $ before the input The number of explored cells is exactly 2n+1
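To make this counting concrete, here is a rough Python sketch (our own illustration, not part of the slides, and not a real Turing machine; the function name and the step accounting are assumptions) that tallies the head moves of this copy-one-symbol-per-pass strategy:

    def copy_shift_steps(n):
        # Rough step count for the first solution on an input of length n.
        # Each of the n passes walks right across the input and output and
        # back again, roughly n single-cell moves in each direction;
        # one final transition writes the $.
        steps = 0
        for _ in range(n):
            steps += n   # walk right to drop the symbol at the end of the output
            steps += n   # walk back to the next remaining input symbol
        steps += 1       # place the $
        return steps

    for n in (10, 100, 1000):
        print(n, copy_shift_steps(n))   # 201, 20001, 2000001 -- roughly 2*n**2

The counts grow like 2n², matching the estimate above.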

Example: Put a $ before the input. Another solution: erase the first symbol, remember it, and replace it with a $; then repeat (move one cell right; replace the symbol there with the one remembered from the cell on its left) until the input is consumed (you see a blank cell), and write the last remembered symbol there. Successive slides step this through on a short example tape, showing the currently remembered symbol under the tape, and ending with the original input shifted one cell to the right with a $ in front.

Example: Put a $ before the input. The number of transitions this solution needs: one transition to replace the first symbol with a $, then you repeat the following n times (where n is the input length): move right and replace the symbol you see with the one that was in the cell to its left. So the total number of transitions is n+1.

Example: Put a $ before the input The number of explored cells is exactly n+1

Example: Put a $ before the input. The most efficient solution: just move one cell left and place a $ there. (The slides show the tape before and after: the input stays where it is, with a $ written in the cell to its left.)

Example: Put a $ before the input. The number of transitions is 2 and the number of explored cells is 2. The time and space complexity of this machine do not depend on the input (they are, as we say, "constant"). This is a very rare phenomenon! Most of the time we need at least to read the whole input, so we need time and space at least n.
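Putting the three solutions side by side (again our own sketch with hypothetical function names, using the step counts derived in the slides):

    def solution1_steps(n): return 2 * n * n + 1   # copy one symbol per pass
    def solution2_steps(n): return n + 1           # shift everything right in one sweep
    def solution3_steps(n): return 2               # just move left and write the $

    for n in (10, 1000):
        print(n, solution1_steps(n), solution2_steps(n), solution3_steps(n))
    # 10    201      11    2
    # 1000  2000001  1001  2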

O-Notation It is not always easy to count the exact complexity of a Turing Machine. Furthermore, sometimes we are simply not interested in the exact number of transitions made or cells explored. In those cases we perform what is called "asymptotic analysis".

Asymptotic Analysis In asymptotic analysis we completely ignore additive and multiplicative constants. We also don't care about small additive terms. We want to see how the machine performs on large inputs.

Asymptotic Analysis Imagine that a TM runs in 2n²+5n steps. For really large n, the term 5n becomes negligible. For example, for n=1000, 5n equals 5,000 steps while 2n² is as large as 2,000,000 steps!

Asymptotic Analysis Also, making a machine run in time n² rather than 2n² (twice as fast) might be a good improvement, but it is nothing compared with making it run in time n, which is considered much more efficient!
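A quick back-of-the-envelope comparison for a large input (our own numbers, just evaluating the three running times):

    n = 1_000_000
    print(2 * n * n)   # 2000000000000 transitions for the 2n^2 machine
    print(n * n)       # 1000000000000 transitions after the 2x speed-up: still enormous
    print(n)           #       1000000 transitions for a linear-time machine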

O-Notation We say that a function f is O(g(n)) (or that f is upper-bounded by g) if there is a constant c>0 and an integer n0 such that f(n) ≤ c∙g(n) for all n ≥ n0. (In the slide's picture, beyond n0 the curve c∙g stays above f.)

O-Notation We say that a function f is Ω(g(n)) (or that f is lower-bounded by g) if there is a constant c>0 and an integer n0 such that f(n) ≥ c∙g(n) for all n ≥ n0. (In the slide's picture, beyond n0 the curve c∙g stays below f.)

O-Notation We say that a function f is Θ(g(n)) (or that f is both upper- and lower-bounded by g) if there are constants c1, c2 > 0 and an integer n0 such that c1∙g(n) ≤ f(n) ≤ c2∙g(n) for all n ≥ n0. (In the slide's picture, beyond n0 the curve f runs between c1∙g and c2∙g.)
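These definitions can also be checked numerically on a sample of inputs. The helper below is our own sketch (the name and interface are made up, and testing finitely many n is evidence, not a proof); it checks a claimed witness (c, n0) for f = O(g):

    def looks_like_big_O(f, g, c, n0, up_to=10_000):
        # Check f(n) <= c*g(n) for every n0 <= n <= up_to.
        return all(f(n) <= c * g(n) for n in range(n0, up_to + 1))

    # 10n = O(n^2) with the witness c=1, n0=10 used later in the lecture:
    print(looks_like_big_O(lambda n: 10 * n, lambda n: n * n, c=1, n0=10))   # True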

O-Notation - Properties If f is upper-bounded by g then g is lower-bounded by f: if f = O(g) then g = Ω(f). If f is both upper- and lower-bounded by g then it is tightly bounded by g: if f = O(g) and f = Ω(g) then f = Θ(g).

The o- and ω- symbols We say that a function f is o(g(n)) if for every constant c>0 there is an integer n0 such that f(n) ≤ c∙g(n) for all n ≥ n0. We say that a function f is ω(g(n)) if for every constant c>0 there is an integer n0 such that f(n) ≥ c∙g(n) for all n ≥ n0.

The o- and ω- symbols Another way to prove that f is o(g) or ω(g) is by using limits. A function f is o(g(n)) if lim n→∞ f(n)/g(n) = 0. A function f is ω(g(n)) if lim n→∞ f(n)/g(n) = ∞.

The o- and ω- symbols o and ω have the same relation to O and Ω as < and > have to ≤ and ≥, while Θ plays the role of ≈. Roughly speaking, if f = O(g) but f ≠ Θ(g), then we say that f = o(g); similarly, if f = Ω(g) but f ≠ Θ(g), then f = ω(g). It is important to understand that f being o(g) does not mean that f is less than g for every input, but that there is some point after which f stays below g (indeed, below c∙g for any constant c>0).

Prove it formally Example 1: 10n = Θ(2n) because 10n and 2n are only a constant factor apart. In other words, for c1=1, c2=5 and n0=1 we have 2n ≤ 10n ≤ 5∙2n for all n≥1. Example 2: 10n = O(n²) because for c=1 and n0=10 we have 10n ≤ n² for all n ≥ n0.

Prove it formally Furthermore, 10n is o(n²) because for every c>0 there is an n0 (namely n0 = 10/c) such that 10n ≤ c∙n² for all n ≥ n0. Another way to see this is by taking the limit: lim n→∞ 10n/n² = 0. The function n² is considered greater than 10n despite the fact that for some small inputs 10n > n² (for example, for n=7, 70 > 49).

Examples Example 3: 1000n² = o(2ⁿ). That is because lim n→∞ 1000n²/2ⁿ = 0: the exponential 2ⁿ eventually outgrows any polynomial.
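The limit characterization can also be checked symbolically, for example with sympy (our own illustration, assuming the sympy package is available):

    from sympy import symbols, limit, oo

    n = symbols('n', positive=True)
    print(limit(10 * n / n**2, n, oo))        # 0, so 10n is o(n^2)      (Example 2)
    print(limit(1000 * n**2 / 2**n, n, oo))   # 0, so 1000n^2 is o(2^n)  (Example 3)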

Input representation, reminder If the input is just a natural number n, we can represent it in unary, in binary, or in decimal. For example, the number 6 is written as 111111 in unary, as 110 in binary, and as 6 in decimal.

Input length If the number n is written in unary then the space needed to represent it is n cells.

Input length If the number n is written in binary then the space needed to represent it is about log2 n cells.

Input length If the number n is written in decimal then the space needed to represent it is about log10 n cells.

Input length Binary is much more efficient than unary: log2 n = o(n).

Input length Decimal is not much more efficient than binary: log2 n = Θ(log10 n). By the change-of-base property of logarithms, logx n = logy n ∙ logx y, so log2 n = log2 10 ∙ log10 n. But log2 10 is a constant, with 3 ≤ log2 10 ≤ 4. So for all n ≥ 1, 3∙log10 n ≤ log2 n ≤ 4∙log10 n.
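A quick numerical check of these claims about input length (our own snippet, using Python's standard math module):

    import math

    for n in (100, 10**6, 10**12):
        unary   = n                   # cells needed in unary
        binary  = len(bin(n)) - 2     # binary digits
        decimal = len(str(n))         # decimal digits
        ratio   = math.log2(n) / math.log10(n)
        print(n, unary, binary, decimal, round(ratio, 3))
    # the ratio is always log2(10) ≈ 3.322, between the constants 3 and 4 above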

Input representation for graphs As we said, we can give a graph as input to a Turing Machine. To represent a graph we list all of its vertices and all of its edges. If the number of vertices is n and the number of edges is m, the input size is O(n+m). The maximum number of edges a graph can have is n choose 2 = n(n-1)/2 = O(n²), so the input size is at most O(n²).
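As a small illustration (our own sketch; the textual encoding is hypothetical, only its O(n+m) length matters):

    def encode_graph(vertices, edges):
        # Encode a graph as a string listing its vertices and then its edges.
        v_part = ",".join(str(v) for v in vertices)
        e_part = ",".join(f"{u}-{v}" for u, v in edges)
        return v_part + ";" + e_part

    word = encode_graph([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
    print(word, len(word))   # length grows like O(n + m); since m <= n(n-1)/2, it is at most O(n^2)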

Complexity Classes The class DTIME(t(n)) contains all those languages L for which there is a DTM that decides L in time O(t(n)), i.e., performing O(t(n)) steps. The class DSPACE(t(n)) contains all those languages L for which there is a DTM that decides L exploring O(t(n)) cells in total.
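In set notation, the same definitions read (a restatement of the slide, not additional material):

    DTIME(t(n))  = { L : there is a DTM that decides L within O(t(n)) steps }
    DSPACE(t(n)) = { L : there is a DTM that decides L exploring O(t(n)) cells }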