Complexity 5-1 Complexity Andrei Bulatov Complexity of Problems

Complexity 5-2 Classifying Problems We have seen that decision problems (and their associated languages) can be classified into decidable and undecidable. This result was obtained by Turing and others in the 1930s, before the invention of computers. After the invention of computers, it became clear that it would be useful to classify decidable problems, to distinguish harder problems from easier ones. This led to the development of computational complexity theory in the 1960s and 1970s.

Complexity 5-3 The Aim [Diagram: within the set of all languages, the decidable languages are to be divided into "easy" problems and "hard" problems]

Complexity 5-4 Question Which of these decision problems is hardest?
1. For a given n, is n prime?
2. For a given n, is n equal to the sum of 3 primes?¹
3. For a given n, does the n-th person in the Vancouver telephone directory have first initial J?
¹ see

Complexity 5-5 Complexity Measures Every decidable problem has a set of algorithms (i.e., TMs) that solve it. What property of this set of algorithms could we measure to classify the problem?
- The difficulty of constructing such an algorithm?
- The length of the shortest possible algorithm? (Giving a static complexity measure².)
- The efficiency of the most efficient possible algorithm? (Giving a dynamic complexity measure.)
²This has proved useful for classifying the complexity of strings, where it is called Kolmogorov complexity. See "An Introduction to Kolmogorov Complexity and Its Applications", Li and Vitányi, 1993.

Complexity 5-6 Dynamic Complexity Measures A dynamic complexity measure is a numerical function that measures the maximum resources used by an algorithm to compute the answer to a given instance. To define a dynamic complexity measure we have to define, for each possible algorithm M, a numerical function Φ_M on the same inputs.

Complexity 5-7 Blum's Axioms Blum proposed³ that any useful dynamic complexity measure should satisfy the following properties:
- Φ_M(x) is defined exactly when M(x) is defined
- the problem "for given M, x, r, does Φ_M(x) = r?" is decidable
³see "A machine independent theory of the complexity of recursive functions", Blum, Journal of the ACM 14, 1967

Complexity 5-8 Time Complexity The most critical computational resource is often time, so the most useful complexity measure is often time complexity. If we take the Turing Machine as our model of computation, then we can give a precise measure of the time resources used by a computation.
Definition The time complexity of a Turing Machine T is the function time_T such that time_T(x) is the number of steps taken by the computation T(x).
(Note that if T(x) does not halt, then time_T(x) is undefined.)
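
To make the definition concrete, here is a minimal sketch (not from the lecture) of a single-tape DTM simulator that counts steps, so time_T(x) can be read off directly. The machine contains_a_one and all names are illustrative assumptions.

```python
# A toy single-tape DTM simulator that reports time_T(x) as the step count.
BLANK = '_'

def run_tm(delta, start, accept, reject, x):
    """Simulate a DTM on input x; return (accepted, steps). steps = time_T(x)."""
    tape = dict(enumerate(x))          # sparse tape; cells outside are BLANK
    state, head, steps = start, 0, 0
    while state not in (accept, reject):
        symbol = tape.get(head, BLANK)
        state, written, move = delta[(state, symbol)]
        tape[head] = written
        head += 1 if move == 'R' else -1
        steps += 1                      # one transition = one step
    return state == accept, steps       # loops forever if T(x) does not halt

# delta: (state, symbol) -> (new state, symbol to write, head move)
# Example machine: accept iff the input contains the symbol '1'.
contains_a_one = {
    ('scan', '0'): ('scan', '0', 'R'),
    ('scan', '1'): ('yes', '1', 'R'),
    ('scan', BLANK): ('no', BLANK, 'R'),
}

if __name__ == '__main__':
    for w in ['0000', '0001', '']:
        ok, t = run_tm(contains_a_one, 'scan', 'yes', 'no', w)
        print(w or '(empty)', ok, t)   # time_T(x) here is at most |x| + 1
```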

Complexity 5-9 Space Complexity Another important computational resource is the amount of "memory" used by an algorithm, that is, space. The corresponding complexity measure is space complexity. As with time, if we take the Turing Machine as our model of computation, then we can easily give a measure of the space resources used by a computation.
Definition The space complexity of a Turing Machine T is the function space_T such that space_T(x) is the number of distinct tape cells visited during the computation T(x).
(Note that if T(x) does not halt, then space_T(x) is undefined.)
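
The same toy simulator can be extended to report space_T(x) by recording every cell the head reaches; again this is an illustrative sketch, not the lecture's own construction, and the machine below is the same made-up example as before.

```python
# The step-counting simulator extended to track the set of visited tape cells.
BLANK = '_'

def run_tm_with_space(delta, start, accept, reject, x):
    """Return (accepted, steps, cells): cells = space_T(x)."""
    tape = dict(enumerate(x))
    state, head, steps = start, 0, 0
    visited = {0}                      # the head starts on cell 0
    while state not in (accept, reject):
        symbol = tape.get(head, BLANK)
        state, written, move = delta[(state, symbol)]
        tape[head] = written
        head += 1 if move == 'R' else -1
        visited.add(head)              # record every cell the head reaches
        steps += 1
    return state == accept, steps, len(visited)

if __name__ == '__main__':
    # same toy machine as before: accept iff the input contains a '1'
    delta = {('scan', '0'): ('scan', '0', 'R'),
             ('scan', '1'): ('yes', '1', 'R'),
             ('scan', BLANK): ('no', BLANK, 'R')}
    print(run_tm_with_space(delta, 'scan', 'yes', 'no', '0001'))
    # -> (True, 4, 5): 4 steps, 5 distinct cells (the input plus one blank)
```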

Complexity 5-10 Time Complexity of Problems I Now it seems that we could define the time complexity of a problem as the time complexity of the most efficient Turing Machine that decides the corresponding language, but there is a difficulty …

Complexity 5-11 It may be impossible to define the most efficient Turing Machine, because:
- it may be possible to speed up the computation by using a bigger alphabet
- it may be possible to speed up the computation by using more tapes

Complexity 5-12 Linear Speed-up Theorem
Theorem For any Turing Machine T that decides a language L, and any m ∈ N, there is another Turing Machine T' that also decides L, such that time_T'(x) ≤ time_T(x)/m + 2|x| + c for a small constant c.
(see Papadimitriou, Theorem 2.2)

Complexity 5-13 Proof The machine T' has a larger alphabet than T, many more states, and an extra tape. The alphabet includes an extra symbol for each possible k-tuple of symbols in the alphabet of T. T' first compresses its input by writing a symbol on a new tape encoding each k-tuple of symbols of the original input. It then returns the head to the leftmost non-empty cell. This takes 2|x| steps in total. T' then simulates T by manipulating these more complex symbols to achieve the same changes as T. T' can simulate k steps of T by reading and changing at most 3 complex symbols, which can be done in 6 steps. Choosing k = 6m gives a speed-up by a factor of m.
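
The compression phase is the easiest part to visualise. Here is an illustrative sketch (not the lecture's construction) of re-encoding the input so that each new "complex symbol" packs k original symbols, shrinking the used portion of the tape by a factor of k; the function name and blank symbol are assumptions.

```python
# Group the input string into k-tuples, padding the last block with blanks.
def compress(x, k, blank='_'):
    padded = x + blank * (-len(x) % k)
    return [tuple(padded[i:i + k]) for i in range(0, len(padded), k)]

print(compress('abcabca', 3))
# [('a', 'b', 'c'), ('a', 'b', 'c'), ('a', '_', '_')]  -- 7 cells become 3
```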

Complexity 5-14 Table Look-Up We can do a similar trick to speed up the computation on any finite set of inputs.
Theorem For any Turing Machine T that decides a language L, and any m ∈ N, there is another Turing Machine T' that also decides L, such that for all inputs x with |x| ≤ m, time_T'(x) ≤ |x| + 1.

Complexity 5-15 Proof Idea The machine T' has additional states corresponding to each possible input of length at most m. T' first reads to the end of the input, remembering what it has seen by going into the corresponding state. If it reaches the end of the input in one of its special states, it then immediately halts, giving the desired answer. Otherwise it returns to the start of the input and behaves like T.
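
A sketch of the same idea in software terms (illustrative only; slow_decider, LOOKUP and the example language are made up): hard-code the answers for every input of length at most m, and fall back to the original decider otherwise.

```python
from itertools import product

def slow_decider(x):                   # stands in for the original machine T
    return x.count('1') % 2 == 0       # example language: even number of 1s

m = 3
ALPHABET = '01'
LOOKUP = {''.join(w): slow_decider(''.join(w))
          for n in range(m + 1) for w in product(ALPHABET, repeat=n)}

def fast_decider(x):
    if len(x) <= m:
        return LOOKUP[x]               # "remembered in the states": one pass over x
    return slow_decider(x)             # behave like T on longer inputs

assert all(fast_decider(x) == slow_decider(x) for x in ['', '1', '101', '1101'])
```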

Complexity 5-16 Time Complexity of Problems II Given any decidable language, and any TM that decides it, we have seen that we can construct a TM that
- decides it faster by any linear factor
- decides it faster for all inputs up to some fixed length
So we cannot define an exact time complexity for a language, but we can give an asymptotic form …

Complexity 5-17 Math Prerequisites Let f and g be two functions. We say that f(n) = O(g(n)) if there exist positive integers c and n0 such that f(n) ≤ c·g(n) for every n ≥ n0. Example A polynomial of degree k is O(n^k).
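
As a small illustration (not from the slides), the definition can be checked against a claimed witness pair (c, n0) over a finite range of n; a finite check cannot prove the asymptotic claim, it only fails fast when the witness is wrong.

```python
def looks_like_big_O(f, g, c, n0, up_to=10_000):
    """Check f(n) <= c*g(n) for all n in [n0, up_to)."""
    return all(f(n) <= c * g(n) for n in range(n0, up_to))

f = lambda n: 3 * n**2 + 10 * n + 5        # a degree-2 polynomial
g = lambda n: n**2

print(looks_like_big_O(f, g, c=4, n0=12))  # True: 3n^2+10n+5 <= 4n^2 for n >= 12
print(looks_like_big_O(f, g, c=1, n0=1))   # False: c = 1 is not a valid witness
```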

Complexity 5-18 Math Prerequisites Let f and g be two functions. We say that f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0. In other words, f(n) = o(g(n)) means that, for any real number c > 0, there is n0 such that f(n) < c·g(n) for every n ≥ n0. Examples n = o(n²) and n log n = o(n²), but n² is not o(n²).
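
A quick numeric illustration (not from the slides) of the limit form of the definition: the ratio f(n)/g(n) should tend to 0 as n grows.

```python
import math

f = lambda n: n * math.log(n)
g = lambda n: n**2

for n in (10, 1_000, 100_000):
    print(n, f(n) / g(n))   # ratios shrink toward 0, consistent with f = o(g)
```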

Complexity 5-19 Definition For any time constructible function f, we say that the time complexity of a decidable language L is in O(f) if there exists a Turing Machine T which decides L, and constants n0 and c, such that time_T(x) ≤ c·f(|x|) for all inputs x with |x| ≥ n0.

Complexity 5-20 Complexity Classes Now we are in a position to divide up the decidable languages into classes, according to their time complexity.
Definition The time complexity class TIME[f] is defined to be the class of all languages with time complexity in O(f).
(Note: it is sometimes called DTIME[f], for Deterministic Time.)

Complexity 5-21 Examples [Diagram: example time complexity classes nested inside the decidable languages, within the set of all languages]

Complexity 5-22 Reducibility A major tool in analysing and classifying problems is the idea of "reducing one problem to another". Informally, a problem A is reducible to a problem B if we can somehow use methods that solve B in order to solve A.

Complexity 5-23 Many-One Reducibility
Definition A language A is many-one reducible (or m-reducible, or mapping reducible) to a language B if there exists a total and computable function f such that for all x: x ∈ A if and only if f(x) ∈ B.
We can use a machine that decides B (plus a machine that computes f) to build a machine that decides A. So, in a sense, A is "no harder than" B. We denote this by writing A ≤_m B.
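
Here is an illustrative sketch of a many-one reduction (the languages A and B below are made up, not the lecture's example): A = {binary strings with an even number of 1s}, B = {binary strings ending in '0'}, and f maps x to '0' or '1' according to the parity of its 1s, so x ∈ A iff f(x) ∈ B.

```python
def decide_B(y):                 # assumed decider for B
    return y.endswith('0')

def f(x):                        # total, computable reduction from A to B
    return '0' if x.count('1') % 2 == 0 else '1'

def decide_A(x):                 # decide A using only f and the decider for B
    return decide_B(f(x))

for x in ['', '1', '11', '101', '111']:
    assert decide_A(x) == (x.count('1') % 2 == 0)
```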

Complexity 5-24 Properties of Many-One Reducibility
- If A ≤_m B and B is decidable, then so is A. (Just build a machine that computes f(x) and then behaves like the machine that decides B.)
- ≤_m is reflexive and transitive.
- If B is any language (apart from ∅ and Σ*) and A is decidable, then A ≤_m B.
The first property can be used when trying to prove that a language is decidable (or undecidable). The last property implies that many-one reducibility is too weak to distinguish between decidable languages: they are pretty much all reducible to each other!
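
The last property follows from a trivial construction, sketched below under made-up example languages (the standard argument, not the lecture's code): if A is decidable and B has at least one "yes" instance and one "no" instance, then mapping members of A to the fixed yes-instance and non-members to the fixed no-instance is a total computable reduction witnessing A ≤_m B.

```python
def trivial_reduction(decide_A, yes_instance_of_B, no_instance_of_B):
    """Return a reduction f witnessing A <=m B, built from the decider for A."""
    def f(x):
        return yes_instance_of_B if decide_A(x) else no_instance_of_B
    return f

# Example with made-up languages:
decide_A = lambda x: len(x) % 2 == 0          # A = strings of even length
in_B     = lambda y: y.startswith('1')        # B = strings starting with '1'

f = trivial_reduction(decide_A, yes_instance_of_B='1', no_instance_of_B='0')
assert all(in_B(f(x)) == decide_A(x) for x in ['', 'a', 'ab', 'abc'])
```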