Theory of Computing Lecture 15 MAS 714 Hartmut Klauck.


Part II Overview
Problems that cannot be solved efficiently
– P vs. NP
– Time Hierarchy
Problems that cannot be solved at all
– Computability
Weaker models of computation
– Finite Automata

Languages
Definition:
– An alphabet Γ is a finite set of symbols
– Γ* is the set of all finite sequences/strings over the alphabet Γ
– A language over alphabet Γ is a subset of Γ*
– A machine decides a language L if on input x it outputs 1 if x ∈ L and 0 otherwise
A complexity class is a set of languages that can be computed given some restricted resources

The Class P
The class P consists of all languages that can be decided in polynomial time
Which machine model?
– RAMs with the logarithmic cost measure
– Simpler: Turing machines
– Polynomial-size circuits (with simple descriptions)

The Class P
For technical reasons P contains only decision problems
Example: Sorting can be done in polynomial time, but is not a language
Decision version:
– ElementDistinctness = {x₁, …, xₙ : the xᵢ are pairwise distinct}
ElementDistinctness ∈ P
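As a concrete illustration (a minimal Python sketch, not from the lecture), ElementDistinctness can be decided in polynomial time by sorting: after sorting, any duplicate must sit next to its twin.

```python
def element_distinctness(xs):
    """Decide ElementDistinctness: True iff all elements of xs are
    pairwise distinct.  Sorting takes O(n log n) comparisons, so this
    is a polynomial-time decision procedure."""
    xs = sorted(xs)
    # After sorting, any duplicated value occupies adjacent positions.
    return all(a != b for a, b in zip(xs, xs[1:]))

print(element_distinctness([3, 1, 4, 1, 5]))  # False (1 appears twice)
print(element_distinctness([3, 1, 4, 5]))     # True
```

The naive all-pairs check would also run in polynomial time, O(n²); membership in P only asks for some polynomial bound.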

The Class P
Problems solvable in polynomial time?
– Sorting
– Minimum Spanning Trees
– Matching
– Max Flow
– Shortest Path
– Linear Programming
– Many more
Decision version example: {⟨G, W, K⟩ : there is a spanning tree of weight at most K in G}

Turing Machine
Defined by Turing in 1936 to formalize the notion of computation
A Turing machine has a finite control and a 1-dimensional storage tape it can access with its read/write head
Operation: the machine reads a symbol from the tape, does an internal computation, writes another symbol to the tape, and moves the head

Turing Machine
A Turing machine is an 8-tuple (Q, Γ, b, Σ, q₀, A, R, δ)
– Q: set of states of the machine
– Γ: tape alphabet
– b ∈ Γ: blank symbol
– Σ ⊆ Γ − {b}: input alphabet
– q₀ ∈ Q: initial state
– A, R ⊆ Q: accepting/rejecting states
– δ: (Q − (A ∪ R)) × Γ → Q × Γ × {left, stay, right}: transition function

Operation
The tape consists of an infinite number of cells labeled by all integers
In the beginning the tape contains the input x ∈ Σ* starting at tape cell 0
The rest of the tape contains blank symbols
The machine starts in state q₀
The head is on cell 0 in the beginning

Operation
In every step the machine reads the symbol z at the position of the head
Given z and the current state q, it uses δ to determine the new state, the symbol that is written to the tape, and the movement of the head (left, stay, right)
If the machine reaches a state in A it stops and accepts; on states in R it rejects

Example Turing Machine
To compute the parity of x ∈ {0,1}*
Q = {q₀, q₁, qₐ, qᵣ}
Γ = {0, 1, b}
δ:
– (q₀, 1) → (q₁, b, right)
– (q₀, 0) → (q₀, b, right)
– (q₁, 1) → (q₀, b, right)
– (q₁, 0) → (q₁, b, right)
– (q₁, b) → qₐ (accept)
– (q₀, b) → qᵣ (reject)
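The parity machine above can be run mechanically; this is a small Python simulation of it (a sketch for illustration, not part of the lecture). State names follow the slide: q0 = "even number of 1s so far", q1 = "odd", qa/qr = accept/reject.

```python
BLANK = 'b'
# Transition function of the parity TM, keyed by (state, read symbol);
# values are (new state, symbol to write, head movement).
delta = {
    ('q0', '1'): ('q1', BLANK, +1),
    ('q0', '0'): ('q0', BLANK, +1),
    ('q1', '1'): ('q0', BLANK, +1),
    ('q1', '0'): ('q1', BLANK, +1),
    ('q1', BLANK): ('qa', BLANK, 0),   # odd number of 1s: accept
    ('q0', BLANK): ('qr', BLANK, 0),   # even number of 1s: reject
}

def run(x):
    """Simulate the TM on input x; True iff it halts in the accepting state."""
    tape = dict(enumerate(x))          # cell index -> symbol; absent = blank
    state, head = 'q0', 0
    while state not in ('qa', 'qr'):
        symbol = tape.get(head, BLANK)
        state, write, move = delta[(state, symbol)]
        tape[head] = write
        head += move
    return state == 'qa'

print(run('1101'))  # True: three 1s, odd parity
print(run('1100'))  # False: two 1s, even parity
```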

Example
The Turing machine here only moves right and does not write anything useful – it is effectively a finite automaton

Correctness/Time
A TM decides L if it accepts all x ∈ L and rejects all x ∉ L (and halts on all inputs)
The time used by a TM on input x is the number of steps [evaluations of δ] before the machine reaches a state in A or R
The time complexity of a TM M is the function t_M that maps n to the largest time used on any input in Σⁿ
The time complexity of L is upper bounded by g(n) if there is a TM M that decides L and has t_M(n) = O(g(n))

Notes
DTIME(f(n)) is the class of all languages that have time complexity at most O(f(n))
P is the class of languages L such that the time complexity of L can be upper bounded by a fixed polynomial p(n) [the highest power of n appearing in p is fixed, independent of the input]
There are languages for which there is no asymptotically fastest TM [Speedup Theorem]

Space
The space used by a TM M on an input x is the number of cells visited by the head
The space complexity of M is the function s_M mapping n to the largest space used on any x ∈ Σⁿ
The space complexity of L is upper bounded by g(n) if there is a TM M that decides L and s_M(n) = O(g(n))

Facts
A Turing machine can simulate a RAM with the logarithmic cost measure such that a polynomial-time RAM gives a polynomial-time TM
A log-cost RAM can simulate a TM:
– Store the tape in the registers
– Store the current state in a register
– Each register stores a symbol or state [O(1) bits]
– Store also the head position in a register [log s_M bits]
– Compute the transition function by table lookup
Hence the definition of P is robust

Criticism
P is supposed to represent efficiently solvable problems
P contains only languages
– One can identify a problem with many output bits with a set of languages (one for each output bit)
Problems with time complexity n^1000 are deemed easy, while problems with mildly exponential time complexity are deemed hard
Answer: P is mainly a theoretical tool
– In practice such problems don't seem to arise
– Once a problem is known to be in P we can start searching for more efficient algorithms

Criticism
Turing machines might not be the most powerful model of computation
All computers built so far can be simulated efficiently by Turing machines
– Small issue: randomization
Some models have been proposed that are faster, e.g. analogue computation
– Usually not realistic models
Exception: quantum computers
– Quantum Turing machines are probably faster than Turing machines for some problems

Why P?
P has nice closure properties
Example: P is closed under calling subroutines:
– Suppose R ∈ P
– If there is a polynomial-time algorithm that solves L given a free subroutine that computes R, then L is also in P
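The closure under subroutines can be made concrete with a small sketch (my own example, not from the lecture): take R = s-t reachability, which is in P via BFS, and let L = graph connectivity. Deciding L makes polynomially many calls to R, so L stays in P.

```python
from collections import deque

def reachable(adj, s, t):
    """Subroutine R in P: is there a path from s to t?  BFS, O(V + E)."""
    seen, queue = {s}, deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return True
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

def connected(adj):
    """L: is the graph connected?  Makes at most n-1 calls to the
    polynomial-time subroutine R, so L is in P as well."""
    vertices = list(adj)
    return all(reachable(adj, vertices[0], v) for v in vertices[1:])

print(connected({0: [1], 1: [0, 2], 2: [1]}))   # True: a path graph
print(connected({0: [1], 1: [0], 2: []}))       # False: vertex 2 is isolated
```

Here `adj` is an adjacency-list dictionary; the names `reachable` and `connected` are just illustrative.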

Variants of TM
Several tapes
– Often easier to describe algorithms
Example: Palindrome = {xy : x is y in reverse}
– Compute the length, copy x onto another tape, compare x and y in reverse
– Any 1-tape TM needs quadratic time
Any TM with O(1) tapes and time T(n) can be simulated by a 1-tape TM using time O(T(n)²)
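The two-tape idea behind the linear-time palindrome test can be mirrored in Python (a sketch for illustration): one "head" scans the string left-to-right while a second scans right-to-left, comparing symbols.

```python
def is_palindrome(w):
    """Linear-time palindrome test, mirroring the two-tape TM:
    two heads walk toward each other and compare symbols.
    A 1-tape TM is forced into quadratic time for this language."""
    i, j = 0, len(w) - 1
    while i < j:
        if w[i] != w[j]:
            return False
        i, j = i + 1, j - 1
    return True

print(is_palindrome('abba'))   # True
print(is_palindrome('abab'))   # False
```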

Problems that are not in P
EXP: the class of all L that can be decided by Turing machines that run in time at most exp(p(n)) for some polynomial p
We will show later that EXP is not equal to P
– i.e., there are problems that can be solved in exponential time, but not in polynomial time
– These problems tend to be less interesting

Problems that are not in P?
Definition:
– Given an undirected graph G = (V, E), a clique is a set of vertices S ⊆ V such that all u, v ∈ S are connected by an edge
– The language MaxClique is the set of all ⟨G, k⟩ such that G has a clique of size k (or more)
Simple algorithm: enumerate all subsets of size k, test if they form a clique
– Time O(n^k · poly(k))
– Since k is not constant, this is not a polynomial-time algorithm
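The simple enumeration algorithm looks like this in Python (an illustrative sketch; the graph encoding as an edge list is my choice, not the lecture's):

```python
from itertools import combinations

def has_clique(n, edges, k):
    """Decide MaxClique by brute force: try all C(n, k) vertex subsets
    and check each for pairwise adjacency.  This takes O(n^k * poly(k))
    time, which is not polynomial because k is part of the input."""
    adjacent = set(map(frozenset, edges))
    return any(
        all(frozenset((u, v)) in adjacent for u, v in combinations(S, 2))
        for S in combinations(range(n), k)
    )

# Triangle {0,1,2} plus a pendant vertex 3: a 3-clique exists, a 4-clique doesn't.
edges = [(0, 1), (1, 2), (0, 2), (2, 3)]
print(has_clique(4, edges, 3))  # True
print(has_clique(4, edges, 4))  # False
```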

Clique It turns out that MaxClique is in P if and only if thousands of other interesting problems are We don’t know an efficient algorithm for any of them Widely believed: there is no such algorithm

Fun Piece of Information
A random graph is a graph generated by putting each possible edge into G with probability ½
The largest clique in G has size ⌈r(n)⌉ or ⌊r(n)⌋ for an explicit r(n) that is around (2 + o(1)) · log(n)
– with probability approaching 1
Hence on random graphs the max clique can be found in sub-exponential time
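This concentration can be observed experimentally on a toy instance (my own sketch, not from the lecture): sample G(n, ½) and search for the largest clique by brute force, which is feasible at this tiny size.

```python
import random
from itertools import combinations

random.seed(0)
n, p = 14, 0.5
# Sample G(n, 1/2): include each possible edge independently with probability 1/2.
adjacent = {frozenset(e) for e in combinations(range(n), 2) if random.random() < p}

def max_clique_size():
    """Exhaustive search for the largest clique; fine for n = 14.
    Cliques are downward closed, so once no k-clique exists we can stop."""
    best = 1
    for k in range(2, n + 1):
        if any(all(frozenset(pair) in adjacent for pair in combinations(S, 2))
               for S in combinations(range(n), k)):
            best = k
        else:
            break
    return best

size = max_clique_size()
print(size)  # concentrated around (2 + o(1)) * log2(n) for random graphs
```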