Complexity, Speed-up and Compression
Games and Complexity, Guangzhou 2009
Peter van Emde Boas ©



When is More Better?
More time or space will allow you to compute more. This is not always true:
–Constant factor speed-up
–Non-constructible time/space bounds: gap theorems
Compression theorems hold for constructible bounds.

Constant Factor Speed-up for Turing Machines
A Turing machine alphabet is easily compressed by coding k symbols in one symbol of a larger alphabet: Σ^k ---> Σ′, e.g. Σ′ = Σ³ for k = 3.
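A minimal sketch of this alphabet compression in Python (not from the slides; the function names `compress`/`decompress` and the blank symbol `'B'` are illustrative): each block of k tape symbols becomes a single tuple-valued symbol of the larger alphabet Σ^k.

```python
# Toy illustration: compress a tape over alphabet Sigma by grouping
# k consecutive symbols into one symbol of the larger alphabet Sigma^k.

def compress(tape, k, blank='B'):
    """Pad the tape to a multiple of k, then encode each k-block as one tuple symbol."""
    padded = tape + [blank] * (-len(tape) % k)
    return [tuple(padded[i:i + k]) for i in range(0, len(padded), k)]

def decompress(blocks):
    """Invert the encoding by flattening the tuple symbols."""
    return [s for block in blocks for s in block]

tape = list("0110101")          # 7 tape cells
blocks = compress(tape, 3)      # becomes 3 block symbols
assert decompress(blocks)[:len(tape)] == tape
```

A tape of n cells becomes ⌈n/k⌉ block symbols, which is exactly the source of the Space( S(n)/k ) bound on the next slide.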

Constant Factor Speed-up
This yields an automatic constant factor speed-up in space: Space( S(n) ) = Space( S(n)/k ).
Snags: the input is not compressed! This may require additional steps and another worktape. For the single tape model it shows space speed-up only for ω(n) bounds (superlinear growth). And what about time?

The k for 6 Solution
Phases: PREPARE, then COMPUTE & UPDATE.
Snags: you must compress the alphabet 6 times more densely than you expected, and the input must be preprocessed, so it works nicely only for time t(n) = ω(n) (even ω(n²) in the single tape model).

The Direct Solution
Encode two blocks in the finite control of the simulating Turing machine. Externally, scan the block adjacent to the block scanned internally. Now one can always simulate k steps in 1 step and still preserve the above invariant after every step.

Move Left (figure: the finite control before and after a move to the left)

Move Right (figure: the finite control before and after a move to the right)

Time Speed-up
THEOREM: Time( t(n) ) = Time( t(n)/k ) for fixed k, as long as t(n)/k > (1+ε)·n.
This doesn't work for the single tape model; there the input compression already requires time Ω(n²). So in order that Time( t(n) ) ≠ Time( G(t(n)) ) it is necessary that G(m) = ω(m). This is however not sufficient.

Borodin-Trakhtenbrot Gap Theorem
For every G(m) > m one can invent a pathological time bound u(n) such that Time( u(n) ) = Time( G(u(n)) ).
u(m) := min{ k | no machine computation M_i(x) with |x| = m and i ≤ m has a runtime T_i(x) with k ≤ T_i(x) < G(k) }
This is well defined: there are at most m·c^m runtimes which have to be excluded from the interval [ k, G(k) ), so sooner or later the sequence 0, G(0), G(G(0)), ... will provide us with an example. It is also computable, since the condition on the runtimes is itself decidable.
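The search for u(m) can be illustrated with a toy in Python (not from the slides). The real theorem quantifies over all computations M_i(x) with i ≤ m and |x| = m; here we just take a fixed finite set of "runtimes" and a gap function G(k) = 2k + 2 (chosen so that G(m) > m), and scan for the least k whose interval [k, G(k)) contains no runtime.

```python
# Toy illustration of the gap construction: find the least k such that
# no runtime in the (finite) set lies in the interval [k, G(k)).
def gap_value(runtimes, G):
    k = 0
    while any(k <= t < G(k) for t in runtimes):
        k += 1
    return k

runtimes = {1, 3, 5, 9, 11}
G = lambda k: 2 * k + 2          # a gap function with G(m) > m

k = gap_value(runtimes, G)       # -> 12: [12, 26) is free of runtimes
assert all(not (k <= t < G(k)) for t in runtimes)
```

No computation finishes between u(m) steps and G(u(m)) steps, which is exactly why the time bounds u(n) and G(u(n)) accept the same languages.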

Constructible Bounds
t(n) is time constructible when some TM on input n (in binary) can initialize a binary counter with value t(n) in time < t(n).
s(n) is space constructible when some TM on input x of length n can mark a block of s(n) tape cells without ever exceeding this block.
Against constructible bounds effective diagonalization is possible.

SPACE COMPRESSION: Downward Diagonalization
If S₁(n) > log(n) is space constructible and S₂(n) = o(S₁(n)), then Space( S₂(n) ) ≠ Space( S₁(n) ).
On input i#x:
1) mark S₁(|i#x|) tape cells
2) simulate M_i( i#x ) within this block;
   if the simulation leaves the block, accept;
   if the simulation cycles, accept (counting cycles is OK since S₁(n) > log(n))
3) if the simulation terminates, do the opposite: reject if it accepts, and accept otherwise
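The core of step 3 is ordinary diagonalization. A toy Python sketch (not from the slides, and ignoring the space bounds and cycle detection): model finitely many "machines" as boolean functions, and let the diagonal language do the opposite of M_i on inputs of the form i#x.

```python
# Toy diagonalization: the diagonal decider disagrees with every M_i
# on inputs of the form "i#x", so no M_i can compute it.
machines = [
    lambda w: True,                 # M_0 accepts everything
    lambda w: False,                # M_1 rejects everything
    lambda w: len(w) % 2 == 0,      # M_2 accepts even-length inputs
]

def diagonal(w):
    i, _, x = w.partition('#')
    return not machines[int(i)](w)  # step 3 of the slide: do the opposite

for i in range(len(machines)):
    w = f"{i}#anything"
    assert diagonal(w) != machines[i](w)   # D differs from every M_i
```

In the real proof, steps 1 and 2 guarantee that the diagonal machine itself stays within space S₁(n); that is where constructibility of S₁ is used.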

SPACE COMPRESSION: Downward Diagonalization
This program runs in space S₁(n) by construction, and the result can't be computed by any device in space S₂(n): assume M_j does it; then on input j#x with x sufficiently large, cases 1 and 2 won't occur, and therefore M_j( j#x ) accepts iff it rejects... CONTRADICTION!

TIME COMPRESSION: Downward Diagonalization
A similar result for time compression is affected by the overhead required for maintaining the counter ticking down from T₁(n) to 0. If we assume that this overhead is logarithmic, the result becomes:
If T₁(n) > n is time constructible and T₂(n) = o(T₁(n)), then Time( T₂(n) ) ≠ Time( T₁(n)·log(T₁(n)) ).

TIME COMPRESSION: Downward Diagonalization
Improvements:
–Add an extra tape; storing the clock on it makes the overhead vanish.
–With at least two tapes: divide the clock into a head and a tail, and move the head only when the tail underflows. This reduces the overhead to loglog(n); the trick extends, yielding log*(n) overhead (W. Paul).
–Use a distributed, super-redundant clock; the overhead vanishes (Fürer 1982).

Head and Tail Construction
(figure: a k-bit clock split into a tail of k bits and a head of log(k) bits)
After k moves the head is at distance at most k. Using the second tape, the tail can be moved in time O(k).

Compression in General
The diagonalization argument is generic; the minimal overhead determining the size of the separation gaps is machine dependent. It extends to the world of nondeterministic computation, but the proofs become rather complex (Seiferas et al. for the TM time measure). For the RAM world the diagonalization results are similar; constant factor speed-up is problematic.

Constructibility?
Reasonable bounds turn out to be constructible:
–polynomials,
–simple exponentials,
–polylog functions.
The constructible bounds are closed under sum & product, but not closed under difference!

Constructibility?
Many theorems are proven assuming constructibility of the bounds. Some theorems extend to the general case, using the trick of incremental resources:
–Savitch's Theorem
–Hopcroft, Paul, Valiant Theorem
–the resulting bounds are weak (terminating computations only)

Incremental Resource Trick
First try to perform the task within resource K. If this fails, replace K by 2K and try again. Repeat this until the task is completed; at that time you will have allocated at most 2 times the amount of resources required for the task (unless K initially is too big...).
This works if the task indeed can be completed. Otherwise you will add resources forever, and the process will diverge...
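The doubling scheme above can be sketched in a few lines of Python (a toy, not from the slides; `task` is a stand-in that simply succeeds once the budget is large enough):

```python
# Sketch of the incremental resource trick: run the task with budget K,
# doubling K until it succeeds; also track the total resources allocated.
def with_budget(task, K=1):
    spent = 0
    while True:
        spent += K
        if task(K):
            return K, spent     # final budget and total allocated
        K *= 2

needed = 37
task = lambda budget: budget >= needed   # toy task: succeeds once the budget suffices

K, spent = with_budget(task)
assert K == 64                  # first power of two >= 37, and K <= 2 * needed
assert spent == 1 + 2 + 4 + 8 + 16 + 32 + 64
```

The final budget K is at most twice what the task needs, and since the attempts form a geometric series, even the total allocation across all failed attempts stays within a constant factor of the requirement.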