Theory of Computing Lecture 21 MAS 714 Hartmut Klauck.


1 Theory of Computing Lecture 21 MAS 714 Hartmut Klauck

2 Time Hierarchy
We briefly return to complexity theory.
– We believe that NP is larger than P.
– Now we prove that EXP is larger than P.
– Why now? The idea of the proof is similar to undecidability proofs via diagonalization.

3 Time Hierarchy
Informal statement:
– If we increase the computing time enough, more languages can be decided.
– DTIME(f(n)) is larger than DTIME(g(n)) as long as f(n) is suitably larger than g(n).
Problem: in this generality the statement is false.
Fix: f(n) and g(n) need to be "nice" bounds.

4 Constructible bounds
What is "nice"? We should be able to compute f(n), so that we can use a counter from 1…f(n).
Definition: A function f: N → N is called time-constructible if f(n) can be computed by a TM from input 1^n in time O(f(n)).
Example: everything reasonable, e.g. polynomial and exponential bounds. Usually one first computes n in binary.
Example: given 1^n we can compute n in binary in time O(n log n) and then n^2, taking O(n log n) time in total.
Example: given 1^n we can compute 2^n in time O(n).
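The unary-to-binary step above can be mirrored in a short Python sketch. This is of course not a Turing machine, only an illustration of the counting idea; the function name is my own.

```python
# Sketch of the first step in constructibility arguments: read 1^n and
# build n in binary by incrementing a counter once per input symbol.
# On a TM each increment touches O(log n) bits, giving O(n log n) total.
def unary_to_binary(ones):
    n = 0
    for symbol in ones:
        assert symbol == "1"   # the input must be of the form 1^n
        n += 1
    return bin(n)[2:]          # binary representation without the '0b' prefix

print(unary_to_binary("1" * 6))   # '110'
```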

5 Time Hierarchy
Theorem:
– Let f(n), g(n) > n be time-constructible functions with f(n) = Ω(g^2(n) log^2 n).
– Then there is a language L such that
  L is in DTIME(f(n))
  L is not in DTIME(g(n))

6 Proof
L = {⟨M⟩, x : M accepts x in at most g(|x|) log|x| steps}, where M is a 1-tape TM.
We first need to show that L is in DTIME(f(n)) on a 1-tape TM.
– We give a 2-tape machine that runs in time O(g(|x|) log|x|); then there is a 1-tape machine that runs in time O(f(n)) (old exercise).
– 2-tape machine: tape 1: simulate M on x using the universal TM.
– Tape 2: first construct g(|x|) [time O(g(n))].
– Then use tape 2 to count the steps of M.
– If more than g(|x|) log|x| steps are used: reject.
– Accept if M accepts within g(|x|) log|x| steps.
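The budgeted simulation kept on tape 2 can be sketched in Python. The `ToyMachine` class and its `initial_config`/`step` interface are hypothetical stand-ins for "M run by the universal TM", used only to exercise the clock.

```python
# A clocked simulation: run a machine, but stop and reject once the
# step budget (the counter kept on tape 2) is exhausted.
def clocked_run(machine, x, budget):
    config = machine.initial_config(x)
    for _ in range(budget):                  # decrement the tape-2 counter
        config, status = machine.step(config)
        if status == "accept":
            return True
        if status == "reject":
            return False
    return False                             # budget exhausted: reject

class ToyMachine:
    """Hypothetical stand-in for the simulated machine M:
    it accepts after exactly n steps."""
    def __init__(self, n):
        self.n = n
    def initial_config(self, x):
        return 0                             # configs are just step counts here
    def step(self, config):
        config += 1
        return config, ("accept" if config == self.n else "running")

print(clocked_run(ToyMachine(5), "aaaaa", budget=10))  # True: accepts in time
print(clocked_run(ToyMachine(5), "aaaaa", budget=3))   # False: the clock runs out
```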

7 Proof
Show that L is not in DTIME(g(n)).
Assume A decides L in time O(g(n)).
Intuition: A cannot simulate M, because the time is not enough.
Construct a new machine N that on input x simulates A on the pair (x, x) and rejects if A accepts (x, x), and vice versa.
N is O(g(n)) time bounded [use the universal 1-tape TM].
What happens on input x = ⟨N⟩?
– If N accepts in time at most g(|x|) log|x|, then A says accept, and hence N will reject. This takes time O(g(|x|)).
– If N does not accept in time g(|x|) log|x|, then A says reject, and N will accept. This takes time O(g(|x|)).
Contradiction for large enough |x|.
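The trick N plays on A is ordinary diagonalization, which can be seen in miniature: given any listed family of 0/1-valued functions (the analogue of "all machines A could stand for"), build one function that disagrees with each of them on the diagonal. This Python sketch illustrates the idea, not the machines themselves.

```python
# Diagonalization in miniature: d flips the i-th function on input i,
# so d differs from every listed function somewhere.
def diagonal(functions):
    def d(i):
        return 1 - functions[i](i)   # flip the diagonal entry
    return d

fs = [lambda i: 0, lambda i: i % 2, lambda i: 1]
d = diagonal(fs)
for i, f in enumerate(fs):
    assert d(i) != f(i)   # d disagrees with every f on the diagonal
```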

8 Note
Tighter time hierarchies are known: it is enough to increase the time bound by an ω(log g(n)) factor.
– Idea: simulate the 2-tape machine by keeping the counter on the first tape and moving it along with the head in each step.
– The counter needs O(log g(n)) bits.
For the nondeterministic time hierarchy the log-factor is not needed.

9 Corollary
DTIME(n^k) is a proper subset of DTIME(n^(2k+1)) for all constants k.
P is a proper subset of DTIME(n^(log n)).
– log n grows faster than any constant.
P is a proper subset of EXP.
Improved hierarchy: DTIME(n^k) is a proper subset of DTIME(n^(k+ε)).

10 Space Complexity
We list a few results:
– NP ⊆ PSPACE ⊆ EXP
– PSPACE = NPSPACE. More specifically, NSPACE(f(n)) ⊆ DSPACE(f^2(n)) [Savitch's theorem].
– Def: co-NSPACE(f(n)) = {L : the complement of L is in NSPACE(f(n))}. Then co-NSPACE(f(n)) = NSPACE(f(n)) [Immerman–Szelepcsényi].
– Space hierarchy: like the time hierarchy, but no log-factor is needed.
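The core of Savitch's theorem is a space-reusing recursion on graph reachability: a path of length at most 2^i from u to v exists iff some midpoint w has paths of length at most 2^(i-1) on both sides. The recursion depth is O(log n) with O(log n) bits per level, which is where the quadratic blow-up comes from. A Python sketch (names are my own):

```python
import math

# Savitch's recursion: is there a path u -> v of length <= 2^i?
# Try every midpoint w and recurse with half the length bound;
# the two recursive calls reuse the same space.
def reach(adj, u, v, i):
    if u == v:
        return True
    if i == 0:
        return v in adj[u]           # direct edge
    return any(reach(adj, u, w, i - 1) and reach(adj, w, v, i - 1)
               for w in adj)         # guess a midpoint w

adj = {0: {1}, 1: {2}, 2: {3}, 3: set()}
i = math.ceil(math.log2(len(adj)))   # path length <= 2^i covers all of V
print(reach(adj, 0, 3, i))           # True: path 0 -> 1 -> 2 -> 3
print(reach(adj, 3, 0, i))           # False: no edges leave vertex 3
```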

11 Provably intractable
Problems that are complete for EXP provably cannot be computed in polynomial time.
Problems that are complete for EXPSPACE provably cannot be computed with polynomial space.
Completeness: under polynomial-time reductions.
Proof:
– If the complete problem is in P, then EXP = P (resp. EXPSPACE = PSPACE), contradicting the hierarchy theorem.

12 P vs. NP
Why can we not prove that P is not equal to NP using diagonalization?
The diagonalization technique relativizes, i.e., it applies to oracle TMs.
There are oracles relative to which P = NP (and oracles where the opposite is the case).

13 Last Part of the Course
Turing machines with constant space, i.e., machines that cannot write on the tape.
– Any additional constant space can be moved into the internal state.
Definition: A two-way deterministic finite automaton is a Turing machine that cannot write on its tape and cannot leave the section where the input is.
Definition: A deterministic finite automaton (DFA) is a two-way DFA whose head only moves from left to right on the input tape.

14 Finite Automata
Definition: A DFA is a 5-tuple (Q, Σ, δ, q0, F):
– Q: states
– Σ: alphabet
– δ: Q × Σ → Q: transition function
– q0: starting state
– F ⊆ Q: accepting states
No need to specify terminal states; the machine halts on the last input symbol.
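The 5-tuple translates directly into code. A minimal Python sketch; the concrete automaton below (strings over {a, b} ending in "a") is an illustrative choice, not from the slides.

```python
# A DFA as a 5-tuple (Q, Sigma, delta, q0, F), following the definition.
class DFA:
    def __init__(self, Q, Sigma, delta, q0, F):
        self.Q, self.Sigma, self.delta, self.q0, self.F = Q, Sigma, delta, q0, F

    def accepts(self, w):
        q = self.q0
        for a in w:                  # read the input from left to right
            q = self.delta[(q, a)]
        return q in self.F           # accept iff the final state is in F

# Example automaton: strings over {a, b} that end in 'a'.
ends_in_a = DFA(
    Q={"q0", "q1"}, Sigma={"a", "b"},
    delta={("q0", "a"): "q1", ("q0", "b"): "q0",
           ("q1", "a"): "q1", ("q1", "b"): "q0"},
    q0="q0", F={"q1"})

print(ends_in_a.accepts("abba"))   # True
print(ends_in_a.accepts("ab"))     # False
```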

15 Graph of a finite automaton
A DFA can be described by a directed graph with |Q| vertices and |Q|·|Σ| edges.
Every vertex has |Σ| outgoing edges.
Computation starts at vertex q0.
In each step we follow the edge labeled by the next input symbol from the current state to the next state.
The DFA accepts iff the state reached after reading the whole input is in F.

16 Example 1: Parity
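The state diagram for this slide did not survive the transcript. One common version of the parity example, sketched in Python: two states track whether the number of 1s read so far is even, and the automaton accepts exactly the binary strings with an even number of 1s.

```python
# Parity DFA: states 'even'/'odd' track the parity of the 1s read so far.
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd",  "0"): "odd",  ("odd",  "1"): "even"}

def parity_accepts(w):
    q = "even"                 # starting state: zero 1s seen, even
    for a in w:
        q = delta[(q, a)]
    return q == "even"         # accept iff the count of 1s is even

print(parity_accepts("1101"))  # False: three 1s
print(parity_accepts("1001"))  # True: two 1s
```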

17 Example 2: The set of all strings that contain the substring 001
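This automaton's diagram is also missing from the transcript; a standard construction, sketched in Python, uses one state per matched prefix of "001", with state 3 absorbing once the substring has been found.

```python
# DFA for "contains 001": state i = length of the current match
# with a prefix of "001"; state 3 is absorbing (substring found).
delta = {(0, "0"): 1, (0, "1"): 0,
         (1, "0"): 2, (1, "1"): 0,
         (2, "0"): 2, (2, "1"): 3,
         (3, "0"): 3, (3, "1"): 3}

def contains_001(w):
    q = 0
    for a in w:
        q = delta[(q, a)]
    return q == 3              # accept iff we ever completed "001"

print(contains_001("110010"))  # True
print(contains_001("10111"))   # False
```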

18 Regular languages
Definition: A language is called regular if it can be decided by a DFA.
– This is definition 1; the name is explained by definition 2 later (via regular expressions).

19 Topics:
1. Closure properties
2. Power of nondeterminism
3. Which languages are regular and which ones are not?
4. Two-way versus one-way automata
5. Regular expressions
6. DFA as a data structure for languages
– DFA minimization

20 1) Closure: complementation
Observation:
– If L is regular, then so is the complement of L.
Proof: swap the accepting and non-accepting states of a DFA for L.
Closure under union and intersection: later.
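The proof is a one-line construction: replace F by Q \ F and keep everything else. A minimal Python sketch, reusing the even-number-of-1s automaton as an illustrative example:

```python
# Closure under complement: the same DFA with accepting and
# non-accepting states swapped decides the complement language.
def complement_accepting_states(Q, F):
    return Q - F                       # new accepting set: Q \ F

def run(delta, q0, F, w):
    q = q0
    for a in w:
        q = delta[(q, a)]
    return q in F

# DFA for "even number of 1s" over {0, 1} (illustrative choice).
Q = {"even", "odd"}
delta = {("even", "0"): "even", ("even", "1"): "odd",
         ("odd",  "0"): "odd",  ("odd",  "1"): "even"}
Fc = complement_accepting_states(Q, {"even"})   # now decides "odd number of 1s"

print(run(delta, "even", Fc, "11"))   # False: two 1s, even
print(run(delta, "even", Fc, "1"))    # True: one 1, odd
```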

