Lecture 9. Resource bounded KC

The K- and C-complexities depend on unlimited computational resources. Kolmogorov himself first observed that we can put resource bounds on such computations. This was subsequently studied by Barzdins, Loveland, Daley, Levin, and Adleman; Hartmanis, Ko, and Sipser started a new trend in computer science in the 1980s. In the 1960s, two parallel theories were developed: computational complexity (Hartmanis and Stearns, Rabin, Blum), measuring time and space, and Kolmogorov complexity, measuring information. Resource bounded KC links the two theories.

Theory (I will do C; the same applies for K)

C^{t,s}(x|y) is the t-time, s-space bounded Kolmogorov complexity of x conditional on y, i.e. the length of the shortest program that, on input y, produces x in time t and space s. For the standard C and K complexities it does not matter whether we say "produces x" or "accepts x"; they are the same. But in the resource-bounded case they are likely to differ. So Sipser defined CD^{t,s}(x|y) to be the length of the shortest program that, on input y, accepts x in time t and space s. When we use just one parameter, such as time, we simply write C^t or CD^t.

Relationship between C^t and CD^t

Theorem (Sipser). (a) For any t, CD^t(x) ≤ C^{t+O(n)}(x) + O(1). (b) Conversely, for each polynomial p there is a polynomial q such that C^q(x|A) ≤ CD^p(x) + O(1), where A is an NP-complete set. Remark: when t is exponential, C^t and CD^t coincide. Proof. (a) is trivial. (b) holds since the following set is NP-complete: A = {⟨T, y, n⟩ : T accepts yz in time p for some z such that |yz| = n}, where T is the program that accepts only x in time p. So we can find the string accepted by T by querying A for its bits successively (the next bit is 0 or 1; ask about both extensions). QED
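The prefix-search idea behind part (b) can be illustrated in a few lines. This is a toy sketch: the "machine" accepts exactly one hidden string, and the oracle queries (which in the theorem go to the NP-complete set A) are simulated here by brute-force search over extensions.

```python
# Toy illustration of the prefix search in Sipser's theorem, part (b):
# if an oracle answers "does some accepted string of length n extend
# the prefix y?", we can reconstruct an accepted string with n queries,
# one per bit. Here the oracle is simulated by exhaustive search.

def recover_accepted_string(accepts, n):
    """Recover the length-n string accepted by `accepts` via prefix queries."""
    def oracle(prefix):
        # "Is there z such that prefix+z has length n and is accepted?"
        # Brute force here; an NP question in the real construction.
        if len(prefix) == n:
            return accepts(prefix)
        return oracle(prefix + "0") or oracle(prefix + "1")

    y = ""
    for _ in range(n):
        y = y + "0" if oracle(y + "0") else y + "1"
    return y

hidden = "10110"
x = recover_accepted_string(lambda s: s == hidden, len(hidden))
```

Only n oracle queries touch the top-level loop, which is why the reconstruction costs just polynomially more than the CD^p-program plus oracle access.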

Hierarchy (Hartmanis)

It is possible to obtain a resource-bound hierarchy: there are always strings that need more seed length, time, or space to compute. That is, define C[f(n), t(n), s(n)] to be the set of strings of length n that can be generated by programs of length at most f(n), in time t(n) and space s(n). Then increasing any of the three parameters (to a sufficient degree) results in a strictly bigger class.

Compression relative to a set

If A is computable, |x| = n and x ∈ A, we know: C(x), CD(x) ≤ log|A^{=n}| + O(log n), where A^{=n} is the set of length-n strings in A. But with a time limit this is not so easy: the proof above requires checking all 2^n strings, even if A is in P. We will only give a sample of the results in this direction. Theorem [Buhrman-Fortnow-Laplante]. Let A be in P. There is a polynomial p such that for all x ∈ A^{=n}: CD^p(x|A^{=n}) ≤ 2log|A^{=n}| + 2log n + O(1).

Proof of the Buhrman-Fortnow-Laplante theorem

Lemma. Let S = {x_1, …, x_d} ⊆ {0, …, 2^n − 1}. For each x_i ∈ S there is a prime p_i ≤ 2dn such that x_i ≢ x_j (mod p_i) for all j ≠ i. Proof of Lemma. For a fixed pair (i, j), there are at most log_c 2^n = n/log c primes p with c ≤ p ≤ 2c such that x_i ≡ x_j (mod p), since |x_i − x_j| < 2^n has at most that many prime factors of size at least c. There are d − 1 pairs (x_i, x_j) with j ≠ i, so in total at most (d − 1)n/log c such bad primes. By the prime number theorem there are about c/log c primes in the range [c, 2c]. Taking c > (d − 1)n, there must be at least one prime p_i in this range with x_i ≢ x_j (mod p_i) for all j ≠ i. Since p_i ≤ 2c, we get p_i ≤ 2dn. QED

Using the lemma with S = A^{=n}, for x ∈ A^{=n} get the prime p_x. We can write a program: on input y, if y ∉ A^{=n}, reject; else if y ≡ x (mod p_x), accept; else reject. The program needs the information p_x and x mod p_x, with p_x ≤ 2|A^{=n}|n. This is 2|p_x| + O(1) bits, approximately 2log|A^{=n}| + 2log n + O(1). QED

Remark: it is possible to improve the bound to log|A^{=n}| if we use a random string (Sipser; see the theorem in the textbook).
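The lemma can be checked directly on a small example. This sketch brute-forces the distinguishing prime rather than reproducing the counting argument; the set S and parameters below are made up for illustration.

```python
# Direct check of the lemma: for each x_i in S ⊆ {0,...,2^n - 1},
# some prime p_i <= 2*d*n separates x_i from every other element of S
# modulo p_i. We simply search the primes up to the bound.

def is_prime(m):
    if m < 2:
        return False
    return all(m % q for q in range(2, int(m ** 0.5) + 1))

def distinguishing_prime(x, others, bound):
    """Smallest prime p <= bound with x % p != y % p for all y in others."""
    for p in range(2, bound + 1):
        if is_prime(p) and all(x % p != y % p for y in others):
            return p
    return None

n, S = 8, [3, 57, 129, 200]   # toy S ⊆ {0,...,2^8 - 1}, d = 4
d = len(S)
primes = [distinguishing_prime(x, [y for y in S if y != x], 2 * d * n)
          for x in S]
```

In the theorem, knowing p_x and x mod p_x (each at most 2|A^{=n}|n) suffices to single x out among the members of A^{=n}, giving the 2log|A^{=n}| + 2log n bound.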

Using it …

Example. There is a recursive set A such that P^A ≠ NP^A. Proof. Define f(1) = 2, f(k) = 2^{f(k−1)}. By diagonalization, choose B so that B ⊆ {1^{f(k)} : k ≥ 1} and B ∈ DTIME[n^{log n}] − P. Use B to construct A as follows: for each 1^{f(k)} ∈ B, put into A the first string of length n = f(k) from C[log n, n^{log n}] − C[log n, n^{log log n}]. (*) A is recursive, and B ∈ NP^A since an NTM can guess the string in A. But B is not in P, and a DTM cannot find a string in A to query in polynomial time, by (*); hence the oracle A does not help it, and B is not in P^A either. QED Remark: we have used the fact that some strings are hard to get at without enough time.

Instance Complexity (Orponen-Ko-Schöning-Watanabe)

This is a cute concept. Let T be a universal TM. Given a set A and a time bound t, define the instance complexity of x with respect to A as ic_T^t(x : A) = min{|p| : T(p, x) halts, and for all y such that T(p, y) halts, T(p, y) = A(y) in time t}. One can prove an invariance theorem for this as well, so we drop the subscript T. The goal is to identify the hard instances that make a language hard. Facts: CD^t(x) = ic^t(x : {x}); a set A is in P iff there is a polynomial p and a constant c such that ic^p(x : A) ≤ c for all x.

Levin's Kt complexity and universal search

Imagine how much time God takes to create a sequence (such as our genomes). Assume He can flip a coin, and He has Turing machines. He can either keep flipping a coin, or enumerate, until He gets x, which takes time about 2^{|x|}; or He can compute x from a short seed. For a random sequence x the first route takes expected time 2^{|x|}, but easy sequences can be generated from a shorter (random) seed in time t. So let us define age(x) = min_p {2^{|p|}·t : U(p) = x in t steps}. Then Kt(x) = log age(x).

Universal search

If there is a polynomial-time algorithm that solves NP-hard problems, can we find it? Yes; we now describe a method of Levin, who called it "universal search". Lemma. If there is an algorithm A that solves SAT in time t(n), then the following algorithm solves it in time c·t(n) for some constant c. Proof. The algorithm is: simulate all TMs in the following dovetailing fashion: T_1 gets every 2nd step; T_2 every 2nd step among the remaining steps; T_3 every 2nd step among the steps remaining after that; and so on. If T_k solves SAT in time t(n), then this algorithm solves it in 2^k·t(n) + O(1) steps. QED Remark: Levin's actual algorithm uses Kt complexity and is slightly faster.
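The dovetailing schedule above has a neat closed form: global step s (counting from 1) goes to machine k = (number of trailing zero bits of s) + 1, so T_k receives exactly a 2^{-k} fraction of all steps. A small sketch to verify the step accounting:

```python
# Dovetailing schedule of universal search: T_1 takes every 2nd step,
# T_2 every 2nd of the remaining steps, etc. Equivalently, step s goes
# to machine k where k - 1 is the number of trailing zeros of s.

def machine_for_step(s):
    k = 1
    while s % 2 == 0:
        s //= 2
        k += 1
    return k

def step_counts(total_steps, n_machines):
    """How many of the first `total_steps` steps each machine receives."""
    counts = [0] * n_machines
    for s in range(1, total_steps + 1):
        k = machine_for_step(s)
        if k <= n_machines:
            counts[k - 1] += 1
    return counts

counts = step_counts(1024, 5)
```

Since T_k runs once every 2^k global steps, a machine solving SAT in t(n) steps finishes within about 2^k·t(n) global steps, matching the bound in the lemma.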

Logical depth (Chaitin, Bennett)

We think a book on number theory is deep. Yet it is not random; it has very low Kolmogorov complexity: all its theorems can be derived automatically by enumerating all proofs, hence C(number theory book) = O(1). Some other things are shallow: children's books, the sequence 1^n. In fact a Kolmogorov-random sequence, such as the state of an ideal gas, is also shallow. Can we identify information with depth? 1^n does not contain information, but a random string does not contain information either.

Shallow and deep things

Crystal is shallow. Gas is also shallow. A math book is deep. Life is deep. "A structure is deep if it is superficially random but subtly redundant." (Bennett)

Defining "depth"

Attempt 1. Use the number of steps needed to compute x from x*, the shortest program generating x. This fails: perhaps there is another program only slightly longer than x* that runs much faster, so the definition is not "stable". Attempt 2. Relax to almost-minimal programs: say x has depth d within error 2^{−b} if x can be computed in d steps by a program p with |p| ≤ |x*| + b, i.e. 2^{−|p|}/2^{−K(x)} ≥ 2^{−b}. But what if there are many such programs of length |x*| + b? Should we consider them all? Thus we should consider all programs, via the algorithmic probability Q(x) = Σ_{U(p)=x} 2^{−|p|}.

Defining "depth", continued

Definition. The depth of a string x at significance level ε = 2^{−b} is depth_ε(x) = min{t : Q^t(x)/Q(x) ≥ ε}, where Q^t(x) = Σ_{U(p)=x in at most t steps} 2^{−|p|}. We say a string x is (b, d)-deep if d = depth_ε(x) with ε = 2^{−b}. Remark: thus a (b, d)-deep string receives at least a 1/2^b fraction of its algorithmic probability from programs running in at most d steps.

Examples

Recall the halting probability Ω and the characteristic sequence χ of the halting problem (the i-th bit of χ indicates whether the i-th TM halts). Although both encode the halting information of TMs, χ is deep and Ω is shallow. This is because Ω encodes the halting problem in maximum density and is recursively indistinguishable from a random string, hence practically useless: K(Ω_{1:n}) ≥ n − O(1). K(χ_{1:n}), on the other hand, is small (O(log n)). If t is the minimum time required to compute χ_{1:n} from a log n-sized program, then χ_{1:n} is t-deep at all significance levels. In fact it is very deep, as t grows faster than any computable function.

Consider our genome. How deep is it? If it had to evolve from nothing (or from very shallow strings), how long would that take? This is actually a computation in PSPACE (linear space). If P = PSPACE, then every string (genome) has polynomial depth; otherwise some may have exponential depth. We of course know (from the historical record) that our genomes did not have exponential depth, if God did not play dice with us.