1 Scott Aaronson (University of Texas at Austin)
The P vs. NP Problem. Scott Aaronson (University of Texas at Austin). Mexico City, August 8, 2018

2 Navier-Stokes Equations Hodge Conjecture Birch and Swinnerton-Dyer
Frank Wilczek (Physics Nobel 2004) was once asked: "If you could ask a superintelligent alien one yes-or-no question, what would it be?" His response: "P vs. NP. That basically contains all the other questions, doesn't it?" (The slide lists the Clay Millennium Prize Problems: P vs. NP, Riemann Hypothesis, Poincaré Conjecture, Yang-Mills Mass Gap, Navier-Stokes Equations, Hodge Conjecture, Birch and Swinnerton-Dyer.)

3 Computer Science In a Few Slides
Decision problem: "Given an integer N, is it prime?" Each N is an instance of the problem. The size of the instance, n = log₂ N, is the number of bits needed to specify it. An algorithm is polynomial-time if it uses at most kn^c steps, for some universal constants k, c. P is the class of all decision problems that have polynomial-time algorithms to solve all instances: "the efficiently solvable problems." Agrawal-Kayal-Saxena 2002: Primes ∈ P
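To make the "instance size" point concrete, here is a minimal Python sketch (my own illustration, not from the slides): trial division takes roughly √N ≈ 2^(n/2) steps in the bit-length n, so it is exponential-time in the sense defined above, whereas the AKS test runs in time polynomial in n.

```python
import math

def is_prime_trial_division(N: int) -> bool:
    """Decide primality by trial division.

    The loop runs up to sqrt(N) ~= 2**(n/2) iterations, where
    n = N.bit_length() is the instance size in bits -- so this is an
    exponential-time algorithm in the sense used on the slide.
    (Agrawal-Kayal-Saxena 2002 showed primality is decidable in time
    polynomial in n.)
    """
    if N < 2:
        return False
    for d in range(2, math.isqrt(N) + 1):
        if N % d == 0:
            return False
    return True

N = 1_000_003
print(N.bit_length(), is_prime_trial_division(N))  # instance size n, then the answer
```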

4 What, mathematically, is an “algorithm”?
Great question! Turing machine: Our model for a serial, deterministic, classical digital computer

5 NP: Nondeterministic Polynomial Time
Class of all decision problems for which a "yes" answer has a polynomial-size proof that can be verified in polynomial time. Example: Does a given enormous integer have a prime factor ending in 7?
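A hedged sketch of the verification step (the numbers are my own toy example): given a claimed prime factor p as the witness, checking a "yes" answer is easy; finding such a p is the hard part.

```python
def verify_factor_ending_in_7(N: int, p: int) -> bool:
    """Check a claimed witness for: 'N has a prime factor ending in 7'.

    Verification is cheap: confirm p divides N, p ends in 7, and p is prime.
    (The primality check here is naive; for large witnesses a polynomial-time
    test such as Miller-Rabin would be used instead.)
    """
    def is_prime(m: int) -> bool:
        return m > 1 and all(m % d for d in range(2, int(m**0.5) + 1))
    return N % p == 0 and p % 10 == 7 and is_prime(p)

print(verify_factor_ending_in_7(91, 7))   # 91 = 7 * 13 -> True
```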

6 A problem is NP-complete if it’s both NP-hard and in NP
A problem X is NP-hard if, by using a “black box” or “oracle” for X, you could solve all NP problems in polynomial time A problem is NP-complete if it’s both NP-hard and in NP Example: Given a graph G, does G have a Hamilton cycle (tour that visits each vertex exactly once)?
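The sketch below (an illustration, not from the slide) makes the NP structure of Hamilton cycle explicit: the search loops over exponentially many orderings, but verifying any single proposed ordering is fast, which is all that membership in NP requires.

```python
from itertools import permutations

def has_hamilton_cycle(adj: dict) -> bool:
    """Brute-force search for a Hamilton cycle (exponential time: ~n! orderings).

    Membership in NP only needs the verification step inside the loop:
    given a proposed ordering of the vertices, checking that consecutive
    vertices (and the last back to the first) are adjacent is fast.
    """
    vertices = list(adj)
    first, rest = vertices[0], vertices[1:]
    for order in permutations(rest):
        cycle = [first, *order]
        if all(cycle[(i + 1) % len(cycle)] in adj[v] for i, v in enumerate(cycle)):
            return True
    return False

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # the 4-cycle graph
print(has_hamilton_cycle(square))  # True
```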

7 NP Efficiently verifiable
NP-hard ("all NP problems are efficiently reducible to these"): matrix permanent, halting problem, … NP-complete: Hamilton cycle, Steiner tree, graph 3-coloring, satisfiability, maximum clique, … NP ("efficiently verifiable"): factoring, graph isomorphism, … P ("efficiently solvable"): graph connectivity, primality testing, matrix determinant, linear programming, … Here's a rough map of the world. At the bottom is P, which includes everything we know how to solve quickly with today's computers. Containing it is NP, the class of problems where we could recognize an answer if we saw it, and at the top of NP is this huge family of NP-complete problems. There are plenty of problems that are even harder than NP-complete: one famous example is the halting problem, to determine whether a given computer program will ever stop running. Very interestingly, there are also problems believed to be intermediate between P and NP-complete. One example is factoring. These intermediate problems are extremely important for quantum computing, as we'll see later, and they're also important for cryptography.

If [P=NP], this would have consequences of the greatest magnitude … It would mean that the mental effort of the mathematician could be completely replaced by machines (apart from the postulation of axioms) —Kurt Gödel to John von Neumann, 1956. Would a proof of P=NP mean the robot uprising was nigh? Would a proof of P≠NP mean it wasn't? Not quite … Before we get carried away, are we sure that P vs. NP is even the right question to ask?

Old proposal for "surpassing Turing machines": Dip two glass plates with pegs between them into soapy water. Let the soap bubbles form a minimum Steiner tree connecting the pegs—thereby solving a known NP-hard problem "instantaneously"

10 Ah, but what about quantum computers?
Quantum mechanics: "probability with minus signs" (Nature seems to prefer it that way). The slide's diagram places BQP (Quantum Polynomial Time) on the map alongside P, NP, and the NP-complete problems, with Factoring inside BQP. Shor 1994: Factoring integers is in BQP. But crucially, we don't think BQP contains all of NP! (Can we prove it? Proving P≠NP would be a start…)
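A toy numerical illustration (mine, not from the talk) of "probability with minus signs": amplitudes along different paths can be negative and cancel, so an outcome that each path alone would favor ends up with probability zero.

```python
# Toy interference: amplitudes (which may be negative) are added over paths,
# then squared to get probabilities -- unlike classical probabilities,
# which are nonnegative and simply add.
from math import sqrt

amp_path_1 = 1 / sqrt(2)      # amplitude for reaching the outcome via path 1
amp_path_2 = -1 / sqrt(2)     # amplitude via path 2, carrying a minus sign

probability = (amp_path_1 + amp_path_2) ** 2
classical = amp_path_1**2 + amp_path_2**2   # what ordinary probability would give

print(probability, classical)  # 0.0 vs 1.0 -- destructive interference
```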

11 Why do most experts believe P≠NP?
Main reason: Because otherwise, the "clustering" of so many thousands of problems into two categories (P and NP-complete) would be an unexplained coincidence. Approximating 3SAT to within 7/8+ε, for any ε>0: NP-complete. Approximating 3SAT to within 7/8: in P. Additional reasons: Asymmetry of ignorance. Hierarchy theorems imply "most" pairs of complexity classes are unequal, so why not P and NP?
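The 7/8 threshold comes from the fact that a uniformly random assignment satisfies each 3-clause with probability 7/8; the quick simulation below (my own sketch) checks that, while Håstad's theorem says that beating 7/8+ε is already NP-hard.

```python
import random

def random_assignment_fraction(clauses, trials=10000):
    """Estimate the fraction of 3SAT clauses satisfied by a uniformly random
    assignment. Each clause has 8 equally likely literal patterns and exactly
    one falsifies it, so the expectation is 7/8 -- the threshold on the slide."""
    n = max(abs(l) for clause in clauses for l in clause)
    total = 0.0
    for _ in range(trials):
        a = [random.choice([False, True]) for _ in range(n + 1)]  # a[1..n]
        sat = sum(any(a[abs(l)] == (l > 0) for l in clause) for clause in clauses)
        total += sat / len(clauses)
    return total / trials

clauses = [(1, 2, 3), (-1, 2, -3), (1, -2, 3), (-1, -2, -3)]  # toy instance
print(random_assignment_fraction(clauses))  # close to 0.875
```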

12 Claim: Had we been physicists, we would've long ago declared P≠NP a law of nature!
Feynman apparently had trouble accepting that P vs. NP was an open problem at all… When people say: "What if P=NP? What if there's an n^10000 algorithm for 3SAT? Or an n^(log log log n) algorithm?" Response: What if the aliens killed JFK to keep him from discovering that algorithm?

13 Could PNP be independent of the axioms of set theory?
Sure, it's possible—but so could the Riemann Hypothesis, or almost any other open problem (except, e.g., whether White has the win in Chess). Unlike (say) the Continuum Hypothesis, P≠NP is an arithmetical statement, so it's either true or it isn't! We know from Gödel that true arithmetical statements can be unprovable from our standard axioms—but we lack much experience of this for "natural" statements

14 Why is proving P≠NP so hard?
Because algorithms can exist for crazy reasons! Matrix multiplication: O(n^2.373) time. Maximum matching: in P (O(n^2.5) time, or O(n^2.373) randomized). Sounds trite, but pretty much every P≠NP "proof" so far could be rejected on the ground that, if it worked, it would also put easy problems outside of P…
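A small illustration (mine) of how unobvious fast algorithms can be: the textbook triple loop below multiplies matrices in O(n^3) time, while the O(n^2.373)-type bounds quoted on the slide come from Strassen-style recursions and far more exotic constructions.

```python
import numpy as np

def naive_matmul(A, B):
    """Textbook O(n^3) matrix multiplication: three nested loops."""
    n, m, p = A.shape[0], A.shape[1], B.shape[1]
    C = np.zeros((n, p))
    for i in range(n):
        for j in range(p):
            for k in range(m):
                C[i, j] += A[i, k] * B[k, j]
    return C

A, B = np.random.rand(50, 50), np.random.rand(50, 50)
print(np.allclose(naive_matmul(A, B), A @ B))  # True
# The asymptotically faster algorithms behind the O(n^2.373) bound look nothing
# like this triple loop -- "algorithms can exist for crazy reasons".
```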

15 Has any actual progress been made toward proving P≠NP?
Sure! (In the sense that 19th-century number theory was “progress” toward Fermat’s Last Theorem) We can prove separations for models much weaker than P, and/or problems much harder than NP-complete We also know a lot about the barriers to all the obvious things you’d try—and in some cases, how to overcome those barriers

16 Diagonalization. Theorem (Turing 1936): No program can decide whether an input program halts or not. Theorem (Hartmanis-Stearns 1965): No program running in, say, n^2 time can decide whether an input program halts in n^3 time. Theorem: There's a Boolean function f:{0,1}^n → {0,1} that's computable using 2^n memory, yet that requires an exponential number of AND, OR, and NOT gates in any circuit computing it. Trouble: All of these theorems "relativize" (i.e., go through if we declare some function computable for free)
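Here is the standard self-reference trick behind Turing's theorem, rendered as a runnable Python sketch (my own paraphrase): any claimed halting decider can be fed a "diagonal" program that does the opposite of the decider's prediction about itself.

```python
def candidate_halts(program, arg) -> bool:
    """A stand-in 'halting decider' (hypothetical -- Turing proved none exists).
    Here: a deliberately naive guess that everything halts."""
    return True

def diagonal(program):
    """Turing's diagonal construction: do the opposite of whatever the
    claimed decider predicts about running `program` on itself."""
    if candidate_halts(program, program):
        while True:        # loop forever if the decider says "halts"
            pass
    return "halted"        # halt if the decider says "loops"

# Feeding diagonal to itself defeats the decider: it predicts "halts",
# yet diagonal(diagonal) would loop forever -- so the prediction is wrong.
print(candidate_halts(diagonal, diagonal))  # True, but diagonal(diagonal) loops
```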

17 Theorem (Baker-Gill-Solovay 1975): There are “oracles” that make P=NP
(like an oracle for a PSPACE-complete problem, PSPACE being the class of problems solvable with polynomial memory) But there are other oracles that make P≠NP (like an oracle that returns "1" on certain secret inputs, and "0" on all other inputs). Therefore, any resolution of P vs. NP must be "non-relativizing." I.e., the proof must fail relative to certain oracles, by "noticing" aspects of computation that only exist in our "real," oracle-free world

18 Lower Bounds that Exploit Structure
Theorem (Furst-Saxe-Sipser, Yao, Håstad 1980s): Any depth-d, unbounded-fanin circuit of AND, OR, and NOT gates, which computes the PARITY of an n-bit string, requires at least ~exp(n^(1/d)) gates. Theorem (Razborov 1985, Alon-Boppana 1987): Any monotone circuit requires exponentially many AND and OR gates to compute the CLIQUE function

19 The Natural Proofs Barrier
A pseudorandom function family is a set of functions f_s:{0,1}^n → {0,1}, parameterized by a "seed" s ∈ {0,1}^poly(n), that are computable in polynomial time, but such that no (say) 2^O(n)-time algorithm A can distinguish a random f_s from a truly random function, given black-box access. Razborov-Rudich 1993: Most known techniques to prove that a Boolean function requires large circuits (e.g., the ones that worked for PARITY) actually yield 2^O(n)-time algorithms to certify that a random Boolean function f:{0,1}^n → {0,1} requires a large circuit, given its truth table. But if such a technique worked against arbitrary circuits, we could use it to break cryptographic pseudorandom functions!
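A toy stand-in for such a family (my own sketch): HMAC-SHA256 keyed by the seed, used here as a heuristic illustration rather than a provably pseudorandom construction.

```python
import hmac, hashlib, secrets

def prf(seed: bytes, x: bytes) -> int:
    """f_s(x): a keyed Boolean function computable in polynomial time.
    HMAC-SHA256 is only a heuristic stand-in for a cryptographic PRF, but it
    conveys the object: without the seed s, no feasible algorithm should be
    able to distinguish f_s from a uniformly random Boolean function."""
    return hmac.new(seed, x, hashlib.sha256).digest()[0] & 1

s = secrets.token_bytes(16)                      # the secret seed
print([prf(s, bytes([i])) for i in range(16)])   # looks like random bits
```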

20 1990s, 2000s: Lower bounds were finally proven that circumvented the relativization and natural proofs barriers simultaneously. Example: For every fixed k, there's a problem in a complexity class called PP that requires circuits of size at least n^k. These new lower bounds were based on a complexity breakthrough called IP=PSPACE—proved by taking Boolean circuits, and reinterpreting the AND, OR, and NOT gates as arithmetic operations over a large finite field. A.-Wigderson 2008: Alas, there's a third barrier to proving P≠NP, which even the new results are subject to! We called this barrier algebrization: a generalization of relativization to allow "lifting" of oracles from Boolean functions to low-degree polynomials over finite fields
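The gate-by-gate "arithmetization" step can be shown in a few lines (a minimal sketch of the standard translation; the field prime here is my arbitrary choice):

```python
P = 2_147_483_647  # a large prime, 2^31 - 1; any sufficiently large prime works

def NOT(x):       return (1 - x) % P
def AND(x, y):    return (x * y) % P
def OR(x, y):     return (x + y - x * y) % P

# On 0/1 inputs these polynomials agree with the Boolean gates...
print([OR(AND(a, NOT(b)), b) for a in (0, 1) for b in (0, 1)])  # values of a OR b
# ...but they are now low-degree polynomials over F_p, which can be evaluated
# at points outside {0,1} -- the key move behind IP = PSPACE.
```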

21 Ryan Williams’ Breakthrough (2011)
Theorem: NEXP ⊄ ACC. NEXP: Nondeterministic Exponential Time. ACC: Class of problems solvable by a family of polynomial-size, constant-depth, unbounded-fanin circuits with AND, OR, NOT, and MOD m gates. Improvement (Murray-Williams 2018): NTIME(n^polylog(n)) ⊄ ACC. Proof hinged on a faster-than-brute-force algorithm for the following problem: Given an ACC circuit C, decide if there's an input x ∈ {0,1}^n such that C(x)=1. Proof evades the relativization, natural proofs, and algebrization barriers!
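For reference, here is the brute-force baseline that Williams's circuit-satisfiability algorithm has to beat (my own sketch, with an ordinary Python function standing in for an ACC circuit):

```python
from itertools import product

def circuit_sat_brute_force(circuit, n: int):
    """Baseline 2^n * poly(n) search: try every input x in {0,1}^n.
    Williams's result hinges on an algorithm for ACC circuits that beats
    this brute force by even a modest (superpolynomial) savings factor."""
    for x in product((0, 1), repeat=n):
        if circuit(x):
            return x          # a satisfying assignment
    return None               # the circuit is unsatisfiable

# Toy stand-in circuit: accepts iff the input starts with 1 and has exactly two 1s.
toy = lambda x: x[0] == 1 and sum(x) == 2
print(circuit_sat_brute_force(toy, 4))   # (1, 0, 0, 1)
```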

22 Hierarchy theorems, diagonalization lower bounds
[Slide diagram: a ladder of lower-bound statements and the barriers between them. Proven so far: the parity lower bounds NP ⊄ AC0 [Furst-Saxe-Sipser, Ajtai], MAEXP ⊄ P/poly [Buhrman-Fortnow-Thierauf], NEXP ⊄ ACC, and the fixed-polynomial PP circuit lower bound. Still open: NP ⊄ ACC0, NP ⊄ TC0, NP ⊄ NC, NP ⊄ P/poly, NEXP ⊄ P/poly, PSPACE ⊄ P/poly, EXP ⊄ P/poly, and P ≠ NP itself. The RELATIVIZATION, NATURAL PROOFS, and ALGEBRIZATION barriers each cut off part of this map from the techniques that work below them.]

23 The Blum-Cucker-Shub-Smale Model
One can define analogues of P and NP over an arbitrary field F. When F is finite (e.g., F = F_2), we recover the usual P vs. NP question. When F = R or F = C, we get an interesting new question with a "mathier" feel. All three cases (F = F_2, F = R, and F = C) are open, and no implications are known among them. But the continuous versions might be more "yellow-bookable"!

24 Even "simpler" challenge: Prove the Permanent harder than the Determinant
Det(A) = Σ_σ sgn(σ) Π_i A_{i,σ(i)}: computable in polynomial time. Per(A) = Σ_σ Π_i A_{i,σ(i)} (the same sum without the minus signs): #P-hard, hence NP-hard [Valiant]. When n=2, the permanent is clearly no harder…
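A direct comparison in code (my own sketch): the permanent computed straight from its n!-term definition, next to numpy's polynomial-time determinant.

```python
from itertools import permutations
from math import prod
import numpy as np

def permanent(M):
    """Permanent from its definition: a sum over all n! permutations, no signs.
    Unlike the determinant, no polynomial-time algorithm is known
    (Valiant showed computing the permanent is #P-hard)."""
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n)) for p in permutations(range(n)))

M = [[1, 2], [3, 4]]
print(permanent(M))              # ad + bc = 10
print(round(np.linalg.det(M)))   # ad - bc = -2, computable in O(n^3) by elimination
```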

25 But what about n=3? Grenet: the 3×3 permanent can be written as the determinant of a 7×7 matrix whose entries are 0s, 1s, and entries of A.
Let m(n) be the smallest integer such that the permanent of an n×n matrix A can be rewritten as the determinant of an m×m matrix of affine combinations of entries of A. Known: n²/2 ≤ m(n) ≤ 2^n − 1. Conjecture (Valiant, 1970s): m(n) grows faster than any polynomial in n (presumably, exponentially). Considered the "algebraic warmup" to P≠NP!
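The n=2 case can be checked symbolically in a couple of lines (a sketch using sympy): flipping a single sign turns the 2×2 permanent into a determinant, so m(2) = 2.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
perm_2x2 = a*d + b*c                              # permanent of [[a, b], [c, d]]
det_form = sp.Matrix([[a, -b], [c, d]]).det()     # determinant after one sign flip
print(sp.simplify(perm_2x2 - det_form))           # 0, so m(2) = 2
# Whether m(n) must grow superpolynomially is exactly Valiant's conjecture above.
```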

26 Mulmuley’s Geometric Complexity Theory Program: “The String Theory of Computer Science”
Dream: Show that X_Per(n) has "the wrong kinds of symmetry" to be embedded into X_Det(m). X_Per(n) = "orbit closure" of the n×n Permanent function, under invertible linear transformations of the entries. X_Det(m) = "orbit closure" of the m×m Determinant function, for some m = n^O(1)

The new leverage would come from the "exceptional" nature of the Permanent and Determinant polynomials: the fact that they're uniquely characterized by their linear symmetries. Using that, one can reduce the embeddability problem to a problem in representation theory: show that some irrep occurs with greater multiplicity in a representation associated with the permanent orbit closure than in one associated with the determinant orbit closure. Mulmuley's "Flip" Program: Ruling out efficient algorithms might depend on discovering efficient algorithms to compute multiplicities of irreps! Ikenmeyer-Panova 2015: The multiplicities in the determinant orbit closures won't be zero! A major setback? Still, GCT illustrates in detail how a lower bound proof could exploit special properties of Per and Det, as we know it must

28 Conclusions. A proof of P≠NP might have to be the greatest synthesis of mathematical ideas ever … but don't let that discourage you! One starting point is Permanent vs. Determinant. Another: Prove that, starting from 1, you need more than (log n)^O(1) additions, subtractions, and multiplications to reach a nonzero multiple of n!. This would imply P_C ≠ NP_C (the Blum-Cucker-Shub-Smale question over C)! At least checking the proof of P≠NP should be easier than finding it…
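As a rough illustration of that second challenge (my own sketch, not from the slides), here is an operation count for the obvious straight-line program reaching n!: build each integer k by binary doubling and adding 1, then multiply it into a running product. The open question is whether (log n)^O(1) operations could ever suffice to reach some nonzero multiple of n!.

```python
def naive_ops_to_reach_factorial(n: int) -> int:
    """Count +, -, * operations in the obvious straight-line program for n!:
    build each k <= n from 1 by doublings and increments (roughly 2*log2(k) ops),
    then one multiplication into the running product. The total is on the order
    of n log n -- wildly more than the (log n)^O(1) budget in the open problem."""
    ops = 0
    for k in range(2, n + 1):
        ops += 2 * (k.bit_length() - 1)   # rough count: doublings plus increments to build k
        ops += 1                          # one multiplication into the product
    return ops

print(naive_ops_to_reach_factorial(1000))  # on the order of n log n operations
```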

