
The P vs. NP Problem. Scott Aaronson (MIT → UT Austin). Hudson River Undergraduate Math Conference, Saint Michael's College, Colchester, VT, April 2, 2016.


Frank Wilczek (Physics Nobel 2004) was once asked: "If you could ask a superintelligent alien one yes-or-no question, what would it be?" His response: "P vs. NP. That basically contains all the other questions, doesn't it?"

The seven Clay Millennium Prize Problems: P vs. NP, Riemann Hypothesis, Poincaré Conjecture, Yang-Mills Mass Gap, Navier-Stokes Equations, Hodge Conjecture, Birch and Swinnerton-Dyer.

Computer Science in a Few Slides. Decision problem: "Given an integer N, is it prime?" Each N is an instance of the problem. The size of the instance, n = ⌈log₂ N⌉, is the number of bits needed to specify it. An algorithm is polynomial-time if it uses at most k·n^c steps, for some universal constants k, c. P is the class of all decision problems that have polynomial-time algorithms to solve all instances: "the efficiently solvable problems." Agrawal-Kayal-Saxena 2002: PRIMES ∈ P.
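The distinction between "time in N" and "time in the bit-size n" is easy to see in code. Here is a minimal sketch (not the AKS algorithm from the slide): trial division decides PRIMES correctly, but its ~√N = 2^(n/2) steps are exponential in the instance size n, whereas AKS runs in time polynomial in n.

```python
def is_prime_trial_division(N: int) -> bool:
    """Decide primality by trial division.

    Correct, but takes ~sqrt(N) = 2^(n/2) steps, which is
    exponential in the instance size n = N.bit_length().
    """
    if N < 2:
        return False
    d = 2
    while d * d <= N:
        if N % d == 0:
            return False
        d += 1
    return True

print(is_prime_trial_division(97))   # True
print((97).bit_length())             # instance size n = 7
```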

What, mathematically, is an “algorithm”? Great question! Turing machine: Our model for a serial, deterministic, classical digital computer

NP: Nondeterministic Polynomial Time. The class of all decision problems for which a "yes" answer has a polynomial-size proof that can be verified in polynomial time.

Example: Does
37976595177176695379702491479374117272627593
30195046268899636749366507845369942177663592
04092298415904323398509069628960404170720961
97880513650802416494821602885927126968629464
31304735342639520488192047545612916330509384
69681196839122324054336880515678623037853371
49184281196967743805800830815442679903720933
have a prime factor ending in 7?
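What "verifiable in polynomial time" means here can be sketched directly: the certificate for a "yes" answer is the prime factor itself, and checking it is fast. Below is a hedged illustration (function names are my own); primality of the certificate is checked with deterministic Miller-Rabin, which is valid for p < 3.3·10²⁴ with these witness bases.

```python
def is_prime(p: int) -> bool:
    """Deterministic Miller-Rabin, valid for p < 3.3e24 with these bases."""
    if p < 2:
        return False
    bases = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for a in bases:
        if p % a == 0:
            return p == a
    d, r = p - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1
    for a in bases:
        x = pow(a, d, p)
        if x in (1, p - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, p)
            if x == p - 1:
                break
        else:
            return False
    return True

def verify(N: int, p: int) -> bool:
    """Polynomial-time verifier: is p a prime factor of N ending in 7?"""
    return p % 10 == 7 and N % p == 0 and is_prime(p)

print(verify(91, 7))    # True: 91 = 7 * 13, and 7 ends in 7
```

The point is the asymmetry: verifying the certificate p takes polynomial time in the bit-length of N, while finding such a p is the hard part.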

A problem X is NP-hard if, by using a “black box” or “oracle” for X, you could solve all NP problems in polynomial time A problem is NP-complete if it’s both NP-hard and in NP Example: Given a graph G, does G have a Hamilton cycle (tour that visits each vertex exactly once)?
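The Hamilton cycle example shows the same verify/find asymmetry: a claimed tour is a short certificate that can be checked in polynomial time, even though finding one is NP-complete. A minimal sketch (names and graph encoding are my own):

```python
def verify_hamilton_cycle(edges: set, n: int, tour: list) -> bool:
    """Check that `tour` visits each of the n vertices exactly once and
    that consecutive vertices (wrapping around) are joined by an edge."""
    if sorted(tour) != list(range(n)):
        return False
    return all(
        (tour[i], tour[(i + 1) % n]) in edges
        or (tour[(i + 1) % n], tour[i]) in edges
        for i in range(n)
    )

# The 4-cycle graph: 0-1-2-3-0
square = {(0, 1), (1, 2), (2, 3), (3, 0)}
print(verify_hamilton_cycle(square, 4, [0, 1, 2, 3]))   # True
print(verify_hamilton_cycle(square, 4, [0, 2, 1, 3]))   # False: 0-2 is not an edge
```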

[Diagram: the complexity landscape.]
P (efficiently solvable): graph connectivity, primality testing, matrix determinant, linear programming, …
NP (efficiently verifiable): contains P; also factoring and graph isomorphism, which sit in between, not known to be in P or NP-complete.
NP-complete (all NP problems are efficiently reducible to these): Hamilton cycle, Steiner tree, graph 3-coloring, satisfiability, maximum clique, …
NP-hard (not necessarily in NP): matrix permanent, halting problem, …

"If [P=NP], this would have consequences of the greatest magnitude … It would mean that the mental effort of the mathematician could be completely replaced by machines (apart from the postulation of axioms)." —Kurt Gödel to John von Neumann, 1956

Before we get carried away, are we sure that P vs. NP is even the right question to ask? Would a proof of P=NP mean the robot uprising was nigh? Would a proof of P≠NP mean it wasn't? Not quite…

Old proposal for "surpassing Turing machines": dip two glass plates with pegs between them into soapy water. Let the soap bubbles form a minimum Steiner tree connecting the pegs, thereby solving a known NP-hard problem "instantaneously."

Relativity Computer [figure]

Zeno's Computer [figure: STEP 1 through STEP 5 plotted against time (seconds)]

Ah, but what about quantum computers? Quantum mechanics: "probability with minus signs." (Nature seems to prefer it that way.)

[Diagram: BQP (Quantum Polynomial Time) contains P and Factoring, but is not believed to contain the NP-complete problems.]

Shor 1994: Factoring integers is in BQP. But crucially, we don't think BQP contains all of NP! (Can we prove it? Proving P ≠ NP would be a start…)

Why do most experts believe P ≠ NP? Main reason: because otherwise, the "clustering" of so many thousands of problems into two categories (P and NP-complete) would be an unexplained coincidence.

Additional reasons: the asymmetry of ignorance; hierarchy theorems imply "most" pairs of complexity classes are unequal, so why not P and NP? And the sharp approximability threshold: approximating 3SAT to 7/8 is in P, but approximating it to 7/8 + ε, for any ε > 0, is NP-complete.
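The 7/8 threshold is not arbitrary: a clause with three distinct literals is falsified by exactly one of its eight local assignments, so a uniformly random assignment satisfies a 7/8 fraction of clauses in expectation. A sketch that checks this by exact averaging over all assignments (the formula and encoding are made up for illustration):

```python
from itertools import product

# A clause is 3 (variable, negated?) pairs over distinct variables;
# the literal (v, True) means "not v".
clauses = [((0, False), (1, False), (2, True)),
           ((1, True), (2, False), (3, False)),
           ((0, True), (2, True), (3, True))]
n = 4

def frac_satisfied(assignment):
    sat = sum(any(assignment[v] != neg for v, neg in cl) for cl in clauses)
    return sat / len(clauses)

# Exact expectation over all 2^n assignments: each clause with 3
# distinct variables is falsified by exactly 1 of 8 local settings.
avg = sum(frac_satisfied(a) for a in product([False, True], repeat=n)) / 2**n
print(avg)   # 0.875, i.e. exactly 7/8
```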

Claim: Had we been physicists, we would've long ago declared P ≠ NP a law of nature! (Feynman apparently had trouble accepting that P vs. NP was an open problem at all…)

When people say: "What if P=NP? What if there's an n^10000 algorithm for 3SAT? Or an n^(log log log n) algorithm?" Response: What if the aliens killed JFK to keep him from discovering that algorithm?

Could P  NP be independent of the axioms of set theory? Unlike (say) the Continuum Hypothesis, P  NP is an arithmetical statement, so it’s either true or it isn’t! Sure, it’s possible—but so could the Riemann Hypothesis, or almost any other open problem (except, e.g., whether White has the win in Chess) We know from Gödel that true arithmetical statements can be unprovable from our standard axioms—but we lack much experience of this for “natural” statements

Why is proving P  NP so hard? Because algorithms can exist for crazy reasons! Matrix multiplication: O(n 2.373 ) time Maximum matching: in P (O(n 2.5 ) time or O(n 2.373 ) randomized) Sounds trite, but pretty much every P  NP “proof” so far could be rejected on the ground that, if it worked, it would also put easy problems outside of P…

Has any actual progress been made toward proving P ≠ NP? Sure! (In the sense that 19th-century number theory was "progress" toward Fermat's Last Theorem.) We can prove separations for models much weaker than P, and/or problems much harder than NP-complete. We also know a lot about the barriers to all the obvious things you'd try—and in some cases, how to overcome those barriers.

Diagonalization. Theorem (Turing 1936): no program can decide whether an input program halts or not. Theorem (Hartmanis-Stearns 1965): no program running in, say, n² time can decide whether an input program halts in n³ time. Theorem: there's a Boolean function f: {0,1}ⁿ → {0,1} that's computable using 2ⁿ memory, yet that requires an exponential number of AND, OR, and NOT gates in any circuit computing it. Trouble: all of these theorems "relativize" (i.e., go through if we declare some function computable for free).
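Turing's diagonal argument can be acted out in code: given any candidate halting decider, build the one program that does the opposite of whatever the decider predicts about it. In this hedged sketch (all names hypothetical), an exception stands in for "loop forever" so the demo can actually run.

```python
class LoopsForever(Exception):
    """Stand-in for an infinite loop, so the contradiction is observable."""

def make_diagonal(halts):
    """Given any claimed decider halts(prog, inp), build the program it gets wrong."""
    def d(prog):
        if halts(prog, prog):
            raise LoopsForever      # the decider says we halt, so we "loop"
        return "halted"             # the decider says we loop, so we halt
    return d

def optimistic_halts(prog, inp):    # a (necessarily wrong) candidate decider
    return True

d = make_diagonal(optimistic_halts)
try:
    d(d)
    print("d(d) halted, contradicting the decider's answer of False")
except LoopsForever:
    print("d(d) loops, contradicting the decider's answer of True")
```

Whichever fixed answer the candidate decider gives on (d, d), the constructed program d does the opposite; that is the whole proof.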

Theorem (Baker-Gill-Solovay 1975): there are "oracles" that make P=NP (like an oracle for a PSPACE-complete problem, PSPACE being the class of problems solvable with polynomial memory). But there are other oracles that make P ≠ NP (like an oracle that returns "1" on certain secret inputs, and "0" on all other inputs). Therefore, any resolution of P vs. NP must be "non-relativizing." I.e., the proof must fail relative to certain oracles, by "noticing" aspects of computation that only exist in our "real," oracle-free world.

Lower Bounds that Exploit Structure. Theorem (Furst-Saxe-Sipser, Yao, Håstad 1980s): any depth-d, unbounded-fanin circuit of AND, OR, and NOT gates that computes the PARITY of an n-bit string requires at least ~exp(n^(1/d)) gates. Theorem (Razborov 1985, Alon-Boppana 1987): any monotone circuit requires exponentially many AND and OR gates to compute the CLIQUE function.

The Natural Proofs Barrier. A pseudorandom function family is a set of functions f_s: {0,1}ⁿ → {0,1}, parameterized by a "seed" s ∈ {0,1}^poly(n), that are computable in polynomial time, but such that no (say) 2^O(n)-time algorithm A can distinguish a random f_s from a uniformly random Boolean function. Razborov-Rudich 1993: most known techniques to prove that a Boolean function requires large circuits (e.g., the ones that worked for PARITY) actually yield 2^O(n)-time algorithms to certify that a random Boolean function f: {0,1}ⁿ → {0,1} requires a large circuit, given its truth table. But if such a technique worked against arbitrary circuits, we could use it to break cryptographic pseudorandom functions!

1990s, 2000s: lower bounds were finally proven that circumvented the relativization and natural proofs barriers simultaneously. Example: for every fixed k, there's a problem in a complexity class called PP that requires circuits of size at least n^k. These new lower bounds were based on a complexity breakthrough called IP = PSPACE—proved by taking Boolean circuits, and reinterpreting the AND, OR, and NOT gates as arithmetic operations over a large finite field. Aaronson-Wigderson 2008: alas, there's a third barrier to proving P ≠ NP, which even the new results are subject to! We called this barrier algebrization: a generalization of relativization that allows "lifting" of oracles from Boolean functions to low-degree polynomials over finite fields.
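The arithmetization trick behind IP = PSPACE replaces each Boolean gate with a polynomial over a finite field that agrees with it on {0, 1}: NOT x → 1 − x, x AND y → x·y, x OR y → x + y − x·y. A sketch checking that agreement (the field size 101 is an arbitrary choice for illustration):

```python
p = 101   # a small prime; the gate polynomials live in F_p

def NOT(x):     return (1 - x) % p
def AND(x, y):  return (x * y) % p
def OR(x, y):   return (x + y - x * y) % p

# On Boolean inputs the polynomials reproduce the gates exactly;
# on field elements outside {0,1} they extend them to low degree.
for x in (0, 1):
    for y in (0, 1):
        assert NOT(x) == 1 - x
        assert AND(x, y) == (x and y)
        assert OR(x, y) == (x or y)
print("gate polynomials agree with Boolean logic on {0,1}")
```

Once the gates are polynomials, a verifier can evaluate the circuit at random field points far outside {0,1}ⁿ, which is exactly the kind of "non-Boolean" power that oracles cannot mimic.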

Ryan Williams' Breakthrough (2011). Theorem: NEXP ⊄ ACC. NEXP: Nondeterministic Exponential Time. ACC: the class of problems solvable by a family of polynomial-size, constant-depth, unbounded-fanin circuits with AND, OR, NOT, and MOD m gates. The proof hinged on a faster-than-brute-force algorithm for the following problem: given an ACC circuit C, decide if there's an input x ∈ {0,1}ⁿ such that C(x)=1. The proof evades the relativization, natural proofs, and algebrization barriers!

NP  AC 0 [Furst-Saxe-Sipser, Ajtai]  NP  ACC 0  NP  TC 0  NP  NC  NP  P/poly MA EXP  P/poly [Buhrman-Fortnow-Thierauf]  NEXP  P/poly  PSPACE  P/poly  EXP  P/poly  NP  P/poly Hierarchy theorems, diagonalization lower bounds P ARITY lower bounds PP lower bound RELATIVIZATION NATURAL PROOFS ALGEBRIZATION P  NP NEXP  ACC

The Blum-Cucker-Shub-Smale Model. One can define analogues of P and NP over an arbitrary field F. When F is finite (e.g., F = F₂), we recover the usual P vs. NP question. When F = ℝ or F = ℂ, we get an interesting new question with a "mathier" feel. All three cases (F = F₂, F = ℝ, and F = ℂ) are open, and no implications are known among them. But the continuous versions might be more "yellow-bookable"!

Even "simpler" challenge: prove that the PERMANENT is harder than the DETERMINANT. Determinant: computable in polynomial time. Permanent: NP-hard. When n=2, the permanent is clearly no harder…

Let m(n) be the smallest integer such that the permanent of an n×n matrix A can be rewritten as the determinant of an m×m matrix of affine combinations of entries of A. Known: n²/2 ≤ m(n) ≤ 2ⁿ − 1. Conjecture (Valiant, 1970s): m(n) grows faster than any polynomial in n (presumably, exponentially). Considered the "algebraic warmup" to P ≠ NP! Grenet: But what about n=3?
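The n = 2 case behind "the permanent is clearly no harder": per [[a, b], [c, d]] = ad + bc, which is the determinant of the same matrix after flipping one sign, so m(2) = 2. A sketch with a brute-force permanent (Leibniz formula without signs, fine only for tiny n):

```python
from itertools import permutations
from math import prod

def permanent(M):
    """Leibniz formula with all signs +1: exponential time, fine for tiny n."""
    n = len(M)
    return sum(prod(M[i][s[i]] for i in range(n)) for s in permutations(range(n)))

def det2(M):
    """Determinant of a 2x2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

a, b, c, d = 3, 5, 7, 2
# per([[a, b], [c, d]]) = ad + bc = det([[a, -b], [c, d]])
print(permanent([[a, b], [c, d]]))   # 41
print(det2([[a, -b], [c, d]]))       # 41
```

Valiant's conjecture says that no such sign-flipping trick, or anything like it, keeps working at polynomial size as n grows.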

Mulmuley's Geometric Complexity Theory Program: "the string theory of computer science." X_Per(n) = the "orbit closure" of the n×n Permanent function, under invertible linear transformations of the entries. X_Det(m) = the "orbit closure" of the m×m Determinant function, for some m = n^O(1). Dream: show that X_Per(n) has "the wrong kinds of symmetry" to be embedded into X_Det(m).

The new leverage would come from the "exceptional" nature of the Permanent and Determinant polynomials: the fact that they're uniquely characterized by their linear symmetries. Using that, one can reduce the embeddability problem to a problem in representation theory: show that some irrep occurs with greater multiplicity in a representation associated with the permanent orbit closure than in one associated with the determinant orbit closure. Mulmuley's "Flip" program: ruling out efficient algorithms might depend on discovering efficient algorithms to compute multiplicities of irreps! Ikenmeyer-Panova 2015: the multiplicities in the determinant orbit closures won't be zero! A major setback? Still, GCT illustrates in detail how a lower bound proof could exploit special properties of Per and Det, as we know it must.

Conclusions. A proof of P ≠ NP might have to be the greatest synthesis of mathematical ideas ever… but don't let that discourage you! One starting point is PERMANENT vs. DETERMINANT. Another: prove that, starting from 1, you need more than (log n)^O(1) additions, subtractions, and multiplications to reach a nonzero multiple of n!. This would imply P_ℂ ≠ NP_ℂ! At least checking the proof of P ≠ NP should be easier than finding it…
