Phase transition behaviour
Toby Walsh, Dept of CS, University of York

Outline
- What have phase transitions to do with computation?
- How can you observe such behaviour in your favourite problem?
- Is it confined to random and/or NP-complete problems?
- Can we build better algorithms using knowledge about phase transition behaviour?
- What open questions remain?

Health warning
- To aid the clarity of my exposition, credit may not always be given where it is due
- Many active researchers in this area: Achlioptas, Chayes, Dunne, Gent, Gomes, Hogg, Hoos, Kautz, Mitchell, Prosser, Selman, Smith, Stergiou, Stutzle, … Walsh

Before we begin
A little history...

Where did this all start?
- At least as far back as the 60s with Erdos & Renyi
  - thresholds in random graphs
- Late 80s
  - pioneering work by Karp, Purdom, Kirkpatrick, Huberman, Hogg, …
- Flood gates burst
  - Cheeseman, Kanefsky & Taylor's IJCAI-91 paper
In 91, I had just finished my PhD and was looking for some new research topics!

Phase transitions
Enough of the history; what has this got to do with computation?
Ice melts. Steam condenses. Now that's a proper phase transition...

An example phase transition
- Propositional satisfiability (SAT)
  - does a truth assignment exist that satisfies a propositional formula?
  - NP-complete
- 3-SAT
  - formulae in clausal form with 3 literals per clause
  - remains NP-complete
(x1 v x2) & (-x2 v x3 v -x4)
x1/True, x2/False, ...
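As an illustration (not an algorithm from the talk), here is a minimal brute-force satisfiability check using the usual integer encoding of literals; it confirms the slide's example formula is satisfiable:

```python
from itertools import product

def satisfiable(clauses, n):
    """Brute-force SAT check over all 2^n assignments.
    Clauses are lists of nonzero ints: literal v means variable v is
    True, -v means variable v is False."""
    for assignment in product([False, True], repeat=n):
        # a clause is satisfied if any of its literals evaluates to True
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# the slide's example: (x1 v x2) & (-x2 v x3 v -x4)
print(satisfiable([[1, 2], [-2, 3, -4]], 4))  # True, e.g. x1=True, x2=False
```

Exponential, of course, but fine for seeing small instances behave.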

Random 3-SAT
- Random 3-SAT
  - sample uniformly from the space of all possible 3-clauses
  - n variables, l clauses
- Which are the hard instances?
  - around l/n = 4.3
What happens with larger problems? Why are some dots red and others blue?
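A sketch of the sampling model described on the slide (uniform 3-clauses: three distinct variables, each sign chosen independently):

```python
import random

def random_3sat(n, l, rng=random):
    """Sample l clauses uniformly: each clause has 3 distinct variables,
    each negated with probability 1/2."""
    clauses = []
    for _ in range(l):
        variables = rng.sample(range(1, n + 1), 3)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

# an instance near the hard region, l/n ~ 4.3
instance = random_3sat(20, 86)
```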

Random 3-SAT
- Varying problem size, n
- Complexity peak appears to be largely invariant of algorithm
  - backtracking algorithms like Davis-Putnam
  - local search procedures like GSAT
What's so special about 4.3?

Random 3-SAT
- Complexity peak coincides with solubility transition
  - l/n < 4.3: problems under-constrained and SAT
  - l/n > 4.3: problems over-constrained and UNSAT
  - l/n = 4.3: problems on the "knife-edge" between SAT and UNSAT

"But it doesn't occur in X?"
- X = some NP-complete problem
- X = real problems
- X = some other complexity class
Little evidence yet to support any of these claims!

"But it doesn't occur in X?"
- X = some NP-complete problem
- Phase transition behaviour seen in:
  - TSP (decision, not optimization)
  - Hamiltonian circuits (but NOT a complexity peak)
  - number partitioning
  - graph colouring
  - independent set
  - ...

"But it doesn't occur in X?"
- X = real problems
  - No: you just need a suitable ensemble of problems to sample from
- Phase transition behaviour seen in:
  - job shop scheduling problems
  - TSP instances from TSPLib
  - exam timetables @ Edinburgh
  - Boolean circuit synthesis
  - Latin squares (alias sports scheduling)
  - ...

"But it doesn't occur in X?"
- X = some other complexity class
  - ignoring trivial cases (like O(1) algorithms)
- Phase transition behaviour seen in:
  - polynomial problems like arc-consistency
  - PSPACE problems like QSAT and modal K
  - ...

"But it doesn't occur in X?"
- X = theorem proving
- Consider k-colouring planar graphs
  - k=3: simple counter-example
  - k=4: large proof
  - k=5: simple proof (in fact, a false proof of the k=4 case)

Locating phase transitions
How do you identify phase transition behaviour in your favourite problem?

What's your favourite problem?
- Choose a problem
  - e.g. number partitioning: dividing a bag of numbers into two so their sums are as balanced as possible
- Construct an ensemble of problem instances
  - n numbers, each uniformly chosen from (0, l]
  - other distributions work (Poisson, …)
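A minimal sketch of this ensemble, together with an exact (brute-force) measure of how balanced the best partition is; both function names are mine, not from the talk:

```python
import random
from itertools import product

def random_instance(n, l, rng=random):
    """n numbers drawn uniformly from 1..l (the (0, l] range above)."""
    return [rng.randint(1, l) for _ in range(n)]

def best_partition_difference(nums):
    """Try all 2^n two-way splits; return the smallest absolute
    difference between the two side sums (0 = perfect partition)."""
    total = sum(nums)
    best = total
    for signs in product([0, 1], repeat=len(nums)):
        side = sum(x for x, s in zip(nums, signs) if s)
        best = min(best, abs(total - 2 * side))
    return best

print(best_partition_difference([4, 5, 6, 7, 8]))  # 0: {4,5,6} vs {7,8}
```

Sampling many instances and recording how often the difference is 0 is exactly the "Prob(perfect partition)" quantity plotted later.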

Number partitioning
- Identify a measure of constrainedness
  - more numbers => less constrained
  - larger numbers => more constrained
  - could try some measures out at random (l/n, log(l)/n, log(l)/sqrt(n), …)
- Better still, use kappa!
  - an (approximate) theory of constrainedness
  - based upon some simplifying assumptions, e.g. ignores structural features that cluster solutions together

Theory of constrainedness
- Consider the state space searched
  - see the 10-d hypercube opposite: the 2^10 truth assignments of a 10 variable SAT problem
- Compute the expected number of solutions, <Sol>
  - independence assumptions often useful and harmless!

Theory of constrainedness
- Constrainedness given by:
  kappa = 1 - log2(<Sol>)/n
  where n is the dimension of the state space
- kappa lies in the range [0, infty)
  - kappa = 0: <Sol> = 2^n, under-constrained
  - kappa = infty: <Sol> = 0, over-constrained
  - kappa = 1: <Sol> = 1, critically constrained (phase boundary)
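For random 3-SAT the expected solution count under the independence assumption is <Sol> = 2^n (7/8)^l, since each 3-clause rules out 1/8 of assignments; plugging this into the definition gives the kappa = l/5.2n formula quoted on a later slide. A small sketch:

```python
import math

def kappa_3sat(n, l):
    """kappa = 1 - log2(<Sol>)/n with <Sol> = 2^n * (7/8)^l,
    assuming clauses constrain assignments independently."""
    log2_expected_solutions = n + l * math.log2(7 / 8)
    return 1 - log2_expected_solutions / n

# at the l/n = 4.3 transition this gives kappa close to the quoted 0.82
print(round(kappa_3sat(100, 430), 2))
```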

Phase boundary
- Markov inequality
  - prob(Sol) <= <Sol>
  - Now, kappa > 1 implies <Sol> < 1
  - Hence, kappa > 1 implies prob(Sol) < 1
- Phase boundary typically at values of kappa slightly smaller than kappa = 1
  - skew in distribution of solutions (e.g. 3-SAT)
  - non-independence
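Written out in full, the first-moment argument sketched above is:

```latex
% Markov inequality on the (non-negative) number of solutions
\Pr(\mathrm{Sol} \ge 1) \;\le\; \langle \mathrm{Sol} \rangle .
% From the definition \kappa = 1 - \log_2\langle\mathrm{Sol}\rangle / n:
\langle \mathrm{Sol} \rangle \;=\; 2^{\,n(1-\kappa)} ,
\qquad\text{so}\qquad
\kappa > 1 \;\Longrightarrow\; \langle \mathrm{Sol} \rangle < 1
\;\Longrightarrow\; \Pr(\mathrm{Sol} \ge 1) < 1 .
```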

Examples of kappa
- 3-SAT
  - kappa = l/5.2n
  - phase boundary at kappa = 0.82
- 3-COL
  - kappa = e/2.7n (e edges, n nodes)
  - phase boundary at kappa = 0.84
- number partitioning
  - kappa = log2(l)/n
  - phase boundary at kappa = 0.96

Number partition phase transition
[Plot: Prob(perfect partition) against kappa]

Finite-size scaling
- Simple "trick" from statistical physics
  - around the critical point, problems are indistinguishable except for a change of scale given by a simple power law
- Define rescaled parameter
  gamma = (kappa - kappa_c)/kappa_c * n^(1/v)
  - estimate kappa_c and v empirically
    - e.g. for number partitioning, kappa_c = 0.96, v = 1
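The rescaling itself is a one-liner; a sketch using the number-partitioning values quoted on the slide:

```python
def rescale(kappa, n, kappa_c, v):
    """Finite-size scaling: gamma = (kappa - kappa_c)/kappa_c * n**(1/v).
    Plotting data against gamma collapses curves for different n."""
    return (kappa - kappa_c) / kappa_c * n ** (1.0 / v)

# number partitioning: kappa_c = 0.96, v = 1 (from the slide)
print(rescale(0.96, 30, 0.96, 1))  # 0.0 at the critical point, for any n
```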

Rescaled phase transition
[Plot: Prob(perfect partition) against gamma]

Rescaled search cost
[Plot: Optimization cost against gamma]

Easy-Hard-Easy?
- Search cost only easy-hard here?
  - Optimization, not decision, search cost!
  - Easy if there are a (large number of) perfect partitions
  - Otherwise little pruning (search scales as 2^0.85n)
- Phase transition behaviour less well understood for optimization than for decision
  - sometimes optimization = a sequence of decision problems (e.g. branch & bound)
  - BUT lots of subtle issues lurking?

Algorithms at the phase boundary
What do we understand about problem hardness at the phase boundary?
How can this help build better algorithms?

Looking inside search
- Three key insights
  - constrainedness "knife-edge"
  - backbone structure
  - 2+p-SAT
- Suggests branching heuristics
  - also insight into branching mistakes

Inside SAT phase transition
- Random 3-SAT, l/n = 4.3
- Davis-Putnam algorithm
  - tree search through the space of partial assignments
  - unit propagation
- Clause to variable ratio l/n drops as we search => problems become less constrained
Aside: can anyone explain the simple scaling?
[Plot: l/n against depth/n]

Inside SAT phase transition
- But (average) clause length, k, also drops => problems become more constrained
- Which factor wins, l/n or k?
  - Look at kappa, which includes both!
Aside: why is there again such simple scaling?
[Plot: clause length k against depth/n]

Constrainedness knife-edge
[Plot: kappa against depth/n]

Constrainedness knife-edge
- Seen in other problem domains
  - number partitioning, …
- Seen on "real" problems
  - exam timetabling (alias graph colouring)
- Suggests branching heuristic
  - "get off the knife-edge as quickly as possible"
  - minimize or maximize kappa
  - heuristics must take the branching rate into account; max-kappa is therefore often not a good move!

Minimize constrainedness
- Many existing heuristics minimize kappa
  - or proxies for it
- For instance
  - Karmarkar-Karp heuristic for number partitioning
  - Brelaz heuristic for graph colouring
  - fail-first heuristic for constraint satisfaction
  - …
- Can be used to design new heuristics
  - removing some of the "black art"
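The Karmarkar-Karp differencing heuristic mentioned above is short enough to sketch: repeatedly commit the two largest numbers to opposite sides, replacing them by their difference.

```python
import heapq

def karmarkar_karp(nums):
    """Karmarkar-Karp differencing: repeatedly replace the two largest
    numbers by their difference (i.e. commit them to opposite sides).
    Returns the resulting partition difference (a heuristic, not optimal)."""
    heap = [-x for x in nums]  # max-heap via negation
    heapq.heapify(heap)
    while len(heap) > 1:
        a = -heapq.heappop(heap)
        b = -heapq.heappop(heap)
        heapq.heappush(heap, -(a - b))
    return -heap[0] if heap else 0

print(karmarkar_karp([1, 2, 3, 4]))      # 0: finds the perfect {1,4}/{2,3}
print(karmarkar_karp([4, 5, 6, 7, 8]))   # 2, though a perfect partition exists
```

The second example shows why it is only a heuristic: differencing misses the perfect split {4,5,6}/{7,8}.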

Backbone
- Variables which take fixed values in all solutions
  - alias unit prime implicates
- Let f_k be the fraction of variables in the backbone
  - l/n < 4.3: f_k vanishing (otherwise adding a clause could make the problem unsat)
  - l/n > 4.3: f_k > 0
  - discontinuity at the phase boundary!
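For small instances the backbone can be computed directly by enumerating all solutions; a sketch (exponential, for illustration only):

```python
from itertools import product

def backbone(clauses, n):
    """Enumerate all satisfying assignments (brute force) and return the
    set of literals fixed in every solution; empty if unsatisfiable."""
    solutions = [a for a in product([False, True], repeat=n)
                 if all(any(a[abs(lit) - 1] == (lit > 0) for lit in c)
                        for c in clauses)]
    fixed = set()
    if not solutions:
        return fixed
    for v in range(1, n + 1):
        values = {s[v - 1] for s in solutions}
        if values == {True}:
            fixed.add(v)     # v is True in every solution
        elif values == {False}:
            fixed.add(-v)    # v is False in every solution
    return fixed

# (x1 v x2) & (x1 v -x2): x1 is forced True, x2 is free
print(backbone([[1, 2], [1, -2]], 2))  # {1}
```

The backbone fraction f_k is then just len(backbone(...))/n averaged over the ensemble.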

Backbone
- Search cost correlated with backbone size
  - if f_k is non-zero, then we can easily assign a variable the "wrong" value
  - such mistakes are costly if made at the top of the search tree
- Backbones seen in other problems
  - graph colouring
  - TSP
  - …
Can we make algorithms that identify and exploit the backbone structure of a problem?

2+p-SAT
- Morph between 2-SAT and 3-SAT
  - fraction p of 3-clauses
  - fraction (1-p) of 2-clauses
- 2-SAT is polynomial (linear)
  - phase boundary at l/n = 1
  - but no backbone discontinuity there!
- 2+p-SAT maps from P to NP
  - for p > 0, 2+p-SAT is NP-complete
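A sketch of the 2+p-SAT generator as described above (each clause independently a 3-clause with probability p, else a 2-clause):

```python
import random

def random_2p_sat(n, l, p, rng=random):
    """2+p-SAT instance: each of the l clauses is a 3-clause with
    probability p and a 2-clause otherwise; distinct variables per
    clause, each negated with probability 1/2."""
    clauses = []
    for _ in range(l):
        k = 3 if rng.random() < p else 2
        variables = rng.sample(range(1, n + 1), k)
        clauses.append([v if rng.random() < 0.5 else -v for v in variables])
    return clauses

instance = random_2p_sat(30, 90, 0.4)  # p = 0.4, the critical value above
```

Note p = 0 recovers pure random 2-SAT and p = 1 pure random 3-SAT.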

2+p-SAT
- f_k only becomes discontinuous above p = 0.4
  - but NP-complete for p > 0!
- search cost shifts from linear to exponential at p = 0.4
- recent work on backbone fragility
[Plot: search cost against n]

Structure
Can we model structural features not found in uniform random problems?
How does such structure affect our algorithms and phase transition behaviour?

The real world isn't random?
- Very true! Can we identify structural features common in real world problems?
- Consider graphs met in real world situations
  - social networks
  - electricity grids
  - neural networks
  - ...

Real versus Random
- Real graphs tend to be sparse
  - dense random graphs contain lots of (rare?) structure
- Real graphs tend to have short path lengths
  - as do random graphs
- Real graphs tend to be clustered
  - unlike sparse random graphs
L: average path length
C: clustering coefficient (fraction of neighbours connected to each other; a cliqueness measure)
mu: proximity ratio, C/L normalized by that of a random graph of the same size and density
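The clustering coefficient C is easy to compute directly from an adjacency structure; a minimal sketch (my own helper, using a dict-of-sets graph representation):

```python
def clustering_coefficient(adj):
    """Average local clustering: for each node, the fraction of pairs of
    its neighbours that are themselves connected.
    adj maps node -> set of neighbours (undirected graph)."""
    total, counted = 0.0, 0
    for node, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue  # clustering undefined for degree < 2
        links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
        total += links / (k * (k - 1) / 2)
        counted += 1
    return total / counted if counted else 0.0

# a triangle is perfectly clustered
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(clustering_coefficient(triangle))  # 1.0
```

Average path length L is computed analogously with breadth-first search from each node; mu is then C/L normalized against a random graph of the same size and density.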

Small world graphs
- Sparse, clustered, short path lengths
- Six degrees of separation
  - Stanley Milgram's famous 1967 postal experiment
  - recently revived by Watts & Strogatz
  - shown to apply to:
    - the actors database
    - the US electricity grid
    - the neural net of a worm
    - ...

An example
- 1994 exam timetable at Edinburgh University
  - 59 nodes, 594 edges, so relatively sparse
  - but contains a 10-clique
- less than a 10^-10 chance in a random graph
  - assuming the same size and density
- the clique totally dominated the cost to solve the problem

Small world graphs
- To construct an ensemble of small world graphs
  - morph between a regular graph (like a ring lattice) and a random graph
  - with probability p include an edge from the ring lattice, with 1-p from the random graph
- real problems often contain similar structure and stochastic components?
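A sketch of the morphing construction just described, under the assumption that each lattice edge is independently kept with probability p or replaced by a uniformly random edge (the parameters and representation are mine, not the exact construction from the talk):

```python
import random

def morph_graph(n, k, p, rng=random):
    """Morph between a ring lattice (each node joined to its k nearest
    clockwise neighbours) and a random graph: each lattice edge is kept
    with probability p, otherwise replaced by a uniformly random edge.
    Returns a set of undirected edges as sorted pairs."""
    edges = set()
    for i in range(n):
        for j in range(1, k + 1):
            if rng.random() < p:
                edges.add(tuple(sorted((i, (i + j) % n))))  # lattice edge
            else:
                u, v = rng.sample(range(n), 2)              # random edge
                edges.add(tuple(sorted((u, v))))
    return edges

ring = morph_graph(12, 2, 1.0)  # p = 1: the pure ring lattice
```

Small p near 1 gives the interesting regime: mostly lattice, with a few random shortcuts.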

Small world graphs
- the ring lattice is clustered but has long paths
- random edges provide shortcuts without destroying clustering

Small world graphs
[Figure]

Small world graphs
[Figure]

Colouring small world graphs
[Figure]

Small world graphs
- Other bad news
  - disease spreads more rapidly in a small world
- Good news
  - cooperation breaks out quicker in the iterated Prisoner's dilemma

Other structural features
It's not just small world graphs that have been studied
- Large degree graphs
  - Barabasi et al's power-law model
- Ultrametric graphs
  - Hogg's tree based model
- Numbers following Benford's Law
  - 1 is much more common than 9 as a leading digit!
    prob(leading digit = i) = log10(1 + 1/i)
  - such clustering makes number partitioning much easier
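Benford's Law is a one-line formula; as a quick check that it is a genuine distribution:

```python
import math

def benford(i):
    """Benford's Law: probability that the leading digit is i (base 10)."""
    return math.log10(1 + 1 / i)

for digit in range(1, 10):
    print(digit, round(benford(digit), 3))
# leading digit 1 occurs ~30.1% of the time, 9 only ~4.6%
```

The nine probabilities telescope to log10(10) = 1, so they sum to exactly one.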

The future?
What open questions remain? Where to next?

Open questions
- Prove the random 3-SAT transition occurs at l/n = 4.3
  - random 2-SAT proved to be at l/n = 1
  - random 3-SAT transition proved to be in the range 3.003 < l/n < 4.506
  - random 3-SAT phase transition proved to be "sharp"
- 2+p-SAT
  - heuristic argument based on replica symmetry predicts discontinuity at p = 0.4
  - prove it exactly!

Open questions
- Impact of structure on phase transition behaviour
  - some initial work on quasigroups (alias Latin squares / sports tournaments)
  - morphing a useful tool (e.g. small worlds, 2-d to 3-d TSP, …)
- Optimization v decision
  - some initial work by Slaney & Thiebaux
  - problems in which the optimized quantity appears in the control parameter, and those in which it does not

Open questions
- Does phase transition behaviour give insights to help answer P=NP?
  - it certainly identifies hard problems!
  - problems like 2+p-SAT and ideas like the backbone also show promise
- But problems away from the phase boundary can be hard to solve
  - the over-constrained 3-SAT region has exponential resolution proofs
  - the under-constrained 3-SAT region can throw up occasional hard problems (early mistakes?)

Summary
That's nearly all from me!

Conclusions
- Phase transition behaviour is ubiquitous
  - decision/optimization/...
  - NP/PSPACE/P/…
  - random/real
- Phase transition behaviour gives insight into problem hardness
  - suggests new branching heuristics
  - ideas like the backbone help understand branching mistakes

Conclusions
- AI becoming more of an experimental science?
  - theory and experiment complement each other well
  - increasing use of approximate/heuristic theories to keep theory in touch with rapid experimentation
- Phase transition behaviour is FUN
  - lots of nice graphs, as promised
  - and it is teaching us lots about complexity and algorithms!

Very partial bibliography
- Cheeseman, Kanefsky & Taylor, Where the Really Hard Problems Are, Proc. of IJCAI-91
- Gent et al, The Constrainedness of Search, Proc. of AAAI-96
- Gent & Walsh, The TSP Phase Transition, Artificial Intelligence, 88:349-358, 1996
- Gent & Walsh, Analysis of Heuristics for Number Partitioning, Computational Intelligence, 14(3), 1998
- Gent & Walsh, Beyond NP: The QSAT Phase Transition, Proc. of AAAI-99
- Gent et al, Morphing: Combining Structure and Randomness, Proc. of AAAI-99
- Hogg & Williams (eds), special issue of Artificial Intelligence, 88(1-2), 1996
- Mitchell, Selman & Levesque, Hard and Easy Distributions of SAT Problems, Proc. of AAAI-92
- Monasson et al, Determining computational complexity from characteristic 'phase transitions', Nature, 400, 1999
- Walsh, Search in a Small World, Proc. of IJCAI-99
- Watts & Strogatz, Collective dynamics of 'small-world' networks, Nature, 393, 1998
