Optimization in mean field random models Johan Wästlund Linköping University Sweden.

Statistical Mechanics Each particle has a spin. The energy (Hamiltonian) depends on the spins of interacting particles. Ising model: spins ±1, H = # interacting pairs of opposite spins.

Statistical Mechanics A spin configuration σ has energy H(σ). The Gibbs measure depends on the temperature T: P(σ) ∝ exp(−H(σ)/T). T → ∞: random state. T → 0: ground state, i.e. minimizing H(σ).
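
The two temperature limits can be sketched concretely. The following is a minimal illustration (not from the talk), using the Ising convention above on a tiny chain, where H counts adjacent opposite-spin pairs and the full Gibbs measure can be computed by exhaustive enumeration:

```python
import itertools
import math

def ising_energy(spins):
    # H = number of interacting (here: adjacent) pairs with opposite spins
    return sum(1 for a, b in zip(spins, spins[1:]) if a != b)

def gibbs(T, n=6):
    # Gibbs measure: P(sigma) proportional to exp(-H(sigma) / T)
    configs = list(itertools.product((-1, 1), repeat=n))
    weights = [math.exp(-ising_energy(s) / T) for s in configs]
    Z = sum(weights)  # partition function
    return {s: w / Z for s, w in zip(configs, weights)}

# T -> 0: the measure concentrates on the two ground states (all spins equal)
cold = gibbs(T=0.1)
ground_mass = cold[(1,) * 6] + cold[(-1,) * 6]

# T -> infinity: the measure is essentially uniform over all 2^6 states
hot = gibbs(T=1000.0)
```

Here `ground_mass` is essentially 1, while every probability in `hot` is close to 1/64.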

Statistical Mechanics Thermodynamic limit N → ∞. Average energy (suitably normalized)?

Disordered Systems Spin glasses: AuFe random alloy, in which the Fe atoms interact.

Disordered Systems Random interactions between Fe atoms. Sherrington-Kirkpatrick model: H(σ) = −(1/√N) Σ_{i<j} g_{i,j} σ_i σ_j.

Disordered Systems Quenched random variables g_{i,j}. S-K is a mean field model: no correlation between quenched variables. NP-hard to find the ground state given the g_{i,j}.
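
A brute-force sketch (illustrative, not from the talk) makes the S-K setup concrete, assuming the standard Hamiltonian H(σ) = −(1/√N) Σ_{i<j} g_{ij} σ_i σ_j with Gaussian quenched couplings; exhaustive search is only feasible for tiny N, in line with the NP-hardness of the general problem:

```python
import itertools
import math
import random

def sk_hamiltonian(spins, g):
    # H(sigma) = -(1 / sqrt(N)) * sum_{i<j} g[i][j] * s_i * s_j
    N = len(spins)
    return -sum(g[i][j] * spins[i] * spins[j]
                for i in range(N) for j in range(i + 1, N)) / math.sqrt(N)

def ground_state(g, N):
    # Exhaustive search over all 2^N spin configurations; the quenched
    # couplings g are held fixed while we minimize over the spins.
    return min(itertools.product((-1, 1), repeat=N),
               key=lambda s: sk_hamiltonian(s, g))

rng = random.Random(0)
N = 8
g = [[rng.gauss(0.0, 1.0) for _ in range(N)] for _ in range(N)]  # quenched disorder
sigma = ground_state(g, N)
```

Flipping every spin leaves H unchanged, so the ground state is (at least) two-fold degenerate.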

Computer Science Test / evaluate heuristics for NP-hard problems. Average case analysis. Random problem instances.

Combinatorial Optimization Minimum Matching / Assignment, Minimum Spanning Tree, Traveling Salesman, Shortest Path, … Points with given distances; minimize the total length of the configuration.

Statistical Physics / Computer Science
Spin configuration ↔ Feasible solution
Hamiltonian ↔ Cost of solution
Ground state energy ↔ Cost of minimal solution
Temperature ↔ Artificial parameter T
Gibbs measure ↔ Gibbs measure
Thermodynamic limit ↔ N → ∞

Mean field models The replica-cavity method has given good results for mean field models, e.g. the Parisi solution of the S-K model. The same methods can be applied to combinatorial optimization problems in mean field models.

Mean field models of distance N points, abstract geometry. Inter-point distances given by i.i.d. random variables. The exponential distribution is easiest to analyze (pseudodimension 1).

Matching Set of edges giving a pairing of all points

Spanning tree Network connecting all points

Traveling salesman Tour visiting all points

Mean field limits No normalization needed! (pseudodimension 1) Matching: π²/12 ≈ 0.822 (Mézard & Parisi 1985, rigorous proof by Aldous 2000). Spanning tree: ζ(3) = 1 + 1/8 + 1/27 + … ≈ 1.202 (Frieze 1985). Traveling salesman: ≈ 2.0415 (Krauth-Mézard-Parisi 1989), now established rigorously!
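
The spanning tree constant is easy to probe by simulation. A minimal sketch (my own, not from the talk): Prim's algorithm on the complete graph with i.i.d. exp(1) edge costs, whose average cost should approach ζ(3) ≈ 1.202 without any normalization:

```python
import random

def mst_cost(n, rng):
    # Prim's algorithm on K_n with i.i.d. exponential(1) edge costs.
    w = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            w[i][j] = w[j][i] = rng.expovariate(1.0)
    in_tree = [False] * n
    in_tree[0] = True
    dist = list(w[0])  # cheapest known edge from the tree to each vertex
    total = 0.0
    for _ in range(n - 1):
        v = min((j for j in range(n) if not in_tree[j]), key=dist.__getitem__)
        total += dist[v]
        in_tree[v] = True
        for j in range(n):
            if not in_tree[j] and w[v][j] < dist[j]:
                dist[j] = w[v][j]
    return total

rng = random.Random(42)
zeta3 = sum(1.0 / k**3 for k in range(1, 1000))
avg = sum(mst_cost(300, rng) for _ in range(5)) / 5
```

With n = 300 and a handful of samples the average typically lands within a few percent of ζ(3).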

Cavity results Non-rigorous method. Aldous derived equivalent equations with the Poisson-Weighted Infinite Tree (PWIT).

Cavity results Non-rigorous quantity X = (cost of minimal solution) − (cost of minimal solution with the root removed). Define X_1, X_2, X_3, … similarly on sub-trees. This leads to the recursive distributional equation X = min_i (ξ_i − X_i), where the X_i are distributed like X and the ξ_i are the times of the events in a rate-1 Poisson process.

Cavity results Analytically, this is equivalent to the fixed point equation f(u) = ∫_{−u}^∞ e^{−f(t)} dt, where e^{−f(u)} = P(X > u).

Cavity results Explicit solution: f(u) = log(1 + e^u), i.e. X has the logistic distribution. Ground state energy: π²/12 ≈ 0.822.

Cavity results Note that the integral is equal to the area under the curve when f(u) is plotted against f(−u). In this case, f satisfies the equation e^{−f(u)} + e^{−f(−u)} = 1.
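
The claimed solution can be checked numerically. A sketch (my own check, assuming the cavity fixed point in the form f(u) = ∫_{−u}^∞ e^{−f(t)} dt): plug in f(u) = log(1 + e^u), and integrate the resulting curve e^{−x} + e^{−y} = 1 to recover the matching constant π²/12:

```python
import math

def f(u):
    # claimed explicit solution of the cavity equation (logistic distribution)
    return math.log(1.0 + math.exp(u))

def midpoint(g, a, b, steps=100000):
    # plain midpoint-rule quadrature
    h = (b - a) / steps
    return h * sum(g(a + (i + 0.5) * h) for i in range(steps))

# Check f(u) = integral_{-u}^{inf} exp(-f(t)) dt at a few points
# (the upper limit is truncated at t = 40, where the integrand is ~ e^-40)
fixed_point_ok = all(
    abs(f(u) - midpoint(lambda t: math.exp(-f(t)), -u, 40.0)) < 1e-3
    for u in (-2.0, 0.0, 2.0)
)

# Ground state energy: half the area under the curve e^-x + e^-y = 1
def y(x):
    return -math.log(1.0 - math.exp(-x))

energy = 0.5 * midpoint(y, 1e-9, 40.0)  # should be close to pi^2 / 12
```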

Cavity results

K-L matching

Similarly, the K-L matching problem leads to analogous equations, where one Poisson process has rate K and the other rate L, and min[K] stands for the K:th smallest value.

K-L matching Parisi (2006) showed that this system has an essentially unique solution. The ground state energy is given by an expression in which x and y satisfy an explicit equation; for K = L = 2 this equation takes a closed form. Unfortunately, the cavity method is not rigorous.

The exponential bipartite assignment problem: an n × n matrix of independent exp(1) edge costs.

An exact formula conjectured by Parisi (1998): the expected cost of the minimum assignment is 1 + 1/4 + 1/9 + … + 1/n². The formula suggests a proof by induction. Researchers in discrete math, combinatorics and graph theory became interested. Generalizations…
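
Parisi's formula is easy to test by Monte Carlo. A small sketch (not from the talk), brute-forcing the minimum over all permutations for n = 3:

```python
import itertools
import random

def min_assignment(c):
    # minimum-cost assignment by brute force over all n! permutations
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

rng = random.Random(1)
n, trials = 3, 4000
avg = sum(
    min_assignment([[rng.expovariate(1.0) for _ in range(n)] for _ in range(n)])
    for _ in range(trials)
) / trials

parisi = sum(1.0 / k**2 for k in range(1, n + 1))  # 1 + 1/4 + 1/9 = 49/36
```

The empirical average lands close to 49/36 ≈ 1.361; the exactness of the formula for every finite n is what made an inductive proof plausible.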

Generalizations by Coppersmith & Sorkin to incomplete matchings. A remarkable paper by M. Buck, C. Chan & D. Robbins (2000) introduces weighted vertices and comes extremely close to proving Parisi's conjecture!

Incomplete matchings (an m × n cost matrix).

Weighted assignment problems Weights λ_1, …, λ_m and μ_1, …, μ_n on the vertices. The cost of edge (i, j) is exponential of rate λ_i μ_j. A conjectured formula for the expected cost of the minimum assignment, and a formula for the probability that a vertex participates in the solution (trivial in the less general setting!).

The Buck-Chan-Robbins urn process Balls are drawn with probabilities proportional to their weights λ_1, λ_2, λ_3, …
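
A sketch of the urn dynamics as described (the weights below are arbitrary illustrative values): each draw picks one of the remaining balls with probability proportional to its weight:

```python
import random

def urn_order(weights, rng):
    # Draw all balls one by one; at each step ball i is chosen with
    # probability weights[i] / (total weight of balls still in the urn).
    remaining = list(range(len(weights)))
    order = []
    while remaining:
        r = rng.random() * sum(weights[i] for i in remaining)
        for pos, i in enumerate(remaining):
            r -= weights[i]
            if r <= 0:
                order.append(remaining.pop(pos))
                break
    return order

rng = random.Random(7)
w = [1.0, 2.0, 3.0]
draws = [urn_order(w, rng) for _ in range(6000)]
# the weight-3 ball should be drawn first about 3/6 = 50% of the time
p_first = sum(1 for d in draws if d[0] == 2) / len(draws)
```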

Proofs of the conjectures Two independent proofs of the Parisi and Coppersmith-Sorkin conjectures were announced on March 17, 2003 (Nair-Prabhakar-Sharma and Linusson-Wästlund).

Annealing Powerful idea: let T → 0, forcing the system to converge to its ground state. Used in the replica-cavity approach and in the simulated annealing meta-algorithm (optimization by random local moves).
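
Simulated annealing as just described (random local moves, T → 0) fits in a few lines. A toy sketch (not from the talk): 2-opt segment reversals on ten points placed on a circle, where the optimal tour is the circular order:

```python
import math
import random

def tour_length(tour, d):
    n = len(tour)
    return sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))

def anneal(d, rng, T=1.0, cooling=0.999, steps=20000):
    # Random local moves: reverse a segment of the tour (2-opt).  Worse
    # moves are accepted with probability exp(-delta / T); letting T -> 0
    # drives the system toward its ground state (the shortest tour).
    n = len(d)
    tour = list(range(n))
    rng.shuffle(tour)
    for _ in range(steps):
        i, j = sorted(rng.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
        delta = tour_length(cand, d) - tour_length(tour, d)
        if delta < 0 or rng.random() < math.exp(-delta / T):
            tour = cand
        T *= cooling
    return tour

n = 10
pts = [(math.cos(2 * math.pi * k / n), math.sin(2 * math.pi * k / n))
       for k in range(n)]
d = [[math.dist(p, q) for q in pts] for p in pts]
best = anneal(d, random.Random(3))
```

The final tour length should match the circle's perimeter, 2n·sin(π/n) ≈ 6.18.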

In the mean field model: underlying rate-1 variables Y_i, with r_i playing the same role as T (a local temperature). Associate weights to vertices rather than edges.

Cavity/annealing method Relax by introducing an extra vertex, and let the weight of the extra vertex go to zero. Example: the assignment problem with λ_1 = … = λ_m = 1, μ_1 = … = μ_n = 1, and λ_{m+1} = ε. p = P(the extra vertex participates); p/n = P(edge (m+1, n) participates).

Annealing p/n = P(edge (m+1, n) participates). When ε → 0, this probability can be computed explicitly using the Buck-Chan-Robbins urn theorem.

Annealing Inductively, this establishes the Coppersmith-Sorkin formula.

Results with annealing Much simpler proofs of the Parisi, Coppersmith-Sorkin and Buck-Chan-Robbins formulas. Exact results for higher moments. Exact results and limits for optimization problems on the complete graph.

The 2-dimensional urn process 2-dimensional time until k balls have been drawn

Limit shape as n → ∞ Matching: e^{−x} + e^{−y} = 1. TSP/2-factor: (1 + x)e^{−x} + (1 + y)e^{−y} = 1.

Mean field TSP If the edge costs are i.i.d. and satisfy P(ℓ < t)/t → 1 as t → 0 (pseudodimension 1), then as n → ∞ the expected cost of the minimum tour converges to ≈ 2.0415. A. Frieze proved that whp a 2-factor can be patched to a tour at small cost.

Further exact formulas

LP-relaxation of matching in the complete graph K_n

Future work Explain why the cavity method gives the same equation as the limit shape in the urn process. Establish more detailed cavity predictions. Use the proof method of Nair-Prabhakar-Sharma in more general settings.

Thank you!