Random matching and traveling salesman problems
Johan Wästlund, Chalmers University of Technology, Sweden


Mean field model of distance. The edges of the complete graph on n vertices are given i.i.d. nonnegative costs drawn from the Exponential(1) distribution.

Mean field model of distance. We are interested in the cost of the minimum matching, the minimum traveling salesman tour, etc., for large n.
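
This model is easy to experiment with. The following sketch (my own illustration, not part of the talk; all function names are mine) draws Exp(1) edge costs on a small complete graph and computes both optima by brute force:

```python
import itertools
import random

def random_costs(n, seed=0):
    """Symmetric Exp(1) edge costs on the complete graph K_n."""
    rng = random.Random(seed)
    c = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            c[i][j] = c[j][i] = rng.expovariate(1.0)
    return c

def min_matching(c):
    """Brute-force minimum-cost perfect matching (n must be even)."""
    n = len(c)
    best = float("inf")
    def rec(left, acc):
        nonlocal best
        if not left:
            best = min(best, acc)
            return
        i = left[0]
        for j in left[1:]:
            rest = [v for v in left if v not in (i, j)]
            rec(rest, acc + c[i][j])
    rec(list(range(n)), 0.0)
    return best

def min_tour(c):
    """Brute-force minimum traveling salesman tour."""
    n = len(c)
    best = float("inf")
    for perm in itertools.permutations(range(1, n)):
        route = (0,) + perm + (0,)
        best = min(best, sum(c[route[k]][route[k + 1]] for k in range(n)))
    return best

costs = random_costs(8, seed=1)
m, t = min_matching(costs), min_tour(costs)
# A tour through an even number of vertices splits into two perfect
# matchings (its alternate edges), so m <= t / 2 must always hold.
print(m, t)
```

The inequality in the final comment is a handy sanity check: any tour on an even number of vertices decomposes into two perfect matchings, so the minimum matching never costs more than half the minimum tour.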

Matching: a set of edges giving a pairing of all points.

Traveling salesman: a tour visiting all points.

Walkup’s theorem. Theorem (Walkup 1979): In the bipartite model, with vertex classes L and R each of size n, the expected cost of the minimum matching is bounded by a constant independent of n.

Walkup’s theorem. Let C_n = cost of the minimum assignment. Modify the graph model: allow multiple edges between each pair of vertices, with costs given by a Poisson process. This obviously doesn’t change the minimum assignment.

Walkup’s theorem. Give each edge a random direction, and choose the five cheapest edges out of each vertex. We show that whp this set contains a perfect matching.

Hall’s criterion. An edge set contains a perfect matching iff for every subset S of L, |Γ(S)| ≥ |S|, where Γ(S) is the set of neighbors of S.
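
This equivalence can be verified by brute force on small random bipartite graphs (a sketch of mine, not from the talk; function names are my own):

```python
import itertools
import random

def has_perfect_matching(adj, n):
    """Brute force: does some permutation match each left vertex i to p[i]?"""
    return any(all(p[i] in adj[i] for i in range(n))
               for p in itertools.permutations(range(n)))

def hall_holds(adj, n):
    """Hall's criterion: |Gamma(S)| >= |S| for every nonempty subset S of L."""
    for mask in range(1, 1 << n):
        S = [i for i in range(n) if mask >> i & 1]
        neighbors = set().union(*(adj[i] for i in S))
        if len(neighbors) < len(S):
            return False
    return True

# The two conditions agree on random bipartite graphs with |L| = |R| = 6,
# as Hall's theorem guarantees.
rng = random.Random(0)
n = 6
for _ in range(50):
    adj = [{j for j in range(n) if rng.random() < 0.4} for _ in range(n)]
    assert has_perfect_matching(adj, n) == hall_holds(adj, n)
print("ok")
```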

Hall’s criterion If Hall’s criterion holds, an incomplete matching can always be extended.

Hall’s criterion. If Hall’s criterion fails for S ⊆ L, then it also fails for T = R \ Γ(S).

Hall’s criterion. Here we can take |S| + |T| = n + 1. Hence if Hall’s criterion fails, it fails for some set S (in L or in R) with |S| ≤ (n + 1)/2.

Walkup’s theorem

The directed edges out of a given vertex have costs forming a rate n/2 Poisson process. The 5 cheapest such edges therefore have expected costs 2/n, 4/n, 6/n, 8/n, 10/n. The average cost in this set is 6/n, and there are n edges in a perfect matching.
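
The quoted expectations follow because the inter-arrival times τ_i of a Poisson process of rate n/2 are independent Exp(n/2) variables:

```latex
\[
  \mathbb{E}\,\xi_{(k)} \;=\; \sum_{i=1}^{k} \mathbb{E}\,\tau_i
  \;=\; k \cdot \frac{2}{n}, \qquad k = 1,\dots,5,
\]
\[
  \text{so the average over the five cheapest edges is }
  \frac{1}{5}\sum_{k=1}^{5}\frac{2k}{n} \;=\; \frac{6}{n}.
\]
```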

Walkup’s theorem. If Hall’s criterion holds, there is a perfect matching of expected cost at most 6. What about the cases of failure?

Walkup’s theorem. Randomly color the edges: red with probability p, blue with probability 1 − p. Take the 5 cheapest blue edges out of each vertex. If Hall’s criterion holds for these, we get a matching of expected cost 6/(1 − p). Otherwise the red edges 1-1, 2-2, etc. give a matching of expected cost n/p.

Walkup’s theorem. The total expected cost is at most 6/(1 − p) plus the contribution from the failure case. Take p = 1/n, for instance; since Hall’s criterion fails only with very small probability, the expected cost is 6 + o(1) for large n. This completes the proof.
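
Spelled out (my reconstruction of the slide's computation), the two cases combine into

```latex
\[
  \mathbb{E}[\text{cost}] \;\le\; \frac{6}{1-p}
  \;+\; \Pr(\text{Hall fails}) \cdot \frac{n}{p}.
\]
```

With p = 1/n the first term is 6 + O(1/n), and the failure probability of Hall's criterion is small enough (whp estimates) that the second term is o(1), giving the bound 6 + o(1).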

Walkup’s theorem. Actually the expected cost converges (to π²/6), but we return to this…

Walkup’s theorem. Walkup’s theorem obviously carries over to the complete graph (for even n). The method also works for the TSP, the minimum spanning tree, and other related problems. Natural conjecture: the cost converges in probability to some constant.

Statistical physics. The typical edge in the optimum solution has cost of order 1/n, and the number of edges in a solution is of order n. This is analogous to the spin systems of statistical physics.

Disordered systems: spin glasses. Example: the random alloy AuFe, in which the Fe atoms interact.

Statistical physics. Each particle essentially interacts only with its close neighbors. Macroscopic observables (e.g. the magnetic field) arise as sums of many small terms, and are essentially independent of individual particles.

Statistical physics. Convergence in probability to a constant?

Statistical physics / computer science dictionary:
Spin configuration ↔ Feasible solution
Hamiltonian ↔ Cost of solution
Ground state energy ↔ Cost of minimal solution
Temperature ↔ Artificial parameter T
Gibbs measure ↔ Gibbs measure
Thermodynamic limit ↔ n → ∞

Statistical physics. The replica-cavity method of statistical mechanics has given spectacular predictions for random optimization problems (M. Mézard, G. Parisi, W. Krauth, 1980s): a limit of π²/12 for minimum matching on the complete graph (Aldous 2000), and a limit … for the TSP (Wästlund 2006).

Non-rigorous derivation of the π²/12 limit. Matching problem on K_n for large n. In principle this requires even n, but we shall consider a relaxation. Let the edge costs be exponential of mean n, so that the sequence of ordered edge costs from a given vertex is approximately a Poisson process of rate 1.

Non-rigorous derivation of the π²/12 limit. The total cost of the minimum matching is then of order n. Introduce a punishment c > 0 for not using a particular vertex; this makes the problem well-defined also for odd n. For fixed c, let n tend to infinity. As c tends to infinity, we expect to recover the behavior of the original problem.

Non-rigorous derivation of the π²/12 limit. For large n, suppose that the problem behaves in the same way for n − 1 vertices. Choose an arbitrary vertex to be the root. What does the graph look like locally around the root? When only edges of cost < 2c are considered, the graph becomes locally tree-like.

Non-rigorous derivation of the π²/12 limit. This is the non-rigorous replica-cavity method; Aldous derived equivalent equations with the Poisson Weighted Infinite Tree (PWIT).

Non-rigorous derivation of the π²/12 limit. Let X be the difference in cost between the original problem and the problem with the root removed. If the root is not matched, then X = c. Otherwise X = ξ_i − X_i for the matching edge i, where X_i is distributed like X and ξ_i is the cost of the i:th edge from the root. The X_i’s are assumed to be independent.

Non-rigorous derivation of the π²/12 limit. It remains to do some calculations. We have X = min(c, min_i (ξ_i − X_i)), where the X_i are distributed like X.

Non-rigorous derivation of the π²/12 limit. Let f(u) = −log P(X ≥ u), so that e^(−f(u)) = P(X ≥ u).

Non-rigorous derivation of the π²/12 limit. Then if u > −c, f′(u) = e^(−f(−u)).
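
Using the relation f′(u) = e^(−f(−u)) for −c < u < c, a one-line differentiation shows that e^(−f(u)) + e^(−f(−u)) does not depend on u:

```latex
\[
  \frac{d}{du}\Bigl(e^{-f(u)} + e^{-f(-u)}\Bigr)
  \;=\; -f'(u)\,e^{-f(u)} + f'(-u)\,e^{-f(-u)}
  \;=\; -e^{-f(-u)}e^{-f(u)} + e^{-f(u)}e^{-f(-u)} \;=\; 0.
\]
```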

Non-rigorous derivation of the π²/12 limit. Hence e^(−f(u)) + e^(−f(−u)) is constant.

Non-rigorous derivation of the π²/12 limit. The constant depends on c, and the identity holds when −c < u < c. (Figure: f(u) plotted against f(−u).)

Non-rigorous derivation of the π²/12 limit. From the definition, exp(−f(c)) = P(X = c) = the proportion of vertices that are not matched, and exp(−f(−c)) = exp(0) = 1. Thus e^(−f(u)) + e^(−f(−u)) = 2 − (proportion of vertices that are matched), which equals 1 when c = infinity.

Non-rigorous derivation of the π²/12 limit

What about the cost of the minimum matching?

Non-rigorous derivation of the π²/12 limit

Hence the relevant integral J equals the area under the curve when f(u) is plotted against f(−u)! The expected cost is n/2 times this area; in the original setting it is ½ times the area, which equals π²/12.
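
The area evaluates in closed form: letting c → ∞, the curve becomes e^(−x) + e^(−y) = 1, i.e. y = −log(1 − e^(−x)), and

```latex
\[
  \int_0^\infty -\log\bigl(1 - e^{-x}\bigr)\,dx
  \;=\; \int_0^\infty \sum_{k \ge 1} \frac{e^{-kx}}{k}\,dx
  \;=\; \sum_{k \ge 1} \frac{1}{k^2} \;=\; \frac{\pi^2}{6},
\]
```

so half the area is π²/12.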

K-L matching

Similarly, the K-L matching problem leads to analogous equations, in which one Poisson process has rate K and the other has rate L; min[K] stands for the K:th smallest element.

K-L matching. Parisi (2006) showed that this system has an essentially unique solution. The ground state energy is given by an expression in x and y, where x and y satisfy an explicit equation. For K = L = 2 (equivalent to the TSP), this equation takes a particular form.

The exponential bipartite assignment problem (complete bipartite graph with vertex classes of size n).

Exact formula conjectured by Parisi (1998): the expected cost of the minimum assignment is 1 + 1/4 + 1/9 + … + 1/n². The formula suggests a proof by induction. Researchers in discrete math, combinatorics and graph theory became interested. Generalizations…
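
Parisi's formula (expected cost = 1 + 1/4 + … + 1/n²) is easy to test by simulation for tiny n. The following sketch (mine, not from the talk) estimates the n = 3 value, which the formula predicts to be 49/36 ≈ 1.361:

```python
import itertools
import random

def min_assignment(c):
    """Brute-force minimum-cost assignment for a small square cost matrix."""
    n = len(c)
    return min(sum(c[i][p[i]] for i in range(n))
               for p in itertools.permutations(range(n)))

def average_cost(n, samples, seed=0):
    """Monte Carlo estimate of the expected minimum assignment cost
    with i.i.d. Exp(1) entries."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        c = [[rng.expovariate(1.0) for _ in range(n)] for _ in range(n)]
        total += min_assignment(c)
    return total / samples

# Parisi's formula predicts 1 + 1/4 + 1/9 = 49/36 for n = 3;
# the Monte Carlo average should land close to that value.
print(average_cost(3, 10000, seed=42))
```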

Generalizations. Coppersmith & Sorkin generalized the conjecture to incomplete matchings. A remarkable paper by M. Buck, C. Chan & D. Robbins (2000) introduces weighted vertices and comes extremely close to proving Parisi’s conjecture!

Incomplete matchings (bipartite graph with vertex classes of sizes m and n).

Weighted assignment problems. Weights λ_1, …, λ_m and μ_1, …, μ_n on the vertices; the cost of edge (i, j) is exponential of rate λ_i μ_j. There is a conjectured formula for the expected cost of the minimum assignment, and a formula for the probability that a given vertex participates in the solution (trivial for the less general setting!).

The Buck-Chan-Robbins urn process. Balls are drawn with probabilities proportional to their weights λ_1, λ_2, λ_3, …
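
A minimal simulation of such a weighted urn (my own sketch; the weights are chosen arbitrarily for illustration):

```python
import random

def urn_draw_order(weights, rng):
    """Draw all balls without replacement, each draw with probability
    proportional to the remaining balls' weights."""
    remaining = list(range(len(weights)))
    order = []
    while remaining:
        w = [weights[i] for i in remaining]
        pick = rng.choices(remaining, weights=w)[0]
        remaining.remove(pick)
        order.append(pick)
    return order

# Ball 0 has weight 3 out of a total of 6, so it should come first
# in about half of the runs.
rng = random.Random(1)
freq = sum(urn_draw_order([3.0, 2.0, 1.0], rng)[0] == 0
           for _ in range(20000)) / 20000
print(freq)
```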

Proofs of the conjectures. Two independent proofs of the Parisi and Coppersmith-Sorkin conjectures were announced on March 17, 2003 (Nair, Prabhakar, Sharma; and Linusson, Wästlund).

Rigorous method. Relax by introducing an extra vertex, and let the weight of the extra vertex go to zero. Example: assignment problem with λ_1 = … = λ_m = 1, μ_1 = … = μ_n = 1, and λ_{m+1} = ε. Let p = P(extra vertex participates); by symmetry, p/n = P(edge (m+1, n) participates).

Rigorous method. p/n = P(edge (m+1, n) participates). Letting ε → 0, this probability can be computed explicitly by the Buck-Chan-Robbins urn theorem.

Rigorous method. Hence we obtain a recursion; inductively, this establishes the Coppersmith-Sorkin formula.

Rigorous results. Much simpler proofs of the Parisi, Coppersmith-Sorkin, and Buck-Chan-Robbins formulas. Exact results for higher moments. Exact results and limits for optimization problems on the complete graph.

The 2-dimensional urn process 2-dimensional time until k balls have been drawn

Limit shape as n → ∞. Matching: the curve e^(−x) + e^(−y) = 1. TSP/2-factor: an analogous curve.

Mean field TSP. If the edge costs are i.i.d. and satisfy P(ℓ < t)/t → 1 as t → 0 (pseudo-dimension 1), then the cost of the minimum tour converges as n → ∞. A. Frieze proved that whp a 2-factor can be patched to a tour at small cost.

Further exact formulas

LP-relaxation of matching in the complete graph K_n.

Future work. Explain why the cavity method gives the same equation as the limit shape in the urn process. Establish more detailed cavity predictions. Use the proof method of Nair-Prabhakar-Sharma in more general settings.

Thank you!