1 The Smoothed Analysis of Algorithms: Simplex Methods and Beyond Shang-Hua Teng Boston University/Akamai Joint work with Daniel Spielman (MIT)

2 Outline
- Why: Simplex Method
- What: Condition Numbers / Gaussian Elimination / Numerical Analysis
- Conjectures and Open Problems

3 Motivation for Smoothed Analysis
Wonderful algorithms and heuristics work well in practice, but their performance cannot be understood through traditional analyses.
- Worst-case analysis: if good, is wonderful; but for these heuristics it is often exponential, since it examines the most contrived inputs.
- Average-case analysis: may be good, but it considers a very special class of inputs; is it meaningful?

4 Random is not typical

5 Analyses of Algorithms
- Worst case: max_x T(x)
- Average case: E_r T(r)
- Smoothed complexity: max_x E_r T(x + r), with r a Gaussian random vector

6 Instance of the smoothed framework
- x is a real n-vector
- r is a Gaussian random vector of variance σ²
- measure smoothed complexity as a function of n and σ
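
A formal version of the definition sketched on slides 5 and 6; this is one common variant of the formula (the Spielman-Teng papers also use noise scaled by the norm of x):

```latex
% Smoothed complexity of an algorithm A with running time T_A, under
% Gaussian perturbations of variance sigma^2 (one common variant).
\[
  C_A(n,\sigma)
  \;=\;
  \max_{x \in \mathbb{R}^n}\;
  \mathbb{E}_{\,r \sim \mathcal{N}(0,\,\sigma^2 I_n)}
  \bigl[\, T_A(x + r) \,\bigr].
\]
```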

7 Complexity Landscape [figure: run time over the input space]

8 Complexity Landscape [figure: run time over the input space, worst case marked]

9 Complexity Landscape [figure: run time over the input space, worst case and average case marked]

10 Smoothed Complexity Landscape [figure: run time over the input space]

11 Smoothed Complexity Landscape [figure: run time over the input space, smoothed complexity marked]

12 Smoothed Analysis of Algorithms
- Interpolates between worst-case and average-case analysis.
- Considers the neighborhood of every input instance.
- If the smoothed complexity is low, you have to be unlucky to find a bad input instance.

13 Motivating Example: Simplex Method for Linear Programming
max z^T x  s.t.  A x ≤ y
- Worst case: exponential
- Average case: polynomial
- Widely used in practice

14 The Diet Problem
A table lists, for each food (a slice of bread at 30¢, a cup of yogurt at 80¢, 2 tsp of peanut butter at 20¢), its cost, iron, fat, protein, and carbs, together with the US RDA minimum for each nutrient. Choosing quantities x₁, x₂, x₃ ≥ 0 of the three foods that meet the minimums at least cost is a linear program:
minimize 30x₁ + 80x₂ + 20x₃
s.t. 30x₁ + … ≥ 300, 5x₁ + 9x₂ + 8x₃ ≥ …, … ≥ 70, 10x₁ + … ≥ 100, x₁, x₂, x₃ ≥ 0
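
To show how a diet-style LP like this can be set up and solved, here is a minimal Python sketch using scipy.optimize.linprog. The nutrient values and RDA minimums below are made-up placeholders for illustration, not the (partly garbled) numbers from the slide's table:

```python
import numpy as np
from scipy.optimize import linprog

# Costs per unit of bread, yogurt, peanut butter (cents), as on the slide.
c = np.array([30.0, 80.0, 20.0])

# Hypothetical nutrient content per unit (rows: carbs, protein, fat, iron)
# and hypothetical RDA minimums; placeholders only.
N = np.array([
    [30.0, 10.0, 15.0],   # carbs
    [ 5.0,  9.0,  8.0],   # protein
    [ 1.0,  4.0, 16.0],   # fat
    [10.0,  0.0,  3.0],   # iron
])
rda_min = np.array([300.0, 50.0, 70.0, 100.0])

# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so negate both
# sides to express the ">= RDA minimum" constraints.
res = linprog(c, A_ub=-N, b_ub=-rda_min, bounds=[(0, None)] * 3)
print(res.x, res.fun)    # optimal servings and total cost (in cents)
```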

15 The Simplex Method [figure: path on the polytope from start to opt]

16 History of Linear Programming
- Simplex Method (Dantzig '47)
- Exponential Worst Case (Klee-Minty '72)
- Average-Case Analysis (Borgwardt '77, Smale '82, Haimovich, Adler, Megiddo, Shamir, Karp, Todd)
- Ellipsoid Method (Khachiyan '79)
- Interior-Point Method (Karmarkar '84)
- Randomized Simplex Method, m^{O(√d)} (Kalai '92, Matoušek-Sharir-Welzl '92)

17 Smoothed Analysis of Simplex Method
max z^T x  s.t.  A x ≤ y   becomes   max z^T x  s.t.  (A + σG) x ≤ y,  where G is a Gaussian matrix
Theorem [Spielman-Teng 01]: For all A, the simplex method takes expected time polynomial in m, d, and 1/σ.
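
A rough way to see the flavor of this experimentally: perturb the constraint matrix with Gaussian noise and solve with a simplex-type solver. The sketch below uses scipy's dual-simplex backend; the matrix sizes, σ, and the box bounds are arbitrary choices of mine, and this only illustrates the setup, not the Spielman-Teng analysis.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, d, sigma = 60, 20, 1e-2

A = rng.standard_normal((m, d))      # any base constraint matrix A
y = np.ones(m)
z = rng.standard_normal(d)

# Perturb A by sigma * G with G a standard Gaussian matrix, then solve
# max z^T x  s.t. (A + sigma*G) x <= y  with a dual simplex method.
# Box bounds keep this toy LP bounded; x = 0 keeps it feasible.
G = rng.standard_normal((m, d))
res = linprog(-z, A_ub=A + sigma * G, b_ub=y,
              bounds=[(-10, 10)] * d, method="highs-ds")
print(res.status, res.nit)           # solver status and iteration count
```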

18 Shadow Vertices

19 Another shadow

20 Shadow vertex pivot rule [figure: the shadow of the polytope, with the start vertex and the objective z marked]

21 Theorem: For every plane, the expected size of the shadow of the perturbed polytope is poly(m, d, 1/σ).

22 Polar Linear Program
max λ  s.t.  λ z ∈ ConvexHull(a₁, a₂, …, a_m)

23 [figure: Initial Simplex and Opt Simplex of the polar linear program]

24 Shadow vertex pivot rule

26 Count facets by discretizing to N directions, N → ∞

27 Count pairs of adjacent directions that land in different facets: Pr[different facets] < c/N. Summing over the N pairs, we expect at most c facets.
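
Written out, the counting argument on slides 26 and 27 (my reconstruction of the displayed inequalities) is:

```latex
% Discretize the directions in the shadow plane into N points q_1,...,q_N
% (with q_{N+1} = q_1); as N grows, every facet of the shadow is crossed
% by some pair of adjacent directions.
\[
  \mathbb{E}\bigl[\#\text{facets of the shadow}\bigr]
  \;\le\;
  \lim_{N \to \infty} \sum_{i=1}^{N}
    \Pr\bigl[\, q_i \text{ and } q_{i+1} \text{ lie in different facets} \,\bigr]
  \;\le\;
  \lim_{N \to \infty} N \cdot \frac{c}{N}
  \;=\; c .
\]
```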

28 Expect cone of large angle

29 Intuition for Smoothed Analysis of Simplex Method
After perturbation, "most" corners have angle bounded away from flat.
- most: in some appropriate measure
- angle: measured by the condition number of the defining matrix
[figure: path from start to opt]

30 Condition number at corner
Corner is given by a linear system C x = b.
Condition number is
- the sensitivity of x to changes in C and b
- governed by the distance of C to the nearest singular matrix

31 Condition number at corner
Corner is given by C x = b; condition number is κ(C) = ‖C‖·‖C⁻¹‖.
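
A small numpy illustration of the quantity on these slides, under my assumption that the corner is the solution of the square system C x = b formed by the tight constraints; the matrix here is an arbitrary example:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5

# Square system C x = b formed by the d constraints tight at a corner.
C = rng.standard_normal((d, d))
b = rng.standard_normal(d)

x = np.linalg.solve(C, b)            # the corner itself
kappa = np.linalg.cond(C)            # condition number ||C|| * ||C^{-1}||

# 1 / ||C^{-1}|| is the spectral-norm distance from C to the nearest
# singular matrix, so large kappa means C is nearly singular and x is
# very sensitive to changes in C and b.
dist_to_singular = 1.0 / np.linalg.norm(np.linalg.inv(C), 2)
print(kappa, dist_to_singular)
```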

32 Connection to Numerical Analysis
Measure performance of algorithms in terms of the condition number of the input.
Average-case framework of Smale:
1. Bound the running time of an algorithm solving a problem in terms of its condition number.
2. Prove it is unlikely that a random problem instance has large condition number.

33 Connection to Numerical Analysis
Measure performance of algorithms in terms of the condition number of the input.
Smoothed suggestion:
1. Bound the running time of an algorithm solving a problem in terms of its condition number.
2'. Prove it is unlikely that a perturbed problem instance has large condition number.

34 Condition Number
Edelman '88: tail bound on the condition number of a standard Gaussian random matrix [formula on slide].
Theorem [Sankar-Spielman-Teng 02]: comparable tail bound for a Gaussian random matrix of variance σ², centered anywhere [formula on slide].

35 Condition Number
Edelman '88: tail bound on the condition number of a standard Gaussian random matrix [formula on slide].
Theorem [Sankar-Spielman-Teng 02]: comparable tail bound for a Gaussian random matrix of variance σ², centered anywhere [formula on slide].
(conjectured improvement shown on the slide)
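
To get a feel for such statements, one can estimate the condition-number tail of smoothed matrices by simulation. This is my own sketch, not part of the talk; the center matrix, σ, and trial count are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)
d, sigma, trials = 50, 1e-2, 2000

A_bar = np.ones((d, d))              # an arbitrary (singular) center matrix
kappas = np.array([
    np.linalg.cond(A_bar + sigma * rng.standard_normal((d, d)))
    for _ in range(trials)
])

# Empirical tail Pr[kappa >= x]: for large x it decays roughly like 1/x,
# in the spirit of the smoothed condition-number bounds on this slide.
for x in [1e3, 1e4, 1e5, 1e6]:
    print(x, np.mean(kappas >= x))
```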

36 Gaussian Elimination
A = LU. Growth factor: the largest entry magnitude appearing during elimination divided by the largest entry magnitude of A.
With partial pivoting, the growth factor can be 2ⁿ, so Ω(n) bits of precision are needed.
For every A, [smoothed bound on the growth factor of the perturbed matrix, shown on the slide].
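
A sketch of what the growth factor measures, run on a classic worst-case matrix for partial pivoting and on a Gaussian perturbation of it. This is my illustration; the size n and σ are arbitrary choices:

```python
import numpy as np

def growth_factor(A):
    """Gaussian elimination with partial pivoting, returning the growth
    factor: max entry magnitude over all intermediate matrices divided by
    the max entry magnitude of the original A."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    max0 = np.abs(A).max()
    gmax = max0
    for k in range(n - 1):
        p = k + np.argmax(np.abs(A[k:, k]))          # partial pivoting
        A[[k, p]] = A[[p, k]]                        # row swap
        A[k + 1:, k] /= A[k, k]                      # multipliers
        A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
        gmax = max(gmax, np.abs(A[k + 1:, k + 1:]).max())
    return gmax / max0

n = 60
# Classic worst case for partial pivoting: 1 on the diagonal, -1 below it,
# 1 in the last column; its growth factor is about 2^(n-1).
W = np.tril(-np.ones((n, n)), -1) + np.eye(n)
W[:, -1] = 1.0

sigma = 1e-3
print(growth_factor(W))                                   # ~ 5.8e17
print(growth_factor(W + sigma * np.random.randn(n, n)))   # typically modest
```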

37 Condition Number and Iterative LP Solvers
Renegar defined a condition number for the LP  max c^T x  s.t.  A x ≤ b:
- defined in terms of the distance of (A, b, c) to the nearest ill-posed linear program
- related to the sensitivity of x to changes in (A, b, c)
The number of iterations of many LP solvers is bounded by a function of this condition number: Ellipsoid, Perceptron, Interior Point, von Neumann.
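
For reference, the usual form of Renegar's condition number, written in my notation with d = (A, b, c) denoting the problem data:

```latex
% rho(d) is the distance from the data d = (A, b, c) to the set of
% ill-posed instances (where feasibility can flip under arbitrarily
% small changes); Renegar's condition number is the normalized reciprocal.
\[
  \rho(d) \;=\; \inf\bigl\{\, \|\Delta d\| \;:\; d + \Delta d \text{ is ill-posed} \,\bigr\},
  \qquad
  C(d) \;=\; \frac{\|d\|}{\rho(d)} .
\]
```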

38 Smoothed Analysis of Perceptron Algorithm
Theorem [Blum-Dunagan 01]: the perceptron algorithm runs in polynomial time with high probability on the perturbed instance [bound shown on the slide].
Note: slightly weaker than a bound on expectation.
The bound goes through the "wiggle room" (margin), a condition number.
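
For context, a minimal classic perceptron in Python. This is a generic sketch of the algorithm the slide refers to, not the Blum-Dunagan variant or its smoothed analysis; the data below are synthetic, with an explicit margin playing the role of the "wiggle room".

```python
import numpy as np

def perceptron(X, y, max_passes=1000):
    """Classic perceptron: find w with sign(x_i . w) == y_i for all i.
    If the data are separable with margin gamma and ||x_i|| <= R, the
    number of updates is at most (R / gamma)^2 (Block/Novikoff); this
    margin is the 'wiggle room' condition number mentioned on the slide."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(max_passes):
        updates = 0
        for i in range(n):
            if y[i] * (X[i] @ w) <= 0:   # misclassified (or on the boundary)
                w += y[i] * X[i]         # perceptron update
                updates += 1
        if updates == 0:                 # converged: all points classified
            return w
    return w                             # may not have converged

# Tiny usage example: separable data with an enforced margin.
rng = np.random.default_rng(3)
w_true = rng.standard_normal(10)
X = rng.standard_normal((500, 10))
keep = np.abs(X @ w_true) / np.linalg.norm(w_true) > 0.3
X = X[keep]
y = np.sign(X @ w_true)
w = perceptron(X, y)
print(np.all(np.sign(X @ w) == y))
```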

39 Smoothed Analysis of Renegar's Condition Number
Theorem [Dunagan-Spielman-Teng 02]: smoothed bound on Renegar's condition number [formula on slide].
Corollary: the smoothed complexity of the interior point method is [bound on slide] for accuracy ε.
Compare: the worst-case complexity of IPM is [bound on slide] iterations (note the comparison on the slide).

40 Perturbations of Structured and Sparse Problems
- Structured perturbations of structured inputs
- Zero-preserving perturbations of sparse inputs: perturb only the non-zero entries
- Or, perturb the discrete structure…
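
One simple reading of "zero-preserving perturbations" is to add noise only where the input is already non-zero, keeping the sparsity pattern. A minimal numpy sketch of that reading (mine, not from the talk):

```python
import numpy as np

def zero_preserving_perturbation(A, sigma, rng=None):
    """Add Gaussian noise of standard deviation sigma only to the
    non-zero entries of A, preserving its sparsity pattern."""
    rng = np.random.default_rng() if rng is None else rng
    mask = (A != 0)
    return A + sigma * rng.standard_normal(A.shape) * mask
```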

41 Goals of Smoothed Analysis
- Relax worst-case analysis
- Maintain mathematical rigor
- Provide a plausible explanation for the practical behavior of algorithms
- Develop a theory closer to practice

42 Geometry of … [formula on slide]

43 Geometry of … (union bound): should be d^{1/2} [formulas on slide]

44 Improving the bound on the conjecture: for …, apply the Lemma to a random …; so … [formulas on slide]

45 Smoothed Analysis of Renegar's Condition Number
Theorem [Dunagan-Spielman-Teng 02]: smoothed bound on Renegar's condition number [formula on slide].
Corollary: the smoothed complexity of the interior point method is [bound on slide] for accuracy ε.
Compare: the worst-case complexity of IPM is [bound on slide] iterations (note the conjectured improvement on the slide).