Models of Greedy Algorithms for Graph Problems
Sashka Davis, UCSD; Russell Impagliazzo, UCSD
SIAM SODA 2004

Why greedy algorithms?
Greedy algorithms are simple and have efficient implementations. They are used as:
– Exact algorithms for many optimization problems
– Approximation schemes with guaranteed approximation ratios for hard problems
– Heuristics for hard optimization problems

Goal
To design an abstract model of greedy algorithms and answer the questions:
1. Could a known greedy approximation algorithm be improved?
2. Can we prove lower bounds on the approximation ratios of greedy algorithms for hard problems?
3. Can we formalize the intuition that greedy algorithms are weaker than other algorithmic paradigms?

History
1. [BNR02] defined the Priority algorithm framework for scheduling problems
2. [AB02] defined Priority algorithms for facility location and set cover
3. [BL03] proved bounds on the performance of Priority algorithms for VC, IS, and Coloring

General results
– Extended the work of [BNR02] and [AB02] and defined a problem-independent model of greedy algorithms
– Defined a formal model of Memoryless priority algorithms
– Characterized the power of Fixed, Memoryless, and Adaptive priority algorithms in terms of combinatorial games

Separations
Memoryless Priority Algorithms ⊊ Fixed Priority Algorithms ⊊ Adaptive Priority Algorithms ⊊ Dynamic Programming Algorithms
(each class is strictly weaker than the next)

Results for graph problems
Shortest paths
– No Fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem on graphs with non-negative weights
– No Adaptive priority algorithm can achieve any constant approximation ratio for the ShortPath problem on graphs with negative weights but no negative-weight cycles
These results give the separations Fixed Priority Algorithms ⊊ Adaptive Priority Algorithms and Adaptive Priority Algorithms ⊊ Dynamic Programming Algorithms

Results for specific graph problems
Shortest paths
– No Fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem on graphs with non-negative weights
– No Adaptive priority algorithm can achieve any constant approximation ratio for the ShortPath problem on graphs with negative weights but no negative-weight cycles

Results for specific graph problems
Steiner trees
– Proved a lower bound of 1.18 on the approximation ratio achieved by Adaptive priority algorithms
– Gave an improved Adaptive priority algorithm achieving a better approximation ratio for special metric instances where the distances between nodes lie in [1, 2]

Results for specific graph problems
Weighted Vertex Cover
– Proved a lower bound of 2 for Adaptive priority algorithms, matching the standard 2-approximation algorithm
Independent Set
– Proved a lower bound of 3/2 on the performance of Adaptive priority algorithms for degree-3 graphs

Remainder of the talk
– What is a greedy algorithm?
– Formal definitions of Fixed and Adaptive priority algorithms, and a strong separation between the two classes (Fixed Priority Algorithms ⊊ Adaptive Priority Algorithms)
– A lower bound of 2 on the approximation ratio of Adaptive priority algorithms for the weighted vertex cover problem

What is a greedy algorithm?
Given a universe Γ of data items:
1. The instance is a set of data items, a subset of Γ
2. The algorithm defines an ordering function on Γ and views the data items in the instance in that order
3. The algorithm makes an irrevocable decision for each data item, which depends only on the data items seen and the decisions made so far, not on future data items
4. The solution is the set of decisions made on the data items

Kruskal's algorithm for MST
Input: (G=(V,E), ω: E → R)
1. Initialize an empty solution T
2. L = list of edges sorted in non-decreasing order of weight
3. while (L is not empty)
   – e = next edge in L; remove e from L
   – add e to T as long as T remains a forest
4. Output T
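For concreteness, here is a minimal Python sketch of Kruskal's algorithm as described above; the union-find helper and the (weight, u, v) edge format are illustrative assumptions, not taken from the slides.

    def kruskal(num_vertices, edges):
        """edges: list of (weight, u, v) tuples, vertices numbered 0..num_vertices-1."""
        parent = list(range(num_vertices))

        def find(x):
            # Path-compressing find for the union-find structure.
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x

        tree = []
        for w, u, v in sorted(edges):          # fixed order: non-decreasing weight
            ru, rv = find(u), find(v)
            if ru != rv:                       # accept only if T stays a forest
                parent[ru] = rv
                tree.append((u, v, w))
        return tree

    # Example: a triangle plus a pendant vertex.
    print(kruskal(4, [(1, 0, 1), (2, 1, 2), (3, 0, 2), (4, 2, 3)]))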

Fixed priority algorithms
Γ is a set of data items; Σ is a set of options
Input: instance I = {γ_1, γ_2, …, γ_n}, I ⊆ Γ
Output: solution S = {(γ_i, σ_i) | i = 1, 2, …, n}, σ_i ∈ Σ
1. Determine an ordering function π: Γ → R+ ∪ {∞}
2. Order I according to π(γ)
3. Repeat
   – Let γ_i be the next data item in the ordering π
   – Make a decision σ_i ∈ Σ
   – Update the partial solution S
   until (decisions are made for all data items)
4. Output S = {(γ_i, σ_i) | i = 1, 2, …, n}
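A small Python sketch of this fixed-priority template; the function names (fixed_priority, priority, decide) and the way decisions are represented are assumptions for illustration, not part of the formal model.

    def fixed_priority(instance, priority, decide):
        """instance: iterable of data items; priority: item -> key (chosen once, up front);
        decide: (item, partial_solution) -> option, irrevocable."""
        solution = []                                # partial solution S
        for item in sorted(instance, key=priority):  # one fixed ordering of the whole instance
            option = decide(item, solution)          # may depend only on what was seen so far
            solution.append((item, option))
        return solution

    # Toy usage: accept odd numbers, reject even ones, in increasing order.
    print(fixed_priority([3, 1, 2],
                         priority=lambda x: x,
                         decide=lambda x, s: "accept" if x % 2 else "reject"))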

Kruskal is a Fixed priority algorithm
– Γ is the set of edges; each edge is represented as (u, v, ω)
– Σ = {accepted, rejected}
– Priority function: π(u, v, ω) = ω

Question: Can all problems with known greedy algorithms be solved by a Fixed priority algorithm?
ShortPath problem: Given a graph G=(V,E), ω: E → R+, and s, t ∈ V, find a directed tree of edges rooted at s such that the combined weight of the path from s to t is minimal.
Is ShortPath solvable by a Fixed priority algorithm?

Answer:
Theorem: No Fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem.
(ShortPath is not solvable by any Fixed priority algorithm.)

Fixed priority game (Solver vs. Adversary)
[Figure: the Adversary presents a set Γ_0 of data items; the Solver orders them and, for each data item γ it considers, commits to a decision σ, building S_sol = {(γ_i, σ_i)}; the Adversary may remove data items from the remaining sets Γ_1, Γ_2, …. When no data items remain the game ends, the Adversary outputs its own solution S_adv = {(γ_i, σ*_i)}, and the Solver is awarded a payoff.]

Fixed priority game for the ShortPath problem
– Data items are edges of the graph
– Decision options Σ = {accept, reject}
– A strategy for the Adversary in the game establishes a lower bound on the approximation ratio achieved by any Fixed priority algorithm

Adversary selects Γ_0
[Figure: a graph on vertices s, a, b, t with edges u(k), w(k), x(1), v(1), y(1), z(1); the number in parentheses is the edge weight.]

Solver selects an order on Γ_0
Based on that order, the Adversary presents:
[Figure: the graph on vertices s, a, b, t with edges u(k), w(k), x(1), v(1), y(1), z(1).]

Adversary's strategy
– Waits until the Solver considers edge y(1) (the Solver will consider y(1) before z(1))
– Event 1: σ_y = accept
– Event 2: σ_y = reject

Event 1: Solver accepts y(1)
[Figure: the graph on vertices s, a, b, t with edges u(k), x(1), y(1), z(1).]
– The Solver constructs the path {u, y}
– The Adversary outputs the solution {x, z}

Event 2: Solver rejects y(1)
[Figure: the graph on vertices s, a, b, t with edges u(k), x(1), y(1), z(1).]
– The Solver fails to construct a path
– The Adversary outputs the solution {u, y}

The outcome of the game:
– The Solver either fails to output a solution or achieves an approximation ratio of (k+1)/2
– The Adversary can set k arbitrarily large, and thus can force the algorithm to incur an arbitrarily large approximation ratio
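A quick sanity check of the forced payoff, plugging in the edge weights stated above (the Solver's path {u, y} costs k+1, the Adversary's path {x, z} costs 2); the function name is just for illustration.

    def ratio(k):
        solver_cost = k + 1      # Solver's path {u, y}: weight k plus weight 1
        optimum = 1 + 1          # Adversary's path {x, z}: two weight-1 edges
        return solver_cost / optimum

    print([ratio(k) for k in (2, 10, 100)])   # grows without bound as k grows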

Conclusion
– No Fixed priority algorithm can achieve any constant approximation ratio for the ShortPath problem
– Dijkstra's algorithm solves the ShortPath problem exactly
Dijkstra's algorithm (G=(V,E), ω: E → R+, s ∈ V)
1. T ← ∅; S ← {s}
2. While (S ≠ V)
   – Find e = (u, x) ∈ Cut(S, V−S) minimizing path(s, u) + ω(e)
   – T ← T ∪ {e}; S ← S ∪ {x}
3. Output T
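A minimal Python sketch of Dijkstra's algorithm with a binary heap; the adjacency-list input format is an assumption for illustration, not taken from the slides.

    import heapq

    def dijkstra(adj, s):
        """adj: dict mapping each vertex to a list of (neighbor, weight) pairs.
        Returns a dict of shortest-path distances from s."""
        dist = {s: 0}
        heap = [(0, s)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue                       # stale heap entry
            for v, w in adj.get(u, []):
                nd = d + w                     # the adaptive priority: path(s, u) + ω(e)
                if nd < dist.get(v, float("inf")):
                    dist[v] = nd
                    heapq.heappush(heap, (nd, v))
        return dist

    # Example: s -> a (weight 4), s -> b (1), b -> a (2), a -> t (1)
    print(dijkstra({"s": [("a", 4), ("b", 1)], "b": [("a", 2)], "a": [("t", 1)]}, "s"))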

Adaptive priority algorithms
Γ is a set of data items; Σ is a set of options
Input: instance I = {γ_1, γ_2, …, γ_d}, I ⊆ Γ
Output: solution S = {(γ_i, σ_i) | i = 1, 2, …, d}
1. Initialization: U ← I; S ← ∅; I* ← ∅; t ← 1
2. Repeat
   – Determine an ordering function π_t: Γ − I* → R+ ∪ {∞}
   – Pick the highest-priority data item γ_t ∈ U according to π_t
   – Make an irrevocable decision σ_t ∈ Σ
   – Update U ← U − {γ_t}; S ← S ∪ {(γ_t, σ_t)}; I* ← I* ∪ {γ_t}; t ← t+1
   until (decisions are made for all data items)
3. Output S = {(γ_i, σ_i) | i = 1, 2, …, d}
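A Python sketch of the adaptive-priority template; the names and the way the ordering is recomputed each round are illustrative assumptions.

    def adaptive_priority(instance, make_priority, decide):
        """make_priority: partial_solution -> (item -> key), recomputed every round;
        decide: (item, partial_solution) -> option, irrevocable."""
        unseen = list(instance)                           # U
        solution = []                                     # S
        while unseen:
            priority = make_priority(solution)            # new ordering π_t each round
            item = min(unseen, key=priority)              # highest-priority remaining item
            unseen.remove(item)
            solution.append((item, decide(item, solution)))
        return solution

The only change from the fixed template sketched earlier is that the ordering can be recomputed after every decision.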

Dijkstra is an Adaptive priority algorithm
– Γ is the set of edges; each edge is represented as (u, v, ω)
– Σ = {accepted, rejected}
– Ordering: at step t, prefer the cut edge e = (u, x) minimizing path(s, u) + ω(e)
Hence ShortPath separates the two classes: Fixed Priority Algorithms ⊊ Adaptive Priority Algorithms

Weighted Vertex Cover
– [Joh74] gave a greedy 2-approximation algorithm
– Can we design an improved approximation algorithm in the class of Adaptive priority algorithms?

[Joh74] greedy 2-approximation for WVC
Input: instance I = {γ_1, γ_2, …, γ_n}
Output: solution S ⊆ I
1. Initialization: U ← I; S ← ∅; I* ← ∅; t ← 1
2. Repeat
   – π_t(γ) = ω(v) / |adj_list − I*|
   – Order all γ ∈ U according to their π_t values in non-decreasing order; let γ_t be the first data item in the order
   – if (π_t(γ_t) ≠ ∞) then
     – Make the irrevocable decision σ_t = accept
     – Update U ← U − {γ_t}; S ← S ∪ {γ_t}; I* ← I* ∪ {γ_t}; t ← t+1
   until (π_t(γ_t) = ∞)
3. Output S
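A minimal Python sketch of this weight-over-degree greedy rule; the graph representation and tie-breaking are illustrative assumptions, not taken from the slides.

    def greedy_wvc(weights, adj):
        """weights: dict vertex -> weight; adj: dict vertex -> set of neighbors.
        Picks vertices greedily by ω(v) / (# uncovered incident edges)."""
        uncovered = {frozenset((u, v)) for u in adj for v in adj[u]}   # edges still to cover
        cover = set()
        while uncovered:
            def priority(v):
                deg = sum(1 for e in uncovered if v in e)              # uncovered incident edges
                return weights[v] / deg if deg else float("inf")
            v = min((v for v in adj if v not in cover), key=priority)
            cover.add(v)
            uncovered = {e for e in uncovered if v not in e}
        return cover

    # Example: a path a - b - c with a cheap middle vertex; expected cover {'b'}.
    print(greedy_wvc({"a": 3, "b": 1, "c": 3},
                     {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))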

Priority model for the WVC problem
– A data item is a vertex of the graph: γ = (v, ω(v), adj_list), with ω: V → R
– The input to the algorithm is a set of data items I = {γ_1, γ_2, …, γ_n}
– The set of decision options is Σ = {accept, reject}
– Ordering function: π_t(γ) = ω(v) / |adj_list − I*|
– For simplicity, the solution is the vertex cover S ⊆ I of vertices accepted by the algorithm
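A small illustration of this data-item representation and the ordering function π_t; the namedtuple and function names are assumptions for illustration.

    from collections import namedtuple

    DataItem = namedtuple("DataItem", ["v", "weight", "adj_list"])

    def priority(item, processed):
        remaining = [u for u in item.adj_list if u not in processed]   # adj_list − I*
        return item.weight / len(remaining) if remaining else float("inf")

    # Vertex b of weight 1 with neighbors a and c, where a has already been processed.
    print(priority(DataItem("b", 1, ["a", "c"]), processed={"a"}))     # 1 / 1 = 1.0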

Question
Is there an Adaptive priority algorithm which gives a better approximation for the WVC than the known 2-approximation?
– No!
Theorem: No Adaptive priority algorithm can achieve an approximation ratio better than 2.

Adaptive priority game (Solver vs. Adversary)
[Figure: in each round the Solver picks one remaining data item and commits to a decision, extending S_sol = {(γ_i, σ_i)}; the Adversary may then remove data items from the remaining set Γ_t before the next round.]
The game ends:
1. The Adversary outputs its own solution S_adv = {(γ_i, σ*_i)}
2. The Solver is awarded the payoff f(S_sol)/f(S_adv)

The Adversary chooses instances to be complete bipartite graphs K_{n,n}
The weight function is ω: V → {1, n²}
[Figure: a K_{n,n} whose vertices carry weights 1 or n².]

The game
Data items
– Each node appears in Γ_0 as two separate data items, with weights 1 and n²
Solver moves
– Chooses a data item and commits to a decision
Adversary move
– Removes from the next Γ_t the data item corresponding to the node just committed, and ...

Adversary's strategy is to wait until one of the following events:
– Event 1: the Solver accepts a node of weight n²
– Event 2: the Solver rejects a node of any weight
– Event 3: the Solver has committed to all but one of the nodes on one side of the bipartition

Event 1: Solver accepts a node with ω(v) = n²
– The Adversary chooses part B of the bipartition as a cover, and incurs cost n
– The cost of a cover for the Solver is at least n² + n

Event 2: Solver rejects a node of any weight
– The Adversary chooses part A of the bipartition as a cover
– The Solver must choose part B of the bipartition as a cover

Event 3: Solver commits to n−1 nodes of weight ω(v) = 1 on one side of K_{n,n}
– The Adversary chooses part B of the bipartition as a cover, and incurs cost n
– The cost of a cover for the Solver is 2n
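A quick check of the payoffs forced in Events 1 and 3, plugging in the costs stated on these slides (the function names are just for illustration).

    def event1_ratio(n):
        return (n**2 + n) / n        # Solver pays at least n² + n, Adversary pays n

    def event3_ratio(n):
        return (2 * n) / n           # Solver pays 2n, Adversary pays n

    for n in (5, 50, 500):
        print(n, event1_ratio(n), event3_ratio(n))   # Event 3 pins the ratio at 2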

Summary
– No Adaptive priority algorithm can achieve an approximation ratio better than 2 for the WVC
– The known 2-approximation is optimal within this class and cannot be improved

Conclusions
1. The class of Adaptive priority algorithms is strictly more powerful than the class of Fixed priority algorithms (Fixed Priority Algorithms ⊊ Adaptive Priority Algorithms)
2. The known 2-approximation for the WVC is optimal in the class of Adaptive priority algorithms and cannot be improved

Future research directions
– Extend the framework to capture a larger class of greedy algorithms
  – Define a notion of global information
  – Redefine the notion of local information
– Build formal models for backtracking and dynamic programming algorithms and evaluate their performance on hard problems