Deterministic Algorithms for Submodular Maximization Problems
Moran Feldman, The Open University of Israel
Joint work with Niv Buchbinder


Submodular Functions

Definition: Given a ground set N, a set function f : 2^N → ℝ assigns a number to every subset of the ground set. A set function is submodular if:
o f(A + u) – f(A) ≥ f(B + u) – f(B) for all A ⊆ B ⊆ N and u ∉ B, or (equivalently)
o f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ N.

Submodular functions can be found in: Combinatorics (two examples soon), Machine Learning, Image Processing, and Algorithmic Game Theory.
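To make the definition concrete, here is a minimal brute-force check of the diminishing-returns condition. This is a hypothetical helper, not part of the talk; it is exponential in |N| and only suitable for sanity-checking tiny examples.

```python
from itertools import combinations

def is_submodular(f, ground_set):
    """Check f(A + u) - f(A) >= f(B + u) - f(B) for all A <= B, u not in B.
    Exponential time: use only to sanity-check tiny ground sets."""
    subsets = [frozenset(c) for r in range(len(ground_set) + 1)
               for c in combinations(ground_set, r)]
    for A in subsets:
        for B in subsets:
            if not A <= B:
                continue
            for u in set(ground_set) - B:
                if f(A | {u}) - f(A) < f(B | {u}) - f(B):
                    return False
    return True
```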

Example 1: Cut Function

A directed graph G = (V, E) with capacities c_e ≥ 0 on the arcs. For every S ⊆ V:

    f(S) = Σ_{(u,v) ∈ E : u ∈ S, v ∉ S} c_(u,v)

Observation: f(S) is a non-negative submodular function.

[Figure: an example graph in which the displayed set S has f(S) = 3.]
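As an illustration, a value oracle for the directed cut function might look as follows; the function name and the toy graph below are mine, not from the slides.

```python
def directed_cut_value(S, capacities):
    """f(S): total capacity of the arcs leaving S.
    `capacities` maps arcs (u, v) to capacities c_e >= 0."""
    return sum(c for (u, v), c in capacities.items()
               if u in S and v not in S)

# Toy example: a directed triangle with unit capacities.
capacities = {("a", "b"): 1.0, ("b", "c"): 1.0, ("c", "a"): 1.0}
print(directed_cut_value({"a"}, capacities))  # 1.0: only (a, b) leaves {a}
```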

Example 2: Coverage Function

Elements E = {e_1, e_2, …, e_n} and sets s_1, s_2, …, s_m ⊆ E. For every S = {s_{i_1}, s_{i_2}, …, s_{i_k}}:

    f(S) = |s_{i_1} ∪ s_{i_2} ∪ … ∪ s_{i_k}|

Observation: f(S) is a non-negative (monotone) submodular function.

[Figure: a diagram of sets S_1, …, S_5 covering the elements.]
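Similarly, a sketch of a coverage-function value oracle (again with illustrative names):

```python
def coverage_value(S, sets_by_name):
    """f(S): number of elements covered by the union of the chosen sets."""
    covered = set()
    for name in S:
        covered |= sets_by_name[name]
    return len(covered)

sets_by_name = {"s1": {1, 2}, "s2": {2, 3}, "s3": {4}}
print(coverage_value({"s1", "s2"}, sets_by_name))  # 3: covers {1, 2, 3}
```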

Submodular Maximization Problems

Unconstrained Submodular Maximization: Given a non-negative submodular function f : 2^N → ℝ, find a set S ⊆ N maximizing f(S). Generalizes Max-(Directed)-Cut.

Submodular Maximization with a Cardinality Constraint: Given a non-negative submodular function f : 2^N → ℝ and an integer k, find a set S of size at most k maximizing f(S). Generalizes Max-k-Coverage and Max-Cut with a specified cut size.

Other constraints: exactly k elements, matroid, knapsack…
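For intuition, both problems can be solved exactly by enumeration in exponential time; the following hypothetical baseline is useful only for comparing the approximation algorithms below against the true optimum on tiny instances.

```python
from itertools import combinations

def brute_force_max(f, ground_set, k=None):
    """Exhaustive maximization; k=None means unconstrained,
    otherwise |S| <= k. Exponential time: tiny instances only."""
    elements = list(ground_set)
    max_size = len(elements) if k is None else min(k, len(elements))
    best_set, best_val = frozenset(), f(frozenset())
    for r in range(1, max_size + 1):
        for c in combinations(elements, r):
            val = f(frozenset(c))
            if val > best_val:
                best_set, best_val = frozenset(c), val
    return best_set, best_val
```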

Our Main Question

Is randomization necessary for obtaining good approximation guarantees for submodular maximization?

The model: algorithms should be polynomial in |N|. Since the representation of f might be very large, we assume access via a value oracle: given a subset A ⊆ N, it returns f(A).

Reasons for “not necessary”:
- Most approximation algorithms can be derandomized.
- “Not necessary” is the default…

Reasons for “necessary”:
- Currently, most of the best known algorithms are randomized.
- Algorithms access the function only through the value oracle, which makes it difficult to apply standard derandomization techniques (e.g., conditional expectations).
- Algorithms based on the multilinear extension are inherently randomized.

History and Results: Unconstrained Maximization

Randomized approximation algorithms:
0.4 – non-oblivious local search [Feige et al. 07]
0.41 – simulated annealing [Oveis Gharan and Vondrak 11]
0.42 – structural continuous greedy [Feldman et al. 11]
0.5 – double greedy [Buchbinder et al. 12]

Deterministic approximation algorithms:
0.33 – local search [Feige et al. 07]
0.4 – recursive local search [Dobzinski and Mor 15]
0.5 – derandomized double greedy [this work]

Approximation hardness:
0.5 – information theoretic [Feige et al. 07]
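For reference, here is a minimal sketch of the double greedy algorithm, assuming f is given as a value oracle on Python sets. The randomized branch is the 1/2-approximation of [Buchbinder et al. 12]; taking the larger of the two gains instead gives the deterministic 1/3-approximation variant from the same paper.

```python
import random

def double_greedy(f, ground_set, randomized=True):
    """Process the elements once; maintain X (grows) and Y (shrinks)."""
    X, Y = set(), set(ground_set)
    for u in ground_set:
        a = max(f(X | {u}) - f(X), 0.0)  # gain of adding u to X
        b = max(f(Y - {u}) - f(Y), 0.0)  # gain of removing u from Y
        if randomized and a + b > 0:
            take = random.random() < a / (a + b)
        else:
            take = a >= b
        if take:
            X.add(u)
        else:
            Y.discard(u)
    return X  # at this point X == Y
```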

History and Results: Cardinality Constraint

Approximation algorithms:
0.25 – local search [Lee et al. 10]
0.309 – fractional local search [Vondrak 13]
0.325 – simulated annealing [Oveis Gharan and Vondrak 11]
1/e ≈ 0.367 – measured continuous greedy [Feldman et al. 11]
1/e ≈ 0.367 (faster) – random greedy [Buchbinder et al. 14]
1/e + 0.004 – “wide” random greedy [Buchbinder et al. 14]

Our result (deterministic):
e^(-1) ≈ 0.367 – derandomized random greedy [this work]

Approximation hardness:
0.5 – for unconstrained maximization [Feige et al. 07]
0.491 – symmetry gap [Oveis Gharan and Vondrak 11]
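And a sketch of random greedy [Buchbinder et al. 14], the algorithm derandomized in this work. The padding with None is my way of modeling the paper's dummy elements, which allow the algorithm to add nothing in a round.

```python
import random

def random_greedy(f, ground_set, k):
    """k rounds; in each, pick a uniformly random element among the k
    remaining elements with the largest marginal gains (padded with
    dummies of marginal 0, represented here by None)."""
    S, remaining = set(), set(ground_set)
    for _ in range(k):
        gains = {u: f(S | {u}) - f(S) for u in remaining}
        top = sorted(gains, key=gains.get, reverse=True)[:k]
        M = [u for u in top if gains[u] > 0]  # keep positive gains only
        M += [None] * (k - len(M))            # dummies: add nothing
        u = random.choice(M)
        if u is not None:
            S.add(u)
            remaining.discard(u)
    return S
```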

The Profile of the Algorithms

Both algorithms work in iterations. In every iteration the algorithm:
- starts with some state S;
- randomly switches to a new state from a set N(S) of possible next states.
For every S’ ∈ N(S), let p(S, S’) be the probability that the algorithm switches from S to S’.

The analysis works whenever the probabilities p(S, S’) obey k linear constraints that might depend on S, where k is polynomial:

    Σ_{S’ ∈ N(S)} a_i(S, S’) · p(S, S’) ≥ b_i(S)    for i = 1, …, k.

Derandomization – A Naïve Attempt

Idea: explicitly store the distribution over the current state of the algorithm.

[Figure: a branching tree of (state, probability) pairs. The initial state (S_0, 1) splits into (S_1, p) and (S_2, 1 − p), which split further into (S_3, q_1), (S_4, q_2), (S_5, q_3), (S_6, q_4), …]

Problem: the number of states can increase exponentially with the number of iterations.

Strategy

[Figure: from a state S, the algorithm may move to S_1, S_2, or S_3 with probabilities p(S, S_1), p(S, S_2), p(S, S_3).]

The state S_i gets into the distribution of the next iteration only if p(S, S_i) > 0. We therefore want probabilities that:
- obey the constraints;
- are mostly zeros.

Expectation to the Rescue

Often it is enough for the constraints to hold only in expectation over the current distribution D. That is, the analysis of the algorithm still works when:

    Σ_S D(S) · Σ_{S’ ∈ N(S)} a_i(S, S’) · p(S, S’) ≥ Σ_S D(S) · b_i(S)    for i = 1, …, k.

Expectation to the Rescue (cont.)

Some justifications:
- We now require the analysis to work only for the expected output set; this can often follow from the linearity of expectation.
- The new constraints are defined using multiple states (and their probabilities), and thus are not natural/accessible for the original randomized algorithm.
- Both points are true for the two algorithms we derandomize.

Finding a Good Solution

Treat the constraints as a linear program over the probabilities p(S, S’):
- It has a feasible solution: the probabilities used by the original randomized algorithm.
- It is bounded.
- A basic feasible solution contains at most one non-zero variable per constraint:
  - one non-zero variable for every current state (whose outgoing probabilities must sum to 1);
  - k additional non-zero variables for the k expectation constraints.

Hence, the size of the distribution can increase by at most k in every iteration.
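A sketch of how such a sparse basic feasible solution might be computed with an off-the-shelf simplex solver. All names, and the exact encoding of the k expectation constraints as a matrix A_exp, are illustrative assumptions; the point is that simplex-type solvers return vertex (basic) solutions, which is what bounds the support.

```python
import numpy as np
from scipy.optimize import linprog

def sparse_transitions(states, dist, neighbors, A_exp, b_exp):
    """Variables q[(S, S')] = D(S) * p(S, S') over all pairs. One equality
    row per state makes its outgoing mass sum to D(S); A_exp @ q >= b_exp
    encodes the k constraints in expectation. A simplex-type solver returns
    a basic feasible solution, so at most len(states) + k variables are
    non-zero."""
    pairs = [(s, s2) for s in states for s2 in neighbors(s)]
    A_eq = np.zeros((len(states), len(pairs)))
    for j, (s, _) in enumerate(pairs):
        A_eq[states.index(s), j] = 1.0
    res = linprog(c=np.zeros(len(pairs)),          # pure feasibility problem
                  A_ub=-np.asarray(A_exp), b_ub=-np.asarray(b_exp),
                  A_eq=A_eq, b_eq=np.array([dist[s] for s in states]),
                  bounds=[(0, None)] * len(pairs),
                  method="highs-ds")               # dual simplex -> BFS
    return {pairs[j]: q for j, q in enumerate(res.x) if q > 1e-12}
```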

In Conclusion

The deterministic algorithm explicitly stores a distribution over states. In every iteration it:
- uses the above LP to calculate the probabilities for moving from one state to another;
- calculates the distribution for the next iteration based on these probabilities.

Performance:
- The analysis of the original (randomized) algorithm still works.
- The size of the distribution grows linearly in k, so the algorithm runs in polynomial time.
- Sometimes the LP can be solved quickly, resulting in quite a fast algorithm.
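Putting the pieces together, the deterministic outer loop might be organized as follows. This is a sketch under the same illustrative assumptions as the LP block above; `constraints_for` is a hypothetical callback producing the k expectation constraints of whichever algorithm is being derandomized.

```python
def derandomized_run(f, initial_state, num_iters, neighbors, constraints_for):
    """Maintain an explicit distribution over states; in every iteration,
    solve the LP above for sparse transition probabilities and push the
    distribution forward. The support grows by at most k per iteration."""
    dist = {initial_state: 1.0}
    for t in range(num_iters):
        states = list(dist)
        A_exp, b_exp = constraints_for(states, dist, t)  # k rows each
        q = sparse_transitions(states, dist, neighbors, A_exp, b_exp)
        new_dist = {}
        for (_, s2), mass in q.items():
            new_dist[s2] = new_dist.get(s2, 0.0) + mass
        dist = new_dist
    return max(dist, key=f)  # output the best set in the final support
```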

Open Problems

- Derandomizing additional algorithms for submodular maximization problems; in particular, derandomizing algorithms involving the multilinear extension.
- Obtaining faster deterministic algorithms for the problems we considered.
- Using our technique to derandomize algorithms from other fields.