1 Submodular Set Function Maximization via the Multilinear Relaxation & Dependent Rounding
Chandra Chekuri, Univ. of Illinois, Urbana-Champaign

2 Max weight independent set
N: a finite ground set; w : N → R+ weights on N
I ⊆ 2^N is an independence family of subsets
I is downward closed: A ∈ I and B ⊆ A ⇒ B ∈ I
max w(S) s.t. S ∈ I

3 Independence families
Stable sets in graphs
Matchings in graphs and hypergraphs
Matroids and intersections of matroids
Packing problems: feasible {0,1} solutions to Ax ≤ b where A is a non-negative matrix

4 Max weight independent set
Max weight stable set in graphs
Max weight matchings
Max weight independent set in a matroid
Max weight independent set in the intersection of two matroids
Max profit knapsack, etc.
max w(S) s.t. S ∈ I

5 This talk
f is a non-negative submodular set function on N
Motivation: several applications, mathematical interest
max f(S) s.t. S ∈ I

6 Submodular Set Functions
A function f : 2^N → R+ is submodular if
f(A+j) – f(A) ≥ f(B+j) – f(B) for all A ⊆ B, j ∈ N\B
or, equivalently, if
f(A+j) – f(A) ≥ f(A+i+j) – f(A+i) for all A ⊆ N, i, j ∈ N\A

7 Submodular Set Functions
A function f : 2^N → R+ is submodular if
f(A+j) – f(A) ≥ f(B+j) – f(B) for all A ⊆ B, j ∈ N\B
f(A+j) – f(A) ≥ f(A+i+j) – f(A+i) for all A ⊆ N, i, j ∈ N\A
Equivalently: f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) for all A, B ⊆ N
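
These conditions are equivalent, and for small ground sets they can be checked by brute force. Below is a minimal Python sketch (not from the talk; the test function and names are illustrative) that verifies f(A) + f(B) ≥ f(A ∪ B) + f(A ∩ B) over all pairs of subsets:

from itertools import combinations

def powerset(items):
    # All subsets of the ground set, as frozensets.
    items = list(items)
    return [frozenset(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

def is_submodular(f, ground):
    # Brute-force check of f(A) + f(B) >= f(A | B) + f(A & B) for all A, B.
    subsets = powerset(ground)
    return all(f(A) + f(B) >= f(A | B) + f(A & B) - 1e-9
               for A in subsets for B in subsets)

# Example: f(A) = sqrt(|A|) is submodular (by concavity of sqrt); the ground set is made up.
f = lambda A: len(A) ** 0.5
print(is_submodular(f, {1, 2, 3, 4}))  # True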

8 Cut functions in graphs
G = (V,E) undirected graph
f : 2^V → R+ where f(S) = |δ(S)|

9 Coverage in Set Systems
X_1, X_2, ..., X_n subsets of a set U
f : 2^{1,2,...,n} → R+ where f(A) = |∪_{i ∈ A} X_i|
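
As a concrete illustration of slides 8-9 (not from the talk; the set system is made up), the coverage function is easy to implement, and its marginal gains visibly shrink as the base set grows:

# Coverage function f(A) = |union of X_i for i in A| for an illustrative set system.
X = {1: {"u1", "u2", "u3"}, 2: {"u2", "u3"}, 3: {"u3", "u4"}, 4: {"u5"}}

def coverage(A):
    return len(set().union(*(X[i] for i in A))) if A else 0

# Decreasing marginal gains: adding element 2 helps less once element 1 is present.
print(coverage({2}) - coverage(set()))   # 2
print(coverage({1, 2}) - coverage({1}))  # 0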

10 Submodular Set Functions
Non-negative submodular set functions: f(A) ≥ 0 for all A ⇒ f(A) + f(B) ≥ f(A ∪ B) (sub-additive)
Monotone submodular set functions: f(∅) = 0 and f(A) ≤ f(B) for all A ⊆ B
Symmetric submodular set functions: f(A) = f(N\A) for all A

11 Other examples
Cut functions in hypergraphs (symmetric, non-negative)
Cut functions in directed graphs (non-negative)
Rank functions of matroids (monotone)
Generalizations of coverage in set systems (monotone)
Entropy / mutual information of a set of random variables
...

12 Example: Max-Cut
f is the cut function of a given graph G = (V,E)
I = 2^V: unconstrained
NP-Hard
max f(S) s.t. S ∈ I

13 Example: Max k-Coverage
X_1, X_2, ..., X_n subsets of U and an integer k
N = {1,2,...,n}; f is the set coverage function (monotone)
I = { A ⊆ N : |A| ≤ k } (cardinality constraint)
NP-Hard
max f(S) s.t. S ∈ I

14 Approximation Algorithms
A is an approximation algorithm for a maximization problem if:
A runs in polynomial time
for all instances I of the problem, A(I) ≥ α · OPT(I)
α (≤ 1) is the worst-case approximation ratio of A

15 Techniques
max f(S) s.t. S ∈ I, where f is a non-negative submodular set function on N
Greedy
Local Search
Multilinear relaxation and rounding

16 Greedy and Local-Search
[Nemhauser-Wolsey-Fisher'78, Fisher-Nemhauser-Wolsey'78]
Work well for "combinatorial" constraints: matroids, intersections of matroids, and generalizations
Recent work shows applicability to non-monotone functions [Feige-Mirrokni-Vondrak'07] [Lee-Mirrokni-Nagarajan-Sviridenko'08] [Lee-Sviridenko-Vondrak'09] [Gupta et al.'10]

17 Motivation for the mathematical programming approach
Quest for optimal results
Greedy/local search is not so easy to adapt to packing constraints of the form Ax ≤ b
Known advantages of geometric and continuous optimization methods and the polyhedral approach

18 Math. Programming approach
max w(S) s.t. S ∈ I   →   max w·x s.t. x ∈ P(I)
x_i ∈ [0,1] is the indicator variable for i
Exact algorithm: P(I) = convexhull({ 1_S : S ∈ I })

19 Math. Programming approach
max w(S) s.t. S ∈ I   →   max w·x s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
Exact algorithm: P(I) = convexhull({ 1_S : S ∈ I })
Approx. algorithm: P(I) ⊇ convexhull({ 1_S : S ∈ I })
P(I) solvable: can do linear optimization over it

20 Math. Programming approach
max f(S) s.t. S ∈ I   →   max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
P(I) ⊇ convexhull({ 1_S : S ∈ I }) and solvable

21 Math. Programming approach
max f(S) s.t. S ∈ I   →   max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
What is the continuous extension F?
How to optimize with objective F?
How do we round?

22 Some results
[Calinescu-C-Pal-Vondrak'07] + [Vondrak'08] = [CCPV'09]
Theorem: There is a randomized (1-1/e) ≈ 0.632 approximation for maximizing a monotone f subject to any matroid constraint.
[C-Vondrak-Zenklusen'09]
Theorem: There is a (1-1/e-ε)-approximation for monotone f subject to a matroid and a constant number of packing/knapsack constraints.

23 What is special about 1-1/e?
Greedy gives a (1-1/e)-approximation for the problem max { f(S) : |S| ≤ k } when f is monotone [NWF'78]
Obtaining a (1-1/e+ε)-approximation requires exponentially many value queries to f [FNW'78]
Unless P = NP, there is no (1-1/e+ε)-approximation even for the special case of Max k-Coverage [Feige'98]
The new results give (1-1/e) for any matroid constraint, improving on the previous 1/2. Moreover, the algorithm is interesting and the techniques have been quite useful.
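
For reference, here is a minimal sketch (not from the slides; names and data are illustrative) of the greedy algorithm of [NWF'78] for max { f(S) : |S| ≤ k }: repeatedly add the element with the largest marginal gain until k elements are chosen.

def greedy_max(f, ground, k):
    # Greedy for max f(S) s.t. |S| <= k; (1-1/e)-approx. when f is monotone submodular.
    S = set()
    for _ in range(k):
        gains = {e: f(S | {e}) - f(S) for e in ground if e not in S}
        if not gains:
            break
        best = max(gains, key=gains.get)
        if gains[best] <= 0:  # nothing improves f; stop early
            break
        S.add(best)
    return S

# Example on a small Max k-Coverage instance (made-up data):
X = {1: {"a", "b", "c"}, 2: {"c", "d"}, 3: {"d", "e"}, 4: {"f"}}
f = lambda S: len(set().union(*(X[i] for i in S))) if S else 0
print(greedy_max(f, X.keys(), k=2))  # {1, 3}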

24 Submodular Welfare Problem
n items/goods (N) to be allocated to k players
Each player i has a submodular utility function f_i: f_i(A_i) is the utility to player i if A_i is the allocation to i
Goal: maximize the welfare of the allocation, Σ_i f_i(A_i)
Can be reduced to a single f and a (partition) matroid constraint, and hence a (1-1/e)-approximation

25 Some more results
[C-Vondrak-Zenklusen'11]
Extends the approach to non-monotone f
Rounding framework via contention resolution schemes
Several results follow from the framework, including the ability to handle intersections of different types of constraints

26 Math. Programming approach
max f(S) s.t. S ∈ I   →   max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
What is the continuous extension F?
How to optimize with objective F?
How do we round?

27 Multilinear extension of f
[CCPV'07], inspired by [Ageev-Sviridenko]
For f : 2^N → R+ define F : [0,1]^N → R+ as follows.
For x = (x_1, x_2, ..., x_n) ∈ [0,1]^N let R be a random set that includes i independently with probability x_i. Then
F(x) = E[f(R)] = Σ_{S ⊆ N} f(S) Π_{i ∈ S} x_i Π_{i ∈ N\S} (1 - x_i)
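
Since the sum has exponentially many terms, F(x) is estimated by sampling in practice (see slide 29). A minimal Monte-Carlo sketch (not from the talk; the sample count and data are arbitrary):

import random

def estimate_F(f, x, samples=2000):
    # Estimate F(x) = E[f(R)], where R includes each i independently with prob. x[i].
    total = 0.0
    for _ in range(samples):
        R = {i for i, p in x.items() if random.random() < p}
        total += f(R)
    return total / samples

# Example on a small coverage function (illustrative values).
X = {1: {"a", "b"}, 2: {"b", "c"}, 3: {"c", "d"}}
f = lambda S: len(set().union(*(X[i] for i in S))) if S else 0
print(estimate_F(f, {1: 0.5, 2: 0.5, 3: 0.5}))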

28 Why the multilinear extension?
Ideally we would maximize a concave extension
Could choose the ("standard") concave closure f+ of f
But evaluating f+(x) is NP-Hard!

29 Properties of F
F(x) can be evaluated (approximately) by random sampling
F is a smooth submodular function: ∂²F/∂x_i∂x_j ≤ 0 for all i, j
(recall f(A+j) – f(A) ≥ f(A+i+j) – f(A+i) for all A, i, j)
F is concave along any non-negative direction vector
∂F/∂x_i ≥ 0 for all i if f is monotone

30 Math. Programming approach
max f(S) s.t. S ∈ I   →   max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
What is the continuous extension F? ✔
How to optimize with objective F?
How do we round?

31 Maximizing F
max { F(x) | Σ_i x_i ≤ k, x_i ∈ [0,1] } is NP-Hard

32 Approximately maximizing F
[Vondrak'08] Theorem: For any monotone f, there is a (1-1/e)-approximation for the problem max { F(x) | x ∈ P } where P ⊆ [0,1]^N is any solvable polytope.
Algorithm: Continuous-Greedy
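
A heavily simplified sketch of the Continuous-Greedy idea (the discretization, sample counts, and the choice of a cardinality polytope are my own illustrative assumptions, not the actual algorithm's parameters): repeatedly estimate the gradient of F at the current point and take a small step toward the point of P that maximizes it, which for P = { x : Σ_i x_i ≤ k } is the indicator of the k largest estimated marginals.

import random

def continuous_greedy_cardinality(f, ground, k, steps=20, samples=300):
    # Sketch of continuous greedy over P = { x : sum_i x_i <= k, 0 <= x_i <= 1 }.
    ground = list(ground)
    x = {i: 0.0 for i in ground}
    delta = 1.0 / steps
    for _ in range(steps):
        # w_i ~ dF/dx_i = E[ f(R + i) - f(R - i) ] with R sampled according to x.
        w = {}
        for i in ground:
            gain = 0.0
            for _ in range(samples):
                R = {j for j, p in x.items() if random.random() < p}
                gain += f(R | {i}) - f(R - {i})
            w[i] = gain / samples
        # Best direction in P for a cardinality constraint: the top-k coordinates.
        for i in sorted(ground, key=lambda j: w[j], reverse=True)[:k]:
            x[i] = min(1.0, x[i] + delta)
    return x  # fractional point in P; still has to be rounded to an independent set

# Example on a small coverage function (made-up data):
X = {1: {"a", "b", "c"}, 2: {"c", "d"}, 3: {"d", "e"}, 4: {"f"}}
f = lambda S: len(set().union(*(X[i] for i in S))) if S else 0
print(continuous_greedy_cardinality(f, X.keys(), k=2))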

33 Approximately maximizing F
[C-Vondrak-Zenklusen'11] Theorem: For any non-negative f, there is a ¼-approximation for the problem max { F(x) | x ∈ P } where P ⊆ [0,1]^N is any down-closed solvable polytope.
Remark: a 0.325-approximation can be obtained
Algorithm: Local-Search variants

34 Local-Search based algorithm
Problem: max { F(x) | x ∈ P }, P is down-monotone
x* = a local optimum of F in P
Q = { z ∈ P | z ≤ 1 - x* }
y* = a local optimum of F in Q
Output the better of x* and y*

35 Local-Search based algorithm
Problem: max { F(x) | x ∈ P }, P is down-monotone
x* = a local optimum of F in P
Q = { z ∈ P | z ≤ 1 - x* }
y* = a local optimum of F in Q
Output the better of x* and y*
Theorem: The above algorithm gives a ¼-approximation.
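
A structural sketch of the two-phase scheme (my own simplification, not the procedure analyzed in [CVZ'11]): a naive coordinate-wise local search on a sampled estimate of F, run first over P and then over Q = { z ∈ P : z ≤ 1 - x* }, keeping the better of the two points. The cardinality polytope, step size, sample count, and stopping rule are arbitrary illustrative choices.

import random

def estimate_F(f, x, samples=400):
    return sum(f({i for i, p in x.items() if random.random() < p})
               for _ in range(samples)) / samples

def local_search(f, ground, in_P, upper, step=0.1, rounds=30):
    # Coordinate-ascent local search for (an estimate of) F over { x in P : x <= upper }.
    x = {i: 0.0 for i in ground}
    best = estimate_F(f, x)
    for _ in range(rounds):
        improved = False
        for i in ground:
            for d in (step, -step):
                y = dict(x)
                y[i] = min(max(y[i] + d, 0.0), upper[i])
                val = estimate_F(f, y)
                if in_P(y) and val > best + 0.05:  # tolerance for sampling noise
                    x, best, improved = y, val, True
        if not improved:
            break
    return x, best

def two_phase_local_search(f, ground, k):
    # Output the better of a local optimum in P and one in Q = { z in P : z <= 1 - x* }.
    in_P = lambda y: sum(y.values()) <= k + 1e-9
    x_star, vx = local_search(f, ground, in_P, {i: 1.0 for i in ground})
    y_star, vy = local_search(f, ground, in_P, {i: 1.0 - x_star[i] for i in ground})
    return x_star if vx >= vy else y_star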

36 Math. Programming approach
max f(S) s.t. S ∈ I   →   max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
What is the continuous extension F? ✔
How to optimize with objective F? ✔
How do we round?

37 Rounding
Rounding and the approximation depend on I and P(I)
Two results:
For the matroid polytope, a special rounding
A general approach via contention resolution schemes

38 Rounding in Matroids
Matroid M = (N, I)
Independence polytope: P(M) = convhull({ 1_S | S ∈ I }), given by the following system [Edmonds]:
Σ_{i ∈ S} x_i ≤ rank_M(S) for all S ⊆ N
x ∈ [0,1]^N

39 Rounding in Matroids
[Calinescu-C-Pal-Vondrak'07] Theorem: Given any point x in P(M), there is a randomized polynomial-time algorithm to round x to a vertex x* (hence an independent set of M) such that
E[x*] = x
F(x*) ≥ F(x)
[C-Vondrak-Zenklusen'09]: a different rounding with additional properties and applications

40 Rounding
max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
F(x*) = E[f(R)] where R is obtained by independently rounding each i with probability x*_i
But R is unlikely to be in I

41 Rounding
max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
F(x*) = E[f(R)] where R is obtained by independently rounding each i with probability x*_i
But R is unlikely to be in I
Idea: obtain R' ⊆ R s.t. R' ∈ I and E[f(R')] ≥ c·f(R)

42 A simple question?
(Figure: a graph whose edges carry fractional values x_e: 0.9, 1, 0.4, 0.6, 0.4, 1, 0.7, 0.3, 0.6, 0.1)
x is a convex combination of spanning trees
R: pick each e ∈ E independently with probability x_e
Question: what is the expected size of a maximal forest in R? (n minus the # of connected components)

43 A simple question?
x is a convex combination of spanning trees of G
R: pick each e ∈ E independently with probability x_e
Question: what is the expected size of a maximal forest in R? (n minus the # of connected components)
Answer: ≥ (1-1/e)(n-1)
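
This bound is easy to sanity-check numerically. In the sketch below (my own illustration, not from the talk), G is an n-cycle and x is the uniform convex combination of its n spanning trees, so x_e = (n-1)/n for every edge; union-find gives the size of a maximal forest of each sample.

import math, random

def maximal_forest_size(n, edges):
    # Size of a maximal forest = n - (# connected components), via union-find.
    parent = list(range(n))
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u
    size = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            size += 1
    return size

n, trials = 20, 5000
cycle = [(i, (i + 1) % n) for i in range(n)]
p = (n - 1) / n  # x_e for the uniform combination of the cycle's spanning trees
avg = sum(maximal_forest_size(n, [e for e in cycle if random.random() < p])
          for _ in range(trials)) / trials
print(avg, (1 - 1 / math.e) * (n - 1))  # empirical average vs. the (1-1/e)(n-1) bound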

44 Related question
x is a convex combination of spanning trees of G
R: pick each e ∈ E independently with probability x_e
Want a (random) forest R' ⊆ R s.t. for every edge e, Pr[e ∈ R' | e ∈ R] ≥ c

45 Related question
x is a convex combination of spanning trees of G
R: pick each e ∈ E independently with probability x_e
Want a (random) forest R' ⊆ R s.t. for every edge e, Pr[e ∈ R' | e ∈ R] ≥ c
⇒ R contains a forest of (expected) size Σ_e c·x_e = c(n-1)

46 Related question
x is a convex combination of spanning trees of G
R: pick each e ∈ E independently with probability x_e
Want a (random) forest R' ⊆ R s.t. for every edge e, Pr[e ∈ R' | e ∈ R] ≥ c
Theorem [CVZ'11]: c = (1-1/e) is achievable and optimal (true for any matroid)

47 Contention Resolution Schemes
I: an independence family on N
P(I): a relaxation for I, and x ∈ P(I)
R: a random set obtained by independent rounding of x
CR scheme for P(I): given x and R, outputs R' ⊆ R s.t.
1. R' ∈ I
2. for all i, Pr[i ∈ R' | i ∈ R] ≥ c
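
To illustrate the interface only (a toy example of my own, not one of the schemes from [CVZ'11], and the constant c it achieves is a separate analysis question): for the cardinality constraint I = { S : |S| ≤ k } with x in the relaxation { x : Σ_i x_i ≤ k }, one can keep R when it is already independent and otherwise keep a uniformly random k-subset of R.

import random

def independent_round(x):
    # R: include each i independently with probability x[i].
    return {i for i, p in x.items() if random.random() < p}

def cr_scheme_cardinality(x, R, k):
    # Toy CR scheme for I = { S : |S| <= k }: prune R back into I.
    if len(R) <= k:
        return set(R)
    return set(random.sample(sorted(R), k))  # a uniformly random k-subset of R

# Usage sketch: x in P(I) with sum_i x_i <= k, then round and resolve contention.
x = {i: 0.3 for i in range(10)}  # illustrative fractional point, k = 3
R = independent_round(x)
print(R, cr_scheme_cardinality(x, R, k=3))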

48 Rounding and CR schemes
max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
Theorem: A monotone CR scheme for P(I) can be used to round so that E[f(S*)] ≥ c·F(x*)
(proved via the FKG inequality)

49 Remarks [CVZ'11]
Several existing rounding schemes are CR schemes
CR schemes for different constraints can be combined for their intersection
CR schemes can be obtained through the correlation gap and LP duality

50 Math. Programming approach
max f(S) s.t. S ∈ I   →   max F(x) s.t. x ∈ P(I)   →   round x* ∈ P(I) to S* ∈ I
The problem is reduced to finding a good relaxation P(I) and a contention resolution scheme for P(I)

51 Concluding Remarks
Substantial progress on submodular function maximization problems in the last few years
New tools and connections, including a general framework via the multilinear relaxation
Increased awareness and more applications
Several open problems still remain

52 Thanks!

