Local Computation Algorithms
Ning Xie, CSAIL MIT
Based on joint works with Noga Alon, Ronitt Rubinfeld, Gil Tamir and Shai Vardi

Large data sets and large solutions
- Large answers arise in combinatorial optimization/search (scheduling, coloring, SAT, LP, etc.), coding theory, and linear algebra
- BUT, what if we only need a small portion of the answer?

Example problem
- Maximal independent sets: given a sequence of vertices v1, v2, …, vk (with k much smaller than n), answer for each whether it is in the maximal independent set

Previous and related work
- Locally decodable codes (Katz-Trevisan)
- Online reconstruction (Ailon, Chazelle, Kale, Liu, Peres, Saks, Seshadhri)
- Distributed computing (local algorithms)

Rest of the talk
1. Model of local computation algorithms (LCA)
2. An LCA for maximal independent set
3. An LCA for hypergraph coloring

Part 1: Local Computation Algorithms (LCAs)
- F: a computation problem
- input x (|x| = n)
- set of solutions is F(x) = {y1, y2, …, ys}
- Want: a randomized algorithm A that implements oracle access to some yi

Local computation algorithm
[Diagram, built up over several slides: Algorithm A has random access to an input tape and a random tape (both RAM) plus a work tape; queries i1, i2, …, iq arrive one at a time, and A answers each with yi1, yi2, …, yiq]

Local computation algorithms (cont.)
- For any sequence of queries i1, i2, …, iq, A can answer each yij with at most t(n) time and at most s(n) space
- All answers are consistent: there exists some y ∈ F(x) that agrees with all answers
- A is correct on all queries w.p. 1 − δ(n)
- Such an A is a (t(n), s(n), δ(n)) local computation algorithm

Would like LCAs to have …
- Both t(n) and s(n) at most poly-logarithmic
- Support for parallelism
- Query-order obliviousness

A generic LCA
- Generate a random string r
- On query i, do the following deterministically:
  - read some bits of input x and random string r
  - compute yi
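The recipe above can be sketched in Python. This is a minimal illustration, not code from the talk: `make_lca`, `compute_bit`, and the toy problem are invented names. The point is that fixing r once up front makes every answer a deterministic function of (x, r), which is what yields consistency across queries.

```python
import random

def make_lca(x, seed, compute_bit):
    """Fix the random string r once; afterwards every query is answered
    deterministically from (x, r), so answers are automatically consistent."""
    rng = random.Random(seed)
    r = [rng.randrange(2) for _ in range(len(x))]  # the shared random string

    def query(i):
        # Reads only (parts of) x and r; no state is carried between queries.
        return compute_bit(x, r, i)

    return query

# Toy problem: y_i = x_i XOR r_i (a "random shift" of the input).
x = [1, 0, 1, 1]
query = make_lca(x, seed=7, compute_bit=lambda x, r, i: x[i] ^ r[i])
answers = [query(i) for i in range(4)]
```

Because the randomness lives entirely in the seed, repeating a query (or interleaving queries in any order) always returns the same bit of the same solution.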

Part 2: Maximal Independent Set
- Input: an undirected graph G(V, E)
- Output: a subset of vertices which is a maximal independent set (MIS)
- Query: is v in the MIS?

1st ingredient: Parnas-Ron
- Local distributed approx. algorithm → sublinear-time approx. algorithm
- Query and (usually) time complexity: O(d^k)
  - d: max. degree of the graph
  - k: max. rounds of the distributed algorithm

2nd ingredient: Luby's distributed algorithm for MIS
- repeat O(log n) times in parallel:
  - each vertex selects itself w.p. 1/(2d)
  - if v selects itself and none in N(v) is selected:
    - add v to MIS
    - remove v and N(v) from the graph
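A global (non-local) implementation of the loop above, as a sketch; function and variable names are mine, and the selection probability 1/(2d) uses the maximum degree d as on the slide.

```python
import random

def luby_mis(adj, rounds, rng):
    """Luby's algorithm: each round, every live vertex selects itself
    w.p. 1/(2d); a selected vertex with no selected neighbor joins the
    MIS, and it and its neighbors are removed from the graph."""
    d = max((len(nb) for nb in adj), default=1) or 1
    alive = set(range(len(adj)))
    mis = set()
    for _ in range(rounds):
        selected = {v for v in alive if rng.random() < 1 / (2 * d)}
        for v in selected:
            # v joins only if none of its neighbors is also selected,
            # so two adjacent vertices can never join in the same round.
            if not any(u in selected for u in adj[v]):
                mis.add(v)
                alive.discard(v)
                alive -= set(adj[v])
    return mis, alive  # `alive` holds the still-undecided vertices

# 5-cycle: 0-1-2-3-4-0
adj = [[1, 4], [0, 2], [1, 3], [2, 4], [3, 0]]
mis, undecided = luby_mis(adj, rounds=60, rng=random.Random(1))
```

The returned set is always independent; it is maximal once `undecided` is empty, which happens with high probability after O(log n) rounds.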

Combining the first 2 ingredients
- Luby + Parnas-Ron → n^{O(1)} running time
- But we want poly-logarithmic …
- Run Luby's algorithm for a small number of rounds [Marko-Ron]
- → a large independent set, but not maximal …

3rd ingredient: Beck's idea
- [Beck '91] Algorithmic approach to the Lovász Local Lemma
- First run a randomized algorithm to get a "partially good" solution
- Show that the remaining problems are small enough for brute-force search

Two-phase algorithm
- Phase 1: run Luby's algorithm for O(d log d) rounds
  - Hope: the remaining graph has only small components
- Phase 2: run greedy (LFMIS) on the connected component in which vi lies
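A sketch of how the two phases combine into a query oracle. Everything here (the function names, deriving per-(vertex, round) coins from a shared seed, greedy by vertex id in Phase 2) is my own illustrative filling-in of the slide, not the paper's code; the key idea is that Phase 1 is simulated locally, so answering a query only touches a neighborhood of v.

```python
import random
from functools import lru_cache

def make_mis_lca(adj, seed, rounds):
    d = max((len(nb) for nb in adj), default=1) or 1

    def coin(v, t):
        # Shared randomness: v's coin in round t, derived only from the
        # seed, so every query sees the same coins (consistency).
        return random.Random(f"{seed}:{v}:{t}").random() < 1 / (2 * d)

    @lru_cache(maxsize=None)
    def state(v, t):
        """Phase 1, simulated locally: 'in', 'out', or 'alive' after t rounds."""
        if t == 0:
            return "alive"
        s = state(v, t - 1)
        if s != "alive":
            return s
        live = [u for u in adj[v] if state(u, t - 1) == "alive"]
        if coin(v, t) and not any(coin(u, t) for u in live):
            return "in"
        for u in live:  # did some neighbor join the MIS this round?
            if coin(u, t) and not any(
                    coin(w, t) for w in adj[u] if state(w, t - 1) == "alive"):
                return "out"
        return "alive"

    def query(v):
        s = state(v, rounds)
        if s != "alive":
            return s == "in"
        # Phase 2: explore v's surviving component, then greedy by vertex id.
        comp, stack, seen = [], [v], {v}
        while stack:
            u = stack.pop()
            comp.append(u)
            for w in adj[u]:
                if w not in seen and state(w, rounds) == "alive":
                    seen.add(w)
                    stack.append(w)
        chosen = set()
        for u in sorted(comp):
            if not any(w in chosen for w in adj[u]):
                chosen.add(u)
        return v in chosen

    return query

# Path 0-1-2-3-4
adj = [[1], [0, 2], [1, 3], [2, 4], [3]]
query = make_mis_lca(adj, seed=3, rounds=4)
mis = {v for v in range(5) if query(v)}
```

Whatever the coins turn out to be, the answers always describe one valid MIS: Phase 1 never puts two neighbors in, and Phase 2's greedy pass completes each surviving component.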

[Figure slides: the original graph; the graph after Phase 1; running Phase 2 on a surviving component; the graph after Phase 2]

Why fast? Phase 1
- Apply the Parnas-Ron reduction to this local distributed algorithm
- Number of rounds is O(d·log d) (instead of O(log n))
- Running time of Phase 1 is d^{O(d·log d)}

Why fast? Phase 2
- Main Lemma: with prob. at least 1 − 1/n, after running Luby's algorithm for O(d·log d) rounds, all connected components of surviving vertices have size O(poly(d)·log n)
- Proof: "Beck-like" analysis
- Running time of Phase 2 is O(poly(d)·log n)

4th ingredient: k-wise independence (making the space complexity small)
- What about consistency? We must remember all the random bits used in Luby's algorithm (one per vertex per round)
- Naïve implementation: linear space
- But our algorithm is local …
- O(log n)-wise independence suffices for Beck's analysis
- (Alon-Babai-Itai) k-wise independent random variables can be generated from a seed of length O(k·log n)
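The short-seed construction can be sketched as follows. The names, the particular prime, and the "keep the low bit" step are my simplification (the low bit of a uniform element of GF(p) is only nearly unbiased); the point it illustrates is the one on the slide: the entire "random tape" is just k field elements, i.e. O(k·log n) bits.

```python
import random

def make_kwise_bits(k, p, rng):
    """A random polynomial of degree < k over GF(p): its values
    f(0), f(1), ... are k-wise independent and uniform in GF(p).
    We keep one bit per value; the seed is just the k coefficients."""
    coeffs = [rng.randrange(p) for _ in range(k)]

    def bit(i):
        acc = 0
        for a in reversed(coeffs):  # Horner evaluation of f(i) mod p
            acc = (acc * i + a) % p
        return acc & 1

    return bit

p = 1_000_003                     # a prime larger than the number of variables
bit = make_kwise_bits(k=8, p=p, rng=random.Random(5))
sample = [bit(i) for i in range(10)]
```

Any single bit can be recomputed on demand from the short seed, so the LCA never has to store the full random tape.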

Putting everything together …
Theorem: For any degree-bounded graph, there is an (O(log n), O(log^2 n), 1/n) local computation algorithm for MIS.

Part 3: Hypergraph coloring
- Hypergraph H = (V, E)
  - V: finite set of vertices
  - E: a family of subsets of V (called edges)
- H is k-uniform: every edge has size k
- Every edge intersects at most d other edges
- Size of the problem: N = number of edges
- H is 2-colorable if we can color all the vertices red and blue so that no edge is monochromatic

[Figure slide: a 3-regular hypergraph]

1st ingredient: Alon's algorithm
- (Beck '91, Alon '91) Under certain conditions, there is a quasi-linear-time algorithm that finds a two-coloring of a hypergraph
- The algorithm runs in 2 (or 3) phases, depending on the performance requirements

Overview of Alon's algorithm
- Phase 1 coloring:
  - uniformly at random, color the vertices sequentially
  - an edge becomes dangerous if too many of its vertices are colored monochromatically; save all the remaining uncolored vertices in that edge
  - an edge is called surviving if it does not have both colors at the end of Phase 1
- Phase 2 coloring:
  - brute-force search for a 2-coloring of each connected component of surviving edges
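Phase 1 can be sketched in a few lines. The danger `threshold` and all names are my own placeholders; the real algorithm's threshold depends on k and d, but the mechanics are as on the slide: color sequentially, freeze the rest of any edge that drifts toward monochromatic, and hand the surviving edges to Phase 2.

```python
import random

def phase1(edges, threshold, rng):
    """Sequentially color vertices at random; once an edge has `threshold`
    vertices of one color and none of the other, it becomes dangerous and
    its still-uncolored vertices are frozen (saved for Phase 2)."""
    color, frozen = {}, set()
    for v in sorted({u for e in edges for u in e}):
        if v in frozen:
            continue
        color[v] = rng.randrange(2)          # 0 = red, 1 = blue
        for e in edges:
            if v not in e:
                continue
            reds = sum(1 for u in e if color.get(u) == 0)
            blues = sum(1 for u in e if color.get(u) == 1)
            if (reds >= threshold and blues == 0) or \
               (blues >= threshold and reds == 0):
                frozen |= {u for u in e if u not in color}
    surviving = [e for e in edges if not {0, 1} <= {color.get(u) for u in e}]
    return color, frozen, surviving

edges = [frozenset(e) for e in ([0, 1, 2], [2, 3, 4], [4, 5, 6])]
color, frozen, surviving = phase1(edges, threshold=2, rng=random.Random(2))
```

Every edge that is not surviving already has both colors, so Phase 2 only ever needs to fix up the surviving components.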

[Figure slides: Phase 1 coloring, step by step]

[Figure slides: Phase 2 coloring, step by step]
Overview of Alon's algorithm (cont.)
- Key Lemma: at the end of Phase 1, almost surely all connected components of surviving edges have size O(log N)

A simple observation
- Alon's algorithm works for any ordering of the vertices
- So we can simply take the order in which the queries arrive!

Are we done?
- Space complexity can be linear, as we need to "remember" all previous answers
- Not query-order oblivious

New algorithm
- Run Alon's algorithm, but instead of using the query order to color vertices:
  - use a random ranking order [Nguyen-Onak]
  - recursively color only the vertices that we have to

2nd ingredient: Nguyen-Onak
- Randomly assign a rank r(v) ∈ [0,1] to each vertex
- v's color depends only on the colors of neighbors with lower rank
- In expectation, only O(e^d) recursive calls are needed
- But we need a worst-case bound
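The ranking trick is easiest to see for greedy MIS, Nguyen-Onak's original setting, so this sketch applies it there rather than to the coloring algorithm; the names are mine. Each query recurses only into lower-rank neighbors, which is exactly the "query tree" the next slides bound.

```python
import random
from functools import lru_cache

def make_rank_oracle(adj, seed):
    """Each vertex gets a random rank r(v); v is in the greedy MIS iff
    no lower-rank neighbor is.  Answering a query explores only the
    tree of lower-rank neighbors (the query tree)."""
    rank = {v: random.Random(f"{seed}:{v}").random() for v in range(len(adj))}

    @lru_cache(maxsize=None)
    def in_mis(v):
        # Recurse only "downhill" in rank; ties have probability 0.
        return not any(in_mis(u) for u in adj[v] if rank[u] < rank[v])

    return in_mis

# 6-vertex path 0-1-2-3-4-5
adj = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
in_mis = make_rank_oracle(adj, seed=11)
mis = {v for v in range(6) if in_mis(v)}
```

Since the ranks are fixed by the seed, every query is answered against the same underlying greedy solution, independent of query order.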

[Figure slides: the coloring dependency]

[Figure slides: the query tree, built up step by step]
Bounding the size of the query tree
- The query tree has bounded degree: H is k-uniform and each edge intersects at most d other edges → max. degree D = kd
- Lemma: for any vertex v, with prob. at least 1 − 1/N^2, the query tree rooted at v has size at most c(D)·(log N)^{D+1}, where c(D) is some constant

Proof of the lemma
- Partition the tree into D+1 levels based on the values of the random ranks
- Upper-bound the sizes of the trees at each level
- Apply results from the theory of branching processes (Galton-Watson process)

3rd ingredient: k-wise independent orderings (what about space complexity?)
- Obs. 1: the exact values of the random ranks do not matter, only their relative order does
- Obs. 2: our computations are local, so limited independence suffices
- Definition: an ordering function [N] → [N] is k-wise independent if, for any index subset of size k, all k! relative orderings of these k integers occur with equal probability

Reducing the space complexity
- Construction of a k-wise independent ordering (seed length = polylog(N))
- Replace truly random strings with k-wise independent random strings for the random colors of the vertices (k = O(log N), seed length = polylog(N))
- → total randomness is at most poly-logarithmic

Putting everything together …
Theorem: under certain conditions, there is a (polylog n, polylog n, 1/n)-local computation algorithm for hypergraph coloring.

Conclusions
- Propose the notion of local computation algorithms (LCAs)
- Develop some techniques for designing LCAs
- Open question: LCAs for more problems?

Thank You!
