**Local Computation Algorithms**

Ning Xie, CSAIL MIT. Based on joint works with Noga Alon, Ronitt Rubinfeld, Gil Tamir and Shai Vardi.

**Large data sets and large answers**

Combinatorial optimization/search: scheduling, coloring, SAT, LP, etc. Coding theory. Linear algebra. BUT, what if we only need a small portion of the answer?

**Example problem**

In general there are many different legal solutions. Maximal independent sets: given a sequence of vertices v1, v2, …, vk (assuming k << n), which of them are in an MIS? All answers must be consistent!

**Previous and related work**

Locally decodable codes (Katz-Trevisan). Online reconstruction (Ailon, Chazelle, Kale, Liu, Peres, Saks, Seshadhri). Distributed computing (local algorithms). All are related to local computation algorithms.

**Rest of the talk**

1. Model of local computation algorithms (LCA)
2. An LCA for maximal independent set
3. An LCA for hypergraph coloring

**Part 1: Local Computation Algorithms (LCAs)**

F: a computation problem; input x (|x| = n); the set of solutions is F(x) = {y1, y2, …, ys}. Want: a randomized algorithm A that implements oracle access to some yi.

**Local computation algorithm**

Algorithm A has random access to an input tape (RAM) and a random tape (RAM), plus a work tape. Queries i1, i2, …, iq arrive one at a time, and A answers each query ij with yij before seeing the next.
**Local computation algorithms (cont.)**

For any sequence of queries i1, i2, …, iq, A answers each yij using at most t(n) time and at most s(n) space. All queries are consistent: there exists some y ∈ F(x) that agrees with all answers. A is correct for all queries w.p. 1−δ(n). Such an A is a (t(n), s(n), δ(n)) local computation algorithm.

**Would like LCAs to have…**

Both t(n) and s(n) at most poly-logarithmic. Support for parallelism. Query-order obliviousness.

**A generic LCA**

Generate a random string r. On query i, do the following deterministically: read some bits of the input x and of the random string r, then compute yi.
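This pattern can be sketched in a few lines; `answer_query` here is a hypothetical problem-specific routine standing in for the "compute yi" step, not something from the talk:

```python
import random

def make_lca(x, seed, answer_query):
    """Sketch of the generic LCA pattern: draw the random string r once,
    then answer every query deterministically from (x, r).
    `answer_query` is a hypothetical problem-specific routine that reads
    only a few positions of x and r to compute y_i."""
    rng = random.Random(seed)
    r = [rng.random() for _ in range(len(x))]  # the shared random string

    def query(i):
        return answer_query(x, r, i)  # deterministic given x and r

    return query
```

Consistency across queries comes for free: every answer is a deterministic function of the same (x, r), so repeated or reordered queries cannot contradict each other.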

**Part 2: Maximal Independent Set**

Input: an undirected graph G(V, E). Output: a subset of the vertices that is a maximal independent set (MIS). Query: is v in the MIS?

**1st ingredient: Parnas-Ron**

A local distributed approximation algorithm yields a sublinear approximation algorithm. Query (and usually time) complexity: O(d^k), where d is the maximum degree of the graph and k is the maximum number of rounds of the distributed algorithm.

**2nd ingredient: Luby's distributed algorithm for MIS**

Repeat O(log n) times in parallel: each vertex selects itself w.p. 1/(2d); if v selects itself and no vertex in N(v) is selected, add v to the MIS and remove v and N(v) from the graph.
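A minimal global (non-local) simulation of this round structure, as a sketch for intuition only — the LCA itself never materializes the whole graph:

```python
import math
import random

def luby_mis(adj, d=None, rounds=None, seed=0):
    """Global simulation of the Luby variant on the slide: each round,
    every surviving vertex selects itself w.p. 1/(2d); a selected vertex
    none of whose neighbours is also selected joins the MIS, and it and
    its neighbours leave the graph. Returns the MIS built so far and the
    vertices still surviving after the given number of rounds."""
    rng = random.Random(seed)
    if d is None:
        d = max((len(nb) for nb in adj.values()), default=1) or 1
    if rounds is None:
        rounds = 8 * max(1, int(math.log(len(adj) + 2)))  # O(log n) rounds
    alive, mis = set(adj), set()
    for _ in range(rounds):
        if not alive:
            break
        selected = {v for v in alive if rng.random() < 1 / (2 * d)}
        # a selected vertex joins iff none of its neighbours is selected
        joiners = {v for v in selected if not (selected & set(adj[v]))}
        mis |= joiners
        alive -= joiners | {u for v in joiners for u in adj[v]}
    return mis, alive
```

By construction the returned set is independent, and every removed vertex is either in the MIS or adjacent to it; the surviving vertices are exactly the input to Phase 2 below.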

**Combining the first two ingredients**

Luby + Parnas-Ron gives n^O(1) running time, but we want poly-logarithmic. Instead, run Luby's algorithm for a small number of rounds [Marko-Ron]: this yields a large independent set, but not a maximal one.

**3rd ingredient: Beck's idea**

[Beck'91] An algorithmic approach to the Lovász Local Lemma: first run a randomized algorithm to get a "partially good" solution, then show that the remaining problems are small enough for brute-force search.

**Two-phase algorithm**

Phase 1: run Luby's algorithm for O(d·log d) rounds; the hope is that the remaining graph has only small components. Phase 2: run greedy (LFMIS) on the connected component in which the queried vertex lies.

(Figures: the original graph; the graph after Phase 1; running Phase 2 on the surviving components; the graph after Phase 2.)
**Why fast? Phase 1**

Apply the Parnas-Ron reduction to this local distributed algorithm. The number of rounds is O(d·log d) (instead of O(log n)), so the running time of Phase 1 is d^O(d·log d).

**Why fast? Phase 2**

Main Lemma: with probability at least 1−1/n, after running Luby's algorithm for O(d·log d) rounds, all connected components of surviving vertices have size O(poly(d)·log n). Proof: a "Beck-like" analysis. The running time of Phase 2 is therefore O(poly(d)·log n).

**4th ingredient: k-wise independence**

Making the space complexity small: what about consistency? We must remember all the random bits used in Luby's algorithm (one per vertex per round); a naïve implementation takes linear space. But our algorithm is local, and O(log n)-wise independence suffices for Beck's analysis (Alon-Babai-Itai): k-wise independent random variables can be generated from a seed of length O(k·log n).
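One standard way to get such a short seed is a random degree-(k−1) polynomial over a finite field, in the spirit of the Alon-Babai-Itai construction; this is a sketch, with the prime and the low-bit extraction as illustrative choices:

```python
import random

PRIME = 2_147_483_647  # a Mersenne prime, standing in for a field of size > n

def kwise_seed(k, seed=0):
    """The seed: k random coefficients of a degree-(k-1) polynomial over
    GF(PRIME) -- O(k log n) bits instead of one fresh bit per variable."""
    rng = random.Random(seed)
    return [rng.randrange(PRIME) for _ in range(k)]

def kwise_value(coeffs, i):
    """Evaluate the polynomial at point i (Horner's rule). Any k of the
    values p(0), p(1), ... are mutually independent and uniform over
    GF(PRIME); taking the low bit yields (nearly unbiased) k-wise
    independent bits."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * i + c) % PRIME
    return acc & 1
```

The LCA stores only the short seed and recomputes each vertex's random bit locally on demand, which is exactly what keeps the space poly-logarithmic.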

**Putting everything together…**

Theorem: For any degree-bounded graph, there is an (O(log n), O(log^2 n), 1/n) local computation algorithm for MIS.

**Part 3: Hypergraph coloring**

A hypergraph H = (V, E): V is a finite set of vertices, and E is a family of subsets of V (called edges). H is k-uniform if every edge has size k; every edge intersects at most d other edges. The size of the problem is N = # of edges. H is 2-colorable if we can color the vertices red and blue so that no edge is monochromatic.

(Figure: a 3-regular hypergraph.)

**1st ingredient: Alon's algorithm**

(Beck'91, Alon'91) Under certain conditions, there is a quasi-linear-time algorithm that finds a two-coloring of a hypergraph. The algorithm runs in 2 (or 3) phases, depending on the performance requirement.

**Overview of Alon's algorithm**

Phase 1 coloring: uniformly at random, color the vertices sequentially. An edge becomes dangerous if too many of its vertices are colored monochromatically; the remaining vertices in that edge are then left uncolored. An edge is said to have survived if it does not contain both colors at the end of Phase 1. Phase 2 coloring: brute-force search for a 2-coloring of each connected component of survived edges.
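A rough sequential sketch of Phase 1, with an illustrative danger threshold of k/2 monochromatic vertices (the exact threshold is a parameter of the analysis, not specified on the slide):

```python
import random

def phase1(edges, k, seed=0, threshold=None):
    """Sketch of Phase 1 of Alon's algorithm (names and the default danger
    threshold are illustrative assumptions): colour vertices uniformly at
    random in some order; once an edge has `threshold` vertices of one
    colour and none of the other, mark it dangerous and freeze its
    remaining vertices (they stay uncoloured for Phase 2). An edge
    'survives' if it never sees both colours."""
    rng = random.Random(seed)
    threshold = threshold if threshold is not None else k // 2
    colour, frozen = {}, set()
    for v in sorted({u for e in edges for u in e}):
        if v in frozen:
            continue  # frozen vertices are deferred to Phase 2
        colour[v] = rng.choice(("red", "blue"))
        for e in edges:
            if v in e:
                seen = [colour[u] for u in e if u in colour]
                if len(set(seen)) == 1 and len(seen) >= threshold:
                    frozen |= {u for u in e if u not in colour}  # dangerous
    survived = [e for e in edges
                if len({colour[u] for u in e if u in colour}) < 2]
    return colour, frozen, survived
```

The survived edges and their frozen vertices are exactly what Phase 2's brute-force search operates on, component by component.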

(Figures: Phase 1 coloring proceeds vertex by vertex; Phase 2 coloring handles the surviving components.)
44
**Overview of Alon’s algorithm**

Key Lemma: At the end of Phase 1, almost surely all connected components of survived edges are of size O(log N)

**A simple observation**

Alon's algorithm works for any ordering of the vertices, so we can simply take the order in which queries arrive!

**Are we done?**

Not yet: the space complexity can be linear, since we need to "remember" all previous answers, and the algorithm is not query-oblivious.

**New algorithm**

Run Alon's algorithm, but instead of using the query order to color vertices, use a random ranking order [Nguyen-Onak] and recursively color only the vertices that we have to.

**2nd ingredient: Nguyen-Onak**

Randomly assign a rank r(v) ∈ [0,1] to each vertex; each vertex v's color depends only on the colors of its neighbors with lower rank. In expectation, only O(e^d) recursive calls are needed, but we need a worst-case bound.
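The rank-based recursion is easiest to see on greedy MIS, where it is usually illustrated (the talk applies the same idea to coloring); a sketch:

```python
import random
from functools import lru_cache

def rank_oracle(adj, seed=0):
    """Nguyen-Onak-style local simulation, illustrated on greedy MIS:
    assign each vertex a random rank in [0,1]; v's answer depends only on
    the answers of its lower-ranked neighbours, so a query recurses only
    'downhill' in rank and therefore always terminates."""
    rng = random.Random(seed)
    rank = {v: rng.random() for v in adj}

    @lru_cache(maxsize=None)
    def in_mis(v):
        # v joins the greedy MIS iff no lower-ranked neighbour joined
        return all(not in_mis(u) for u in adj[v] if rank[u] < rank[v])

    return in_mis
```

Memoizing the answers (here via `lru_cache`) is what makes repeated queries cheap; the size of the recursion is exactly the query tree analyzed on the next slides.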

(Figures: the coloring dependency graph and the resulting query tree, whose nodes are labeled with random ranks such as 0.7, 0.4, 0.6; a query recurses only into lower-ranked neighbors.)
57
**Bounding the size of query tree**

Query tree has bounded degree H is k-uniform and each edge intersects at most d other edges max. degree D=kd Lemma: For any vertex v, with prob. at least 1-1/N2, the query tree rooted at v has size at most c(D)(log N)D+1 , where c(D) is some constant

**Proof of the lemma**

Partition the tree into D+1 levels based on the values of the random ranks, upper bound the sizes of the subtrees on each level, and apply results from the theory of branching processes (the Galton-Watson process).

**3rd ingredient: k-wise independent orderings**

What about the space complexity? Observation 1: the exact values of the random ranks do not matter, only their relative order does. Observation 2: our computations are local, so limited independence suffices. Definition: an ordering function [N] → N is k-wise independent if for any index subset of size k, all k! relative orderings of these k integers occur with equal probability.

**Reducing the space complexity**

Construct a k-wise independent ordering (seed length = polylog(N)). Replace truly random strings with k-wise independent random strings for the random colors of the vertices (k = O(log n), seed length = polylog(N)). The total randomness used is then at most poly-logarithmic.

**Putting everything together…**

Theorem: under certain conditions, there is a (polylog n, polylog n, 1/n)-local computation algorithm for hypergraph coloring.

**Conclusions**

We propose the notion of local computation algorithms (LCAs) and develop some techniques for designing LCAs. Open question: LCAs for more problems?

Thank You!
