1 Near-Linear Approximation Algorithms for Geometric Hitting Sets. Pankaj Agarwal (Duke University), Esther Ezra (Duke University), Micha Sharir (Tel-Aviv University).

2 Range Spaces. A range space (X, R): X – the ground set; R – the ranges, subsets of X, so |R| ≤ 2^|X|. Abstract form: hypergraphs, with X the vertices and R the hyperedges.

3 Geometric Range Spaces. Specification: X ⊆ ℝ^d, R = a set of simply-shaped regions in ℝ^d. Examples: X – points on the real line, R – intervals; X – points in the plane, R – halfplanes; X – points in the plane, R – disks. X finite: discrete model; X infinite: continuous model.

4 The hitting-set problem. A hitting set for (X, R) is a subset H ⊆ X such that Q ∩ H ≠ ∅ for every Q ∈ R. Goal: find a smallest hitting set. Useful applications: art galleries, sensor networks, and more.

5 Hardness of hitting sets. Finding a hitting set of smallest size is NP-hard, even for geometric range spaces! Use an approximation algorithm instead. Abstract range spaces: the greedy algorithm [Chvátal 79]. Geometric range spaces: the iterative reweighted scheme [Brönnimann-Goodrich 95], [Clarkson 93].

6 The greedy algorithm. κ = size of the smallest hitting set (the optimum); m = |X|, n = |R|. The algorithm chooses a single point in each iteration: the point that hits the largest number of not-yet-hit ranges. Approximation factor: O(log m). Number of iterations: O(κ log m). Running time per iteration: O(nm) (naïve); in some cases it can be improved to O*(n + m). Overall: O*(nmκ). This is the best approximation factor known to be achievable in polynomial time! Geometric range spaces: an improved approximation factor can be obtained.
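
A minimal sketch of this greedy rule for an abstract finite range space, with the naïve O(nm) work per iteration mentioned above; the explicit representation of ranges as sets of point ids and the function name are illustrative assumptions, not part of the talk.

```python
def greedy_hitting_set(points, ranges):
    """points: list of hashable point ids; ranges: list of sets of point ids."""
    remaining = [set(r) for r in ranges]          # ranges not yet hit
    hitting_set = []
    while remaining:
        # pick the point contained in the largest number of not-yet-hit ranges
        best = max(points, key=lambda p: sum(p in r for r in remaining))
        if not any(best in r for r in remaining):
            break                                 # some range contains no point of X
        hitting_set.append(best)
        remaining = [r for r in remaining if best not in r]
    return hitting_set

# example: greedy_hitting_set(range(5), [{0, 1}, {1, 2}, {3}]) returns [1, 3]
```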

7 Iterative reweighted scheme [Brönnimann-Goodrich 95], [Clarkson 93]. κ = size of the smallest hitting set; m = |X|, n = |R|. Approximation factor: O(log κ) – improved! – and sometimes even smaller. Number of iterations: O(κ log(m/κ)). Running time per iteration: O*(nκ + m) (naïve); in some cases it can be improved to O*(n + m). Overall: O*(mκ + nκ²). Goal: improve the running time. A (near-)linear-time algorithm?

8 Our result. Planar regions: obtain an O*(log n) approximation in near-linear time when the union complexity of R is near-linear; applies in both the discrete and the continuous model. Specifically: if the union complexity (the complexity of the union boundary) of any subset R' ⊆ R with |R'| = r is O(r·φ(r)), where φ is a slowly growing function, then we obtain a hitting set for (X, R) of size O(κ·φ(κ)·log n) in randomized expected time O*(m + n). For the continuous model the running time is O*(n).

9 Our result: axis-parallel boxes in ℝ^d (their union complexity is quadratic in the worst case). Discrete model: approximation factor O(log n), running time O*(m + n). Continuous model: approximation factor O(log^{d-1} κ), running time O(n log n). A fast implementation of the iterative reweighting scheme (discrete model): approximation factor O(log κ) in any dimension d, and O(loglog κ) for d = 2, 3 [Aronov et al. 2009]; running time O*(m + n + κ^{d+1}).

10 Improved approximation factors (achieved in near-linear time). d = 2: disks (pseudo-disks) – O(log n); α-fat wedges – O(log n); α-fat triangles – O(log n loglog n); locally γ-fat objects – O(polylog n). d ≥ 1: axis-parallel boxes (discrete) – O(log n); axis-parallel boxes (continuous) – O(log^{d-1} κ).

11 Planar regions with near-linear union complexity

12 Arrangement and depth. R = {Q_1, …, Q_n} is a set of planar regions. The arrangement A(R) of R: the subdivision of the plane induced by R; its complexity (# vertices/edges/2D cells) is O(n²). Vertical decomposition of A(R): decomposition of the cells of A(R) into pseudo-trapezoidal cells. The depth Δ(p, R) of a point p ∈ ℝ² in A(R) = # regions of R that contain p in their interior. (Figure: a point p with Δ(p, R) = 3.)

13 (1/r)-cuttings for A(R). r = the parameter of the cutting, 1 ≤ r ≤ n. A (1/r)-cutting for A(R) is a set of pairwise-disjoint pseudo-trapezoidal cells such that: 1. they cover A(R); 2. each cell is crossed by ≤ n/r region boundaries of R. Simple construction [Clarkson-Shor 89]: choose a random sample K ⊆ R of O(r log r) regions, form the planar arrangement A(K), and construct its vertical decomposition. # cells: O(r² log² r). (Notation: # cells = f(r).)

14 Properties of (1/r)-cuttings. Theorem [Matoušek 92], [Agarwal et al. 2000]: put Δ = max_{p∈ℝ²} Δ(p, R). There exists a (1/r)-cutting with # cells O(q·r·φ(r/q)), where q = Δ·r/n + 1. In particular, for r = min{c·n/Δ, n} (with c > 0 a sufficiently large constant), the # cells of the cutting is O(r·φ(r)) = O((n/Δ)·φ(n/Δ)). When Δ ≤ c we have r = n, and then # cells: O(n·φ(n)).

15 The algorithm for the continuous model. While R ≠ ∅: Δ = max_{p∈ℝ²} Δ(p, R); choose r = min{c·n/Δ, |R|}. Construct a (1/r)-cutting for A(R). For each cell τ of the cutting, choose an arbitrary point p_τ ∈ τ and eliminate all regions in R stabbed by p_τ. R' = the set of remaining regions. Claim: max_{p∈ℝ²} Δ(p, R') ≤ Δ/2. Set R ← R' and repeat.
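
To make the control flow concrete, here is a small, self-contained 1-D stand-in for this loop, assuming open intervals on the line (each with nonempty interior) in place of planar regions: the cells induced by a random sample of O(r log r) ranges play the role of the simple cutting construction of slide 13, and the depth of a point is the number of intervals containing it. All names and constants are illustrative; none of the planar machinery (vertical decompositions, the refined cutting bound of slide 14) appears here.

```python
import math
import random

def max_depth(intervals):
    # sweep the endpoints; at equal coordinates closings (-1) sort before
    # openings (+1), so the count matches containment in the open interior
    events = sorted([(lo, +1) for lo, hi in intervals] +
                    [(hi, -1) for lo, hi in intervals])
    depth = best = 0
    for _, delta in events:
        depth += delta
        best = max(best, depth)
    return best

def cutting_cells(intervals, r, c=4):
    # sample O(r log r) intervals; their endpoints cut the line into cells
    k = min(len(intervals), int(c * r * math.log(r + 2)) + 1)
    sample = random.sample(intervals, k)
    pts = sorted({x for iv in sample for x in iv})
    bounds = [-math.inf] + pts + [math.inf]
    return list(zip(bounds[:-1], bounds[1:]))

def hitting_set_1d(intervals, c=4):
    # assumes lo < hi for every interval
    R = [tuple(iv) for iv in intervals]
    n = len(R)
    hits = []
    while R:                               # maximum depth (roughly) halves per round
        d = max_depth(R)
        r = max(1, min(c * n // d, len(R)))
        for a, b in cutting_cells(R, r, c):
            if not R:
                break
            # pick a point inside the cell and discard every interval it stabs
            if math.isfinite(a) and math.isfinite(b):
                p = (a + b) / 2.0
            elif math.isfinite(b):
                p = b - 1.0
            else:
                p = a + 1.0
            hits.append(p)
            R = [(lo, hi) for lo, hi in R if not (lo < p < hi)]
    return hits

# example:
# random.seed(1)
# ivals = [(x, x + random.uniform(0.05, 0.5))
#          for x in (random.random() for _ in range(2000))]
# print(len(hitting_set_1d(ivals)))
```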

16 The maximum depth in A(R'). A (remaining) region Q ∈ R' cannot fully contain a cell τ of the cutting (a point was chosen inside every cell, so a region containing a whole cell would have been stabbed and eliminated). Hence, if Q meets τ, the boundary of Q must cross τ. Therefore max_{p∈ℝ²} Δ(p, R') ≤ max # region boundaries crossing a cell τ ≤ n/r = Δ/c ≤ Δ/2. The algorithm terminates after log Δ ≤ log n steps!

17 The approximation factor. In each step, the number of points added to the hitting set is bounded by the number of cells: O(r·φ(r)) = O((n/Δ)·φ(n/Δ)). The overall size is therefore O((n/Δ)·φ(n/Δ)·log n). Observation: n/Δ ≤ κ (the optimum), since each point of the optimum can stab at most Δ regions; this property holds in each step of the algorithm. Hence the hitting-set size is O(κ·φ(κ)·log n).
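
In symbols, with κ the optimum, Δ the maximum depth, and assuming (as is standard for union-complexity bounds) that x·φ(x) is nondecreasing: each point of the optimum stabs at most Δ regions, so

$$n \le \kappa\,\Delta \;\Longrightarrow\; \frac{n}{\Delta} \le \kappa \;\Longrightarrow\; O\!\left(\frac{n}{\Delta}\,\varphi\!\left(\frac{n}{\Delta}\right)\log n\right) = O\big(\kappa\,\varphi(\kappa)\log n\big).$$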

18 The discrete model. The points of the hitting set must now be chosen from the given set X. Apply a more intricate variant of the algorithm: Δ = max_{p∈X} Δ(p, R); choose r = min{c·n/Δ, n}. Use a (1/r)-cutting that covers only the portions of A(R) at depth ≤ Δ (a shallow cutting). Choose a point of X in each non-empty cell of the cutting. In each step, the cutting size is O((n/Δ)·φ(n/Δ)) = O(κ·φ(κ)). Overall size: O(κ·φ(κ)·log n).

19 Open problems. Improve the approximation factor (in near-linear time): ideally O(φ(κ)), but anything better than O(φ(κ)·log n) is already 'exciting'. Dynamic hitting sets: maintain a hitting set under insertions/deletions of regions.

20 Thank you very much!

21 Implementation. Two major tasks: (i) construct the shallow cutting; (ii) efficiently report, for each non-empty cell τ of the cutting, all regions stabbed by the chosen point p_τ ∈ τ ∩ X. Use a variant of the randomized incremental construction of [Agarwal et al.] to construct A_{≤l}(R): randomly permute the regions in R and run the incremental algorithm for the first r steps.

22 Randomized incremental construction. R_i = the first i regions in the random permutation. In each step i maintain: the cells τ in the decomposition of A(R_i) that meet A_{≤l}(R); the conflict lists, storing for each cell τ the set R_τ of regions whose boundary crosses τ, and for each region the set of cells it crosses; and, for each cell τ, the level (with respect to A(R)) of one arbitrarily chosen interior point p_τ.

23 Inserting a new region (a single step of the algorithm). Q_i = the region inserted at the i-th step. Use the conflict lists to: find all cells τ in the decomposition of A(R_i) that Q_i meets; split these cells, re-decompose them, update the conflict lists, and update the level information of the new cells τ'. Then test whether each new cell τ' meets A_{≤l}(R). Theorem: the running time is O((n log² r + r·l)·φ(n/l)). For our choice of l and r this is O(n·φ(κ)·log² κ), i.e., near-linear in n.

24 Axis-parallel rectangles: the discrete model. The union complexity of axis-parallel rectangles is quadratic in the worst case. Observation: the complexity of the union of axis-parallel rectangles that all meet a common vertical line is only linear. The previous algorithm can therefore be applied in this case.

25 The plane decomposition. Project all rectangles and points onto the x-axis, obtaining a set I of n intervals. Compute, in O(n log n) time, an optimal hitting set Q for I. The set L of vertical lines x = q, q ∈ Q, intersects all rectangles of R, and |L| = |Q| ≤ κ.
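
The 1-D step can be done exactly with the classical greedy sweep (sort the intervals by right endpoint and stab whenever the current interval is not yet hit); a short sketch, assuming the intervals are given as (lo, hi) pairs:

```python
def stab_intervals(intervals):
    """Return an optimal set Q of stabbing coordinates for closed intervals (lo, hi)."""
    Q = []
    last = float("-inf")
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if lo > last:        # not hit by the most recently chosen point
            last = hi
            Q.append(hi)
    return Q
```

The output is optimal: the intervals that trigger a new point are pairwise disjoint, so any hitting set needs at least |Q| points.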

26 The lines of L partition the plane into atomic slabs. Construct a balanced binary tree over these slabs. A rectangle ρ is stored at the highest node v of the tree for which the line l_v meets ρ; thus ρ is associated with a unique node v. R_v = the set of rectangles stored at v.
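
A sketch of this assignment as an interval-tree-style walk; the talk gives no code, so the class layout and names are illustrative. The tree is a balanced binary search tree over the sorted stabbing coordinates Q; each node owns the vertical line x = node.x, and a rectangle is stored at the first node on its search path whose line meets the rectangle's x-projection, which is exactly the highest such node in the tree.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Rect = Tuple[float, float, float, float]   # (x1, x2, y1, y2) with x1 <= x2

@dataclass
class Node:
    x: float                                          # the line l_v: x = self.x
    rects: List[Rect] = field(default_factory=list)   # R_v
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def build(sorted_coords: List[float]) -> Optional[Node]:
    # balanced BST over the (already sorted) stabbing coordinates
    if not sorted_coords:
        return None
    mid = len(sorted_coords) // 2
    node = Node(sorted_coords[mid])
    node.left = build(sorted_coords[:mid])
    node.right = build(sorted_coords[mid + 1:])
    return node

def store(root: Node, rect: Rect) -> None:
    x1, x2, _, _ = rect
    node = root
    while node is not None:
        if x1 <= node.x <= x2:             # l_v meets the rectangle's x-projection
            node.rects.append(rect)        # the highest such node on the path
            return
        node = node.left if x2 < node.x else node.right
    # unreachable when every rectangle is met by some line of L,
    # as guaranteed by the previous slide
```

Calling build(sorted(Q)) once and then store(root, rect) for every rectangle realizes the sets R_v.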

27 The algorithm. The sets R_v, R_u are pairwise disjoint for v ≠ u, and in each level i of the tree the union of the rectangles stored at that level has only linear complexity! Apply the previous algorithm at each level i. Hitting-set size: O(κ log n) for a single level. Overall size: O(κ·log κ·log n). Running time: O((m + n)·log² κ·log n).

28 The continuous model: much simpler. Construct the tree decomposition as above. The points of the hitting set are chosen on the lines l_v: apply the 1-dimensional (exact) algorithm to the projection of R_v onto l_v to find a hitting set Q_v. For each pair of nodes v ≠ u at the same level i, the hitting sets Q_v, Q_u are disjoint, so Σ_{v at level i} |Q_v| ≤ κ.

29 The continuous model. Overall size: O(κ log κ). Running time: O(n log n). The algorithm extends to any dimension d by induction on d: project all boxes onto the x_d-axis and apply the 1-dimensional (exact) algorithm there; this yields a tree decomposition by (d-1)-dimensional hyperplanes orthogonal to the x_d-axis; solve the problem recursively on each hyperplane. At each level of the tree this gives a hitting set of size O(κ·log^{d-2} κ), for an overall size of O(κ·log^{d-1} κ).
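
Reading the size bound as a recurrence (a rephrasing of the slide, with S_d(κ) the size produced in dimension d): the 1-dimensional algorithm is exact, the tree has O(log κ) levels, and the subproblems at a single level have disjoint optima, so

$$S_1(\kappa) = \kappa, \qquad S_d(\kappa) \le O(\log\kappa)\cdot S_{d-1}(\kappa) \;\Longrightarrow\; S_d(\kappa) = O\big(\kappa\,\log^{d-1}\kappa\big).$$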

30 Thank you

31 Sensor networking application. Input: C – a set of clients c; A – the locations of antennas a, each with a unit sensing radius. Goal: find a minimum-size set of antennas that serves all clients. Hitting-set instance: X – the antenna locations; R – the unit disks D(c, 1) centered at the clients. An antenna a covers a client c iff a lies inside D(c, 1).

32 Approximation for geometric hitting sets. Geometric range spaces: an improved approximation factor can be achieved! Approximation factor: O(1 + log κ), and sometimes even smaller: points and disks or pseudo-disks in 2D – O(1); points and halfspaces in 2D and 3D – O(1); points and axis-parallel boxes in 2D and 3D – O(loglog κ). This is achieved via ε-nets.

33 ε-nets for range spaces. Given a range space (X, R) with X finite, |X| = n, and a parameter 0 < ε < 1, an ε-net for (X, R) is a subset N ⊆ X that hits every range Q ∈ R with |Q ∩ X| ≥ εn; that is, N is a hitting set for all the ``heavy'' ranges. Example: points and intervals on the real line: |N| = 1/ε. The bound does not depend on n.
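
A sketch of the intervals example: taking every ⌈εn⌉-th point in sorted order yields at most 1/ε points, and any interval containing at least εn of the input points spans ⌈εn⌉ consecutive sorted positions and therefore contains one of the chosen points. The function name is illustrative.

```python
import math

def interval_eps_net(points, eps):
    pts = sorted(points)
    if not pts:
        return []
    step = math.ceil(eps * len(pts))     # gaps between chosen points hold < eps*n points
    return pts[step - 1::step]
```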

34 Approximation for geometric hitting sets: the Brönnimann-Goodrich technique / LP relaxation. If (X, R) admits an ε-net of size f(1/ε), then there is a polynomial-time approximation algorithm that reports a hitting set of size O(f(κ)). Idea: assign weights to the points of X so that each range Q ∈ R becomes heavy; construct an ε-net for the weighted range space; each range is then hit by the ε-net. Small ε-nets imply small approximation factors!
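
A minimal sketch of this reweighting idea for an abstract finite range space; the sampling-based weighted net finder, its constants, and the outer doubling search for κ are standard choices assumed here rather than details given in the talk, and every range is assumed to contain at least one point of X.

```python
import math
import random

def weighted_eps_net(points, weights, eps):
    # a weighted random sample of size O((1/eps) log(1/eps)) is an eps-net
    # with constant probability (see the next slide)
    k = math.ceil((8.0 / eps) * math.log(2.0 / eps))
    return set(random.choices(points, weights=weights, k=k))

def bg_round(points, ranges, kappa_guess):
    """points: list of hashable ids; ranges: list of sets of point ids.
    Returns a hitting set, or None if kappa_guess appears too small."""
    eps = 1.0 / (2.0 * kappa_guess)
    w = {p: 1.0 for p in points}
    max_doublings = math.ceil(4 * kappa_guess *
                              math.log(len(points) / kappa_guess + 2)) + 1
    doublings = 0
    while doublings <= max_doublings:
        net = weighted_eps_net(points, [w[p] for p in points], eps)
        missed = next((r for r in ranges if not (r & net)), None)   # verifier
        if missed is None:
            return net                              # the net hits every range
        if sum(w[p] for p in missed) < eps * sum(w.values()):
            for p in missed:                        # a light range was missed:
                w[p] *= 2.0                         # double its points' weights
            doublings += 1
        # otherwise the sample failed to be an eps-net; resample
    return None

def bg_hitting_set(points, ranges):
    guess = 1
    while True:
        net = bg_round(points, ranges, guess)
        if net is not None:
            return net
        guess *= 2       # the guess for kappa was too small; double and retry
```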

35 ε-net theorem. What is the size of ε-nets in geometric range spaces? Theorem [Haussler-Welzl 87]: if the ranges are simply-shaped regions, then for any ε > 0 there exists an ε-net N ⊆ X of size O((1/ε)·log(1/ε)); the bound does not depend on n. Moreover, a random sample of that size is an ε-net with constant probability. Theorem [Komlós, Pach, Woeginger 92]: the bound is tight! (The tight construction is artificial and non-geometric.) No lower bound better than Ω(1/ε) is known in geometry.

36 The Brönnimann-Goodrich technique. Number of iterations: O(κ log(m/κ)). Performance of the algorithm: (weighted) net-finder – O(m); verifier – O(n·|N| + m) ≤ O*(nκ + m) (naïve), where |N| = O(κ log κ). Overall: O*(mκ + nκ²). Improvement in some cases (axis-parallel rectangles and planar regions with near-linear union complexity): verifier – O*(m + n); overall – O*((n + m)·κ).

37 Arrangement and levels. R = {Q_1, …, Q_n} is a set of planar regions and A(R) is its arrangement. The depth Δ(p, R) of a point p ∈ ℝ² in A(R) = # regions of R that contain p in their interior. The l-level of A(R) = the set of all points with depth l; in particular, the 0-level is the closure of the complement of the union of R. A_{≤l}(R) = the set of points at level ≤ l in A(R). (Figure: a point p with Δ(p, R) = 3.)

38 Complexity of A_{≤l}(R). Using Clarkson & Shor, the complexity of A_{≤l}(R) is O(l²·f(n/l)) = O(n·l·φ(n/l)), where f(m) = m·φ(m) is the union complexity of m regions (this needs the assumption of constant description complexity). Vertical decomposition of A(R) (or of A_{≤l}(R)): partition each cell of A(R) (or of A_{≤l}(R)) into pseudo-trapezoidal cells; the decomposition has the same asymptotic complexity as A(R) (or A_{≤l}(R)). (Figure: the vertical decomposition of level ≤ 1.)
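
Spelling out the substitution of the union-complexity bound f(m) = m·φ(m) into the Clarkson–Shor estimate:

$$O\big(l^2\,f(n/l)\big) \;=\; O\!\left(l^2\cdot\frac{n}{l}\,\varphi\!\left(\frac{n}{l}\right)\right) \;=\; O\big(n\,l\,\varphi(n/l)\big).$$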

39 (1/r)-cuttings. r = the parameter of the cutting, 1 ≤ r ≤ n. Construction: choose a random sample K ⊆ R of O(r log r) regions, form the planar arrangement A(K), and construct its vertical decomposition. The cells cover A(R), and with high probability each cell meets ≤ n/r region boundaries. # cells in the cutting: O(r² log² r). Improvement [Chazelle-Friedman]: the number of cells can be decreased to O(r²).

40 (1/r)-cuttings for A(R). r = the parameter of the cutting, 1 ≤ r ≤ n. 1. Choose a random sample K ⊆ R of O(r log r) regions. 2. Form the planar arrangement A(K) of K; overall complexity: O(r² log² r). 3. Construct the vertical decomposition of A(K); number of cells: O(r² log² r). Theorem [Clarkson & Shor], [Haussler & Welzl]: each cell is crossed by ≤ n/r region boundaries of R, with high probability. Improvement: use two-level sampling [Chazelle-Friedman]; the number of cells can be decreased to O(r²).

41 Improved (1/r)-cuttings. Theorem [Chazelle-Friedman]: the size of the cutting can be improved to O(r²). Proof sketch (two-level sampling): First step: choose a random sample K ⊆ R of O(r) regions and construct the vertical decomposition of A(K). Second (repair) step: for each ``heavy'' cell τ that meets t·n/r region boundaries, for t > 1, construct a (1/t)-cutting within τ: choose a random sample K_τ of O(t log t) of the regions crossing τ, construct the vertical decomposition of A(K_τ), and clip its cells to τ. The number of heavy cells is small!

42 Shallow cuttings. A shallow cutting is a (1/r)-cutting that covers A_{≤l}(R). Construction: discard all cells of the full (1/r)-cutting that do not meet A_{≤l}(R). Theorem [Matoušek], [Agarwal et al.]: the size of the shallow cutting is O(q·r·φ(r/q)), where q = l·r/n + 1. When l = n, the # cells is O(r²).

43 The algorithm for the discrete model. Δ = max_{p∈X} Δ(p, R); choose r = c·n/Δ, with c > 0 a sufficiently large constant. Construct a (1/r)-cutting for the portion of A(R) at level ≤ Δ. For each cell τ with τ ∩ X ≠ ∅, choose an arbitrary point p_τ ∈ τ ∩ X and eliminate all regions in R stabbed by p_τ. R' = the set of remaining regions. Claim: max_{p∈X} Δ(p, R') ≤ Δ/2. Recurse with R'. Bottom of the recursion, Δ ≤ c₁ (some constant): construct the entire A_{≤Δ}(R) and choose a single point of X in each non-empty cell.

44 The maximum depth in A(R'). A (remaining) region in R' cannot fully contain a non-empty cell τ of the cutting, so the boundary of such a region crosses every non-empty cell it meets. Hence max_{p∈X} Δ(p, R') ≤ max # region boundaries crossing a cell τ ≤ n/r = Δ/c ≤ Δ/2. The algorithm terminates after log Δ ≤ log n steps!

45 The size of the hitting set in each step. By the shallow-cutting theorem, in each step the number of points added to the hitting set is bounded by the number of cells: O(q·r·φ(r/q)), with q = l·r/n + 1. Putting l = Δ and r = c·n/Δ, the size is O((n/Δ)·φ(n/Δ)). Informal description: the average depth Δ' in the arrangement of the sample is O(1), and according to Clarkson & Shor the complexity of that level is O(r·Δ'·φ(r/Δ')) = O((n/Δ)·φ(n/Δ)).
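
Spelling out the substitution, with the notation of the shallow-cutting theorem: for l = Δ and r = c·n/Δ,

$$q \;=\; l\cdot\frac{r}{n} + 1 \;=\; c + 1 \;=\; O(1), \qquad O\big(q\,r\,\varphi(r/q)\big) \;=\; O\!\left(\frac{n}{\Delta}\,\varphi\!\left(\frac{n}{\Delta}\right)\right).$$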

46 The approximation factor. The overall size is O((n/Δ)·φ(n/Δ)·log n). Observation: n/Δ ≤ κ (the optimum), since each point of the optimum can stab at most Δ regions; this property holds in each step of the algorithm. Hence the hitting-set size is O(κ·φ(κ)·log n).

