Near-Linear Approximation Algorithms for Geometric Hitting Sets
Pankaj Agarwal (Duke University), Esther Ezra (Duke University), Micha Sharir (Tel-Aviv University)

Range Spaces. A range space (X, R): X is the ground set; R is the set of ranges, which are subsets of X, so |R| ≤ 2^|X|. Abstract form: a hypergraph, with X the vertices and R the hyperedges.

Geometric Range Spaces. Specification: X ⊆ ℝ^d, R = a set of simply-shaped regions in ℝ^d. Examples: X = points on the real line, R = intervals; X = points in the plane, R = halfplanes; X = points in the plane, R = disks. X finite: discrete model; X infinite: continuous model.

The hitting-set problem. A hitting set for (X, R) is a subset H ⊆ X such that Q ∩ H ≠ ∅ for every Q ∈ R. Goal: find a smallest hitting set. Useful in applications: art galleries, sensor networks, and more.
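As a concrete illustration of the definition, here is a minimal brute-force sketch in Python (exponential time, in line with the hardness discussed on the next slide; the small instance is a made-up example):

```python
from itertools import combinations

def min_hitting_set(X, ranges):
    """Exhaustive search for a smallest subset H of X that intersects
    every range (each range is a set of elements of X)."""
    for k in range(len(X) + 1):
        for H in combinations(X, k):
            if all(set(H) & Q for Q in ranges):
                return set(H)
    return set(X)  # only reached if some range is empty (unhittable)

# A tiny made-up instance: no single point hits all three ranges.
ranges = [{1, 2, 3}, {3, 4}, {4, 5}]
print(min_hitting_set([1, 2, 3, 4, 5], ranges))  # → {1, 4}
```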

Hardness of hitting sets. Finding a hitting set of smallest size is NP-hard, even for geometric range spaces! Use an approximation algorithm instead. Abstract range spaces: the greedy algorithm [Chvátal 79]. Geometric range spaces: the iterative reweighting scheme [Brönnimann-Goodrich 95], [Clarkson 93].

The greedy algorithm. OPT = size of the smallest hitting set (the optimum); m = |X|, n = |R|. The algorithm chooses a single point in each iteration. Approximation factor: O(log m). Number of iterations: O(OPT log m). Running time per iteration: O(nm) (naïve); in some cases this can be improved to O*(n + m). Overall: O*(nm OPT). This is the best approximation factor known to be achievable in polynomial time! Geometric range spaces: an improved approximation factor can be obtained.
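The greedy rule can be sketched in a few lines of Python; this is a naïve O(nm)-per-iteration version on a made-up instance, not the tuned implementation alluded to above:

```python
def greedy_hitting_set(X, ranges):
    """Greedy hitting set: each iteration picks the point of X contained
    in the largest number of not-yet-hit ranges (one point per iteration,
    logarithmic approximation factor). Assumes every range contains a
    point of X."""
    unhit = [set(Q) for Q in ranges]
    H = []
    while unhit:
        best = max(X, key=lambda p: sum(p in Q for Q in unhit))
        H.append(best)
        unhit = [Q for Q in unhit if best not in Q]
    return H

print(greedy_hitting_set([1, 2, 3, 4, 5], [{1, 2, 3}, {3, 4}, {4, 5}]))  # → [3, 4]
```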

Iterative reweighting scheme [Brönnimann-Goodrich 95], [Clarkson 93]. OPT = size of the smallest hitting set; m = |X|, n = |R|. Approximation factor: O(log OPT); sometimes the approximation factor is even smaller! Number of iterations: O(OPT log(m/OPT)). Running time per iteration: O*(n OPT + m) (naïve); in some cases this can be improved to O*(n + m), improving the overall bound. Overall: O*(m OPT + n OPT²). Goal: improve the running time. A (near) linear-time algorithm?
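A runnable sketch of the reweighting loop for the simplest geometric case, points and intervals on the real line. The deterministic weighted ε-net finder below and the brute-force verifier are illustrative stand-ins, not the data structures of the near-linear implementation; `opt_guess` is assumed to be an upper bound on OPT.

```python
def interval_eps_net(points, weights, eps):
    """Weighted eps-net for points vs. intervals: sweep left to right and
    pick a point whenever the weight accumulated since the last pick
    reaches eps * (total weight). At most 1/eps points are picked."""
    total = sum(weights.values())
    net, acc = [], 0.0
    for p in sorted(points):
        acc += weights[p]
        if acc >= eps * total:
            net.append(p)
            acc = 0.0
    return net

def bg_hitting_set(points, intervals, opt_guess):
    """Bronnimann-Goodrich reweighting with eps = 1/(2*opt_guess):
    while some interval is unhit by the eps-net, double the weights of
    the points inside it. Assumes every interval contains a point."""
    eps = 1.0 / (2 * opt_guess)
    w = {p: 1.0 for p in points}
    for _ in range(100000):  # theory: O(opt * log(m/opt)) iterations suffice
        net = interval_eps_net(points, w, eps)
        unhit = [(a, b) for (a, b) in intervals
                 if not any(a <= p <= b for p in net)]
        if not unhit:
            return net
        a, b = unhit[0]  # an unhit range is light: double its points
        for p in points:
            if a <= p <= b:
                w[p] *= 2.0
    raise RuntimeError("opt_guess too small?")

net = bg_hitting_set(list(range(10)), [(0, 2), (3, 5), (7, 9)], 3)
print(net)  # a small net stabbing all three intervals
```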

Our result. Planar regions: obtain an O*(log n) approximation in near-linear time when the union complexity of R is near-linear; this applies in both the discrete and the continuous model. Specifically: if the union complexity (number of vertices/edges on the union boundary) of any subset R' ⊆ R with |R'| = r is O(r φ(r)), where φ(·) is a slowly growing function, then we obtain a hitting set for (X, R) of size O(OPT φ(OPT) log n) in randomized expected time O*(m + n). The running time is O*(n) for the continuous model.

Our result: axis-parallel boxes in ℝ^d. Discrete model: approximation factor O(log n), running time O*(m + n). Continuous model: approximation factor O(log^(d-1) OPT), running time O(n log n). Fast implementation of the iterative reweighting scheme (discrete model): approximation factor O(log OPT) in any dimension d; running time O*(m + n + OPT^(d+1)). Note that the union complexity of boxes is quadratic; still, for d = 2, 3 the factor improves to O(log log OPT) via the ε-nets of [Aronov et al. 2009].

Improved approximation factors (achieved in near-linear time):
d = 2:
  Disks (pseudo-disks): O(log n)
  α-fat wedges: O(log n)
  α-fat triangles: O(log n log log n)
  Locally γ-fat objects: O(polylog n)
d ≥ 1:
  Axis-parallel boxes (discrete): O(log n)
  Axis-parallel boxes (continuous): O(log^(d-1) OPT)

Planar regions with near-linear union complexity

Arrangement and depth. R = {Q_1, …, Q_n} is a set of planar regions. The arrangement A(R) of R: the subdivision of the plane induced by R; its complexity (number of vertices/edges/2-dimensional cells) is O(n²). Vertical decomposition of A(R): a decomposition of the cells of A(R) into pseudo-trapezoidal cells. The depth Δ(p, R) of a point p ∈ ℝ² in A(R) = the number of regions of R that contain p in their interior. [Figure: a point p with Δ(p, R) = 3.]
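For intuition, the depth of a single point is trivial to evaluate directly; a small sketch for disks (the three disks are made-up examples):

```python
def depth(p, disks):
    """Depth of point p = number of disks (cx, cy, r) that contain p
    in their (open) interior."""
    x, y = p
    return sum((x - cx) ** 2 + (y - cy) ** 2 < r * r for (cx, cy, r) in disks)

disks = [(0, 0, 2), (1, 0, 2), (5, 5, 1)]
print(depth((0.5, 0.0), disks))  # → 2 (inside the first two disks only)
```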

(1/r)-cuttings for A(R). r is the parameter of the cutting, 1 ≤ r ≤ n. A (1/r)-cutting for A(R) is a set of pairwise-disjoint pseudo-trapezoidal cells such that: 1. they cover A(R); 2. each cell is crossed by ≤ n/r region boundaries of R. Simple construction [Clarkson-Shor 89]: choose a random sample K ⊆ R of O(r log r) regions, form the planar arrangement A(K), and construct its vertical decomposition. Number of cells: O(r² log² r). (In general, the number of cells is denoted f(r).)

Properties of (1/r)-cuttings. Theorem [Matoušek 92], [Agarwal et al. 2000]: Put Δ = max over p ∈ ℝ² of Δ(p, R). There exists a (1/r)-cutting with O(q r φ(r/q)) cells, where q = Δ(r/n) + 1 and O(r φ(r)) is the union complexity. In particular, for r = min{c n/Δ, n}, with c > 0 a sufficiently large constant, the number of cells of the cutting is O(r φ(r)) = O((n/Δ) φ(n/Δ)). When Δ ≤ c we have r = n, and then the number of cells is O(n φ(n)).

The algorithm for the continuous model. While R ≠ ∅:
1. Δ = max over p ∈ ℝ² of Δ(p, R); choose r = min{c n/Δ, |R|}.
2. Construct a (1/r)-cutting for A(R).
3. For each cell τ of the cutting, choose an arbitrary point p ∈ τ.
4. Eliminate all regions in R stabbed by these points; R' = the set of remaining regions.
5. Claim: max over p ∈ ℝ² of Δ(p, R') ≤ Δ/2.
6. Set R ← R' and repeat.

The maximum depth in A(R'). A (remaining) region Q ∈ R' cannot fully contain a cell τ of the cutting (otherwise it would have been stabbed). So if Q meets τ, the boundary of Q must cross τ. Hence max over p ∈ ℝ² of Δ(p, R') ≤ the maximum number of region boundaries crossing a cell ≤ n/r = Δ/c ≤ Δ/2. The algorithm therefore terminates after log Δ ≤ log n steps!

The approximation factor. In each step, the number of points added to the hitting set is bounded by the number of cells: O(r φ(r)) = O((n/Δ) φ(n/Δ)). The overall size is thus O((n/Δ) φ(n/Δ) log n). Observation: n/Δ ≤ OPT, since each point of the optimum can stab at most Δ regions and all n regions must be stabbed; this property holds in each step of the algorithm. The hitting-set size is therefore O(OPT φ(OPT) log n).

The discrete model. The points of the hitting set must be chosen from the given set X. Apply a more intricate variant of the algorithm: Δ = max over p ∈ X of Δ(p, R); choose r = min{c n/Δ, n}. Use a (1/r)-cutting that covers only the portion of A(R) at depth ≤ Δ. Choose a point of X in each non-empty cell of the cutting. In each step the cutting size is O((n/Δ) φ(n/Δ)) = O(OPT φ(OPT)). Overall size: O(OPT φ(OPT) log n).

Open problems. Improve the approximation factor (in near-linear time): ideally O(φ(OPT)), but anything better than O(φ(OPT) log n) would already be 'exciting'. Dynamic hitting sets: maintain a hitting set under insertions/deletions of regions.

Thank you very much!

Implementation. Two major tasks: (1) construct the shallow cutting; (2) efficiently report, for each non-empty cell τ of the cutting, all regions stabbed by the chosen point p ∈ τ ∩ X. Use a variant of the randomized incremental construction of [Agarwal et al.] to construct A_{≤ℓ}(R): randomly permute the regions in R, and run the incremental algorithm for the first r steps.

Randomized incremental construction. Let R_i = the first i regions of the permutation. At each step i maintain: the cells τ of the decomposition of A(R_i) that meet A_{≤ℓ}(R); the conflict lists, storing for each cell τ the set R_τ of regions whose boundary crosses τ, and for each region the set of cells that it meets; and, for each cell τ, the level with respect to A(R) of one arbitrarily chosen interior point p of τ.

Inserting a new region (a single step of the algorithm). Let Q_i be the region inserted at the i-th step. Use the conflict lists to: find all cells τ of the current decomposition that Q_i meets; split these cells, re-decompose them, update the conflict lists, and update the level information of the new cells τ'. Then test whether each new cell τ' meets A_{≤ℓ}(R). Theorem: the running time is O(n log² r + r ℓ φ(n/ℓ)). For our choice of ℓ and r, this is near-linear in n, up to polylogarithmic factors.

Axis-parallel rectangles: the discrete model. The union complexity of axis-parallel rectangles is quadratic in the worst case. Observation: the complexity of the union of axis-parallel rectangles, all of which meet a common vertical line, is only linear. The previous algorithm can therefore be applied in this case.

The plane decomposition. Project all rectangles and points onto the x-axis, obtaining a set I of n intervals. Compute, in O(n log n) time, an optimal hitting set Q for I. The set L of vertical lines x = q, for q ∈ Q, intersects all rectangles in R, and |L| = |Q| ≤ OPT.
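The 1-dimensional subproblem is the classical interval-stabbing problem, solved exactly by a greedy sweep; a short sketch:

```python
def stab_intervals(intervals):
    """Exact minimum hitting set for intervals on a line: sort by right
    endpoint and stab with the right endpoint of the first unhit
    interval. O(n log n) time."""
    stabbers, last = [], float("-inf")
    for a, b in sorted(intervals, key=lambda iv: iv[1]):
        if a > last:            # not hit by the last stabber placed
            last = b
            stabbers.append(b)
    return stabbers

print(stab_intervals([(0, 3), (1, 2), (4, 6), (5, 9)]))  # → [2, 6]
```

The projected points of any hitting set for the rectangles stab all the intervals, which is why |Q| is at most the rectangle optimum.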

The lines ℓ ∈ L partition the plane into atomic slabs. Construct a balanced binary tree over these slabs. A rectangle Q is stored at the highest node v of the tree whose line ℓ_v meets Q; thus Q is associated with a unique node v. R_v = the set of rectangles stored at v.

The algorithm. At each level i of the tree, the complexity of the union of the rectangles stored at that level is linear (the sets R_v, R_u are pairwise disjoint for v ≠ u). Apply the previous algorithm at each level. Hitting-set size: O(OPT log n) for a single level, and O(OPT log OPT log n) overall. Running time: O((m + n) log² OPT log n).

The continuous model: much simpler. Construct the tree decomposition as above. The points of the hitting set are chosen on the lines ℓ_v: apply the 1-dimensional (exact) algorithm to the projection of R_v onto ℓ_v to obtain a hitting set Q_v. For each pair of nodes v ≠ u at the same level i, the hitting sets Q_v and Q_u are disjoint, so the sum of |Q_v| over the nodes v at level i is ≤ OPT.

The continuous model. Overall size: O(OPT log OPT). Running time: O(n log n). The algorithm can be extended to any dimension d, using induction on d: project all boxes onto the x_d-axis; apply the 1-dimensional (exact) algorithm on the x_d-axis; obtain a tree decomposition whose nodes carry (d-1)-dimensional hyperplanes orthogonal to the x_d-axis; solve the problem recursively on each hyperplane. At the i-th level this yields a hitting set of size O(OPT log^(d-2) OPT); overall size: O(OPT log^(d-1) OPT).

Thank you

Sensor-networking application. Input: C = a set of clients c; A = the locations of antennas a, each with a unit sensing radius. Goal: find a minimum-size set of antennas that serves all clients. Hitting-set instance: X = the antenna locations; R = the unit disks D(c, 1) centered at the clients. An antenna a covers a client c iff a lies inside D(c, 1).

Approximation for geometric hitting sets. Geometric range spaces: an improved approximation factor can be achieved! Approximation factor: O(1 + log OPT), and sometimes even smaller: points and disks or pseudo-disks in 2D: O(1); points and halfspaces in 2D and 3D: O(1); points and axis-parallel boxes in 2D and 3D: O(log log OPT). This is achieved via ε-nets.

ε-nets for range spaces. Given: a range space (X, R) with X finite, |X| = n, and a parameter 0 < ε < 1. An ε-net for (X, R) is a subset N ⊆ X that hits every range Q ∈ R with |Q ∩ X| ≥ εn; that is, N is a hitting set for all the ``heavy'' ranges. Example: for points and intervals on the real line, |N| = 1/ε suffices; the bound does not depend on n.
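For the interval example the net can be written down directly; a sketch (taking every k-th point of the sorted order, k = ⌈εn⌉, so any range with ≥ εn points spans k consecutive sorted positions and must contain a pick):

```python
import math

def eps_net_intervals(points, eps):
    """eps-net for points on the real line w.r.t. intervals: every k-th
    point of the sorted order, k = ceil(eps * n). Size <= 1/eps."""
    pts = sorted(points)
    k = math.ceil(eps * len(pts))
    return pts[k - 1::k]

print(eps_net_intervals(range(10), 0.3))  # → [2, 5, 8]
```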

Approximation for geometric hitting sets: the Brönnimann-Goodrich technique / LP relaxation. If (X, R) admits an ε-net of size f(1/ε), then there exists a polynomial-time approximation algorithm that reports a hitting set of size O(f(OPT)). Idea: assign weights to the points of X so that each range Q ∈ R becomes heavy; construct an ε-net for the weighted range space; then each range is hit by the ε-net. Small ε-nets imply small approximation factors!

The ε-net theorem. What is the size of ε-nets in geometric range spaces? Theorem [Haussler-Welzl 87]: if the ranges are simply-shaped regions, then for any ε > 0 there exists an ε-net N ⊆ X of size O((1/ε) log(1/ε)); the bound does not depend on n. Moreover, a random sample of that size is an ε-net with constant probability. Theorem [Komlós, Pach, Woeginger 92]: the bound is tight, but the construction is artificial (non-geometric!); no lower bound better than Ω(1/ε) is known in geometry.
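The random-sampling statement is easy to test empirically for intervals on a line; a sketch, with an ad-hoc constant in the sample size and a brute-force net checker (both illustrative, not part of the cited proofs):

```python
import math
import random

def is_eps_net(net, points, eps):
    """True iff every interval containing >= eps*n of the points contains
    a point of net; checking the minimal k-point windows suffices."""
    pts = sorted(points)
    n, k = len(pts), math.ceil(eps * len(pts))
    return all(any(pts[i] <= q <= pts[i + k - 1] for q in net)
               for i in range(n - k + 1))

random.seed(1)
points, eps = list(range(100)), 0.2
size = int((4 / eps) * math.log(2 / eps))  # ~(1/eps) log(1/eps); constant 4 is ad hoc
trials = 50
hits = sum(is_eps_net(random.choices(points, k=size), points, eps)
           for _ in range(trials))
print(hits, "of", trials, "random samples of size", size, "were eps-nets")
```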

The Brönnimann-Goodrich technique. Number of iterations: O(OPT log(m/OPT)). Performance of the algorithm, per iteration: (weighted) net-finder: O(m); verifier: O(n|N| + m) = O*(n OPT + m) (naïve), using |N| = O(OPT log OPT). Overall: O*(m OPT + n OPT²). Improvement in some cases (axis-parallel rectangles, and planar regions with near-linear union complexity): verifier O*(m + n); overall O*((n + m) OPT).

Arrangements and levels. R = {Q_1, …, Q_n} is a set of planar regions; A(R) is the arrangement of R. The depth Δ(p, R) of a point p ∈ ℝ² in A(R) = the number of regions of R that contain p in their interior. The ℓ-level of A(R) = the set of all points of depth ℓ; in particular, the 0-level is the closure of the complement of the union of R. A_{≤ℓ}(R) = the set of points at level ≤ ℓ in A(R). [Figure: a point p with Δ(p, R) = 3.]

Complexity of A_{≤ℓ}(R). Using Clarkson-Shor: the complexity of A_{≤ℓ}(R) is O(ℓ² f(n/ℓ)) = O(n ℓ φ(n/ℓ)), where the union complexity is f(n/ℓ) = (n/ℓ) φ(n/ℓ). (One needs to assume constant description complexity of the regions.) Vertical decomposition of A(R) (or of A_{≤ℓ}(R)): partition each cell into pseudo-trapezoidal cells; the decomposition has the same asymptotic complexity as A(R) (or A_{≤ℓ}(R)). [Figure: the vertical decomposition of the level ≤ 1.]

(1/r)-cuttings. r is the parameter of the cutting, 1 ≤ r ≤ n. Construction of a (1/r)-cutting: choose a random sample K ⊆ R of O(r log r) regions, form the planar arrangement A(K), and construct its vertical decomposition. All cells together cover A(R), and, with high probability, each cell meets ≤ n/r region boundaries. The number of cells in this cutting is O(r² log² r); an improvement of [Chazelle-Friedman] decreases it to O(r²).

(1/r)-cuttings for A(R). 1. Choose a random sample K ⊆ R of O(r log r) regions. 2. Form the planar arrangement A(K); overall complexity O(r² log² r). 3. Construct the vertical decomposition of A(K); number of cells O(r² log² r). Theorem [Clarkson-Shor], [Haussler-Welzl]: each cell is crossed by ≤ n/r region boundaries of R, with high probability. Improvement via two-level sampling [Chazelle-Friedman]: the number of cells can be decreased to O(r²).

Improved (1/r)-cuttings. Theorem [Chazelle-Friedman]: the size of the cutting can be improved to O(r²). Proof sketch (apply a two-level sampling): First step: choose a random sample K ⊆ R of O(r) regions, and construct the vertical decomposition of A(K). Second (repair) step: for each ``heavy'' cell τ that meets t · n/r region boundaries, for t > 1, construct a (1/t)-cutting within τ: choose a random sample K_τ of O(t log t) regions, construct the vertical decomposition of A(K_τ), and clip its cells to τ. The number of heavy cells is small, so the total stays O(r²)!

Shallow cuttings. A shallow cutting is a (1/r)-cutting that covers only A_{≤ℓ}(R). Construction: discard all cells of the full (1/r)-cutting that do not meet A_{≤ℓ}(R). Theorem [Matoušek], [Agarwal et al.]: the size of the shallow cutting is O(q r φ(r/q)), where q = ℓ(r/n) + 1. When ℓ = n, the number of cells is O(r²).

The algorithm for the discrete model. Δ = max over p ∈ X of Δ(p, R); choose r = c n/Δ, with c > 0 a sufficiently large constant. Construct a (1/r)-cutting for the level ≤ Δ of A(R). For each cell τ with τ ∩ X ≠ ∅, choose an arbitrary point p ∈ τ ∩ X. Eliminate all regions in R stabbed by these points; R' = the set of remaining regions. Claim: max over p ∈ X of Δ(p, R') ≤ Δ/2. Recurse with R'. Bottom of the recursion, Δ ≤ c_1 (some constant): construct the entire A_{≤Δ}(R), and choose a single point of X in each non-empty cell.

The maximum depth in A(R'). A (remaining) region in R' cannot fully contain a non-empty cell τ of the cutting (it would have been stabbed); the boundaries of such regions cross the cells. Hence max over p ∈ X of Δ(p, R') ≤ the maximum number of region boundaries crossing a cell ≤ n/r = Δ/c ≤ Δ/2. The algorithm terminates after log Δ ≤ log n steps!

The size of the hitting set in each step. By the shallow-cutting theorem, in each step the size of the hitting set is bounded by the number of cells: O(q r φ(r/q)), with q = ℓ(r/n) + 1. Putting ℓ = Δ and r = c n/Δ, the size is O((n/Δ) φ(n/Δ)). Informal explanation: the average depth Δ' in the arrangement of the sample is O(1); according to Clarkson-Shor, the complexity of that level is O(r Δ' φ(r/Δ')) = O((n/Δ) φ(n/Δ)).

The approximation factor. The overall size is O((n/Δ) φ(n/Δ) log n). Observation: n/Δ ≤ OPT, since each point of the optimum can stab at most Δ regions; this property holds in each step of the algorithm. The hitting-set size is therefore O(OPT φ(OPT) log n).