
1 An Approximation Algorithm for Binary Searching in Trees. Marco Molinaro (Carnegie Mellon University), joint work with Eduardo Laber (PUC-Rio).

2 Searching in sorted lists. We are given a sorted list of numbers in which exactly one number m is marked. The goal is to find the marked number using queries of the form 'x ≤ m?'. [Figure: example sorted list 3, 5, 6, 10, 14]

3 Searching in sorted lists. A search strategy is a procedure that indicates which number should be queried next. It can be represented by a decision tree (DT), and the number of queries needed to find m equals the length of the corresponding root-to-leaf path. [Figure: a decision tree for the list 3, 5, 6, 10, 14, with a '≤' and a '>' branch at each queried number]

4 Searching in sorted lists. We are given the probability of each number being the marked one. The expected number of queries of a strategy is the expected path length of the corresponding decision tree, and an efficient strategy is one of minimum expected path length. [Figure: the same list and decision tree, with probabilities 0.05, 0.1, 0.2, 0.1, 0.5 attached to the numbers]
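To make the cost measure on this slide concrete, here is a minimal Python sketch (my own illustration, not from the talk) that evaluates the expected number of queries of a decision-tree strategy over the weighted list in the figure; the node structure, names, and the particular tree are illustrative assumptions.

```python
# Illustrative decision-tree node: it asks "q <= m?"; 'yes' and 'no' are the two
# subtrees, and 'leaf' names the number identified as marked.
class DTNode:
    def __init__(self, q=None, yes=None, no=None, leaf=None):
        self.q, self.yes, self.no, self.leaf = q, yes, no, leaf

def expected_queries(node, prob, depth=0):
    """Expected path length: sum over numbers x of prob[x] times the depth of x's leaf."""
    if node.leaf is not None:
        return prob[node.leaf] * depth
    return (expected_queries(node.yes, prob, depth + 1) +
            expected_queries(node.no, prob, depth + 1))

# Small example over the list {3, 5, 6, 10, 14} used on the slides.
prob = {3: 0.05, 5: 0.1, 6: 0.2, 10: 0.1, 14: 0.5}
dt = DTNode(q=6,
            yes=DTNode(q=10,
                       yes=DTNode(q=14, yes=DTNode(leaf=14), no=DTNode(leaf=10)),
                       no=DTNode(leaf=6)),
            no=DTNode(q=5, yes=DTNode(leaf=5), no=DTNode(leaf=3)))
print(expected_queries(dt, prob))  # 2.5 for this particular tree and these weights
```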

5 Searching in trees. We are given a tree with exactly one marked node m. We can query an arc and find out which of its endpoints is closer to the marked node.

6 Searching in trees. A search strategy is a procedure that indicates which arc should be queried next; it can be represented by a decision tree. [Figure: an example tree on nodes a, b, c, d, f, h and a decision tree (DT) whose internal nodes query arcs such as (c,d), (a,b), (f,h), (d,f)]

7 Searching in trees. As in the sorted-list case, the number of queries needed to find m equals the length of the corresponding root-to-leaf path in the decision tree. [Figure: the same decision tree with the path through the queries (c,d), (f,h), (d,f), ending at the leaf f, highlighted]

8 Searching in trees. We are given the probability of each node being the marked one. The expected number of queries is the expected path length of the corresponding decision tree, and the goal is to find a DT of minimum expected path length. [Figure: the same tree and decision tree, now with probabilities (0.2, 0.1, 0.3, ...) attached to the nodes]
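As a minimal sketch of the query model on these slides (assuming an adjacency-list representation; the helper name and the example tree shape are my own): removing the queried arc splits the tree into two components, and the answer tells us which component still contains the marked node, so the search can continue there.

```python
from collections import deque

def component_after_query(adj, u, v, closer):
    """One arc query (u, v): removing arc {u, v} splits the tree into two
    components; 'closer' is the endpoint reported as closer to the marked node.
    Returns the component that still contains the marked node."""
    seen, queue = {closer}, deque([closer])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if {x, y} == {u, v}:        # never cross the queried arc
                continue
            if y not in seen:
                seen.add(y)
                queue.append(y)
    return seen

# Path-shaped stand-in for the slides' example tree on nodes a, b, c, d, f, h
# (the exact shape of the figure is not recoverable from the transcript).
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'],
       'd': ['c', 'f'], 'f': ['d', 'h'], 'h': ['f']}
print(component_after_query(adj, 'c', 'd', closer='d'))  # {'d', 'f', 'h'}
```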

9 Searching in trees. Problem definition: given a tree T and weights w, compute a decision tree for searching in T with minimum expected root-to-leaf path length w.r.t. w. Motivation: this generalizes searching in totally ordered structures to (one type of) partially ordered structures, and it has applications to software testing and filesystem synchronization.

10 Related work: searching in sorted lists.
- Worst-case: binary search is optimal.
- Average-case: Knuth [Acta Informatica 71], an O(n²) algorithm; de Prisco and de Santis [IPL 93], a good approximation in linear time.

11 Related work: searching in trees.
- Worst-case: Ben-Asher et al. [SIAM J. Comput. 99], O(n⁴ log³ n); Onak and Parys [FOCS 06], O(n³); Mozes et al. [SODA 08], O(n).
- Average-case: Kosaraju et al. [WADS 99], an O(log n)-approximation.

12 Related work: searching in posets.
- Worst-case: Arkin et al. [Int. J. Comput. Geometry Appl. 98], an O(log n)-approximation; Carmo et al. [TCS 04] show that finding an optimal strategy is NP-hard and give a constant-factor approximation for random posets.
- Average-case: Kosaraju et al. [WADS 99], an O(log n)-approximation.

13 Our results. The first constant-factor approximation for searching in trees under the average-case metric, with linear running time.

14 Overview. We already know how to search in sorted lists with probabilities, and searching in a path is the same as searching in a sorted list.

15 Overview. [Figure: illustration of a search strategy on an example tree]

16 Algorithm. (1) Find a (heavy) path; (2) compute a decision tree for this path; (3) append decision trees for querying the hanging arcs; (4) recursively find strategies for the hanging subtrees and append them.
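A much-simplified, runnable sketch of these four steps (my own illustration, not the paper's code): it picks the path by greedily descending into the heaviest child, and it returns a flat query plan rather than the decision tree the actual algorithm builds with the sorted-list routine; the function names, the example tree, and the weights are all assumptions made for illustration.

```python
def subtree(adj, root, banned):
    """Nodes reachable from 'root' without passing through the node 'banned'."""
    stack, seen = [root], {root}
    while stack:
        x = stack.pop()
        for y in adj[x]:
            if y != banned and y not in seen:
                seen.add(y)
                stack.append(y)
    return seen

def heavy_path(adj, w, root):
    """Step 1 (placeholder rule): from the root, repeatedly descend into the
    child whose subtree carries the most weight."""
    path, parent, u = [root], None, root
    while True:
        children = [v for v in adj[u] if v != parent]
        if not children:
            return path
        nxt = max(children, key=lambda v: sum(w[x] for x in subtree(adj, v, u)))
        path.append(nxt)
        parent, u = u, nxt

def strategy(adj, w, root):
    """Steps 2-4, flattened: query the heavy-path arcs, then each arc hanging
    off the path, and recurse into the corresponding hanging subtree."""
    path = heavy_path(adj, w, root)
    plan = [('path-arc', path[i], path[i + 1]) for i in range(len(path) - 1)]
    on_path = set(path)
    for u in path:
        for v in adj[u]:
            if v in on_path:
                continue
            hang = subtree(adj, v, u)                 # one hanging subtree T_i^j
            sub_adj = {x: [y for y in adj[x] if y in hang] for x in hang}
            plan.append(('hanging-arc', u, v))
            plan.append(('recurse', strategy(sub_adj, w, v)))
    return plan

# Tiny made-up example (tree shape and weights are illustrative only).
adj = {'a': ['b'], 'b': ['a', 'c', 'e'], 'c': ['b', 'd'], 'd': ['c'], 'e': ['b']}
w = {'a': 0.1, 'b': 0.2, 'c': 0.3, 'd': 0.3, 'e': 0.1}
print(strategy(adj, w, 'a'))
```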

17 Analysis. Notation: T is the input tree; w(u) is the likelihood of node u being the marked one; w(T′) = ∑_{u ∈ T′} w(u); the T_i^j are the hanging subtrees of T; the cost of a decision tree is its expected path length. [Figure: the input tree T and its hanging subtrees T_i^j]
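Writing the cost notion from this slide as a formula (the depth symbol d_D(u) is my notation, not the slides'): the cost of a decision tree D is the expected number of queries when the marked node is drawn according to w, and OPT(T) is the minimum over all valid decision trees.

```latex
\[
  \mathrm{cost}(D) \;=\; \sum_{u \in T} w(u)\, d_D(u),
  \qquad
  \mathrm{OPT}(T) \;=\; \min_{D} \mathrm{cost}(D),
\]
% where d_D(u) is the length of the root-to-leaf path followed in D when u is marked.
```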

18 Analysis – upper bound. ALGO(T), the expected path length of the computed decision tree, is the sum of the costs of the three color-coded parts of the construction, which gives ALGO(T) ≤ H + w(T) + ∑_{i,j} j·w(T_i^j) + ∑_{i,j} ALGO(T_i^j), where H is the entropy of {w(u)}. [Figure: the input tree T and the computed decision tree with the three parts color-coded]

19 Analysis – lower bounds. The upper bound UB on ALGO(T) is compared against two lower bounds on OPT(T): when H >> w(T), UB is compared against LB1 alone (LB1 suffices only when H is large); when H ≤ w(T), UB is compared against LB1 + LB2. Together this shows that for all H, ALGO(T) ≤ α·OPT(T). (UB is the bound of slide 18; LB1 and LB2 are the bounds of slides 20 and 21.)

20 Analysis – entropy lower bound. Decompose the cost of an optimal decision tree D* into three parts: from the root to the purple nodes, between purple nodes, and from the purple nodes to the leaves.
- From the root to the purple nodes: using Shannon's lossless coding theorem, this part is at least H / log 3 − w(T).
- Between purple nodes: there are at most 2 purple nodes per level.
- From the purple nodes to the leaves: every query to an arc of a tree T_i^j is a descendant of a purple node, so this part costs at least as much as searching inside the trees T_i^j, namely ∑_{i,j} OPT(T_i^j).
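Combining the first and third parts above into a single inequality (the middle part is nonnegative, so it can be dropped; this only restates the slide's bounds):

```latex
\[
  \mathrm{OPT}(T) \;\ge\; \frac{H}{\log 3} \;-\; w(T) \;+\; \sum_{i,j} \mathrm{OPT}\bigl(T_i^j\bigr),
  \qquad H = \text{entropy of } \{w(u)\}.
\]
```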

21 Analysis – alternative lower bound. Decompose the cost of D* into two parts: OPT(T) ≥ (cost from the root to the purple nodes) + (cost from the purple nodes to the leaves).
- From the root to the purple nodes: this costs ∑_{i,j} (distance to the i-th purple node)·w(T_i^j); at most one purple node can have distance 0 and each w(T_i^j) ≤ w(T)/2, so this part costs at least w(T)/2.
- From the purple nodes to the leaves: as before, this costs at least as much as searching inside the trees T_i^j, namely ∑_{i,j} OPT(T_i^j).
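Similarly, the two parts above combine into the alternative lower bound:

```latex
\[
  \mathrm{OPT}(T) \;\ge\; \frac{w(T)}{2} \;+\; \sum_{i,j} \mathrm{OPT}\bigl(T_i^j\bigr).
\]
```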

22 Efficient implementation. Most steps take linear time. To find a good strategy the algorithm sorts the weights; replacing exact sorting with linear-time approximate sorting, the whole algorithm can be implemented in linear time.
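The slide does not specify the approximate-sorting scheme, so the following is only one plausible linear-time choice (an assumption): bucket the weights by their binary exponent, so items in the same bucket differ by at most a factor of 2, and output the buckets from heaviest to lightest.

```python
from math import frexp

def approx_sort_desc(weights):
    """Approximately sort indices by decreasing weight in linear time: items in
    the same bucket share a binary exponent, so their weights are within a
    factor of 2 of each other."""
    buckets = {}
    for idx, w in enumerate(weights):
        e = frexp(w)[1]                     # w = m * 2**e with 0.5 <= m < 1
        buckets.setdefault(e, []).append(idx)
    order = []
    # The number of distinct exponents is bounded by the floating-point range,
    # so sorting the bucket keys costs effectively constant time.
    for e in sorted(buckets, reverse=True):
        order.extend(buckets[e])
    return order

print(approx_sort_desc([0.05, 0.1, 0.2, 0.1, 0.5]))  # [4, 2, 1, 3, 0]
```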

23 Conclusions. We give the first constant-factor approximation for searching in trees (average-case), with linear running time. Open questions: is searching in trees polynomially solvable? Can improved approximations be obtained for more general posets?

24 Thank you!

