
Presentation on theme: "Hierarchical Well-Separated Trees (HST) Edges’ distances are uniform across a level of the tree Stretch  = factor by which distances decrease from root."— Presentation transcript:

1 Hierarchical Well-Separated Trees (HST)
Edges’ distances are uniform across a level of the tree
Stretch α = factor by which distances decrease from root to leaf
Distortion = factor by which the distance between 2 points increases when the HST is traversed instead of taking the direct distance
– Upper bound is O(α log_α n)
(Diagram from Fakcharoenphol, Rao & Talwar 2003)
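To make the structure concrete, here is a minimal sketch of an HST and of leaf-to-leaf distances along tree paths. The names (`HSTNode`, `tree_distance`) and the particular 2-HST built at the bottom are illustrative, not from the paper:

```python
# Minimal sketch of a hierarchical well-separated tree (HST).
# Edge weights are uniform within a level and shrink by the stretch
# factor (here α = 2) from one level to the next.

class HSTNode:
    def __init__(self, label, children=None, edge_weight=0.0):
        self.label = label               # point name (for leaves)
        self.children = children or []   # empty list for leaves
        self.edge_weight = edge_weight   # weight of the edge to the parent
        self.parent = None
        for c in self.children:
            c.parent = self

def path_to_root(node):
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return path

def tree_distance(u, v):
    """Distance between two leaves = total edge weight on the tree path."""
    anc_v = set(path_to_root(v))
    dist = 0.0
    node = u
    while node not in anc_v:             # climb from u to the LCA
        dist += node.edge_weight
        node = node.parent
    lca = node
    node = v
    while node is not lca:               # climb from v to the same LCA
        dist += node.edge_weight
        node = node.parent
    return dist

# Build a 2-HST: edges at the bottom level have weight 2,
# edges at the level below the root have weight 4.
a = HSTNode("a"); b = HSTNode("b"); c = HSTNode("c")
left = HSTNode("L", [a, b])
a.edge_weight = b.edge_weight = 2.0      # uniform within a level
root = HSTNode("root", [left, c])
left.edge_weight = c.edge_weight = 4.0

print(tree_distance(a, b))   # 4.0  (siblings at the bottom level)
print(tree_distance(a, c))   # 10.0 (through the root: 2 + 4 + 4)
```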

2 Pure Randomized vs. Fractional Algorithms
“Fractional view” = keep track only of marginal distributions of some quantities
– Lossy compared to a pure randomized algorithm
– Which marginals to track?
Claim: for some algorithms, the fractional view can be converted back to a randomized algorithm with little loss

3 Fractional View of the K-server Problem
For node j, let T(j) = leaves of the subtree of T rooted at j
At time step t, for leaf i, p_i^t = probability of having a server at i
– If there is a request at i at time t, p_i^t should be 1
Expected number of servers across T(j): k_t(j) = Σ_{i ∈ T(j)} p_i^t
Movement cost: Σ_{j ∈ T} W(j) · |k_t(j) − k_{t−1}(j)|
(Parts of diagram from Bansal 2011)
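The bookkeeping above can be sketched directly from the definitions. This is an illustrative toy (the node names, weights, and marginals are made up), not the paper's algorithm:

```python
# Fractional bookkeeping on an HST: k_t(j) from leaf marginals, and the
# weighted movement cost between two consecutive fractional states.
# subtree_leaves maps node j -> leaves of T(j); W maps j -> edge weight W(j).

def expected_servers(p, leaves):
    """k_t(j) = sum of p_i^t over leaves i in T(j)."""
    return sum(p[i] for i in leaves)

def movement_cost(p_prev, p_curr, subtree_leaves, W):
    """Sum over nodes j of W(j) * |k_t(j) - k_{t-1}(j)|."""
    return sum(
        W[j] * abs(expected_servers(p_curr, leaves)
                   - expected_servers(p_prev, leaves))
        for j, leaves in subtree_leaves.items()
    )

# Toy tree: root with an internal node u (leaves a, b) and a leaf c.
subtree_leaves = {"u": ["a", "b"], "a": ["a"], "b": ["b"], "c": ["c"]}
W = {"u": 4.0, "a": 2.0, "b": 2.0, "c": 4.0}

p_prev = {"a": 1.0, "b": 0.0, "c": 1.0}   # 2 servers: at a and c
p_curr = {"a": 0.5, "b": 0.5, "c": 1.0}   # half a server moved a -> b

# u's total is unchanged (k(u) stays 1), so only the cheap bottom edges pay.
print(movement_cost(p_prev, p_curr, subtree_leaves, W))  # 2.0
```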

4 The Allocation Problem
Decide how to (re-)distribute κ servers among d locations (each location at uniform distance from a center; each may request an arbitrary number of servers)
Each request at a location i is a vector {h_t(0), h_t(1), …, h_t(κ)}
– h_t(j) = cost of serving the request using j servers
– Ex.: a request at i = 0 is {∞, 2, 1, 0, 0} (monotonically non-increasing; h_t(0) = ∞ forces at least one server)
Total cost = hit cost + movement cost
(Parts of diagram from Bansal 2011; a callout reads “I can work with 1, but I’d like 3!”)
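A toy cost accounting makes the hit/movement split concrete. This is an illustrative sketch, not the online algorithm: the allocation sequence is given in advance, and moving a server between any two locations is charged 1 for simplicity:

```python
# Toy cost accounting for the (integral) allocation problem.
# requests[t] = (location, h) with h[j] = cost of serving with j servers
# at that location; allocations[t] = server counts per location at time t.

INF = float("inf")

def serve(requests, allocations):
    total = 0.0
    for t, (loc, h) in enumerate(requests):
        if t > 0:
            # movement cost: each server moved between locations costs 1
            moved = sum(abs(a - b) for a, b in
                        zip(allocations[t], allocations[t - 1])) / 2
            total += moved
        total += h[allocations[t][loc]]   # hit cost at the requested location
    return total

# Two locations, kappa = 2 servers. Requests at location 0 with
# h = {inf, 2, 1}: serving with 0 servers there is infeasible.
requests = [(0, [INF, 2, 1]), (0, [INF, 2, 1])]
allocations = [[1, 1], [2, 0]]            # pull the second server over

print(serve(requests, allocations))       # 2 (hit) + 1 (move) + 1 (hit) = 4.0
```

Paying the movement cost of 1 here buys the cheaper hit cost h(2) = 1 on the second request; that trade-off is exactly what the online algorithm must manage.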

5 Fractional View of the Allocation Problem
Let x_{i,j}^t = (non-negative) probability of having j servers at location i at time t
Probabilities sum to one: Σ_j x_{i,j}^t = 1
Number of servers used must not exceed the number available: Σ_i Σ_j j · x_{i,j}^t ≤ κ
Hit cost incurred at the requested location i: Σ_j h_t(j) · x_{i,j}^t
Movement cost incurred: Σ_i Σ_j |Σ_{j′<j} x_{i,j′}^t − Σ_{j′<j} x_{i,j′}^{t−1}|
Note: the fractional A.P. is too weak to yield a randomized A.P. algorithm
– But we don’t really care about the A.P.; we care about the k-server problem!
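The constraints and costs above translate line by line into code. A minimal sketch (function names and the two-location example are illustrative):

```python
# Feasibility and cost checks for a fractional allocation state.
# x[i][j] = probability that location i holds exactly j servers.

def feasible(x, kappa, tol=1e-9):
    """Each x[i] is a distribution, and expected servers used <= kappa."""
    rows_ok = all(abs(sum(xi) - 1.0) <= tol for xi in x)
    expected = sum(j * xij for xi in x for j, xij in enumerate(xi))
    return rows_ok and expected <= kappa + tol

def hit_cost(x, i, h):
    """Expected serving cost at location i: sum_j h(j) * x_{i,j}."""
    return sum(h[j] * x[i][j] for j in range(len(h)))

def movement_cost(x_prev, x_curr):
    """sum_i sum_j | sum_{j'<j} x_{i,j'}^t - sum_{j'<j} x_{i,j'}^{t-1} |."""
    total = 0.0
    for xi_prev, xi_curr in zip(x_prev, x_curr):
        cum_prev = cum_curr = 0.0
        for j in range(1, len(xi_curr)):
            cum_prev += xi_prev[j - 1]
            cum_curr += xi_curr[j - 1]
            total += abs(cum_curr - cum_prev)
    return total

# Two locations, kappa = 1 server, server counts j in {0, 1}.
x_prev = [[0.5, 0.5], [0.5, 0.5]]
x_curr = [[0.0, 1.0], [1.0, 0.0]]   # concentrate the server at location 0

print(feasible(x_curr, kappa=1))            # True
print(hit_cost(x_curr, 0, [5.0, 0.0]))      # 0.0 (a server is surely there)
print(movement_cost(x_prev, x_curr))        # 0.5 + 0.5 = 1.0
```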

6 From Allocation to K-Server
Theorem 2: It suffices to have a (1+ε, β(ε))-competitive fractional A.P. algorithm on a uniform metric to get a k-server algorithm on an HST of depth ℓ that is O(β·ℓ)-competitive (Coté et al. 2008)
Theorem 1: Bansal et al.’s k-server algorithm has a competitive ratio of Õ(log²k log³n)

7 The Main Algorithm
1. Embed the n points into a distribution μ over α-HSTs with stretch α = Θ(log n log(k log n))
– (No time to discuss; this step is essentially from the paper of Fakcharoenphol, Rao & Talwar 2003)
2. According to distribution μ, pick a random HST T
– Extra step: transform the HST into a weighted HST (we’ll briefly touch on this)
(Diagram from Bansal 2011)

8 The Main Algorithm
3. Solve the (fractional) allocation problem on T’s root node + immediate children, then recursively solve the same problem on each child
– Intuitive application of Theorem 2
– d = number of immediate children of a given node
– At the root node: κ = all k servers
– At an internal node i: κ = the resulting (re)allocation of servers from i’s parent
(Allocation instances; diagram from Bansal 2011)
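The recursive structure of step 3 can be sketched as follows. The allocation subroutine itself is stubbed out (here it just splits servers evenly, which is NOT the real online algorithm; that is the subject of Theorems 2–4), so only the top-down recursion is shown:

```python
# Structural sketch of step 3: an allocation instance at each internal
# node decides how many servers each child subtree receives; recurse.

def solve_allocation(kappa, num_children):
    """Placeholder for the real allocation algorithm: distribute
    kappa servers among num_children child subtrees (evenly here)."""
    base, extra = divmod(kappa, num_children)
    return [base + (1 if c < extra else 0) for c in range(num_children)]

def assign_servers(tree, node, kappa, assignment):
    """tree: dict node -> list of children (leaves map to [])."""
    children = tree[node]
    if not children:                      # leaf: kappa servers live here
        assignment[node] = kappa
        return
    counts = solve_allocation(kappa, len(children))
    for child, k_child in zip(children, counts):
        assign_servers(tree, child, k_child, assignment)

tree = {"root": ["u", "c"], "u": ["a", "b"], "a": [], "b": [], "c": []}
assignment = {}
assign_servers(tree, "root", 3, assignment)   # k = 3 servers at the root
print(assignment)   # {'a': 1, 'b': 1, 'c': 1}
```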

9 Detour: Weighted HST
Degenerate case of a normal HST: depth ℓ = O(n) (can happen if the n points lie on a line with geometrically increasing distances)

10 Detour: Weighted HST
Solution: allow the lengths of edges to be non-uniform
– Allow leaf-to-leaf distortion of at most 2α/(α−1)
– Depth ℓ = O(log n)
Consequence: the uniform A.P. becomes a weighted-star A.P.

11 Proving the Main Algorithm
Theorem 1: Bansal et al.’s k-server algorithm has a competitive ratio of Õ(log²k log³n)
Idea of proof: track how the competitive ratio and distortion evolve as we transform:
Fractional allocation algorithm
↓
Fractional k-server algorithm on an HST
↓
Randomized k-server algorithm on an HST

12 Supplemental Theorems
Theorem 3: For ε > 0, there exists a fractional A.P. algorithm on a weighted-star metric that is (1+ε, O(log(k/ε)))-competitive (refinement of Theorem 2, to be discussed by Tanvirul)
Theorem 4: If T is a weighted α-HST with depth ℓ and Theorem 3 holds, then there is a fractional k-server algorithm that is O(ℓ log(kℓ))-competitive as long as α = Ω(ℓ log(kℓ))
Theorem 5: If T is an α-HST with α > 5, then any fractional k-server algorithm on T converts to a randomized k-server algorithm on T that is about as competitive (only O(1) loss)
Theorem 6: If T is an α-HST with n leaves and any depth, it can be transformed into a weighted α-HST with identical leaves but depth O(log n), with leaf-to-leaf distances distorted by at most 2α/(α−1)

13 Proof of Theorem 1
1. Embed the n points into a distribution μ over α-HSTs with stretch α = Θ(log n log(k log n))
– Distortion is O(α log_α n)
– The resulting HSTs may have depth ℓ up to O(n)
2. According to distribution μ, pick a random HST T and transform it into a weighted HST
– From Theorem 6, depth ℓ is reduced to O(log n)
– Stretch α is now Ω(ℓ log(kℓ))

14 Proof of Theorem 1
3. Solve the (fractional) allocation problem on T’s root node + immediate children, then recursively solve the allocation problem on the children
– This is precisely Theorem 2 refined by Theorem 3
– Stretch α = Ω(ℓ log(kℓ)), so Theorem 4 is applicable!
– This yields a fractional k-server algorithm with competitiveness O(ℓ log(kℓ)) = O(log n log(k log n))
– Applying Theorem 5, we get similar competitiveness for the randomized k-server algorithm

15 Proof of Theorem 1
Expected distortion relative to the optimal solution Opt*_M, given the cost c_T of the solution on T:
E_μ[c_T] = O(α log_α n) · Opt*_M
Alg_M ≤ Alg_T ≤ O(log n log(k log n)) · c_T
E_μ[Alg_M] = O(log n log(k log n)) · E_μ[c_T] = O(log n log(k log n)) · O(α log_α n) · Opt*_M

16 Proof of Theorem 1
E_μ[Alg_M] = O(log n log(k log n)) · O(α log_α n) · Opt*_M
This implies a competitive ratio of:
O(log n log(k log n)) · O(α log_α n)
= O(log n log(k log n)) · O(α · log n / log α)
= O(log³n · log²(k log n) / log log n)
= O(log²k log³n log log n)
= Õ(log²k log³n)
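Spelled out, the substitutions behind this chain are α = Θ(log n log(k log n)), log_α n = log n / log α, log α = Ω(log log n), and (for k ≥ 2) log(k log n) = O(log k · log log n):

```latex
\[
\begin{aligned}
\text{ratio}
  &= O\!\bigl(\log n \,\log(k\log n)\bigr)\cdot O\!\bigl(\alpha \log_{\alpha} n\bigr)
   && \text{with } \alpha = \Theta\!\bigl(\log n\,\log(k\log n)\bigr)\\
  &= O\!\Bigl(\log n\,\log(k\log n)\cdot \alpha\,\tfrac{\log n}{\log\alpha}\Bigr)
   && \log_{\alpha} n = \tfrac{\log n}{\log\alpha}\\
  &= O\!\Bigl(\tfrac{\log^{3} n\,\log^{2}(k\log n)}{\log\log n}\Bigr)
   && \log\alpha = \Omega(\log\log n)\\
  &= O\!\bigl(\log^{2}k\,\log^{3}n\,\log\log n\bigr)
   && \log(k\log n) = O(\log k\,\log\log n)\\
  &= \widetilde{O}\!\bigl(\log^{2}k\,\log^{3}n\bigr).
\end{aligned}
\]
```

The Õ in the last step hides the remaining log log n factor.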

