Static Optimality and Dynamic Search Optimality in Lists and Trees


1 Static Optimality and Dynamic Search Optimality in Lists and Trees
Avrim Blum Shuchi Chawla Adam Kalai 1/6/2002

2 List Update Problem
Unordered list: 4 7 2 9 12 3 6 8
Query for element 9: scan from the front (4 = 9? No; 7 = 9? No; 2 = 9? No; 9 = 9? Yes)
Accessing element xi at position i takes i time

3 List Update Problem
After the access, 9 is moved to the front: 9 4 7 2 12 3 6 8
Moving up the accessed item is FREE! Other reorderings cost 1 per transposition
Goal: minimize total cost
What should the reordering policy be?
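The cost model above can be sketched in a few lines, using Move-To-Front as one concrete reordering policy (the policy choice here is illustrative; the slides ask which policy to use):

```python
# Sketch of the list-update cost model with the Move-To-Front policy.
# Accessing the element at position i costs i; moving the accessed
# element to the front afterwards is free.

def access_cost(lst, x):
    """Cost of finding x in lst (1-indexed position)."""
    return lst.index(x) + 1

def move_to_front(lst, x):
    """Free reordering allowed after an access: bring x to the front."""
    lst.remove(x)
    lst.insert(0, x)

def serve(lst, requests):
    """Serve a request sequence with Move-To-Front; return total cost."""
    total = 0
    for x in requests:
        total += access_cost(lst, x)
        move_to_front(lst, x)
    return total

total = serve([4, 7, 2, 9, 12, 3, 6, 8], [9, 9, 9])
# The first access to 9 costs 4; after it moves to the front,
# each further access costs 1, so the total is 6.
```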

4 Binary Search Tree
"In-order" tree on the keys 2, 4, 5, 7, 9, 12, 14, 15 (root 7, children 2 and 12, and so on)
Search cost = depth of element
Cost of rotations = 1 per rotation
Replacement policy = ?
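A minimal sketch of the search-cost model, rebuilding the slide's tree (root 7, then 2 and 12, and so on) by BST insertion; the dict representation is mine, not the slides':

```python
# Search cost = depth of the element in an "in-order" (binary search) tree.

def insert(tree, key):
    """Insert key into a BST represented as nested dicts."""
    if tree is None:
        return {"key": key, "left": None, "right": None}
    side = "left" if key < tree["key"] else "right"
    tree[side] = insert(tree[side], key)
    return tree

def depth(tree, key, d=1):
    """Depth of key (root has depth 1) = cost of searching for it."""
    if tree["key"] == key:
        return d
    side = "left" if key < tree["key"] else "right"
    return depth(tree[side], key, d + 1)

tree = None
for k in [7, 2, 12, 4, 9, 14, 5, 15]:
    tree = insert(tree, k)
# depth(tree, 7) == 1; deeper keys such as 5 and 15 cost 4 to find.
```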

5 How good is a reordering algorithm?
Compare against the best "offline" algorithm: the dynamic competitive ratio
Compare against the best offline algorithm that cannot change the state of the list/tree ("static"): the static competitive ratio

6 Known results
List update: dynamic ratio [Albers et al. '95, Teia '93]: upper bound 1.6, lower bound 1.5
Trees: splay trees [Sleator Tarjan '85]: static ratio ~3; no known result on dynamic ratio
Classic Machine Learning results: Experts analysis [LW '94]: static ratio (1+ε), but computationally inefficient
Recent new result [Kalai Vempala '01]: efficient algorithm for strong static optimality for trees

7 Our results
List update: (1+ε) static ratio via an efficient variant of Experts; we call this "strong" static optimality
Combining strong static optimality and dynamic optimality to get the best of both
Search trees: constant dynamic ratio, given free rotations and ignoring computation time

8 Revisiting the Experts Algorithm [Littlestone Warmuth '94]
N "expert" algorithms; we want to be (1+ε) with respect to the best one
Weighted Majority algorithm: assign weights to each expert by how well it performs, and pick probabilistically according to the weights
Cost incurred < (1+ε)m + O(ln N / ε), where m is the cost of the best expert
Applying this to list update: each list configuration is an expert
Too many experts (n!); can we reduce computation?
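The multiplicative-weights mechanism behind the bound can be sketched as follows (the loss model and names are illustrative, not from the slides):

```python
# Sketch of Randomized Weighted Majority: each expert's weight decays
# multiplicatively with the loss it incurs, and the algorithm follows
# an expert drawn with probability proportional to its weight.

def rwm(expert_losses, eps):
    """expert_losses: per-round lists of losses, one entry per expert.
    Returns the algorithm's total expected loss."""
    n = len(expert_losses[0])
    w = [1.0] * n
    total = 0.0
    for losses in expert_losses:
        s = sum(w)
        # Expected loss this round under the current weight distribution.
        total += sum(wi * li for wi, li in zip(w, losses)) / s
        # Multiplicative update: penalize experts that incurred loss.
        w = [wi * (1 - eps) ** li for wi, li in zip(w, losses)]
    return total

# Two experts over 100 rounds: expert 0 is perfect, expert 1 always
# loses 1.  The algorithm's total expected loss stays bounded even as
# the bad expert's cumulative loss grows to 100.
t = rwm([[0.0, 1.0]] * 100, eps=0.5)
```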

9 Experts for List Update
Weight for every static list – too much computation The BIG idea Assign weights to every item in list and still make an experts style analysis work!

10 Candidate algorithm Initialize wi for item i
If ith element accessed, update wi Order elements using some rule based on wi Need to analyze probability of getting a particular static list

11 Idea #2: List Factoring [Borodin ElYaniv]
Distribute the cost of an access among pairs of elements
Observation: the relative order of x and y in the list does not depend on accesses to other elements
Implication: we only need to analyze a two-element list!
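The pairwise decomposition of cost can be checked directly on a small static list (this check is mine, not from the slides): an access at position i pays 1 for the item itself plus 1 for each element in front of it, so total cost splits into per-pair contributions.

```python
from itertools import combinations

def pairwise_cost(order, requests, x, y):
    """Cost charged to the pair {x, y}: one unit each time one of them
    is accessed while the other sits in front of it in the static list."""
    cost = 0
    for r in requests:
        if r == x and order.index(y) < order.index(x):
            cost += 1
        elif r == y and order.index(x) < order.index(y):
            cost += 1
    return cost

order = [4, 7, 2, 9]
requests = [9, 2, 9, 4]
total = sum(order.index(r) + 1 for r in requests)      # full access cost
pair_sum = sum(pairwise_cost(order, requests, x, y)
               for x, y in combinations(order, 2))
# total == pair_sum + len(requests): each access pays 1 for itself,
# and everything else is accounted for pair by pair.
```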

12 Algorithm for a two-element list
List: (x,y); experts: (x,y) and (y,x); weights wx, wy correspond to the respective experts!
Algorithm: initialize wx, wy to rx, ry chosen uniformly at random from [1..1/ε]
If x is accessed, wx <- wx+1; else wy <- wy+1
Always keep the element with the higher weight in front

13 Efficient Experts algorithm for List Update
Select ri uniformly at random from [1..1/ε] for each element i
Initialize wi <- ri
If the ith element is accessed, wi <- wi+1
Order elements in decreasing order of weight
This is (1+ε) static competitive; [Kalai Vempala '01] give a similar result for trees

14 Combining Static & Dynamic optimality
A has strong static optimality, B has dynamic optimality Combine the two to get the best of both Apply Experts again Technical difficulties Cannot estimate weights – running both simultaneously defeats our purpose Experts maintain state - Huge cost of switching Don’t switch very often

15 How to estimate weights?
The Bandits approach [Burch 00] Bandit can sample at most one slot machine at a time Run the (so far) better expert Assume good behavior from the other - Pessimistic approach After a few runs, we have sampled each one sufficiently Details in the paper

16 Binary Search Trees Splay trees achieve constant static ratio
No results on dynamic optimality Simplifying the problem… Allow free rotations Allow unlimited computation We give a constant dynamic ratio

17 Outline of our approach
Design a probability distribution p over accesses: Low offline cost => greater probability Assume p reflects reality and predict the next access from it. Construct tree based on conditional probability of next access Low offline cost => node closer to root => low online cost
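One way to realize the "probable keys near the root" step is a weight-balanced construction (a hypothetical sketch, not the paper's construction): pick as root the key that best balances the probability mass on each side, so the search cost for x is O(log(1/q(x))).

```python
def build_tree(keys, q):
    """keys: sorted list of keys; q: dict mapping key -> probability of
    being the next access.  Choose the root that most evenly splits the
    probability mass, then recurse on each side."""
    if not keys:
        return None
    total = sum(q[k] for k in keys)
    imbalance, left = [], 0.0
    for k in keys:
        right = total - left - q[k]
        imbalance.append(abs(left - right))
        left += q[k]
    i = imbalance.index(min(imbalance))
    return {"key": keys[i],
            "left": build_tree(keys[:i], q),
            "right": build_tree(keys[i + 1:], q)}

# Illustrative distribution: key 4 is by far the most likely next access.
q = {2: 0.05, 4: 0.6, 5: 0.05, 7: 0.1, 9: 0.1, 12: 0.1}
tree = build_tree(sorted(q), q)
# The highest-probability key (4) ends up at the root.
```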

18 An observation about offline BSTs
An access sequence with offline cost k can be expressed in 12k bits At most 212k sequences of offline cost k.

19 An observation about offline BSTs
An access sequence with offline cost k can be expressed in 12k bits Express access sequence as rotations performed by an offline algorithm Start with a fixed tree, make some assumptions about offline algorithm – gives extra factor of 2 Express rotations from one tree to another using 6 bits per rotation At most 212k sequences of offline cost k.
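The counting bound above yields the distribution on the next slide; a sketch of the missing step (assigning mass by offline cost and checking it sums to at most 1):

```latex
\#\{a : \mathrm{cost}(a) = k\} \le 2^{12k},
\qquad
p(a) := 2^{-13\,\mathrm{cost}(a)}
\;\Longrightarrow\;
\sum_a p(a) \;\le\; \sum_{k \ge 1} 2^{12k} \cdot 2^{-13k}
\;=\; \sum_{k \ge 1} 2^{-k} \;\le\; 1 .
```

So p is a valid (sub-)probability distribution with p(a) > 2^(-13k) for every sequence a of offline cost k.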

20 Probability distribution on accesses
 Distribution on access sequence a : p(a) > 2-13k where k = offline cost of a Use this to calculate conditional probability of next access Construct tree such that Cost of accessing =  ln(1/p(a)) = O(k)

21 What next? Can we make this algorithm computationally feasible?
True dynamic optimality for BSTs
Strong static optimality was solved recently [KV '01]
Lessons to take home: experts analysis is a useful tool for data structures, but the generic algorithm is too slow

