Online Algorithms

An online algorithm is one that can process its input piece by piece, without having the entire input available from the start. In contrast, an offline algorithm is given the whole problem data from the beginning and is required to output an answer which solves the problem at hand.

Input: a sequence of requests.
Task: process the requests as efficiently as possible.
Online: the i-th request has to be processed before future requests are known.
Offline: all requests are known in advance.
How to measure the quality of online algorithms?

• Assume some a priori knowledge about the request sequence, e.g., "requests are chosen randomly".
• Assume a worst-case measure: compare the online cost to the optimal offline cost.

Online deterministic: standard competitive analysis. An algorithm ALG is c-competitive if for every request sequence σ, cost_ALG(σ) ≤ c · cost_OPT(σ) + α for some constant α; the smallest such c is the competitive ratio.
Online randomized: the expected online cost is compared to the offline cost; the achievable ratio depends on the adversary model (oblivious or adaptive).
Toy Example: Ski rental problem

I go skiing one year after the other, until I am no longer interested in skiing. I do not know in advance when I lose interest. Buying skis costs D € (and I can use them forever); renting them costs 1 € per year.
Question: When should I buy skis?
Answer: Immediately, if I am interested for at least D years; never otherwise.
This can only be done offline!
Toy Example: Ski rental problem

What to do online? I buy when I go skiing for the D-th year.
Cost offline: D
Cost online: (D − 1) · 1 + D = 2D − 1 < 2D
This strategy is 2-competitive.
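The break-even strategy and its competitive ratio can be checked with a small sketch (the helper names `offline_cost` and `online_cost` are my own):

```python
def offline_cost(years, D):
    # Clairvoyant strategy: buy immediately (cost D) if skiing at least
    # D years, otherwise always rent at 1 EUR per year.
    return min(years, D)

def online_cost(years, D):
    # Break-even strategy: rent in years 1..D-1, buy in year D.
    if years < D:
        return years            # only rentals
    return (D - 1) + D          # D-1 rentals plus the purchase
```

The ratio online/offline is 1 while renting and (2D − 1)/D = 2 − 1/D once the skis are bought, so it never exceeds 2.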
Paging: A basic problem for an operating system

(Figure: a main memory of size k = 6 and a disk.)
Paging: given a main memory that can hold k pages.
Input: a sequence of pages to be used by the processor.
Goal: need as few page faults (requests for pages that have to be moved from disk to main memory) as possible.
On a page fault, the algorithm has to store the requested page in main memory and has to choose a page to be removed from main memory.
Paging: two online strategies

LRU (Least Recently Used): on a fault, evict the page whose most recent access lies furthest in the past.
FIFO (First In, First Out): on a fault, evict the page that entered main memory first.

Theorem: LRU and FIFO are k-competitive.
Theorem: This competitive ratio is best possible for deterministic online algorithms.
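Both strategies can be sketched as fault counters (a minimal sketch; the function names are my own):

```python
from collections import OrderedDict, deque

def lru_faults(requests, k):
    """Count page faults of LRU with k memory slots (a sketch)."""
    memory, faults = OrderedDict(), 0
    for page in requests:
        if page in memory:
            memory.move_to_end(page)            # refresh recency on a hit
        else:
            faults += 1
            if len(memory) == k:
                memory.popitem(last=False)      # evict least recently used
            memory[page] = True
    return faults

def fifo_faults(requests, k):
    """Count page faults of FIFO with k memory slots (a sketch)."""
    memory, order, faults = set(), deque(), 0
    for page in requests:
        if page not in memory:
            faults += 1
            if len(memory) == k:
                memory.remove(order.popleft())  # evict oldest resident page
            memory.add(page)
            order.append(page)
    return faults
```

A cyclic sequence over k + 1 pages makes both algorithms fault on every request; this is the kind of worst case behind the k-competitiveness bound.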
Paging in practice

In practice, both algorithms perform much better; the observed competitive ratio decreases with increasing memory size k. Furthermore, LRU turns out to be better than FIFO.
Reason: in practice, request sequences exhibit locality, i.e., they tend to use the same pages more often and have dependencies among pages. ("If page A is accessed, then it is likely that page B will be accessed shortly afterwards.")
Way out: model restrictions on the "adversary", i.e., the bad guy who generates the worst-case sequences. This is done using access graphs.
Page Migration Model (1)

Page migration is a classical online problem.
Processors are connected by a network (figure: nodes v1, ..., v7). There are communication costs associated with each edge. The cost of communication between a pair of nodes is the cost of the cheapest path between these nodes; hence the communication costs fulfill the triangle inequality.
Page Migration Model (2)

Alternative view: the processors are points in a metric space. An indivisible memory page of size D resides in the local memory of one processor (initially at some fixed node).
Page Migration Model (3)

Input: a sequence of processors σ1, σ2, ..., dictated by an adversary. Request σt is the processor which wants to access (read or write) one unit of data from the memory page. After serving a request, the algorithm may move the page to a new processor.
Page Migration (cost model)

The page is at node p.
Serving a request issued at node r costs d(p, r).
Moving the page to node p' costs D · d(p, p'), where D ≥ 1 is the size of the page.
A randomized algorithm

Memoryless coin-flipping algorithm CF [Westbrook 92]: in each step, after serving a request issued at r, move the page to r with probability 1/(2D).
Theorem: CF is 3-competitive against an adaptive-online adversary.
Remark: This ratio is optimal against an adaptive-online adversary (which may see the outcomes of the coin flips).
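CF can be written as a short simulation of the cost model above (a sketch; `cf_run` and the seeded random source are my own choices):

```python
import random

def cf_run(requests, dist, start, D, rng=None):
    """Simulate the coin-flipping algorithm CF (a sketch).

    dist(u, v) is the metric, D the page size factor; after serving a
    request at r, the page moves to r with probability 1/(2D)."""
    rng = rng or random.Random(0)
    page, cost = start, 0.0
    for r in requests:
        cost += dist(page, r)                 # cost of serving the request
        if rng.random() < 1.0 / (2 * D):      # coin flip after serving
            cost += D * dist(page, r)         # cost of moving the page to r
            page = r
    return cost
```

For example, on the line metric dist(u, v) = |u − v| with all requests at one remote node, the total cost is at most one serve per request plus a single move.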
Proof of competitiveness of CF

We run CF and OPT "in parallel" on the input sequence and define the potential function Φ = 3D · d(a, b), where a and b are the current positions of the page in CF and OPT.
There are two events to consider in each step:
Part 1: a request occurs at a node, CF and OPT serve the request, and CF optionally moves the page.
Part 2: OPT optionally moves the page.
For each part separately, we prove that E[C_CF + ΔΦ] ≤ 3 · C_OPT, where C_CF and C_OPT are the costs incurred in this part.
Proof of competitiveness of CF

Note: summing the inequality E[C_CF + ΔΦ] ≤ 3 · C_OPT over all steps yields a telescoping sum. The intermediate potential values cancel out, and since Φ ≥ 0 and Φ = 0 initially (both pages start at the same node), we get the competitive ratio 3.
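Written out (a sketch; here $C^t$ denotes the cost incurred in step $t$ and $\Phi_t$ the potential after step $t$, notation assumed):

```latex
\sum_{t=1}^{T} \mathbb{E}\left[C^t_{\mathrm{CF}}\right]
  \le \sum_{t=1}^{T} \left( 3\,C^t_{\mathrm{OPT}} - \mathbb{E}[\Phi_t - \Phi_{t-1}] \right)
  = 3\,C_{\mathrm{OPT}} + \Phi_0 - \mathbb{E}[\Phi_T]
  \le 3\,C_{\mathrm{OPT}} + \Phi_0 ,
```

where the last step uses $\Phi_T \ge 0$; with both pages starting at the same node, $\Phi_0 = 0$.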
Competitiveness of CF, a step

The page is at node a in CF and at node b in OPT, respectively. A request occurs at node r.
Part 1: CF and OPT serve the request, and CF optionally moves the page to r.
Part 2: OPT optionally moves the page to some node b'.
Competitiveness of CF – part 1

A request occurs at r.
Cost of serving the request: in CF: d(a, r); in OPT: d(b, r).
Expected cost of moving the page: (1/(2D)) · D · d(a, r) = d(a, r)/2.
Potential before: Φ = 3D · d(a, b).
Expected potential after: (1/(2D)) · 3D · d(r, b) + (1 − 1/(2D)) · 3D · d(a, b).
Expected change of the potential: E[ΔΦ] = (3/2) · (d(r, b) − d(a, b)).
By the triangle inequality, d(a, b) ≥ d(a, r) − d(r, b), hence
E[C_CF + ΔΦ] ≤ (3/2) · d(a, r) + (3/2) · (d(r, b) − d(a, r) + d(r, b)) = 3 · d(b, r) = 3 · C_OPT.
Deterministic algorithm

Algorithm Move-To-Min (MTM) [Awerbuch, Bartal, Fiat 93]: after each D steps, choose m to be the node which minimizes the total distance to the requests of the last D steps, and move the page to m. (m is the best place for the page in the last D steps.)
Theorem: MTM is 7-competitive.
Remark: The currently best deterministic algorithm achieves a competitive ratio of 4.086.
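MTM can be sketched analogously to CF (a sketch assuming a known finite node set to minimize over; `mtm_run` is my own name):

```python
def mtm_run(requests, nodes, dist, start, D):
    """Simulate Move-To-Min (a sketch): after every D requests, move the
    page to the node minimizing the total distance to those D requests."""
    page, cost = start, 0.0
    for i in range(0, len(requests), D):
        phase = requests[i:i + D]
        for r in phase:
            cost += dist(page, r)                       # serve the requests
        m = min(nodes, key=lambda v: sum(dist(v, r) for r in phase))
        cost += D * dist(page, m)                       # move the page to m
        page = m
    return cost
```

On the line metric with nodes {0, 1, 2}, page at 0, D = 2 and both requests at node 2, the serving cost is 4, the page then moves to node 2 at cost 4.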
Results on page migration

The best known bounds:

Deterministic: upper bound 4.086 [Bartal, Charikar, Indyk '96]; lower bound ≈ 3.148 [Chrobak, Larmore, Reingold, Westbrook '94].
Randomized, oblivious adversary: upper bound ≈ 2.618 [Westbrook '91].
Randomized, adaptive-online adversary: 3, which is tight (algorithm CF above).
Exploit Locality – Scenario

Networks have low bandwidth, global objects are small, access is fine-grained; this is typical for parallel processor networks, partially also for the Internet.
Bottleneck: link congestion.
Task: distribute global objects (maybe dynamically) among the processors such that
• an application (a sequence of read/write accesses to global variables) can be executed using small link congestion, and
• the storage overhead is small.
Basic Strategy

• Design a strategy for trees.
• Produce a strategy for the target network by tree embedding.
Dynamic Model

Application: a sequence of read/write requests from processors to objects. Each processor decides solely based on its local knowledge: a distributed online strategy.
Goal: develop a strategy that produces only a factor of c more congestion than an optimal offline strategy (a c-competitive strategy), and uses a factor of m more storage per processor (an (m, c)-competitive strategy).
Dynamic strategy for trees

v writes to x: v creates (or updates) a copy of x at v and invalidates all other copies (consistency!).
v reads x: v reads the closest copy of x and creates copies at every processor on the path back to v.
(Remark: data tracking in trees is easy!)
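The read/write rules for a single object x on a tree can be sketched as follows, counting how often each link is used (a sketch; as an assumption, invalidation is charged the links of the subtree spanning the copies and the writer, and all names are my own):

```python
from collections import deque

def bfs_path(adj, src, targets):
    """Path in the tree from the node in `targets` closest to `src` back to `src`."""
    parent, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u in targets:
            path = [u]
            while parent[path[-1]] is not None:
                path.append(parent[path[-1]])
            return path
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    raise ValueError("no copy reachable")

def subtree_edges(adj, root, marked):
    """Edges of the minimal subtree spanning `marked` and `root`."""
    parent, stack, seq = {root: None}, [root], []
    while stack:
        u = stack.pop()
        seq.append(u)
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                stack.append(w)
    contains, edges = {u: u in marked for u in seq}, []
    for u in reversed(seq):                 # children before parents
        if u != root and contains[u]:
            edges.append(frozenset((u, parent[u])))
            contains[parent[u]] = True
    return edges

class TreeStrategy:
    """Dynamic strategy for one object x on a tree (a sketch)."""
    def __init__(self, adj, home):
        self.adj, self.copies, self.edge_uses = adj, {home}, {}

    def _use(self, edges):
        for e in edges:
            self.edge_uses[e] = self.edge_uses.get(e, 0) + 1

    def read(self, v):
        # fetch from the closest copy, replicate along the path back to v
        path = bfs_path(self.adj, v, self.copies)
        self._use(frozenset(e) for e in zip(path, path[1:]))
        self.copies.update(path)

    def write(self, v):
        # invalidate all copies: messages traverse the subtree spanning them
        self._use(subtree_edges(self.adj, v, self.copies))
        self.copies = {v}
```

On the path 0–1–2, the phase write(0), read(1), read(2), write(2) uses each link exactly twice, consistent with the "at most three times per link" analysis on the next slide.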
Example and Analysis

Consider the phase write(v0), read(v1), read(v2), ..., read(v_{k−1}), write(v_k). The requesting nodes span a subtree of the tree (drawn red in the figure).
Every strategy has to use each link of the red subtree at least once. Our strategy uses each of these links at most three times.
Hence the strategy is 3-competitive for trees.
Other networks

Idea: simulate a suitable tree in the target network M by a tree embedding.
Goals:
• small dilation (in order to reduce the overall load)
• randomized embedding (in order to reduce congestion)
Do these goals contradict each other?!
Tree embedding

A randomized, locality-preserving embedding! Example: the n×n mesh.
The tree M' is obtained by recursively partitioning the mesh into submeshes:
leaves of M': the nodes of the mesh;
link capacity of a tree edge: the number of mesh links leaving the corresponding submesh.
(Figure: the first three levels of the decomposition.)
Results for meshes

The static and dynamic strategies are O(log n)-competitive on n×n meshes, w.h.p.
Finding an optimal static placement for several variables is NP-hard already on 3×3 meshes.
Conclusions

• Provably efficient protocols for data management in networks
• Different models for different application scenarios
• Experimental evaluation (Presto, DIVA)

Some open problems:
• startup times
• combination with load balancing
• randomized, locality-preserving embedding and routing in dynamic networks