
WS 2006-07 Prof. Dr. Th. Ottmann Algorithmentheorie 16 – Persistenz und Vergesslichkeit.


1 WS 2006-07 Prof. Dr. Th. Ottmann Algorithmentheorie 16 – Persistenz und Vergesslichkeit

2 WS 2006-07 Overview
- Motivation: oblivious and persistent structures
- Examples: arrays, search trees
- Making structures (partially) persistent: structure-copying, path-copying, and DSST methods
- Application: point location
- Oblivious structures: randomized and uniquely represented structures, c-level jump lists

3 WS 2006-07 Motivation
A structure storing a set of keys is called oblivious if its generation history cannot be inferred from its current shape. A structure is called persistent if it supports access to multiple versions.
- Partially persistent: all versions can be accessed, but only the newest version can be modified.
- Fully persistent: all versions can be accessed and modified.
- Confluently persistent: two or more old versions can be combined into one new version.

4 WS 2006-07 Example: Arrays
Array: 2, 4, 8, 15, 17, 43, 47, ...
A uniquely represented structure, hence oblivious!
- Access: O(log n) time by binary search.
- Update (insertion, deletion): Θ(n) time.
Caution: the storage structure may still depend on the generation history!
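As a minimal sketch of the trade-off on this slide (illustrative code, not from the lecture): a sorted Python list gives O(log n) access by binary search, while every insertion or deletion shifts up to n elements.

```python
import bisect

class SortedArrayDict:
    """Keys kept in one sorted list: a uniquely represented, hence oblivious, dictionary."""

    def __init__(self):
        self.keys = []

    def contains(self, key):
        # Access: O(log n) binary search.
        pos = bisect.bisect_left(self.keys, key)
        return pos < len(self.keys) and self.keys[pos] == key

    def insert(self, key):
        # Update: Theta(n), all larger keys shift one slot to the right.
        pos = bisect.bisect_left(self.keys, key)
        if pos == len(self.keys) or self.keys[pos] != key:
            self.keys.insert(pos, key)

    def delete(self, key):
        # Update: Theta(n), all larger keys shift one slot to the left.
        pos = bisect.bisect_left(self.keys, key)
        if pos < len(self.keys) and self.keys[pos] == key:
            self.keys.pop(pos)
```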

5 WS 2006-07 Example: Natural search trees
Only partially oblivious!
- The insertion history can sometimes be reconstructed.
- Deleted keys are not visible.
Access, insertion, and deletion of keys may take Θ(n) time.
(Figure: inserting 1, 3, 5, 7 and inserting 5, 1, 3, 7 yield two differently shaped natural search trees.)

6 WS 2006-07 Simple methods for making structures persistent
- Structure-copying method: make a copy of the data structure each time it is changed. Yields full persistence at the price of Θ(n) time and space per update to a structure of size n.
- Store a log file of all updates! In order to access version i, carry out the i updates starting from the initial structure and regenerate version i: Θ(i) time per access, O(1) space and time per update.
- Hybrid method: store the complete sequence of updates and, additionally, every k-th version for a suitably chosen k. Result: any choice of k causes a blowup in either storage space or access time.
Are there any better methods?
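A sketch of the structure-copying method under the stated cost model (class and method names are illustrative): every update deep-copies the chosen base version, which immediately yields full persistence at Θ(n) time and space per update.

```python
import copy

class CopyOnUpdate:
    """Full persistence by brute force: each update copies an entire version."""

    def __init__(self, initial_structure):
        self.versions = [initial_structure]      # versions[i] is version i

    def update(self, base_version, update_fn):
        # Theta(n) time and space for a structure of size n.
        new_structure = copy.deepcopy(self.versions[base_version])
        update_fn(new_structure)
        self.versions.append(new_structure)
        return len(self.versions) - 1            # index of the newly created version

    def access(self, version):
        return self.versions[version]

# Example: store = CopyOnUpdate(set()); v1 = store.update(0, lambda s: s.add(5))
```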

7 WS 2006-07 Making data structures persistent
Several constructions to make various data structures persistent had been devised, but no general approach was taken until the seminal paper by Driscoll, Sarnak, Sleator, and Tarjan (1986). They propose methods to make linked data structures partially as well as fully persistent. Let's first look at how to make structures consisting of linked nodes (trees, directed graphs, ...) partially persistent.

8 WS 2006-07 Fat node method - partial persistence
- Record all changes made to node fields in the nodes themselves.
- Each fat node contains the same fields as an ephemeral node plus a version stamp.
- Add the modification history to every node: each field in a node holds a list of version-value pairs.

9 WS 2006-07 Fat node method - partial persistence
Modifications:
- Ephemeral update step i creates a new node: create a new fat node with version stamp i and the original field values.
- Ephemeral update step i changes a field: store the new field value together with version stamp i.
Each node thus knows what its value was at any previous point in time.
Access to field f in version i:
- Choose the value with the maximum version stamp no greater than i.
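A sketch of one fat-node field, assuming version stamps arrive in increasing order as in partial persistence (names are illustrative): setting a value appends a version-value pair, and reading a version binary-searches for the largest stamp not exceeding it.

```python
import bisect

class FatField:
    """The modification history of a single field, stored as parallel sorted lists."""

    def __init__(self, version, value):
        self.versions = [version]        # version stamps, strictly increasing
        self.values = [value]

    def set(self, version, value):
        # O(1): update steps arrive in increasing version order (partial persistence).
        self.versions.append(version)
        self.values.append(value)

    def get(self, version):
        # O(log m): value with the maximum version stamp no greater than `version`.
        i = bisect.bisect_right(self.versions, version) - 1
        return self.values[i]

# A fat node is then just a key plus one FatField per pointer field, e.g.
# node = {"key": 5, "left": FatField(1, None), "right": FatField(1, None)}
```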

10 WS 2006-07 Fat node method - analysis
- Time cost per access: O(log m) slowdown per node (binary search on the modification history after m update steps).
- Time and space cost per update step: O(1) (the modification is stored together with its version stamp at the end of the modification history).

11 WS 2006-07 Fat node method - Example
A partially persistent search tree. Insertions of 5, 3, 13, 15, 1, 9, 7, 11, 10, followed by deletion of item 13.
(Figure: the resulting fat-node tree, each node annotated with its version stamps.)

12 WS 2006-07 Path-copying method - partial persistence
- Make a copy of a node before changing it to point to a new child. Cascade the change back until the root is reached. Restructuring costs O(height of tree) per update operation.
- Every modification creates a new root.
- Maintain an array of roots indexed by version stamps.

13 WS 2006-07 Path-copying method
(Figure: version 0, a search tree containing the keys 1, 3, 5, 7.)

14 WS 2006-07 Path-copying method
(Figure: version 1, after Insert(2): the nodes on the search path are copied, the rest is shared with version 0.)

15 WS 2006-07 Path-copying method
(Figure: version 2, after Insert(4): again only the search path is copied; there is now one root per version.)

16 WS 2006-07 Path-copying method
Restructuring costs: O(log n) per update operation if the tree is kept balanced.
(Figure: the same three versions as before.)
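A sketch of the path-copying insertion just illustrated, for an (unbalanced) binary search tree; names and the example tree are illustrative. Only the nodes on the search path are copied, everything else is shared between versions, and an array of roots indexed by version gives access to every version.

```python
class Node:
    __slots__ = ("key", "left", "right")

    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def insert(root, key):
    """Return the root of a new version; nodes off the search path are shared."""
    if root is None:
        return Node(key)
    if key < root.key:
        return Node(root.key, insert(root.left, key), root.right)   # copy node on path
    if key > root.key:
        return Node(root.key, root.left, insert(root.right, key))   # copy node on path
    return root                      # key already present, reuse the old version

# Array of roots indexed by version stamp.
roots = [Node(5, Node(1, None, Node(3)), Node(7))]   # version 0: keys 1, 3, 5, 7
roots.append(insert(roots[-1], 2))                   # version 1: Insert(2)
roots.append(insert(roots[-1], 4))                   # version 2: Insert(4)
```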

17 WS 2006-07 Node-copying method: DSST
DSST method: extend each node by one time-stamped modification box.
- The original pointer fields hold the values of all versions before time t; the modification box holds the value valid from time t on.
- Modification boxes are initially empty and are filled bottom-up.
(Figure: a node with key k, pointer fields lp and rp, and a modification box "t: rp".)

18 WS 2006-07 DSST method
(Figure: version 0, a search tree containing the keys 1, 3, 5, 7; all modification boxes are empty.)

19 WS 2006-07 DSST method
(Figure: version 1, after Insert(2): the new node is attached by filling an empty modification box "1: lp".)

20 WS 2006-07 DSST method
(Figure: version 2, after Insert(4): a node whose modification box is already full is copied, and the copy is linked into the newest version.)

21 WS 2006-07 DSST method
The amortized costs (time and space) per update operation are O(1).
(Figure: the resulting structure for versions 1 and 2, with modification boxes "1: lp" and "2: rp".)

22 WS 2006-07 Node-copying method - partial persistence
Modification:
- If the modification box is empty, fill it.
- Otherwise, make a copy of the node, using only the latest field values (i.e. the value in the modification box plus the value we want to insert); the copy's modification box stays empty.
- Cascade this change to the node's parent.
- If the node is a root, add the new root to a sorted array of roots.
Access time: O(1) slowdown per node, plus an additive O(log m) cost for finding the correct root.
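The following is a simplified sketch of this node-copying step, not the full DSST construction: each node carries one modification box, and instead of the parent pointers used in the original method, the cascade walks back along an explicitly recorded search path. All names are illustrative.

```python
class PNode:
    """A node with a key, two pointer fields, and a single modification box."""
    __slots__ = ("key", "left", "right", "mod")     # mod = (version, field, value) or None

    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right, self.mod = key, left, right, None

def read(node, field, version):
    """The value of `field` as seen by `version`."""
    if node.mod is not None and node.mod[1] == field and node.mod[0] <= version:
        return node.mod[2]
    return getattr(node, field)

def write(node, field, value, version, path, roots):
    """Set node.field = value in the newest version.
    `path` lists (ancestor, field_in_ancestor) pairs from the root down to `node`,
    so a copied node can be linked back into its parent (cascading)."""
    while True:
        if node.mod is None:
            node.mod = (version, field, value)       # empty box: fill it, done
            return
        # Box full: copy the node with its latest values; the copy's box stays empty.
        fresh = PNode(node.key, read(node, "left", version), read(node, "right", version))
        setattr(fresh, field, value)
        if not path:                                 # the root itself was copied
            roots.append((version, fresh))           # roots: sorted array of (version, root)
            return
        node, field = path.pop()                     # cascade: the parent must point to `fresh`
        value = fresh

# Accessing version v starts from the last root whose version stamp is <= v
# (an O(log m) binary search in `roots`) and then uses read(...) at every node.
```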

23 WS 2006-07 Node-copying method - Example
A partially persistent search tree. Insertions of 5, 3, 13, 15, 1, 9, 7, 11, 10, followed by deletion of item 13.
(Figure: the resulting structure with modification boxes and copied nodes, annotated with version stamps.)

24 WS 2006-07 Node-copying method - partial persistence
The amortized costs (time and space) per modification are O(1).
Proof: using the potential technique.

25 WS 2006-07 Potential technique
- The potential is a function of the entire data structure.
- Definition (potential function): a measure of the data structure whose change after an operation accounts for the time cost of the operation.
- The initial potential is zero, and the potential is non-negative for all versions.
- The amortized cost of an operation is the actual cost plus the change in potential.
- Different potential functions lead to different amortized bounds.
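Written out, the accounting behind this slide (standard amortized analysis, with t_i the actual cost, a_i the amortized cost, and Φ_i the potential after operation i):

```latex
a_i = t_i + \Phi_i - \Phi_{i-1}
\quad\Longrightarrow\quad
\sum_{i=1}^{m} t_i
  = \sum_{i=1}^{m} a_i - (\Phi_m - \Phi_0)
  \le \sum_{i=1}^{m} a_i ,
\qquad\text{since } \Phi_0 = 0 \text{ and } \Phi_m \ge 0 .
```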

26 WS 2006-07 Node-copying method - partial persistence
Definitions:
- Live nodes: the nodes that form the latest version (reachable from the root of the most recent version); all other nodes are dead.
- Full live nodes: live nodes whose modification boxes are full.

27 WS 2006-07 Node-copying method - potential paradigm
- The potential function f(T): the number of full live nodes in T (initially zero).
- The amortized cost of an operation is the actual cost plus the change in potential Δf.
- Each modification involves k node copies, each with O(1) space and time cost, and one change to a modification box with O(1) time cost.
- Change in potential after update operation i: Δf ≤ 1 − k (each copied node was a full live node and becomes dead; at most one modification box of a parent is newly filled).
- Space: O(k + Δf) = O(1), time: O(k + 1 + Δf) = O(1).
Hence, a modification takes O(1) amortized space and O(1) amortized time.

28 WS 2006-07 Application: Planar point location
Suppose that the Euclidean plane is subdivided into polygons by n line segments that intersect only at their endpoints. Given such a polygonal subdivision and an on-line sequence of query points in the plane, the planar point location problem is to determine, for each query point, the polygon containing it.
An algorithm is measured by three parameters:
1) The preprocessing time.
2) The space required for the data structure.
3) The time per query.

29 WS 2006-07 Planar point location -- example

30 WS 2006-07 Planar point location -- example

31 WS 2006-07 Solving planar point location (Cont.)
- Partition the plane into vertical slabs by drawing a vertical line through each endpoint.
- Within each slab the lines are totally ordered.
- Allocate a search tree per slab containing the lines at its leaves; with each line, associate the polygon above it.
- Allocate another search tree on the x-coordinates of the vertical lines.

32 WS 2006-07 Solving planar point location (Cont.)
To answer a query, first find the appropriate slab, then search within the slab to find the polygon.
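A sketch of this two-step query, assuming each slab's segments are kept bottom-to-top together with the polygon above them; all names, and the use of plain sorted lists in place of search trees, are illustrative.

```python
import bisect

def y_on_segment(seg, x):
    """y-coordinate of the (non-vertical) segment seg = ((x1, y1), (x2, y2)) at abscissa x."""
    (x1, y1), (x2, y2) = seg
    return y1 + (y2 - y1) * (x - x1) / (x2 - x1)

def locate(slab_xs, slabs, qx, qy):
    """slab_xs: sorted x-coordinates of the vertical slab boundaries.
    slabs[j]: segments crossing slab j, bottom to top, as (segment, polygon_above) pairs."""
    j = bisect.bisect_right(slab_xs, qx) - 1          # step 1: find the slab, O(log n)
    if j < 0 or j >= len(slabs):
        return None                                   # query point outside the subdivision
    segments = slabs[j]
    lo, hi = 0, len(segments)                         # step 2: search within the slab, O(log n)
    while lo < hi:
        mid = (lo + hi) // 2
        if y_on_segment(segments[mid][0], qx) < qy:
            lo = mid + 1
        else:
            hi = mid
    # segments[lo - 1] is the highest segment strictly below the query point;
    # the answer is the polygon stored with it.
    return segments[lo - 1][1] if lo > 0 else None    # None: below the whole subdivision
```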

33 WS 2006-07 Planar point location -- example

34 WS 2006-07 Planar point location -- analysis
- Query time is O(log n).
- How about the space? Θ(n²) in the worst case, and so could be the preprocessing time.

35 WS 2006-07 Planar point location -- bad example
The total number of lines is O(n), but the number of lines in each slab can also be on the order of n, so the total size of all slab trees can reach Θ(n²).

36 WS 2006-07 Planar point location & persistence
So how do we improve the space bound? Key observation: the lists of lines in adjacent slabs are very similar. Create the search tree for the first slab. Then obtain the next one by deleting the lines that end at the corresponding vertex and adding the lines that start at that vertex. How many insertions/deletions are there altogether? 2n.

37 WS 2006-07 Planar point location & persistence (cont.)
Updates should be persistent since we need all search trees at the end; partial persistence is enough. Well, we already have the path-copying method, so let's use it. What do we get? O(n log n) space and O(n log n) preprocessing time. We can improve the space bound to O(n) by using the DSST method.
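The space bounds on this slide are simply the per-update costs established earlier, summed over the 2n insertions and deletions of the sweep:

```latex
\underbrace{2n}_{\text{updates}} \cdot
\underbrace{O(\log n)}_{\substack{\text{copied nodes per update}\\ \text{(path copying, balanced tree)}}}
 = O(n \log n),
\qquad
2n \cdot
\underbrace{O(1)}_{\substack{\text{amortized space per update}\\ \text{(DSST node copying)}}}
 = O(n).
```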

38 WS 2006-07 Methods for making structures oblivious
Unique representation of the structure:
- Set/size uniqueness: for each set of n keys there is exactly one structure that can store this set.
- The storage is order-unique, i.e. the nodes of the structure are ordered and the keys are stored in ascending order in nodes with ascending numbers.
Randomise the structure:
- Ensure that the probability of a particular structure storing a set M of keys is independent of how M was generated.
Observation: the address assignment of pointers must also be subject to a randomised regime!

39 WS 2006-07 Example of a randomised structure
Z-stratified search tree: on each stratum, randomly choose the distribution of trees from Z.
Insertion? Deletion?

40 WS 2006-07 Uniquely represented structures
(a) Generation history determines the structure.
(b) Set-uniqueness: the set determines the structure.
(Figure: inserting 1, 3, 5, 7 and inserting 5, 1, 3, 7 yield different natural search trees, whereas a set-unique structure is the same for both sequences.)

41 WS 2006-07 Uniquely represented structures
(c) Size-uniqueness: the size determines the structure.
Order-uniqueness: a fixed ordering of the nodes determines where the keys are to be stored.
(Figure: the sets 1, 3, 5, 7 and 2, 4, 5, 8 share a common structure; only the stored keys differ.)

42 WS 2006-07 Set- and order-unique structures
Lower bounds? Assumptions: a dictionary of size n is represented by a graph of n nodes with
- finite (fixed) node degree,
- a fixed order of the nodes,
- the i-th node storing the i-th largest key.
Operations allowed to change the graph: creation/removal of a node, pointer change, exchange of keys.
Theorem: for each set- and order-unique representation of a dictionary with n keys, at least one of the operations access, insertion, or deletion must require time Ω(n^(1/3)).

43 WS 2006-07 Uniquely represented dictionaries
Problem: find set-unique or size-unique representations of the ADT "dictionary".
Known solutions:
(1) Set-unique, order-unique: Aragon/Seidel, FOCS 1989, randomized search trees. Each key s ∈ X is stored as a pair (s, h(s)), where the priority h(s) comes from a universal hash function; update as for priority search trees. Search, insert, and delete can be carried out in O(log n) expected time.
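A sketch of such a randomized search tree (treap): the priority of a key is computed by hashing the key, so the shape of the tree depends only on the stored set, never on the insertion order. SHA-256 here merely stands in for the universal hash function of the slide; all names are illustrative.

```python
import hashlib

def priority(key):
    """Pseudo-random priority h(key), derived deterministically from the key."""
    return int.from_bytes(hashlib.sha256(str(key).encode()).digest()[:8], "big")

class TreapNode:
    def __init__(self, key):
        self.key, self.prio = key, priority(key)
        self.left = self.right = None

def insert(root, key):
    """BST insertion by key, then rotations to restore the (max-)heap order on priorities;
    expected time O(log n)."""
    if root is None:
        return TreapNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
        if root.left.prio > root.prio:               # rotate right
            new_root = root.left
            root.left, new_root.right = new_root.right, root
            return new_root
    elif key > root.key:
        root.right = insert(root.right, key)
        if root.right.prio > root.prio:              # rotate left
            new_root = root.right
            root.right, new_root.left = new_root.left, root
            return new_root
    return root
```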

44 WS 2006-07 The Jelly Fish
(2) Set-unique, order-unique: L. Snyder, 1976. Upper bound: the Jelly Fish supports search, insert, and delete in time O(√n).
(Figure: a body of √n nodes with √n tentacles of length √n each.)

45 WS 2006-07 Lower bound for tree-based structures (set-unique, order-unique)
Lower bound: for "tree-based" structures the following holds: update time · search time = Ω(n).
Consider a tree of height h with L leaves and n nodes: n ≤ h·L + 1, hence L ≥ (n − 1)/h.
Delete x₁ and insert x_{n+1} > x_n: at least L − 1 keys must have moved from leaves to internal nodes, so this update requires time Ω(L) = Ω((n − 1)/h), while a worst-case search must descend to depth h. The product is therefore Ω(n).

46 WS 2006-07 Cons-structures
(3) Sundar/Tarjan, STOC 1990. Upper bound via (nearly) full binary search trees.
The only operation allowed for updates is Cons(x, L, R), which builds a new tree from a root x and two subtrees L and R.
Search time O(log n); insertion and deletion are possible in time O(√n).

47 WS 2006-07 Jump-lists (half-dynamic)
2-level jump-list of size n, with level-1 pointers of span i:
- Search: O(i) = O(√n) time.
- Insertion, deletion: O(√n) time.
(Figure: a 2-level jump-list over the keys 2, 3, 5, 7, 8, 10, 11, 12, 14, 17, 19, with level-1 pointers at positions 0, i, 2i, ..., ⌊(n−1)/i⌋·i, followed by a tail.)

48 WS 2006-07 Jump-lists: dynamization
2-level jump-list of size n:
- Search: O(i) = O(√n) time.
- Insert, delete: O(√n) time.
Can be made fully dynamic.
(Figure: positions (i−1)², i², ..., n, (i+1)², (i+2)².)
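A sketch of the two-level search with jump span i ≈ √n, written over a plain sorted list for brevity; a real jump-list links the nodes explicitly with level-1 and level-0 pointers to obtain the unique representation. Names are illustrative.

```python
import math

def jump_list_search(keys, x):
    """Search a sorted list with jump span i ~ sqrt(n): O(n/i + i) = O(sqrt(n)) comparisons."""
    n = len(keys)
    if n == 0:
        return False
    i = math.isqrt(n - 1) + 1                  # jump span, roughly sqrt(n)
    pos = 0
    while pos + i < n and keys[pos + i] <= x:  # level-1 "pointers": jump i positions at a time
        pos += i
    while pos < n and keys[pos] < x:           # level-0 "pointers": step one position at a time
        pos += 1
    return pos < n and keys[pos] == x

# Example: jump_list_search([2, 3, 5, 7, 8, 10, 11, 12, 14, 17, 19], 11) -> True
```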

49 WS 2006-07 3-level jump-lists
Search(x): locate x by following
- level-2 pointers, identifying i² keys among which x may occur,
- level-1 pointers, identifying i keys among which x may occur,
- level-0 pointers, identifying x.
Time: O(i) = O(n^(1/3)).
(Figure: positions 0, i, 2i, i², i²+i, 2·i².)

50 WS 2006-07 3-level jump-lists
An update requires
- changing 2 pointers on level 0,
- changing i pointers on level 1,
- changing all i pointers on level 2.
Update time: O(i) = O(n^(1/3)).

51 WS 2006-07 c-level jump-lists
Let i ≈ n^(1/c).
Lower levels (levels 0 to c/2):
- level 0: all pointers of length 1,
- level j: all pointers of length i^(j−1).
Upper levels (levels c/2 to c):
- level j: connect in a list all nodes 1, 1·i^(j−1)+1, 2·i^(j−1)+1, 3·i^(j−1)+1, ...

52 WS 2006-07 c-level jump-lists
Theorem: for each c ≥ 3, the c-level jump-list is a size- and order-unique representation of dictionaries with the following characteristics:
- Space requirement O(c·n)
- Access time O(c·n^(1/c))
- Update time (with a case distinction for n even and n odd).

