
1 History Independent Data-Structures

2 What is a History Independent Data Structure? Sometimes data structures keep unnecessary information – information that is not accessible via the legitimate interface of the data structure, but that can be restored from the data-structure layout. The core problem: the history of the operations applied to the data structure may be revealed.

3 History Independence – Motivation. A privacy issue arises if an adversary gains control of the data-structure layout – say, a laptop was stolen. Sometimes you simply send the data structure over the web: Word documents, search indexes inside a data structure, a list of students and grades, etc.

4 Example. A data structure with three operations: Insert(D, x), Remove(D, x), and Print(D), used for a wedding invitee list. Naive implementation – an array. Insert adds a new last entry; Remove entry i moves entries i+1 to n backwards (a wiser implementation – a linked list on an array). The layout implies the order of insertion – for example, who was invited last! (See the sketch below.)
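To make the example concrete, here is a minimal Python sketch of the naive array implementation (the class and method names are illustrative, not from the slides). The layout directly exposes insertion order: the last element of the array is always the most recent invitee.

```python
class NaiveInviteeList:
    """Naive array implementation: the layout reveals insertion order."""

    def __init__(self):
        self.a = []

    def insert(self, x):
        self.a.append(x)              # new entry always goes last

    def remove(self, x):
        i = self.a.index(x)
        self.a[i:] = self.a[i + 1:]   # shift entries i+1..n backwards

    def print_all(self):
        print(self.a)
```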

5 Weak History Independence. [Naor, Teague]: A data-structure implementation is (weakly) history independent if any two sequences of operations S1 and S2 that yield the same content induce the same distribution on the memory layout. Security: nothing is gained from the layout beyond the content.

6 Example – cont. Making the previous data structure weakly history independent. Insert(x) (say, n elements in the data structure): choose uniformly at random r ∈ {1, 2, …, n+1}; set A[n+1] ← A[r]; A[r] ← x. Remove entry i: A[i] ← A[n]. The array is a uniformly chosen permutation of the elements. (A sketch follows below.)
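A minimal sketch of this randomized scheme (0-based indices; names are illustrative). Each insert places the new element at a uniformly random position and moves the displaced element to the end, so the array is always a uniformly random permutation of its content.

```python
import random

class WeakHIList:
    def __init__(self):
        self.a = []

    def insert(self, x):
        n = len(self.a)
        r = random.randrange(n + 1)       # r uniform in {0, ..., n}
        self.a.append(x)                  # A[n+1] <- A[r]; A[r] <- x
        self.a[r], self.a[n] = self.a[n], self.a[r]

    def remove(self, i):
        self.a[i] = self.a[-1]            # A[i] <- A[n]
        self.a.pop()
```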

7 Weak History Independence – Problems. No information leaks if the adversary gets the layout once (e.g., the laptop was stolen). But what if the adversary may get the layout several times? Information about content modifications leaks. We want: no more information leakage.

8 Strong History Independence. [Naor-Teague]: A data-structure implementation is (strongly) history independent if, for every pair of sequences S1, S2 and every two lists of stop points in S1, S2: if the content is the same at each pair of corresponding stop points, then the joint distribution of the memory layouts at the stop points is identical for the two sequences. Security: we cannot distinguish between any two such sequences.

9 Strong History Independence. S1 = ins(1), ins(2) [first stop], ins(3), ins(4) [second stop]. S2 = ins(2), ins(1) [first stop], ins(5), ins(4), ins(3), del(5) [second stop]. The contents agree at the corresponding stops ({1,2}, then {1,2,3,4}), so we should not be able to tell from the layouts which of the two sequences happened.

10 Example – cont. Recall the example. Insert(x) (say, n elements in the data structure): choose uniformly at random r ∈ {1, 2, …, n+1}; set A[n+1] ← A[r]; A[r] ← x. Remove entry i: A[i] ← A[n]. Is this implementation strongly history independent? No!

11 Example – cont. Assume you get the layout of the array twice. The first time you see: 1 2 3 4 5. The second time you see: 5 2 3 4 1. What could not have happened in between: the empty sequence; Remove(4), Insert(4); and lots of other constraints…

12 Example – last. Making the data structure strongly history independent: we can keep the array left-aligned and sorted, so each content has only one possible layout. Problem: the time complexity of Insert and Remove is Ω(n) – we "usually" shift Ω(n) elements during an insert or delete. (See the sketch below.)
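A sketch of the canonical variant, assuming comparable keys (bisect keeps the array sorted and left-aligned, so every content has exactly one layout, at an Ω(n) shifting cost per update):

```python
import bisect

class CanonicalList:
    def __init__(self):
        self.a = []

    def insert(self, x):
        bisect.insort(self.a, x)          # shifts up to n elements

    def remove(self, x):
        i = bisect.bisect_left(self.a, x)
        if i < len(self.a) and self.a[i] == x:
            del self.a[i]                 # shifts up to n elements
```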

13 History of History Independence. [Micciancio97] Weakly history independent 2-3 trees (motivated by the problem of private incremental cryptography [BGG95]). [Naor-Teague01] History-independent hash tables and union-find; weakly history-independent memory allocation; history-independent dynamic perfect hashing.

14 History of History Independence. [Hartline et al. 02] Strong history independence means a canonical layout; a relaxation of strong history independence; history-independent memory resizing. [Buchbinder, Petrank 03] Lower bounds on strongly history independent data structures; history-independent heaps.

15 What's Next
1. 2-3 trees
2. Hash tables
3. Strong history independence means canonical representation
4. Lower bounds on strong history independence
5. Lower bounds on relaxed strong history independence
6. Obtaining a weakly history independent heap

16 2-3 Trees and Cryptography. You sign a Word document. Now you make a small change – do you need to re-sign the whole document? Maybe you want to deliver the document to a few people with only small differences between the copies – do you need to compute several signatures? You don't want to work hard!

17 2-3 Trees and Cryptography. Solution: partition the document into small blocks and sign each block. Construct a 2-3 tree over the blocks, where each internal node is a signature of its children together with the size of its subtree. ⇒ When a block is deleted/added/edited, only O(log n) small signatures must be recomputed. Problem: the structure of the 2-3 tree reveals some edit information! (A sketch of the tree idea follows below.)
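A toy sketch of the idea, with hashes standing in for the signatures and a plain binary tree standing in for the 2-3 tree of the slides (a real scheme signs each node; all names here are illustrative):

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def tree_digest(blocks):
    """Authenticate a list of document blocks with a tree of hashes."""
    level = [h(b) for b in blocks]        # "sign" each block
    while len(level) > 1:
        level = [h(*level[i:i + 2])       # internal node covers its children
                 for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block 1", b"block 2", b"block 3", b"block 4"]
print(tree_digest(blocks).hex())          # editing one block changes O(log n) nodes
```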

18 2-3 Trees. Why is the standard implementation of 2-3 trees not history independent? Insert 1, 2, 3, 4, 5. (Figure: the intermediate 2-3 trees after each insertion – the final shape depends on the insertion order.)

19 2-3 Trees. S1 = Insert 1, 2, 3, 4, 5. S2 = Insert 1, 2, 3, 4, 5; Remove 1; Insert 1. (Figure: the two sequences yield the same content but differently shaped trees.) We may distinguish between the two sequences.

20 2-3 Trees – Solution. Let the number of children of each internal node be 2 or 3 with equal probability (except possibly the last node in a level). CreateTree – O(n). Find – O(log n). Insert/Remove – O(log n) expected time. (A sketch of CreateTree follows below.)
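A sketch of one plausible reading of CreateTree: group the items of each level into parent nodes of 2 or 3 children by fair coin tosses, forcing the last groups where necessary so no node is left with a single child (all names are illustrative):

```python
import random

def create_tree(level):
    while len(level) > 1:
        parents, i = [], 0
        while i < len(level):
            remaining = len(level) - i
            if remaining in (2, 3):
                k = remaining                 # forced last node
            elif remaining == 4:
                k = 2                         # 4 must split as 2 + 2
            else:
                k = random.choice((2, 3))     # fair coin: 2 or 3 children
            parents.append(tuple(level[i:i + k]))
            i += k
        level = parents
    return level[0]

print(create_tree([1, 2, 3, 4, 5]))           # e.g. ((1, 2), (3, 4, 5))
```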

21 2-3 Trees – CreateTree. (Figure: two possible first-level groupings of the leaves 1-5, one occurring with probability 1/2 and one with probability 1/4; with 3 nodes in level 2, the process continues on the next level.)

22 2-3 Trees – CreateTree cont. (Figure: further possible groupings of the leaves 1-5, occurring with probabilities 1/4 and 1/8.)

23 2-3 Trees – Solution. We want Insert/Remove to generate the same distribution as CreateTree ⇒ history independence. Idea: when inserting or removing a leaf, the preceding leaves/nodes are OK; fix the grouping of the following leaves by new coin tosses.

24 2-3 Trees – Inserting a New Node. (Figure: inserting leaf 1 before leaves 2-5; with probability 1/2 the first grouping is kept, and the regrouping continues to the right…)

25 2-3 Trees – Insert/Remove Complexity. Proof ideas: in every two successive iterations we re-synchronize with the previous grouping with constant probability, so the number of nodes "touched" in each level is expected O(1), and the total number of nodes "touched" over all levels is expected O(log n).

26 What's Next
1. 2-3 trees
2. Hash tables
3. Strong history independence means canonical representation
4. Lower bounds on strong history independence
5. Lower bounds on relaxed strong history independence
6. Obtaining a weakly history independent heap

27 Hash Tables. Standard implementation (open addressing): choose hash functions h1, h2, h3, … Insert(x): if h_i(x) is occupied, try h_{i+1}(x), and so on. Delete(x): mark the cell as deleted – no actual delete. (A sketch follows below.)
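A sketch of the standard open-addressing table, with linear probing standing in for the sequence h1, h2, h3, … (an assumption made for illustration). Note how deletions leave visible tombstones:

```python
DELETED = object()                            # tombstone marker

class OpenAddressTable:
    def __init__(self, m=16):
        self.m = m
        self.cells = [None] * m

    def _h(self, x, i):
        return (hash(x) + i) % self.m         # h_i(x): linear probing

    def insert(self, x):
        for i in range(self.m):
            j = self._h(x, i)
            if self.cells[j] is None or self.cells[j] is DELETED:
                self.cells[j] = x
                return
        raise RuntimeError("table full")

    def delete(self, x):
        for i in range(self.m):
            j = self._h(x, i)
            if self.cells[j] is None:
                return                        # not present
            if self.cells[j] == x:
                self.cells[j] = DELETED       # mark only: history leaks!
                return
```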

28 Hash Tables – Problems. The deleted items still appear! And if h1(x) = h1(y), we can tell whether x or y was inserted first – the one hashed by h2 came second. Solution: no deletions; and when h_i(x) = h_i(y), decide which of x or y to rehash by some predetermined order between them. ⇒ The hash table has a canonical form ⇒ strongly history independent. (A sketch follows below.)
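A sketch of the canonical rule in the spirit of ordered hashing: on a collision, the key that wins under a predetermined total order (here simply the smaller key, an illustrative choice) keeps the cell, and the displaced key resumes its own probe sequence. The final layout then depends only on the set of keys, not on their insertion order:

```python
class CanonicalHashTable:
    def __init__(self, m=16):
        self.m = m
        self.n = 0
        self.cells = [None] * m

    def insert(self, x):
        if self.n >= self.m:
            raise RuntimeError("table full")
        step = 0
        while True:
            j = (hash(x) + step) % self.m     # h_step(x): linear probing
            y = self.cells[j]
            if y is None:
                self.cells[j] = x
                self.n += 1
                return
            if y == x:
                return                        # already present
            if x < y:
                # x outranks y: x takes the cell, y resumes probing
                self.cells[j] = x
                x = y
                step = (j - hash(x)) % self.m # y's probe step at cell j
            step += 1
```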

29 What's Next
1. 2-3 trees
2. Hash tables
3. Strong history independence means canonical representation
4. Lower bounds on strong history independence
5. Lower bounds on relaxed strong history independence
6. Obtaining a weakly history independent heap

30 Strong History Independence = Canonical Representation. Definition [content graph]: the content graph of a data structure has as vertices the possible contents, with an edge C1 → C2 if there exist an operation Op and parameters σ such that Op(C1, σ) = C2. Definition [well behaved]: an abstract data structure is well behaved if its content graph is strongly connected.

31 Strong History Independence = Canonical Representation. Lemma: for any strongly history independent implementation of a well-behaved data structure, for every layout L and every operation Op, Op(L) yields only one possible layout. Corollary: any strongly history independent implementation of a well-behaved data structure is canonical.

32 Canonical Representation: Proof cont. Corollary: any strongly history independent implementation of a well-behaved data structure is canonical. Proof sketch (assuming the lemma): let S be a sequence of operations yielding content C. Each operation in S generates one layout ⇒ by induction, S yields one possible layout. By strong history independence, any other sequence yielding C creates the same layout.

33 Canonical Representation – Proof of Lemma. Lemma: for any strongly history independent implementation of a well-behaved data structure, for every layout L and every operation Op, Op(L) yields only one possible layout. Since the data structure is well behaved, any operation Op has a sequence Op⁻¹ that "reverses" it. By strong history independence we may compare any two sequences with stop points.

34 Canonical Representation – Proof of Lemma. Proof sketch: fix any layout L and any operation Op; we need to show that Op(L) yields a single specific layout L'. Let S be any sequence of operations yielding L with probability > 0. Consider the following sequences with stop points: S1 = S, with both stop points at its end; S2 = S ◦ Op ◦ Op⁻¹, with the first stop point after S and the second at the end. The two stop points trivially show the same layout in S1 ⇒ the same layout must appear at both stop points of S2.

35 Canonical Representation – Proof of Lemma. S1 = S; S2 = S ◦ Op ◦ Op⁻¹, with stop points as before. Suppose layout L appears after S. Then L must appear again at the end of S2 – otherwise we could distinguish between the two sequences. Hence for every L_i = Op(L), Op⁻¹ must transform L_i back to L with probability 1. (Figure: Op maps L to L1, …, Lk, and Op⁻¹ maps each L_i back to L.)

36 Canonical Representation: Proof. Now extend the sequence and move the stop points: S3 = S ◦ Op, with both stop points at its end; S4 = S ◦ Op ◦ Op⁻¹ ◦ Op, with the first stop point after S ◦ Op and the second at the end. Suppose some L_i = Op(L) appears after S ◦ Op ⇒ L_i must also appear at the end of S4 – otherwise we could distinguish between the two sequences. But after Op⁻¹ the layout is again L, and the behavior of Op depends only on L, so Op cannot "know" which L_i to create ⇒ there is only one L_i = Op(L).

37 What's Next
1. 2-3 trees
2. Hash tables
3. Strong history independence means canonical representation
4. Lower bounds on strong history independence
5. Lower bounds on relaxed strong history independence
6. Obtaining a weakly history independent heap

38 Lower Bounds: an Example. Lemma: let D be a data structure whose content is the set of keys stored inside it, and let I be an implementation of D that is comparison-based and canonical. Then the operation Insert(D, x) requires time Ω(n). This lemma applies, for example, to heaps, dictionaries, and search trees.

39 Why is a Comparison-Based Implementation Important? It is "natural": standard implementations of most data-structure operations are comparison-based – so we should know not to design this way when seeking strong history independence. And library functions are easy to use: one only implements the comparison operation on the data-structure elements.

40 Lower Bounds – cont. Proof sketch: comparison-based means the keys are treated as "black boxes", accessed only through the comparison order ⇒ the algorithm treats any n keys only according to their total order ⇒ the canonical layout of any n different keys is the same no matter what their real values are. Let d_1, d_2, …, d_n be the memory addresses of n keys in the layout, listed according to their total order, and d'_1, d'_2, …, d'_{n+1} the corresponding addresses for n+1 keys.

41 Lower Bounds – cont. Let Δ be the number of indices for which d_i ≠ d'_i, and consider the content C = {k_2, k_3, …, k_{n+1}} with k_2 < k_3 < … < k_{n+1}. Case 1, Δ > n/2 – consider Insert(C, k_{n+2}): it puts k_{n+2} at address d'_{n+1} and moves each k_i (2 ≤ i ≤ n+1) from d_{i-1} to d'_{i-1}, so at least Δ > n/2 keys move. Case 2, Δ ≤ n/2 – consider Insert(C, k_1): it puts k_1 at d'_1 and moves each k_i (2 ≤ i ≤ n+1) from d_{i-1} to d'_i; whenever d'_i = d_i the key must move (the addresses d_{i-1} and d_i are distinct), and there are at least n − Δ ≥ n/2 such indices. Either way the operation moves at least n/2 keys.

42 More Lower Bounds. By similar methods we can show: Remove-key requires time Ω(n); for a heap, Increase-key requires time Ω(n) and the Build-Heap operation requires time Ω(n log n); for a queue, either Insert-first or Remove-last requires time Ω(n).

43 What's Next
1. 2-3 trees
2. Hash tables
3. Strong history independence means canonical representation
4. Lower bounds on strong history independence
5. Lower bounds on relaxed strong history independence
6. Obtaining a weakly history independent heap

44 Relaxed Strong History Independence. Strong history independence implies very strong lower bounds. How can we relax the definition to allow more efficient data structures? One possible way [HHMPR02]: allow the adversary to distinguish between the empty sequence and other sequences. Does this definition imply a canonical memory layout?

45 Relaxed Strong History Independence (cont.). The relaxed definition does not imply a canonical memory layout. A possible implementation of the previous data structure: in each operation, choose a fresh, independent, uniformly random permutation of the elements. 1. Not canonical… 2. Relaxed strongly history independent. 3. Each operation costs O(n).

46 Relaxed Strong History Independence. Is this relaxation enough to allow efficient implementations? No. We can prove almost the same lower bounds using a different property of these data structures.

47 What's Next
1. 2-3 trees
2. Hash tables
3. Strong history independence means canonical representation
4. Lower bounds on strong history independence
5. Lower bounds on relaxed strong history independence
6. Obtaining a weakly history independent heap

48 The Binary Heap. A binary heap is a simple implementation of a priority queue: the keys are stored in an almost-full binary tree, and the heap property requires V(parent(i)) ≥ V(i) for each node i. Assume all values in the heap are unique. (Figure: an example heap with root 10.)

49 The Binary Heap: Heapify. Heapify is used to restore the heap property. Input: a root and two proper sub-heaps of height ≥ h−1. Output: a proper heap of height h. The root value always sifts down toward the child with the larger value. (Figure: an example with root 2; a sketch follows below.)
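A sketch of heapify (sift-down) on the usual array representation, where the children of node i sit at positions 2i+1 and 2i+2:

```python
def heapify(a, i, n=None):
    """Sift the value at position i down toward the larger child."""
    n = len(a) if n is None else n
    while True:
        l, r, largest = 2 * i + 1, 2 * i + 2, i
        if l < n and a[l] > a[largest]:
            largest = l
        if r < n and a[r] > a[largest]:
            largest = r
        if largest == i:
            return a
        a[i], a[largest] = a[largest], a[i]   # sift down one level
        i = largest
```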

50 Heapify Operation. (Figure: heapify sifts the root value 2 down into place – the heap before and after.)

51 Reversing Heapify. Heapify⁻¹ "reverses" heapify. Heapify⁻¹(H: heap, i: position): Root ← v_i, and all the values on the path from the root to node i are shifted down. The parameter i is a position in the heap H.

52 Heapify⁻¹ Operation. (Figure: heapify⁻¹ pulling the value at position i up to the root.) Property: if all the keys in the heap are unique, then for any i: Heapify(Heapify⁻¹(H, i)) = H. (A sketch follows below.)
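A sketch of Heapify⁻¹ on the same array representation (it pairs with the heapify sketch above): the value at position i becomes the root, and every value on the root-to-i path shifts one step down along the path. With unique keys, heapify(heapify_inv(a, i), 0) restores the original array:

```python
def heapify_inv(a, i):
    """Move a[i] to the root, shifting the root-to-i path down one step."""
    path, j = [], i
    while j > 0:                      # collect the path via parent pointers
        path.append(j)
        j = (j - 1) // 2
    path.append(0)
    path.reverse()                    # path[0] == 0 (root), path[-1] == i
    v = a[i]
    for k in range(len(path) - 1, 0, -1):
        a[path[k]] = a[path[k - 1]]   # each path node takes its parent's value
    a[0] = v
    return a
```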

53 The Binary Heap: Build-Heap in O(n). Building a heap – apply heapify to every subtree of the heap, in a bottom-up manner. Time complexity: O(n). (Figure: the example heap again.)

54 Reversing Build-Heap
Build-Heap⁻¹(H: heap): Tree
  If size(H) = 1 then return H;
  Choose a node i uniformly at random among the nodes of the heap H;
  H ← Heapify⁻¹(H, i);
  Return TREE(root(H), Build-Heap⁻¹(H_L), Build-Heap⁻¹(H_R));
For any random choice: Build-Heap(Build-Heap⁻¹(H)) = H. The procedure works top-down. (A runnable sketch follows below.)
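A runnable sketch of the recursion on the array representation (subtree roots replace the H_L/H_R notation; as before, the helper names are illustrative):

```python
import random

def subtree_nodes(n, r):
    """Indices of the subtree rooted at r in a heap of size n."""
    out, stack = [], [r]
    while stack:
        j = stack.pop()
        if j < n:
            out.append(j)
            stack.extend((2 * j + 1, 2 * j + 2))
    return out

def heapify_inv_at(a, r, i):
    """heapify_inv restricted to the subtree rooted at r: a[i] moves to r."""
    path, j = [], i
    while j != r:
        path.append(j)
        j = (j - 1) // 2
    path.append(r)
    path.reverse()
    v = a[i]
    for k in range(len(path) - 1, 0, -1):
        a[path[k]] = a[path[k - 1]]
    a[r] = v

def build_heap_inv(a, r=0):
    """Undo build-heap: return a random preimage permutation of heap a."""
    nodes = subtree_nodes(len(a), r)
    if len(nodes) > 1:
        heapify_inv_at(a, r, random.choice(nodes))
        build_heap_inv(a, 2 * r + 1)
        build_heap_inv(a, 2 * r + 2)
    return a
```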

55 Uniformly Chosen Heaps. Build-Heap is a many-to-one procedure; Build-Heap⁻¹ is a one-to-many procedure, depending on the random choices. Support(H): the set of permutations (trees) T such that Build-Heap(T) = H. Facts (without proof): 1. The size of Support(H) is the same for every heap H. 2. Build-Heap⁻¹ returns one of these permutations uniformly.

56 How to Obtain a Weakly History Independent Heap. Main idea: keep a uniformly random heap at all times. We want: 1. Build-Heap returns one of the possible heaps uniformly. 2. The other operations preserve this property.

57 An Easy Implementation: Build-Heap. Apply a random permutation to the input elements and then use the standard Build-Heap. Analysis: every heap has a Support set of the same size ⇒ every heap has the same probability. More intuition: applying a random permutation to the elements erases all information about their order ⇒ there is no information about the history. (A sketch follows below.)
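A sketch of this build-heap (reusing the heapify sketch above); the shuffle erases the input order before the standard bottom-up construction:

```python
import random

def build_heap(items):
    a = list(items)
    random.shuffle(a)                         # erase the input order
    for i in range(len(a) // 2 - 1, -1, -1):
        heapify(a, i)                         # standard O(n) bottom-up pass
    return a
```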

58 Another Easy Implementation: Increase-Key. The standard Increase-key changes the value of an element and sifts it up until it reaches the correct place. (Figure: a key increased and sifted up; a sketch follows below.)
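A sketch of the standard increase-key on the array representation (the parent of node i sits at (i−1)//2):

```python
def increase_key(a, i, new_value):
    """Overwrite a[i] with a larger value and sift it up into place."""
    assert new_value >= a[i]
    a[i] = new_value
    while i > 0 and a[(i - 1) // 2] < a[i]:
        a[i], a[(i - 1) // 2] = a[(i - 1) // 2], a[i]
        i = (i - 1) // 2
    return a
```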

59 Increase-Key – cont. The standard Increase-key is good for us: 1. The operation is reversible – decreasing the value of the key back returns the key to its previous location. 2. The number of heaps on n different keys is the same regardless of the actual key values. ⇒ The Increase-key function is 1-1 ⇒ if we had a uniformly chosen heap, then after Increase-key it remains a uniformly chosen heap.

60 Not So Easy: Extract-Max and Insert. The standard operation of Extract-max(H): replace the value at the root with the value of the last leaf, and let that value sift down to the right position. Is this good for us? No!

61 Standard Extract-Max is Not Good. Three possible heaps with 4 elements, each with probability 1/3 (in array form): [4, 3, 2, 1], [4, 2, 3, 1], [4, 3, 1, 2]. The standard Extract-max maps them to [3, 1, 2], [3, 2, 1], [3, 2, 1] respectively – one heap has probability 1/3 while the other has probability 2/3! (A quick check follows below.)
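The skew is easy to verify with a few lines of Python (reusing the heapify sketch above):

```python
def extract_max_standard(a):
    a = list(a)
    a[0] = a.pop()            # move the last leaf to the root
    heapify(a, 0)             # sift it down
    return a

for h4 in ([4, 3, 2, 1], [4, 2, 3, 1], [4, 3, 1, 2]):
    print(h4, "->", extract_max_standard(h4))
# [3, 1, 2] appears once, [3, 2, 1] twice: probabilities 1/3 vs 2/3.
```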

62 Naive Implementation: Extract-Max. Extract-max(H): 1. T = Build-Heap⁻¹(H). 2. Remove the last node v of the tree, obtaining T'. 3. H' = Build-Heap(T'). 4. If we already removed the maximal value, return H'. Otherwise: 5. Replace the root with v and let v sift down to its correct position. Build-Heap⁻¹ and Build-Heap work in O(n)… but this implementation is history independent.

63 Analysis: Extract-Max. Steps 1-3: T = Build-Heap⁻¹(H); remove the last node v, obtaining T'; H' = Build-Heap(T'). T is a uniformly random permutation of the n+1 keys of the heap; T' is a uniformly random permutation of n keys of the heap, excluding the random key v; hence H' is a uniformly random heap on the original keys of the heap, excluding a random key v.

64 Analysis: Extract-Max. Steps 4-5: if we already removed the maximal value, we are done. Otherwise, replacing the root with v and letting it sift down is just applying increase/decrease-key to the value at the root – and this is a 1-1 function…

65 Improving Complexity: Extract-Max. The main problem is steps 1-3 (T = Build-Heap⁻¹(H); remove the last node v; H' = Build-Heap(T')), which take O(n). A simple observation reduces the complexity of these steps to O(log²n) instead of O(n).

66 Reducing the Complexity to O(log²n). Observation: most of the operations of Build-Heap⁻¹ are redundant – they are always canceled by the operations of Build-Heap. Only the operations applied to nodes lying on the path from the root to the last leaf are really needed. (Figure: the root-to-last-leaf path in the example heap.)

67 Reducing the Complexity to O(log²n). Complexity analysis: each Heapify⁻¹ and Heapify operation takes at most O(log n), and there are O(log n) such operations. (Figure: the same path, one operation per level.)

68 Reducing the Complexity: O(log n) Expected Time. Recall Extract-max(H): 1. T = Build-Heap⁻¹(H). 2. Remove the last node v of the tree, obtaining T'. 3. H' = Build-Heap(T'). 4. If we already removed the maximal value, return H'. Otherwise: 5. Replace the root with v and let v sift down to its correct position. In effect, we remove the last value of a uniformly chosen permutation and build the heap back.

69 Reducing the Complexity: O(log n) Expected Time. This is the most complex part. Main ideas: one can show that only O(1) of the Heapify⁻¹ and Heapify operations actually make a difference (on average over the random choices made by the algorithm in each step), and we can detect these operations and apply only them.

70 Reducing the Complexity: O(log n) Expected Time. Main lemma: when applying Build-Heap to a uniformly chosen permutation, the expected height of the last value of the permutation is O(1). Proof idea: backward analysis of Build-Heap⁻¹ instead of Build-Heap.

71 The Insert Operation. The standard implementation of Insert is not good for us, and a good implementation must use randomization in order to be efficient (otherwise it would have to be canonical…). Making Insert history independent is also not easy; the general method is similar to Extract-max.

72 Naive Implementation: Insert. Insert(H, v): 1. Choose uniformly a random number 1 ≤ i ≤ n+1. 2. Let v_i be the value at place i in the heap. 3. If i = n+1, skip to step 5. 4. H' ← Increase-key(H, i, v). Now H' is a uniformly chosen heap without the value v_i, and the value v_i left out of the heap is a randomly chosen value.

73 Naive Implementation: Insert. 5. T = Build-Heap⁻¹(H'). 6. T' ← T "+" the value v_i added at position n+1. 7. H = Build-Heap(T'). 8. Return H. T is a uniformly chosen permutation without the value v_i; T' is a uniformly chosen permutation with the value v_i; H is a uniformly chosen heap with the value v.

74 Insert Operation – Reducing the Complexity. The general ideas are similar to Extract-max: reduce the complexity to O(log²n) by running Heapify⁻¹ and Heapify only on the path to the newly added node. The most difficult part is again reducing the complexity from O(log²n) to O(log n) expected time. (Note that we insert a random key into a random heap.)

75 Conclusions. 1. Demanding strong history independence usually exacts a high efficiency penalty in the comparison-based model. 2. We obtain a weakly history independent heap in the comparison-based model without penalty. Complexity: Build-Heap – O(n) worst case; Increase-key – O(log n) worst case; Extract-max, Insert – O(log n) expected time, O(log²n) worst case.

76 Bounds Summary

Operation                              | Weak History Independence | Strong History Independence
heap: insert                           | O(log n)                  | Ω(n)
heap: increase-key                     | O(log n)                  | Ω(n)
heap: extract-max                      | O(log n)                  | no lower bound
heap: build-heap                       | O(n)                      | Ω(n log n)
queue: max{insert-first, remove-last}  | O(1)                      | Ω(n)

77 Memory Allocation. Assume we allocate fixed-size records. We would like: after an arbitrary number of allocate/free operations, the memory "dump" does not reveal information about the history of allocations. Main idea: when allocating the k-th cell, choose a random number 1 ≤ i ≤ k, put the new cell in the i-th place, and copy the i-th cell to the k-th position. This requires updating all incoming pointers into the two cells. (A sketch follows below.)
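A toy sketch of the fixed-size scheme (indices stand in for cells; the pointer fix-up is omitted here, as the next slide discusses it). The occupied prefix of the arena stays a uniformly random permutation of the live records:

```python
import random

class Arena:
    def __init__(self):
        self.cells = []

    def allocate(self, record):
        k = len(self.cells)
        i = random.randrange(k + 1)           # i uniform in {0, ..., k}
        self.cells.append(record)
        # new record goes to slot i; the old occupant of i moves to slot k
        self.cells[i], self.cells[k] = self.cells[k], self.cells[i]
        return i

    def free(self, i):
        self.cells[i] = self.cells[-1]        # last record fills the hole
        self.cells.pop()
```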

78 Memory Allocation. Updating all incoming pointers into the two cells can be done using doubly linked pointers. In this way we can make any pointer-based data structure with bounded in-degree and fixed-size records history independent – provided its "shape" is history independent. Example: the 2-3 trees we saw.

79 Memory Allocation: Non-Fixed Size. Main idea: partition the allocations into size classes [2^i, 2^(i+1)). The larger allocations sit further to the left, ordered by size class, and within each class the records are uniformly ordered, as in the fixed-size case. When we allocate: round the size up, and make room by moving records of "smaller" classes. An allocation of size s takes time O(s log s).

80 Open Questions. 1. Can we show a separation between weak and strong history independence in the non-comparison model? 2. History independent implementations of other, more complex, data structures. 3. Strongly history independent implementations that do not require a canonical representation – e.g., union-find. Thank you!

