Lower and Upper Bounds on Obtaining History Independence

1 Lower and Upper Bounds on Obtaining History Independence
Niv Buchbinder and Erez Petrank Technion, Israel

2 What is a History Independent Data Structure?
Data structures often keep unnecessary information: it is not accessible via the legitimate interface of the data structure, yet it can be restored from the data-structure layout. This is a privacy issue if an adversary gains control over the data-structure layout. The core problem: the history of the operations applied to the data structure may be revealed.

3 Example
Data structure with three operations: Insert(D, x), Remove(D, x), Print(D). Used for a wedding invitee list. Naive implementation: an array. Insert adds a new last entry. Remove of entry i moves entries i+1 through n one position backwards (a wiser implementation: a linked list on an array). The layout implies the insertion order and reveals, for example, who was invited last! A minimal sketch appears below.
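
A sketch of this naive implementation (illustrative Python; the slides give no concrete code, so the names are ours):

    class NaiveList:
        """Naive invitee list: an array that silently records insertion order."""
        def __init__(self):
            self.a = []

        def insert(self, x):
            self.a.append(x)              # the new element is always the last entry

        def remove(self, x):
            i = self.a.index(x)
            self.a[i:] = self.a[i + 1:]   # shift entries i+1..n one slot backwards

        def print_all(self):
            print(self.a)

    # The layout leaks history: after insert("Alice"); insert("Bob"),
    # the array order shows that Bob was invited last.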

4 Weak History Independence
[Naor, Teague]: A data structure implementation is (weakly) history independent if any two sequences of operations S1 and S2 that yield the same content induce the same distribution on the memory layout. Security: nothing is gained from the layout beyond the content.

5 Example – cont.
Making the previous data structure weakly history independent: Insert(x) (say, n elements in the data structure): choose r uniformly at random from {1, 2, …, n+1}; set A[n+1] ← A[r]; A[r] ← x. Remove of entry i: A[i] ← A[n]. The array is then a uniformly chosen permutation of the elements. A sketch follows.
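
A sketch of this randomized version (our Python, following the slide's pseudocode; indices are 0-based):

    import random

    class WeakHIList:
        """Array maintained as a uniformly random permutation of its content."""
        def __init__(self):
            self.a = []

        def insert(self, x):
            n = len(self.a)
            r = random.randrange(n + 1)              # uniform r in {0, ..., n}
            self.a.append(x)
            self.a[n], self.a[r] = self.a[r], x      # A[n+1] <- A[r]; A[r] <- x

        def remove_at(self, i):
            self.a[i] = self.a[-1]                   # A[i] <- A[n]
            self.a.pop()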

6 Weak History Independence: Problems
No information leaks if the adversary obtains the layout once (e.g., the laptop was stolen). But what if the adversary may obtain the layout several times? Information about content modifications leaks. We want: no further information leakage.

7 Strong History Independence
[Naor-Teague]: A data structure implementation is (strongly) history independent if for any pair of sequences S1, S2 and any two lists of corresponding stop points in S1 and S2: if the content is the same at each pair of corresponding stop points, then the joint distribution of the memory layouts at the stop points is identical in the two sequences. Security: we cannot distinguish between any two such sequences.

8 Strong History Independence
S1 = ins(1), ins(2), ins(3), ins(4)
S2 = ins(2), ins(1), ins(5), ins(4), ins(3), del(5)
With stop points placed so that the contents agree at the first stop and at the second stop, we should not be able to tell from the layouts which of the two sequences happened.

9 Example – cont.
Recall the example: Insert(x) (say, n elements in the data structure): choose r uniformly at random from {1, 2, …, n+1}; set A[n+1] ← A[r]; A[r] ← x. Remove of entry i: A[i] ← A[n]. Is this implementation strongly history independent? No!

10 Example – cont.
Assume you get the layout of the array twice.
First time you see: 1 2 3 4 5
Second time you see: 5 2 3 4 1
What could not have happened in between: the empty sequence; Remove(4), Insert(4); and lots of other constraints…

11 Example – last
Making the data structure strongly history independent: keep the array aligned left and sorted. Each content then has only one possible layout. Problem: the time complexity of Insert and Remove is Ω(n) ("usually" Ω(n) elements are shifted during an insert or delete). A sketch of this canonical variant follows.
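
For contrast, a sketch of the canonical variant (our Python): a left-aligned sorted array, so every content has exactly one layout, at an Ω(n) cost per update:

    import bisect

    class CanonicalList:
        """Left-aligned sorted array: one layout per content."""
        def __init__(self):
            self.a = []

        def insert(self, x):
            bisect.insort(self.a, x)            # shifts up to n elements: Omega(n)

        def remove(self, x):
            i = bisect.bisect_left(self.a, x)
            del self.a[i]                       # again shifts up to n elements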

12 A Short History of History Independence
[Micciancio97] A weakly history independent 2-3 tree (motivated by the problem of private incremental cryptography [BGG95]). [Naor-Teague01] History-independent hash tables and union-find; weakly history-independent memory allocation. All of the above results are efficient. [HHMPR02] Strong history independence means a canonical layout; a relaxation of strong history independence; history independent memory resizing.

13 Our Results
Strong history independence implies a canonical memory layout. Separations between strong and weak (lower bounds): strong history independence requires a much higher efficiency penalty in the comparison-based model, and (almost) the same lower bounds hold for a relaxed version of strong history independence. Implementations (upper bounds): the heap has a weakly history independent implementation with no time-complexity penalty.

14 Bounds Summary

Operation                                Weak History Independence    Strong History Independence
heap: insert                             O(log n)                     Ω(n)
heap: increase-key                       O(log n)                     Ω(n)
heap: extract-max                        O(log n) expected            no lower bound
heap: build-heap                         O(n)                         Ω(n log n)
queue: max{insert-first, remove-last}    O(1)                         Ω(n)

15 Why are Comparison-Based Implementations Important?
They are "natural": standard implementations of most data-structure operations are comparison based. Therefore, we should know not to design this way when seeking strong history independence. Library functions are easy to use: one only implements the comparison operation on the data-structure elements.

16 What’s Next Strong History Independence means Canonical Representation. Lower Bounds on strong history independence. Lower Bounds on relaxed strong history independence. Obtaining a weak history independent heap.

17 Strong History Independence = Canonical Representation
Definition [content graph]: The content graph of a data structure has: Vertices: the possible contents. Edges: C1 → C2 if there exist an operation OP and parameters σ such that OP(C1, σ) = C2. Definition [well behaved]: An abstract data structure is well behaved if its content graph is strongly connected.

18 Strong History Independence = Canonical Representation
Lemma: For any strongly history independent implementation of a well-behaved data structure: for every layout L and every operation Op, Op(L) yields only one possible layout. Corollary: Any strongly history independent implementation of a well-behaved data structure is canonical.

19 Canonical Representation: Proof cont.
Corollary: Any strongly history independent implementation of a well-behaved data structure is canonical. Proof sketch (assuming the lemma): Let S be a sequence of operations yielding content C. Each operation in S generates one layout, so by induction S yields one possible layout. By strong history independence, any other sequence yielding C creates the same layout.

20 Canonical Representation: Proof of Lemma
Lemma: For any strongly history independent implementation of a well-behaved data structure: for every layout L and every operation Op, Op(L) yields only one possible layout. Since the data structure is well behaved, any operation Op has a sequence Op⁻¹ that "reverses" Op. By strong history independence we may compare any two sequences with stop points.

21 Canonical Representation: Proof of Lemma
Proof sketch: Fix any layout L and any operation Op. We need to show that Op(L) yields a single specific layout L'. Let S be any sequence of operations yielding L with probability > 0. Consider the following sequences with the following stop points:
S1 = S (both stop points at its end)
S2 = S ◦ Op ◦ Op⁻¹ (stop point 1 after S, stop point 2 at the end)
The two stop points see the same layout in S1, so the same layout must also appear at both stop points in S2.

22 Canonical Representation: Proof of Lemma
S1 = S; S2 = S ◦ Op ◦ Op⁻¹, with stop points as before. Suppose L appears after S. Then L must appear again at the end of S2; otherwise we could distinguish between the two sequences. Hence, for any Li = Op(L), Op⁻¹ must transform Li back to L with probability 1. (Diagram: Op maps L to layouts L1, L2, …, Lk; Op⁻¹ maps each of them back to L.)

23 Canonical Representation: Proof
Now extend the sequence and modify the stop points:
S3 = S ◦ Op (both stop points at its end)
S4 = S ◦ Op ◦ Op⁻¹ ◦ Op (stop point 1 after S ◦ Op, stop point 2 at the end)
Suppose some Li = Op(L) appears after S ◦ Op. Then Li must also appear at the end of S4; otherwise we could distinguish between the two sequences. After Op⁻¹ the layout is again L, and the operation of Op depends only on L, so Op cannot "know" which Li to create. Hence there is only one Li = Op(L).

24 What’s Next Strong History Independence means Canonical Representation. Lower Bounds on strong history independence. Lower Bounds on relaxed strong history independence. Obtaining a weak history independent heap.

25 Lower Bounds: an Example
Lemma: Let D be a data structure whose content is the set of keys stored inside it, and let I be an implementation of D that is comparison based and canonical. Then the operation Insert(D, x) requires time Ω(n). This lemma applies, for example, to heaps, dictionaries, and search trees.

26 Lower Bounds – cont.
Proof sketch: Comparison based: keys are treated as "black boxes" accessed only through the comparison order. ⇒ The algorithm treats any n keys only according to their total order. ⇒ The canonical layout of any n distinct keys is the same no matter what their real values are. Let d1, d2, …, dn be the memory addresses of n keys in the layout, listed according to their total order, and let d'1, d'2, …, d'n+1 be the memory addresses of n+1 keys in the layout, likewise ordered.

27 Lower Bounds – cont.
Let Δ be the number of indices i for which di ≠ d'i. Consider the content C = {k2, k3, …, kn+1} with k2 < k3 < … < kn+1.
Case 1, Δ > n/2: consider insert(C, kn+2). It puts kn+2 in address d'n+1 and moves each ki (2 ≤ i ≤ n+1) from di-1 to d'i-1. ⇒ The operation moves at least n/2 keys.
Case 2, Δ ≤ n/2: consider insert(C, k1). It puts k1 in d'1 and moves each ki (2 ≤ i ≤ n+1) from di-1 to d'i. ⇒ The operation moves at least n/2 keys.

28 More Lower Bounds
By similar methods we can show: Remove-key requires time Ω(n). For a heap: Increase-key requires time Ω(n), and the Build-Heap operation requires time Ω(n log n). For a queue: either Insert-first or Remove-last requires time Ω(n).

29 What’s Next Strong History Independence means Canonical Representation. Lower Bounds on strong history independence. Lower Bounds on relaxed strong history independence. Obtaining a weak history independent heap.

30 Relaxed Strong History Independence
Strong history independence implies very strong lower bounds. How can we relax the definition to allow more efficient data structures? One possible way [HHMPR02]: allow the adversary to distinguish between the empty sequence and other sequences. Does this definition imply a canonical memory layout?

31 Relaxed Strong History Independence (cont.)
The relaxed definition does not imply a canonical memory layout. A possible implementation of the previous data structure: in each operation, choose a new, independent, uniformly chosen permutation of the elements. This is not canonical, yet it is relaxed strongly history independent. Each operation takes O(n).

32 Relaxed Strong History Independence
Is this relaxation enough to allow efficient implementations? No. We can prove almost the same lower bounds using a different property of these data structures.

33 What’s Next Strong History Independence means Canonical Representation. Lower Bounds on strong history independence. Lower Bounds on relaxed strong history independence. Obtaining a weak history independent heap.

34 The Binary Heap
A binary heap is a simple implementation of a priority queue. The keys are stored in an almost-full binary tree. Heap property: for each node i, V(parent(i)) ≥ V(i). Assume that all values in the heap are unique. (Figure: an example heap on the keys 1–10.)

35 The Binary Heap: Heapify
Heapify is used to restore the heap property. Input: a root and two proper sub-heaps of height ≤ h−1. Output: a proper heap of height h. The root key always sifts down toward the child with the larger value. A sketch follows.
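
A minimal sketch of sift-down heapify (max-heap in a 0-based array; our Python, not the authors' code). The root and n parameters let it run on sub-heaps, which the later sketches use:

    def heapify(h, root=0, n=None):
        """Sift h[root] down; both subtrees of root must already be heaps."""
        if n is None:
            n = len(h)
        i = root
        while True:
            left, right, largest = 2 * i + 1, 2 * i + 2, i
            if left < n and h[left] > h[largest]:
                largest = left
            if right < n and h[right] > h[largest]:
                largest = right
            if largest == i:
                return                              # heap property restored
            h[i], h[largest] = h[largest], h[i]     # move toward the larger child
            i = largest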

36 Heapify Operation
(Figure: heapify sifting the root key down through an example heap.)

37 Reversing Heapify
Heapify⁻¹ "reverses" heapify. Heapify⁻¹(H: Heap, i: position): Root ← vi; all keys on the path from the root to node i are shifted down one step. The parameter i is a position in the heap H. A sketch follows.
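
A sketch of Heapify⁻¹ in the same array representation (our code, assuming the conventions of the heapify sketch above):

    def heapify_inv(h, i, root=0):
        """Reverse heapify: move h[i] to the root, shifting the root-to-i path down."""
        path = []                        # indices on the path from i up to root
        j = i
        while j != root:
            path.append(j)
            j = (j - 1) // 2
        path.append(root)
        path.reverse()                   # path[0] == root, path[-1] == i
        v = h[i]
        for k in range(len(path) - 1, 0, -1):
            h[path[k]] = h[path[k - 1]]  # each path node inherits its parent's key
        h[root] = v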

38 Heapify⁻¹ Operation
(Figure: heapify⁻¹ followed by heapify on an example heap.) Property: if all the keys in the heap are unique, then for any i: Heapify(Heapify⁻¹(H, i)) = H.

39 The Binary Heap: Build-Heap in O(n)
Building a heap: apply heapify to every sub-tree of the heap, in a bottom-up manner. Time complexity: O(n). A sketch follows.
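
A sketch of the standard O(n) build-heap, reusing the heapify sketched after slide 35:

    def build_heap(h):
        """Standard O(n) build-heap: heapify every subtree, bottom-up."""
        n = len(h)
        for r in range(n // 2 - 1, -1, -1):   # internal nodes, deepest first
            heapify(h, root=r, n=n)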

40 Reversing Build-Heap
Build-Heap⁻¹ works in a top-down manner:
Build-Heap⁻¹(H: heap): Tree
  If size(H) = 1 then return H;
  Choose a node i uniformly at random among the nodes of H;
  H ← Heapify⁻¹(H, i);
  Return TREE(root(H), Build-Heap⁻¹(HL), Build-Heap⁻¹(HR));
For any random choice: Build-Heap(Build-Heap⁻¹(H)) = H. A Python transcription follows.
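
A transcription of this pseudocode into Python (our sketch; it works in place on the array instead of returning a TREE, and reuses heapify_inv from the earlier sketch):

    import random

    def subtree_nodes(n, r):
        """Indices of the subtree rooted at r, in an array of size n."""
        nodes, stack = [], [r]
        while stack:
            j = stack.pop()
            if j < n:
                nodes.append(j)
                stack += [2 * j + 1, 2 * j + 2]
        return nodes

    def build_heap_inv(h, r=0, n=None):
        """Reverse build-heap, top-down: undo the root's heapify, then recurse."""
        if n is None:
            n = len(h)
        if r >= n:
            return
        i = random.choice(subtree_nodes(n, r))   # uniform node of this sub-heap
        heapify_inv(h, i, root=r)
        build_heap_inv(h, 2 * r + 1, n)
        build_heap_inv(h, 2 * r + 2, n)

With unique keys, running build_heap after build_heap_inv restores the original array for any random choices, matching the property stated above.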

41 Uniformly Chosen Heaps
Build-Heap is a many-to-one procedure. Build-Heap⁻¹ is a one-to-many procedure, depending on its random choices. Support(H): the set of permutations (trees) T such that Build-Heap(T) = H. Facts (without proof): for each heap H on n keys, the size of Support(H) is the same, and Build-Heap⁻¹ returns one of the trees in Support(H) uniformly at random.

42 How to Obtain a Weakly History Independent Heap
Main idea: keep a uniformly random heap at all times. We want: Build-Heap to return one of the possible heaps uniformly, and the other operations to preserve this property.

43 An Easy Implementation: Build-Heap
Apply a random permutation to the input elements and then use the standard Build-Heap. Analysis: each heap has a Support set of the same size ⇒ each heap has the same probability. More intuition: applying a random permutation to the elements erases all information about their order ⇒ there is no information about the history. A sketch follows.
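
A sketch of this implementation (our code, reusing build_heap from the earlier sketch):

    import random

    def hi_build_heap(keys):
        """Weakly history independent build-heap: shuffle, then standard build-heap."""
        h = list(keys)
        random.shuffle(h)    # erases all information about the input order
        build_heap(h)
        return h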

44 Another Easy Implementation: Increase-Key
The standard increase-key changes the value of an element and sifts it up until it reaches its correct place. (Figure: increase-key sifting the changed key up in an example heap.) A sketch follows.
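
A sketch of the standard sift-up increase-key (max-heap, 0-based array; our code):

    def increase_key(h, i, new_val):
        """Raise h[i] to new_val and sift it up to its correct place."""
        assert new_val >= h[i]
        h[i] = new_val
        while i > 0 and h[(i - 1) // 2] < h[i]:
            parent = (i - 1) // 2
            h[i], h[parent] = h[parent], h[i]   # swap with the smaller parent
            i = parent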

45 Increase-Key – cont.
The standard increase-key is good for us. The increase-key operation is reversible: decreasing the value of the key back returns the key to its previous location. The number of heaps on n distinct keys is the same no matter what the actual key values are. The increase-key function is 1-1. ⇒ If the heap was uniformly chosen, then after increase-key it remains uniformly chosen.

46 Not So Easy: Extract-Max and Insert
The standard operation of extract-max: Extract-Max(H): replace the value at the root with the value of the last leaf; let the value sift down to the right position. Is this good for us? No!

47 Standard Extract-Max is Not Good
(Figure: the three possible heaps on 4 elements, each occurring with probability 1/3, and the 3-element heaps the standard extract-max produces from them.) One resulting heap has probability 1/3 while the other has probability 2/3!

48 Naive Implementation: Extract-Max
Extract-Max(H):
  T = Build-Heap⁻¹(H);
  Remove the last node v of the tree, yielding T';
  H' = Build-Heap(T');
  If we already removed the maximal value, return H';
  Otherwise, replace the root with v and let v sift down to its correct position.
Build-Heap⁻¹ and Build-Heap work in O(n) … but this implementation is history independent. A sketch follows.
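
A sketch of this naive extract-max, composed from the earlier sketches (our code; O(n) because of build_heap_inv and build_heap):

    def extract_max(h):
        """Naive history independent extract-max; returns the maximum key."""
        max_val = h[0]
        build_heap_inv(h)     # h is now a uniformly random tree T
        v = h.pop()           # remove the last node, leaving T'
        build_heap(h)         # H' = build_heap(T')
        if v != max_val:
            h[0] = v          # the root holds the maximum; replace it with v
            heapify(h)        # let v sift down to its correct position
        return max_val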

49 Analysis: Extract-Max
Extract-Max(H): T = Build-Heap⁻¹(H); remove the last node v of the tree, yielding T'. T is a uniformly random permutation of the n+1 keys of the heap, so T' is a uniformly random permutation of n of those keys, excluding the random key v. H' = Build-Heap(T') is therefore a uniformly random heap on the original keys, excluding the random key v.

50 Analysis: Extract-Max
If we already removed the maximal value, we are done. Otherwise, replacing the root with v and letting v sift down is just applying increase/decrease-key to the value at the root (and that is a 1-1 function …).

51 Improving the Complexity: Extract-Max
The first 3 steps of Extract-Max(H): T = Build-Heap⁻¹(H); remove the last node v of the tree; H' = Build-Heap(T'). Main problem: steps 1 to 3 take O(n). A simple observation reduces the complexity of these steps to O(log²n) instead of O(n).

52 Reducing the Complexity to O(log²n)
Observation: most of the operations of Build-Heap⁻¹ are redundant: they are always canceled by the operations of Build-Heap. Only the operations applied to nodes lying on the path from the root to the last leaf are really needed.

53 Reducing the Complexity to O(log²n)
Complexity analysis: each heapify⁻¹ and heapify operation takes at most O(log n), and there are O(log n) such operations.

54 Reducing the Complexity: O(log n) Expected Time
This is the most complex part. Main ideas: one can show that only O(1) of the heapify⁻¹ and heapify operations actually make a difference (in expectation over the random choices made by the algorithm in each step), and these operations can be detected and applied exclusively.

55 The Insert Operation
The standard implementation of insert is not good for us. A good implementation must use randomization in order to be efficient (otherwise it would have to be canonical …). Making insert history independent is not easy. The general method is similar to extract-max. The most difficult part is again reducing the complexity from O(log²n) to O(log n) expected time.

56 Conclusions
Demanding strong history independence usually entails a high efficiency penalty in the comparison-based model. A weakly history independent heap exists in the comparison-based model without penalty. Complexity: build-heap O(n) worst case; increase-key O(log n) worst case; extract-max and insert O(log n) expected time, O(log²n) worst case.

57 Open Questions
Can we show a separation between weak and strong history independence in the non-comparison model? History independent implementations of other, more complex, data structures. Thank you!

