ObliviStore: High Performance Oblivious Cloud Storage. Emil Stefanov, Elaine Shi.


1 ObliviStore: High Performance Oblivious Cloud Storage. Emil Stefanov (emil@cs.berkeley.edu), UC Berkeley. Elaine Shi (elaine@cs.umd.edu), UMD. http://www.emilstefanov.net/Research/ObliviousRam/

2 Cloud Storage. SkyDrive, Windows Azure Storage, Amazon S3/EBS, Dropbox, EMC Atmos, Mozy, iCloud, Google Storage.

3 Data Privacy. Data privacy is a growing concern, so many organizations encrypt their data. But encryption is not enough: access patterns leak sensitive information, e.g., roughly 80% of search queries (Islam et al.).

4 Oblivious Storage (ORAM). Goal: conceal access patterns to remote storage; an observer cannot distinguish a sequence of read/write operations (Read(x), Write(y, data), Read(z), etc., from client to untrusted cloud storage) from random. Proposed by Goldreich and Ostrovsky [GO96, OS97]. Recently: [WS08, PR10, GM10, GMOT11, BMP11, SCSL11, SSS12, GMOT12, KLO12, WR12, LPMRS13, …].

5 Hybrid Cloud. The client talks to an Oblivious Load Balancer in a trusted private cloud (e.g., a corporate cloud); the load balancer is lightweight, storing only 0.25% of the data. ORAM Nodes run in the untrusted public cloud, which is heavyweight and offers scalability.

6 Trusted Hardware in the Cloud. A few machines with trusted hardware run the Oblivious Load Balancer and ORAM Nodes; the entire storage system and the networking are untrusted.

7 Contributions. Built an end-to-end oblivious storage system (open source code available). Fully asynchronous design: no blocking on I/O; efficiently handles thousands of simultaneous operations. High performance (throughput and response time), including high throughput over high-latency connections; much faster than existing systems. An oblivious load balancing technique for distributing the ORAM workload. Optimized for both SSDs and HDDs.

8 Performance Challenges. Client side: bandwidth cost, response time, block size, client storage. Server side: storage I/O cost, seeks, scalability to multiple servers. Focus on exact (not asymptotic) performance.

9 Security Challenges. Goals: oblivious asynchronous scheduling (scheduling should not leak private information) and oblivious load balancing across multiple machines (load distribution should be independent of the access pattern). The adversary can: observe raw storage locations accessed; observe network traffic patterns; maliciously delay storage and network I/O; attempt to corrupt data; etc.

10 Overview

11 Partitioned ORAM

12 Partition. Based on the Goldreich-Ostrovsky scheme.

13 Reading from a Partition. The client reads one block from each filled level; one of them is the real block, the rest are dummies.
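The per-level read can be sketched as follows. This is a minimal Python sketch assuming in-memory dicts for the levels and a client-side position map; `levels`, `position_map`, and the random dummy choice are illustrative (the real system fetches encrypted blocks from remote storage and selects dummy addresses via a pseudorandom function):

```python
import random

def read_partition(levels, block_id, position_map):
    """Obliviously read `block_id` from one partition.

    `levels` maps level index -> {slot: block} for each filled level.
    `position_map[block_id]` gives (level, slot) of the real block.
    One block is fetched from EVERY filled level, so the server
    cannot tell which level held the real block.
    """
    real_level, real_slot = position_map[block_id]
    result = None
    for lvl, slots in levels.items():
        if lvl == real_level:
            addr = real_slot                     # fetch the real block
        else:
            addr = random.choice(list(slots))    # fetch a dummy block
        block = slots[addr]                      # one fetch per level
        if lvl == real_level:
            result = block
    return result
```

Note that the access sequence seen by the server (one fetch per filled level) is identical regardless of which block the client actually wanted.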

14 Writing to a Partition (shuffling). The client shuffles the consecutively filled levels together, then writes the result into the next unfilled level.
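The choice of which levels to shuffle mirrors a binary-counter increment: the run of filled levels starting at level 0 is merged into the first unfilled level. A hedged sketch (the `filled` bitmap and return convention are assumptions for illustration, not the paper's API):

```python
def levels_to_shuffle(filled):
    """Given `filled[i]` = True iff level i is occupied, return
    (levels to merge, destination level).  Like incrementing a
    binary counter, the consecutive filled levels 0..k-1 are
    shuffled together and written into level k, the first
    unfilled level; levels 0..k-1 then become empty."""
    k = 0
    while k < len(filled) and filled[k]:
        k += 1
    return list(range(k)), k
```

For example, if levels 0 and 1 are filled but level 2 is not, levels 0 and 1 are shuffled together (with the newly written block) into level 2.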

15 Challenges. Parallelism; overlapping reading and shuffling; maintaining low client storage; preserving security.

16 Architecture

17 [Architecture diagram] ORAM Main receives Read(blockId)/Write(blockId, block) requests and dispatches them across partitions; the Partition Reader (ReadPartition) and Background Shuffler fetch and store blocks on the server through a Storage Cache (Fetch/Store, CacheIn/CacheOut); an Eviction Cache holds recently accessed blocks on the client; Semaphores (increment/decrement) bound client resources; per-partition state is tracked in Partition States.

18 (Same architecture diagram.) ORAM Read/Write requests enter the system.

19 (Same diagram.) The requests are then partitioned.

20 (Same diagram.) The partition reader reads levels of the partitions.

21 (Same diagram.) The background shuffler writes and shuffles levels of the partitions.

22 (Same diagram.) Semaphores bound the client memory.

23 (Same diagram.) The storage cache temporarily stores data for the background shuffler and helps ensure consistency.

24 Pipelined Shuffling

25 Background Shuffler. Each ORAM Read/Write operation creates a shuffling job; the block to be written is associated with that shuffle job.

26 Without Pipelining. Without pipelining, it would take over 6 minutes to write a 1 MB file!
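A back-of-envelope estimate of where a figure like that can come from, with assumed parameters (4 KB blocks and roughly 30 sequential round trips of shuffle I/O per operation are illustrative guesses, not numbers from the paper):

```python
def unpipelined_write_seconds(file_bytes, block_bytes=4 * 1024,
                              rtt_s=0.050, round_trips_per_op=30):
    """Estimate wall-clock time to write a file when every ORAM
    write's shuffle I/O runs strictly sequentially over a 50 ms
    link.  `block_bytes` and `round_trips_per_op` are assumed
    values for illustration only."""
    ops = file_bytes // block_bytes          # ORAM writes needed
    return ops * round_trips_per_op * rtt_s  # latency dominates

# 1 MB -> 256 writes x 30 round trips x 50 ms = 384 s (over 6 minutes)
```

Pipelining hides this latency by overlapping the round trips of many concurrent shuffle jobs instead of serializing them.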

27 Pipelining Across One Job

28 Asynchronous Shuffling Pipeline. Each shuffle job proceeds: start reads → complete all reads → shuffle locally → start writes → complete all writes; jobs 1, 2, and 3 overlap in time. Memory resources are reserved before reading blocks and released after writing blocks. Note: meanwhile, blocks may be read by the partition reader.
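The reserve → read → shuffle → write → release flow above can be sketched with `asyncio`. The single semaphore here is a coarse stand-in for ObliviStore's block-granularity semaphores, and all names, level counts, and latencies are illustrative:

```python
import asyncio

async def shuffle_job(job_id, mem, read, write, shuffle):
    """One pipelined shuffle job: reserve memory, issue all level
    reads concurrently, shuffle locally, issue all writes
    concurrently, then release the memory."""
    async with mem:                               # reserve before reading
        blocks = await asyncio.gather(*(read(job_id, i) for i in range(4)))
        shuffled = shuffle(blocks)                # local, in-memory shuffle
        await asyncio.gather(*(write(job_id, i, b)
                               for i, b in enumerate(shuffled)))
    # memory released only after every write has completed

async def run_pipeline(n_jobs):
    mem = asyncio.Semaphore(2)   # at most 2 jobs hold shuffle buffers at once
    log = []

    async def read(job, lvl):
        await asyncio.sleep(0)   # stands in for network latency
        return (job, lvl)

    async def write(job, lvl, block):
        await asyncio.sleep(0)
        log.append(("w", job, lvl))

    # Jobs run concurrently, so their round trips overlap.
    await asyncio.gather(*(shuffle_job(j, mem, read, write, sorted)
                           for j in range(n_jobs)))
    return log
```

Because the jobs run concurrently, their I/O round trips overlap instead of serializing, while the semaphore keeps the client's buffer usage bounded.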

29 Semaphores

30 Carefully designed semaphores: enforce a bound on client memory; control de-amortized shuffling speed; are independent of the access pattern. Four semaphores: Eviction (unshuffled blocks that were recently accessed); Early cache-ins (blocks read during shuffling of a partition); Shuffling buffer (blocks currently being shuffled); Shuffling I/O (pending work for the shuffler).
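One way to picture these four semaphores is as bounded counters acquired before client memory is consumed and released when it is returned; a minimal sketch (the class, capacities, and dictionary keys are assumptions for illustration, not the system's actual API):

```python
import threading

class BoundedResource:
    """Counter decremented before an operation consumes client
    memory and incremented when that memory is returned.  The
    initial capacity depends only on public parameters, never on
    the access pattern, so semaphore values leak nothing."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._sem = threading.Semaphore(capacity)

    def reserve(self, n=1):
        for _ in range(n):
            self._sem.acquire()      # blocks if the bound would be exceeded

    def release(self, n=1):
        for _ in range(n):
            self._sem.release()

# Illustrative capacities for the four semaphores:
semaphores = {
    "eviction":         BoundedResource(64),   # recently accessed, unshuffled blocks
    "early_cache_ins":  BoundedResource(64),   # blocks read mid-shuffle
    "shuffling_buffer": BoundedResource(256),  # blocks currently being shuffled
    "shuffling_io":     BoundedResource(512),  # pending shuffler work
}
```

Blocking on `reserve` is what de-amortizes the shuffler: a job cannot race ahead and balloon client memory, and since capacities are fixed public constants, observing when jobs block reveals nothing about which blocks were accessed.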

31 Security. Secure in the malicious model. Informally, the adversary observes only the behavior of a synchronous system (i.e., one without ObliviStore's optimizations), which is proven secure; semaphore values and timings are independent of the access pattern. Security proof in the full online paper.

32 Evaluation

33 Performance. 1 node (2×1 TB SSD), 300 GB ORAM, 50 ms simulated client latency. Speed: 3.2 MB/s.

34 Scalability. 10 nodes (2×1 TB SSD each), 3 TB ORAM, 50 ms simulated client latency. Speed: 31.5 MB/s. Response time: 66 ms (under full load).

35 HDD "Friendly". 4 to 10 seeks per operation; works well on both SSDs and HDDs.

36 Comparison to other ORAM implementations. About 17 times higher throughput than PrivateFS (under a very similar configuration).

37 Comparison to other ORAM implementations (continued). Lorch et al. also implemented ORAM, built on top of real-world secure processors. It incurs substantial overhead from the limitations of secure processors: very limited I/O bandwidth and very limited computation capabilities. Many other ORAM constructions exist, but few full end-to-end implementations.

38 Conclusion. Fully asynchronous. High performance. Full end-to-end implementation (open source), already used for mining biometric data (Bringer et al.). Thank you! http://www.emilstefanov.net/Research/ObliviousRam/

