PeerReview: Practical Accountability for Distributed Systems (SOSP 2007)

1 PeerReview: Practical Accountability for Distributed Systems (SOSP 2007)

2 Why have Accountability?
- Nodes can fail
- An attacker can compromise a node
- Accidental misconfiguration
- Multiple administrative domains

3
- Distributed state, incomplete information
- General case: multiple admins with different interests
(Source: www.sosp2007.org/talks/sosp118-haeberlen.ppt)

4 What is Accountability?
- Fault = anything besides expected behavior
- Ideal accountability:
  - Detect a fault
  - Identify the faulty node (completeness)
  - Correct node can prove its correctness (accuracy)
  - Expose the faulty node

5 A few advantages:
- Deterring faults
- Augmenting fault-tolerant systems
- Augmenting best-effort systems

6 Challenges: What can/cannot be detected?
- Unobservable faults: a node's internal state
  - e.g., CPU overheating, display failure
  - Need trusted probes!
- Observable faults: causally affect a correct node
  - No trusted entity required!
- How to verify whether a node reports correctly?
- How to distinguish omission from long delays?

7 [Diagram: example message exchange with Request, Grant, and Release messages]

8-11 [Image-only slides; no transcript text]

12 Accountability: How much can we do?
- Completeness:
  - Faulty nodes are eventually suspected
  - Faulty nodes are eventually exposed
- Accuracy:
  - No correct node is forever suspected
  - No correct node is ever exposed by a correct node

13 FullReview
- Characteristics:
  - A trusted entity exists
  - All messages go through the trusted entity
  - Each node maintains a log for every other node
  - Check the log
  - Suspect/expose a deviant node
- Complete?
- Accurate?
- Practical?

14 PeerReview: Practical Accountability
- No trusted entity
- Nodes only keep their own log
  - May retrieve other nodes' logs when needed
- Logs are tamper-evident
- Witness nodes check the correctness of a node (see the witness-selection sketch below)
- Challenge/response protocol
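Because there is no trusted entity, every node must be able to compute the same witness assignment locally. The sketch below is only an illustration of that idea, not PeerReview's actual mechanism (the paper lets the application or overlay define witness sets): it derives a deterministic witness set from a node's identifier by hashing, so all nodes agree on who audits whom.

```python
# Illustrative sketch only (assumed scheme, not PeerReview's): pick a node's
# witnesses deterministically from its identifier, with no coordinator.
import hashlib

def witness_set(node_id, all_nodes, num_witnesses=3):
    """Return num_witnesses other nodes, chosen deterministically from node_id."""
    candidates = sorted(n for n in all_nodes if n != node_id)
    ranked = sorted(
        candidates,
        key=lambda n: hashlib.sha256((node_id + "/" + n).encode()).hexdigest(),
    )
    return ranked[:num_witnesses]

nodes = ["A", "B", "C", "D", "E"]
print(witness_set("A", nodes))   # same result on every node that computes it
```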

15 System Model
- Each node is modeled as:
  - A state machine
  - A detector
  - An application
- Assumptions:
  - The state machine is deterministic (see the sketch below)
  - Correct nodes can communicate
  - A reference implementation of the node software is available
  - A secure signature scheme is available
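The determinism assumption is what makes replay-based checking possible: two copies of the same state machine fed the same inputs in the same order must produce the same outputs. A minimal sketch, with a made-up counter protocol standing in for a real application:

```python
# Minimal sketch of the node model: a deterministic state machine whose only
# inputs and outputs are messages. The counter protocol is an invented example.
class CounterStateMachine:
    """Deterministic: same inputs in the same order => same state and outputs."""
    def __init__(self):
        self.count = 0

    def handle(self, msg):
        # One input event -> list of output messages (plus a state update)
        if msg == "INC":
            self.count += 1
            return [("ACK", self.count)]
        if msg == "READ":
            return [("VALUE", self.count)]
        return []

# Two copies fed the same log of inputs agree on every output:
a, b = CounterStateMachine(), CounterStateMachine()
inputs = ["INC", "INC", "READ"]
assert [a.handle(m) for m in inputs] == [b.handle(m) for m in inputs]
```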

16 Overview
- Nodes maintain a log of their I/O
- Witnesses of a node audit its log
  - If the node is faulty, gather evidence
  - Make the evidence known

17 Tamper-evident logs
- Append-only list of I/O events
- Log entries are connected in a hash chain
- Authenticator: a signed statement by a node
  - If a node tampers with its log, this becomes evident (see the sketch below)
- Logs must be complete
  - No entries missed
- Logs must be correct
  - No forged entries
  - No multiple logs
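To make the hash-chain and authenticator idea concrete, here is a minimal sketch, not the paper's implementation: each entry's hash covers its predecessor's hash, and an authenticator is a signed statement binding the node to its current top-of-log hash. The HMAC used for signing is a stand-in for the public-key signatures the system assumes; names and formats are illustrative.

```python
import hashlib, hmac, json

class TamperEvidentLog:
    """Append-only log; each entry's hash covers the previous entry's hash."""

    def __init__(self, node_id, signing_key):
        self.node_id = node_id
        self.signing_key = signing_key          # stand-in for a private key
        self.entries = []                       # (seq, kind, payload, hash)
        self.top_hash = b"\x00" * 32            # hash of the empty log

    def append(self, kind, payload):
        seq = len(self.entries)
        # h_i = H(h_{i-1} || seq || kind || payload): the hash chain
        data = json.dumps([seq, kind, payload]).encode()
        h = hashlib.sha256(self.top_hash + data).digest()
        self.entries.append((seq, kind, payload, h))
        self.top_hash = h

    def authenticator(self):
        # Signed statement: "my log has top hash h at sequence number n".
        # Two conflicting authenticators are evidence of a forked log.
        seq = len(self.entries) - 1
        msg = json.dumps([self.node_id, seq, self.top_hash.hex()]).encode()
        sig = hmac.new(self.signing_key, msg, hashlib.sha256).hexdigest()
        return {"node": self.node_id, "seq": seq,
                "hash": self.top_hash.hex(), "sig": sig}

    def verify_chain(self):
        # Recompute the chain: any modified, dropped, or reordered entry
        # changes every later hash, so tampering is evident.
        h = b"\x00" * 32
        for seq, kind, payload, stored in self.entries:
            data = json.dumps([seq, kind, payload]).encode()
            h = hashlib.sha256(h + data).digest()
            if h != stored:
                return False
        return True

log = TamperEvidentLog("node-A", b"demo-signing-key")
log.append("recv", "hello")
log.append("send", "echo:hello")
assert log.verify_chain()
auth = log.authenticator()   # handed to peers/witnesses as a commitment
```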

18 Fault Detection
- Audit:
  - Replay the logged inputs to a reference implementation
  - Output == log? (see the sketch below)
- Evidence transfer:
  - Fetch evidence from witnesses
[Diagram: a node's logged inputs are replayed against a reference state machine; if the outputs differ from the log, the node is faulty]
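A minimal sketch of that audit loop, with assumed log and machine formats (not PeerReview's): the witness feeds the logged inputs to a fresh reference implementation and checks that every output recorded in the log is the one the reference actually produces.

```python
# Hedged sketch of the audit step; the tiny echo machine and the
# (kind, payload) log format are illustrative assumptions.
class EchoMachine:
    """Reference implementation: deterministically echoes every input."""
    def handle(self, msg):
        return ["echo:" + msg]

def audit(log, make_reference):
    ref = make_reference()
    replayed = []                            # outputs the reference produced
    for kind, payload in log:
        if kind == "IN":                     # replay a received message
            replayed.extend(ref.handle(payload))
        elif kind == "OUT":                  # compare with the logged output
            if not replayed or replayed.pop(0) != payload:
                return "FAULTY"              # log disagrees with reference run
    return "FAULTY" if replayed else "OK"    # unlogged outputs are also a fault

# A correct log passes the audit; a tampered one does not:
good = [("IN", "hello"), ("OUT", "echo:hello")]
bad  = [("IN", "hello"), ("OUT", "echo:goodbye")]
assert audit(good, EchoMachine) == "OK"
assert audit(bad, EchoMachine) == "FAULTY"
```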

19 PeerReview: Applications
- Overlay multicast
  - Large amounts of data
  - Freeloaders
- Network file system
  - Latency-sensitive
  - Data tampering
  - Message loss in the network
- Peer-to-peer email
  - DoS attacks

20 Results: Multicast with Freeloader

21 Results: Throughput

22 Results:

23 Discussion
- What if all witnesses are faulty?
- How to choose T_trunc, T_audit, and T_buf?

