Scalability, Accountability and Instant Information Access for Network Centric Warfare
Johns Hopkins & Purdue, 12 Jul 05

Yair Amir, Claudiu Danilov, Jon Kirsch, John Lane, Jonathan Shapiro, Chi-Bun Chan, Cristina Nita-Rotaru, Josh Olsen, David Zage
Department of Computer Science, Johns Hopkins University
Department of Computer Science, Purdue University
http://www.cnds.jhu.edu
Dealing with Insider Threats
Project goals:
Scaling survivable replication to wide area networks.
–Overcome 5 malicious replicas.
–SRS goal: Improve latency by a factor of 3.
–Self-imposed goal: Improve throughput by a factor of 3.
Dealing with malicious clients.
–Compromised clients can inject authenticated but incorrect data – hard to detect on the fly.
–Malicious or just an honest error? Can be useful for both.
Exploiting application update semantics for replication speedup in malicious environments.
–Weaker update semantics allow for immediate response.
Today we focus on scaling survivable replication to wide area networks.
Introducing Steward: Survivable Technology for Wide Area Replication.
A Distributed Systems Service Model
Message-passing system.
Clients issue requests to servers, then wait for answers.
Replicated servers process the request, then provide answers to clients.
[Diagram: clients issuing requests to a site of server replicas 1, 2, 3, …, N.]
State Machine Replication
Main challenge: ensuring coordination between servers.
–Requires agreement on the request to be processed and a consistent order of requests.
Benign faults: Paxos [Lam98, Lam01] must contact f+1 out of 2f+1 servers and uses 2 rounds to allow consistent progress.
Byzantine faults: BFT [CL99] must contact 2f+1 out of 3f+1 servers and uses 3 rounds to allow consistent progress.
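As a minimal sketch of the quorum arithmetic on this slide (not code from the project), the following assumes f is the number of faults to tolerate and returns the total server count, the number of servers that must be contacted, and the rounds in the normal case:

```python
def paxos_requirements(f):
    # Benign faults (Paxos): 2f+1 servers total, contact a majority (f+1),
    # two rounds in the normal case.
    return {"servers": 2 * f + 1, "contact": f + 1, "rounds": 2}

def bft_requirements(f):
    # Byzantine faults (BFT): 3f+1 servers total, contact 2f+1,
    # three rounds in the normal case.
    return {"servers": 3 * f + 1, "contact": 2 * f + 1, "rounds": 3}

# Example: tolerating 5 malicious replicas (the project goal) needs 16 servers.
print(bft_requirements(5))   # {'servers': 16, 'contact': 11, 'rounds': 3}
```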
A Replicated Server System
Maintaining consistent servers [Sch90]:
–To tolerate f benign faults, 2f+1 servers are needed.
–To tolerate f malicious faults, 3f+1 servers are needed.
Responding to read-only clients' requests [Sch90]:
–If the servers support only benign faults: 1 answer is enough.
–If the servers can be malicious: the client must wait for f+1 identical answers, f being the number of malicious servers.
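A hedged sketch of the read-only rule above: with possibly malicious servers, the client accepts a value only once f+1 identical answers arrive. The function name and reply format are illustrative, not the project's API.

```python
from collections import Counter

def accept_read_reply(replies, f, byzantine=True):
    """replies: list of answer values received so far from distinct servers."""
    if not byzantine:
        # Benign faults only: any single answer is correct.
        return replies[0] if replies else None
    # Byzantine servers: wait until f+1 identical answers have arrived.
    value, votes = Counter(replies).most_common(1)[0] if replies else (None, 0)
    return value if votes >= f + 1 else None
```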
Peer Byzantine Replication Limitations
Construct consistent total order.
Limited scalability due to 3-round all-peer exchange.
Strong connectivity is required.
–2f+1 (out of 3f+1) to allow progress and f+1 to get an answer.
Partitions are a real issue.
Clients depend on remote information.
–Bad news: provably optimal. We need to pay something to get something else.
State of the Art in Byzantine Replication
BFT [CL99] – baseline technology.
Evaluation Network 1: Symmetric Wide Area Network
Synthetic network used for analysis and understanding.
5 sites, each connected to all other sites with equal-latency links.
Each site has 4 replicas (except one site with 3 replicas due to the current BFT setup). Total: 19 replicas in the system.
Each wide area link has a 10 Mbits/sec capacity.
Varied wide area latencies between 10 ms and 400 ms.
BFT Wide Area Performance
Symmetric network. 5 sites. Total of 19 replicas.
Almost out-of-the-box BFT, which is a very good prototype.
Results also validated by our new implementation.
Update-only performance (no disk writes).
Evaluation Network 2: Practical Wide-Area Network
Based on a real experimental network (CAIRN), modeled in the Emulab facility.
Capacity of wide area links was modified to 10 Mbits/sec to better reflect current realities.
Results will not be shown today.
[Diagram: CAIRN topology spanning Virginia, Delaware, Boston, San Jose, and Los Angeles; wide area links range from 1.42 to 9.81 Mbits/sec with latencies from 1.4 ms to 38.8 ms, plus local 100 Mb/s links under 1 ms.]
Outline
Project goals.
Byzantine replication – current state of the art.
Steward – a new hierarchical approach.
Confining the malicious attack effects to the local site.
–BFT-inspired protocol for the local area site.
–Threshold cryptography for trusted sites.
Fault tolerant replication for the wide area.
–Initial thinking and snags.
–A Paxos-based approach.
Putting it all together.
Evaluation.
Summary.
Steward: Survivable Technology for Wide Area Replication
Each site acts as a trusted logical unit that can crash or partition.
Effects of malicious faults are confined to the local site.
Between sites:
–Fault-tolerant protocol between sites.
–Alternatively – Byzantine protocols also between sites.
There is no free lunch – we pay with more hardware…
[Diagram: clients and a site of 3f+1 server replicas.]
Steward Architecture
[Diagram: a local site with server replicas 1 through 3f+1, each running a Local Area Byzantine Replication layer, a Monitor, and a Wide Area Fault Tolerant Replication layer. One replica acts as the wide area representative; the others are wide area standbys. Clients connect over the local area network, and sites communicate over the wide area network.]
Outline
Project goals.
Byzantine replication – current state of the art.
Steward – a new hierarchical approach.
Confining the malicious attack effects to the local site.
–BFT-inspired protocol for the local area site.
–Threshold cryptography for trusted sites.
Fault tolerant replication for the wide area.
–Initial thinking and snags.
–A Paxos-based approach.
Putting it all together.
Evaluation.
Summary.
Constructing a Trusted Entity in the Local Site
No trust between participants in a site:
–A site acts as one unit that can only crash if the assumptions are met.
Main ideas:
Use a BFT-like [CL99, YMVAD03] protocol to mask local Byzantine replicas.
–Every update or acknowledgement from a site will need to go through some sort of agreement.
Use threshold cryptography to make sure local Byzantine replicas cannot misrepresent the site (see the sketch below).
–Every valid message going out of the site will need to first be signed using at least {f+1 out of n} threshold cryptography.
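A minimal sketch of the second idea, assuming a generic partial-signature interface: a site-level message is released only when at least f+1 replicas have contributed valid signature shares. The `verify_share` and `combine_shares` helpers and the share format are hypothetical stand-ins for a {f+1 out of n} threshold scheme, not the Steward library's API.

```python
def release_site_message(message, shares, f, verify_share, combine_shares):
    """shares: signature shares contributed by local replicas for `message`."""
    valid = [s for s in shares if verify_share(message, s)]
    if len(valid) < f + 1:
        # Fewer than f+1 valid shares: the site stays silent, so malicious
        # replicas alone cannot speak for the site.
        return None
    # One threshold signature represents the whole site on the wide area.
    return message, combine_shares(valid)
```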
Lessons Learned (1)
Vector HMACs vs signatures:
–BFT's good performance in a LAN is also attributed to the use of vector HMACs, facilitated by establishing pairwise secret keys between local replicas.
–Key decision: use signatures, not HMACs.
Computing power trends work against using HMACs.
Signatures provide non-repudiation, while HMACs do not.
Signatures simplify the protocol during view changes.
Vector HMACs are less scalable (mainly in terms of space).
Steward is designed for 5-10 years from now.
Not every outgoing message requires a complete BFT invocation:
–Acknowledgements require a much lighter protocol step.
Lessons Learned (2)
{f+1 out of n} or {2f+1 out of n} threshold cryptography:
–Performance tradeoff:
Need f+1 contributing replicas to mask the effects of malicious behavior.
Need 2f+1 to pass a Byzantine agreement.
Either use the last round of BFT and create a {2f+1 out of n} signature, or add another round after BFT and create an {f+1 out of n} signature.
A complete system requires a complete protocol:
–Past research focused on the correctness of ordering, but not on issues such as generic reconciliation after network partitions and merges, flow control, etc.
–The devil is in the details.
Useful By-Product: Threshold Cryptography Library
We implemented a library providing support for generating threshold RSA signatures.
Critical component of the Steward architecture.
Implementation is based on OpenSSL.
Can be used by any application requiring threshold digital signatures.
We plan to release it as open source.
Let us know if you are interested in such a library.
Outline
Project goals.
Byzantine replication – current state of the art.
Steward – a new hierarchical approach.
Confining the malicious attack effects to the local site.
–BFT-inspired protocol for the local area site.
–Threshold cryptography for trusted sites.
Fault tolerant replication for the wide area.
–Initial thinking and snags.
–A Paxos-based approach.
Putting it all together.
Evaluation.
Summary.
Fault Tolerant Replication Engine [AT02]
[Diagram: state machine of the replication engine, with states including Reg Prim, Trans Prim, Non Prim, Exchange States, Exchange Messages, Construct, Un, and No, driven by regular and transitional membership events, recovery, last-CPC/last-state exchanges, and red/yellow/green update events.]
Fault Tolerant Replication Throughput Comparison (WAN) [ADMST02]
Not Byzantine!
The Paxos Protocol
Normal case, after leader election [Lam98].
Key: a simple end-to-end algorithm.
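A hedged sketch of the normal-case flow after leader election (the acceptor objects and method names are illustrative, not taken from [Lam98]): the leader assigns a sequence number, collects accepts, and the value is decided once a majority has accepted.

```python
def paxos_normal_case(ballot, seq, value, acceptors):
    """One normal-case Paxos instance after leader election (sketch only)."""
    accepted = 0
    for acceptor in acceptors:
        # acceptor.accept stands in for the Accept/Accepted message exchange;
        # it returns True if the acceptor accepts (ballot, seq, value).
        if acceptor.accept(ballot, seq, value):
            accepted += 1
    majority = len(acceptors) // 2 + 1
    # Decided: safe to execute the update and reply to the client.
    return accepted >= majority
```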
Lessons Learned (1)
A hierarchical architecture vastly reduces the number of messages sent on the wide area network:
–Helps both throughput and latency.
Using a fault tolerant protocol on the wide area network reduces the number of mandatory wide area crossings compared with a Byzantine protocol (see the sketch below).
–BFT-inspired protocols require 3 wide area crossings for updates generated at the leader site, and 4 otherwise.
–Paxos-based protocols require 2 wide area crossings for updates generated at the leader site, and 3 otherwise.
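A back-of-envelope reading of the crossing counts above, assuming each wide area crossing costs roughly one one-way link delay and ignoring local-site computation and queuing:

```python
def wan_latency_ms(one_way_delay_ms, crossings):
    # Rough estimate: update latency ~ number of crossings x one-way delay.
    return crossings * one_way_delay_ms

# 50 ms links, update generated at the leader site:
print(wan_latency_ms(50, 2))   # Paxos-based: ~100 ms
print(wan_latency_ms(50, 3))   # BFT-inspired: ~150 ms
```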
Lessons Learned (2)
All protocol details have to be specified:
–Paxos papers lack most of the details…
Base operation – specified reasonably well.
Leader election – completely unspecified.
Reconciliation – completely unspecified.
Also not specified (but that is OK) are practical considerations, such as retransmission handling and flow control.
The view change / leader election is the most important part, consistency-wise:
–It determines the liveness criteria of the overall system.
Example: Liveness Criteria
Strong L1:
–If there exists a time after which there is always some set of running, connected servers S, where |S| is at least a majority, then if a server in the set initiates an update, some member of the set eventually executes the update.
L1:
–If there exists a set consisting of a majority of servers, and a time after which the set does not experience any communication or process failures, then if a server in the set initiates an update, some member of the set eventually executes the update.
Weak L1:
–If there exists a set consisting of a majority of servers, and a time after which the set does not experience any communication or process failures, AND the members of the set do not hear from any members outside of the set, then if a server in the set initiates an update, some member of the set eventually executes the update.
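One way to compare the three criteria is to write them as quantified statements. The formalization below is a hedged paraphrase of the slide text, not a formula taken from the project's papers; S ranges over sets of servers, n is the total number of servers, and stable(S) abbreviates "no communication or process failures among the members of S".

```latex
% Paraphrase of the three liveness criteria; the key difference is whether the
% majority set S is fixed (L1, Weak L1) or may change over time (Strong L1).
\begin{align*}
\text{Strong L1:}\; & \exists t\ \forall t' \ge t\ \exists S_{t'}\ \big(|S_{t'}| > n/2 \wedge S_{t'}\ \text{running and connected at } t'\big)\\
  & \Rightarrow\ \text{an update initiated in the set is eventually executed by some member}.\\
\text{L1:}\;        & \exists S\ \exists t\ \big(|S| > n/2 \wedge \forall t' \ge t:\ \mathrm{stable}(S)\big)\\
  & \Rightarrow\ \text{an update initiated in } S \text{ is eventually executed by some member of } S.\\
\text{Weak L1:}\;   & \exists S\ \exists t\ \big(|S| > n/2 \wedge \forall t' \ge t:\ \mathrm{stable}(S) \wedge S \text{ hears from no server outside } S\big)\\
  & \Rightarrow\ \text{an update initiated in } S \text{ is eventually executed by some member of } S.
\end{align*}
```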
What's the difference?
Strong L1:
–Allows any majority set.
–Membership of the set can change rapidly, as long as cardinality remains at least a majority.
L1:
–Requires a stable majority set, but others (beyond the majority) can come and go.
Weak L1:
–Requires a stable, isolated majority set.
Outline
Project goals.
Byzantine replication – current state of the art.
Steward – a new hierarchical approach.
Confining the malicious attack effects to the local site.
–BFT-inspired protocol for the local area site.
–Threshold cryptography for trusted sites.
Fault tolerant replication for the wide area.
–Initial thinking and snags.
–A Paxos-based approach.
Putting it all together.
Evaluation.
Summary.
Steward Architecture
[Diagram, repeated: a local site with server replicas 1 through 3f+1, each running a Local Area Byzantine Replication layer, a Monitor, and a Wide Area Fault Tolerant Replication layer. One replica acts as the wide area representative; the others are wide area standbys. Clients connect over the local area network, and sites communicate over the wide area network.]
Testing Environment
Platform: dual Intel Xeon CPU, 3.2 GHz, 64 bits, 1 GByte RAM, Linux Fedora Core 3.
Library relies on OpenSSL:
–Used OpenSSL 0.9.7a, 19 Feb 2003.
Baseline operations:
–RSA 1024-bit sign: 1.3 ms; verify: 0.07 ms.
–Modular exponentiation, 1024 bits: ~1 ms.
–Generating a 1024-bit RSA key: ~55 ms.
Steward Expected Performance
Symmetric network. 5 sites. 16 replicas per site. Total of 80 replicas.
Methodology: measure the time spent in a site, then run the wide area protocol between 5 entities.
Each entity performs a busy-wait equal (conservatively) to the cost of the local site algorithm, including threshold cryptography.
Actual computers: 16 on the local area, and then, separately, 5 on the wide area.
Update-only performance (no disk writes).
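A hedged sketch of the emulation methodology described above (function names and durations are illustrative): each wide area entity stands in for a full site by busy-waiting for a conservatively chosen amount of time before responding.

```python
import time

def busy_wait(duration_s):
    """Spin (rather than sleep) for duration_s seconds, so the emulated cost
    consumes CPU the way the real local-site protocol plus threshold
    cryptography would."""
    end = time.perf_counter() + duration_s
    while time.perf_counter() < end:
        pass

def emulated_site_reply(site_cost_s):
    # Conservative stand-in for running the local-site agreement and producing
    # a threshold-signed reply; site_cost_s is measured separately on a real site.
    busy_wait(site_cost_s)
    return "site-reply"
```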
Steward Measured Performance
Symmetric network. 5 sites. 16 replicas per site. Total of 80 replicas.
Methodology: the leader site has 16 replicas. Each other site has 1 entity that performs a busy-wait equal (conservatively) to the cost of a receiver site reply, including threshold cryptography.
Actual computers: 20.
Update-only performance (no disk writes).
Head to Head Comparison (1)
Symmetric network. 5 sites. 50 ms distance between sites. 16 replicas per site. Total of 80 replicas.
BFT broke after 6 clients.
SRS goal: factor of 3 improvement in latency.
Self-imposed goal: factor of 3 improvement in throughput.
Bottom line: both goals are met once the system has more than one client, and are considerably exceeded thereafter.
Head to Head Zoom (1)
Symmetric network. 5 sites. 50 ms distance between sites. 16 replicas per site. Total of 80 replicas.
BFT broke after 6 clients.
SRS goal: factor of 3 improvement in latency.
Self-imposed goal: factor of 3 improvement in throughput.
Bottom line: both goals are met once the system has more than one client, and are considerably exceeded thereafter.
Head to Head Comparison (2)
Symmetric network. 5 sites. 100 ms distance between sites. 16 replicas per site. Total of 80 replicas.
BFT broke after 7 clients.
SRS goal: factor of 3 improvement in latency.
Self-imposed goal: factor of 3 improvement in throughput.
Bottom line: both goals are met once the system has one client per site, and are considerably exceeded thereafter.
Head to Head Zoom (2)
Symmetric network. 5 sites. 100 ms distance between sites. 16 replicas per site. Total of 80 replicas.
BFT broke after 7 clients.
SRS goal: factor of 3 improvement in latency.
Self-imposed goal: factor of 3 improvement in throughput.
Bottom line: both goals are met once the system has one client per site, and are considerably exceeded thereafter.
Factoring Queries In
So far, we only considered updates.
–Worst case scenario from our perspective.
How to factor queries into the game?
–Best answer: just measure, but we had no time to build the necessary infrastructure and measure.
–Best answer for now: make a conservative prediction (see the sketch after this slide).
Steward:
–A query is answered locally after an {f+1 out of n} threshold cryptography operation. Cost: ~11 ms.
BFT:
–A query requires at least some remote answers in this setup. Cost: at least 100 ms (for the 50 ms network), 200 ms (for the 100 ms network).
–We could change the setup to include 6 local members in each site (for a total of 30 replicas). That would allow a local answer in BFT with a query cost similar to Steward, but then BFT performance would basically collapse on the updates.
Bottom line prediction:
–Both goals will be met once the system has more than one client, and will be considerably exceeded thereafter.
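A sketch of the conservative query prediction above, using only the numbers stated on this slide (the constants come from the slide, not from new measurements):

```python
def steward_query_latency_ms():
    # Answered locally after one {f+1 out of n} threshold cryptography operation.
    return 11.0

def bft_query_latency_ms(one_way_delay_ms):
    # Needs at least some remote answers in this setup: at least one wide area
    # round trip, i.e. >= 2 x one-way delay.
    return 2 * one_way_delay_ms

print(steward_query_latency_ms())   # ~11 ms
print(bft_query_latency_ms(50))     # >= 100 ms on the 50 ms network
print(bft_query_latency_ms(100))    # >= 200 ms on the 100 ms network
```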
Impact
Scalability, Accountability and Instant Information Access for Network-Centric Warfare
http://www.cnds.jhu.edu/funding/srs/

New ideas:
–First scalable wide-area intrusion-tolerant replication architecture.
–Providing accountability for authorized but malicious client updates.
–Exploiting update semantics to provide instant and consistent information access.

Impact:
–Resulting systems with at least 3 times higher throughput, lower latency, and high availability for updates over wide area networks.
–Clear path for technology transitions into military C3I systems such as the Army Future Combat System.

Schedule (June 04 – Dec 05):
–C3I model, baseline and demo.
–Component analysis & design.
–Component implementation and evaluation.
–System integration & evaluation.
–Final C3I demo and baseline evaluation.