Overlay, End System Multicast and i3

Presentation transcript:

1 Overlay, End System Multicast and i3
- General Concept of Overlays
  - Some Examples
- End-System Multicast
  - Rationale
  - How to construct a "self-organizing" overlay
  - Performance in supporting conferencing applications
- Internet Indirection Infrastructure (i3)
  - Motivation and Basic Ideas
  - Implementation Overview
  - Applications
- Readings: read the required papers

2 Overlay Networks

3 Overlay Networks
Focus at the application level

4 Overlay Networks
A logical network built on top of a physical network
- Overlay links are tunnels through the underlying network
- Many logical networks may coexist at once
  - Over the same underlying network
  - Each providing its own particular service
- Nodes are often end hosts
  - Acting as intermediate nodes that forward traffic
  - Providing a service, such as access to files
- Who controls the nodes providing service?
  - The party providing the service (e.g., Akamai)
  - A distributed collection of end users (e.g., peer-to-peer)

5 Routing Overlays
Alternative routing strategies
- No application-level processing at the overlay nodes
- Packet-delivery service with new routing strategies
- Incremental enhancements to IP
  - IPv6
  - Multicast
  - Mobility
  - Security
- Revisiting where a function belongs
  - End-system multicast: multicast distribution by end hosts
- Customized path selection
  - Resilient Overlay Networks: robust packet delivery

6 IP Tunneling
An IP tunnel is a virtual point-to-point link
- Illusion of a direct link between two separated nodes
- Encapsulation of the packet inside an IP datagram
- Node B sends a packet to node E ... containing another packet as the payload (a toy sketch of this follows)
[Figure: logical view shows a direct tunnel between B and E on the path A-B-E-F; physical view shows the same path through the underlying network]
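To make the encapsulation concrete, here is a minimal sketch (an editorial addition, not from the slides) using a toy Packet type; real tunnels encapsulate at the IP layer, so the types and field names here are illustrative assumptions.

```python
# A toy model of IP-in-IP tunneling: the tunnel entry (B) wraps the original
# packet as the payload of an outer packet addressed to the tunnel exit (E).
from dataclasses import dataclass
from typing import Union

@dataclass
class Packet:
    src: str
    dst: str
    payload: Union["Packet", bytes]  # an inner Packet when tunneled

def encapsulate(inner: Packet, tunnel_src: str, tunnel_dst: str) -> Packet:
    """At the tunnel entry: wrap `inner` in an outer packet crossing the tunnel."""
    return Packet(src=tunnel_src, dst=tunnel_dst, payload=inner)

def decapsulate(outer: Packet) -> Packet:
    """At the tunnel exit: strip the outer header and forward the inner packet."""
    assert isinstance(outer.payload, Packet)
    return outer.payload

# The slide's example: A -> B -> (tunnel) -> E -> F
inner = Packet(src="A", dst="F", payload=b"data")
outer = encapsulate(inner, tunnel_src="B", tunnel_dst="E")
assert decapsulate(outer) == inner
```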

7 6Bone: Deploying IPv6 over IPv4
- IPv6 packets are tunneled between IPv6-capable nodes across the IPv4 Internet
[Figure: logical view shows an IPv6 tunnel from B to E on the path A-B-E-F; physical view shows IPv4 routers C and D between B and E. The A-to-B and E-to-F hops carry native IPv6 (Flow: X, Src: A, Dest: F); the B-to-C hop carries the IPv6 packet inside an IPv4 datagram (Src: B, Dest: E)]

8 MBone: IP Multicast
- Multicast: delivering the same data to many receivers, avoiding sending the same data many times
- IP multicast: special addressing, forwarding, and routing schemes
- Not widely deployed, so the MBone tunneled between multicast-capable nodes
[Figure: unicast sends duplicate copies over shared links; multicast sends one copy per link]

9 End-System Multicast
IP multicast still is not widely deployed
- Technical and business challenges
- Should multicast be a network-layer service?
Multicast tree of end hosts
- Allow end hosts to form their own multicast tree
- Hosts receiving the data help forward it to others

10 RON: Resilient Overlay Networks
Premise: by building an application-level overlay network, one can increase the performance and reliability of routing
[Figure: Princeton, Yale, and Berkeley nodes; a two-hop (application-level) Berkeley-to-Princeton route through an application-layer router]

11 RON Can Outperform IP Routing
- IP routing does not adapt to congestion
  - But RON can reroute when the direct path is congested
- IP routing is sometimes slow to converge
  - But RON can quickly direct traffic through an intermediary
- IP routing depends on AS routing policies
  - But RON may pick paths that circumvent policies
- Then again, RON has its own overheads
  - Packets go in and out at intermediate nodes: performance degradation, load on hosts, and financial cost
  - Probing overhead to monitor the virtual links limits RON to deployments with a small number of nodes

12 Secure Communication Over Insecure Links
- Encrypt packets at entry and decrypt at exit
- An eavesdropper cannot snoop the data ... or determine the real source and destination

13 Communicating With Mobile Users
- A mobile user changes locations frequently, so the IP address of the machine changes often
- The user wants applications to continue running, so the change in IP address needs to be hidden
- Solution: a fixed gateway forwards packets
  - The gateway has a fixed IP address ... and keeps track of the mobile's address changes

14 Unicast Emulation of Multicast
[Figure: end systems at Gatech, CMU, Stanford, and Berkeley connected by routers; the source unicasts a separate copy to each receiver]

15 IP Multicast
[Figure: routers with multicast support forward a single copy toward Gatech, Stanford, CMU, and Berkeley]
- No duplicate packets
- Highly efficient bandwidth usage
- Key architectural decision: add support for multicast in the IP layer

16 Key Concerns with IP Multicast
- Scalability with number of groups
  - Routers maintain per-group state
  - Analogous to per-flow state for QoS guarantees
  - Aggregation of multicast addresses is complicated
- Supporting higher-level functionality is difficult
  - IP Multicast: best-effort multi-point delivery service
  - End systems responsible for handling higher-level functionality
  - Reliability and congestion control for IP Multicast are complicated
- Deployment is difficult and slow
  - ISPs reluctant to turn on IP Multicast

17 End System Multicast
[Figure: physical topology with hosts CMU, Stan1, Stan2, Berk1, Berk2, and Gatech attached to Stanford and Berkeley routers; the overlay tree connects the hosts directly, with end systems forwarding data to one another]

18 Potential Benefits
- Scalability
  - Routers do not maintain per-group state
  - End systems do, but they participate in very few groups
- Easier to deploy
- Potentially simplifies support for higher-level functionality
  - Leverage computation and storage of end systems, e.g., for buffering packets, transcoding, ACK aggregation
  - Leverage solutions for unicast congestion control and reliability

19 Design Questions
- Is End System Multicast feasible?
  - Target applications with small and sparse groups
- How to build an efficient application-layer multicast "tree" or overlay network?
  - Narada: a distributed protocol for constructing efficient overlay trees among end systems
  - Simulation and Internet evaluation results demonstrate that Narada can achieve good performance

20 Performance Concerns
[Figure: two problematic overlay trees over CMU, Stan1, Stan2, Berk1, Berk2, and Gatech]
- Delay from CMU to Berk1 increases
- Duplicate packets: bandwidth wastage

21 What is an efficient overlay tree?
- The delay between the source and receivers is small
- Ideally, the number of redundant packets on any physical link is low
- Heuristic used: every member in the tree has a small degree
  - Degree chosen to reflect the bandwidth of the connection to the Internet
[Figure: three overlay trees over CMU, Stan1, Stan2, Berk1, Berk2, and Gatech: one with high latency, one with high degree (unicast), and an "efficient" overlay]

22 Why is self-organization hard?
- Dynamic changes in group membership
  - Members may join and leave dynamically
  - Members may die
- Limited knowledge of network conditions
  - Members do not know the delay to each other when they join
  - Members probe each other to learn network-related information
  - The overlay must self-improve as more information becomes available
- Dynamic changes in network conditions
  - Delay between members may vary over time due to congestion

23 Narada Design
- Step 1 -- "Mesh": a richer overlay that may have cycles and includes all group members
  - Members have low degrees
  - Shortest path delay between any pair of members along the mesh is small
- Step 2 -- Source-rooted shortest-delay spanning trees of the mesh (a sketch follows below)
  - Constructed using well-known routing algorithms
  - Members have low degrees
  - Small delay from source to receivers
[Figure: mesh over Berk1, Berk2, CMU, Gatech, Stan1, Stan2]
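As an illustration of step 2 (an editorial sketch, not from the paper), the snippet below builds a source-rooted shortest-delay tree from a measured mesh using Dijkstra's algorithm; the slides mention distance-vector routing, so Dijkstra is a stand-in, and the delays are made up.

```python
# Build the shortest-delay spanning tree of a mesh, rooted at the source.
import heapq

def shortest_delay_tree(mesh: dict, source: str) -> dict:
    """Return parent pointers of the shortest-delay tree rooted at source.

    mesh: {node: {neighbor: delay_ms, ...}, ...}
    """
    dist = {source: 0.0}
    parent = {source: None}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in mesh[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(pq, (nd, v))
    return parent

# Toy mesh over the hosts in the slides (delays are invented for illustration)
mesh = {
    "CMU":    {"Stan1": 30, "Berk1": 35},
    "Stan1":  {"CMU": 30, "Stan2": 2, "Berk1": 12},
    "Stan2":  {"Stan1": 2, "Gatech": 40},
    "Berk1":  {"CMU": 35, "Stan1": 12, "Berk2": 1, "Gatech": 38},
    "Berk2":  {"Berk1": 1},
    "Gatech": {"Stan2": 40, "Berk1": 38},
}
print(shortest_delay_tree(mesh, "CMU"))
```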

24 Narada Components
- Mesh management: ensures the mesh remains connected in the face of membership changes
- Mesh optimization: distributed heuristics for ensuring that the shortest path delay between members along the mesh is small
- Spanning tree construction: routing algorithms for constructing data-delivery trees
  - Distance vector routing, and reverse path forwarding

25 Optimizing Mesh Quality
- Members periodically probe other members at random
- A new link is added if the utility gain of adding the link exceeds the add threshold
- Members periodically monitor existing links
- An existing link is dropped if the cost of dropping the link is below the drop threshold
[Figure: a poor overlay topology over CMU, Stan1, Stan2, Berk1, Gatech1, Gatech2]

26 The Terms Defined
- Utility gain of adding a link, based on
  - The number of members to which routing delay improves
  - How significant the improvement in delay to each member is
- Cost of dropping a link, based on
  - The number of members to which routing delay increases, for either neighbor
- Add/drop thresholds are functions of
  - The member's estimation of group size
  - The current and maximum degree of the member in the mesh
(A hedged code sketch of these heuristics follows.)
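The slide names the inputs to these heuristics but not the formulas, so the function bodies below are illustrative assumptions (relative delay improvement summed over members for utility, a simple member count for drop cost), not Narada's actual definitions.

```python
# A sketch of Narada-style add/drop heuristics; formulas are assumptions.
def utility_gain(delays_now: dict, delays_with_link: dict) -> float:
    """Sum relative delay improvement over members whose delay improves."""
    gain = 0.0
    for member, d_new in delays_with_link.items():
        d_old = delays_now[member]
        if d_new < d_old:
            gain += (d_old - d_new) / d_old  # weight by significance
    return gain

def should_add(delays_now, delays_with_link, add_threshold):
    return utility_gain(delays_now, delays_with_link) > add_threshold

def should_drop(members_routed_over_link: int, drop_threshold: float):
    # Cost of dropping = number of members (for either neighbor) whose
    # routing delay would increase if the link disappeared.
    return members_routed_over_link < drop_threshold

# Example: a candidate link halves the delay to Stan1 and barely helps CMU.
now = {"Stan1": 100.0, "CMU": 80.0}
with_link = {"Stan1": 50.0, "CMU": 78.0}
print(should_add(now, with_link, add_threshold=0.3))  # True: gain ~0.525
```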

27 Desirable properties of heuristics
- Stability: a dropped link will not be immediately re-added
- Partition avoidance: a partition of the mesh is unlikely to be caused as a result of any single link being dropped
[Figure: two probe examples on the mesh over CMU, Stan1, Stan2, Berk1, Gatech1, Gatech2. In one, delay improves to Stan1 and CMU but only marginally: do not add the link. In the other, delay improves to CMU and Gatech1 significantly: add the link]

28 Dropping Links (example continued)
[Figure: the link between Berk1 and Gatech2 is used by Berk1 to reach only Gatech2 and vice versa, so it is dropped, yielding an improved mesh]

29 Performance Metrics
- Delay between members using Narada
- Stress, defined as the number of identical copies of a packet that traverse a physical link (a small sketch of this metric follows)
[Figure: overlay over CMU, Stan1, Stan2, Berk1, Berk2, Gatech; the delay from CMU to Berk1 increases, and a physical link near Stan1 carries two identical copies (stress = 2)]
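A small illustrative sketch of computing stress; the mapping from overlay hops to physical links below is invented for the example.

```python
# Count how many identical copies of a packet cross each physical link.
from collections import Counter

def link_stress(overlay_edges, physical_path):
    """overlay_edges: iterable of (src, dst) overlay hops.
    physical_path: function mapping an overlay hop to its physical links."""
    stress = Counter()
    for hop in overlay_edges:
        for link in physical_path(hop):
            stress[link] += 1
    return stress

# Toy example: both CMU->Stan1 and CMU->Stan2 overlay hops cross the same
# physical link into Stanford, giving that link a stress of 2.
paths = {
    ("CMU", "Stan1"): [("CMU", "ISP"), ("ISP", "Stanford")],
    ("CMU", "Stan2"): [("CMU", "ISP"), ("ISP", "Stanford")],
}
print(link_stress(paths.keys(), lambda hop: paths[hop]))
```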

30 Factors affecting performance
- Topology model
  - Waxman variant
  - Mapnet: connectivity modeled after several ISP backbones
  - ASMap: based on inter-domain Internet connectivity
- Topology size: between 64 and 1024 routers
- Group size: between 16 and 256
- Fanout range: number of neighbors each member tries to maintain in the mesh

31 ESM Conclusions
- Proposed in 1989, IP Multicast is not yet widely deployed
  - Per-group state, control state complexity, and scaling concerns
  - Difficult to support higher-layer functionality
  - Difficult to deploy and to get ISPs to turn on IP Multicast
- Is IP the right layer for supporting multicast functionality?
- For small-sized groups, an end-system overlay approach
  - is feasible
  - has a low performance penalty compared to IP Multicast
  - has the potential to simplify support for higher-layer functionality
  - allows for application-specific customizations

32 Supporting Conferencing in ESM
[Figure: overlay among A, B, C, D; source rate 2 Mbps; receiver C is reached at 0.5 Mbps via transcoding; D has a 2 Mbps (DSL) link; unicast congestion control runs on each overlay link]
- Framework
  - Unicast congestion control on each overlay link
  - Adapt to the data rate using transcoding
- Objective
  - High bandwidth and low latency to all receivers along the overlay

33 Enhancements of Overlay Design
Two new issues addressed
- Dynamically adapt to changes in network conditions
- Optimize overlays for multiple metrics: latency and bandwidth
Study in the context of the Narada protocol; the techniques presented apply to all self-organizing protocols

34 Adapt to Dynamic Metrics
- Adapt overlay trees to changes in network conditions
  - Monitor bandwidth and latency of overlay links
- Link measurements can be noisy; aggressive adaptation may cause overlay instability
- Capture the long-term performance of a link: exponential smoothing, metric discretization (sketched below)
[Figure: bandwidth vs. time showing a raw estimate, a smoothed estimate, and a discretized estimate; a transient dip is ignored (do not react) while a persistent drop triggers adaptation (react)]
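A minimal sketch of the smoothing-plus-discretization idea; the alpha, bin size, and sample values are illustrative assumptions, not the paper's.

```python
# Smooth noisy bandwidth samples with an EWMA, then quantize the smoothed
# value so that short transients do not trigger overlay changes.
def ewma(samples, alpha=0.1):
    est = None
    for x in samples:
        est = x if est is None else alpha * x + (1 - alpha) * est
        yield est

def discretize(value_kbps, bin_kbps=500):
    # React only when the smoothed estimate crosses a bin boundary
    return round(value_kbps / bin_kbps) * bin_kbps

noisy = [1200, 1150, 400, 1180, 1210, 1190, 600, 1205]  # one-off dips are transients
smoothed = list(ewma(noisy))
print([discretize(v) for v in smoothed])  # every value stays in the same bin
```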

35 Optimize Overlays for Dual Metrics
[Figure: source rate 2 Mbps; receiver X can choose a 60 ms / 2 Mbps path or a 30 ms / 1 Mbps path]
- Prioritize bandwidth over latency
- Break ties with shorter latency (a tiny comparator sketch follows)
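A tiny sketch of the dual-metric preference; treating bandwidths within 10% of each other as a "tie" is an assumption for illustration, not the paper's rule.

```python
# Compare candidate parents by bandwidth first; break ties with latency.
def better_parent(a, b):
    """Each candidate is (bandwidth_mbps, latency_ms); return the preferred one."""
    bw_a, lat_a = a
    bw_b, lat_b = b
    if abs(bw_a - bw_b) > 0.1 * max(bw_a, bw_b):
        return a if bw_a > bw_b else b   # bandwidth dominates
    return a if lat_a < lat_b else b     # tie: prefer lower latency

# The slide's example: a 60 ms / 2 Mbps path beats a 30 ms / 1 Mbps path
# when the source rate is 2 Mbps, because bandwidth is prioritized.
print(better_parent((2.0, 60), (1.0, 30)))  # -> (2.0, 60)
```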

36 Example of Protocol Behavior
- All members join at time 0
- Single sender, CBR traffic
[Figure: mean receiver bandwidth over time; phases: self-organization while acquiring network information, reaching a stable overlay, then adapting to network congestion]

37 Evaluation Goals
- Can ESM provide application-level performance comparable to IP Multicast?
- What network metrics must be considered while constructing overlays?
- What is the network cost and overhead?

38 Evaluation Overview
- Compare performance of ESM with
  - Benchmark (IP Multicast)
  - Other overlay schemes that consider fewer network metrics
- Evaluate schemes in different scenarios
  - Vary host set, source rate
- Performance metrics
  - Application perspective: latency, bandwidth
  - Network perspective: resource usage, overhead

39 Benchmark Scheme
- IP Multicast is not deployed, so use Sequential Unicast as an approximation
  - Measure the bandwidth and latency of the unicast path from the source to each receiver independently
  - Performance similar to IP Multicast with ubiquitous deployment
[Figure: the source measures unicast paths to A, B, and C]

40 Overlay Schemes
Choice of metrics by overlay scheme:
  Bandwidth-Latency: bandwidth and latency
  Bandwidth-Only:    bandwidth only
  Latency-Only:      latency only
  Random:            neither

41 Experiment Methodology
- Compare different schemes on the Internet
  - Ideally: run different schemes concurrently
  - Instead: interleave experiments of the schemes
  - Repeat the same experiments at different times of day
  - Average results over 10 experiments
- For each experiment
  - All members join at the same time
  - Single source, CBR traffic
  - Each experiment lasts 20 minutes

42 Application Level Metrics
- Bandwidth (throughput) observed by each receiver
- RTT between the source and each receiver along the overlay
- These measurements include queueing and processing delays at end systems
[Figure: data path from the source through A, B, C, D, with RTT measured along the overlay hops]

43 Performance of Overlay Scheme
- Different runs of the same scheme may produce different but "similar quality" trees
- "Quality" of the overlay tree produced by a scheme:
  - Sort ("rank") receivers based on performance
  - Take the mean and standard deviation of performance at the same rank across multiple experiments
  - The standard deviation shows the variability of tree quality
[Table: two experiments from CMU; the rank-1 receiver sees roughly 30 ms RTT in both runs and the rank-2 receiver roughly 40 ms, even though which host (Harvard or MIT) occupies each rank differs across runs]

44 Factors Affecting Performance
- Heterogeneity of host set
  - Primary Set: 13 university hosts in the U.S. and Canada
  - Extended Set: 20 hosts, including hosts in Europe, Asia, and behind ADSL
- Source rate
  - Fewer Internet paths can sustain a higher source rate
  - More intelligence required in overlay construction

45 Three Scenarios Considered
- Primary Set, 1.2 Mbps
- Primary Set, 2.4 Mbps
- Extended Set, 2.4 Mbps
(lower) <-- "stress" to overlay schemes --> (higher)
- Does ESM work in different scenarios?
- How do different schemes perform under various scenarios?

46 BW, Primary Set, 1.2 Mbps
[Graph: receiver bandwidth by rank; one receiver suffers an Internet pathology]
- The naïve scheme performs poorly even in a less "stressful" scenario

47 Scenarios Considered
- Primary Set, 1.2 Mbps
- Primary Set, 2.4 Mbps
- Extended Set, 2.4 Mbps
(lower) <-- "stress" to overlay schemes --> (higher)
- Does an overlay approach continue to work under a more "stressful" scenario?
- Is it sufficient to consider just a single metric? (Bandwidth-Only, Latency-Only)

48 BW, Extended Set, 2.4 Mbps
[Graph: receiver bandwidth by rank; there is no strong correlation between latency and bandwidth]
- Optimizing only for latency yields poor bandwidth performance

49 RTT, Extended Set, 2.4 Mbps
[Graph: receiver RTT by rank; Bandwidth-Only cannot avoid poor-latency links or long path lengths]
- Optimizing only for bandwidth yields poor latency performance

50 Summary So Far...
- For best application performance: adapt dynamically to both latency and bandwidth metrics
- Bandwidth-Latency performs comparably to IP Multicast (Sequential-Unicast)
- What is the network cost and overhead?

51 Resource Usage (RU)
- Captures the consumption of network resources by the overlay tree (a small sketch follows)
  - Overlay link RU = propagation delay
  - Tree RU = sum of link RUs
[Figure: CMU, U.Pitt, and UCSD; relaying through U.Pitt gives an efficient tree (RU = 42 ms, using 40 ms + 2 ms links) versus an inefficient one (RU = 80 ms)]
- Scenario: Primary Set, 1.2 Mbps (normalized to IP Multicast RU)
  - IP Multicast: 1.0
  - Bandwidth-Latency: 1.49
  - Random: 2.24
  - Naïve Unicast: 2.62
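A short sketch of the RU metric; the tree encoding as (child, parent, delay) triples and the exact assignment of the 2 ms and 40 ms links are illustrative assumptions.

```python
# RU of an overlay link is its propagation delay; tree RU sums over links.
def tree_resource_usage(links):
    """links: iterable of (child, parent, propagation_delay_ms)."""
    return sum(delay for _, _, delay in links)

# The slide's example: relaying through a nearby host (2 ms + 40 ms = 42 ms)
# beats two separate 40 ms links from the source (80 ms).
efficient = [("U.Pitt", "CMU", 2), ("UCSD", "U.Pitt", 40)]
inefficient = [("U.Pitt", "CMU", 40), ("UCSD", "CMU", 40)]
print(tree_resource_usage(efficient), tree_resource_usage(inefficient))  # 42 80
```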

52 Protocol Overhead
- Overhead = total non-data traffic (in bytes) / total data traffic (in bytes)
- Results (Primary Set, 1.2 Mbps):
  - Average overhead = 10.8%
  - 92.2% of the overhead is due to bandwidth probes
- The current scheme employs active probing for available bandwidth
  - Simple heuristics to eliminate unnecessary probes are the focus of our current research

53 Internet Indirection Infrastructure (i3)
Motivations
- Today's Internet is built around a unicast point-to-point communication abstraction: send packet "p" from host "A" to host "B"
- This abstraction allows the Internet to be highly scalable and efficient, but...
- ... it is not appropriate for applications that require other communication primitives:
  - Multicast
  - Anycast
  - Mobility

54 Why?
- Point-to-point communication implicitly assumes that there is one sender and one receiver, and that they are placed at fixed and well-known locations
  - E.g., a host identified by the IP address xxx.xxx is located in Berkeley
- Multicast, anycast, and mobility each violate at least one of these assumptions: with mobility, end hosts do not have fixed locations; with multicast, there is more than one receiver

55 IP Solutions
- Extend IP to support new communication primitives, e.g.,
  - Mobile IP
  - IP multicast
  - IP anycast
- Disadvantages:
  - Difficult to implement while maintaining the Internet's scalability (e.g., multicast)
  - Require community-wide consensus -- hard to achieve in practice
- Despite years of research, these solutions have yet to be deployed on a large scale

56 Application Level Solutions
- Implement the required functionality at the application level, e.g.,
  - Application-level multicast (e.g., Narada, Overcast, Scattercast, ...)
  - Application-level mobility
- Disadvantages:
  - Efficiency is hard to achieve
  - Redundancy: each application implements the same functionality over and over again
  - No synergy: each application usually implements only one service, and services are hard to combine -- a proposal for application-level multicast does not address mobility, and vice versa; each new abstraction typically requires deploying a new overlay

57 Key Observation
"Any problem in computer science can be solved by adding a layer of indirection"
- Virtually all previous proposals use indirection, e.g.,
  - Physical indirection point -> mobile IP
  - Logical indirection point -> IP multicast

58 i3 Solution
- Build an efficient indirection layer on top of IP
- Use an overlay network to implement this layer
  - Incrementally deployable; no need to change IP
[Figure: protocol stack showing an indirection layer inserted above IP, used alongside TCP/UDP by the application]

59 Internet Indirection Infrastructure (i3): Basic Ideas
- Each packet is associated with an identifier id
- To receive a packet with identifier id, receiver R maintains a trigger (id, R) in the overlay network
[Figure: the sender sends (id, data); the trigger (id, R) in the overlay directs it to receiver R, which receives (R, data)]

60 Service Model
- API (a toy in-process sketch follows):
  - sendPacket(p)
  - insertTrigger(t)
  - removeTrigger(t)  (optional)
- Best-effort service model (like IP)
- Triggers periodically refreshed by end hosts
- ID length: 256 bits
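Here is a toy, in-process sketch of the service model: a trigger table that matches packet IDs to receivers. Real i3 distributes this table over a DHT (Chord) and uses 256-bit IDs with longest-prefix matching; the exact-match lookup and Python types here are simplifications.

```python
# A single-node simulation of i3's trigger-based delivery.
class I3Node:
    def __init__(self):
        self.triggers = {}   # id -> set of receiver addresses

    def insert_trigger(self, ident: bytes, receiver: str) -> None:
        self.triggers.setdefault(ident, set()).add(receiver)

    def remove_trigger(self, ident: bytes, receiver: str) -> None:
        self.triggers.get(ident, set()).discard(receiver)

    def send_packet(self, ident: bytes, data: bytes) -> None:
        # Forward the payload to every receiver whose trigger matches the id;
        # with one trigger this is unicast, with many it is multicast.
        for r in self.triggers.get(ident, ()):
            print(f"deliver {data!r} to {r}")

node = I3Node()
node.insert_trigger(b"id-1", "R")
node.send_packet(b"id-1", b"data")   # -> deliver b'data' to R
```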

61 Mobility
- A host just needs to update its trigger as it moves from one subnet to another: when receiver R1 moves and becomes R2, it replaces the trigger (id, R1) with (id, R2), and the sender keeps sending to id

62 Multicast
- Receivers insert triggers with the same identifier
- Can dynamically switch between multicast and unicast: the only difference from unicast is that more than one host inserts a trigger with the same id
[Figure: the sender sends (id, data); triggers (id, R1) and (id, R2) deliver copies to receivers R1 and R2]

63 Anycast
- Use longest-prefix matching instead of exact matching (a brief sketch follows)
  - Prefix p: anycast group identifier
  - Suffix s_i: encodes application semantics, e.g., location
[Figure: receivers R1, R2, R3 insert triggers (p|s1, R1), (p|s2, R2), (p|s3, R3); a packet sent to (p|a, data) is delivered to the receiver whose suffix best matches a]
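A brief sketch of anycast via longest-prefix matching: triggers share the anycast prefix p and differ in their suffixes, and a packet addressed to p|a is delivered to the single best-matching trigger. Bit-string IDs and this matching routine are illustrative simplifications.

```python
# Deliver to the trigger whose ID shares the longest prefix with the packet ID.
def common_prefix_len(a: str, b: str) -> int:
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def anycast_match(packet_id: str, triggers: dict) -> str:
    """triggers: {trigger_id_bits: receiver}; return the best-matching receiver."""
    best = max(triggers, key=lambda t: common_prefix_len(packet_id, t))
    return triggers[best]

triggers = {
    "1010" + "0001": "R1",   # prefix p = 1010; suffix encodes location
    "1010" + "0110": "R2",
    "1010" + "1100": "R3",
}
print(anycast_match("1010" + "0111", triggers))  # closest suffix -> R2
```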

64 Service Composition: Sender Initiated
- Use a stack of IDs to encode the sequence of operations to be performed on the data path (a compact sketch follows)
- Advantages
  - No need to configure the path
  - Load balancing and robustness are easy to achieve
[Figure: the sender sends ((idT, id), data); the trigger (idT, T) routes it to transcoder T, which sends (id, data); the trigger (id, R) delivers the transcoded data to receiver R]
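A compact sketch of sender-initiated composition: the packet carries a stack of IDs, and each matched trigger replaces the top of the stack, so the packet visits the transcoder T before reaching R. This in-process simulation of trigger forwarding is an assumption for illustration.

```python
# Forward a packet whose address is a stack of IDs through triggers and hosts.
def forward(id_stack, data, triggers, hosts):
    top = id_stack[0]
    if top in triggers:                      # top is an i3 identifier
        next_hops = triggers[top]            # substitute the trigger's target
        forward(next_hops + id_stack[1:], data, triggers, hosts)
    else:                                    # top is a host address: deliver
        hosts[top](id_stack[1:], data)

def transcoder(rest_of_stack, data):
    print("T transcodes", data)
    forward(rest_of_stack, data.upper(), TRIGGERS, HOSTS)

TRIGGERS = {"idT": ["T"], "id": ["R"]}
HOSTS = {"T": transcoder,
         "R": lambda rest, data: print("R receives", data)}

forward(["idT", "id"], b"data", TRIGGERS, HOSTS)
# -> T transcodes b'data'
# -> R receives b'DATA'
```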

65 Service Composition: Receiver Initiated
- The receiver can also specify the operations to be performed on the data: its trigger maps id to a stack (idF, R), so packets addressed to id visit firewall F before reaching R
- Each identifier in the stack except the last names a transformation to apply to the packet; redundant servers implementing the same operation provide load balancing and transparent failover
[Figure: the sender sends (id, data); the trigger (id, (idF, R)) rewrites it to ((idF, R), data); the trigger (idF, F) delivers it to firewall F, which forwards (R, data) to receiver R]

66 Quick Implementation Overview
- i3 is implemented on top of Chord
  - But could easily use CAN, Pastry, Tapestry, etc.
- Each trigger t = (id, R) is stored on the node responsible for id
- Use Chord recursive routing to find the best matching trigger for a packet p = (id, data)

67 Routing Example
- R inserts trigger t = (37, R); S sends packet p = (37, data)
- An end host needs to know only one i3 node to use i3
  - E.g., S knows node 3, R knows node 35
[Figure: Chord circle (identifier space 0..2^m - 1) with nodes 3, 7, 20, 35, 41 responsible for ranges [40..3], [4..7], [8..20], [21..35], [36..41]; trigger(37, R) is stored at node 41, send(37, data) is routed there and forwarded as send(R, data)]
(A tiny sketch of locating the responsible node follows.)
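A tiny sketch of finding the node responsible for an identifier on the ring: id 37 is stored at the first node at or after it (node 41, covering [36..41] in the slide's example). Linear search over a sorted node list stands in for Chord's O(log n) recursive routing.

```python
# Find the successor node on a Chord-style ring for a given identifier.
import bisect

def responsible_node(ident: int, nodes: list, id_space: int = 64) -> int:
    nodes = sorted(nodes)
    i = bisect.bisect_left(nodes, ident % id_space)
    return nodes[i % len(nodes)]  # wrap around the ring

print(responsible_node(37, [3, 7, 20, 35, 41]))  # -> 41, which stores (37, R)
```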

68 Optimization #1: Path Length
- The sender/receiver caches the i3 node mapping a specific ID
- Subsequent packets are sent via one i3 node
[Figure: the sender caches node [36..41] for id 37, so (37, data) takes a single overlay hop before reaching R]

69 Optimization #2: Triangular Routing
- Use a well-known trigger for the initial rendezvous
- Exchange a pair of well-located (private) triggers
- Use the private triggers to send data traffic
[Figure: S and R meet via public trigger 37 and exchange private triggers 2 and 30, after which data flows through nearby i3 nodes]

70 Example 1: Heterogeneous Multicast
- The sender is not aware of the transformations
[Figure: an MPEG sender sends (id, data); receiver R1 (JPEG) inserts the trigger (id, (id_MPEG/JPEG, R1)), so its copy is routed through the MPEG-to-JPEG transcoder S_MPEG/JPEG before delivery as (R1, data); receiver R2 receives the MPEG stream directly as (R2, data)]

71 Example 2: Scalable Multicast
- i3 doesn't provide direct support for scalable multicast
  - Triggers with the same identifier are mapped onto the same i3 node
- Solution: have end hosts build a hierarchy of triggers of bounded degree
[Figure: receivers R1..R4 form a tree; (g, data) reaches a small set of top-level receivers, which republish it under another identifier x as (x, data)]

72 Example 2: Scalable Multicast (Discussion)
Unlike IP multicast, i3
- Implements only small-scale replication -> allows the infrastructure to remain simple, robust, and scalable
- Gives end hosts control over routing -> enables end hosts to
  - Achieve scalability, and
  - Optimize tree construction to match their needs, e.g., delay, bandwidth

73 Example 3: Load Balancing
- Servers insert triggers with IDs that have random suffixes
- Clients send packets with IDs that have random suffixes
[Figure: servers S1..S4 insert triggers whose IDs share a prefix but end in random suffixes; clients A and B send to the same prefix with their own random suffixes, and longest-prefix matching spreads them across the servers]

74 Example 4: Proximity Suffixes of trigger and packet IDs encode the server and client locations S2 S3 S1 send( ,data) S2 S3 S1 winter 2008 Overlay and i3

75 Outline
- Implementation
- Examples
- Security
- Applications
  - Protection against DoS attacks
  - Routing as a service
  - Service composition platform

76 Applications: Protecting Against DoS
- Problem scenario: an attacker floods the incoming link of the victim
- Solution: stop the attacking traffic before it arrives at the incoming link
  - Today: call the ISP to stop the traffic, and hope for the best!
  - Our approach: give end hosts control over which packets they receive
    - Enable end hosts to stop attacks in the network

77 Why End-Hosts (and not Network)?
- End hosts can better react to an attack
  - Aware of the semantics of the traffic they receive
  - Know what traffic they want to protect
- End hosts may be in a better position to detect an attack
  - Flash crowd vs. DoS

78 Some Useful Defenses
- White-listing: avoid receiving packets on arbitrary ports
- Traffic isolation:
  - Contain the traffic of an application under attack
  - Protect the traffic of established connections
- Throttling new connections: control the rate at which new connections are opened (per sender)

79 1. White-listing
- Packets not addressed to open ports are dropped in the network
- Create a public trigger for each port in the white list
- Allocate a private trigger for each new connection
(ID_P: public trigger; ID_S, ID_R: private triggers)
[Figure: sender S reaches receiver R through public trigger ID_P to exchange private triggers [ID_S] and [ID_R]; data then flows via (ID_R, data)]

80 2. Traffic Isolation
- Drop triggers being flooded without affecting other triggers
- Protect ongoing connections from new connection requests
- Protect a service from an attack on another service
[Figure: a web server (victim V) under attack from A and a transaction server share the infrastructure; legitimate client C uses trigger ID1 while the attack floods ID2]

81 2. Traffic Isolation (cont’d)
- Dropping the flooded trigger ID2 leaves trigger ID1 intact: the traffic of the transaction server is protected from the attack on the web server
[Figure: same scenario with ID2 removed; C's traffic to V via ID1 is unaffected]

82 3. Throttling New Connections
- Redirect new connection requests to a gatekeeper
  - The gatekeeper has more resources than the victim
  - Can be provided as a third-party service
[Figure: client C's connection request goes via public trigger ID_P to gatekeeper A, which issues a puzzle; after C returns the puzzle's solution, it is allowed to reach server S via trigger ID_C]

83 Service Composition Platform
- Goal: allow third parties and end hosts to easily insert new functionality on the data path
  - E.g., firewalls, NATs, caching, transcoding, spam filtering, intrusion detection, etc.
- Why i3?
  - Makes middle-boxes part of the architecture
  - Allows end hosts/third parties to explicitly route through middle-boxes

84 Example
- Use the Bro system to provide intrusion detection for end hosts that want it
[Figure: traffic between client A and server B is routed through Bro middle-box M by stacking idM on the connection identifiers: (idM:idAB, data) and (idM:idBA, data) are unwrapped at M and forwarded as (idAB, data) and (idBA, data); triggers: (idM, M), (idAB, A), (idBA, B)]

85 Design Principles
- Give hosts control over routing
  - A trigger is like an entry in a routing table!
  - Flexibility, customization
  - End hosts can
    - Source route
    - Set up acyclic communication graphs
    - Route packets through desired service points
    - Stop flows in the infrastructure
- Implement data forwarding in the infrastructure
  - Efficiency, scalability

86 Design Principles (cont’d)
[Figure: where the control and data planes live. Internet and infrastructure overlays place both planes in the infrastructure; p2p and end-host overlays place both in the hosts; i3 splits them, keeping the data plane in the infrastructure and the control plane at the hosts]

87 Conclusions
- Indirection is a key technique for implementing basic communication abstractions
  - Multicast, anycast, mobility, ...
- This research
  - Advocates building an efficient indirection layer on top of IP
  - Explores the implications of changing the communication abstraction, as already done in other fields:
    - Directly addressable vs. associative memories
    - Point-to-point communication vs. tuple spaces (in distributed systems)

