Slide 1: Proactive Surge Protection: A Defense Mechanism for Bandwidth-Based Attacks
Jerry Chou, Bill Lin (University of California, San Diego)
Subhabrata Sen, Oliver Spatscheck (AT&T Labs-Research)
USENIX Security Symposium, San Jose, USA, July 30, 2008
Slide 2: Outline
Problem
Approach
Experimental Results
Summary
Slide 3: Motivation
Large-scale bandwidth-based DDoS attacks can quickly knock out substantial parts of a network before reactive defenses can respond
All traffic that shares links with attack routes suffers collateral damage, even if it is not under direct attack
[Figure: US backbone topology with nodes at Seattle, Sunnyvale, Denver, Los Angeles, Chicago, New York, Washington, Atlanta, Houston, Kansas City, and Indianapolis]
Slide 4: Motivation
The potential for large-scale bandwidth-based DDoS attacks exists: botnets with more than 100,000 bots exist today and, combined with the prevalence of high-speed Internet access, can give attackers multiple tens of Gb/s of attack capacity
Moreover, core networks are oversubscribed (e.g. some core routers in Abilene have more than 30 Gb/s of incoming traffic from access networks but only 20 Gb/s of outgoing capacity to the core)
Slide 5: Example Scenario
Suppose that under normal conditions the combined Seattle/NY and Sunnyvale/NY traffic stays under the 10 Gb/s link capacity
Seattle/NY: 3 Gb/s; Sunnyvale/NY: 3 Gb/s
[Figure: topology in which both OD pairs share a 10G link via Kansas City and Indianapolis toward New York]
Slide 6: Example Scenario
Suppose a sudden attack starts between Houston and Atlanta (10 Gb/s of attack traffic)
Congested links suffer a high rate of packet loss
Serious collateral damage falls on crossfire OD pairs such as Seattle/NY and Sunnyvale/NY (3 Gb/s each)
[Figure: the Houston/Atlanta attack path crossing the shared 10G link]
Slide 7: Impact on Collateral Damage
OD pairs are classified into three types with respect to the attack traffic:
Attacked: OD pairs carrying attack traffic
Crossfire: OD pairs sharing route links with attack traffic
Non-crossfire: OD pairs not sharing route links with attack traffic
Collateral damage occurs on crossfire OD pairs
Even a small percentage of attack flows can affect substantial parts of the network
[Figure: extent of collateral damage, US and Europe backbones]
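The three-way classification above can be sketched in Python. This is an illustrative sketch only: the function name, route representation, and label strings are assumptions, not from the paper.

```python
# Sketch: classify OD pairs relative to an attack, per the slide's three
# categories. `routes` maps each OD pair to the set of links on its route;
# `attacked` is the set of OD pairs carrying attack traffic.
def classify_od_pairs(routes, attacked):
    attack_links = set().union(*(routes[od] for od in attacked)) if attacked else set()
    labels = {}
    for od, links in routes.items():
        if od in attacked:
            labels[od] = "attacked"
        elif links & attack_links:
            # Shares at least one link with an attack route: collateral damage.
            labels[od] = "crossfire"
        else:
            labels[od] = "non-crossfire"
    return labels
```

With the earlier example scenario, Seattle/NY would come out as crossfire because it shares a link with the Houston/Atlanta attack path.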
Slide 8: Related Work
Most existing DDoS defense solutions are reactive in nature
However, large-scale bandwidth-based DDoS attacks can quickly knock out substantial parts of a network before reactive defenses can respond
Therefore, a proactive defense mechanism is needed that works immediately when an attack occurs
Slides 9-10: Related Work (cont'd)
Router-based defenses such as Random Early Drop (RED, RED-PD, etc.) can prevent congestion by dropping packets early, before congestion sets in
But they may drop normal traffic indiscriminately, causing responsive TCP flows to degrade severely
Approximate fair dropping schemes aim to provide fair sharing between flows
But attackers can launch many seemingly legitimate TCP connections with spoofed IP addresses and port numbers
Both aggregate-based and flow-based router defense mechanisms can thus be defeated
In general, defenses based on unauthenticated header information such as IP addresses and port numbers may not be reliable
Slide 11: Outline
Problem
Approach
Experimental Results
Summary
Slide 12: Our Solution
Provide bandwidth isolation between OD pairs, independent of IP spoofing or the number of TCP/UDP connections
We call this method Proactive Surge Protection (PSP) because it proactively limits the damage that sudden demand surges, e.g. sudden bandwidth-based DDoS attacks, can cause
Slide 13: Basic Idea: Bandwidth Isolation
Meter and tag packets on ingress as HIGH or LOW priority, based on historical traffic demands and network capacity
Drop LOW packets under congestion inside the network
Example: Seattle/NY (limit 3.5 Gb/s, actual 3 Gb/s) and Sunnyvale/NY (limit 3.5 Gb/s, actual 3 Gb/s) are admitted entirely as HIGH; Houston/Atlanta (limit 3 Gb/s) normally sends 2 Gb/s, all admitted as HIGH, but under attack sends 10 Gb/s, of which 3 Gb/s is tagged HIGH and 7 Gb/s LOW
The proposed mechanism proactively drops attack traffic immediately when an attack occurs
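The ingress metering step above can be sketched with a simple token-bucket meter. This is a minimal sketch under assumed names (`TokenBucket`, `tag_packet`); the paper's routers would use their built-in two-color marking rather than this Python model.

```python
class TokenBucket:
    """Token bucket refilled at `rate` bytes/s with burst size `burst` bytes."""
    def __init__(self, rate, burst):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = 0.0

    def conforms(self, size, now):
        # Refill tokens for the elapsed time, then check whether this
        # packet fits within the OD pair's allocated rate.
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size:
            self.tokens -= size
            return True
        return False

def tag_packet(bucket, size, now):
    """Tag HIGH if within the OD pair's allocation, else LOW.
    LOW packets are dropped first under congestion inside the network."""
    return "HIGH" if bucket.conforms(size, now) else "LOW"
```

An attack OD pair sending far above its allocation exhausts its bucket, so the excess is tagged LOW and can be shed at congested links without touching conforming traffic.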
Slide 14: Architecture
Policy plane: a Traffic Data Collector feeds traffic measurements to a Bandwidth Allocator, which produces the bandwidth allocation matrix
Data plane, deployed at the network perimeter: Differential Tagging marks arriving packets as high or low priority
Data plane, deployed at network routers: Preferential Dropping forwards high-priority packets and drops low-priority packets under congestion
The proposed mechanisms are readily available in modern routers
Slide 15: Allocation Algorithms
Aggregate traffic at the core is very smooth and its variations are predictable
Compute a bandwidth allocation matrix for each hour from historical traffic measurements
e.g. the allocation for 3pm is computed from traffic measured during 3-4pm over the past 2 months
[Figure: diurnal traffic pattern; source: Roughan '03, on a Tier-1 US backbone]
Slide 16: Allocation Algorithms
To account for measurement inaccuracies and provide headroom for traffic burstiness, we fully allocate the entire network capacity as a utility max-min fair allocation problem
Mean-PSP: based on the mean of traffic demands
CDF-PSP: based on the Cumulative Distribution Function (CDF) of traffic demands
Utility max-min fair allocation:
Iteratively allocate bandwidth in a "water-filling" manner
Each iteration maximizes the common utility of all flows
Flows crossing links with no residual capacity are removed after each iteration
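The water-filling iteration can be sketched as plain max-min fairness, where all flows share a common rate increment. This is a sketch under that simplifying assumption: Mean-PSP and CDF-PSP raise a common utility level (weighting each flow by its demand model) rather than a common raw rate, and `max_min_fair` is an illustrative name.

```python
def max_min_fair(links, flows):
    """links: {link: capacity}; flows: {flow: iterable of links on its route}.
    Returns {flow: allocated bandwidth} via iterative water-filling."""
    alloc = {f: 0.0 for f in flows}
    cap = dict(links)
    active = set(flows)
    while active:
        # Raise all active flows equally until some link saturates.
        inc = min(
            cap[l] / n
            for l in cap
            if (n := sum(1 for f in active if l in flows[f])) > 0
        )
        saturated = set()
        for l in cap:
            n = sum(1 for f in active if l in flows[f])
            cap[l] -= inc * n
            if n > 0 and cap[l] <= 1e-9:
                saturated.add(l)
        for f in list(active):
            alloc[f] += inc
            # A flow touching a saturated link is frozen at its current rate.
            if any(l in saturated for l in flows[f]):
                active.discard(f)
        if not saturated:
            break
    return alloc
```

On the two-link example style of the next slide, a flow crossing both links is bottlenecked first, after which the remaining capacity is given to the single-link flows.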
Slide 17: Utility Max-min Fair Bandwidth Allocation
Worked example on a three-node network A-B-C with 5-unit links AB and BC
[Figure: per-flow utility functions (utility % vs. bandwidth) for flows AB, BC, and AC, and the link allocations after the 1st and 2nd water-filling rounds]
Slide 18: Mean-PSP (Mean-based Max-min)
Use the mean traffic demand of each OD pair as its utility function
Iteratively allocate bandwidth in a "water-filling" manner
[Figure: three-node example with the mean demand matrix, per-link allocations over two water-filling rounds, and the resulting bandwidth allocation matrix B_ij on 10G links]
Slide 19: CDF-PSP (CDF-based Max-min)
Explicitly capture traffic variance by using a Cumulative Distribution Function (CDF) model as the utility function
Maximizing utility is then equivalent to minimizing the drop probabilities of all flows in a max-min fair manner
[Figure: CDF utility curve; e.g. when 3 units of bandwidth are allocated, the drop probability is 20%]
Slide 20: Outline
Problem
Approach
Experimental Results
Summary
Slide 21: Networks
US Backbone
Large tier-1 backbone network in the US
~700 nodes, ~2000 links (1.5 Mb/s to 10 Gb/s)
1-minute traffic traces: 07/01/07 to 09/03/07
Europe Backbone
Large tier-1 backbone network in Europe
~900 nodes, ~3000 links (1.5 Mb/s to 10 Gb/s)
1-minute traffic traces: 07/01/07 to 09/03/07
Slide 22: Evaluation Methodology
NS2 simulation
Normal traffic: based on actual traffic demands over a 24-hour period for each backbone
Attack traffic:
US backbone: highly distributed attack scenario, based on commercial anomaly detection systems; from 40% of ingress routers to 25% of egress routers
Europe backbone: targeted attack scenario, created by a synthetic attack flow generator; from 40% of ingress routers to only 2% of egress routers
Slide 23: Packet Loss Rate Comparison
Both PSP schemes greatly reduced packet loss rates
Peak hours have higher packet loss rates
[Figure: packet loss rate over 24 hours, US and Europe backbones]
Slide 24: Relative Loss Rate Comparison
PSP reduced packet loss rates by more than 75%
[Figure: relative loss rate, US and Europe backbones]
Slide 25: Behavior Under Scaled Attacks
Packet drop rate with attack demand scaled by factors of up to 3x
Under PSP, the loss remains small throughout the range
[Figure: drop rate vs. attack scaling factor, US and Europe backbones]
Slide 26: Summary of Contributions
A proactive solution for protecting networks that provides a first line of defense when sudden DDoS attacks occur
Very effective in protecting network traffic from collateral damage
Not dependent on unauthenticated header information, and thus robust to IP spoofing
Readily deployable using existing router mechanisms
Slide 27: Questions?