Distributed Denial-of-Service Attack Detection (and Mitigation?) Mukesh Agarwal, Aditya Akella, Ashwin Bharambe.


1 Distributed Denial-of-Service Attack Detection (and Mitigation?) Mukesh Agarwal, Aditya Akella, Ashwin Bharambe

2 Motivation
- Known solutions for DoS detection help end-node victims
  - Detection of the attack and its source(s)
  - Plenty of past work (Traceback and its variants)
- The ISP perspective
  - Is the backbone under attack?
  - Is the network carrying a LOT of "useless" attack traffic?
  - Not well explored

3 Would ISPs Bother? Probably, yes…
- ISPs care if their own infrastructure is under attack
- Attacks against outside nodes that merely traverse the ISP may or may not be interesting
  - Selling point!
  - Depends on the volume of the attack
- The ISP could be a good network citizen by helping downstream ISPs if necessary

4 Problem Statement
1. How can an ISP detect whether its network as a whole is carrying a significant amount of potentially useless or harmful traffic?
2. Once such traffic is detected, what steps should the ISP take?
In this talk, we will mostly discuss question (1).

5 Why is this challenging?
- What traffic patterns are interesting for detection?
- How quickly can detection happen?
- What is the per-router overhead?
- How do the local views of multiple routers combine into a single global view of the network?
- What should the response to detection be?
- Where should the detection functionality go?
  - All routers? Only edge routers? A few routers in each POP?

6 Interesting Traffic Patterns
- To identify something interesting, we need to know what normal is…
- High-level idea
  - Routers keep "profiles" of the traffic they see
  - If at some point traffic violates the local, normal profile → worth noticing!
- Can an attacker bypass detection?
  - An attacker can match profiles at some routers, but…
  - It is hard to match profiles at many routers and still do significant damage to the ISP

7 Detection of Anomalies
- What profiles should a router keep?
- A router must care about attack traffic only if it takes up a substantial portion of link capacity
  - If not, either the traffic is not harmful enough, or it will be caught "elsewhere"
- Keep track of destinations whose traffic takes at least a threshold fraction f of link capacity (popular destinations)

8 Attacks vs. Flash Crowds
- If a usually unpopular destination becomes popular → possible attack
- What about persistently popular destinations like cnn.com?
  - Need a finer-grained profile of traffic to such destinations
- "Finger-print" the destination-bound traffic
  - Typical number of sources, source subnets, and flows; distribution of flow lengths; other flow characteristics
- Again, it is hard for an attacker to match finger-prints at many routers

9 Profiling -- Overview
1. Track popular destinations
2. For each popular destination, keep
  - # unique source IPs
  - # unique flows ((src, sport) pairs)
  - An approximate flow-length distribution
    - For thresholds ε1 ≤ ε2 ≤ … ≤ εk, compute the number of flows carrying more than an εi fraction of the total bytes to the destination
    - We use k = 3
    - Very approximate, but intuitively sufficient
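The per-destination profile above can be made concrete with a toy sketch. The talk uses approximate streaming summaries (sample-and-hold, F0 estimation); this hypothetical version counts exactly, just to show which quantities are tracked. The epsilon values are illustrative assumptions, not from the talk.

```python
from collections import defaultdict

EPSILONS = (0.001, 0.01, 0.1)  # assumed example thresholds e1 <= e2 <= e3 (k = 3)

class DestProfile:
    """Exact-counting toy version of the per-destination profile."""

    def __init__(self):
        self.sources = set()                # unique source IPs
        self.flow_bytes = defaultdict(int)  # (src, sport) -> bytes seen
        self.total_bytes = 0

    def add_packet(self, src, sport, nbytes):
        self.sources.add(src)
        self.flow_bytes[(src, sport)] += nbytes
        self.total_bytes += nbytes

    def summary(self):
        # number of flows carrying more than an eps_i fraction of total bytes
        heavy = [sum(1 for b in self.flow_bytes.values() if b > eps * self.total_bytes)
                 for eps in EPSILONS]
        return {"sources": len(self.sources),
                "flows": len(self.flow_bytes),
                "heavy_flow_counts": heavy}
```

A real router would replace the set and dict with the sketches on the next slides, since keeping exact per-flow state at line rate is too expensive.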

10 Profiling Algorithm -- Components
- Tracking highly common items in a stream of data
  - Iceberg queries
  - Sample-and-hold [SIGCOMM02]
- Counting the number of (or other statistics of) unique items in a stream of data [Alon et al. 96]
  - Frequency moments: F_k = Σ_i m_i^k is the kth frequency moment (m_i = count of item i)
  - We want F_0 (the number of distinct items) for now; may add more later…

11 Sample-and-hold
- Sample-and-hold is pretty good at identifying popular destinations
- With moderate over-sampling, it can ensure high accuracy
- The sampling probability is set from the byte threshold f × capacity
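A minimal sketch of the sample-and-hold idea, assuming the Estan–Varghese formulation cited on the previous slide: once a destination is sampled into the table, every subsequent packet to it is counted, so heavy destinations are caught with high probability. Parameter values and names here are illustrative.

```python
import random

def sample_and_hold(packets, prob):
    """packets: iterable of (dst, nbytes) pairs.
    Returns dst -> bytes counted since the destination was first sampled
    (a slight underestimate of the true volume)."""
    table = {}
    for dst, nbytes in packets:
        if dst in table:
            table[dst] += nbytes       # already held: count every packet
        elif random.random() < prob:
            table[dst] = nbytes        # coin flip: start holding this dst
    return table
```

This matches the cost quoted later in the talk: one table lookup, at most one write, and one coin flip per packet.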

12 Computing F_0
- A pretty cool trick [FM85, AMS96]
- If the stream has about n unique items, hash each item, randomly, to a d-bit string S_i, where d > log(n)
- Let R = max_i r_i, where r_i = # of least-significant 0 bits in S_i
- 2^R is approximately F_0!
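A toy version of this trick, following the slide: hash each item, track the maximum number of trailing zero bits R, and report 2^R. A single hash function gives only an order-of-magnitude estimate; practical uses average over many hashes. The hash choice and bit width here are assumptions.

```python
import hashlib

D = 64  # bits per hash value; assumes d > log2(n)

def trailing_zeros(x):
    # (x & -x) isolates the lowest set bit; its index = # trailing zeros
    return D if x == 0 else (x & -x).bit_length() - 1

def f0_estimate(items):
    r = 0
    for item in items:
        h = int.from_bytes(
            hashlib.blake2b(item.encode(), digest_size=8).digest(), "big")
        r = max(r, trailing_zeros(h))
    return 2 ** r  # approximately the number of distinct items
```

Note the memory cost: the router stores only R (a few bits), regardless of how many distinct sources or flows the stream contains.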

13 Putting everything together
- When things go "out-of-profile", routers get suspicious
- There is a margin for error, so routers have to "check with" others
  - Helps the routers reinforce each other's suspicion
  - Reduces false positives

14 Signaling
- Out-of-band
  - ICMP messages with TTL=255
  - Periodic anti-entropy exchanges
  - Piggyback on OSPF updates
- In-band
  - For efficiency
  - Mark packets with a suspicion level
  - The reverse direction may still have to use the out-of-band mechanism

15 Mitigation and Response
- After receiving a threshold number of suspicions, each router must act
- Locally rate-limit the traffic to the destination
  - To what rate?
- If attack traffic is causing packet drops, routers could drop marked packets preferentially
- If not, forward the suspicion to the downstream ISP, which could preferentially drop if needed
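The talk does not specify the local rate-limiting mechanism; a token bucket is one standard way a router could cap traffic to a suspect destination, sketched here with purely illustrative parameters.

```python
class TokenBucket:
    """Hypothetical per-destination rate limiter (not from the talk)."""

    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps      # refill rate, bytes per second
        self.burst = burst_bytes  # bucket depth
        self.tokens = burst_bytes
        self.last = 0.0

    def allow(self, now, nbytes):
        # refill proportionally to elapsed time, capped at the burst size
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True           # forward the packet
        return False              # drop (or mark) the packet
```

The open question on the slide ("to what rate?") is exactly the choice of rate_bps; the in-profile traffic volume for that destination is one plausible target.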

16 Where to Put Functionality?
- Typical path through an ISP: edge -- backbone -- … -- backbone -- edge
- If profiling were done only at the edge routers → just two points of identification
  - Not enough for consensus
- Must profile at a reasonable number of backbone routers too
- Profiling only at backbone routers is not enough either
  - Typical transit paths traverse only ~2 backbone routers (hot-potato routing)
- For effective detection, must profile at most routers in the network

17 Current Status
- Profiling schemes implemented in NS-2
- Wrote popular DDoS tools (tfn2k, trinoo) in NS-2
- Use Rocketfuel maps [SIGCOMM02] to build ISP topologies
  - Chose Ebone for our experiments
  - Link capacities chosen ad hoc (off the top of our heads)
- Backbone traffic traces [NLANR] used as background traffic in NS-2

18 Current Status (contd.)
- Can build profiles for traffic; what is the computation and memory overhead?
- Memory requirement is small, ~100 KB (NS-2 simulations); fits in SRAM
- Computation is small
  - Sample-and-hold: 1 hash-table lookup + 1 write + 1 coin flip per packet; not expensive since the tables are in SRAM
  - F_0: ~4 byte operations per sampled packet

19 Initial Results
- Compared profiles generated from traces collected at different times at the same router
  - Profiles are highly stable (> 90% match)
  - A small number of packets is enough to get stable profiles (~1 million, or about 15 s)
  - Memory used to construct profiles is small on average (~100 KB)
- At a router, attack traffic can be identified fairly quickly (< 500,000 total packets traversing the router)
- These are initial results; more rigorous testing is needed
  - Still have to test the consensus protocol: time to converge, false positives and negatives

20 Questions, Comments, Suggestions?

