
1 UCDavis Computer Security Lab Collaborative End-host Worm Defense Experiment Senthil Cheetancheri, Denys Ma, Allen Ting, Jeff Rowe, Karl Levitt (UC Davis); Phil Porras, Linda Briesemeister, Ashish Tiwari (SRI); John Mark Agosta, Denver Dash, Eve Schooler (Intel Corp.)

2 Overview Introduction End-host Based Defense Approach Our DETER Experiment General Testing Tools A Worm Defense Testing Framework Simulations and Analysis

3 Cyber Defense Testing Validation Using Simulations and Analysis (L. Briesemeister - SRI) –Quickly validate proposed cyber defense strategies – Test a large variety of conditions and configurations Live Deployment –Validation using real operating conditions –Reluctance to deploy systems without serious testing –Testing response to live attacks is impossible DETER Testbed (S. Cheetancheri – UC Davis) –Tests defense systems using real code on real systems –Attacks can be safely launched –Bridges the testing gap between simulation and live deployment

4 The Specific Problem Oftentimes centralized network worm defenses are unavailable: –Mobile users –Home offices –Small businesses –Network defenses have been bypassed or penetrated Local end-host detector/responders can form a last line of defense against large-scale distributed attacks. End-host detectors are “weak”: –Without specific attack signatures, false positives are high. –Local information isn’t sufficient to decide whether a global attack is occurring. Can “weak” end-host detectors be combined to produce a “strong” global detector that triggers response? How can a federation of local end-host detectors be used to detect worm attacks?

5 Our Approach Motivated by sequential hypothesis testing: Jung, J., Paxson, V., Berger, A., Balakrishnan, H., “Fast Portscan Detection Using Sequential Hypothesis Testing”, Proceedings of the IEEE Symposium on Security and Privacy, 2004; and by collaborative intrusion detection and inference: Agosta, J.M., Dash, D., Schooler, E., Intel Research. Probabilistic inference by a federation of end-host local detection points. Protocol for distributing alert information within the federation.

6 Distributed Decision Chains Matrix of likelihood ratios of Bernoulli trials: entry (i, j) is the ratio after j local alerts have been seen in i steps. The worm threshold determines which matrix elements yield an attack decision; the false-alarm threshold determines which elements yield a false-alarm decision. Example chains through the matrix: (1,1) → (2,2) → (3,3) → WORM!, and (1,1) → (2,1) → (3,1) → False Alarm.
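The matrix on this slide can be sketched in a few lines; the detector rates (Fp, Fn) and desired rates (dD, dF) below are illustrative assumptions, not the experiment's actual parameters.

```python
# Sketch of the slide-6 matrix: entry (i, j) is the likelihood ratio after
# i Bernoulli trials of which j raised local alerts. All numeric values
# here are illustrative assumptions.
Fp, Fn = 0.2, 0.1            # assumed per-detector false +ve / -ve rates
dD, dF = 0.99, 0.01          # assumed desired detection / false-alarm rates
T1 = dD / dF                 # worm threshold
T0 = (1 - dD) / (1 - dF)     # false-alarm threshold

def likelihood_ratio(i, j):
    """Each alert contributes (1-Fn)/Fp; each non-alert Fn/(1-Fp)."""
    return ((1 - Fn) / Fp) ** j * (Fn / (1 - Fp)) ** (i - j)

for i in range(1, 5):
    row = []
    for j in range(i + 1):
        L = likelihood_ratio(i, j)
        tag = "WORM" if L > T1 else "FA" if L < T0 else "-"
        row.append(f"({i},{j})={L:.3g} {tag}")
    print("  ".join(row))
```

With these assumed rates, a chain of four consecutive alerts is enough to cross the worm threshold, while three consecutive non-alerts fall below the false-alarm threshold.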

7 Sequential Hypothesis Testing H0 – Hypothesis that there is no worm H1 – Hypothesis that there is a worm Y = 0 if no alert is raised, 1 if an alert is raised P[Y=1 | H0] = Fp P[Y=0 | H0] = 1 − Fp P[Y=0 | H1] = Fn P[Y=1 | H1] = 1 − Fn

8 TRW Parameters Given: Fp – false-positive rate of individual detectors; Fn – false-negative rate of individual detectors. Desired: dD – desired rate of detection; dF – desired rate of false positives.

9 Decision Making Likelihood ratio: L = (P[Y1|H1]·P[Y2|H1]·…·P[Yn|H1]) / (P[Y1|H0]·P[Y2|H0]·…·P[Yn|H0]). If L < T0, decide No Worm, where T0 = (1 − dD)/(1 − dF). If L > T1, decide Worm, where T1 = dD/dF. Otherwise, continue observing.
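Slides 7–9 together amount to a sequential probability ratio test. A minimal sketch, with assumed detector rates:

```python
# Minimal sketch of the slide-9 decision rule; Fp, Fn, dD and dF are
# assumed values, not the experiment's.
Fp, Fn = 0.2, 0.1
dD, dF = 0.99, 0.01
T1 = dD / dF                 # decide Worm when L rises above this
T0 = (1 - dD) / (1 - dF)     # decide No Worm when L falls below this

def sprt(observations):
    """Walk an alert stream (1 = alert, 0 = no alert) until a decision."""
    L = 1.0
    for y in observations:
        # Multiply in P[y | H1] / P[y | H0] for this observation.
        L *= (1 - Fn) / Fp if y else Fn / (1 - Fp)
        if L > T1:
            return "worm"
        if L < T0:
            return "no worm"
    return "undecided"

print(sprt([1, 1, 1, 1]))   # repeated alerts push L above T1 -> "worm"
print(sprt([0, 0, 0]))      # quiet hosts push L below T0 -> "no worm"
```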

10 Experiment Components Local Detectors Defense Agents “Vulnerable” Service Safe Worm Generator Background Traffic Generator

11 End-host Detector and Defense Agents Implement a “weak” end-host local detector –An alert is generated for all connections to un-serviced ports –The false-positive rate for local detection is high (one alert per hour per machine at UC Davis) Defense agents send local detector alerts to the defense agents on other end-hosts –Recipients are chosen at random for each alert Local alerts are aggregated into a global alert message. Agents use probabilistic inference to decide whether this is likely to be a worm or a false alarm, or propagate the global alert message if no decision has been reached.
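The aggregate-decide-or-forward behaviour described above can be sketched as follows; the (steps, alerts) message format, peer list, and threshold values are assumptions for illustration, not the experiment's actual protocol.

```python
# Hedged sketch of a defense agent (slide 11). The message format,
# detector rates and thresholds are illustrative assumptions.
import random

Fp, Fn = 0.2, 0.1                       # assumed weak-detector rates
T1, T0 = 0.99 / 0.01, 0.01 / 0.99       # worm / false-alarm thresholds

def likelihood_ratio(steps, alerts):
    return ((1 - Fn) / Fp) ** alerts * (Fn / (1 - Fp)) ** (steps - alerts)

def handle_alert(msg, local_alert, peers, send):
    """Fold our local observation into the chain, decide if possible,
    otherwise forward the aggregated message to a random peer."""
    steps, alerts = msg
    steps, alerts = steps + 1, alerts + local_alert
    L = likelihood_ratio(steps, alerts)
    if L > T1:
        return "worm"
    if L < T0:
        return "false alarm"
    send(random.choice(peers), (steps, alerts))   # no decision yet
    return "forwarded"
```

A host whose detector fired calls `handle_alert(msg, 1, peers, send)`; a quiet host passes 0, diluting the evidence in the chain.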

12 Experimental Setup 200 Virtual Nodes on 40 Physical nodes. All nodes are on a single DETER LAN. 50 nodes are vulnerable –Alarms aren’t generated for worm connections to these nodes All nodes have a local detector and defense agent Single node serves as the external infection source. Internal infected hosts also generate worm traffic

13 Detection Time Random scanning worm @ 2 scans/sec. Full saturation: 12 minutes after launch. Worm detection: 4 minutes after launch. Infected nodes at detection: 5 (10% of the vulnerable hosts).

14 Results For a random scanning worm: –Full saturation of infections occurs at 15 minutes post launch –The worm detection triggers at 4 minutes after launch, with 10% of vulnerable machines already infected. –A global worm alert broadcast could protect 90% False alarms –At 4 false alarms per minute over all 200 machines (from the UC Davis laboratory network), no worm triggers –Live testing is needed to evaluate false-alarm performance over a longer time period

15 Summary Simulations by Intel Research show that a distributed TRW algorithm can be useful for detecting worms using only “weak” end-host detectors. Emulated testing confirms that the algorithm and protocol work on live machines in the presence of real traffic. The code, tested and working on real Unix machines in the DETER testbed, will be deployed in the UCD and Intel networks for further testing and evaluation.

16 Testing Tools NTGC - A tool for Network Traffic Generation Control and Coordination WormGen – Safe worm generation for cyber defense testing A framework for worm defense evaluation

17 NTGC: A tool for Network Traffic Generation Control and Coordination Goal: develop a background traffic generation tool that can: –Build a traffic model by extracting important traffic parameters from a real traffic trace (tcpdump format) –Automatically configure the testbed nodes to generate traffic based on the traffic model extracted from real traffic –Utilize existing traffic generators (e.g. TG, D-ITG, Harpoon) as the low-level packet-generation component –Generate real TCP connections

18 Architecture NTGC consists of the following components: Traffic analyzer – takes the trace data as input and reconstructs complete TCP connections. Traffic filter – manipulates the traffic-parameter data generated by the traffic analyzer. Network address mapping tool – maps the IP addresses of the packet trace onto the DETER experimental network IP addresses. Configuration file generator – takes the output from the traffic analyzer (or traffic filter) and compiles it into a TG- or TTCP-compatible configuration file. Command and flow data dispatcher – parses the flow data generated by the traffic analyzer and traffic filter, sends the flow information to the corresponding remote hosts, and sends commands to control the NTGC agents running on each DETER node. Low-level packet generators (e.g. TG, TTCP, D-ITG, Harpoon).
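The traffic analyzer's packet-to-flow step can be sketched in miniature. The packet-record fields and the flow layout below are assumptions; the real tool parses tcpdump-format traces and emits XML flow data.

```python
# Toy version of the Traffic Analyzer's packet-to-flow aggregation.
# Record fields and flow layout are illustrative assumptions.
from collections import defaultdict

# (timestamp, src, sport, dst, dport, bytes) tuples, as if parsed from a trace
packets = [
    (0.00, "10.0.0.1", 1234, "10.0.0.2", 80, 500),
    (0.05, "10.0.0.1", 1234, "10.0.0.2", 80, 1500),
    (0.10, "10.0.0.3", 4321, "10.0.0.2", 80, 700),
]

flows = defaultdict(lambda: {"packets": 0, "bytes": 0, "start": None, "end": None})
for ts, src, sport, dst, dport, size in packets:
    f = flows[(src, sport, dst, dport)]       # group by 4-tuple key
    f["packets"] += 1
    f["bytes"] += size
    f["start"] = ts if f["start"] is None else f["start"]
    f["end"] = ts                             # last packet seen so far

for key, f in sorted(flows.items()):
    print(key, f)
```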

19 Modular Diagram [Diagram: raw traces 1…n feed the Traffic Analyzer (merge traces, timestamp normalization, reconstruct TCP connections, generate flow data), which emits connection data and flow data to the Traffic Filter (filtering, address remapping, scale up/down, duplicate removal; driven by address-remapping rules and a topology file), which feeds the Configuration File Generator.]

20 NTGC Summary The traffic analyzer is able to generate the flow-level data in XML format. We are able to manipulate the traffic parameters within the XML-format flow data. We tested the configuration-file generation and dispatching features on the DETER testbed with a 40-node topology. The configuration file generator generated TG-compatible configuration files for all 40 nodes and dispatched them to all the nodes. We observed traffic being sent and received between all the experimental nodes, based on the traffic model derived from the WIDE trace.

21 WormGen – Safe Worm Generation On-demand distributed attack behavior is needed for the evaluation of defenses. Nobody wants to implement and deploy attacks that spread automatically using real vulnerabilities. How can we produce realistic attack behavior without actually launching an attack? WormGen generates a propagating worm on the test network without using malcode.

22 The worm simulation network consists of several networked agents and a single controller.

23 The controller assigns each agent a role for a given worm: 1) Vulnerable (denoted by a red X) 2) Vulnerable and initially infected (red X with XML code) 3) Not vulnerable (denoted by a green check)

24 The controller sends a start command to the initially infected agent(s). The agents process the XML instructions, or “worm”, for information about how to spread.
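A worm description of the kind these slides refer to might look like the following. Only the PortScanRate, RandomScan, and LocalSubnetScan elements are named in the slides; the surrounding structure and all values here are hypothetical.

```xml
<!-- Hypothetical WormGen worm description. Only PortScanRate, RandomScan
     and LocalSubnetScan appear in the slides; the rest is illustrative. -->
<Worm name="demo-worm">
  <PortScanRate>2</PortScanRate>         <!-- scans per second -->
  <RandomScan probability="0.7"/>        <!-- scan the whole address space -->
  <LocalSubnetScan probability="0.3"/>   <!-- prefer the local subnet -->
</Worm>
```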

25 The agent consults the PortScanRate element to determine the rate at which it "scans".

26 Based on the "probability" values of the RandomScan and LocalSubnetScan elements, the agent chooses which address range to target.

27 For each infected agent, a new address range is chosen based on the probability values, and the attack cycle continues.
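The target-selection step just described can be sketched as follows; the probabilities, subnet, and address format are illustrative assumptions.

```python
# Sketch of the RandomScan / LocalSubnetScan choice on slides 26-27.
# Probabilities, the subnet and the address format are assumptions.
import random

def pick_target(rng, random_scan_p=0.7, local_subnet="10.1.2"):
    """With probability random_scan_p pick an address anywhere (RandomScan);
    otherwise pick one on the local /24 (LocalSubnetScan)."""
    if rng.random() < random_scan_p:
        return ".".join(str(rng.randint(0, 255)) for _ in range(4))
    return f"{local_subnet}.{rng.randint(1, 254)}"

rng = random.Random(42)       # seeded so a demo run is repeatable
for _ in range(3):
    print(pick_target(rng))
```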

28 The agent then, once again, simply sends the worm. Only the vulnerable agents are infected.

29 When the worm is stopped, the controller gathers information from each agent and processes it into a report.

30 Towards a Framework for Worm Defense Evaluation Motivation: provide a framework for easy evaluation of worm defenses in the DETER test-bed environment. Components: Worm, Topology, Defense, Evaluation, Test-bed API.

31 The Framework itself

32 Features: Test-bed programming is transparent to the experimenter. Hooks for users’ defenses, worms, and background traffic replay. Event Control System for executing a series of experiments in batch mode. Standardized vulnerable servers. Worm library.

33 Advantages

                            Current Approach   Our Approach
Approach                    Custom tools       Standardized tools
Time to first experiment    Hours to weeks     Hours
Setup time : expt time      10:1               1:100
Testbed details knowledge   Required           Not required

34 Example: Hierarchical Defense

35 Analysis [Plots: infection spread with no defense vs. with the defense turned on, shown for 5 and 10 iterations]

36 Future Work Traffic analysis on network components. Provide default topologies –business networks, academic networks, defense networks, etc. Counter the effect of scale-down. Provide a formal language to describe the API for this framework.

37 Next Steps Implement and test other cooperative protocols –Multicast –Channel Biasing –Hierarchical Aggregation Include a variety of local end-host detectors with differing performance – more sophisticated Bayesian Network model developed by Intel Corp. Optimize local detector placement in the cooperative network

