
1 Wormholes and a Honeyfarm: Automatically Detecting Novel Worms (and other random stuff)
Nicholas Weaver Vern Paxson Stuart Staniford UC Berkeley ICIR Silicon Defense

2 Problem: Automatically Detecting New Worms
Detect a new worm on the Internet before many machines are infected
- Use this information to guide defenses
- 30-60 seconds to detect (and stop) Slammer
Honeypots are accurate detectors
- Monitor egress to detect worms
- k vulnerable honeypots will detect a worm when ~1/k of the vulnerable machines are infected
But impractical:
- Cost: time, not machines
- Trust: must trust all honeypots!
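The ~1/k detection threshold can be sanity-checked with a quick Monte Carlo sketch (illustrative numbers, not from the talk): if k honeypots sit among V vulnerable addresses and each new infection lands on a uniformly random one of them, the number of real machines infected before a honeypot is hit is geometric with mean roughly V/k.

```python
import random

def infections_before_detection(vulnerable=10_000, honeypots=10, trials=200):
    """Simulate a random-scanning worm. Each successful infection lands on
    a uniformly random address drawn from the vulnerable hosts plus the
    honeypots; detection fires on the first honeypot hit. Returns the mean
    number of real machines infected before detection."""
    p_hit = honeypots / (vulnerable + honeypots)
    total = 0
    for _ in range(trials):
        infected = 0
        while random.random() > p_hit:   # this infection missed the honeypots
            infected += 1
        total += infected
    return total / trials

# Expected: roughly vulnerable/honeypots = 1,000 infections before the first
# honeypot hit, i.e. ~1/k of the vulnerable population, matching the slide.
```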

3 Idea: Split the Network Endpoints from the Honeypots
Wormholes are traffic tunnels:
- Route connections from untrusted endpoints to a remote system
The honeyfarm consists of Virtual Machine honeypots:
- Create virtual honeypots on demand
- Route internally generated traffic to other images
- Classify based on what can be infected
Notes: This is the resulting split. The endpoints, "wormholes", are traffic tunnels. The honeyfarm uses virtual machine techniques because of the sheer number of wormholes: it can't dedicate a physical honeypot to each endpoint, so images must be created on demand. For more information on virtual honeypots, see the Honeynet Project.

4 How Wormholes Work
Low-cost "appliance":
- Plugs into the network, obtains an address through DHCP
- Contacts the honeyfarm
- Reconfigures the local network stack to fool nmap-style detection
- Forwards all traffic to/from the honeyfarm
Clear box:
- Deployers have the source code
- Restrictions are built into the wormhole code so it doesn't need to trust the honeyfarm: it can't contact the local network!
Instead of (or in addition to) wormholes, one can:
- Route small telescopes to the honeyfarm
- Route ALL unused addresses in an institution
Notes: The goal of the wormholes is to design traffic tunnels that people will actually deploy. Exception: DNS requests can be sent to the local DNS server. The wormhole CAN forward requests from the honeyfarm out, to defeat "phone home" based detection strategies, but it can't forward requests to the local network, allowing it to be deployed more safely. The goal is to build devices which the deployers can trust but the honeyfarm does not have to. Since the functionality is simple, we can run on low-cost commodity hardware. We might also want to route entire address ranges; although sensitive, such ranges represent more valuable "secrets" from an attacker's viewpoint, so they can't be relied on as heavily. (Image is a random DS9 screen capture, found on the net.)
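The wormhole's key safety restriction (honeyfarm traffic may go out to the Internet, but never to the local network) reduces to a one-line policy check. A minimal sketch; the subnet, function name, and use of Python's ipaddress module are illustrative assumptions, not the actual appliance code:

```python
import ipaddress

# Assumption: the wormhole learns its local subnet when it obtains an
# address via DHCP; 192.168.1.0/24 is a placeholder value.
LOCAL_NET = ipaddress.ip_network("192.168.1.0/24")

def may_forward_from_honeyfarm(dst_ip: str) -> bool:
    """Policy for traffic originating at the honeyfarm: allow it out to the
    Internet (so "phone home" detection fails), but never let it reach the
    deployer's local network."""
    return ipaddress.ip_address(dst_ip) not in LOCAL_NET
```

A full version would also carve out the DNS exception the slide mentions (requests to the local DNS server are allowed).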

5 How a Honeyfarm Works
- Creates Virtual Machine images to implement honeypots, using VMware or similar
- Images exist "in potential" until traffic is received (Niels Provos suggested using honeyd as a first-pass filter)
- Completes the illusion that a honeypot exists at every wormhole location
- On any traffic received from a wormhole: activate and configure a VM image, then forward the traffic to it
- Traffic generated by a honeypot image is monitored and redirected
[Diagram: wormhole IP addresses mapped to VM image IPs inside the honeyfarm]
Notes: The honeyfarm needs to use virtual machine honeypots, as a physical honeypot per address would require too many resources. The goal is to detect behaviors in the system, so images need to be almost ready to go: reconfigured and connected the moment traffic is received. We monitor the traffic to serve as the detector.
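The "images exist in potential" idea amounts to lazy activation: nothing runs for a wormhole address until its first packet arrives. A minimal dispatcher sketch; the Honeyfarm class and spawn_vm callback are hypothetical names, not the real system's API:

```python
class Honeyfarm:
    """Sketch of on-demand VM activation: the first packet for a wormhole
    address triggers configuration of a VM image, and later packets reuse
    the running image."""

    def __init__(self, spawn_vm):
        self.spawn_vm = spawn_vm   # callable: wormhole_ip -> VM handle
        self.active = {}           # wormhole_ip -> running VM handle

    def handle_packet(self, wormhole_ip, packet):
        vm = self.active.get(wormhole_ip)
        if vm is None:                         # first traffic for this address:
            vm = self.spawn_vm(wormhole_ip)    # configure a VM on demand
            self.active[wormhole_ip] = vm
        vm.deliver(packet)                     # forward the traffic to the image
```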

6 What Could We Automatically Learn From a Honeyfarm?
A new worm is in the Internet
- Triggered based on the ability to infect VMs
What the worm is capable of:
- Types of vulnerable configurations, including patch level: creates a "vulnerability signature"
- Some overt, immediate malicious behavior (immediate file erasers, etc.)
- Possible attack signatures
Works best for tracking human attackers and scanning worms:
- Slow enough to react effectively
- Randomness hits wormholes
Notes: Note the one limitation: we can only detect worms which can infect the VM images. We record a worm's capabilities based on which other VM images the captured worm succeeds in infecting. A "vulnerability signature" could be used by response mechanisms to block all traffic to vulnerable machines without affecting immune machines. Noticing overtly malicious behavior is useful secondary information, but not the primary objective.
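The "vulnerability signature" is essentially the set of VM configurations a captured worm can infect. A hypothetical sketch, where try_infect stands in for actually replaying the worm against a candidate image:

```python
def vulnerability_signature(try_infect, configurations):
    """Replay a captured worm against candidate VM configurations (OS,
    service, patch level) and record which ones it can infect. The
    returned set is the worm's "vulnerability signature"."""
    return {cfg for cfg in configurations if try_infect(cfg)}
```

A response system could then block traffic only to machines matching the signature, leaving immune (e.g. fully patched) machines unaffected.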

7 What Trust is Needed?
Wormhole deployers:
- Need to trust the wormhole devices, not the honeyfarm operator
Honeyfarm operator:
- Attackers know of some wormholes, but most are generally unknown: wormhole locations are "open secrets"
- Does not trust the wormhole deployers
- Detection is based on infected honeypots, not traffic from a wormhole; dishonest wormholes are filtered out
Responding systems receiving an alert:
- Either trust that the honeyfarm and its operator are honest and uncompromised
- OR rely on multiple, independent honeyfarms all raising an alarm: "If CERT and DOD-CERT say..."
Notes: When building distributed systems, and especially systems which trigger automatic responses, we need to be very concerned about trust. The wormhole deployers trust the wormholes, NOT the honeyfarm. The honeyfarm doesn't trust the wormhole deployers. Responding systems either trust the honeyfarm, or trust two independent honeyfarms to both raise an alarm. (Image is taken from , copyright Sony.)
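The trust rule for responders (either trust one honeyfarm outright, or require several independent ones to agree) reduces to a quorum check. A minimal sketch with illustrative names:

```python
def should_respond(alerting_farms, trusted_farms, quorum=2):
    """Act on a worm alert only if at least `quorum` independent, trusted
    honeyfarms all raised the alarm ("If CERT and DOD-CERT say...").
    Alerts from unknown or untrusted sources are ignored."""
    return len(set(alerting_farms) & set(trusted_farms)) >= quorum
```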

8 Status and Acknowledgements
Status:
- Paper design: the idea, attacks, costs, and development time
- Lots of attacks on the honeyfarm system, and possible defenses
- Plan to build the honeyfarm first, attached to a small telescope
- Wormholes can be built for <$350 (quantity 1), no moving parts, 50 watts of power
Acknowledgements:
- Honeypot technology: Honeynet Project, honeyd, DTK
- Feedback from many people: Stefan Savage, David Moore, David Wagner, Niels Provos, and many others

9 Random Slide: 1 Gb (ASAP), 10 Gb (+2-3 years)
- Need wiring-closet defenses: as close to the endpoint as possible, and they need to be reprogrammable
- <$1000 for GigE today (build for $500); optical ideal, +$100 for 1000-Base-T
- <$2000 for 10GigE in 2-3 years (build for $1000)
- New FPGAs with SERDESes, embedded processors, massive parallelism and pipelining
[Diagram: board sketch with an FPGA flanked by DIMMs, 1000-Base-T PHYs, and SX transceivers]

10 Random Slide: Colonel John R. Boyd’s OODA “Loop”
[Diagram: Boyd's OODA loop. Observe → Orient → Decide (Hypothesis) → Act (Test), with feed-forward links between stages; implicit guidance & control runs from Orient back to Observe and forward to Act. Orientation draws on cultural traditions, genetic heritage, previous experience, new information, and analyses & synthesis. Feedback from the action and the unfolding interaction with the environment returns to observation, along with unfolding circumstances and outside information.]
Notes: This slide (but not the following notes) is copyright the estate of John R. Boyd. Permission was granted to use this slide in these talks as long as the attribution remains. The OODA (Observe, Orient, Decide, Act) "loop/cycle" was developed by John Boyd as a way of describing competitors, with each participant (or group) having its own OODA loop. John R. Boyd was a retired USAF Colonel. As a military serviceman, he wrote the book on air-to-air tactics, was a driving force behind the F-15, F-16, and F-18 designs, and developed many of the concepts used in current US military strategies and tactics. Robert Coram's biography "Boyd" is a good read for the curious. (A good reference for understanding the OODA concepts themselves is still needed, however.) Note how orientation shapes observation, shapes decision, shapes action, and in turn is shaped by the feedback and other phenomena coming into our sensing or observing window. Also note how the entire "loop" (not just orientation) is an ongoing many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection. From "The Essence of Winning and Losing," John R. Boyd, January 1996. From Defense and the National Interest, copyright 2001 the estate of John Boyd; used with permission.

11 Random Slide: What is the OODA loop?
The OODA (Observe, Orient, Decide, Act) cycle was designed as a semi-formal model of adversarial decision making:
- Really a complex nest of feedback loops
- Originally designed to represent strategic and tactical decision making
- Implicit shortcuts are critical in human-based systems
- Every participant or group has its own OODA loop
Attack the opponent's decision-making process:
- Avoid/confuse/manipulate the opponent's observation/detection: stealthy worms
- Take advantage of errors in orientation/analysis: not seen yet, but it will begin to happen!
- Move faster than the opponent's reaction time: why autonomous worms outrace "human-in-the-loop" systems
Reactive worm defenses need fully automated OODA loops: the fastest accurate OODA loop usually wins.
Notes: The OODA loop is a semi-formal model of adversarial decision making: each participant has their own "loop", and groups create loops as well. The term "loop" is a misnomer; rather, it is a collection of numerous feedback loops. It was primarily developed to model strategies and tactics, based around the idea of attacking the opponent's decision-making process rather than just the opponent's physical resources. This is critical in understanding worm defenses, because there are at least two competing processes: the worm's and the defense's. During propagation, if a worm can avoid triggering the detection mechanisms, it avoids the entire defense, as nothing ever gets triggered. Likewise, any errors in the orientation/analysis portion can be exploited. Finally, and this is the worm's greatest advantage against the defenders: if one side's OODA loop operates well within the reaction timescale of the opponent, the opponent really can't do anything, because he is always one (or many) steps behind, and falls farther behind over time. As long as humans are in the response path necessary to stop a worm, the defense's OODA loop is always too slow.
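The reaction-time argument can be made concrete with back-of-the-envelope exponential growth. The numbers below are illustrative (Slammer's observed doubling time early in its spread was on the order of seconds):

```python
def infected_after(t_seconds, doubling_time, initial=1, population=100_000):
    """Hosts held by an exponentially growing worm after t seconds,
    capped at the vulnerable population."""
    return min(population, initial * 2 ** (t_seconds / doubling_time))

# A worm doubling every ~8.5 s saturates a 100,000-host vulnerable
# population well inside a 15-minute human-in-the-loop response window,
# so the human defender acts only after the race is already lost.
```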

12 Random Slide: Automated OODA Loops
Since both the worms and the worm-defense routines are automatic while a fast worm is spreading, the OODA loops become much simpler:
- No implicit paths; everything is now explicit
- Orientation and decision making are combined
- Communication is also made explicit
- The OODA loops are shaped by the designer's goals, objectives, and skills
- Observation is often critical for both sides
[Diagram: simplified automated loop. Observe (passive, local, and active observation) feeds a combined Orient/Decide stage (automatic decision making), which drives Act (information control and local actions); control feedback and interaction with the environment return to observation, and explicit communication links decision-making systems.]
Notes: In an automated loop there are no implicit fast paths, as all paths are now explicit. Orientation and decision are combined simply because orientation in the original OODA formulation represents implicit decision making, as opposed to explicit decision making; with an automated system there is no longer a significant distinction between the two. Communication is made explicit in this simplification. It is implicit in the original OODA formulation: actions can communicate to others (both friend and foe), and communication is one of the areas observed. But when thinking about both worms and worm defense, communication becomes such an important part (in creating a wider viewpoint) that this version explicitly includes communication between decision-making systems. Likewise, since observation techniques can leak information to the opponents, this diagram separates observation into three classes: passive, local, and active.
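The collapsed loop the slide describes (observation feeding a combined orient/decide stage, whose output drives an action that changes what is observed next) can be sketched with three explicit callables. All names here are hypothetical illustration, not part of the talk:

```python
def automated_ooda(observe, orient_decide, act, rounds):
    """Run a fully automated OODA loop for a fixed number of rounds:
    no implicit paths, and orientation/decision merged into one
    explicit step, as in the slide's simplification."""
    state = None
    for _ in range(rounds):
        obs = observe(state)          # Observe: sense the current situation
        decision = orient_decide(obs) # Orient/Decide: one automatic step
        state = act(decision)         # Act: change the environment,
                                      # which alters the next observation
    return state
```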
