Presentation on theme: "Nicholas Weaver Vern Paxson Stuart Staniford UC Berkeley ICIR"— Presentation transcript:
1 Wormholes and a Honeyfarm: Automatically Detecting Novel Worms (and other random stuff)
Nicholas Weaver (UC Berkeley), Vern Paxson (ICIR), Stuart Staniford (Silicon Defense)
2 Problem: Automatically Detecting New Worms
- Detect a new worm on the Internet before many machines are infected
- Use this information to guide defenses
  - 30-60 seconds to detect (and stop) Slammer
- Honeypots are accurate detectors
  - Monitor egress to detect worms
  - k vulnerable honeypots will detect a worm when ~1/k of the vulnerable machines are infected
- But impractical
  - Cost: time, not machines
  - Trust: must trust all honeypots!
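The ~1/k detection threshold can be checked with a small Monte Carlo sketch. This is not code from the talk; the population sizes and the random-scanning model (each new infection drawn uniformly from the pool of vulnerable hosts plus honeypots) are illustrative assumptions.

```python
import random

def infected_fraction_at_detection(vulnerable=100_000, honeypots=100, seed=1):
    """Simulate a random-scanning worm and report what fraction of the
    vulnerable population is infected when the first honeypot is hit.
    All parameters are illustrative, not figures from the talk."""
    rng = random.Random(seed)
    pool = vulnerable + honeypots   # honeypots look like vulnerable hosts
    infected = 0
    while True:
        victim = rng.randrange(pool)
        if victim < honeypots:      # a honeypot was hit: detection fires
            return infected / vulnerable
        infected += 1               # a real vulnerable machine was infected

# Averaged over many runs, k honeypots detect the worm when roughly a
# 1/k fraction of the vulnerable machines have been infected.
```

The waiting time until the first honeypot hit is geometric with success probability k/(V+k), so the expected number of prior infections is about V/k, i.e. a 1/k fraction.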
3 Idea: Split the Network Endpoints from the Honeypots
- Wormholes are traffic tunnels
  - Route connections to a remote system
  - Untrusted endpoints
- Honeyfarm consists of Virtual Machine honeypots
  - Creates virtual honeypots on demand
  - See honeynet.org
- Route internally generated traffic to other images
- Classify based on what can be infected
Notes: This is the resulting split. The endpoints, "wormholes", are traffic tunnels. The honeyfarm uses virtual machine techniques so it doesn't need to maintain a honeypot for every endpoint; given the number of wormholes, images must be created on demand. For more information on virtual honeypots, see the Honeynet Project.
4 How Wormholes Work
- Low cost "appliance":
  - Plugs into the network, obtains an address through DHCP
  - Contacts the honeyfarm
  - Reconfigures the local network stack to fool nmap-style detection
  - Forwards all traffic to/from the honeyfarm
- Clear box:
  - Deployers have source code
  - Restrictions built into the wormhole code so it doesn't trust the honeyfarm: it can't contact the local network!
- Instead of (or in addition to) wormholes, one can...
  - Route small telescopes to the honeyfarm
  - Route ALL unused addresses in an institution...
Notes: The goal of the wormholes is to design traffic tunnels that people will actually deploy. Exception: DNS requests can be sent to the local DNS server. The wormhole CAN forward requests from the honeyfarm out, to defeat "phone home" based detection strategies, but it can't forward requests to the local network, allowing it to be deployed more safely. The goal is to build devices which the deployers can trust, but the honeyfarm does not have to. Since the functionality is simple, we can run on low cost, commodity hardware. We might also want to route entire address ranges; although sensitive, such ranges represent more valuable "secrets" from an attacker's viewpoint, so they can't be relied on as heavily. (Slide image is a random DS9 screen capture found on the net.)
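The wormhole's trust restriction, "forward anything for the honeyfarm except connections into the local network, with the local DNS server as the one exception", can be sketched as a forwarding policy check. The subnet and resolver addresses below are made-up deployment parameters, not values from the talk.

```python
import ipaddress

# Hypothetical deployment parameters for illustration only.
LOCAL_NET = ipaddress.ip_network("10.1.0.0/16")   # assumed local subnet
DNS_SERVER = ipaddress.ip_address("10.1.0.53")    # assumed local resolver

def may_forward(dst: str) -> bool:
    """Policy sketch: the wormhole relays honeyfarm traffic outward but
    refuses to originate connections into its local network, so deployers
    need not trust the honeyfarm."""
    addr = ipaddress.ip_address(dst)
    if addr == DNS_SERVER:            # DNS to the local resolver is allowed
        return True
    return addr not in LOCAL_NET      # any other local destination is refused
```

Outbound forwarding stays open so that "phone home" detection by the worm fails, while the local-network ban bounds the damage a compromised honeyfarm could do.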
5 How a Honeyfarm Works
- Creates Virtual Machine images to implement honeypots
  - Using VMware or similar
  - Images exist "in potential" until traffic is received
  - Niels Provos suggested: use honeyd as a first-pass filter
- Completes the illusion that a honeypot exists at every wormhole location
- On any traffic received from a wormhole:
  - Activate and configure a VM image
  - Forward traffic to the VM image
- Honeypot-image-generated traffic is monitored and redirected
[Diagram: a wormhole at IP aa.bb.cc.dd tunnels traffic to the honeyfarm, which hosts VM images at IPs xx.xx.xx.xx, aa.bb.cc.dd, and aa.bb.cc.ee]
Notes: The honeyfarm needs to use virtual machine honeypots, as real machines would require too many resources. The goal is to detect behaviors in the system, so we need images almost ready to go; they are reconfigured and connected when traffic is received. We monitor the traffic to serve as the detector.
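The "images exist in potential" dispatch logic amounts to lazy instantiation keyed by wormhole address. A minimal sketch, with VM creation stubbed out behind a caller-supplied factory (the class and method names are illustrative, not from the talk):

```python
class Honeyfarm:
    """Sketch of on-demand VM dispatch; actual VM control is abstracted
    behind a create_vm factory (e.g. a wrapper around VMware)."""

    def __init__(self, create_vm):
        self.create_vm = create_vm
        self.images = {}            # wormhole IP -> live VM image

    def handle_packet(self, wormhole_ip, packet):
        # The first packet for an address instantiates and configures a
        # VM for that address; later packets reuse the live image.
        vm = self.images.get(wormhole_ip)
        if vm is None:
            vm = self.create_vm(wormhole_ip)
            self.images[wormhole_ip] = vm
        return vm.receive(packet)
```

This is what sustains the illusion of a honeypot at every wormhole location without paying for one VM per wormhole.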
6 What Could We Automatically Learn From a Honeyfarm?
- A new worm is in the Internet
  - Triggered based on the ability to infect VMs
- What the worm is capable of
- Types of vulnerable configurations
  - Including patch level
  - Creates a "vulnerability signature"
- Some overt, immediate malicious behavior
  - Immediate file erasers, etc.
  - Possible attack signatures
- Works best for tracking:
  - Human attackers
  - Scanning worms
    - Slow enough to react effectively
    - Randomness hits wormholes
Notes: Note the one limitation: we can only detect worms which can infect the VM images. We note capabilities based on which other VM images the captured worm can succeed in infecting. A "vulnerability signature" could be used by response mechanisms to block all traffic to vulnerable machines, without affecting immune machines. Noticing overtly malicious behavior is useful secondary information, but not the primary objective.
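A "vulnerability signature" as described here is just the set of configurations the captured worm can successfully infect, derived by replaying the worm against differently configured VM images. A sketch under that reading; the configuration names are invented for illustration:

```python
def vulnerability_signature(worm_infects, configurations):
    """Return the set of configurations (OS + patch level) that the
    captured worm can infect. `worm_infects` abstracts replaying the
    worm against a VM image with the given configuration."""
    return {cfg for cfg in configurations if worm_infects(cfg)}

# A response mechanism could then block traffic only to machines whose
# configuration is in the signature, leaving immune machines untouched.
```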
7 What Trust is Needed?
- Wormhole deployers:
  - Need to trust the wormhole devices, not the honeyfarm operator
  - Wormhole locations are "open secrets": attackers know of some wormholes, but most are generally unknown
- Honeyfarm operator:
  - Does not trust the wormhole deployers
  - Detection is based on infected honeypots, not traffic from a wormhole
  - Dishonest wormholes are filtered out
- Responding systems receiving an alert:
  - Either trust that the honeyfarm and operator are honest and uncompromised
  - OR rely on multiple, independent honeyfarms all raising an alarm
    - "If CERT and DOD-CERT say..."
Notes: When building distributed systems, and especially systems which trigger automatic responses, we need to be very concerned about trust issues. The wormhole deployers trust the wormholes, NOT the honeyfarm. The honeyfarm doesn't trust the wormhole deployers. Responding systems either trust the honeyfarm, or trust two independent honeyfarms to both raise an alarm. (Slide image copyright Sony.)
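The "multiple independent honeyfarms" option is a simple quorum rule: respond only when enough independent operators raise the same alarm. A hedged sketch; the farm names echo the slide's "CERT and DOD-CERT" example and the quorum size is an assumption:

```python
def accept_alert(alarms, independent_farms=("CERT", "DOD-CERT"), quorum=2):
    """Trigger an automatic response only if at least `quorum` of the
    configured independent honeyfarms have raised the alarm. This keeps
    a single compromised or dishonest honeyfarm from driving responses."""
    raised = sum(1 for farm in independent_farms if farm in alarms)
    return raised >= quorum
```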
8 Status and Acknowledgements
- Status: paper design
  - Idea, attacks, costs, development time
  - Lots of attacks on the honeyfarm system, and possible defenses
  - Plan to build the honeyfarm first, attached to a small telescope
  - Wormholes can be built for <$350, no moving parts, 50 watts of power, at quantity 1
- Acknowledgements:
  - Honeypot technology: Honeynet Project, honeyd, DTK
  - Feedback from many people: Stefan Savage, David Moore, David Wagner, Niels Provos, and many others
9 Random Slide: 1 Gb (ASAP), 10 Gb (+2-3 years)
- Need wiring-closet defenses:
  - As close to the endpoint as possible; need to be reprogrammable
- <$1000 for GigE today (build for $500)
  - Optical ideal, +$100 for 1000-Base-T
- <$2000 for 10 GigE in 2-3 years (build for $1000)
  - New FPGAs with SERDESes, embedded processors, massive parallelism and pipelining
[Diagram: FPGA board with two DIMMs, two 1000-Base-T PHYs, and SX transceivers]
10 Random Slide: Colonel John R. Boyd's OODA "Loop"
[Diagram: the Observe-Orient-Decide-Act loop. Unfolding circumstances, outside information, and unfolding interaction with the environment feed Observation; Orientation (shaped by cultural traditions, genetic heritage, new information, previous experience, and analyses & synthesis) feeds forward to Decision (hypothesis) and Action (test), and exerts implicit guidance & control over both observation and action; feedback from decision and action returns to observation.]
Notes: This slide (but not the following notes) is copyright the estate of John R. Boyd. Permission was granted to use this slide in these talks as long as the attribution remains. The OODA (Observe, Orient, Decide, Act) "loop"/cycle was developed by John Boyd as a way of describing competitors, with each participant (or group) having its own OODA loop. John R. Boyd was a retired USAF colonel. As a military serviceman, he wrote the book on air-to-air tactics, was a driving force behind the F-15, F-16, and F-18 designs, and developed many of the concepts used in current US military strategies and tactics. Robert Coram's biography "Boyd" is a good read for the curious. (A good reference for understanding the OODA concepts themselves is still needed.) Note how orientation shapes observation, shapes decision, shapes action, and in turn is shaped by the feedback and other phenomena coming into our sensing or observing window. Also note how the entire "loop" (not just orientation) is an ongoing many-sided implicit cross-referencing process of projection, empathy, correlation, and rejection. From "The Essence of Winning and Losing," John R. Boyd, January 1996; via Defense and the National Interest, copyright 2001 the estate of John Boyd, used with permission.
11 Random Slide: What is the OODA Loop?
- The OODA (Observe, Orient, Decide, Act) cycle was designed as a semi-formal model of adversarial decision making
  - Really a complex nest of feedback loops
  - Originally designed to represent strategic and tactical decision-making
  - Implicit shortcuts are critical in human-based systems
  - Every participant or group has its own OODA loop
- Attack the opponent's decision making process
  - Avoid/confuse/manipulate the opponent's observation/detection
    - Stealthy worms
  - Take advantage of errors in orientation/analysis
    - Not seen yet, but it will begin to happen!
  - Move faster than the opponent's reaction time
    - Why autonomous worms outrace "human-in-the-loop" systems
    - Reactive worm defenses need fully-automated OODA loops
- The fastest accurate OODA loop usually wins
Notes: The OODA loop is a semi-formal model of adversarial decision making: each participant has their own "loop", and groups create loops as well. The term "loop" is a misnomer; rather, it is a collection of numerous feedback loops. It was primarily developed to model strategies and tactics, based around the idea of attacking the opponent's decision making process rather than just the opponent's physical resources. This is critical in understanding worm defenses, because there are at least two competing processes: the worms and the defenses. During propagation, if a worm can avoid triggering the detection mechanisms, it can evade the entire defense, as nothing will be triggered. Likewise, any errors in the orientation/analysis portion can be exploited. Finally, and this is the worm's greatest advantage against the defenders: if an individual's OODA loop is operating effectively within the reaction timescale of the opponent, then the opponent really can't do anything, because he's always one (or many) steps behind, and gets farther behind over time. As long as humans are in the response path necessary to stop a worm, the defense OODA loop is always insufficiently slow.
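Why reaction time dominates can be made concrete with a logistic growth model of worm spread. The doubling time (~8.5 s, Slammer-like) and initial infected fraction below are illustrative assumptions, not figures from the talk:

```python
import math

def infected_fraction(t, rate=math.log(2) / 8.5, i0=1e-5):
    """Logistic worm-growth model: fraction of vulnerable hosts infected
    at time t seconds. Parameters are assumptions for illustration
    (~8.5 s doubling time, 1-in-100,000 initially infected)."""
    return 1.0 / (1.0 + ((1.0 - i0) / i0) * math.exp(-rate * t))

# An automated defense reacting in ~60 s meets a mostly-uninfected
# population; a human-in-the-loop response taking ~10 minutes arrives
# after essentially every vulnerable host is infected.
fast = infected_fraction(60)
slow = infected_fraction(600)
```

Under these assumptions the fast loop acts while well under 1% of hosts are infected, while the slow loop acts after saturation, which is the quantitative content of "outracing the opponent's reaction time".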
12 Random Slide: Automated OODA Loops
- Since both the worms and worm-defense routines are automatic while a fast worm is spreading, the OODA loops are much simpler
  - No implicit paths; everything is now explicit
  - Orientation and decision making are combined
  - Communication is also made explicit
- The OODA loops are shaped by the designer's goals, objectives, and skills
- Observation is often critical for both sides
[Diagram: a simplified loop of Observe (passive, local, and active observation of the interaction with the environment) feeding Orient/Decide (automatic decision making, with communication between decision-making systems), which controls Act (actions), with feedback returning to observation.]
Notes: In an automated loop, there are no implicit fast-paths; all paths are now explicit. Orientation and decision are combined, simply because orientation in the original OODA formulation represents implicit decision making, as opposed to explicit decision making; with an automated system, there is no longer a significant distinction between the two. Communication is made explicit in this simplification. It is implicit in the original OODA formulation: actions can communicate to others (both friend and foe), and communication is one of the areas observed. But when thinking about both worms and worm-defense, communication becomes such an important part (in creating a wider viewpoint) that this version explicitly includes communication between decision-making systems. Likewise, since observation techniques can leak information to the opponents, this diagram isolates observation into three classes: passive, local, and active.
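The simplified automated loop, observe, then a fused orient/decide step, then act, with feedback only through new observations, can be written as an explicit cycle. A minimal sketch; the function names and structure are illustrative, not from the talk:

```python
def run_loop(observe, orient_decide, act, ticks):
    """Explicit automated OODA cycle: no implicit paths, orientation and
    decision fused into one step, every transition visible in code."""
    state = None
    log = []
    for _ in range(ticks):
        observation = observe()                    # Observe
        state, decision = orient_decide(state, observation)  # Orient/Decide
        log.append(act(decision))                  # Act; feedback arrives
    return log                                     # via later observations
```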