Using Failure Information Analysis to Detect Enterprise Zombies (Zhaosheng Zhu, Vinod Yegneswaran, Yan Chen)


1 Using Failure Information Analysis to Detect Enterprise Zombies
Zhaosheng Zhu (1), Vinod Yegneswaran (2), Yan Chen (1)
(1) Department of Electrical and Computer Engineering, Northwestern University
(2) Computer Science Laboratory, SRI International
SecureComm 2009: International ICST Conference on Security and Privacy in Communication Networks

2 Outline
– Introduction
– An Empirical Survey of Application Failure Anomalies
  – Malware Trace Analysis
  – Failure Patterns of Normal Applications
  – On the Potential of Failure Analysis to Uncover Suspicious Activities
– Architecture
– Correlation and Clustering Engine
– Evaluation
– Related Work
– Conclusion

3 Netfuse: Failure Information
The focus of the study is detecting self-propagating malware such as worms and botnets. The authors begin by conducting an empirical study of transport- and application-layer failure activity using a collection of long-lived malware traces, with two goals:
– identifying the failure patterns of malware
– automatically detecting and isolating malware-like failure patterns

4 1. Introduction
Enterprise network threats
– worms, self-propagating bots, spamming bots, client-side infections (drive-by downloads), and phishing attacks
– between tens of thousands and more than a hundred thousand incidents per month
Existing shields
– Network intrusion detection systems (NIDS)
– Antivirus (AV)

5 Introduction (cont’d)
NIDS (network intrusion detection systems)
– Knowledge-based: signatures for well-known exploits and intrusions
  Drawback: reliable and accurate performance requires constant maintenance of the knowledge base to reflect the latest vulnerabilities
– Behavior-based: a predefined model of normal behavior; deviations from the known model are flagged as anomalies
  Drawback: the inherent difficulty of building robust models of normal behavior

6 Introduction (cont’d)
Antivirus (AV)
– monitors end hosts
– performs periodic system scans and real-time monitoring, checking existing files and process images against a dictionary of malware signatures
– poor day-zero detection: only 3 of 39 AV engines detected the Conficker A and B worms

7 Introduction (cont’d)
Objective
– be independent of the malware family, requiring no a priori knowledge of malware semantics or command-and-control (C&C) mechanisms
Motivation
– many malware communication patterns (at the transport and application levels), such as portscans, result in abnormally high failure rates
Tools: network protocol analyzers (Wireshark) and L7 filters

8 Introduction (cont’d)
Netfuse
– a behavior-based detection system whose model of malicious behavior is derived from underlying protocol analyzers
– its novelty lies in its use of multipoint failure monitoring for support vector machine (SVM)-based classification of malware failure profiles
Failures monitored
– transport level: TCP RSTs, ICMP errors
– application level: TCP/25 (SMTP), TCP/80 (HTTP), UDP/53 (DNS), and TCP/6667 (IRC), as well as these protocols on non-standard ports
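The two-level failure taxonomy above can be sketched as a small classifier. This is a minimal illustration only; the event names and the port mapping below are invented placeholders, not identifiers from the Netfuse implementation:

```python
# Illustrative sketch of Netfuse's failure taxonomy: transport-level vs.
# application-level failures. All names here are hypothetical.

TRANSPORT_FAILURES = {"TCP_RST", "ICMP_UNREACHABLE"}

APPLICATION_FAILURES = {          # failure type -> standard port it is seen on
    "DNS_NXDOMAIN": "UDP/53",
    "HTTP_4XX": "TCP/80",
    "SMTP_REJECT": "TCP/25",
    "IRC_ERROR": "TCP/6667",
}

def failure_level(event_type):
    """Classify an observed failure event as transport- or application-level."""
    if event_type in TRANSPORT_FAILURES:
        return "transport"
    if event_type in APPLICATION_FAILURES:
        return "application"
    return "unknown"
```

A monitor would feed each parsed failure event through such a mapping before the per-host scoring described later in the deck.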

9 2. An Empirical Survey of Application Failure Anomalies
Case studies
– the failure patterns of malware, using over 30 long-lived (5-8 hour) malware traces
– the failure profiles of several normal applications: web crawlers, P2P software, and popular video sites

10 Malware Trace Analysis
32 different malware samples
– run in a controlled virtual machine
– obtained from the authors' honeynet, malicious email attachments, and the Offensive Computing website [6]
– tcpdump traces of all network activity were collected
A diverse set of failures
– broken C&C channels
– scanning and spam delivery attempts
– malware instances periodically retry failed communication attempts

11 Malware Trace Analysis (cont’d)
8 out of 32 samples did not generate failures
– 2 worms, 3 IRC bots, 3 spyware samples
– the well-behaved spyware binaries simply contacted a few active websites
DNS failures
– unresolved domain names or NXDOMAIN responses
– possibly because the C&C servers have been taken down

12 Malware Trace Analysis - DNS (cont’d)
While many well-behaved applications terminate connection attempts after a few failed tries, malware tends to be remarkably persistent in its repeated attempts.
For some bots, such as Kraken, DNS failures can be considered part of normal behavior.
– a failure signals the bot to generate a new list of candidate domains
– the botmaster and the malware may use the same algorithm to generate the next domain names
1,740 DNS failures in about 5 hours
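The shared-algorithm idea can be illustrated with a toy domain-generation function. The hashing scheme below is invented purely for illustration; it is not Kraken's actual algorithm, only a demonstration of how botmaster and bot can independently derive the same domain list:

```python
import hashlib

def candidate_domains(seed, day, count=5, tld=".com"):
    """Toy DGA sketch: derive a deterministic list of candidate domains
    from a shared seed and the current day. Because the derivation is
    deterministic, botmaster and bot compute identical lists; most of the
    bot's lookups fail (NXDOMAIN) until it hits the one the botmaster
    actually registered."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{seed}-{day}-{i}".encode()).hexdigest()
        domains.append(digest[:10] + tld)
    return domains
```

The stream of NXDOMAIN responses produced while walking such a list is exactly the persistent, high-volume DNS failure pattern the slide describes.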

13 Malware Trace Analysis - SMTP (cont’d)
SMTP failures result from spamming behavior.
Certain SMTP servers immediately close the connection after the TCP handshake.
– "550 Recipient address rejected: User unknown"
Storm does not retry a rejected username on the same SMTP server.

14 Malware Trace Analysis – HTTP (cont’d)
Causes
– sending malformed packets for DoS attacks
– querying for a configuration file that has since been removed from the control server
Observed errors
– HTTP 400: Bad or malformed HTTP request
– HTTP 404: File not found

15 Malware Trace Analysis (cont’d)
IRC
– the channel has been removed
– the channel might be full because too many bots have joined
TCP
– connections that complete a TCP handshake but are terminated with a RST before any payload is sent
– because the server has been taken down or is too busy
ICMP
– caused by scanning behavior and the communication patterns of P2P botnets

16 Failure Patterns of Normal Applications
For the website news.sohu.com, there were only 18 transport-layer (TCP) failures and 66 HTTP failures in 2 days.
The authors used BitTorrent to download a popular Linux distribution (Fedora 10) and eMule to download another popular Linux distribution (Ubuntu).

17 Failure Patterns of Normal Applications (cont’d)

18 On the Potential of Failure Analysis to Uncover Suspicious Activities
High volume: malware failures occur frequently at both the transport and application levels.
– except for certain P2P malware, the failures persist
– DNS failures, and NXDOMAIN errors in particular, are common and tend to persist
Low entropy: failures are restricted to a few ports and often a few domains.

19 3. Architecture

20 Tools
Wireshark extracts, per protocol:
– ICMP: error type, client IP
– TCP: client and server IPs, port numbers
– DNS: failure type, domain name, client IP
– FTP, IRC, HTTP, and SMTP: server IP address, error code, client IP address, and detailed failure information
L7-filter handles these protocols on non-standard ports.

21 Tools (cont’d)

22 4. Correlation and Clustering Engine
First, failure information is classified and aggregated based on host IP address, protocol, and failure type.
Four different (normalized) scores are computed for each host:
– (i) composite failure
– (ii) failure divergence
– (iii) failure persistence
– (iv) failure entropy
Then an SVM-based learning technique classifies suspicious hosts.
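The first aggregation step can be sketched directly; the event tuples and IP addresses below are illustrative sample data, not values from the paper:

```python
from collections import Counter

def aggregate_failures(events):
    """Aggregate failure events by (host IP, protocol, failure type),
    the grouping key the correlation engine uses before scoring.
    Each event is a (host, protocol, failure_type) tuple."""
    return Counter(events)

# Hypothetical sample events as they might come out of the protocol analyzers.
events = [
    ("10.0.0.5", "DNS", "NXDOMAIN"),
    ("10.0.0.5", "DNS", "NXDOMAIN"),
    ("10.0.0.7", "HTTP", "404"),
]
counts = aggregate_failures(events)
```

The per-key counts feed the four per-host scores described on the following slides.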

23 (i) Composite Failure
This score estimates the severity of the failures observed for each host, based on volume.
– For every host, a vector {N_i}, where N_i is the number of failures for the i-th protocol; T_i is the total number of failures for the i-th protocol across all hosts.
– Let α_i be the number of application-level failures, β_i the number of TCP RSTs, and γ_i the number of ICMP failures.
– Three constraints:
  α_i > τ, with τ = 15
  β_i > μ(β) + 2σ(β)
  γ_i > μ(γ) + 2σ(γ)
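The three constraints can be checked mechanically. A minimal sketch follows, assuming (since the slide does not say) that all three constraints must hold together and that μ and σ are computed over the per-host counts across the network:

```python
import statistics

def passes_constraints(alpha, beta, gamma, all_beta, all_gamma, tau=15):
    """Check the three composite-failure constraints for one host:
    application-level failures above a fixed threshold tau, and TCP-RST /
    ICMP counts more than two standard deviations above the network mean.
    The AND combination is an assumption, not stated on the slide."""
    mu_b, sd_b = statistics.mean(all_beta), statistics.pstdev(all_beta)
    mu_g, sd_g = statistics.mean(all_gamma), statistics.pstdev(all_gamma)
    return (alpha > tau
            and beta > mu_b + 2 * sd_b
            and gamma > mu_g + 2 * sd_g)
```

Hosts satisfying the constraints would then receive a severity score normalized against the network-wide totals T_i.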

24 (ii) Failure Divergence
Measures the difference between a host's current (daily) failure profile and its past failure profiles, using exponentially weighted moving averages (EWMA).
– Let E_ijt be the expected number of failures for host i on protocol j on day t; the expectation is updated with the EWMA rule and the resulting divergence is normalized. (They set the EWMA weight α to 0.5.)
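The EWMA bookkeeping can be sketched as follows. The slide gives only the weight α = 0.5; the use of the standard EWMA update and an absolute-difference divergence are assumptions for illustration:

```python
def ewma_update(expected, observed, alpha=0.5):
    """Standard EWMA step: blend the new day's observed failure count
    into the running expectation E_ijt. With alpha = 0.5 (the paper's
    setting), today's observation and the history are weighted equally."""
    return alpha * observed + (1 - alpha) * expected

def divergence(expected, observed):
    """Illustrative divergence: how far today's count strays from the
    EWMA expectation. The paper normalizes this; the exact normalization
    is not shown on the slide."""
    return abs(observed - expected)
```

A host whose failure count suddenly jumps far above its EWMA history receives a large divergence score.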

25 (iv) Failure Entropy
For every server H_i, the number of failures N_i involving it is recorded; the same is repeated for each server port P_i.
– DNS: the entropy of the failed domain names
– HTTP, FTP, IRC, and SMTP: the entropy of the distribution of failure types
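Shannon entropy over such a distribution is a standard computation; a minimal sketch:

```python
import math
from collections import Counter

def failure_entropy(observations):
    """Shannon entropy (in bits) of a failure-attribute distribution,
    e.g. the queried domain names or the HTTP error codes seen for one
    host. Malware failures concentrated on a few values give low entropy;
    diverse benign failures give higher entropy."""
    counts = Counter(observations)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

This matches the "low entropy" observation earlier in the deck: a bot hammering one dead C&C domain scores near zero.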

26 (iii) Failure Persistence
Malware failures tend to be long-lived. The time horizon is simply split into N parts (N = 24 in the prototype implementation), and the score is the percentage of parts in which a failure occurs. High failure persistence provides yet another useful indicator of potential malware infection.
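The bucketing is straightforward; a minimal sketch, assuming the horizon is one day measured in seconds:

```python
def failure_persistence(timestamps, horizon=86400, parts=24):
    """Split the horizon into equal parts (24 hourly buckets in the
    prototype) and return the fraction of buckets containing at least
    one failure. Persistent malware retries fill many buckets; a benign
    one-off burst fills few."""
    width = horizon / parts
    occupied = {int(t // width) for t in timestamps if 0 <= t < horizon}
    return len(occupied) / parts
```

A bot retrying its C&C every few minutes all day scores 1.0, while a single failed download clusters into one bucket and scores 1/24.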

27 SVM-based Algorithm to Classify Suspicious Hosts
An SVM learns a hyperplane that separates positive and negative examples with maximal margin.
– implemented with the publicly available tool WEKA
– the input to the system is a series of four-dimensional vectors, where each vector holds the four scores of an individual host
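The paper trains its SVM in WEKA. As a self-contained stand-in for the idea of learning a separating hyperplane over the four scores, here is a tiny perceptron on invented toy data (a perceptron finds some separating hyperplane, not the maximal-margin one an SVM finds; the vectors and labels below are fabricated for illustration):

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn a linear decision boundary w.x + b = 0 over four-dimensional
    score vectors (composite, divergence, persistence, entropy).
    Labels are +1 (suspicious) or -1 (benign)."""
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1
            if pred != y:  # misclassified: nudge the hyperplane toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def classify(w, b, x):
    """Apply the learned hyperplane to one host's score vector."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Fabricated training data: suspicious hosts have uniformly high scores.
samples = [(0.9, 0.8, 0.9, 0.9), (0.8, 0.9, 0.8, 0.8),
           (0.1, 0.1, 0.1, 0.1), (0.2, 0.1, 0.2, 0.1)]
labels = [1, 1, -1, -1]
w, b = train_perceptron(samples, labels)
```

The real system replaces this training rule with WEKA's SVM but consumes the same four-score vectors.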

28 Detecting Failure Groups
Goal: determine whether suspicious hosts belong to the same botnet (similarity).
Each type of failure can be represented as a set of pairs (F_i, N_i), where F_i is the failure property and N_i is the number of failures with this property.
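Comparing two hosts' (F_i, N_i) profiles needs a similarity measure; the slide does not name one, so the cosine similarity below is one reasonable, illustrative choice:

```python
import math

def failure_similarity(a, b):
    """Cosine similarity between two hosts' failure profiles, each a
    dict mapping failure property F_i to count N_i. Hosts in the same
    botnet fail on the same properties and score near 1; unrelated
    hosts share few properties and score near 0. (Illustrative metric;
    not necessarily the paper's exact choice.)"""
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Pairwise similarities like this are exactly what a clustering step (next slide) consumes.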

29 Clustering
Uses Peter Kleiweg's publicly available clustering package [1].
[1] Data clustering. http://www.let.rug.nl/kleiweg/clustering/
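The paper delegates clustering to Kleiweg's package; as a simple stand-in for the grouping idea, here is a greedy single-linkage grouping over a pluggable similarity function (the threshold of 0.8 and the toy similarity in the test are invented for illustration):

```python
def cluster_hosts(hosts, similarity, threshold=0.8):
    """Greedy single-linkage grouping: a host joins the first existing
    cluster containing any member it is sufficiently similar to,
    otherwise it starts a new cluster. A simple stand-in for the
    hierarchical clustering package used in the paper."""
    clusters = []
    for h in hosts:
        for c in clusters:
            if any(similarity(h, m) >= threshold for m in c):
                c.append(h)
                break
        else:
            clusters.append([h])
    return clusters
```

Fed with pairwise failure-profile similarities, this groups hosts that fail in the same way, i.e. candidate members of one botnet.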

30 5. Evaluation
Data sets
– Malware Trace I: 24 traces from Table 2
– Malware Trace II: five malware families not included in the training set
– Malware Trace III: more than 5,000 malware traces obtained from a sandnet
– Benign: three weeks of traffic from a research institute network (over 100 systems)

31 Classification and Detection Results
Training process
– an example of a rule generated by the SVM algorithm is shown
– on the training data, the detection rate is 97.2% and the false positive rate is 0.3%

32 Classification and Detection Results
Performance evaluation (chart; reported values include 92%, 35% = 90/242, and 5%)

33 Classification and Detection Results
Clustering results

34 Related Work
BotHunter [USENIX Security 2007]
– dialog correlation engine to detect enterprise bots
– models the bot lifecycle: inbound scan / exploit / egg download / C&C / outbound scans
– relies on Snort signatures to detect the different phases
Rishi [HotBots 2007]
– detects IRC bots based on nickname patterns
BotSniffer [NDSS 2008]
– uses spatio-temporal correlation to detect C&C activity
BotMiner [USENIX Security 2008]
– combines clustering with BotHunter and BotSniffer heuristics
– focuses on successful bot communication patterns

35 Conclusions
Failure Information Analysis
– a signature-independent methodology for detecting infected enterprise hosts
Netfuse system
– four components: FIA Engine, DNSMon, Correlation Engine, Clustering
Correlation metrics
– Composite Failure Score, Divergence Score, Failure Entropy Score, Persistence Score
A useful complement to existing network defenses

