
1 Rootkit-based Attacks and Defenses: Past, Present and Future
Vinod Ganapathy, Rutgers University
Joint work with Liviu Iftode, Arati Baliga, Jeffrey Bickford (Rutgers), Andrés Lagar-Cavilla and Alex Varshavsky (AT&T Research)

2 What are rootkits?
Tools used by attackers to conceal their presence on a compromised system
Typically installed after the attacker has obtained root privileges
Stealth is achieved by hiding accompanying malicious user-level programs
Rootkits = stealthy malware

3 Rootkit-based attack scenario
[Diagram: a rootkit-infected kernel (modified kernel code and data) hides a key logger and a backdoor from the anti-virus; sensitive information such as credit card numbers and SSNs flows to the attacker over the Internet.]
Rootkits hide malware from anti-malware tools

4 Significance of the problem
Microsoft reported that 7% of all infections on client machines involve rootkits (2010).
Rootkits are the vehicle of choice for botnet-based attacks, e.g., Torpig, Storm.
– They allow bot-masters to retain long-term control.
A number of high-profile cases have been based on rootkits:
– Stuxnet (2010), Sony BMG (2005), the Greek wiretapping scandal (2004/5)

5 Evolution of rootkits
USER SPACE: system binaries (/usr/bin/ls, /usr/bin/ps, /usr/bin/login), shared libraries
KERNEL SPACE: system call table, IDT, kernel code, process lists
BELOW OS KERNEL: hypervisor-based rootkits (SubVirt, BluePill)
BELOW HYPERVISOR: device/firmware rootkits (Stuxnet)
Focus of this talk: kernel-level rootkits

6 Manipulating control data
[Diagram: a user-space program calls open(); the system call table entry that should point to sys_open has been redirected to the attacker's evil_open in the kernel.]
Change function pointers: Linux Adore rootkit
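A toy user-space model of this hijack (illustrative only; it mimics the mechanism, not the Adore rootkit's actual kernel code, and the names sys_open/evil_open simply follow the diagram above):

    /* The kernel's system call table is an array of function pointers; a
     * rootkit swaps one entry so every call is routed through its handler. */
    #include <stdio.h>

    typedef long (*syscall_fn)(const char *path);

    static long sys_open(const char *path) { printf("sys_open(%s)\n", path); return 3; }

    static long evil_open(const char *path)
    {
        printf("evil_open: hiding %s\n", path);   /* attacker logic runs first */
        return sys_open(path);                    /* then fall through to the real handler */
    }

    static syscall_fn sys_call_table[] = { sys_open };

    int main(void)
    {
        sys_call_table[0]("/etc/passwd");         /* before: normal behavior */
        sys_call_table[0] = evil_open;            /* the rootkit's one-pointer change */
        sys_call_table[0]("/etc/passwd");         /* after: control flow hijacked */
        return 0;
    }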

7 Manipulating non-control data
[Diagram: processes A, B and C are linked on both the all-tasks list and the run list; the hidden process has been unlinked from the all-tasks list but remains on the run list.]
Change non-control data: Windows FU rootkit
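A similarly stripped-down model of this list manipulation (singly-linked lists and made-up field names for illustration, not the kernel's real doubly-linked task lists):

    /* Direct kernel object manipulation in the style of the FU rootkit: the
     * process is unlinked from the list that ps walks, but stays on the
     * scheduler's run list, so it keeps executing while remaining hidden. */
    #include <stdio.h>

    struct task {
        const char *name;
        struct task *all_next;   /* "all-tasks" list, used for process listings */
        struct task *run_next;   /* run list, used by the scheduler */
    };

    /* unlink the victim from the all-tasks list only; the run list is untouched */
    static void hide(struct task *prev_on_all_list, struct task *victim)
    {
        prev_on_all_list->all_next = victim->all_next;
    }

    int main(void)
    {
        struct task c = { "C (to be hidden)", NULL, NULL };
        struct task b = { "B", &c, &c };
        struct task a = { "A", &b, &b };

        hide(&b, &c);   /* C disappears from listings but remains schedulable */

        printf("ps sees:        ");
        for (struct task *t = &a; t; t = t->all_next) printf("%s  ", t->name);
        printf("\nscheduler sees: ");
        for (struct task *t = &a; t; t = t->run_next) printf("%s  ", t->name);
        printf("\n");
        return 0;
    }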

8 Manipulating non-control data
Goal: attack the kernel's pseudorandom number generator (PRNG) [Baliga et al., 2007]
[Diagram: external entropy sources feed the primary entropy pool (512 bytes), which feeds the secondary entropy pool (128 bytes) behind /dev/random and the urandom entropy pool (128 bytes) behind /dev/urandom.]
The operating system kernel presents a vast attack surface for rootkits.

9 Detecting rootkits: Main idea
Observation: rootkits operate by maliciously modifying kernel data structures
– Modify function pointers to hijack control flow
– Modify process lists to hide malicious processes
– Modify polynomials to corrupt the output of the PRNG
Continuously monitor the integrity of kernel data structures

10 Continuously monitor the integrity of kernel data structures
Challenge: the data structure integrity monitor must be independent of the monitored system
Solution: use external hardware, such as a coprocessor, or a hypervisor to build the monitor
[Diagram: an external data structure integrity monitor observes kernel code and kernel data, including the system call table, PRNG pools and process lists.]

11 Continuously monitor the integrity of kernel data structures
Challenge: must monitor kernel code, control data and non-control data structures
Solution: periodically fetch and monitor all of kernel memory

12 Continuously monitor the integrity of kernel data structures
Challenge: specifying the properties to monitor
Solution: use anomaly detection
– Inference phase: infer data structure invariants
– Detection phase: enforce data structure invariants

13 Rootkit detection using invariants
[Diagram: the system call table entry redirected to evil_open (slide 6) violates the invariant below.]
Invariant: function pointer values in the system call table should not change
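One way such an invariant could be enforced from the external monitor (a sketch, under the assumption that the monitor can read the table out of the fetched memory pages; the names and table size here are illustrative, not Gibraltar's code):

    #include <stdio.h>

    #define NR_SYSCALLS 338

    /* values recorded from a known-good kernel during the inference phase */
    static unsigned long baseline[NR_SYSCALLS];

    /* returns 0 if the invariant holds, -1 on a violation (possible rootkit) */
    static int check_syscall_table(const unsigned long snapshot[NR_SYSCALLS])
    {
        for (int i = 0; i < NR_SYSCALLS; i++) {
            if (snapshot[i] != baseline[i]) {
                printf("ALERT: handler for syscall %d changed: %#lx -> %#lx\n",
                       i, baseline[i], snapshot[i]);
                return -1;
            }
        }
        return 0;
    }

    int main(void)
    {
        unsigned long snapshot[NR_SYSCALLS] = { 0 };
        snapshot[5] = 0xdeadbeef;   /* simulate one hijacked entry */
        return check_syscall_table(snapshot) ? 1 : 0;
    }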

14 Rootkit detection using invariants
[Diagram: the hidden process (slide 7) is present on the run list but missing from the all-tasks list, which violates an invariant relating the two lists.]

15 Rootkit detection using invariants
[Diagram: the entropy pools from slide 8.]
Invariants:
poolinfo.tap1 is one of {26, 103}
poolinfo.tap2 is one of {20, 76}
poolinfo.tap3 is one of {14, 51}
poolinfo.tap4 is one of {7, 25}
poolinfo.tap5 == 1

16 Gibraltar: a new rootkit detection tool
– Identifies rootkits that modify control and non-control data
– Automatically infers specifications of data structure integrity
– Is physically isolated from the target machine

17 Architecture of Gibraltar
[Diagram: target and monitor machines connected by Myrinet NICs. The Gibraltar daemon on the monitor (1) fetches memory pages containing the target's kernel code and data, (2) reconstructs data structures and checks them against the invariant DB, and (3) alerts the user on a violation.]

18 Components of Gibraltar
[Diagram: a page fetcher reads physical memory; a data structure extractor uses root symbols and kernel data definitions to build snapshots; during training, an invariant generator instantiates invariant templates; during enforcement, an enforcer checks the generated invariants.]

19 Data structure extractor
Inputs:
– Memory pages from the target machine
– Root symbols: entry points into the target's kernel
– Type definitions of the target's kernel
Output: snapshot of the target's memory
Main idea: traverse memory pages using root symbols and type definitions and extract data structures
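A very rough sketch of that traversal idea (the type descriptors, visited set and snapshot access below are simplified stand-ins for Gibraltar's actual machinery; here the "snapshot" is just local memory):

    #include <stdio.h>
    #include <stddef.h>

    struct type;
    struct field { size_t offset; struct type *target; };   /* a pointer-typed field */
    struct type  { const char *name; struct field *fields; int nfields; };

    #define MAX_OBJS 1024
    static const void *visited[MAX_OBJS];
    static int nvisited;

    static int seen(const void *p)
    {
        for (int i = 0; i < nvisited; i++)
            if (visited[i] == p) return 1;
        return 0;
    }

    /* In Gibraltar this would read a pointer value out of the fetched pages. */
    static const void *read_ptr(const void *addr) { return *(const void *const *)addr; }

    static void extract(const void *obj, const struct type *t)
    {
        if (!obj || seen(obj) || nvisited == MAX_OBJS) return;
        visited[nvisited++] = obj;
        printf("extracted an object of type %s\n", t->name);
        for (int i = 0; i < t->nfields; i++)          /* follow every pointer field */
            extract(read_ptr((const char *)obj + t->fields[i].offset),
                    t->fields[i].target);
    }

    /* toy kernel type: a task with a "next" pointer, as in a process list */
    struct task { struct task *next; };

    int main(void)
    {
        struct type task_type = { "task", NULL, 0 };
        struct field task_fields[] = { { offsetof(struct task, next), &task_type } };
        task_type.fields = task_fields;
        task_type.nfields = 1;

        struct task c = { NULL }, b = { &c }, a = { &b };  /* root symbol, e.g. init_task */
        extract(&a, &task_type);
        return 0;
    }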

20 Invariant generator
Executes during a controlled inference phase
Inputs: memory snapshots from a benign (uninfected) kernel
Output: likely data structure invariants
Invariants serve as specifications of data structure integrity

21 Invariant generator
Used an off-the-shelf tool: Daikon [Ernst et al., 2000]
Daikon observes executions of user-space programs and hypothesizes likely invariants
We adapted Daikon to reason about snapshots
– Obtain snapshots at different times during training
– Hypothesize likely invariants across snapshots
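A toy illustration of the kind of inference involved, for the "one of {...}" invariants shown on slide 15 (this is not Daikon's interface; the field name, threshold and snapshot values are assumptions made for the example):

    #include <stdio.h>
    #include <stdbool.h>

    #define MAX_VALUES 8   /* give up if a field takes too many distinct values */

    struct invariant {
        unsigned long values[MAX_VALUES];   /* distinct values observed in training */
        int nvalues;
        bool valid;
    };

    static void observe(struct invariant *inv, unsigned long v)
    {
        if (!inv->valid) return;
        for (int i = 0; i < inv->nvalues; i++)
            if (inv->values[i] == v) return;             /* already seen */
        if (inv->nvalues == MAX_VALUES) { inv->valid = false; return; }
        inv->values[inv->nvalues++] = v;
    }

    int main(void)
    {
        /* e.g., poolinfo.tap1 as observed in five training snapshots */
        unsigned long snapshots[] = { 26, 103, 26, 26, 103 };
        struct invariant inv = { .valid = true };

        for (int i = 0; i < 5; i++)
            observe(&inv, snapshots[i]);

        if (inv.valid) {
            printf("poolinfo.tap1 is one of {");
            for (int i = 0; i < inv.nvalues; i++)
                printf("%s%lu", i ? ", " : "", inv.values[i]);
            printf("}\n");      /* prints: poolinfo.tap1 is one of {26, 103} */
        }
        return 0;
    }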

22 Invariant enforcer
Observes and enforces invariants on the target's execution
Inputs:
– Invariants inferred during training
– Memory pages from the target
Algorithm:
– Extract snapshots of the target's data structures
– Enforce the invariants

23 Experimental evaluation
How effective is Gibraltar at detecting rootkits? (i.e., what is the false negative rate?)
What is the quality of the automatically generated invariants? (i.e., what is the false positive rate?)

24 Experimental setup
Implemented on an Intel Xeon 2.80GHz machine with 1GB RAM, running Linux
Fetched memory pages using a Myrinet PCI card
– We also have a Xen-based implementation.
Obtained invariants by training the system on several benign workloads

25 False negative evaluation
Conducted experiments with 23 Linux rootkits
– 14 rootkits from PacketStorm
– 9 advanced rootkits discussed in the literature
All rootkits modify kernel control and non-control data
Installed the rootkits one at a time and tested the effectiveness of Gibraltar at detecting the infection

26
Rootkit name | Data structures affected | Source
1. Adore-0.42 | System call table | PacketStorm
2. All-root | System call table | PacketStorm
3. Kbd | System call table | PacketStorm
4. Kis-0.9 | System call table | PacketStorm
5. Linspy2 | System call table | PacketStorm
6. Modhide | System call table | PacketStorm
7. Phide | System call table | PacketStorm
8. Rial | System call table | PacketStorm
9. Rkit-1.01 | System call table | PacketStorm
10. Shtroj2 | System call table | PacketStorm
11. Synapsys-0.4 | System call table | PacketStorm
12. THC Backdoor | System call table | PacketStorm
13. Adore-ng | VFS hooks / UDP recvmsg | PacketStorm
14. Knark-2.4.3 | System call table, proc hooks | PacketStorm
15. Disable Firewall | Netfilter hooks | Baliga et al., 2007
16. Disable PRNG | VFS hooks | Baliga et al., 2007
17. Altering RTC | VFS hooks | Baliga et al., 2007
18. Defeat signature scans | VFS hooks | Baliga et al., 2007
19. Entropy pool | struct poolinfo | Baliga et al., 2007
20. Hidden process | Process lists | Petroni et al., 2006
21. Linux Binfmt | | Shellcode.com
22. Resource waste | struct zone_struct | Baliga et al., 2007
23. Intrinsic DOS | int max_threads | Baliga et al., 2007

27 False positive evaluation
Ran a benign workload for 42 minutes:
– Copying the Linux kernel source code
– Editing a text document
– Compiling the Linux kernel
– Downloading eight videos from the Internet
– Performing file system operations using the IOzone benchmark
Measured how many invariants were violated

28 False positive evaluation
Only considered persistent invariants, i.e., those that survived machine reboots during our evaluation
– Total of 236,444 invariants
– 0.035% were spuriously violated during normal operation (82 unique invariants)
Can also infer transient invariants
– These had a much higher false positive rate (0.65%)
– Example: init_fs->root->d_sb->s_dirty.next->i_dentry.next->d_child.prev->d_inode->i_fop.read == 0xeff9bf60

29 Performance evaluation
Training time: 56 minutes total
– 25 minutes to collect snapshots (15 snapshots in total)
– 31 minutes to infer invariants
Detection time: ranges from 15 seconds up to 132 seconds
PCI overhead: 0.49%, measured using the STREAM benchmark

30 Part 1 (the past and present): Detecting kernel-level rootkits using data structure invariants
Part 2 (the future): Security versus energy tradeoffs for host-based mobile rootkit detection

31 The rise of mobile malware
Mobile devices are increasingly ubiquitous:
– They store personal and contextual information.
– They are used for sensitive tasks, e.g., online banking.
Mobile malware has immense potential to cause societal damage.
Kaspersky Labs report (2009):
– 106 types of mobile malware.
– 514 variants.
Prediction: we have only seen the tip of the iceberg.

32 Are mobile rootkits possible?
OS | Lines of code
Linux 2.6 kernel | 10 million
Android | 20 million
Symbian | 20 million
Complexity comparable to desktops

33 The threat of mobile rootkits
Several recent reports of mobile malware gaining root access.
iPhone:
– iKee.A, iKee.B (2009).
– Exploited jailbroken iPhones via SSH.
Android:
– GingerMaster, DroidDeluxe, DroidKungFu (2011).
– Apps that perform root exploits against Android.

34 What can a mobile rootkit do?
Snoop on private phone conversations
Track user location using GPS
Send sensitive documents to the attacker
Stealthily enable the camera and microphone
Exhaust the battery

35 Detecting mobile rootkits: a host-based approach
[Diagram: detection tools run in a trusted domain alongside the mobile OS, on top of a hypervisor over the mobile hardware.]
Detection tools run in a trusted domain
Mobile hypervisors are arriving soon:
– VMware
– OKL4 Microvisor (Evoke)
– Xen on ARM (Samsung)

36 Main challenge: battery power
Desktop machines can execute host-based malware detection systems 24/7.
Mobile devices are limited by their battery.
Rootkit detection mechanisms in their current form have a high energy cost.
– With Gibraltar, battery life decreases 2x faster.
– Running the detector only when charging is not an option.

37 Security/energy tradeoff
Host-based security monitors will consume energy
– Optimizing for energy means less security
This results in a security/energy tradeoff
Our goal:
– Formally characterize and quantify this tradeoff
– Use the tradeoff to configure the security monitor to balance security and energy

38 How to conserve energy?
Frequency of checks: when to check?
– Scan less frequently
– Timing-based versus event-based checks
Attack surface: what to check?
– Scan fewer code/data objects
Is there a sweet spot?

39 Experimental platform
Viliv S5
– Intel Atom 1.33GHz, 1.5W.
– 3G, WiFi, GPS, Bluetooth.
– Battery: mWh.
Xen hypervisor
– Evaluated the tradeoff using two existing rootkit detectors within the trusted domain:
– Gibraltar and Patagonix [USENIX Security 2008]
Workloads
– 3G and WiFi workloads simulating user browsing.
– lmbench for a CPU-intensive workload.

40 Experimental setup

41 Gibraltar: Checking data integrity
[Diagram: the Gibraltar daemon runs in a trusted domain above the hypervisor. It (1) fetches data pages from the guest domain's kernel, (2) reconstructs data structures and checks them against the invariant DB, and (3) alerts the user on a violation.]
Invariants on 2209 data structures

42 Evaluating Gibraltar
Original design: a continuous scan
    while (1) {
        for all kernel data structures {
            get current value
            check against invariant
        }
    }
Maximum security, but 100% CPU usage and poor energy efficiency
[Chart: power draw of the device when idle versus under a continuous scan.]
Must trade security for energy

43 Tradeoffs for Gibraltar
Frequency of checks: poll frequency (seconds); the original design of Gibraltar polls continuously (poll frequency 0)
Attack surface: static data, function pointers, all lists, process list, or all data
[Chart: the tradeoff space spanned by poll frequency and attack surface.]

44 Modifying check frequency
Original (continuous scan):
    while (1) {
        for all kernel data structures {
            get current value
            check against invariant
        }
    }
Modified (idle between scans):
    while (1) {
        every x seconds {
            for all kernel data structures {
                get current value
                check against invariant
            }
        }
    }

45 Results: Modifying check frequency
[Chart: energy consumption versus poll frequency; a sweet spot appears at intermediate polling intervals.]

46 Modifying attack surface monitored
Original:
    while (1) {
        for all kernel data structures {
            get current value
            check against invariant
        }
    }
Modified:
    while (1) {
        for a subset of data structures {
            get current value
            check against invariant
        }
    }

47 Results: Modifying attack surface
[Chart: energy consumption versus the attack surface monitored; monitoring kernel code, static data, function pointers and lists covers 96% of rootkits [Petroni et al., CCS 07].]

48 Patagonix: Checking code integrity
[Diagram: the Patagonix daemon runs in a trusted domain above the hypervisor. When a code page is about to execute in the guest domain (OS or applications), the daemon computes hash(page) and looks it up in a hash DB of hashes for binary files; if the hash is known it resumes the guest, otherwise it alerts the user.]
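A sketch of the per-page identity check this describes (the hash function and lookup below are simple stand-ins, not Patagonix's actual hashing or database format):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define PAGE_SIZE 4096

    /* 64-bit FNV-1a over a page of code (illustrative choice of hash) */
    static uint64_t hash_page(const uint8_t page[PAGE_SIZE])
    {
        uint64_t h = 0xcbf29ce484222325ULL;
        for (size_t i = 0; i < PAGE_SIZE; i++) {
            h ^= page[i];
            h *= 0x100000001b3ULL;
        }
        return h;
    }

    /* database of hashes computed offline from trusted binary files */
    static const uint64_t known_hashes[] = { 0 /* ... */ };

    static int page_is_trusted(const uint8_t page[PAGE_SIZE])
    {
        uint64_t h = hash_page(page);
        for (size_t i = 0; i < sizeof(known_hashes) / sizeof(known_hashes[0]); i++)
            if (known_hashes[i] == h)
                return 1;   /* identified: resume the guest */
        return 0;           /* unknown code page: alert the user */
    }

    int main(void)
    {
        uint8_t page[PAGE_SIZE];
        memset(page, 0x90, sizeof(page));   /* stand-in for injected code */
        printf(page_is_trusted(page) ? "resume guest\n" : "ALERT: unidentified code page\n");
        return 0;
    }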

49 Tradeoffs for Patagonix
Frequency of checks: event threshold, i.e., the number of pages executed between checks
Attack surface: kernel code, root processes, or all code (the original design of Patagonix)
[Chart: the tradeoff space spanned by event threshold and attack surface.]

50 Results: Modifying check frequency
[Chart: overhead versus event threshold; low overhead after the initial checks.]

51 Results: Modifying attack surface
[Chart: overhead for each attack surface monitored.]

52 Putting it all together
Monitor kernel code and static data, function pointers and lists: protects against 96% of known attacks.
Use the polling sweet spot of 30 seconds.

53 Rootkit-based Attacks and Defenses: Past, Present and Future
Vinod Ganapathy
Thank You
References:
– Gibraltar: ACSAC 2008, IEEE TDSC
– Mobile rootkits: HotMobile 2011, MobiSys 2011

54 Example: Conversation snooping
[Diagram: the attacker sends a "dial me" SMS to the rootkit-infected phone; the rootkit deletes the SMS, silently calls the attacker, and turns on the microphone so the attacker can listen in.]

55 Feasibility of cloud offload
Cloud offload is impractical energy-wise

