
1 Human Factor vs. Technology
Joanna Rutkowska, Invisible Things Lab. Gartner IT Security Summit, London, 17 September 2007.

2 Basic Definitions…

3 Message of this talk The human factor is not the weakest link in IT security: the technology factor is as weak as the human factor! "Human factor" is usually used to describe the user's unawareness ("stupidity") or the admin's incompetence, NOT the developer's incompetence and NOT the system designer's incompetence. Security consumers → "human factor"; security vendors → "technology factor".

4 Getting Into the System
Exploiting the user's unawareness/incompetence: social engineering, bad configuration. Exploiting a technological weakness: a software flaw (e.g. a buffer overflow; see the sketch below) or a protocol weakness (e.g. MitM). The usual goal: arbitrary code execution on the target system.
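A minimal, self-contained sketch of the kind of software flaw mentioned above (a stack buffer overflow); the function name, buffer size and input source are hypothetical and chosen only to illustrate the bug class:

```c
/* Hypothetical example of a stack buffer overflow: 'input' is attacker
 * controlled, and inputs longer than the buffer overwrite the saved
 * return address, which is what ultimately yields arbitrary code
 * execution in the context of this process. */
#include <stdio.h>
#include <string.h>

static void parse_name(const char *input)
{
    char buf[32];
    strcpy(buf, input);      /* BUG: no length check on attacker data */
    printf("hello, %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        parse_name(argv[1]); /* e.g. data from a network request or a file */
    return 0;
}
```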

5 After Getting In…
"Break and escape": e.g. website defacement, file deletion; the goal is to cause damage, not a lasting compromise. "Steal and escape": steal confidential files, database records, etc., then escape after the data theft without compromising the system; problems: encrypted data, passwords stored only as hashes. "Install some malware": compromise the system for full control!

6 Prevention Approaches…

7 Prevention Approaches
Signature-based; user education; AI-based (anomaly detection) host IPSes; OS hardening (anti-exploitation); least-privilege design; code verification.

8 Signature based approaches
Protect against "user's stupidity" by blacklisting known attack patterns, e.g. certain phishing e-mails. Protect against technological weaknesses by having a signature for a specific exploit (the majority) or a generic signature for an attack technique (unfortunately the minority). No protection against unknown (targeted) attacks! All major A/V vendors have been alerting about an increasing number of targeted attacks since 2006; targeted → we usually don't have a signature.
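As a rough illustration of what signature-based scanning boils down to, here is a toy scanner that searches a file for fixed byte patterns; the signature strings are invented, and real A/V engines add unpacking, wildcards and emulation on top of this:

```c
/* Toy signature scanner: search a file for known byte patterns.
 * The signature names and patterns below are made up for illustration. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

struct signature { const char *name; const char *pattern; };

static const struct signature sigs[] = {
    { "Example.Dropper.A", "evil-marker-1" },
    { "Example.Phish.B",   "verify your password immediately" },
};

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);

    char *data = malloc(len + 1);
    if (!data || fread(data, 1, len, f) != (size_t)len) { fclose(f); return 1; }
    data[len] = '\0';
    fclose(f);

    /* strstr stops at NUL bytes; a real scanner matches binary-safely. */
    for (size_t i = 0; i < sizeof(sigs) / sizeof(sigs[0]); i++)
        if (strstr(data, sigs[i].pattern))
            printf("match: %s\n", sigs[i].name);

    free(data);
    return 0;
}
```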

9 User’s education Increase awareness among users and competences of system administrators Should eliminate most of the social engineering based attacks, e.g. sending a malware via Can not protect against attacks exploiting flaws in software, i.e. exploits “Keeping your A/V up to date” does not address the problem of targeted attacks © Invisible Things Lab,

10 AI (anomaly based) Using "Artificial Intelligence" (heuristics) to detect "abnormal" patterns of behavior (e.g. iexplore.exe starting cmd.exe) or of network traffic (e.g. suspicious connections). Problems: no guarantee to detect anything, and false positives! Do you think "AI" can solve problems better than "HI" (Human Intelligence)? ;)
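A toy sketch of one behavioral heuristic of the kind mentioned on the slide (a browser spawning a shell); the event structure and name lists are assumptions, since real host IPS products intercept process creation in the kernel and apply many such rules:

```c
/* Toy behavioral rule: flag a web browser spawning a command shell. */
#include <stdio.h>
#include <string.h>

struct proc_event { const char *parent; const char *child; };

static int looks_suspicious(const struct proc_event *e)
{
    static const char *browsers[] = { "iexplore.exe", "firefox.exe" };
    static const char *shells[]   = { "cmd.exe", "powershell.exe" };

    for (size_t i = 0; i < 2; i++)
        for (size_t j = 0; j < 2; j++)
            if (strcmp(e->parent, browsers[i]) == 0 &&
                strcmp(e->child, shells[j]) == 0)
                return 1;
    return 0;
}

int main(void)
{
    struct proc_event e = { "iexplore.exe", "cmd.exe" };   /* sample event */
    if (looks_suspicious(&e))
        printf("ALERT: %s spawned %s\n", e.parent, e.child);
    return 0;
}
```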

11 Anti-Exploitation Make the exploitation process (very) hard!
Stack protection: StackGuard for UNIX-like systems (1998), Microsoft /GS stack protection (2003). Address Space Layout Randomization (ASLR): PaX project for Linux (2001), Vista ASLR (Microsoft, 2007). Non-executable pages: PaX project for Linux (2000), OpenBSD's W^X (2003), Windows NX/DEP. Other technologies.
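One way to see the effect of ASLR, assuming a system with randomization enabled (e.g. Vista or a PaX/modern Linux kernel): run the sketch below twice and compare the printed addresses; the fact that they change between runs is exactly what breaks exploits that rely on hard-coded addresses. (StackGuard-style protection, by comparison, is a compile-time option, e.g. gcc's -fstack-protector.)

```c
/* Print a few addresses; with ASLR on, stack and heap locations differ
 * from run to run (the static variable moves only for binaries built as
 * position-independent executables). */
#include <stdio.h>
#include <stdlib.h>

static int in_data_segment;

int main(void)
{
    int on_stack = 0;
    void *on_heap = malloc(16);

    printf("stack variable : %p\n", (void *)&on_stack);
    printf("heap allocation: %p\n", on_heap);
    printf("static variable: %p\n", (void *)&in_data_segment);

    free(on_heap);
    return 0;
}
```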

12 Least Privilege and Privilege Separation
Limit the scope of an attack by limiting the rights/privileges of the components exposed to the attack (e.g. processes). Least privilege principle: every process (or other entity) has the minimal set of rights necessary to do its job. (How many people work using the Administrator's account?) Privilege separation: different programs have different, non-overlapping competences…
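A rough sketch of privilege separation in the style popularized by OpenSSH, assuming a Unix-like system: a privileged parent keeps the dangerous rights while an unprivileged child parses the untrusted input; the "nobody" account and the trivial message format are assumptions, not a real protocol.

```c
/* Privileged parent + unprivileged worker talking over a socketpair.
 * Dropping to 'nobody' only succeeds when started with root privileges. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>
#include <pwd.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(void)
{
    int sv[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) != 0) { perror("socketpair"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                              /* child: unprivileged worker */
        close(sv[0]);
        struct passwd *pw = getpwnam("nobody");
        if (pw && setgid(pw->pw_gid) == 0 && setuid(pw->pw_uid) == 0) {
            /* Parse untrusted input here; a compromise is now confined
             * to the rights of 'nobody'. Only the result goes upward. */
            const char *result = "parsed-ok";
            if (write(sv[1], result, strlen(result) + 1) < 0)
                perror("write");
        }
        _exit(0);
    }

    close(sv[1]);                                /* parent: keeps the privileges */
    char buf[64] = { 0 };
    if (read(sv[0], buf, sizeof(buf) - 1) > 0)
        printf("worker reported: %s\n", buf);
    waitpid(pid, NULL, 0);
    return 0;
}
```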

13 Example: Vista’s User Account Control
An attempt to force people to adhere to the least privilege principle. All of the user's processes run by default with restricted privileges; when the user wants to perform an operation which requires more privileges, a popup appears asking for credentials. Goal: if a restricted process gets exploited, the attacker does not automatically get administrator's rights! Many implementation problems though; in February 2007 Microsoft announced that UAC is not… a security feature!

14 Example: Privilege Separation
Different accounts for different tasks, e.g.: joanna, the main account used to log in; joanna.web, used to run Firefox; joanna.email, used to run Thunderbird; joanna.sensitive, with access to the /projects directory, used to run the password manager and another instance of the web browser for banking. Easy to implement on Linux, or even on Vista! On Vista we rely on User Interface Privilege Isolation (UIPI). A hypothetical launcher for such a scheme is sketched below.
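A hypothetical launcher for the account-per-task scheme above on a Linux system; it would itself need enough privilege (e.g. being started via sudo) for setuid() to succeed, and the joanna.web account name is the slide's illustrative one:

```c
/* Drop to the dedicated 'joanna.web' account and start the browser, so a
 * browser exploit only gets that account's rights. */
#include <stdio.h>
#include <unistd.h>
#include <pwd.h>

int main(void)
{
    struct passwd *pw = getpwnam("joanna.web");
    if (!pw) { perror("getpwnam(joanna.web)"); return 1; }

    if (setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0) {
        perror("dropping to joanna.web");
        return 1;
    }

    execlp("firefox", "firefox", (char *)NULL);  /* runs with joanna.web's rights only */
    perror("execlp(firefox)");
    return 1;
}
```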

15 Problems with priv-separation
If the attacker exploits a bug in the kernel or in one of the kernel drivers (e.g. the graphics card driver)… then she has full control over the system and can bypass all the protection offered by the OS! This is a common problem of all general purpose OSes based on a monolithic kernel, e.g. Linux and Windows. Drivers are the weakest point in OS security: hundreds of 3rd party drivers, all running with kernel privileges! We will get back to this later…

16 Avoiding Bugs and Code Verification
Developer education, e.g. Microsoft and the Security Development Lifecycle (SDL). Fuzzing: generate random "situations" and see when the software crashes; currently the favorite bughunter's technique (a toy mutation fuzzer is sketched below). Code auditing: very expensive, requires experienced experts; few automatic tools exist to support the process. Formal verification methods: manual methods only for very small projects (a few thousand lines of code); no mature automatic tools yet (still 5-10 years?).
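A very small mutation fuzzer in the spirit of the slide, as a sketch only: it flips random bytes in a valid seed file, feeds the result to a hypothetical target binary and watches for crashes. The TARGET path and the iteration/mutation counts are assumptions; real fuzzers add coverage feedback, corpus management and crash triage.

```c
/* Toy mutation fuzzer: mutate a seed input and watch the target for crashes. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>
#include <sys/wait.h>

#define TARGET "./parser_under_test"   /* hypothetical program being fuzzed */

int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s <seed-file>\n", argv[0]); return 1; }

    FILE *f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long len = ftell(f);
    fseek(f, 0, SEEK_SET);
    if (len <= 0) { fclose(f); return 1; }

    unsigned char *seed = malloc(len);
    if (!seed || fread(seed, 1, len, f) != (size_t)len) { fclose(f); return 1; }
    fclose(f);

    srand((unsigned)time(NULL));

    for (int iter = 0; iter < 1000; iter++) {
        unsigned char *buf = malloc(len);
        memcpy(buf, seed, len);
        for (int i = 0; i < 8; i++)                       /* mutate a few bytes */
            buf[rand() % len] ^= (unsigned char)(rand() & 0xff);

        FILE *out = fopen("fuzz_input.bin", "wb");
        if (out) { fwrite(buf, 1, len, out); fclose(out); }
        free(buf);

        int status = system(TARGET " fuzz_input.bin");
        if (status != -1 && WIFSIGNALED(status))          /* target crashed */
            printf("iteration %d: target killed by signal %d\n",
                   iter, WTERMSIG(status));
    }
    free(seed);
    return 0;
}
```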

17 How Prevention Fails In Practice…

18 Example: the ANI bug (MS07-017, April 2007)
"This vulnerability can be exploited by a malicious web page or HTML e-mail message and results in remote code execution with the privileges of the logged-in user. The vulnerable code is present in all versions of Windows up to and including Windows Vista. All applications that use the standard Windows API for loading cursors and icons are affected. This includes Windows Explorer, Internet Explorer, Mozilla Firefox, Outlook and others." Source: Determina Security Research.

19 ANI Bug vs. Vista Code Review and Testing Process?
Microsoft admitted their fuzzers were not tuned to catch this bug in their code… Anti-exploitation technologies? /GS stack protection failed, because compiler heuristics decided not to apply it to the buggy function! NX usually fails, because IE and Explorer have DEP disabled by default! ASLR could be bypassed due to implementation weaknesses!

20 ANI Bug vs. Vista UAC? UAC allows IE to run in so-called Protected Mode (PM). However, PM is not designed to protect the user's information! It only protects against modification of the user's data! Also, Microsoft announced that UAC/Protected Mode cannot be treated as a security boundary, i.e. expect that it will be easy to break out of Protected Mode…

21 ANI Bug vs. educated user?
To exploit this bug it's enough to get a user to browse a compromised page (or open an e-mail)… No special action from the user is required! The exploit can be very reliable: even an experienced user might not realize that he or she has just been attacked!

22 ANI vs. A/V Attack was discovered in December 2006
The information was published in April 2007. What if it had been discovered by a "black hat" even earlier? Do you really believe there was only one person on the planet capable of discovering it? Why would A/V block/detect such an attack when the information about it was not public?

23 Going further… So, now we see that technology cannot protect even a smart user from being exploited… We saw an attack scenario in which an exploit bypasses various anti-exploitation techniques and eventually gets admin access to the system… The next goal is usually to install some rootkit, in other words to get into the kernel… But we have Kernel Protection on Vista!

24 Digital Driver Signing…
"Digital signatures for kernel-mode software are an important way to ensure security on computer systems." "Windows Vista relies on digital signatures on kernel mode code to increase the safety and stability of the Microsoft Windows platform." "Even users with administrator privileges cannot load unsigned kernel-mode code on x64-based systems." Quotes from the official Microsoft documentation: Digital Signatures for Kernel Modules on Systems Running Windows Vista.

25 Example: Vista Kernel Protection Bypassing
Presented by Invisible Things Lab at Black Hat in August 2007. Exploiting bugs in 3rd party kernel drivers, e.g. the ATI Catalyst driver and the NVIDIA nTune driver. It is not important whether the buggy driver is present on the target system: a rootkit can always bring it there! There are hundreds of vendors providing kernel drivers for Windows… and all those drivers share the same address space with the kernel…

26 Buggy Drivers: Solution?
Binary code validation/verification: today we do not have tools to automatically analyze binary code for the presence of bugs; there are only some heuristics, which produce too many false positives and also miss more subtle bugs. There are some efforts at validating C source programs, e.g. ASTREE, but they are still very limited, e.g. they assume no dynamic memory allocation in the input program. Effective binary code verification is a very distant future.

27 Buggy Drivers: Solutions?
Drivers in ring 1 (address space shared among drivers): not a good solution today (lack of IOMMU). Drivers in usermode: drivers execute in their own address spaces in ring 3, giving very good isolation of faulty/buggy drivers from the kernel. Examples: MINIX 3, which supports all drivers, but still without an IOMMU; Vista UMDF, which supports drivers only for a small subset of devices (PDAs, USB sticks); most drivers cannot be written using UMDF though.

28 Message I believe it's not possible to implement effective kernel protection on general purpose OSes based on a monolithic kernel architecture. Establishing a 3rd party driver verification authority might raise the bar, but will not solve the problem. Move on towards a microkernel-based architecture!

29 Moral Today’s prevention technology does not always work…
In how many cases does it work vs. fail?

30 How secure is our system?
In how many cases does our prevention fail? This is a meaningless question! If you know that a certain type of attack is (practically) possible, then the system is simply insecure! "The system is not compromised with probability 98%"?! "The cat is alive with probability 50%"?! What does that mean?

31 Detection for the Rescue!

32 Detection Detection is used to verify that prevention works
Detection cannot replace prevention. E.g. data theft: even if we detect it, we cannot make the attacker "forget" the data she has stolen!

33 Detection Host-Based Network Based
Host-based: tries to find out whether the current OS and applications have been compromised or not; e.g. A/V products. Network-based: tries to detect attacks by analyzing network traffic, e.g. detecting a known exploit or suspicious connections; network IDSes, sometimes combined with a firewall into IPS systems.
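As a toy illustration of the "suspicious connections" idea on a Linux host, the sketch below walks /proc/net/tcp and flags established connections to TCP port 6667, a port historically associated with IRC-controlled bots; treating a single port as "suspicious" is of course a gross simplification of what an IDS/IPS does.

```c
/* Flag established IPv4 connections to a "suspicious" remote port. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/tcp", "r");
    if (!f) { perror("/proc/net/tcp"); return 1; }

    char line[512];
    fgets(line, sizeof(line), f);                  /* skip the header line */

    while (fgets(line, sizeof(line), f)) {
        unsigned int rem_ip, rem_port, state;
        /* Fields: sl local_address rem_address st ... (addresses in hex) */
        if (sscanf(line, "%*d: %*8x:%*4x %8x:%4x %2x",
                   &rem_ip, &rem_port, &state) == 3) {
            if (state == 0x01 /* ESTABLISHED */ && rem_port == 6667)
                printf("suspicious: connection to %u.%u.%u.%u:%u\n",
                       rem_ip & 0xff, (rem_ip >> 8) & 0xff,
                       (rem_ip >> 16) & 0xff, (rem_ip >> 24) & 0xff,
                       rem_port);
        }
    }
    fclose(f);
    return 0;
}
```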

34 Stealth Malware rootkits, backdoors, keyloggers, etc…
Stealth is the key feature! Stealth means that legitimate processes (A/V) can't see it; stealth means that the administrator (admin tools) can't see it; stealth means that we should never know whether we're infected or not!

35 Paradox… If stealth malware does its job well…
…then we cannot detect it… so how can we know that we are infected?

36 How do we know that we were infected?
We count on a bug in the malware! We hope that the author forgot about something! We use hacks to detect some known stealth malware (e.g. hidden processes). We need to change this! We need a systematic way to check system integrity, a solution which would allow us to detect malware that is not buggy!
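A sketch of the "systematic integrity check" idea: record checksums of critical files once and re-check them later, Tripwire-style. The file list is just an example, the FNV-1a checksum stands in for a real cryptographic hash, and (as the following slides point out) none of this helps if a compromised kernel lies about file contents.

```c
/* Baseline-style integrity check: print a checksum of each critical file;
 * store the output once and diff it on later runs. */
#include <stdio.h>
#include <stdint.h>

static uint64_t fnv1a_file(const char *path)
{
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    uint64_t h = 0xcbf29ce484222325ULL;          /* FNV-1a offset basis */
    int c;
    while ((c = fgetc(f)) != EOF) {
        h ^= (uint64_t)(unsigned char)c;
        h *= 0x100000001b3ULL;                   /* FNV-1a prime */
    }
    fclose(f);
    return h;
}

int main(void)
{
    const char *critical[] = { "/bin/ls", "/bin/ps", "/sbin/init" };
    for (size_t i = 0; i < sizeof(critical) / sizeof(critical[0]); i++)
        printf("%016llx  %s\n",
               (unsigned long long)fnv1a_file(critical[i]), critical[i]);
    return 0;
}
```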

37 State of Detection Current detection products cannot deal well with targeted stealth malware. We need a systematic way of checking for system compromises, but unfortunately current OSes are too complex! We can't even reliably read system memory, due to various attacks, e.g. against DMA-based memory acquisition. But… maybe we should not be afraid of targeted stealth malware? Maybe it's just FUD?

38 Targeted Stealth Malware?
Gartner: 10 Key Predictions for 2007: #5: By the end of 2007, 75 percent of enterprises will be infected with undetected, financially motivated, targeted malware that evaded their traditional perimeter and host defenses. (source: eWeek based on Gartner)

39 Prevention vs. Detection
Prevention is not perfect, as we saw. Detection is very immature. We should have better detection to verify our prevention mechanisms. OS complexity is a problem when verifying system integrity. There is no way to implement effective detection without cooperation from the OS vendors!

40 Human Factor vs. Technology
"User stupidity" is only part of the problem (a small part). Many modern attacks do not require the user to do anything "stupid" or suspicious (e.g. WiFi driver exploitation). There is no technology on the market that offers unbreakable prevention, and even competent admins cannot do much about it. Current technology does not even allow detection of much modern stealth malware! Security-conscious users cannot find out whether their systems have been compromised; they can only count on the attacker's mistakes!

41 Final Message Human Factor is a weak link in computer security,
But the technology is also flawed! We should work on improving the technology just as we work on educating users… Unfortunately the challenges here are much bigger, mostly due to the excessive complexity of current OSes. As a savvy user, I would like to have technology that would protect me. I don't have it today! Not even effective detection! Cooperation from OS vendors is required!

42 Invisible Things Lab Focus on Operating System Security
In contrast to application security and network security. Targeting 3 groups of customers: vendors (assessing their products, advising); corporate customers / security consumers (unbiased advice about which technology to deploy); law enforcement and forensic investigators (education about current threats, e.g. stealth malware).

43 Thank You
Joanna Rutkowska, Invisible Things Lab
joanna@invisiblethingslab.com

44 Topics For Roundtable Discussion
Virtualization-based malware (a slightly technical topic): how is it different from "classic" kernel malware? Should we be afraid? Defense approaches. Tricky tricks: why should we avoid tricks when building security? Built-in security vs. 3rd-party-provided security? "Dumb users": human factor vs. technology; can users be educated in security? Should they be?

