Slide 1: Testing the Tester – Measuring Quality of Security Testing

Ofer Maor, CTO, Hacktics
OWASP Israel AppSec 2008 Conference

The OWASP Foundation
OWASP & WASC AppSec 2007 Conference, San Jose – Nov 2007
http://www.owasp.org/
http://www.webappsec.org/

Copyright © 2007 - The OWASP Foundation. Permission is granted to copy, distribute and/or modify this document under the terms of the Creative Commons Attribution-ShareAlike 2.5 License. To view this license, visit http://creativecommons.org/licenses/by-sa/2.5/
Slide 2: Introduction

Security Testing is a Critical Element
- Part of the system's security lifecycle
- Is in fact the QA of the security in the system
- Provides the only way to assess the quality of security

Nonetheless, Quality is Uncertain
- No way to measure the quality of security testing
- No certification or other type of quality ranking
- And even worse – we only know our testing failed when it's too late
- Even PCI and other regulations avoid this issue
Slide 3: Introduction

Creates a Huge Challenge for Organizations
- How to choose the right security testing solution
- How can we guarantee that security testing is sufficient

Makes Budget Decisions Harder
- How can we determine cost effectiveness
- How to deal with purchasing constraints

In This Presentation We Will:
- Discuss aspects of security testing quality
- Discuss means of assessing the quality of testing solutions
Slide 4: Agenda

Quality of Security Testing
- False Positives / False Negatives
- Coverage / Validity
- Business Impact / Threat

Security Testing Approaches (Pros & Cons)
- Black/Grey/White Box
- Penetration Test vs. Code Review
- Automatic vs. Manual

Testing the Tester
- Determining the Right Approach
- Evaluating Tools Quality
- Evaluating Services Quality
Slide 5: Quality of Security Testing (section)
Slide 6: Quality of Security Testing

Quality of Testing is Essentially Measured by Two Elements:

False Negatives (something was missed)
- Most obvious problem
- Exposes the system to attacks

False Positives (something was made up)
- Surprisingly, an equal problem for enterprises
- Generates redundant work and effort
- Creates distrust ("cry wolf" syndrome)
- Not necessarily technological (the flaw is there, but poses no real threat)

(A minimal bookkeeping sketch of these two measures follows below.)
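As a rough editorial illustration (not from the original slides) of how these two measures can be tracked during an evaluation, the following sketch compares a tester's reported findings against a baseline of known, confirmed vulnerabilities. The finding identifiers are hypothetical.

```python
# Minimal sketch: measuring false negatives and false positives of a
# security test against a baseline of known (confirmed) vulnerabilities.
# All finding identifiers below are hypothetical examples.

known_vulnerabilities = {"sql-injection:/login", "xss:/search", "csrf:/transfer"}
reported_findings = {"sql-injection:/login", "xss:/contact", "xss:/search"}

false_negatives = known_vulnerabilities - reported_findings   # missed issues
false_positives = reported_findings - known_vulnerabilities   # made-up issues
true_positives = known_vulnerabilities & reported_findings

print("False negatives:", sorted(false_negatives))
print("False positives:", sorted(false_positives))
print("Detection rate: %.0f%%" % (100.0 * len(true_positives) / len(known_vulnerabilities)))
```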
Slide 7: False Negatives: Reasons

Coverage of Tested Components
- URL/parameter/component missed
- Specific code section not reached in the tested setup
  - Flow
  - Attributes
  - Users

Coverage of Tests
- Specific vulnerability not tested
- Specific variant not tested
Slide 8: False Negatives: Reasons (Cont'd)

Test Quality/Proficiency
- Poorly defined test
  - Attack not properly created
  - Expected result not properly defined
- Poorly run test – lack of proficiency
- Technology changed / security measures exist
  - Test requires some modifications
  - Requires evasion techniques
  - Requires different analysis of results
- More...
Slide 9: Coverage Problems

Application Data Coverage
- Automatic crawling problems
  - Practically infinite links
  - Client-side created links (JS/AJAX) – see the crawler sketch below
  - Proper flow and context data
- Availability
  - 3rd-party components cannot be tested
  - Code unavailable
- Size
  - Too many URLs/parameters/code to cover
  - Insufficient time
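To make the client-side link problem concrete, here is a hypothetical sketch: a naive crawler that only extracts href attributes from the static HTML never sees a URL that is assembled in JavaScript. The page content is invented for illustration and uses only Python's standard library.

```python
# Minimal sketch of why automatic crawlers miss client-side created links.
# The HTML below is a hypothetical page: one link is a plain <a href>,
# the other URL is assembled in JavaScript and never appears as an href.
from html.parser import HTMLParser

PAGE = """
<html><body>
  <a href="/reports/summary">Summary</a>
  <script>
    var base = "/reports/";
    document.write('<a href="' + base + 'details?id=7">Details</a>');
  </script>
</body></html>
"""

class LinkExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

parser = LinkExtractor()
parser.feed(PAGE)
print(parser.links)  # ['/reports/summary'] - the JS-built '/reports/details' URL is never seen
```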
Slide 10: Coverage Problems

Test Coverage
- Vulnerability not tested
  - Impossible to test (logical by automated tools, brute force by manual testing, etc.)
  - Newly discovered vulnerability (not up to date yet)
  - Seemingly insignificant vulnerability
  - Never-before-seen vulnerability (mostly logical)
  - Test may impair availability or reliability
- Variant not tested
  - Too many possible variants (common with injection problems) – may require very specific tweaking (see the payload sketch below)
  - Logical vulnerabilities are extremely dependent on the actual application
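To give a feel for why variant coverage explodes, here is a small hypothetical sketch that enumerates classic SQL injection payload variants from a handful of building blocks. Real engagements involve many more dimensions (encodings, comment styles, database dialects, evasion tricks), which is why a fixed variant list is rarely adaptive enough.

```python
# Minimal sketch: even a few independent payload dimensions multiply quickly.
# The fragments below are illustrative textbook variants, not a complete list.
from itertools import product

quote_styles = ["'", '"', ""]                    # how the injected string breaks out
boolean_tails = ["OR 1=1", "OR 'a'='a'", "AND 1=2"]
comment_styles = ["--", "#", "/*"]               # how the rest of the query is discarded

variants = ["%s %s %s" % combo for combo in product(quote_styles, boolean_tails, comment_styles)]
print(len(variants), "variants from just three small dimensions")
for v in variants[:5]:
    print(repr(v))
```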
Slide 11: Scope of Threat

Who is Trying to Attack Us? What do They Want?
- Not just security testing – the entire security solution should be relative to the threat

Security Testing Cost-Effectiveness:
- Simulates a script kiddie / automated tools?
- Simulates an average hacker?
- Simulates a focused attack with large resources?

When is a missed vulnerability classified as a "false negative" (and when is it simply out of scope)?
Slide 12: False Positives: Reasons

Test Quality/Proficiency
- Poorly defined test
  - Expected result not properly defined
  - Poorly defined validation differentiators
- Poorly run test – lack of proficiency
- Technology changed / security measures exist
  - Result appears vulnerable
    - Honeypots
    - "Patched" security solutions
  - Test unable to correlate to other components (code review)

Technological vulnerability – no threat
- Test does not correlate to the context of the system
Slide 13: Validity

How Can We Tell if It's Really Vulnerable?

Probe Tests – attempt to determine validity by a set of tests and expected results
- May poorly identify a result as a vulnerability
- Very susceptible to honeypots

Exploits – take advantage of a vulnerability to achieve an actual attack
- Very high validity (still might fall for honeypots, though)
- Requires additional effort and may pose a risk

(A probe-versus-exploit sketch follows below.)
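The following is a hedged sketch (not from the slides) of the difference between a probe test and an exploit-style validation for a suspected SQL injection: the probe only compares responses to true/false conditions, while the exploit tries to actually extract a value. The target URL, parameter name, and marker strings are hypothetical placeholders, and only Python's standard library is used.

```python
# Minimal sketch: probe test vs. exploit-style validation for a suspected
# SQL injection. The target URL and parameter are hypothetical placeholders.
import urllib.parse
import urllib.request

TARGET = "http://testapp.example/items"   # hypothetical test application
PARAM = "id"

def fetch(payload):
    qs = urllib.parse.urlencode({PARAM: payload})
    with urllib.request.urlopen("%s?%s" % (TARGET, qs), timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def probe():
    # Probe: a boolean-based check - weak evidence, easily fooled by honeypots.
    true_page = fetch("1 AND 1=1")
    false_page = fetch("1 AND 1=2")
    return true_page != false_page

def exploit():
    # Exploit: try to pull an actual value out of the database - strong evidence.
    page = fetch("1 UNION SELECT username FROM users--")
    return "admin" in page   # hypothetical expected marker

if __name__ == "__main__":
    print("Probe suggests injection:", probe())
    print("Exploit confirms injection:", exploit())
```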
Slide 14: Business Impact

How Dangerous is This Vulnerability?
- Still controversial – do we really want to fix just the vulnerabilities posing immediate threats?
- Can we call it a vulnerability if it does nothing?

Associating Risk
- Organizations usually prioritize effort by risk
- Do we really give each vulnerability the same risk level in every system?
- Requires contextual understanding of the system
Slide 15: Quality of Security Testing – Criteria Summary

False Negatives
- Application data coverage
- Test coverage
- Test quality / proficiency
- Scope of threat

False Positives
- Test quality / proficiency
- Validity
- Business context

Cost Effectiveness (% of acceptable false negatives and false positives)
Slide 16: Security Testing Approaches (Pros & Cons) (section)
Slide 17: Security Testing Approaches (Pros & Cons)

Black/Grey Box
- Application vulnerability scanners
- Manual penetration test

White Box
- Static code analyzers
- Manual code review
Slide 18: Application Vulnerability Scanners

Application Data Coverage
- Good in terms of volume (large applications)
- Problematic in contextual aspects
  - Complex flows
  - Multiple user privileges
  - Specific data influences the code executed

Coverage of Tests
- Generally good with technical vulnerabilities
- Very limited with logical vulnerabilities
- Variant coverage – wide, but not adaptive

(A toy scan-loop sketch follows below.)
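As a very rough illustration of the core loop such scanners automate (inject payloads into each parameter, match responses against known signatures), here is a drastically simplified, hypothetical sketch. The URLs, payloads, and error signatures are illustrative placeholders; real products add crawling, session handling, and far richer test and variant logic.

```python
# Drastically simplified sketch of the scan loop at the heart of an
# application vulnerability scanner: inject payloads into each parameter
# and match the response against known signatures. Everything here
# (URLs, payloads, signatures) is an illustrative placeholder.
import urllib.parse
import urllib.request

TARGETS = [("http://testapp.example/search", "q"),
           ("http://testapp.example/item", "id")]     # hypothetical endpoints

TESTS = [
    ("XSS", "<script>alert(1)</script>", "<script>alert(1)</script>"),
    ("SQL injection", "'", "SQL syntax"),             # naive error-based check
]

def scan(url, param):
    findings = []
    for name, payload, signature in TESTS:
        full = "%s?%s" % (url, urllib.parse.urlencode({param: payload}))
        try:
            with urllib.request.urlopen(full, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except Exception:
            continue
        if signature in body:
            findings.append((name, url, param))
    return findings

if __name__ == "__main__":
    for url, param in TARGETS:
        for finding in scan(url, param):
            print("Possible %s at %s (parameter %s)" % finding)
```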
Slide 19: Application Vulnerability Scanners

Test Quality / Proficiency
- Generally good (depends on the product)
- However, fails to adapt to changes and non-standard environments

Validity
- Limited – generally a high rate of false positives
- No (or very limited) exploits
- Validation differentiators suffer from test quality

Business Impact
- None
Slide 20: Application Vulnerability Scanners

Scope of Threat
- Good against:
  - Script kiddies
  - Tool-based attacks
  - Basic-level hackers
- Usually insufficient with focused/advanced attacks

Cost Effectiveness
- Can be high, under the following circumstances:
  - Coverage and quality issues are not present with the tested technology
  - Users of the tools have sufficient security understanding
  - The tool is used for many scans
Slide 21: Manual Penetration Testing

Application Data Coverage
- Good in contextual aspects
  - Allows properly utilizing the application
  - Differences between users are usually clear
- May be problematic in volume aspects
  - Nonetheless, proper categorization can solve volume issues

Coverage of Tests
- Very good – if working according to a methodology
- Variant coverage
  - Not necessarily wide
  - However, proper testing allows finding the right variants
Slide 22: Manual Penetration Testing

Test Quality / Proficiency
- Potentially good – but depends greatly on the person
- Main advantage – allows creativity and adaptation to identify non-standard vulnerabilities

Validity
- Usually good
  - Easier for a person to identify false positives
  - Easier to perform exploits

Business Impact
- Can be considered by the tester
Slide 23: Manual Penetration Testing

Scope of Threat
- Depends on the actual effort and quality
- Can be used to face advanced, focused attacks

Cost Effectiveness
- Potentially high. May suffer when:
  - The application is very large
  - The need is only to face automatic tools
- Effort invested (and cost) may vary greatly
Slide 24: Manual Penetration Testing

Additional Key Aspects
- Quality varies greatly between testers
- Quality may also vary for the same tester (time, mood, inspiration, etc.)
- Black vs. grey box
  - Users coverage
  - Information gathering
- Business impact
  - Greatly depends on the tester's understanding
- How much effort do we want to invest?
Slide 25: Static Code Analyzers

Application Data Coverage
- Generally good (no crawling setbacks)
- Problematic when not all code is available

Coverage of Tests
- Generally good with technical vulnerabilities
- Very limited with logical vulnerabilities
- Variant coverage – wide, but not adaptive
Slide 26: Static Code Analyzers

Test Quality / Proficiency
- Generally good in some aspects (depends on the product)
- High accuracy in identifying code violations
  - However, not necessarily security violations

Validity
- Very limited – high rate of false positives
  - Code violations may often not be exploitable
  - May be blocked by external components

Business Impact
- None. There is not even any application context (the analysis is static)

(See the code-violation example below.)
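To make the "code violation that is not a security violation" point concrete, here is a small hypothetical example of the kind of pattern a static analyzer typically flags (string concatenation into a SQL query) even though, in this sketch, the value has already been constrained to digits by an upstream check, so the flagged line is not actually exploitable. Function and table names are invented for illustration.

```python
# Hypothetical example: a pattern most static analyzers flag as a SQL
# injection sink, even though the upstream validation makes this
# particular instance non-exploitable in practice.
import sqlite3

def get_order(order_id: str):
    # Upstream control: only digits are allowed through.
    if not order_id.isdigit():
        raise ValueError("order_id must be numeric")

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, item TEXT)")
    conn.execute("INSERT INTO orders VALUES (7, 'widget')")

    # A static analyzer flags this concatenation as a code violation.
    # Because order_id is guaranteed numeric above, it is not exploitable
    # here - but proving that requires understanding the surrounding code.
    query = "SELECT item FROM orders WHERE id = " + order_id
    return conn.execute(query).fetchone()

print(get_order("7"))   # ('widget',)
```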
Slide 27: Static Code Analyzers

Scope of Threat
- Good against:
  - Mostly syntax attacks (injections)
  - Mostly script kiddies / tool-based attacks
- Usually insufficient with focused/advanced attacks

Cost Effectiveness
- Good for enforcing proper coding practices
- However, requires modifying code that is not necessarily vulnerable
Slide 28: Manual Code Review (Static)

Application Data Coverage
- Can be problematic – usually impossible to go over every line of code
- Requires smart analysis of what to review and what not to review

Coverage of Tests
- Generally good with technical vulnerabilities
- Somewhat limited with logical vulnerabilities (often hard to determine the full logic of non-running code)
Slide 29: Manual Code Review (Static)

Test Quality / Proficiency
- Potentially excellent (if properly done)
- Allows identification of backdoors and rare issues
- May suffer from an inability to understand the flow of complex systems

Validity
- Limited – relatively high rate of false positives
  - Code violations may often not be exploitable
  - May be blocked by external components

Business Impact
- Partial. Depends on preliminary preparations
Slide 30: Manual Code Review (Static)

Scope of Threat
- Good against:
  - A high level of attackers
  - Backdoors and rare problems

Cost Effectiveness
- Usually expensive; the cost is only justified for critical parts
Slide 31: Choosing the Right Approach

- Determining the types of threats
- Determining the required frequency
- Weighing pros, cons, and costs

Usually, a combination of approaches applies:
- Manual Penetration Test + Partial Code Review
- Manual Penetration Test + Scanner (Free/Commercial)
- Scanner + Partial Manual Penetration Test (Validation)
- Scanner + Static Code Analyzer (Correlation – see the sketch below)
- Static Code Analyzer + Manual Code Review (Validation)
- Etc.
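As a small editorial aside on the "scanner + static analyzer (correlation)" combination, the sketch below shows the basic idea: a finding reported by both the dynamic scan and the static analysis of the same entry point is treated with higher confidence, while single-source findings are queued for further validation. All identifiers are invented placeholders.

```python
# Hypothetical sketch of correlating dynamic-scan findings with static-analysis
# findings on the same entry points. All identifiers are invented placeholders.

dynamic_findings = {("sql injection", "/login"), ("xss", "/search")}
static_findings = {("sql injection", "/login"), ("path traversal", "/download")}

correlated = dynamic_findings & static_findings          # confirmed by both
dynamic_only = dynamic_findings - static_findings        # needs code-side review
static_only = static_findings - dynamic_findings         # needs runtime validation

print("High confidence (both):", sorted(correlated))
print("Dynamic only:", sorted(dynamic_only))
print("Static only:", sorted(static_only))
```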
Slide 32: Testing the Tester: Evaluating Quality of Security Testing (section)
Slide 33: Testing the Tester

The Hardest Part – determining the quality of a security testing solution

This Consists Of:
- Identifying the % of false negatives
- Identifying the % of false positives
- Other aspects (not discussed here)
  - Performance
  - Management
  - Reporting
  - Etc.
Slide 34: Testing the Tester

How NOT to Determine Quality
- Marketing material
- Sales pitches
- Magazine articles (usually not professional enough)
- Benchmarking on known applications (WebGoat, Hackme Bank, etc.)

So What Should We Do?
- References (that we trust)
- Comparative analysis (on our own systems)
- Ideally – compare against a "perfect" report
Slide 35: Product Assessment – Comparative Analysis

Run several products on a few systems in the enterprise.

False Negatives
- Ideally – compare against a report containing all findings and identify the percentage of false negatives for each product
- Alternatively – unite the real findings from all reports, and compare against that union

False Positives
- Perform validation of each finding to eliminate all false positives
- Note the number of false positives in each product

Assess other aspects (if needed) – detail of the report, speed of execution, etc.

(A scoring sketch for this comparison follows below.)
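The bookkeeping for this comparison is simple enough to sketch. In the hypothetical example below (product names and findings are invented), the baseline is the union of validated findings from all reports, and each product is then scored for false negative and false positive percentages against it.

```python
# Minimal sketch of the comparative analysis: the baseline is the union of
# all validated (real) findings, and each product is scored against it.
# Product names and finding identifiers are hypothetical.

validated_reports = {
    "Product A": {"sqli:/login", "xss:/search", "xss:/profile"},
    "Product B": {"sqli:/login", "csrf:/transfer"},
    "Product C": {"xss:/search", "sqli:/login", "csrf:/transfer", "xss:/help"},
}
# Findings each product reported but that failed validation (false positives).
rejected_findings = {
    "Product A": {"sqli:/logout"},
    "Product B": set(),
    "Product C": {"xss:/about", "sqli:/faq"},
}

baseline = set().union(*validated_reports.values())   # all real findings found by anyone

for product, real in validated_reports.items():
    reported = real | rejected_findings[product]
    fn_pct = 100.0 * len(baseline - real) / len(baseline)
    fp_pct = 100.0 * len(rejected_findings[product]) / max(len(reported), 1)
    print("%-9s  false negatives: %4.0f%%   false positives: %4.0f%%"
          % (product, fn_pct, fp_pct))
```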
Slide 36: Service Assessment

Much Trickier
- Hiring a consultant is like hiring an employee

First of All – References
- Find references from customers with similar environments and needs
- Ask around yourself – the vendor will always provide you with their best references!

Check the Specific Consultant
- It's not just about the company – it's about the people involved in the project
- Check the resume, perform an interview
Slide 37: Service Assessment – Comparative Analysis

- Similar to products – a great way of comparing services
- Main problem – usually expensive
- Important note – the benchmarking should be done without prior knowledge of the testers!

False positive & false negative assessment:
- Mostly similar to product assessment
- Business impact, however, now plays a role – a good tester should eliminate (or downgrade) non-hazardous findings
- See if the testing includes strong validation (exploitation)
- The quality of the report and the information gathered in it should also be examined
Slide 38: Some Techniques That (Sometimes) Help

Application Data Coverage
- Build (and approve) a test plan
- List all components/modules tested (a simple coverage check is sketched below)
- Require an automated tool in addition
  - Main problem – usually increases costs

Test Coverage
- Review the methodology / list of tests
- Main problem – does not really improve variant coverage

Validation
- Require strong validation (exploits)
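A trivial way to act on the "list all components tested" advice is to diff the approved test plan against what the report says was actually exercised. The sketch below (component names are hypothetical) prints whatever the plan promised but the report never touched.

```python
# Minimal sketch: check the approved test plan against the components the
# final report claims were actually exercised. Component names are
# hypothetical placeholders.

test_plan = {"login", "user profile", "search", "payments", "admin console"}
covered_in_report = {"login", "search", "user profile"}

missing = sorted(test_plan - covered_in_report)
coverage_pct = 100.0 * len(test_plan & covered_in_report) / len(test_plan)

print("Plan coverage: %.0f%%" % coverage_pct)
for component in missing:
    print("Not covered:", component)
```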
Slide 39: Summary

- Quality of security testing is hard to measure or quantify
- Nonetheless, it is important to maintain quality adequate to address the threat
- Quality of security (and of security testing) must be guaranteed in advance
- Maintaining quality has an associated cost:
  - Testing for quality
  - Best products, best tools, best consultants
- Finding the balance is crucial
Slide 40: Thank You! Discussion & Questions