Getting the most out of HP Fortify SCA
Peter Blay, HP Fortify Senior Technical Support Engineer
Simon Corlett, HP Fortify Technical Account Manager
Agenda: Introduction · The case studies · Improving performance · Streamlining the audit · Wrap up
First off, a quick rundown of how the presentation will be structured. Following an introduction to the challenges application security poses and how SCA fits into this, we'll introduce a couple of applications we'll be using as case studies. As we go through the presentation we'll refer back to these apps to demonstrate how the tips and tricks we've covered can provide real-world benefit. This includes improving the performance of your SCA scans as well as streamlining the audit once you have your results. Finally we'll summarise everything we've discussed and take any questions.
1. Introduction
Application security challenges
In-house development ✔ · Securing legacy applications · Certifying new releases · Demonstrating compliance · Procuring secure software (outsourced, commercial, open source)
As a member of your company's application security team, you face a myriad of challenges when considering the risk of your software. First, legacy systems. These systems were built in a different era: for many legacy applications, security was sufficient for the time and place of their creation. These systems, with their millions of lines of code, have to be scanned and scrubbed. They have to be secured. The second part of the challenge is preventing more insecure code from being developed and introduced. This is what we mean when we say "build security in" (which you'll likely hear multiple times over the next couple of days). How can you ensure that new releases don't continually introduce additional risk through software vulnerabilities? Particularly when the threat landscape changes constantly, with new threats being identified nearly every day. Additionally, there is increased pressure externally from changes in compliance regulations and from internal audit policies and practices. Just responding to compliance mandates can turn into a never-ending cycle and ultimately does not ensure that your code is more secure. So at what stage should security come in? And specifically, whose job is it?
Fixing things late is frustrating
30x more costly to secure in production.
[Chart: relative cost to fix by SDLC stage: Requirements 2x, Coding 5x, Integration/component testing 10x, System testing 15x, Production 30x. Fortify coverage: Education at Requirements, SCA at Coding (15x region), WebInspect/SecurityScope at Runtime/Production (30x).]
According to a NIST study, the cost of fixing software increases substantially the further along the Software Development Lifecycle (SDLC) you go. It costs 30x more to fix security issues after a breach in production than to build security into your code at the beginning, during an application's design. The Fortify software aims to cover each stage of the SDLC: from educating developers up front, to dynamic analysis to assist with an application's QA, and finally to runtime protection of applications in production. We'll be focusing on securing applications in development: how to get the most out of SCA to ensure vulnerabilities are found and fixed early, "saving both time and money". Source: NIST
HP Fortify solutions
[Diagram: Static analysis via build integration (from the source code management system), dynamic testing in QA or production, and real-time protection of running applications against actual attacks by hackers. All three feed a vulnerability management layer: correlation of static, dynamic, and runtime findings, normalization (scoring, guidance), and remediation through IDE plug-ins (Eclipse, Visual Studio, etc.) across the application lifecycle. Threat intelligence and rules management feed the vulnerability database; defects, metrics, and KPIs are used to measure risk for development, project, and management stakeholders, with developers onshore or offshore.]
A recap, for those who aren't aware, of how the Fortify products fit together. Our focus here is on SCA and effective vulnerability management to speed up remediation.
2. The case studies
iOS Mobile Application
SpyPhone scan results:
- Additional options: none
- Files: 74
- Executable LOC: 3,368
- Scan time: 01:47
- Total issues: 30 (Critical: 0, High: 10, Medium: 0, Low: 20)
Summary: An example application to show the data a rogue iOS application can collect.
OS: Mac OS X · Language: Objective-C · Platform: iOS · Website:
As you can see, this isn't a particularly large application, but then again neither are the majority of applications on the market today. Most iOS applications are relatively small compared to desktop programs, and even the frameworks we scanned had only a couple of hundred scannable files.
.NET Web Application: nopCommerce 3.10
Scan results:
- Additional options: -Xmx6G
- Files: 3,830
- Executable LOC: 136,590
- Scan time: 30:45
- Total issues: 3,420 (Critical: 119, High: 1,532, Medium: 44, Low: 1,725)
Summary: Open-source ecommerce software containing both a catalogue frontend and an administration-tool backend.
OS: Windows · Language: C# · Platform: ASP.NET 4.5 · Website:
nopCommerce is an open-source ecommerce solution based on ASP.NET 4.5 (MVC 4) with an MS SQL backend database. It's essentially an online shopping cart that can be integrated into new or existing online stores. It's a relatively decent-sized app with nearly 4,000 files and 140,000 executable LOC.
3. Improving results
The 3 stages of SCA Analysis
1. Clean: sourceanalyzer -b BuildID -clean
2. Translate: sourceanalyzer -b BuildID ...
3. Scan: sourceanalyzer -b BuildID -scan -f results.fpr
Before we get going, it's important to understand how an SCA scan works. It breaks down into 3 stages:
Clean: sourceanalyzer -b BuildID -clean removes any previous translations for the specified build ID (a plain sourceanalyzer -clean removes all previous translations).
Translate: creates .nst (normalised syntax tree) files for the code. These are stored in "C:\Users\user\AppData\Local\Fortify\sca5.16\build\BuildID" on Windows or ".fortify/sca5.16/build/BuildID" elsewhere.
Scan: analyses all .nst files for that build ID and produces the .fpr results file.
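As a concrete sketch of the three stages for a small Java project (the build ID "MyApp" and the paths here are placeholders, not taken from the deck):

```
# Hypothetical end-to-end run for a small Java project.
# "MyApp" and all paths are illustrative placeholders.

# 1. Clean: drop any stale translation state for this build ID
sourceanalyzer -b MyApp -clean

# 2. Translate: pass SCA the same arguments you would give the compiler
sourceanalyzer -b MyApp javac -cp "lib/*" src/main/java/com/example/App.java

# 3. Scan: analyse the stored NSTs and write the results FPR
sourceanalyzer -b MyApp -scan -f MyApp.fpr
```

The key point the sketch illustrates is that translate and scan share the build ID: translation can happen incrementally (or on another machine), and the scan picks up everything recorded under that ID.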
Improving performance
Hardware
- Disk I/O: recommend 7,200 RPM; best results with 10K or 15K RPM drives, or SSD
- CPU: 2.1 GHz processor minimum; 3.2 GHz or faster for best results
JVM tuning
- Heap size: only 600MB by default; increase via the -Xmx option
- Stack size: only 1MB by default; increase via the -Xss option
- Perm gen size: only 64MB by default; increase via the -XX:MaxPermSize option
- All options can be set via the SCA_VM_OPTS environment variable
32 vs 64 bit
- 32-bit mode runs by default regardless of SCA version and is limited to ~3GB on Linux and ~1.3GB on Windows
- 64-bit mode is enabled with the -64 option and can use as much RAM as is available; we recommend using no more than 2/3 of available RAM
The recommendations in the System Requirements tend to be minimums. SCA, and particularly the scan stage, is very memory intensive, so big projects will need lots of resources. (Walk through hardware, JVM tuning, and 32 vs 64 bit.) As with the System Requirements recommendations, the defaults are low, so they will almost certainly need raising. Any JVM options can be used, so setting -Xms may also help, or specifying a garbage collector.
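The JVM options above can be bundled into SCA_VM_OPTS so every sourceanalyzer invocation picks them up. The sizes below are illustrative, not recommendations from the deck; size -Xmx for your own machine:

```shell
# Illustrative JVM sizing for SCA via the SCA_VM_OPTS environment variable.
# -Xmx: heap (only 600MB by default), -Xss: stack (1MB by default),
# -XX:MaxPermSize: perm gen (64MB by default).
# Keep -Xmx to no more than 2/3 of available RAM.
export SCA_VM_OPTS="-Xmx4G -Xss8M -XX:MaxPermSize=256M"
echo "$SCA_VM_OPTS"
```

Because SCA reads the variable at startup, this is a convenient way to standardise JVM settings across build scripts without editing each command line.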
Improving performance (case study review)
[Same hardware, JVM tuning, and 32 vs 64 bit recommendations as the previous slide, now with case-study results.]
iOS app scan results:
- Additional options: -Xmx2G
- Scan time: 01:20 (baseline 01:47, down 25%)
- Total issues: 30 (Critical: 0, High: 10, Medium: 0, Low: 20)
.NET app scan results:
- Additional options: none
- Scan time: 16:45 (baseline 30:45, down 46%)
- Total issues: 3,315 (Critical: 93, High: 1,521, Medium: 44, Low: 1,657)
iOS app: increasing memory gives SCA greater dedicated resources to work with, bringing the scan time down.
.NET app: increased memory was used in the baseline scan to give a more complete set of results. Removing that additional memory massively lowers the scan time, but it also lowers how much of the app is covered, so the vulnerability counts drop. We'll talk about this tradeoff a little later in "Quality vs speed".
Mobile build sessions
On Machine T (translate):
sourceanalyzer -b BuildID <translation commands>
sourceanalyzer -b BuildID -make-mobile
sourceanalyzer -b BuildID -export-build-session build-session.mbs
Transfer build-session.mbs from Machine T to Machine S.
On Machine S (scan):
sourceanalyzer -import-build-session build-session.mbs
sourceanalyzer -b BuildID -scan -f results.fpr
During an SCA scan, the actual scan stage is much more memory intensive than the translation. This is because it's at the scan stage that SCA is essentially creating a complete model of the application and tracing all possible flows of data through it. As you can imagine, even a small application can have significantly complex dataflows. Now, while the translation stage requires all dependencies to be present, the scan stage is platform independent. This means it's often easiest to perform the translation on a developer's build machine and then perform the scan on a dedicated scan machine. This is done by creating a mobile build session using the -make-mobile command, then transferring the .mbs file to the dedicated scan machine, where it's imported and the scan kicked off. This process is similar to how CloudScan works; our colleagues Chris and Sudha will be discussing CloudScan in detail on Thursday morning.
Quality vs. speed
Quality: greater application coverage and so a more complete set of results, but long-running scans.
Speed: quicker scan times, but a less complete set of results.
Increasing one often reduces the other, so decide which is more important. Clean up warnings: just because a scan completed doesn't mean it's perfect. This was shown with the .NET app previously: limiting memory led to lots of out-of-memory errors, which meant the app wasn't fully covered in the scan. This in turn meant the scan time was vastly reduced, but some issues (including criticals) were missed.
Breaking it down
- Scan a single project: reuse the build ID across translations (Trans 1, Trans 2, Trans 3 → one scan → one FPR).
- Scan part of the project: use a single binary or object file (C/C++), for example:
  C:\>sourceanalyzer -b BuildID -show-build-tree
  Debug/Sample.exe
    Debug/Sample.lib
    Debug/Sample.obj
      Sample.cpp
      stdafx.h
    Debug/stdafx.obj
      stdafx.cpp
  C:\>sourceanalyzer -b BuildID -bin Debug/Sample.obj -scan -f out.fpr
- Scan in chunks: use the -append option (Trans 1 → Scan 1, Trans 2 → Scan 2, Trans 3 → Scan 3, combined into one FPR).
If resources are an issue, one option may be to break the scan down into more manageable chunks: fewer resources are required and scan times are quicker, but there will be missing dataflow. In rare cases the translation may be more intensive than the scan, or may be conducted on a developer machine with limited resources. In this case you can run each translation separately and then scan them together; there'll be no dataflow tracked between the separate translations, however. Another option is to break the scan into separate chunks and use -append; again, this will miss dataflow between the separate scans. However, a crafty trick is to translate everything together, run a separate scan for each analyser (dataflow uses the most resources), and then write the results to one FPR to give you a complete scan. Depending on the language, it's also possible to scan just part of the project rather than the entire thing. This can be useful if there's only one component you need to focus on.
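The "separate scan per analyser" trick described above might look roughly like the following. This is a sketch only: -analyzers and -append are real SCA options, but the analyser names and the exact combination shown here are illustrative and may vary by SCA version:

```
# Sketch: translate once, then scan one analyser at a time,
# appending each pass into the same FPR so the final file is complete.
# Analyser names are illustrative; check your SCA version's documentation.
sourceanalyzer -b BuildID <translation commands>
sourceanalyzer -b BuildID -scan -analyzers semantic,structural -f results.fpr
sourceanalyzer -b BuildID -scan -analyzers controlflow -append -f results.fpr
sourceanalyzer -b BuildID -scan -analyzers dataflow -append -f results.fpr
```

Running dataflow last on its own keeps the most memory-hungry pass isolated, which is the point of the trick.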
Quick Scan & the limiters
Run with -quick to apply the limiter values in fortify-sca-quickscan.properties. (Speed up, quality down.)
- com.fortify.sca.limiters.ConstraintPredicateSize (default 50000, quick scan 10000): skips calculations defined as very complex in the buffer analyser to improve scan time.
- com.fortify.sca.limiters.BufferConfidenceInconclusiveOnTimeout (default true, quick scan false).
- com.fortify.sca.limiters.MaxChainDepth (default 5, quick scan 4): controls the maximum call depth through which the dataflow analyser tracks tainted data. Increasing this value increases the coverage of dataflow analysis, at the cost of longer analysis times.
- com.fortify.sca.limiters.MaxTaintDefForVar (default 1000, quick scan 500): sets the complexity limit for dataflow precision backoff. Dataflow incrementally decreases the precision of analysis for functions that exceed this complexity metric at a given precision level.
- com.fortify.sca.limiters.MaxTaintDefForVarAbort (default 4000, quick scan 1000): sets a hard limit for function complexity. If the complexity of a function exceeds this limit at the lowest precision level, the analyser will not analyse that function.
The depth of analysis SCA performs sometimes depends on the available resources. SCA uses a complexity metric to trade these resources off against the number of vulnerabilities that can be found. Sometimes this means giving up on a particular function when it doesn't look like SCA has enough resources available; this is normally when you will see a "Function too complex" warning in the resulting FPR and the log file. When this message appears, it doesn't necessarily mean the function has been completely ignored. For example, the dataflow analyzer will typically visit a function many times before analysis is complete, and may not run into this complexity limit in the early visits (since its model of other functions is less developed).
In this case, anything learned from the early visits will be reflected in the results. That said, you can control the "give up" point via SCA properties called limiters. Different analyzers have different limiters, and a predefined set of them can be applied using the -quick option. This table shows a handful of the limiters and their effects; for the full set, please see the SCA User Guide. It's also worth noting that by default SSC will not accept quick scans; this has to be changed with a per-project setting. However, changing the limiters on an individual basis does not have this effect, and FPRs can be uploaded to SSC as normal.
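That last point, tuning limiters individually rather than running a full quick scan, can be done in fortify-sca.properties. The values below are illustrative midpoints between the default and quick-scan settings, not recommendations from the deck:

```
# fortify-sca.properties: illustrative midpoints between default and
# quick-scan limiter values. Tune per project rather than copying these.
# FPRs produced this way upload to SSC normally (no quick-scan flag is set).
com.fortify.sca.limiters.MaxChainDepth=5
com.fortify.sca.limiters.MaxTaintDefForVar=750
com.fortify.sca.limiters.MaxTaintDefForVarAbort=2500
com.fortify.sca.limiters.ConstraintPredicateSize=25000
```

The trade is the same one the slide describes: lower values give faster scans, higher values give deeper analysis.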
Quick Scan & the limiters (case study review)
[Same limiter table as the previous slide, now with case-study results.]
iOS app scan results:
- Additional options: -quick
- Scan time: 01:32 (baseline 01:47, down 14%)
- Total issues: 6 (Critical: 0, High: 6, Medium: 0, Low: 0)
.NET app scan results:
- Additional options: -quick
- Scan time: 21:37 (baseline 30:45, down 30%)
- Total issues: 62 (Critical: 56, High: 6, Medium: 0, Low: 0)
iOS app: running a quick scan has only minor benefits here. There's a small overhead when reading and applying the quick-scan limiters; this is heavily outweighed in larger apps, but it can completely defeat the object in very small apps such as this. Only Critical and High issues are reported; applying filters may be more appropriate here, which will be discussed later.
.NET app: for larger applications the limiters have a much more dramatic effect. As we don't delve as deep into each function, the scan time is reduced by 30%. The tradeoff is that the number of issues found is also drastically reduced: only Critical and High issues are reported and, if you recall the baseline results, a lot of these are missed. This is a rather extreme example; adjusting the limiters slightly allows you to fine-tune the scan, lowering the scan time without losing quite so many results.
Shrinking the FPR
Bring down scan time and reduce the FPR size with -Dcom.fortify.sca.FPRDisableMetatable=true (an undocumented property).
- Normal scan: FPR includes both source and snippets.
- Scan run with -disable-source-bundling: source archive omitted.
- Scan run with -disable-source-bundling and -Dcom.fortify.sca.FVDLDisableSnippets=true: source and snippets omitted.
Once a scan has completed, you can often find yourself dealing with an excessively large and unwieldy results file. This can make life very difficult if it needs to be distributed to other members of your team to audit, or if large amounts of memory are needed just to open the FPR in AuditWorkbench. There are, however, a number of ways to alleviate this pain. There's a hidden property called FPRDisableMetatable. Setting it means SCA won't write the data to the FPR describing which functions were scanned and whether they were covered by the rules. Many customers don't actually use this functionality, so it can be an acceptable loss. Please note, though, that there is currently a bug with DisableMetatable: it also removes the archive used to view the source on SSC Server (the source will still be viewable in AuditWorkbench). Removing the bundled source and snippets creates lightweight FPRs. It's possible to link these back to the source in AWB; however, no source will be shown in SSC, so collaborative auditing in the SSC GUI isn't really an option. It's also possible to download FPRs from SSC that do not contain the source, so it can often be advantageous to perform a scan, upload it to SSC, and then grab your lightweight FPR from SSC: while this won't contain the full source archive, it will contain snippets.
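Putting the three options together, the scan command might look like this sketch (build ID and output name are placeholders; FPRDisableMetatable is undocumented, so its behaviour may change between releases):

```
# Sketch: produce a lightweight FPR with no metatable, no bundled
# source archive, and no code snippets. Placeholders: BuildID, results.fpr.
sourceanalyzer -b BuildID -scan \
  -disable-source-bundling \
  -Dcom.fortify.sca.FVDLDisableSnippets=true \
  -Dcom.fortify.sca.FPRDisableMetatable=true \
  -f results.fpr
```

As the notes warn, an FPR produced this way is best audited in AWB with local source, since SSC will have nothing to display.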
Shrinking the FPR (case study review)
.NET app scan results:
- Baseline scan: FPR size 17.9MB, scan time 30:45
- Disable metatable: FPR size 10.4MB, scan time 27:52
- Disable source: FPR size 14.5MB, scan time 30:03
- Disable metatable, source & snippets: FPR size 4.2MB, scan time 27:36
Here we can see how effective each option was with our .NET application. Disabling the metatable not only reduces the size of the FPR, it also takes a chunk out of the scan time, as writing the metatable itself can be time consuming. Removing the bundled source doesn't really affect the scan time, but it does significantly reduce the FPR size. Finally, setting all the options brings the scan time down a little and, as you can see, gives a major reduction in FPR size. While this rules out collaboratively auditing the results using SSC Server, the file will still contain all of the results, which can be viewed in AWB, alongside the source if it's present on the machine.
4. Streamlining the audit
Project templates: what are they & why should I care?
They control the layout of how you, and all employees, see issues, and can be standardised across projects or across companies. Have to follow the OWASP Top 10? Sure thing. Have to be PCI compliant? Sure thing.
Features of project templates:
- Filters
- Folders (from around SSC 3.90 onwards you can have performance indicators for folders)
- Filter sets
- Custom tags
They work very well with custom rules, and integrate with SSC.
Project templates (case study review)
iOS app scan results, project template OWASP Top 10: A1: 2, A2: 0, A3: 0, A4: 0, A5: 0, A6: 0, A7: 1, A8: 0, A9: 1, A10: 0.
.NET app scan results, project template OWASP Top 10: A1: 78, A2: 44, A3: 3, A4: 619, A5: 598, A6: 73, A7: 797, A8: 0, A9: 43, A10: 37.
Results are categorised by the OWASP Top 10, and the same applies to the .NET application: each result is mapped to its corresponding OWASP category. Results filtered out because they don't fall into any category are not completely lost; they will still be found in the FPR, but their category will be listed as <none>.
Metadata: map issues to compliance obligations
- Security programs benchmark against standards.
- Compliance is mapped post-scan: internal company standards, and external standards such as CWE, NIST, and STIG.
- Easier to support new standards; kept continually up to date by the rules.
- Issues carry more weight: "You have a cross-site scripting violation on checkout.php:72" vs. "Your application violates <standard> because of the cross-site scripting on checkout.php:72".
One of the strengths of SCA has always been its ability to map the results found to various security standards such as OWASP and PCI. While these mappings used to be hardcoded into the SCA rulepacks, we've recently separated them out and provide them as an XML file, known as external metadata, which is distributed alongside our rulepacks. This allows you to add both external and internal security standards to your results, and makes it easier for you to support new standards. Mapping your issues to a particular standard can also help drive security forward: for example, telling a developer that an issue in their code violates a particular standard carries far more weight than simply telling them the issue exists.
You can and should add your own mappings
- Map to your internal compliance standards; HP Fortify maintains the base.
- Add new compliance mappings, or extend existing ones (plain-text example: CWE).
- Easier to add & update mappings, and easier to create & update reports.
- Use the base as an example, then add your own.
[Screenshot: Notepad++ editing the XML mapping.]
We highly recommend adding your own mappings. For example, if you want to use the OWASP Top 10 2013 before we officially add support, you can do this yourself easily by adding to the metadata XML file. On client machines you can place your mappings in the CustomExternalMetadata directory; this means your custom modifications are not overwritten when the rulepacks and external metadata are updated from our side. We update our external metadata along with our rulepack updates every quarter. As an aside, we're expecting OWASP 2013 to be introduced along with our Q rulepacks due for release towards the end of this month; however, this is subject to change.
Scan-time filters
This comes back to disabling source. Say the source is needed, but your FPR is huge due to a load of issues you may not care about: scan-time filters can be used. Steps:
1. Create a new filter set (or use an existing one).
2. Add filters.
3. Save to <SCA install>/Core/config/filters/defaulttemplate.xml.
4. Add to fortify-sca.properties: com.fortify.sca.FilterSet=<filter set name>
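Step 4 as a properties fragment; the filter-set name here is a hypothetical placeholder, not one from the deck:

```
# fortify-sca.properties: apply a named filter set during the scan itself,
# so filtered-out issues never reach the FPR. "MyQuickView" is hypothetical;
# it must match a filter set saved in defaulttemplate.xml.
com.fortify.sca.FilterSet=MyQuickView
```

Because the filtering happens at scan time, the resulting FPR is smaller while still bundling full source, which is exactly the case this slide addresses.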
Collaborative audit & bug management
Advantages of auditing locally:
- Extra functionality
- Native speed, not reliant on server hardware & load
- Not so reliant on uptime
- Full source
- A more customizable experience
- Native and collaborative bug tracker plugins
While it's still possible to collaboratively audit projects using SSC Server, recent versions of the Fortify software have introduced the ability to collaboratively audit results on your local machine from AuditWorkbench or the IDE plugins for Visual Studio or Eclipse. AWB and the plugins pull down the latest results from SSC, allow you to audit them, and then push them back to the server. This offers a number of advantages:
- AWB and the plugins have a few extra features: the metatable we discussed earlier, diagrams showing the flow of data through the app for each issue, and the ability to create new filters.
- As native applications they don't rely on the server hardware and usage, so they're likely to be quicker.
- Even if the source isn't bundled in the FPR, if you have it locally you can still see it alongside the results.
- AWB and the plugins also offer native & collaborative bug tracker integrations, which allow auditors to submit any issues found directly into a development team's bug tracking system. Integration can be with multiple bug tracking systems, so there's no problem if different teams use different ones.
- Finally, it allows you to provide AWB or the plugins only to auditors and people who will be auditing a lot. You can also be very restrictive with custom roles, included from SSC 3.50.
Collaborative audit & bug management (cont.)
Batch bug submission: selection criteria find non-submitted bugs and group them, then let you submit one bug for all of them, with built-in variables to keep the description consistent across the various issues covered by that one bug.
Bug state management: allows multiple auditing sessions to submit audited work to SSC and then report bugs in one batch. This restricts duplicate bugs by sorting however you want, and saves the time otherwise spent submitting lots of bugs and reconciling similar or identical descriptions and information.
A secure SDL & the governance module
One of the most popular ways to implement a secure SDL is Software Security Assurance (SSA). SSA prescribes a set of activities, including the comprehensive identification and removal of security vulnerabilities in your software. The Governance Module helps you keep track of and step through these activities. Using the Process Designer you can specify which activities are required, and then keep track of them with SSA projects in SSC. You can then tick off requirements as you progress through your secure SDL. Certain actions can be prohibited using SSC's roles, and due dates can be specified, with alerts to remind you. The Governance Module helps both companies with an immature secure SDL, by giving them a starting point, and companies with a system already in place, by allowing them to record, track, and customise their process as necessary.
5. Wrap up
In summary… Small changes add up to big effects on workflow, cutting out wasted time and effort. You can customize much of how the software looks and feels, helping integrate SCA & SSC with your internal processes and requirements. Take a look at your internal processes now and think about how SCA & SSC currently integrate with them: Are your developers spending too much time scanning? Would it be worthwhile getting additional hardware for one standalone machine versus increasing hardware for all workstations? Is it taking too long to discern which are the most important problems for your business? We've shown both how to vastly improve speed and results before you even set eyes on the final output, and how to improve integration into your company to speed up auditing, saving you time and therefore money.
Any questions?
Thank you