1
Web Application Firewall Evaluation
with DevOps, SDLC and the New OWASP Core Rule Set
2
`su chaim` Chaim Sanders
Current Roles: Project Leader of the OWASP Core Rule Set; ModSecurity Core Developer; Security Researcher at Trustwave SpiderLabs; Lecturer at Rochester Institute of Technology (RIT)
Previous Roles: Commercial Consultant; Governmental Consultant
3
Why the web? Why OWASP? We are going to focus on HTTP(S) today.
Special care needs to be taken here: according to CAIDA, the web is ~85% of internet traffic. As a result, web architecture is complicated. It also means complicated attacks are worthwhile: attacks that will only work on 0.01% of users are still valuable. The diversity of attacks is high as well:
- Attacker on server / Attacker on client
- Attacker on client via server
- Attacker on server via server
- Attacker on intermediary
8
Vocabulary - WAF
WAF: Web Application Firewall. WAFs come in multiple different forms. They can be placed in several places on the network: inline, out-of-line, or on the web server itself. They use different technologies: signatures and heuristics. Adoption is often driven by PCI requirements, as a WAF is an approved security control. What is the difference between an IDS and a WAF? The web isn't an oligarchy like most protocols.
10
Vocabulary - ModSecurity
An open source Web Application Firewall, and probably the most popular WAF. Designed in 2002; currently on version 2.9, with version 3.0 in the works. It is designed to be open, and it supports the OWASP Core Rule Set. First developed in 2009, the OWASP Core Rule Set is an OWASP project meant to provide free generic rules to ModSecurity users. Open-source ModSecurity and the more recently released IronBee are also considered cost-effective competition for commercial WAFs. The OWASP Core Rule Set is a set of generic rules written in a special language called SecRules, designed to allow security practitioners to quickly block various web attacks; an illustrative rule follows below.
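To give a feel for the SecRules language, here is a single rule as a minimal sketch in ModSecurity v2 syntax; the rule ID and message are made up for this example and are not taken from CRS.

```apache
# Hypothetical rule: deny any request whose arguments contain "<script"
# (id and msg are illustrative values, not actual CRS ones)
SecRule ARGS "@rx <script" \
    "id:1001,phase:2,deny,status:403,log,msg:'XSS attempt detected'"
```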
11
Problem Statement
How do you evaluate security software? More specifically, how does one evaluate a Web Application Firewall?
- Speed
- Effectiveness
- Support
- Documentation
- Market Share
We need to address this from a developer perspective, a client perspective, and a purchaser perspective.
12
Naive approach
The first thing people will ask: what is your performance? Because web applications differ, the requirements differ too:
- Per-request speed differences (a request that uploads a file vs. a simple GET)
- Per-protocol speed differences (e.g., a request that is processing XML)
- Number of concurrent requests
- Resource availability
- Number and effectiveness of the defenses in place
A rough timing sketch follows below.
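As a toy illustration of why a single performance number is misleading, the sketch below times different request shapes separately. The target URL is hypothetical and the payload shapes are arbitrary; this is not the project's benchmark harness.

```python
# Toy latency probe: different request shapes stress a WAF very differently,
# so time each shape separately instead of averaging them together.
import time
import requests

TARGET = "http://waf-under-test.example/"  # hypothetical host behind the WAF

def time_request(make_request, runs=50):
    """Return the average seconds per request for one request shape."""
    start = time.perf_counter()
    for _ in range(runs):
        make_request()
    return (time.perf_counter() - start) / runs

get_avg = time_request(lambda: requests.get(TARGET, timeout=10))
upload_avg = time_request(
    lambda: requests.post(TARGET, files={"f": ("big.bin", b"A" * 1_000_000)}, timeout=10))
xml_avg = time_request(
    lambda: requests.post(TARGET, data="<a><b>x</b></a>",
                          headers={"Content-Type": "text/xml"}, timeout=10))
print(f"GET: {get_avg:.4f}s  upload: {upload_avg:.4f}s  XML: {xml_avg:.4f}s")
```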
13
Naive approach
But speed is only one factor; surely there are many others, chief among them actual security!
14
Existing Work: Gartner Methodology
Gartner publishes a Magic Quadrant (MQ) for WAFs. Some questions: How is Gartner deciding on this? Does their methodology make sense? And where is ModSecurity? Looking at a mapping of existing WAF technologies, how do you actually assign these values?
15
Gartner Inclusion Requirements
- Their offerings can protect applications running on different types of Web servers.
- Their WAF technology is known to be approved by qualified security assessors as a solution for PCI DSS Requirement 6.6.
- They provide physical, virtual or software appliances, or cloud instances.
- Their WAFs were generally available as of 1 January 2015.
- Their WAFs demonstrate features/scale relevant to enterprise-class organizations.
- They have achieved $6 million in revenue from the sales of WAF technology.
- Gartner has determined that they are significant players in the market due to market presence or technology innovation.
- Gartner analysts assess that the vendor's WAF technology provides more than a repackaged ModSecurity engine and signatures.
So we're relying on PCI DSS 6.6.
16
PCI DSS 6.6: Gartner's Choice
Gartner's basis for "is this security thing working" is PCI. PCI outlines a lot of "recommended capabilities"; the important one for us is recommendation #2: "React appropriately (defined by active policy or rules) to threats against relevant vulnerabilities as identified, at a minimum, in the OWASP Top Ten and/or PCI DSS Requirement 6.5." Really, this boils down to: the QSA gets to decide. Perhaps there is a more authoritative methodology…
17
Existing Work: WAFEC (Web Application Firewall Evaluation Criteria) 1.0
A joint project between WASC and OWASP, presented as a yes/no spreadsheet for evaluating WAFs. It focuses on 9 feature areas:
- Deployment Architecture
- HTTP and HTML Support
- Detection Techniques
- Protection Techniques
- Logging
- Reporting
- Management
- Performance
- XML
WAFEC 2.0 has been in the works for quite some time. The goals are to help stakeholders understand what a WAF is and its role in protecting web sites, and to provide a tool for users to make an educated decision when selecting a WAF. Not… "does this shit actually work!" Note that Detection Techniques and Protection Techniques combined are only about 40 questions total.
18
WAFEC and "Security": what does WAFEC say about evaluating security?
Let's say the criterion is "Protects from SQL injection attacks." How do I evaluate the effectiveness of this criterion? Right now it's just yes/no. How do I generate a weight for its importance? What about in non-SQL environments? As a result of the project's goal, and this problem, there typically isn't any information beyond yes/no. More importantly, none of these responses are published :( Another example: what about "support for passive decryption," etc.? To be clear, this can still be really helpful in determining effectiveness.
19
A purchasing problem: buyer's remorse
Sure, knowing features is important, but do we have evidence of effectiveness? How do we test WAF effectiveness? Perhaps there are individual features that are weak. Effectiveness seems like a helpful metric; not all WAFs can be equally good. This would help architects know that the WAF they picked works in their environment. If something is wrong, can we track it? We've all met product dev teams; can we force them to get better?
20
Our Problems (Recap) OWASP CRS is just like everyone else
For years our team has struggled with effectiveness metrics. People would ask us for performance, and we'd explain… The last straw came when we were asked to benchmark against another WAF. Not only were we faster… but the tests were useless (round-robin caching: nothing like a cache hit to boost performance numbers). Maybe performance isn't everything (if it is, we're screwed). Maybe effectiveness is a better metric, but it's harder. Ever think of using a racecar as your daily driver?
21
A New Dawn: the OWASP CRS v3.0.0 project
Over the past few years we've been revamping the Core Rule Set. This includes:
- Utilization of new ModSecurity features, such as libinjection
- A new, revamped paranoia level that promises to reduce false positives
- Brand new defenses that make CRS more effective than ever
- Effectiveness improvements to many existing rule areas
- More documentation and a new, more intuitive file layout
- Mitigations for many common false positives
- Additional configuration options, including a sampling mode
- Additional testing and support scripts
- Anomaly-based protection by default
Each of these bullets could easily be its own talk, but today let's talk about how we 'test' WAF effectiveness. (A sketch of the new configuration knobs follows below.)
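For context, these knobs are set through ordinary SecRules directives. The sketch below follows the conventions of CRS 3.0's crs-setup.conf as we recall them; treat the rule IDs and variable names as illustrative rather than authoritative for your version.

```apache
# Run at paranoia level 1 (higher levels enable stricter rules, more false positives)
SecAction "id:900000,phase:1,nolog,pass,t:none,setvar:tx.paranoia_level=1"

# Anomaly-mode thresholds: block inbound requests scoring 5 or more
SecAction "id:900110,phase:1,nolog,pass,t:none,\
  setvar:tx.inbound_anomaly_score_threshold=5,\
  setvar:tx.outbound_anomaly_score_threshold=4"

# Sampling mode: send only this percentage of traffic through the rules
SecAction "id:900400,phase:1,nolog,pass,t:none,setvar:tx.sampling_percentage=100"
```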
22
CRS as a base: pro vs. con
ModSecurity is designed to be extremely configurable and neutral; it doesn't ship with a ruleset at all. CRS is in the unique position of being a separate project from ModSecurity. In fact, other WAFs use it (both openly and not so much -_-). But we also have unique problems: lots of developers (it's open source); our own language (many engines are compatible with it); and we run in a lot of environments: Windows, Linux, AIX, and Mac OS; Apache, Nginx, and IIS. So how do we properly 'test' in such an environment?
23
Our Approach: the OWASP CRS v3.0.0 project
We felt WAFEC provided a good base, but we needed two things: regression tests and, well… regression tests. We laid out two user stories. We wanted to know if there was a regression for a rule (imagine someone changing a rule), or a regression for the system as a whole (imagine making sure all known XSS is blocked). Both of these were going to require making HTTP requests.
24
Designing your own wheel
I'm generally a huge proponent of reuse, if possible.
Step 1) As a project, decide on a common testing language
Step 2) Find existing projects that might fit your need (look at existing wheels)
Step 3) Evaluate if they meet your goals (see if the wheels fit)
Step 4) Inevitably recreate some wheels (rebuild wheels)
Step 5) Profit???
These are exactly the steps we went through.
25
Deciding on a language: Python, and why we love it
OWASP CRS had a disadvantage: it has its own language. ModSecurity is written in C/C++, and so are a lot of the tools. It turns out people generally don't really love C projects. The choice came down to Python, Ruby, and Go. As a group we chose Python: it's easy, and most people know it. One problem: our existing tools were in various languages, particularly our unit testing, which people had stopped using.
26
Find existing projects
Current unit testing: people had stopped using it because:
- It was hard to use
- It was hard to write tests for
- It was written in Perl
- It wasn't required
- It didn't integrate well
It did, however, make HTTP requests, though it wasn't very modular. Some other libraries had the same problem: things like Python Requests and PyCurl weren't sufficient.
27
Evaluate if they meet your goals
Why not cURL or Requests? It turns out the things we need to test are frequently straight-up against spec, e.g. "EHELO mymailserver.test.com". Tools like PyCurl and Requests don't even like to let you do things like change the HTTP version; we needed more flexibility (see the raw-socket sketch below). We were able to leverage many existing tools, though: PyYAML, pytest, etc.
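To make the limitation concrete, here is a minimal raw-socket sketch of the kind of deliberately out-of-spec request that high-level HTTP libraries refuse to construct. The target host is hypothetical.

```python
# Send a request with a bogus HTTP version, something Requests/PyCurl
# won't let you express, straight over a TCP socket.
import socket

raw_request = (
    b"GET / HTTP/9.8\r\n"        # intentionally invalid HTTP version
    b"Host: waf-under-test.example\r\n"
    b"\r\n"
)

with socket.create_connection(("waf-under-test.example", 80), timeout=5) as sock:
    sock.sendall(raw_request)
    response = sock.recv(4096)   # first chunk of whatever the WAF sends back
    print(response.decode("latin-1", errors="replace"))
```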
28
Inevitably recreate the wheel
Our Framework for Testing WAFs (FTW): a solution for when your test cases involve things that SHOULD never happen.
- A modular HTTP request generator
- Designed to be user-friendly and easy (YAML tests)
- Designed for multiple-WAF support
- Available as a Python pip module (we got `ftw`, kinda neat)
- Integrates with existing best practices in development
- Open source
29
Sample tests
FTW has default settings (sorta like Scapy) that can be overridden as needed. A raw request can also be sent, in either base64 or raw form. Additionally, some magic is done automatically (Content-Length, encoding, etc.), but it can be turned off. Each test file can consist of many tests, each with multiple stages (think logging in to a web app). A sketch of one such test follows below.
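For illustration, a minimal FTW-style YAML test modeled on the CRS regression-test format as we recall it. The field values are made up, and the expected rule ID (941100, a CRS v3 XSS rule) is shown only as an example.

```yaml
---
meta:
  author: "example-author"
  enabled: true
  description: "Example XSS regression test (illustrative only)"
tests:
  - test_title: "example-xss-1"
    stages:
      - stage:
          input:
            dest_addr: "127.0.0.1"   # host where the WAF under test listens
            port: 80
            method: "GET"
            uri: "/?q=<script>alert(1)</script>"
            headers:
              Host: "localhost"
              User-Agent: "FTW"
          output:
            log_contains: 'id "941100"'   # expect the XSS rule to fire
```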
30
Integration with OWASP CRS
Keep it separated! While we had FTW, and it had reached its 1.0 milestone, we still needed actual tests. We created a new repo to contain those and used git submodules to bring it into CRS's master repo. Some tests are targeted, using the ModSecurity v2 plugin to trigger a specific rule (type 1). Other tests are just giant lists of exploits separated into categories (type 2). So how do we make use of our new testing framework to do test-driven development?
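The wiring itself is plain git. A sketch (the repository URL and path are illustrative, not the project's actual ones):

```
git submodule add https://github.com/example/crs-regression-tests tests/regression
git submodule update --init --recursive   # what contributors run after cloning
```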
31
What tools we had: Buildbot (ModSecurity)
Ever try to support a project that runs everywhere? Whenever we get a build of ModSecurity, we test it on 45 different environments. Buildbot is great for this: it's Python, and it's flexible. It's like Jenkins, but written in Python, and the GUI is slightly less nice. We'll see how we leverage this for CRS in a bit. The problem is that it's not ideal for everything: VM control yes, good feedback no. (A toy configuration sketch follows below.)
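For flavor, a toy Buildbot master fragment using the modern plugins API. Builder and worker names are hypothetical, and the real setup fans out to roughly 45 environments rather than the two shown here.

```python
# Toy Buildbot master.cfg fragment (builder/worker names are hypothetical).
from buildbot.plugins import steps, util

factory = util.BuildFactory()
factory.addStep(steps.Git(repourl="https://github.com/SpiderLabs/ModSecurity"))
factory.addStep(steps.ShellCommand(command=["./autogen.sh"]))
factory.addStep(steps.ShellCommand(command=["./configure"]))
factory.addStep(steps.ShellCommand(command=["make", "test"]))

# One builder per target environment; the real matrix is much larger.
builders = [
    util.BuilderConfig(name="ubuntu-apache", workernames=["ubuntu-worker"], factory=factory),
    util.BuilderConfig(name="centos-nginx", workernames=["centos-worker"], factory=factory),
]
```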
32
Solving only part of our problem
Integrating our methodology with our environment. What we've done so far: we developed this strict testing methodology, we made it easy to use, and we wrote these tests… This doesn't solve the part where users don't run these tests. Using Buildbot we can get feedback, but it's not enough; such tests are good for internal tasks like our type 2 tests. To this end we extended our existing Buildbot environment to OWASP CRS. Buildbot is great if you want to run internal tests, particularly if you have multiple environments/VMs.
33
The other half of our problem
Immediate user feedback is critical. To this end we leveraged Travis CI, which gives us immediate feedback right in GitHub. (A configuration sketch follows below.)
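As an illustration, a minimal .travis.yml in the spirit of such a pipeline. The playbook name and test invocation are hypothetical, not CRS's actual configuration.

```yaml
# Hypothetical Travis CI sketch: provision a WAF, then run the FTW tests.
language: python
python:
  - "2.7"
install:
  - pip install ansible ftw
  - ansible-playbook -i inventory provision-waf.yml   # hypothetical playbook
script:
  - py.test -v tests/                                  # hypothetical invocation
```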
34
The other half of our problem
Additional information as needed
35
The other half of our problem
Ultimately, this gives contributors information about whether we will accept their contribution. In CRS these checks are linked to rules, so if a rule is found not to have associated tests, you will fail a check, as you will if an existing rule fails its tests. The total weight of these tests clocks in at around 1,500 tests against the rules.
36
Travis CI problems
Ultimately Travis is awesome, but you need to tell it what to do. To do full tests we need to install ModSecurity (or other WAFs); this was a nice part of Buildbot that we can't leverage here. To automate this we use Ansible: more Python goodness that allows us to easily set up our environment. Other teams use Vagrant to the same effect. Since Travis CI supports Docker, that was our ideal deployment; it turns out Docker isn't quite ready for a ModSecurity image, and ModSecurity v3 complicates this even more. (An Ansible sketch follows below.)
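A sketch of what such an Ansible playbook might look like on a Debian-family system. Package names, paths, and the repository URL are illustrative; the project's actual playbooks may differ.

```yaml
# Hypothetical playbook: install Apache + ModSecurity and fetch the CRS rules.
- hosts: waf
  become: yes
  tasks:
    - name: Install Apache and ModSecurity
      apt:
        name:
          - apache2
          - libapache2-mod-security2
        state: present

    - name: Fetch the CRS rules
      git:
        repo: https://github.com/SpiderLabs/owasp-modsecurity-crs
        dest: /etc/apache2/modsecurity-crs

    - name: Restart Apache
      service:
        name: apache2
        state: restarted
```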
37
Docker's take on modules
38
Future Work: what's left to do… a lot
- AMIs
- Minimal Click Deployment (MCD)
- Parsers/linters
- And more!
39
Thank Yous
Special thanks to Zach Allen for help developing FTW. Thanks also to Christian Peron and the rest of the Fastly team for great insight. Thanks to Christian Folini and Walter Hop from the CRS team. Thanks to Jared Strout for help reviewing the presentation. And of course, thanks to everyone in the community who uses CRS.
40
Questions