
Automated Whitebox Fuzz Testing
Patrice Godefroid (Microsoft Research)
Michael Y. Levin (Microsoft Center for Software Excellence)
David Molnar (UC-Berkeley, visiting MSR)

Fuzz Testing
– Send "random" data to the application (B. Miller et al.; inspired by line noise)
– Fuzzing a well-formed "seed" input
– Heavily used in security testing, e.g. the July 2006 "Month of Browser Bugs"

Whitebox Fuzzing
Combine fuzz testing with dynamic test generation:
– Run the code with some initial input
– Collect constraints on the input with symbolic execution
– Generate new constraints (by negating branch conditions)
– Solve the constraints with a constraint solver
– Synthesize new inputs
Leverages Directed Automated Random Testing (DART) [Godefroid-Klarlund-Sen-05, …]
See also the previous talk on EXE [Cadar-Engler-05, Cadar-Ganesh-Pawlowski-Engler-Dill-06, Dunbar-Cadar-Pawlowski-Engler-08, …]

Dynamic Test Generation

  void top(char input[4]) {
      int cnt = 0;
      if (input[0] == 'b') cnt++;
      if (input[1] == 'a') cnt++;
      if (input[2] == 'd') cnt++;
      if (input[3] == '!') cnt++;
      if (cnt >= 3) crash();
  }

  input = "good"

Dynamic Test Generation

  (same top() function as above, run on input = "good")

Path constraint collected from the trace:
  I0 != 'b'
  I1 != 'a'
  I2 != 'd'
  I3 != '!'

Collect constraints from the trace → create new constraints → solve the new constraints → new input.
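The loop described on the previous slides can be made concrete for this example. Below is a minimal, self-contained C sketch, not SAGE's actual implementation: the instrumented comparison sym_eq, the constraint record, and the one-line "solver" are invented for illustration, whereas real tools such as DART and SAGE instrument the program automatically at the source or binary level and pass the collected constraints to a real constraint solver.

#include <stdio.h>
#include <string.h>

typedef struct { int offset; char value; int taken; } constraint;  /* branch: input[offset] == value */

static constraint path[16];   /* the path constraint of the current run */
static int path_len;

/* Instrumented comparison: behaves like (input[off] == v) but also records
   the branch condition that was evaluated. */
static int sym_eq(const char input[4], int off, char v) {
    int taken = (input[off] == v);
    path[path_len++] = (constraint){ off, v, taken };
    return taken;
}

/* The example program from the slides, with its branches instrumented. */
static void top(const char input[4]) {
    int cnt = 0;
    if (sym_eq(input, 0, 'b')) cnt++;
    if (sym_eq(input, 1, 'a')) cnt++;
    if (sym_eq(input, 2, 'd')) cnt++;
    if (sym_eq(input, 3, '!')) cnt++;
    if (cnt >= 3) printf("crash()\n");   /* stands in for the crash() call on the slide */
}

int main(void) {
    char seed[5] = "good";

    path_len = 0;
    top(seed);                                    /* 1. run with the initial input     */

    for (int j = 0; j < path_len; j++)            /* 2. the collected path constraint  */
        printf("I%d %s '%c'\n", path[j].offset,
               path[j].taken ? "==" : "!=", path[j].value);

    /* 3. negate the last constraint (here I3 != '!') and "solve" it: for these
       byte-equality constraints the model is immediate, flip that one byte. */
    char next[5];
    memcpy(next, seed, sizeof seed);
    constraint last = path[path_len - 1];
    next[last.offset] = last.taken ? (char)(last.value + 1) : last.value;
    printf("new input: %.4s\n", next);            /* prints "goo!"                     */
    return 0;
}

Running it prints the four constraints from this slide and the new input "goo!" obtained by negating and solving the last one.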

Depth-First Search

  (same top() function as above)

Initial input: "good"
Path constraint: I0 != 'b', I1 != 'a', I2 != 'd', I3 != '!'

Depth-First Search

  (same top() function as above)

Negating the deepest constraint of "good" gives the new input "goo!"
Path constraint of the new run: I0 != 'b', I1 != 'a', I2 != 'd', I3 == '!'

Depth-First Search

  (same top() function as above)

Next DFS step: negate the I2 constraint of "good", giving the new input "godd"
Path constraint of the new run: I0 != 'b', I1 != 'a', I2 == 'd', I3 != '!'

Key Idea: One Trace, Many Tests
Measurements on an Office 2007 application:
– Time to gather constraints for one trace: 25m30s
– Tainted branches per trace: ~1000
– Time per branch to solve, generate a new test, and check for crashes: ~1s
Gathering a trace is expensive, but flipping each of its ~1000 branches costs only about a second each. Therefore, solve and check all branches for each trace!

Generational Search

  (same top() function as above)

From the parent input "good", negating each branch constraint in turn (I0 == 'b', I1 == 'a', I2 == 'd', I3 == '!') yields the "Generation 1" test cases: bood, gaod, godd, goo!
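A minimal sketch of the expansion step behind generational search, specialized to this toy example (branch j compares input[j] with the j-th character of "bad!"). The bound and generation fields mirror the bookkeeping described in the talk: a child may only flip branches at or beyond its parent's bound, so one trace yields a whole generation of tests. The exact rule used here (bound = j + 1) is chosen so a child never simply regenerates its parent and may differ in detail from SAGE's.

#include <stdio.h>
#include <string.h>

#define N 4
static const char target[N] = { 'b', 'a', 'd', '!' };   /* branch j tests input[j] == target[j] */

typedef struct { char bytes[N + 1]; int bound; int generation; } test_case;

/* Negate constraint j of the parent's path: if the parent matched target[j],
   move away from it; otherwise move onto it. */
static test_case flip(const test_case *parent, int j) {
    test_case child = *parent;
    child.bytes[j] = (parent->bytes[j] == target[j]) ? (char)(parent->bytes[j] + 1)
                                                     : target[j];
    child.bound = j + 1;                       /* children only flip deeper branches */
    child.generation = parent->generation + 1;
    return child;
}

/* One child test per constraint at or beyond the parent's bound. */
static void expand(const test_case *parent, test_case out[], int *n_out) {
    for (int j = parent->bound; j < N; j++)
        out[(*n_out)++] = flip(parent, j);
}

int main(void) {
    test_case seed = { "good", 0, 0 };
    test_case children[N];
    int n = 0;
    expand(&seed, children, &n);               /* generation 1: bood gaod godd goo! */
    for (int i = 0; i < n; i++)
        printf("gen %d, bound %d: %s\n",
               children[i].generation, children[i].bound, children[i].bytes);
    return 0;
}

For the seed "good" (bound 0) this prints exactly the four generation-1 tests from the slide: bood, gaod, godd, goo!.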

The Search Space

  (same top() function as above)

Each of the four input-dependent branches can be taken or not independently, so the example's search space contains 2^4 = 16 execution paths.

SAGE Architecture (Scalable Automated Guided Execution)
Input0 →
– Check for Crashes (AppVerifier)
– Trace Program (Nirvana) → trace file
– Gather Constraints (TruScan) → constraints
– Solve Constraints (Disolver) → Input1, Input2, …, InputN
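Read as pseudocode, the pipeline is a worklist loop over test files. The sketch below only shows how the four stages feed each other; the types and function signatures are invented stand-ins, not the real AppVerifier, Nirvana, TruScan, or Disolver interfaces.

#include <stddef.h>

typedef struct input_file  input_file;    /* a concrete test file on disk          */
typedef struct trace_file  trace_file;    /* instruction-level execution trace     */
typedef struct constraints constraints;   /* symbolic path constraint for one run  */

/* Hypothetical stand-ins for the four SAGE stages named on the slide. */
void         check_for_crashes(input_file *in);                                 /* AppVerifier */
trace_file  *trace_program(input_file *in);                                     /* Nirvana     */
constraints *gather_constraints(trace_file *t);                                 /* TruScan     */
size_t       solve_constraints(constraints *c, input_file *out[], size_t max);  /* Disolver    */

void sage_loop(input_file *input0,
               void (*enqueue)(input_file *), input_file *(*dequeue)(void)) {
    enqueue(input0);
    for (input_file *in = dequeue(); in != NULL; in = dequeue()) {
        check_for_crashes(in);                    /* run the app on the input, log crashes    */
        trace_file  *t = trace_program(in);       /* record an instruction-level trace        */
        constraints *c = gather_constraints(t);   /* symbolic execution over the trace        */
        input_file  *children[1024];
        size_t n = solve_constraints(c, children, 1024);  /* one new input per flipped branch */
        for (size_t i = 0; i < n; i++)
            enqueue(children[i]);                 /* Input1 ... InputN feed the next round    */
    }
}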

Initial Experiences with SAGE
Since the 1st MS-internal release in April '07: dozens of new security bugs found (most missed by blackbox fuzzers and static analysis)
Apps: image processors, media players, file decoders, … (details confidential)
Many bugs found were rated "security critical, severity 1, priority 1"
Now used regularly by several teams as part of their QA process
Credit is due to the entire SAGE team and users:
– CSE: Michael Levin (dev lead), Christopher Marsh, Dennis Jeffries (intern '06), Adam Kiezun (intern '07); plus the Nirvana/iDNA/TruScan contributors
– MSR: Patrice Godefroid, David Molnar (intern '07); plus the Disolver constraint solver
– Plus the many beta users who found and filed most of these bugs!

ANI Parsing - MS07-017

Initial input:       RIFF...ACONLIST B...INFOINAM....3D Blue Alternate v1.1..IART anih$...$ rate seq LIST....framicon
Crashing test case:  RIFF...ACONB    B...INFOINAM....3D Blue Alternate v1.1..IART anih$...$ rate seq anih....framicon

The crashing file differs from the seed in only a few specific byte values: only a 1 in 2^32 chance of hitting them at random!

Initial Experiments
#Instructions and input size: largest seen so far

  App Tested   | #Tests | Mean Depth | Mean #Instr. | Mean Size
  ANI          |   –    |     –      |   …,066,087  |    5,400
  Media 1      |   –    |     –      |   …,409,376  |   65,536
  Media 2      |   –    |     –      |   …,432,489  |   27,335
  Media 3      |   –    |     –      |   …,644,652  |   30,833
  Media 4      |   –    |     –      |   …,685,240  |   22,209
  Compression  |   –    |     –      |       –      |      –
  Office       |   –    |     –      |   …,731,248  |   45,064

Zero to Crash in 10 Generations

Starting with 100 zero bytes, SAGE generates a crashing test case generation by generation (the slides step through the hex dump of the evolving input; only the recoverable ASCII fragments are kept here):

– Generation 0: the initial input, 100 bytes of 0x00
– Generation 1: the "RIFF" signature appears
– Generation 2: "RIFF....***"
– Generation 3: "RIFF=...***"
– Generation 4: an "strh" chunk appears
– Generation 5: "strh....vids"
– Generation 6: an "strf" chunk appears
– Generation 7: "strf....("
– Generations 8–9: further bytes of the "strf" chunk are filled in
– Generation 10: the crashing test case ("strf²uv:(") and its bug ID; the same bug is found after only 3 generations when starting from a "well-formed" seed file

Different Crashes From Different Initial Input Files
Media 1: 60 machine-hours, 44,598 total tests, 357 crashes, 12 bugs
(table: crash buckets (bug IDs) vs. the initial input files that produced them: seed1 through seed7 plus 100 zero bytes; different seeds hit different, partly overlapping sets of bugs)

Most Bugs Found are "Shallow"
(chart: number of bugs found per generation)

Blackbox vs. Whitebox Fuzzing
Cost/precision tradeoffs:
– Blackbox is lightweight, easy, and fast, but gives poor coverage
– Whitebox is smarter, but complex and slower
– Recent "semi-whitebox" approaches are less smart but more lightweight: Flayer (taint-flow analysis, may generate false alarms), Bunny-the-fuzzer (taint-flow, source-based, heuristics to fuzz based on input usage), autodafe, etc.
Which is more effective at finding bugs? It depends…
– Many apps are so buggy that any form of fuzzing finds bugs!
– Once the low-hanging bugs are gone, fuzzing must become smarter: use whitebox techniques and/or user-provided guidance (grammars, etc.)
Bottom line: in practice, use both!

Related Work
Dynamic test generation (Korel, Gupta-Mathur-Soffa, etc.)
– Targets a specific statement; DART tries to cover "most" code
Static test generation: hard when symbolic execution is imprecise
Other "DART implementations" / symbolic-execution tools:
– EXE/EGT (Stanford): independent ['05-'06], closely related work
– CUTE/jCUTE (UIUC/Berkeley): similar to the Bell Labs DART implementation
– PEX (MSR): applies to .NET binaries, in conjunction with "parameterized unit tests" for unit testing of .NET programs
– YOGI (MSR): uses DART to check the feasibility of program paths generated statically by a SLAM-like tool
– Vigilante (MSR): generates worm filters
– BitBlaze/BitScope (CMU/Berkeley): malware analysis, and more
– Others…

Work-In-Progress
Catchconv (Molnar-Wagner) – Linux binaries
– Leverages Valgrind dynamic binary instrumentation
– Targets signed/unsigned conversion and other errors
– Current target: Linux media players; four bugs reported to the mplayer developers so far
– Comparison against the zzuf blackbox fuzz tester
– Uses STP to generate new test cases
– Code repository on SourceForge

SAGE Summary
Symbolic execution scales:
– SAGE: the most successful "DART implementation"
– Dozens of serious bugs found; used regularly at MSFT
Existing test suites become security tests
The future of fuzz testing?

Thank you! Questions?

Backup Slides

Most bugs are “shallow”

Code Coverage and New Bugs

Some of the Challenges
– Path explosion
– When to stop testing?
– Gathering initial inputs
– Processing new test cases
  – Triage: which are real crashes? (see the bucketing sketch below)
  – Reporting: prioritize test cases and communicate them to developers
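For the triage item above, the approach reported in the SAGE work is to bucket crashing test cases by a hash of the faulting call stack, so duplicates of the same bug collapse into a single bug ID (the "bug ID" seen on the earlier slides). The sketch below is a generic illustration of stack-hash bucketing using an FNV-1a hash; it is not the actual SAGE triage code, and the example stacks are made up.

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bucket a crash by hashing the return addresses of the faulting call stack.
   Two crashes with the same stack land in the same bucket, so thousands of
   crashing test cases collapse into a handful of bug IDs. */
static uint64_t stack_hash(const uintptr_t *frames, size_t n_frames) {
    uint64_t h = 0xcbf29ce484222325ULL;          /* FNV-1a offset basis */
    for (size_t i = 0; i < n_frames; i++) {
        h ^= (uint64_t)frames[i];
        h *= 0x100000001b3ULL;                   /* FNV-1a prime        */
    }
    return h;
}

int main(void) {
    /* Hypothetical stacks from two crashing tests: same frames, same bucket. */
    uintptr_t crash_a[] = { 0x401230, 0x40a110, 0x77f012 };
    uintptr_t crash_b[] = { 0x401230, 0x40a110, 0x77f012 };
    printf("bucket A: %016llx\n", (unsigned long long)stack_hash(crash_a, 3));
    printf("bucket B: %016llx\n", (unsigned long long)stack_hash(crash_b, 3));
    return 0;
}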