
1 Automated Whitebox Fuzz Testing Patrice Godefroid (Microsoft Research) Michael Y. Levin (Microsoft Center for Software Excellence) David Molnar (UC-Berkeley, visiting MSR) Presented by Yuyan Bao

2 Acknowledgement This presentation is extended and modified from: the presentation "Automated Whitebox Fuzz Testing" by David Molnar, and the presentation "Symbolic Execution for Model Checking and Testing" by Corina Păsăreanu (Kestrel).

3 Agenda Fuzz Testing Symbolic Execution Dynamic Test Generation Generational Search SAGE Architecture Conclusion Contribution Weakness Improvement

4 Fuzz Testing An effective technique for finding security vulnerabilities in software: apply invalid, unexpected, or random data to the inputs of a program. If the program fails (e.g., crashes or raises an access-violation exception), the defect is noted.
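
As a point of reference for this blackbox approach, here is a minimal sketch (my own illustration, not from the paper) of a random mutation fuzzer in C: it repeatedly flips a few random bytes of a well-formed seed and feeds the result to the program under test, reporting any input that triggers a crash. The function under_test() is only a placeholder standing in for the real target.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Placeholder for the program under test: returns nonzero on "crash". */
    static int under_test(const unsigned char *buf, size_t len) {
        /* A toy bug: crashes only if the first two bytes are 0xDE 0xAD. */
        return len >= 2 && buf[0] == 0xDE && buf[1] == 0xAD;
    }

    int main(void) {
        unsigned char seed[] = "GOOD INPUT FILE";
        unsigned char buf[sizeof seed];
        srand((unsigned)time(NULL));

        for (int iter = 0; iter < 1000000; iter++) {
            memcpy(buf, seed, sizeof seed);
            /* Mutate a few random bytes of the well-formed seed. */
            for (int k = 0; k < 3; k++)
                buf[rand() % (sizeof seed - 1)] = (unsigned char)(rand() % 256);
            if (under_test(buf, sizeof seed - 1)) {
                printf("crash found after %d iterations\n", iter + 1);
                return 0;
            }
        }
        printf("no crash found\n");
        return 0;
    }

Because the mutations are blind, hitting a bug guarded by several exact byte comparisons can take an enormous number of iterations; this is exactly the limitation that whitebox fuzzing addresses.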

5 Whitebox Fuzzing This paper presents an alternative whitebox fuzz testing approach inspired by recent advances in symbolic execution and dynamic test generation: – Run the code with some initial well-formed input – Collect constraints on the input with symbolic execution – Generate new constraints (by negating the collected constraints one by one) – Solve the new constraints with a constraint solver – Synthesize new inputs

6 Symbolic Execution A method for deriving test cases that satisfy a given path, performed as a simulation of the computation along that path. Initial path function = identity function, initial path condition = true. Each vertex on the path induces a symbolic composition on the path function and a logical constraint on the path condition: if an assignment was made, the assignment is composed into the path function; if a conditional decision was made, path condition := path condition ∧ branch condition (or its negation, for the branch not taken). Output: the path function and path condition for the given path. Slide from “White Box Testing and Symbolic Execution” by Michael Beder
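
Stated compactly in standard symbolic-execution notation (my own restatement, not copied from the slide): with path function σ mapping program variables to expressions over the symbolic inputs, and path condition pc, the update rules are:

    % assignment  x := e
    \sigma' = \sigma[x \mapsto e(\sigma)], \qquad pc' = pc
    % conditional  if (b): branch taken / branch not taken
    pc' = pc \land b(\sigma) \qquad\text{or}\qquad pc' = pc \land \lnot b(\sigma)

The example on the next slide applies exactly these rules to a small swap routine.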

7 Symbolic Execution (PC = “path condition”) – “Simulate” the code using symbolic values instead of program numeric data. Code:

    int x, y;
    if (x > y) {
        x = x + y;
        y = x - y;
        x = x - y;
        if (x - y > 0)
            assert (false);
    }

Symbolic execution tree (inputs x = X, y = Y):

    x:X, y:Y, PC: true
      true branch:   x:X, y:Y, PC: X>Y
        after the assignments: x:X+Y, y:Y  ->  x:X+Y, y:X  ->  x:Y, y:X   (PC: X>Y)
          true branch:   x:Y, y:X, PC: X>Y ∧ Y-X>0   (FALSE! Not reachable)
          false branch:  x:Y, y:X, PC: X>Y ∧ Y-X<=0
      false branch:  x:X, y:Y, PC: X<=Y

The assert is unreachable because X>Y ∧ Y-X>0 is unsatisfiable.

8 Dynamic Test Generation Execute the program starting with some initial inputs. Perform a dynamic symbolic execution to collect constraints on the inputs. Use a constraint solver to infer variants of the previous inputs that steer the next executions down new paths.

9 Dynamic Test Generation void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } input = “good” Running the program with random values for the 4 input bytes is unlikely to discover the error: at least three of the four bytes must take specific values, so the probability of hitting the crash is roughly 4/2^24 (about one in four million) per run. With traditional fuzz testing it is difficult to generate input values that drive the program through all of its possible execution paths.

10 Dynamic Test Generation void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } input = “good” Collect constraints from the trace: I0 != ‘b’, I1 != ‘a’, I2 != ‘d’, I3 != ‘!’. Create new constraints by negating them one at a time; solve the new constraints → new inputs.
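
To make the negate-and-solve step concrete, here is a small self-contained sketch (my own illustration, not SAGE code): the path constraint of running top() on “good” is modeled as four byte comparisons; negating one of them and solving it (here by brute force over the 256 possible byte values, where a real tool would call a constraint solver such as Disolver or Z3) yields a new input that flips exactly that branch.

    #include <stdio.h>
    #include <string.h>

    /* One constraint from the trace of top() on "good":
     * "input[pos] == value" either held (taken) or did not hold. */
    struct constraint { int pos; char value; int taken; };

    int main(void) {
        const char seed[5] = "good";
        /* Path constraint observed on the seed: none of the comparisons held. */
        struct constraint pc[4] = {
            {0, 'b', 0}, {1, 'a', 0}, {2, 'd', 0}, {3, '!', 0}
        };

        /* Negate each constraint in turn, keeping the earlier ones untouched,
         * and "solve" for a satisfying input by brute force over byte values. */
        for (int j = 0; j < 4; j++) {
            char input[5];
            memcpy(input, seed, 5);
            for (int b = 0; b < 256; b++) {
                int negated_ok = pc[j].taken ? ((char)b != pc[j].value)
                                             : ((char)b == pc[j].value);
                if (negated_ok) { input[pc[j].pos] = (char)b; break; }
            }
            printf("negate constraint %d -> new input \"%s\"\n", j, input);
        }
        return 0;
    }

Each new input flips exactly one branch of the original trace; these are precisely the “Generation 1” test cases shown on slide 16.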

11 Depth-First Search void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } Input “good”; path constraint collected: I0 != ‘b’, I1 != ‘a’, I2 != ‘d’, I3 != ‘!’.

12 Depth-First Search void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } Negating the last constraint gives I0 != ‘b’, I1 != ‘a’, I2 != ‘d’, I3 == ‘!’; solving it turns “good” into the next input “goo!”.

13 Practical limitation void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } Backtracking and negating the next constraint gives I0 != ‘b’, I1 != ‘a’, I2 == ‘d’, I3 != ‘!’; solving it turns “good” into “godd”. Each new execution flips only one constraint of the previous path, so the search makes slow progress: low efficiency!

14 Generational search BFS with a heuristic to maximize block coverage; Score returns the number of new blocks covered. Slide from “Symbolic Execution” by Kevin Wallace, CSE

15 Generational Search Key Idea: One Trace, Many Tests Office 2007 application: Time to gather constraints: 25m30s Tainted branches/trace: ~1000 Time per branch to solve, generate new test, check for crashes: ~1s Since gathering one trace takes ~25 minutes but each of its ~1000 branches can be solved and checked in about a second, it pays to solve+check all branches for each trace!

16 Generational Search Key Idea: One Trace, Many Tests void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } From the single trace of “good”, negating each branch condition in turn (I0 == ‘b’, I1 == ‘a’, I2 == ‘d’, I3 == ‘!’) yields the “Generation 1” test cases: goo!, godd, gaod, bood.

17 The Generational Search Space void top(char input[4]) { int cnt = 0; if (input[0] == ‘b’) cnt++; if (input[1] == ‘a’) cnt++; if (input[2] == ‘d’) cnt++; if (input[3] == ‘!’) cnt++; if (cnt >= 3) crash(); } Starting from the seed “good”:

    Generation 0: good
    Generation 1: goo!, godd, gaod, bood
    Generation 2: god!, gao!, gadd, boo!, bodd, baod
    Generation 3: gad!, bod!, bao!, badd
    Generation 4: bad!

The inputs in generations 3 and 4 make cnt >= 3 and therefore reach crash().
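
The search over this space can be driven by a simple worklist: take a trace, negate each of its branch constraints while keeping the earlier ones fixed, solve to obtain child inputs, and prioritize children by how many new blocks they cover. The following self-contained C sketch (my own illustration of the idea, not the SAGE implementation, which works on x86 traces and uses the Disolver constraint solver) enumerates the search space above for the toy top() example and reports which generated inputs crash.

    #include <stdio.h>
    #include <string.h>

    #define N 4
    static const char KEY[N] = { 'b', 'a', 'd', '!' };

    /* Concrete execution of top(): returns cnt; the path is the vector of
     * branch outcomes taken[i] = (in[i] == KEY[i]). */
    static int run_top(const char in[N], int taken[N]) {
        int cnt = 0;
        for (int i = 0; i < N; i++) {
            taken[i] = (in[i] == KEY[i]);
            if (taken[i]) cnt++;
        }
        return cnt;
    }

    struct item { char input[N + 1]; int bound; };

    int main(void) {
        struct item queue[64];
        int head = 0, tail = 0;

        queue[tail] = (struct item){ "good", 0 };
        tail++;

        while (head < tail) {
            struct item cur = queue[head++];
            int taken[N];
            int cnt = run_top(cur.input, taken);
            if (cnt >= 3)
                printf("CRASH on input \"%s\"\n", cur.input);

            /* Expand children: negate constraint j (for j >= bound) while
             * keeping constraints 0..j-1 as they were on this trace.
             * A real tool would score children by new block coverage and
             * expand the highest-scoring one first (generational search). */
            for (int j = cur.bound; j < N; j++) {
                struct item child = cur;
                child.input[j] = taken[j] ? 'x' : KEY[j];  /* flip branch j */
                child.bound = j + 1;
                queue[tail++] = child;
            }
        }
        return 0;
    }

With the seed “good” this enumerates the 16-input space shown above and prints a crash for badd, bao!, bod!, gad!, and bad!; the bound field is what prevents the same child from being generated twice.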

18 SAGE Architecture (Scalable Automated Guided Execution) A pipeline of tasks: Tester checks for crashes (AppVerifier); Tracer traces the program (Nirvana); CoverageCollector gathers constraints (TruScan); SymbolicExecutor solves constraints (Disolver). Data flow: Input0 → trace file → constraints → Input1, Input2, …, InputN.
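
Viewed as code, the pipeline is a loop that keeps feeding newly generated inputs back to the front. The sketch below illustrates the control flow only; the function bodies are placeholders, whereas the real tasks wrap AppVerifier, Nirvana, TruScan and Disolver, which are not reproduced here.

    #include <stdio.h>

    #define MAX_INPUTS 1024

    /* Placeholder tasks; in SAGE these wrap AppVerifier, Nirvana,
     * TruScan and Disolver respectively. */
    static int tester(int input)             { printf("test input %d\n", input); return 0; } /* 1 = crash (none in this stub) */
    static int tracer(int input)             { return input; }   /* returns a trace id       */
    static int coverage_collector(int trace) { return trace; }   /* returns a constraints id */
    static int symbolic_executor(int constraints, int out[], int max) {
        /* Negate constraints one by one and solve; here: pretend two new inputs result. */
        int n = (constraints < 8 && max >= 2) ? 2 : 0;
        for (int i = 0; i < n; i++) out[i] = constraints * 2 + i + 1;
        return n;
    }

    int main(void) {
        int inputs[MAX_INPUTS] = { 0 };   /* input 0 is the initial well-formed file */
        int head = 0, tail = 1;

        while (head < tail) {
            int input = inputs[head++];
            if (tester(input)) continue;                   /* crash found: report, skip tracing  */
            int trace       = tracer(input);               /* run under instrumentation          */
            int constraints = coverage_collector(trace);   /* gather branch constraints          */
            int produced    = symbolic_executor(constraints, &inputs[tail], MAX_INPUTS - tail);
            tail += produced;                              /* new inputs go back into the queue  */
        }
        return 0;
    }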

19 Initial Experiences with SAGE Since 1st MS internal release in April’07: dozens of new security bugs found (most missed by blackbox fuzzers, static analysis). Apps: image processors, media players, file decoders, … Many bugs found rated as “security critical, severity 1, priority 1”. Now used by several teams regularly as part of the QA process.

20 Experiments ANI Parsing - MS The initial input is a well-formed ANI cursor file (RIFF…ACON… with a single “anih” record); the crashing test case is identical except that a “LIST” tag is replaced by a second “anih” record, so only 4 bytes differ: only 1 in 2^32 chance at random! Initial input: 341 branch constraints, 1,279,939 total instructions. The crashing test case was found after 7,706 test cases, in 7 hrs 36 mins.

21 Initial Experiments (#Instructions and input size: largest seen so far)

    App Tested    | #Tests | Mean Depth | Mean #Instr. | Mean Size
    ANI           |        |            |  …,066,087   |     5,400
    Media 1       |        |            |  …,409,376   |    65,536
    Media 2       |        |            |  …,432,489   |    27,335
    Media 3       |        |            |  …,644,652   |    30,833
    Media 4       |        |            |  …,685,240   |    22,209
    Compression   |        |            |              |
    Office        |        |            |  …,731,248   |    45,064

22 Different Crashes From Different Initial Input Files The slide's matrix (omitted here) marks, for each crash bucket (Bug ID), which of the seven seed files (seed1 through seed7, one of them 100 zero bytes) led to that crash; different seeds expose different, partially overlapping sets of bugs. Media 1: 60 machine-hours, 44,598 total tests, 357 crashes, 12 bugs.

23 Conclusion Blackbox vs. Whitebox Fuzzing Cost/precision tradeoffs – Blackbox is lightweight, easy and fast, but has poor coverage – Whitebox is smarter, but complex and slower – Recent “semi-whitebox” approaches are less smart but more lightweight: Flayer (taint-flow analysis, may generate false alarms), Bunny-the-fuzzer (taint-flow, source-based, heuristics to fuzz based on input usage), autodafe, etc. Which is more effective at finding bugs? It depends… – Many apps are so buggy that any form of fuzzing finds bugs! – Once the low-hanging bugs are gone, fuzzing must become smarter: use whitebox and/or user-provided guidance (grammars, etc.) Bottom line: in practice, use both!

24 Contribution A novel search algorithm with a coverage-maximizing heuristic It performs symbolic execution of program traces at the x86 binary level It applies dynamic test generation to larger applications than previously attempted

25 Weakness Gathering initial inputs The class of inputs characterized by each symbolic execution is determined by the dependence of the program’s control flow on its inputs Not a general solution: only for x86 Windows applications; focuses only on file-reading applications; Nirvana simulates a given processor’s architecture

26 Improvement Portability: translate collected traces into an intermediate language; reproduce bugs on different platforms without re-simulating the program A better way to find initial inputs efficiently

27 Thank you! Questions?

