1 Scalable Dynamic Analysis for Automated Fault Location and Avoidance
Rajiv Gupta
Funded by NSF grants from the CPA, CSR, & CRI programs and by grants from Microsoft Research

2 Motivation
 Software bugs cost the U.S. economy about $59.5 billion each year [NIST 02].
 Embedded systems perform mission-critical / safety-critical tasks; a failure can lead to loss of mission or life.
  o Ariane 5: arithmetic overflow led to shutdown of the guidance computer.
  o Mars Climate Orbiter: missed unit conversion led to faulty navigation data.
  o Mariner I: missing superscripted bar in the specification for the guidance program led to its destruction 293 seconds after launch.
  o Mars Pathfinder: priority inversion error caused a system reset.
  o Boeing 747-400: loss of engine & flight displays while in flight.
  o Toyota hybrid Prius: VSC and the gasoline-powered engine shut off.
  o Therac-25: wrong dosage during radiation therapy.
  o ...

3 Overview
Diagram: program execution feeds two analyses, fault location via dynamic slicing (offline) and fault avoidance of environment faults (online); scalability is addressed through tracing + logging for long-running, multi-threaded programs.

4 Fault Location
Goal: assist the programmer in debugging by automatically narrowing the fault to a small section of the code.
 Dynamic information: data dependences, control dependences, values.
 Execution runs: one failed execution and its perturbations.

5 Dynamic Information
Diagram: a program and one of its executions give rise to a dynamic dependence graph whose edges capture data and control dependences.

6 Approach
Detect execution of a statement s such that:
 faulty code affects the value computed by s; or
 faulty code is affected by the value computed by s through a chain of dependences.
Estimate the set of potentially faulty statements from s (see the sketch below):
 Affects: statements from which s is reachable in the dynamic dependence graph (backward slice);
 Affected-by: statements that are reachable from s in the dynamic dependence graph (forward slice);
 intersect the slices to obtain a smaller fault candidate set.
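
Computing this fault candidate set is ordinary graph reachability over the dynamic dependence graph. Below is a minimal sketch in Python (not the actual tool; the graph encoding and function names are assumptions for illustration):

```python
# Minimal sketch: the dynamic dependence graph (DDG) is a dict mapping each
# statement instance to the instances it depends on (data + control).
from collections import defaultdict

def backward_slice(ddg, criterion):
    """Statement instances that affect `criterion` (backward reachability)."""
    visited, stack = set(), [criterion]
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        stack.extend(ddg.get(node, ()))
    return visited

def forward_slice(ddg, criterion):
    """Statement instances affected by `criterion` (forward reachability)."""
    reverse = defaultdict(list)           # invert the dependence edges
    for node, deps in ddg.items():
        for dep in deps:
            reverse[dep].append(node)
    return backward_slice(reverse, criterion)

def fault_candidates(ddg, erroneous_output, failure_input):
    """Intersection: affects the bad output AND is affected by the bad input."""
    return backward_slice(ddg, erroneous_output) & forward_slice(ddg, failure_input)
```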

7 Backward & Forward Slices
Diagram: the backward slice computed from the erroneous output [Korel & Laski, 1988] and the forward slice computed from the failure-inducing input [ASE-05].

8 Backward & Forward Slices
Diagram: forward slice of the failure-inducing input and backward slice of the erroneous output [ASE-05].
 For memory bugs the number of statements is very small (< 5).

9 Bidirectional Slices
Critical predicate: an execution instance of a predicate such that changing its outcome "repairs" the program state.
Bidirectional slice = backward slice of the critical predicate + forward slice of the critical predicate; intersecting it with the other slices yields the combined slice. [ICSE-06]
Search for the critical predicate:
 brute force: 32 to 155K predicate instances examined;
 after filtering and ordering: 1 to 7K predicate instances.
Found critical predicates in 12 out of 15 bugs.
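
The critical-predicate search can be pictured as re-running the failing input with one dynamic branch outcome inverted at a time. A hedged sketch of that predicate-switching loop (the harness function is hypothetical, and the real ICSE-06 search adds the filtering and ordering mentioned above):

```python
# Hypothetical harness: run_with_flipped_branch(inp, pred_instance) re-executes
# the failing input with one dynamic branch outcome inverted and returns the
# program output.  Brute force tries every dynamic predicate instance; the real
# search first filters and orders the candidates.

def find_critical_predicate(failing_input, expected_output,
                            predicate_instances, run_with_flipped_branch):
    for pred in predicate_instances:
        if run_with_flipped_branch(failing_input, pred) == expected_output:
            return pred        # flipping this instance "repairs" the execution
    return None                # no critical predicate found
```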

10 Pruning Slices
Confidence in a value v, C(v), ranges over [0,1]:
 0: all alternative values of v produce the same output;
 1: any change in v will change the output.
How is confidence estimated? From value profiles (see the sketch below). [PLDI-06]
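
A rough sketch of how value profiles could be turned into a confidence score and used to prune a slice; the scoring function below illustrates the idea and is not necessarily the exact PLDI-06 estimator:

```python
import math

# value_profile[n] = (values_yielding_same_output, possible_values): for the
# value produced by statement instance n, how many of the values it could have
# taken would still have produced the output actually observed.

def confidence(values_yielding_same_output, possible_values):
    if possible_values <= 1:
        return 1.0
    # 1.0: only the observed value yields this output (any change alters it);
    # 0.0: every possible value yields the same output.
    return 1.0 - math.log(values_yielding_same_output) / math.log(possible_values)

def prune_slice(slice_nodes, value_profile):
    """Drop instances whose value is known to be correct with confidence 1."""
    return {n for n in slice_nodes if confidence(*value_profile[n]) < 1.0}
```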

11 Test Programs
Real reported bugs:
 nine logical bugs (incorrect output) in Unix utilities: grep 2.5, grep 2.5.1, flex 2.5.31, make 3.80;
 six memory bugs (program crashes) in Unix utilities: gzip, ncompress, polymorph, tar, bc, tidy.
Injected bugs:
 Siemens suite (numerous versions): schedule, schedule2, replace, print_tokens, ...;
 Unix utilities: gzip, flex.

12 Dynamic Slice Sizes
(BS = backward slice, FS = forward slice, BiS = bidirectional slice; sizes in statements)

  Buggy Run          BS     FS     BiS
  flex 2.5.31(a)     695    605    225
  flex 2.5.31(b)     272    257    NA
  flex 2.5.31(c)      50   1368    NA
  grep 2.5            NA    731     88
  grep 2.5.1(a)       NA     32    111
  grep 2.5.1(b)       NA    599    NA
  grep 2.5.1(c)       NA    124     53
  make 3.80(a)       981   1239   1372
  make 3.80(b)      1290   1646   1436
  gzip-1.2.4          34      3     39
  ncompress-4.2.4     18      2     30
  polymorph-0.4.0     21      3     22
  tar 1.13.25        105    202    117
  bc 1.06            204    188    267
  tidy               554    367    541

13 Combined Slices
(BS ∩ FS ∩ BiS, as a percentage of BS; where BS is unavailable, as a percentage of executed statements, EXEC)

  Buggy Run          BS     BS ∩ FS ∩ BiS
  flex 2.5.31(a)     695    27 (3.9%)
  flex 2.5.31(b)     272    102 (37.5%)
  flex 2.5.31(c)      50    5 (10%)
  grep 2.5            NA    86 (7.4% of EXEC)
  grep 2.5.1(a)       NA    25 (4.9% of EXEC)
  grep 2.5.1(b)       NA    599 (53.3% of EXEC)
  grep 2.5.1(c)       NA    12 (0.9% of EXEC)
  make 3.80(a)       981    739 (75.3%)
  make 3.80(b)      1290    1051 (81.4%)
  gzip-1.2.4          34    3 (8.8%)
  ncompress-4.2.4     18    2 (14.3%)
  polymorph-0.4.0     21    3 (14.3%)
  tar 1.13.25        105    45 (42.9%)
  bc 1.06            204    102 (50%)
  tidy               554    161 (29.1%)

14 Evaluation of Pruning

  Program        Description           LOC      Versions   Tests
  print_tokens   Lexical analyzer       565        5        4072
  print_tokens2  Lexical analyzer       510        5        4057
  replace        Pattern replacement    563        8        5542
  schedule       Priority scheduler     412        3        2627
  schedule2      Priority scheduler     307        3        2683
  gzip           Unix utility          8009        1        1217
  flex           Unix utility         12418        8         525

Siemens suite: a single error is injected in each version. Not all versions are included; versions are excluded when:
 there is no output, or the very first output is wrong;
 the root cause is not contained in the BS (code-missing error).

15 Evaluation of Pruning

  Program         BS     Pruned Slice   Pruned Slice / BS
  print_tokens    110       35             31.8%
  print_tokens2   114       55             48.2%
  replace         131       60             45.8%
  schedule        117       70             59.8%
  schedule2        90       58             64.4%
  gzip            357      121             33.9%
  flex            727       27              3.7%

16 Effectiveness
 Backward slice (criterion: erroneous output) [AADEBUG-05]: ≈ 31% of executed statements.
 Combined slice (criteria: erroneous output, failure-inducing input, critical predicate) [ASE-05, ICSE-06]: ≈ 36% of the backward slice, ≈ 11% of executed statements.
 Pruned slice (confidence analysis) [PLDI-06]: ≈ 41% of the backward slice, ≈ 13% of executed statements.

17 Effectiveness
 Slicing is effective in locating faults: no more than 10 static statements had to be inspected.

  Program – bug             Inspected Stmts.
  mutt – heap overflow        8
  pine – stack overflow       3
  pine – heap overflow       10
  mc – stack overflow         2
  squid – heap overflow       5
  bc – heap overflow          3

18 Execution Omission Errors
Diagram: a predicate (A<0) whose wrong outcome causes the assignment to X to be skipped; the later use of X (=X) then has no explicit dynamic dependence on the faulty code, only an implicit one.
 Inspect the pruned slice.
 Dynamically detect an implicit dependence.
 Incrementally expand the pruned slice. [PLDI-07]
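
A hedged sketch of the dynamic detection step: re-execute with the suspect predicate's outcome switched and see which values change; those values implicitly depend on the predicate. The replay harness and its interface are assumptions for illustration:

```python
# Hypothetical replay harness: replay(inp, forced=None) re-executes the failing
# input, optionally forcing the outcome of one dynamic predicate instance, and
# returns a {variable: value} snapshot at the point of interest.

def implicit_dependences(inp, pred_instance, replay):
    baseline = replay(inp)
    switched = replay(inp, forced={pred_instance: True})   # take the omitted side
    # Variables whose values change implicitly depend on the predicate; the
    # pruned slice is then expanded along these newly discovered edges.
    return {v for v in baseline if v in switched and switched[v] != baseline[v]}
```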

19 Scalability of Tracing
Dynamic information needed:
 dynamic dependences, for all slicing;
 values, for confidence analysis (pruning slices).
Whole Execution Trace (WET): annotates the static program representation with this dynamic information.
 Trace size ≈ 15 bytes / instruction.

20 Trace Sizes & Collection Overheads
 Trace sizes are very large for even tens of seconds of execution.

  Program    Running Time   Dep. Trace   Collection Time
  mysql      13 s            21 GB        2886 s
  prozilla    8 s             6 GB        2640 s
  proxyC     10 s           456 MB         880 s
  mc         10 s            55 GB         418 s
  mutt       20 s           388 GB        3238 s
  pine       14 s           156 GB        2088 s
  squid      15 s            88 GB        1132 s

21 Compacting Whole Execution Traces
 Explicitly remember the dynamic control flow trace.
 Infer as many dynamic dependences as possible from control flow (≈ 94%); remember the remaining dependences explicitly (≈ 6%).
 Use a specialized graph representation to enable this inference.
 Explicitly remember the value trace.
 Use a context-based method to compress the dynamic control flow, value, and address traces.
 The representation supports bidirectional traversal with equal ease. [MICRO-04, TACO-05]
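
A toy illustration of the inference idea: when a use has a single possible defining statement, its dynamic dependence follows from the control flow trace alone and need not be stored; only ambiguous cases (e.g., definitions through pointers) require explicit edges. The trace encoding below is assumed for illustration:

```python
# control_flow_trace: list of executed statements, indexed by timestamp.
# static_defs_of_use: the set of statements that may define the value read by
# the use (from static analysis).

def infer_dependence(control_flow_trace, use_timestamp, static_defs_of_use):
    if len(static_defs_of_use) != 1:
        return None                     # ambiguous: record this edge explicitly
    (def_stmt,) = static_defs_of_use
    for ts in range(use_timestamp - 1, -1, -1):   # scan backward in time
        if control_flow_trace[ts] == def_stmt:
            return ts                   # timestamp of the inferred definition
    return None
```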

22 Dependence Graph Representation
Program:
  1: z=0
  2: a=0
  3: b=2
  4: p=&b
  5: for i = 1 to N do
  6:   if (i%2 == 0) then
  7:     p=&a
       endif
  8:   a=a+1
  9:   z=2*(*p)
     endfor
 10: print(z)
Execution for input N=2 (statement instances, with k-th execution of statement s written s^k):
  1^1 2^1 3^1 4^1 5^1 6^1 8^1 9^1 5^2 6^2 7^1 8^2 9^2 10^1

23 Dependence Graph Representation
The same execution for input N=2, with each statement instance given a global timestamp 1-14:
  1^1 2^1 3^1 4^1 5^1 6^1 8^1 9^1 5^2 6^2 7^1 8^2 9^2 10^1
Diagram: the static statements are annotated with the timestamps at which they executed, and the control-flow edges are labeled with the taken outcomes (T/F), so that most dynamic dependences can be recovered from this annotated representation.
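
A minimal sketch of how such a timestamped dependence graph could be recorded: each executed statement instance gets a global timestamp, and a map from each variable or address to the timestamp of its last definition yields the data dependence edges (the (def, use) timestamp labels on the following slides). The trace encoding is an assumption for illustration, not the MICRO-04/TACO-05 representation itself:

```python
def build_ddg(trace):
    """`trace` is a list of (stmt, defs, uses) tuples in execution order."""
    last_def = {}            # variable/address -> timestamp of its last write
    edges = []               # (use_timestamp, def_timestamp) dependence edges
    for ts, (stmt, defs, uses) in enumerate(trace, start=1):
        for var in uses:
            if var in last_def:
                edges.append((ts, last_def[var]))
        for var in defs:
            last_def[var] = ts
    return edges

# For the trace above, the use of *p by 9^1 (timestamp 8) depends on the
# definition of b at timestamp 3, since p still points to b in iteration 1.
```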

24 Transform: Traces of Blocks
Diagram: the program is partitioned into basic blocks, and the whole-program control-flow trace is transformed into per-block traces of the timestamps at which each block executed.

25 Infer: Local Dependence Labels
Diagram: when the definition and the use of X occur in the same block execution, the edge labels (10,10), (20,20), (30,30) simply repeat the block's own timestamps 10, 20, 30 and can be inferred rather than stored; only labels that cross block instances, such as (20,21), must be kept explicitly.

26 Transform: Local Dep. Labels
Diagram: with a potentially interfering store (*p = ...) between the definition and the use of X, the local dependence holds in some block instances but not in others; labels that follow the inferred pattern are dropped and only the exceptional instances (e.g., (20,20)) are stored explicitly.

27 Transform: Local Dep. Labels
Diagram: a second example of the transformation in which the use (=Y) falls in a later block instance; the labels (10,11) and (20,21) span consecutive timestamps and are rewritten against the block timestamps 11 and 21.

28 Group: Non-Local Dep. Edges
Diagram: non-local dependence edges between blocks, e.g., with labels (10,21) and (20,11), are grouped so that all labels between the same pair of blocks are stored on a single edge.

29 Compacted WET Sizes

  Program      Statements Executed   WET Size (MB)         Before / After
               (Millions)            Before       After
  300.twolf        690               10,666         647         16.49
  256.bzip2        751               11,921         221         54.02
  255.vortex       609                8,748         105         83.63
  197.parser       615                8,730         188         46.34
  181.mcf          715               10,541         416         25.33
  164.gzip         650                9,688         538         18.02
  130.li           740               10,399         203         51.22
  126.gcc          365                5,238          89         58.84
  099.go           685               10,369         575         18.04
  Average          647                9,589         331         41.33

 Compacted size ≈ 4 bits / instruction.

30 [PLDI-04] vs. [ICSE-03] Slicing Times

31 Dep. Graph Generation Times
 Offline post-processing after collecting address and control flow traces: ≈ 35x of execution time.
 Online techniques [ICSM-07]:
  o information flow: 9x to 18x slowdown;
  o basic block opt.: 6x to 10x slowdown;
  o trace level opt.: 5.5x to 7.5x slowdown;
  o dual core: ≈ 1.5x slowdown.
 Online filtering techniques:
  o forward slice of all inputs;
  o user-guided bypassing of functions.

32 Reducing Online Overhead
 Record non-deterministic events online:
  o less than 2x overhead;
  o enables deterministic replay of executions.
 Trace faulty executions off-line:
  o replay the execution;
  o switch on tracing;
  o collect and inspect traces.
 Trace analysis is still a problem:
  o the traces correspond to huge executions;
  o the off-line overhead of trace collection is still significant.
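
A minimal sketch of the record/replay idea (not the actual logging system): results of non-deterministic events are appended to a log during the cheap online run and returned from the log during off-line replay, where expensive tracing can then be switched on:

```python
class EventLog:
    """Append-only log of non-deterministic event results."""
    def __init__(self):
        self.events, self.cursor = [], 0

    def record(self, value):      # online: cheap, goal is < 2x overhead
        self.events.append(value)
        return value

    def replay(self):             # off-line: return the logged value instead
        value = self.events[self.cursor]
        self.cursor += 1
        return value

# Recording run:  data = log.record(sock.recv(4096))
# Replaying run:  data = log.replay()   # no network needed; tracing can be on
```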

33 Reducing Trace Sizes
Checkpointing schemes:
 trace from the most recent checkpoint;
 checkpoints are of the order of minutes apart;
 better, but the trace sizes are still very large.
Exploiting program characteristics [ISSTA-07, FSE-06]:
 multithreaded and server-like programs (examples: mysql, apache);
 each request spawns a new thread;
 do not trace irrelevant threads.

34 Beyond Tracing
 Checkpoint: capture the memory image.
 Execute and record (log) events.
 Upon a crash, roll back to the checkpoint.
 Reduce the log and replay the execution using the reduced log.
 Turn on tracing during replay.
Diagram: checkpoint, then log up to the crash (x); after log reduction, only the shortened log is replayed, with tracing enabled during the replayed portion.
 Applicable to multithreaded programs. [ISSTA-07]

35 An Example
 A mysql bug: the "load ..." command will crash the server if no database is specified.
 Without typing "use database_name", thd->db is NULL.

36 Example – Execution and Log File
Client actions over time: run the mysql server; User 1 connects to the server; User 2 connects to the server; User 1: "show databases"; User 2: "use test", "select * from b"; User 1: "load data into table1".
Logged server-side events (color-coded by thread in the slide: blue = T0, red = T1, green = T2, gray = scheduler):
  open path=/etc/my.cnf ...
  wait for connection
  create Thread 1
  wait for command
  create Thread 2
  wait for command
  recv "show databases" / handle command
  recv "load data ..." / handle -- (server crashes)
  recv "use test; select * from b" / handle command

37 Execution Replay Using the Reduced Log
Client actions in the original run: run the mysql server; User 1 connects to the server; User 2 connects to the server; User 1: "show databases"; User 2: "show databases", "select * from b"; User 1: "load data into table1".
Events replayed from the reduced log (only the thread handling the crashing request):
  open path=/etc/my.cnf ...
  wait for connection
  create Thread 1
  recv "load data ..." / handle -- (server crashes)

38 Execution Reduction
 Effects of reduction:
  o irrelevant threads are eliminated;
  o replay-only vs. replay & trace portions are distinguished.
 How? By identifying inter-thread dependences:
  o event dependences, found using the log;
  o file dependences, found using the log;
  o shared-memory dependences, found using replay.
 Space requirement reduced by 4x; time requirement reduced by 2x.
 A naïve approach requires the thread id of the last writer of each address.
 Space- and time-efficient detection (see the sketch below):
  o memory regions: non-shared vs. shared;
  o locality of references to regions.
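
A hedged sketch of the region-based shared-memory dependence detection: memory is tracked at region granularity, a region stays cheap while only one thread touches it, and per-address last-writer tracking is enabled only for regions that become shared. Region size and bookkeeping details are illustrative assumptions:

```python
REGION = 4096                             # assumed region granularity (bytes)

class DependenceDetector:
    def __init__(self):
        self.region_owner = {}            # region -> first thread to touch it
        self.shared_regions = set()       # regions touched by more than one thread
        self.last_writer = {}             # address -> thread (shared regions only)
        self.deps = set()                 # (writer_thread, reader_thread) pairs

    def access(self, thread, addr, is_write):
        region = addr // REGION
        owner = self.region_owner.setdefault(region, thread)
        if owner != thread:
            self.shared_regions.add(region)        # promote: region is now shared
        if region in self.shared_regions:          # detailed tracking only if shared
            writer = self.last_writer.get(addr)
            if writer is not None and writer != thread:
                self.deps.add((writer, thread))    # inter-thread dependence found
            if is_write:
                self.last_writer[addr] = thread
        # (a real detector would also account for the owner's writes made while
        #  the region was still private, which this sketch omits)
```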

39 Experimental Results

40 Experimental Results
Chart: trace sizes (number of dependences) for each program-bug, original vs. optimized (reduced) execution.

41 Experimental Results
Chart: execution times (seconds) with logging for each program-bug, original vs. optimized.

42 Debugging System
Architecture: the application binary runs on an execution engine built on Valgrind, which instruments the code and emits compressed traces (the WET); the static binary analyzer Diablo supplies control dependence information; the slicing module computes slices over the WET from the program input and output; record/replay based on Jockey maintains the checkpoint + log and produces the reduced log used for replay.

43 Fault Avoidance
A large number of faults in server programs are caused by the environment: 56% of faults in the Apache server.
Types of faults handled:
 atomicity violation faults: try alternate scheduling decisions;
 heap buffer overflow faults: pad memory requests (see the sketch below);
 bad user request faults: drop bad requests.
Avoidance strategy:
 recover the first time, prevent later;
 record the change that avoided the fault.
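
A toy sketch of the "pad memory requests" change for heap buffer overflows, as one instance of "recover first time, prevent later"; the names and padding size are illustrative, not the actual implementation:

```python
PAD_BYTES = 64                 # illustrative padding added after a diagnosed fault
padded_sites = set()           # allocation sites whose requests we learned to pad

def avoiding_malloc(site, size, real_malloc):
    """Wrapper applied on every run; pads requests from previously faulty sites."""
    extra = PAD_BYTES if site in padded_sites else 0
    return real_malloc(size + extra)

def on_heap_overflow(site):
    """Called once the first failure is diagnosed: record the avoiding change."""
    padded_sites.add(site)     # recover the first time, prevent later
```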

44 Experiments

  Program   Type of Bug           Env. Change    # of Trials   Time taken (secs.)
  mysql-1   Atomicity Violation   Scheduler          1              130
  mysql-2   Atomicity Violation   Scheduler          1               65
  mysql-3   Atomicity Violation   Scheduler          1               65
  mysql-4   Buffer Overflow       Mem. Padding       1              700
  pine-1    Buffer Overflow       Mem. Padding       1              325
  pine-2    Buffer Overflow       Mem. Padding       1              270
  mutt-1    Bad User Req.         Drop Req.          3              205
  bc-1      Bad User Req.         Drop Req.          3              290
  bc-2      Bad User Req.         Drop Req.          3              195

45 Summary
Diagram (as in the overview): program execution feeds fault location via dynamic slicing (offline) and fault avoidance of environment faults (online); scalability comes from tracing + logging for long-running, multi-threaded programs.

