1 EMPIRICAL EVALUATION OF INNOVATIONS IN AUTOMATIC REPAIR
CLAIRE LE GOUES. SITE VISIT, FEBRUARY 7, 2013.

2 “Benchmarks set standards for innovation, and can encourage or stifle it.” (Blackburn et al.)

3 AUTOMATIC PROGRAM REPAIR OVER TIME
2009: 15 papers on automatic program repair*
2011: Dagstuhl seminar on self-repairing programs
2012: 30 papers on automatic program repair*
2013: dedicated program repair track at ICSE
*Manually reviewed the results of a search of the ACM Digital Library for “automatic program repair.”

4 CURRENT APPROACH
Manually sift through bugtraq data. Indicative example: the Axis project for automatically repairing concurrency bugs required 9 weeks of sifting to find 8 bugs to study. Direct quote from Charles Zhang, senior author, on the process: “it's very painful.”
This makes it very difficult to compare against previous or related work, or to generate sufficiently large datasets.

5 GOAL: HIGH-QUALITY EMPIRICAL EVALUATION

6 SUBGOAL: HIGH-QUALITY BENCHMARK SUITE

7 BENCHMARK REQUIREMENTS
Indicative of important real-world bugs, found systematically in open-source programs.
Support a variety of research objectives:
  “Latitudinal” studies: many different types of bugs and programs.
  “Longitudinal” studies: many iterative bugs in one program.
Scientifically meaningful: passing the test cases ⇒ repair.
Admit push-button, simple integration with tools like GenProg.

9 SYSTEMATIC BENCHMARK SELECTION
Goal: a large set of important, reproducible bugs in non-trivial programs.
Approach: use historical data to approximate the discovery and repair of bugs in the wild.
http://genprog.cs.virginia.edu

10 NEW BUGS, NEW PROGRAMS
Indicative of important real-world bugs, found systematically in open-source programs: add new programs to the set, with as wide a variety of types as possible (supports “latitudinal” studies).
Support a variety of research objectives: allow studies of iterative bugs, development, and repair by generating a very large set of bugs (100) in one program, php (supports “longitudinal” studies).

11
Program    LOC        Tests   Bugs  Description
fbc        97,000     773     3     Language (legacy)
gmp        145,000    146     2     Multiple precision math
gzip       491,000    12      5     Data compression
libtiff    77,000     78      24    Image manipulation
lighttpd   62,000     295     9     Web server
php        1,046,000  11,995  100   Language (web)
python     407,000    355     11    Language (general)
wireshark  2,814,000  63      7     Network packet analyzer
valgrind   711,000    595     2     Simulator and debugger
vlc        522,000    17      ??    Media player
svn        629,000    1,748   ??    Source control
Total      7,001,000  16,077  163

12 BENCHMARK REQUIREMENTS
Indicative of important real-world bugs, found systematically in open-source programs.
Support a variety of research objectives:
  “Latitudinal” studies: many different types of bugs and programs.
  “Longitudinal” studies: many iterative bugs in one program.
Scientifically meaningful: passing the test cases ⇒ repair.
Admit push-button, simple integration with tools like GenProg.

13 TEST CASE CHALLENGES
They must exist. Sometimes, but not always, true (see: Jonathan Dorn).

14 BENCHMARKS
Program    LOC        Tests   Bugs  Description
fbc        97,000     773     3     Language (legacy)
gmp        145,000    146     2     Multiple precision math
gzip       491,000    12      5     Data compression
libtiff    77,000     78      24    Image manipulation
lighttpd   62,000     295     9     Web server
php        1,046,000  11,995  100   Language (web)
python     407,000    355     11    Language (general)
wireshark  2,814,000  63      7     Network packet analyzer
valgrind   711,000    595     2     Simulator and debugger
Total      5,850,000  14,312  163

15 TEST CASE CHALLENGES
They must exist. Sometimes, but not always, true (see: Jonathan Dorn).
They should be of high quality. This has been a challenge from day 0 (nullhttpd); Lincoln Labs noticed it too (sort). In both cases, adding test cases led to better repairs.

16 TEST CASE CHALLENGES
They must exist. Sometimes, but not always, true (see: Jonathan Dorn).
They should be of high quality. This has been a challenge from day 0 (nullhttpd); Lincoln Labs noticed it too (sort). In both cases, adding test cases led to better repairs.
They must be automated: runnable one at a time, programmatically, from within another framework.

17 PUSH-BUTTON INTEGRATION
Need to be able to compile and run new variants programmatically, and to run test cases one at a time. This is not simple, and it becomes increasingly tricky as we scale up to real-world systems. Much of the challenge is unrelated to the program in question, instead requiring highly technical knowledge of OS-level details.

18 DIGRESSION ON WAIT()
Calling a process from within another process: system("run test 1"); ...; wait().
wait() returns the process exit status. This is complex. Example: a system() call can fail because the OS ran out of memory creating the process, or because the process itself ran out of memory. How do we tell the difference? Answer: bit masking on the status word (see the sketch below).
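To make the digression concrete, here is a minimal C sketch of decoding a child's status using the POSIX macros that wrap the bit masking the slide alludes to. The test command is a placeholder, not GenProg's actual harness.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    int main(void) {
        /* system() forks a shell to run the command; the command name
           is a placeholder for one test of the program under repair. */
        int status = system("./run_test 1");

        if (status == -1) {
            /* system() could not create the child at all, e.g., fork()
               failed because the OS was out of memory. */
            perror("system");
            return 1;
        }
        if (WIFEXITED(status)) {
            /* The test ran to completion; WEXITSTATUS masks out the
               low 8 bits holding its exit code. */
            printf("test exited with status %d\n", WEXITSTATUS(status));
        } else if (WIFSIGNALED(status)) {
            /* The test itself died, e.g., SIGSEGV on a crash, or
               SIGKILL from the OOM killer if the test ran out of memory. */
            printf("test killed by signal %d\n", WTERMSIG(status));
        }
        return 0;
    }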

19 REAL-WORLD COMPLEXITY
Moral: integration is tricky, and it lends itself to human mistakes.
Possibility 1: the original programmers make mistakes in developing the test suite. Test cases can have bugs, too.
Possibility 2: we (GenProg devs/users) make mistakes in integration. A few old php test cases were not up to our standards: faulty bitshift math in extracting the return-value components.

20 INTEGRATION CONCERNS
Interested in more, and better, benchmark design with easy integration (without gnarly OS details). Virtual machines provide one approach.
Need a better definition of “high-quality test case” vs. “low-quality test case.” Useful sanity checks (sketched below): Can the empty program pass it? Can every program pass it? Can the “always crashes” program pass it?
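A minimal sketch of those sanity checks, assuming each test is a shell script that takes the path of a candidate binary and exits 0 on “pass.” The test name and the degenerate binaries (an empty program, an always-crashing program) are placeholders invented for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    /* Returns 1 if `binary` passes `test`, 0 otherwise. */
    static int passes(const char *test, const char *binary) {
        char cmd[512];
        snprintf(cmd, sizeof cmd, "%s %s", test, binary);
        int status = system(cmd);
        return status != -1 && WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }

    int main(void) {
        const char *test = "./tests/test1.sh";   /* placeholder */
        /* A low-quality test is one that degenerate programs can pass. */
        if (passes(test, "./empty_program"))
            printf("low quality: the empty program passes\n");
        if (passes(test, "./always_crashes"))
            printf("low quality: the always-crashing program passes\n");
        return 0;
    }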

21 CURRENT REPAIR SUCCESS
Over the past year, we have conducted studies of representation and operators for automatic program repair:
One-point crossover on the patch representation (see the sketch below).
Non-uniform mutation operator selection.
An alternative fault localization framework.
The results on the next slide incorporate “all the bells and whistles”:
Improvements based on those large-scale studies.
Manually confirmed quality of the testing framework.
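As a purely illustrative aside, a minimal C sketch of one-point crossover over a patch (edit-list) representation: each variant is a variable-length list of edits, and crossover swaps the tails of two parents at chosen cut points. The Edit and Patch types and all names are invented here, not GenProg's actual data structures (GenProg itself is written in OCaml).

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct { char op[8]; int src, dst; } Edit;  /* e.g. INSERT src at dst */
    typedef struct { Edit *edits; int len; } Patch;

    /* Child = a's prefix up to cut_a, followed by b's suffix from cut_b. */
    static Patch crossover(const Patch *a, int cut_a,
                           const Patch *b, int cut_b) {
        Patch c;
        c.len = cut_a + (b->len - cut_b);
        c.edits = malloc(c.len * sizeof(Edit));
        memcpy(c.edits, a->edits, cut_a * sizeof(Edit));
        memcpy(c.edits + cut_a, b->edits + cut_b,
               (b->len - cut_b) * sizeof(Edit));
        return c;
    }

    int main(void) {
        Edit ea[] = { {"DELETE", 3, 0}, {"INSERT", 5, 9} };
        Edit eb[] = { {"REPLACE", 2, 7}, {"INSERT", 1, 4}, {"DELETE", 8, 0} };
        Patch a = { ea, 2 }, b = { eb, 3 };
        Patch child = crossover(&a, 1, &b, 1);  /* a[0] ++ b[1..2] */
        for (int i = 0; i < child.len; i++)
            printf("%s %d %d\n", child.edits[i].op,
                   child.edits[i].src, child.edits[i].dst);
        free(child.edits);
        return 0;
    }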

22
Program    Previous Results  Current Results
fbc        1/3               1/3
gmp        1/2               1/2
gzip       1/5               1/5
libtiff    17/24             17/24
lighttpd   5/9               5/9
php        28/44             55/100
python     1/11              2/11
wireshark  1/7               4/7
valgrind   ---               1/2
Total      55/105            87/163

23 TRANSITION

24 REPAIR TEMPLATES
CLAIRE LE GOUES, SHIRLEY PARK. DARPA SITE VISIT, FEBRUARY 7, 2013.

25 BIO + CS INTERACTION

26 IMMUNOLOGY: T-CELLS
Immune response is equally fast for large and small animals: the human lung is 100x larger than the mouse lung, yet it still finds influenza infections in ~8 hours.
The immune system successfully balances local search and global response.
It also balances generic against specialized T-cells: rapid response to new pathogens vs. long-term memory of previous infections (cf. vaccines).

27 [Diagram: the repair loop. INPUT → MUTATE → EVALUATE FITNESS → ACCEPT (to OUTPUT) or DISCARD]

28 AUTOMATIC SOFTWARE REPAIR
Tradeoff between generic mutation actions and more specific action templates (see the sketch below):
Generic: INSERT, DELETE, REPLACE
Specific: if ( != NULL) { }
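A hypothetical illustration of the specific template at work: the null-check template wraps the statement at a candidate fault location, whereas a generic INSERT could splice in any statement from elsewhere in the program. The code and names below are invented for the example; lookup is assumed to be defined elsewhere.

    struct node { int count; };
    struct node *lookup(const char *key);   /* assumed defined elsewhere */

    /* Before repair: crashes when lookup() returns NULL. */
    void increment_before(const char *key) {
        struct node *n = lookup(key);
        n->count++;
    }

    /* After applying the template  if (<ptr> != NULL) { <stmt> }  : */
    void increment_after(const char *key) {
        struct node *n = lookup(key);
        if (n != NULL) {
            n->count++;
        }
    }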

29 HYPOTHESIS: GENPROG CAN REPAIR MORE BUGS, AND REPAIR BUGS MORE QUICKLY, IF WE AUGMENT MUTATION ACTIONS WITH “REPAIR TEMPLATES.”

30 OPTION 1: PREVIOUS CHANGES
Insight: just as T-cells “remember” previous infections, abstract previous fixes to generate new mutations.
Approach: model previous changes using structured documentation; cluster a large set of changes by similarity; abstract the center of each cluster.
Example: if ( < 0) return 0; else
(One possible instantiation is sketched below.)
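A hypothetical instantiation of that abstracted template, with the hole filled by an in-scope variable at the fault site. read_bytes and all names here are invented for illustration.

    int read_bytes(int fd, char *buf, int want);   /* assumed helper */

    /* Template with hole:  if (<expr> < 0) return 0; else ...  */
    int buffer_fill(int fd, char *buf, int want) {
        int got = read_bytes(fd, buf, want);
        if (got < 0)        /* hole <expr> instantiated with "got" */
            return 0;
        else
            return got;
    }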

31 OPTION 2: EXISTING BEHAVIOR
Insight: looking things up at a library gives people the best examples of what they want to reproduce.
Approach: generate static paths through C programs; mine API usage patterns from those paths; abstract the patterns into mutation templates.
Example: while (it.hasNext())
(A C analogue is sketched below.)
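The slide's example is Java-flavored; since the approach mines C programs, a hypothetical C analogue might be the fopen/check/fclose protocol below, as it could be abstracted from many static paths. This is an invented illustration, not a template mined in the actual study.

    #include <stdio.h>

    long file_size(const char *path) {
        FILE *f = fopen(path, "rb");
        if (f == NULL)      /* mined pattern: check fopen's result */
            return -1;
        fseek(f, 0, SEEK_END);
        long n = ftell(f);
        fclose(f);          /* mined pattern: pair fopen with fclose */
        return n;
    }

    int main(void) {
        printf("%ld bytes\n", file_size("example.txt"));  /* placeholder */
        return 0;
    }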

32 THIS WORK IS ONGOING.

33 CONCLUSIONS
We are generating a benchmark suite to support GenProg research, integration and tech transfer, and the automatic repair community at large.
Current GenProg results for the 12-hour repair scenario: 87/163 (53%) of the real-world bugs in the dataset.
Repair templates will augment GenProg's mutation operators to help it repair more bugs, and repair them more quickly.

