
Jeremy Nimmer, page 1: Automatic Generation of Program Specifications
Jeremy Nimmer, MIT Lab for Computer Science
Joint work with Michael Ernst

Jeremy Nimmer, page 2: Synopsis
- Specifications useful for many tasks
- Use of specifications has practical difficulties
- Dynamic analysis can capture specifications
  - Recover from existing code
  - Infer from traces
- Results are accurate (90%+)
  - Specification matches implementation

Jeremy Nimmer, page 3: Outline
- Motivation
- Approach: Generate and check specifications
- Evaluation: Accuracy experiment
- Conclusion

Jeremy Nimmer, page 4: Advantages of specifications
- Describe behavior precisely
- Permit reasoning using summaries
- Can be verified automatically

Jeremy Nimmer, page 5: Problems with specifications
- Describe behavior precisely
  - Tedious and difficult to write and maintain
- Permit reasoning using summaries
  - Must be accurate if used in lieu of code
- Can be verified automatically
  - Verification may require uninteresting annotations

Jeremy Nimmer, page 6: Solution
Automatically generate and check specifications from the code

Jeremy Nimmer, page 7: Solution scope
- Generate and check "complete" specifications
  - Very difficult
- Generate and check partial specifications (sketched below)
  - Nullness, types, bounds, modification targets, ...
- Need not operate in isolation
  - User might have some interaction
- Goal: decrease overall effort
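
As an illustration of what a partial specification covers, the sketch below annotates an invented class with only nullness and bounds properties, written as JML/ESC-style annotation comments. The class, fields, and method names are assumptions for this example, not material from the talk.

```java
// Hypothetical example of a partial specification: only nullness and
// array-bounds properties are stated, not full functional behavior.
public class Table {
    private Object[] entries = new Object[10];

    //@ requires 0 <= i && i < 10;
    public Object get(int i) {
        return entries[i]; // in bounds by the precondition
    }

    //@ requires value != null;
    //@ requires 0 <= i && i < 10;
    public void set(int i, Object value) {
        entries[i] = value; // stored element is known to be non-null
    }
}
```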

Jeremy Nimmer, page 8: Outline
- Motivation
- Approach: Generate and check specifications
- Evaluation: Accuracy experiment
- Conclusion

Jeremy Nimmer, page 9: Previous approaches
- Generation:
  - By hand
  - Static analysis
- Checking:
  - By hand
  - Non-executable models

Jeremy Nimmer, page 10: Our approach
- Dynamic detection proposes likely properties
- Static checking verifies properties
- Combining the techniques overcomes the weaknesses of each:
  - Ease annotation
  - Guarantee soundness

Jeremy Nimmer, page 11: Daikon: Dynamic invariant detection
Look for patterns in values the program computes (example below):
- Instrument the program to write data trace files
- Run the program on a test suite
- Invariant detector reads data traces, generates potential invariants, and checks them
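
To make the detection step concrete, here is a small invented target program together with a driver standing in for its test suite. The class and the listed properties are assumptions for illustration, not actual Daikon output; over executions like these, a dynamic detector could plausibly report the invariants named in the comment.

```java
// Invented target for dynamic invariant detection.
public class Counter {
    private int count = 0;

    public void increment() { count++; }

    public int get() { return count; }

    // Driver standing in for a test suite. Over traces from runs like
    // this one, a detector such as Daikon could plausibly report:
    //   this.count >= 0          (object invariant)
    //   \result == this.count    (postcondition of get)
    public static void main(String[] args) {
        Counter c = new Counter();
        for (int i = 0; i < 100; i++) {
            c.increment();
        }
        System.out.println(c.get()); // prints 100
    }
}
```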

Jeremy Nimmer, page 12: ESC/Java: Invariant checking
- ESC/Java: Extended Static Checker for Java
- Lightweight technology: intermediate between type-checker and theorem-prover; unsound
- Intended to detect array bounds and null dereference errors, and annotation violations (example below):

    /*@ requires x != null */
    /*@ ensures this.a[this.top] == x */
    void push(Object x);

- Modular: checks, and relies on, specifications
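
For a flavor of what ESC/Java checks, the invented method below contains the kind of possible null dereference the tool is designed to flag; with no precondition on s, the call s.length() is not provably safe, and the annotation shown is the kind that discharges such a warning. The class is an assumption for this sketch.

```java
// Invented sketch of the kind of error ESC/Java targets.
public class Greeter {
    // Without the precondition below, a checker like ESC/Java cannot
    // rule out s == null at the dereference in the method body.
    //@ requires s != null;
    public static int greet(String s) {
        return s.length(); // safe only because callers must pass non-null
    }

    public static void main(String[] args) {
        System.out.println(greet("hello")); // prints 5
    }
}
```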

Jeremy Nimmer, page 13: Integration approach
- Run Daikon over target program
- Insert results into program as annotations
- Run ESC/Java on the annotated program
All steps are automatic.

Jeremy Nimmer, page 14 invariant theArray != null; invariant \typeof(theArray) == \type(Object[]); invariant topOfStack >= -1; invariant topOfStack < theArray.length; invariant theArray[0..topOfStack] != null; invariant theArray[topOfStack+1..] == null; */ public class StackAr { Object[] theArray; int topOfStack;... Stack object invariants invariant theArray != null; invariant \typeof(theArray) == \type(Object[]); invariant topOfStack >= -1; invariant topOfStack < theArray.length; invariant theArray[0..topOfStack] != null; invariant theArray[topOfStack+1..] == null;

Jeremy Nimmer, page 15: Stack push method

/*@ requires x != null;
    requires topOfStack < theArray.length - 1;
    modifies topOfStack, theArray[*];
    ensures topOfStack == \old(topOfStack) + 1;
    ensures x == theArray[topOfStack];
    ensures theArray[0..\old(topOfStack)] == \old(theArray[0..topOfStack]); */
public void push( Object x ) { ... }
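
For concreteness, here is a minimal push body consistent with the specification above; the surrounding class is reduced to the two fields the spec mentions, and the details are assumptions rather than the actual StackAr source.

```java
// Sketch of a push implementation satisfying the slide's specification.
public class StackAr {
    Object[] theArray;
    int topOfStack = -1;

    public StackAr(int capacity) { theArray = new Object[capacity]; }

    //@ requires x != null;
    //@ requires topOfStack < theArray.length - 1;
    public void push(Object x) {
        theArray[++topOfStack] = x; // topOfStack == \old(topOfStack) + 1,
                                    // x == theArray[topOfStack], and
                                    // slots 0..\old(topOfStack) untouched
    }
}
```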

Jeremy Nimmer, page 16: Stack summary
- Reveal properties of the implementation (e.g., garbage collection of popped elements; see the pop sketch below)
- No runtime errors if callers satisfy preconditions
- Implementation meets generated specification
- ESC/Java verified all 25 Daikon invariants
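
The "garbage collection of popped elements" point refers to the detected invariant theArray[topOfStack+1..] == null: it can hold only if pop clears the vacated slot so the element can be reclaimed. Continuing the StackAr sketch above, a pop consistent with that invariant might look as follows; the exception choice is an assumption.

```java
// Sketch of a pop that maintains theArray[topOfStack+1..] == null
// by nulling out the vacated slot (the "garbage collection" property).
public Object pop() {
    if (topOfStack < 0) {
        throw new IllegalStateException("stack is empty"); // assumed behavior
    }
    Object top = theArray[topOfStack];
    theArray[topOfStack--] = null; // clear slot so the element can be collected
    return top;
}
```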

Jeremy Nimmer, page 17: Outline
- Motivation
- Approach: Generate and check specifications
- Evaluation: Accuracy experiment
- Conclusion

Jeremy Nimmer, page 18: Accuracy experiment
- Dynamic generation is potentially unsound
  - How accurate are its results in practice?
- Combining static and dynamic analyses should produce benefits
  - But perhaps their domains are too dissimilar?

Jeremy Nimmer, page 19: Programs studied
- 11 programs from libraries, assignments, texts
- Total 2449 non-comment, non-blank lines of code (NCNB LOC) in 273 methods
- Test suites:
  - Used program's test suite if provided (9 did)
  - If just example calls, spent <30 min. enhancing
  - ~70% statement coverage

Jeremy Nimmer, page 20 invariant theArray != null; invariant topOfStack >= -1; invariant topOfStack < theArray.length; invariant theArray[0..length-1] == null; invariant theArray[0..topOfStack] != null; invariant theArray[topOfStack+1..] == null; Accuracy measurement Standard measures from info ret [Sal68, vR79] Precision (correctness) : 3 / 4 = 75% Recall (completeness) : 3 / 5 = 60% Compare generated specification to a verifiable specification

Jeremy Nimmer, page 21: Experiment results
- Daikon reported 554 invariants
- Precision: 96% of reported invariants verified
- Recall: 91% of necessary invariants were reported

Jeremy Nimmer, page 22: Causes of inaccuracy
- Limits on tool grammars
  - Daikon: may not propose relevant property
  - ESC: may not allow statement of relevant property
- Incompleteness in ESC/Java
  - Always need programmer judgment
- Insufficient test suite (illustrated below)
  - Shows up as overly-strong specification
  - Verification failure highlights problem; helpful in fixing
  - System tests fared better than unit tests
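
A sketch of the "insufficient test suite" case: when every test exercises only small inputs, the trace data is biased, so the detector can report properties that are true of the runs but not of the code. The class and the overly strong properties below are invented for illustration.

```java
// Invented sketch: biased test data yields an overly strong spec.
public class Accumulator {
    private int total = 0;

    public void add(int n) { total += n; }

    // If every test calls add only with values in 0..9, a dynamic
    // detector could plausibly report "n <= 9" as a precondition and
    // "total <= 90" as an invariant. Neither follows from the code,
    // so static checking would fail to verify them, and the failure
    // points at the inputs the test suite is missing.
    public static void main(String[] args) {
        Accumulator a = new Accumulator();
        for (int i = 0; i < 10; i++) {
            a.add(i); // only small values ever reach add
        }
        System.out.println(a.total); // prints 45
    }
}
```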

Jeremy Nimmer, page 23: Experiment conclusions
- Our dynamic analysis is accurate
  - Recovered partial specification
  - Even with limited test suites
  - Enabled verifying lack of runtime exceptions
  - Specification matches the code
- Results should scale
  - Larger programs dominate results
  - Approach is class- and method-centric

Jeremy Nimmer, page 24: Value to programmers
- Generated specifications are accurate
- Are the specifications useful?
  - How much does accuracy matter?
  - How does Daikon compare with other annotation assistants?
- Answers at FSE'02

Jeremy Nimmer, page 25: Outline
- Motivation
- Approach: Generate and check specifications
- Evaluation: Accuracy experiment
- Conclusion

Jeremy Nimmer, page 26: Conclusion
- Specifications via dynamic analysis
  - Accurately produced from limited test suites
  - Automatically verifiable (minor edits)
  - Specification characterizes the code
- Unsound techniques useful in program development

Jeremy Nimmer, page 27: Questions?

Jeremy Nimmer, page 28: Specifications
- Precise, mathematical description of behavior [LG01]
- Standard definition; novel use:
  - Generated after implementation
  - Still useful to produce [PC86]
- Many specifications for a program
  - Depends on task; variety of forms (e.g., runtime performance)

Jeremy Nimmer, page 29: Effect of bugs
- Case 1: Bug is exercised by test suite (sketched below)
  - Falsifies one or more invariants
  - Weaker specification
  - May cause verification to fail
- Case 2: Bug is not exercised by test suite
  - Not reflected in specification
  - Code and specification disagree
  - Verifier points out inconsistency
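
A sketch of Case 1: an off-by-one bug that the test suite does exercise. Some executions return an out-of-range value, so the detector can no longer report the intended postcondition, and the resulting weaker specification (or a verification failure against the intended one) is the signal that something is wrong. The code and properties are invented for this example.

```java
// Invented sketch of Case 1: an exercised bug falsifies an invariant,
// weakening the inferred specification.
public class Clamp {
    // Intended postcondition: 0 <= \result && \result <= max.
    // The comparison below is off by one: inputs equal to max + 1
    // escape unclamped, so traces falsify "\result <= max" and a
    // detector would report only the weaker "\result >= 0".
    public static int clamp(int x, int max) {
        if (x < 0) return 0;
        if (x > max + 1) return max; // bug: should be "x > max"
        return x;                    // x == max + 1 slips through
    }

    public static void main(String[] args) {
        System.out.println(clamp(5, 10));  // 5: in range
        System.out.println(clamp(11, 10)); // 11: exercises the bug
        System.out.println(clamp(20, 10)); // 10: clamped correctly
    }
}
```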