
1 Saturation Effect and the Need for a New Theory of Software Reliability
Aditya P. Mathur
Professor, Department of Computer Science; Associate Dean, Graduate Education and International Programs, Purdue University
Department of Computer Science, North Dakota State University, Fargo, ND
Thursday, April 19, 2007

2 Dependability
Availability: readiness for correct service.
Reliability: continuity of correct service. (Focus of this talk.)
Safety: absence of catastrophic consequences on the user(s) and the environment.
Security: the concurrent existence of (a) availability for authorized users only, (b) confidentiality, and (c) integrity.
Source: Wikipedia.
The presence of software errors can negatively affect every aspect of dependability.

3 Reliability
Probability of failure-free operation in a given environment over a given time.
Mean Time To Failure (MTTF)
Mean Time To Disruption (MTTD)
Mean Time To Restore (MTTR)
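The definitions above connect under the common constant-failure-rate assumption, where reliability over an interval follows directly from the MTTF. A minimal sketch (the exponential model is an assumption for illustration, not something the slide commits to):

```python
import math

def reliability(t, mttf):
    """Probability of failure-free operation over a mission of length t,
    assuming a constant failure rate of 1/MTTF (exponential model)."""
    return math.exp(-t / mttf)

# With an MTTF of 1000 hours, the probability of surviving
# a 100-hour mission is exp(-0.1), about 0.905.
mission_reliability = reliability(100, 1000)
```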

4 Operational profile
Probability distribution of the usage of features and/or scenarios.
Captures the usage pattern with respect to a class of customers.
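In practice an operational profile drives test selection by weighting features according to their usage probabilities. A sketch, where the feature names and probabilities are invented for illustration (not from the talk):

```python
import random

# Hypothetical operational profile: usage probability per feature.
profile = {"pace": 0.70, "sense": 0.25, "telemetry": 0.05}

def sample_feature(rng=random):
    """Pick the feature the next test should exercise,
    weighted by its usage probability."""
    features, weights = zip(*profile.items())
    return rng.choices(features, weights=weights, k=1)[0]
```

Generating many tests this way reproduces the customer usage pattern on average.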

5 Reliability estimation: early work
Operational profile → random or semi-random test generation → test execution → failure data collection → reliability estimation → decision process.
[Shooman '72, Littlewood '73, Musa '75, Thayer '76, Goel et al. '78, Yamada et al. '83, Laprie '84, Malaiya et al. '92, Miller et al. '92, Singpurwalla '95]

6 Reliability estimation: correlation, coverage, architecture
Cheung '80: Markovian model
Ohba '92, Piwowarski et al. '93: coverage based
Chen et al. '92: coverage based
Garg '95, Del Frate et al. '95: coverage/reliability model and correlation
Littlewood '79: architecture based
Malaiya et al. '94: coverage based
Xiaoguang et al. '03: architecture based
Krishnamurthy et al. '97: architecture based
Gokhale et al. '98: architecture based
Chen et al. '94, Musa '94: reliability/testing sensitivity

7 Need for ultrahigh reliability
Medical devices
Automobile engine controllers
Aircraft controllers
Track/train control systems
Such systems must have no known escaped defects that might create unsafe situations and/or lead to ineffective performance.

8 A reliability estimation scenario (slightly unrealistic)
An integrated version of the software P for a cardiac pacemaker is available for system test.
An operational profile from an earlier version of the pacemaker is available; P has never been used in any implanted pacemaker.
Tests are generated using the operational profile and P is tested. Three distinct failures are found and analyzed.
Management asks the development team to debug P and remove the causes of all failures.
The updated P is retested using the same operational profile. No failures are observed.
What is the reliability of the updated P?

9 Issues: operational profile
Variable: it becomes known only after customers have access to the product. It is a stochastic process... a moving target!
Random test generation requires an oracle, and hence is generally limited to specific outcomes, e.g. crash or hang. In some cases, however, random variation of input scenarios is useful, and is done for embedded systems.
Human heart: variability across humans and over time.

10 Issues: failure data
Should we analyze the failures?
If yes, then after the cause is removed the reliability estimate is invalid.
If the cause is not removed, because the failure is a "minor incident," then the reliability estimate corresponds to irrelevant incidents.

11 Issues: model selection
Rarely does a model fit the failure data, so model selection becomes a problem.
~200 models to choose from? New ones keep arriving!
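To make the model-fitting step concrete, here is one of those ~200 models, Goel-Okumoto, fitted by a deliberately crude grid search (real tools use maximum-likelihood estimation; the parameter ranges here are arbitrary assumptions):

```python
import math

def go_mean(t, a, b):
    """Goel-Okumoto expected cumulative failures by time t:
    m(t) = a * (1 - exp(-b * t))."""
    return a * (1.0 - math.exp(-b * t))

def fit_go(times, counts):
    """Crude grid-search fit over (a, b), minimizing squared error.
    The ranges a in 1..200 and b in 0.001..0.1 are illustrative."""
    best = None
    for a in range(1, 201):
        for b1000 in range(1, 101):
            b = b1000 / 1000.0
            sse = sum((go_mean(t, a, b) - c) ** 2
                      for t, c in zip(times, counts))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]
```

Even when a fit converges, nothing guarantees the chosen model family reflects the actual failure process, which is the slide's point.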

12 Issues: Markovian models
Markov models suffer from a lack of estimates of the transition probabilities. To compute these probabilities, you need to execute the application. During execution you obtain failure data. Then why proceed further with the model?
[Diagram: Markov chain over components C1, C2, C3; the outgoing transition probabilities of each state sum to 1.]
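A sketch of the kind of user-oriented Markov model the slide critiques (in the style of Cheung '80). Every probability below is invented for illustration, which is exactly the slide's complaint: you only learn these numbers by executing the application.

```python
import random

# Illustrative control-flow model: from each component, control either
# transfers to another component or the run terminates at EXIT.
transitions = {
    "C1": [("C2", 0.6), ("C3", 0.4)],
    "C2": [("C3", 0.7), ("EXIT", 0.3)],
    "C3": [("EXIT", 1.0)],
}
component_reliability = {"C1": 0.99, "C2": 0.98, "C3": 0.995}

def run_once(rng):
    """One walk through the model; True if no component fails."""
    state = "C1"
    while state != "EXIT":
        if rng.random() > component_reliability[state]:
            return False  # component failed during this visit
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state]:
            acc += p
            if r <= acc:
                state = nxt
                break
    return True

def estimate_reliability(n=100_000, seed=0):
    """Monte Carlo estimate of the system reliability."""
    rng = random.Random(seed)
    return sum(run_once(rng) for _ in range(n)) / n
```

For these made-up numbers the estimate lands near 0.97, but obtaining the transition matrix already required the executions that produce failure data directly.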

13 Issues: assumptions
Software does not degrade over time; e.g. a memory leak is not degradation and is not a random process; a new version is a different piece of software.
The reliability estimate varies with the operational profile, so different customers see different reliability. Can we have a reliability estimate independent of the operational profile?
Can we not advertise quality based on metrics that are a true representation of reliability, not with respect to a subset of features but over the entire set of features?

14 Sensitivity of reliability to test adequacy
[Quadrant chart: estimated reliability against coverage (low/high on each axis), with regions labeled desirable, suspect model, undesirable, and risky.]
This is the problem with existing approaches to reliability estimation.

15 Basis for an alternate approach
Why not develop a theory based on coverage of testable items and test adequacy?
Testable items: variables, statements, conditions, loops, data flows, methods, classes, etc.
Pros: errors hide in testable items.
Cons: coverage of testable items is inadequate. Is it a good predictor of reliability? Yes, but only when used carefully.
Let us see what happens when coverage is not used, or not used carefully.
Are we interested in reliability or in confidence?

16 Saturation effect
Functional, decision, dataflow, and mutation testing provide test adequacy criteria.
[Graph: reliability versus testing effort. For each criterion, the estimated reliability R' saturates below the true reliability R once the criterion's adequacy is reached; the saturation levels for functional, decision, dataflow, and mutation testing (R'f, R'd, R'df, R'm) rise in that order, over successive testing-effort intervals (tfs, tfe), (tds, tde), (tdfs, tdfe), (tms, tme).]

17 An experiment
Tests generated randomly exercise less code than those generated using a mix of black-box and white-box techniques.
Application: TeX. Creator: Donald Knuth. [Leath '92]

18 Modeling an application
[Diagram: a collection of components running on an OS, with component interactions linking them.]

19 Reliability of a component
R(f) = α · (covered/total), 0 < α < 1.
Reliability (probability of correct operation) of function f, based on a given finite set of testable items.
Issue: how to compute α?
Approach: empirical studies provide an estimate of α and its variance for different sets of testable items.
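A direct reading of the slide's formula. The symbol name alpha and the default 0.95 are placeholders; the slide says the constant would come from empirical studies.

```python
def component_reliability(covered, total, alpha=0.95):
    """R(f) = alpha * (covered / total), 0 < alpha < 1.
    alpha would be estimated empirically per class of testable
    items; 0.95 here is a made-up placeholder value."""
    if not 0 < alpha < 1:
        raise ValueError("alpha must lie in (0, 1)")
    return alpha * covered / total

# 180 of 200 branches covered: R(f) = 0.95 * 0.9 = 0.855
```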

20 Reliability of a subsystem
R(C) = g(R(f1), R(f2), ..., R(fn), R(I))
C = {f1, f2, ..., fn} is a collection of components that collaborate with each other to provide services.
Issue 1: how to compute R(I), the reliability of component interactions?
Issue 2: what is g?
Issue 3: the theory of systems reliability creates problems when (a) components are in a loop and (b) components are dependent on each other.
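The slide leaves g unspecified; the simplest candidate treats the components and their interactions as a series system, so reliabilities multiply. This is only a naive sketch: it ignores precisely the loop and dependence problems that Issue 3 raises.

```python
def subsystem_reliability(component_rs, interaction_r):
    """Naive series-system choice of g: the subsystem works only if
    every component and every interaction works, independently."""
    r = interaction_r
    for ri in component_rs:
        r *= ri
    return r

# R(C) for two components with R = 0.99 each and R(I) = 0.999:
# 0.99 * 0.99 * 0.999, roughly 0.979
```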

21 Scalability
Is the component-based approach scalable?
Powerful coverage measures lead to better reliability estimates, yet measuring coverage becomes increasingly difficult as more powerful criteria are used.
Solution: use a component-based, incremental approach. Estimate reliability bottom-up. There is no need to measure coverage of components whose reliability is known.

22 Next steps
Develop a component-based theory of reliability. [Littlewood '79, Kubat '89, Krishnamurthy et al. '95, Hamlet et al. '01, Goseva-Popstojanova et al. '01, May '02]
Experiment with large systems to investigate the theory's applicability and effectiveness in predicting and estimating various reliability (confidence) metrics.
Base the new theory on existing work in software testing and reliability.

23 The future
Boxed and embedded software with independently variable levels of confidence.
[Mock-up: consumer products labeled with per-level confidence figures, e.g. an Apple machine (Level 0: 1.0, Level 1: (value missing), Level 2: 0.98) and a Mackie unit (confidence 0.99; Level 0: 1.0, Level 1: (value missing)).]