Bayesian Graphical Models for Software Testing
David A. Wooff, Michael Goldstein, Frank P.A. Coolen
Presented by Scott Young

Applying Detailed Metrics to Testing
- Can provide insight for performing risk analysis
- Can provide concrete values about software quality to inform the customer or management
- May result in more efficient testing procedures

Meaningful Metrics
- Simple testing metrics can indicate how successful the testing process has been
- They can also provide insight into when the test process is nearing completion
- More complex analysis is needed to locate points of inefficiency in testing and to provide more fine-grained knowledge about test results

What is a Bayesian Graphical Model?
- Also commonly called a Bayesian Belief Network
- A directed acyclic graph whose nodes represent uncertain quantities
- Bayesian models are most commonly encountered today in spam filtering
- They are used to calculate probabilities from prior knowledge and the relationships between components
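The spam-filter example above is the simplest way to see the machinery at work. The Python sketch below applies Bayes' rule to a single piece of evidence; all of the probabilities are invented for illustration and are not taken from the paper.

```python
# Minimal illustration of the idea behind a Bayesian model, using the
# spam-filter example mentioned above. All numbers are made-up priors and
# likelihoods, chosen only to show how Bayes' rule revises a prior belief
# in the light of evidence.

def posterior(prior, p_evidence_given_h, p_evidence_given_not_h):
    """P(H | E) via Bayes' rule for a binary hypothesis H and evidence E."""
    p_evidence = p_evidence_given_h * prior + p_evidence_given_not_h * (1.0 - prior)
    return p_evidence_given_h * prior / p_evidence

p_spam = 0.2                # prior belief that an arbitrary message is spam
p_word_given_spam = 0.6     # P(message contains "winner" | spam)
p_word_given_ham = 0.05     # P(message contains "winner" | not spam)

print(posterior(p_spam, p_word_given_spam, p_word_given_ham))  # 0.75
```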

Software Actions (SAs)
- A Software Action is an individual, fine-grained component of the software project that accomplishes a single task.
- An example of a software action would be the processing of a credit card number.

Specifying Nodes
- A node should be a collection of operations with the same prior probability of failure, as well as the same change in probability of failure given a test covering that set of operations.
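As a concrete reading of this rule, the sketch below groups operations into nodes by their shared prior failure probability and shared post-test failure probability. The operation names and probabilities are hypothetical, not values from the paper.

```python
# Hypothetical grouping of operations into nodes: operations belong to the
# same node when they share the same prior P(failure) and the same revised
# P(failure) after a covering test passes.
from collections import defaultdict

# (operation, prior P(failure), P(failure) after one passing covering test)
operations = [
    ("parse_card_number",  0.10, 0.04),
    ("check_card_expiry",  0.10, 0.04),
    ("charge_card",        0.25, 0.12),
    ("send_receipt_email", 0.05, 0.02),
]

nodes = defaultdict(list)
for name, prior, after_test in operations:
    nodes[(prior, after_test)].append(name)

for (prior, after_test), members in nodes.items():
    print(f"node with prior={prior}, after-test={after_test}: {members}")
```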

Factors For Defining Probabilities
- Level of code complexity
- Reliability comparison with existing code which has already been evaluated
- Maturity of the codebase
- Typical reliability of the author's code
- Similarities to existing code
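In the paper these factors are combined through expert judgement rather than a formula; the sketch below is only a hypothetical way of mapping factor scores to a prior failure probability, with invented weights and ranges.

```python
# Hypothetical mapping from qualitative risk factors to a prior failure
# probability for a node. Factor names, weights, and the base/max rates are
# assumptions for illustration only.

FACTOR_WEIGHTS = {
    "code_complexity": 0.30,               # more complex code -> riskier
    "codebase_immaturity": 0.25,           # newer code -> riskier
    "author_track_record": 0.25,           # authors with buggier history -> riskier
    "dissimilarity_to_proven_code": 0.20,  # novel code -> riskier
}

def prior_failure_probability(scores, base_rate=0.05, max_rate=0.5):
    """Map factor scores in [0, 1] (1 = riskiest) to a prior failure probability."""
    risk = sum(FACTOR_WEIGHTS[name] * score for name, score in scores.items())
    return base_rate + (max_rate - base_rate) * risk

node_scores = {
    "code_complexity": 0.8,
    "codebase_immaturity": 0.6,
    "author_track_record": 0.3,
    "dissimilarity_to_proven_code": 0.4,
}
print(round(prior_failure_probability(node_scores), 3))  # 0.295
```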

Updating The Model
- As testing continues, the model must be updated in stepwise fashion to follow changes to components as they occur.
- The probability for an individual node can be updated according to multiple criteria (which are necessarily assumptions) about remaining defects.
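A minimal sketch of the stepwise update follows: after each passing test that covers a node, the probability that the node still contains a fault is revised by Bayes' rule. The detection probability is an assumed parameter, not a value from the paper.

```python
# Stepwise Bayesian update of a node's fault probability after passing tests.
# p_detect is the assumed chance that a test covering a faulty node actually
# fails; a fault-free node is assumed always to pass.

def update_after_pass(p_fault, p_detect=0.4):
    """Return P(fault | covering test passed)."""
    p_pass = (1.0 - p_detect) * p_fault + 1.0 * (1.0 - p_fault)
    return (1.0 - p_detect) * p_fault / p_pass

p = 0.30  # prior probability that the node contains a fault
for test_number in range(1, 4):
    p = update_after_pass(p)
    print(f"after passing test {test_number}: P(fault) = {p:.3f}")
# Each passing test lowers P(fault): 0.205, 0.134, 0.085
```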

What The Results Provide
- Tests should be arranged according to the software action(s) they cover; tests discovered to be redundant may be safely removed.
- The results express the perceived probability (strength of belief, or confidence) that no faults remain within each SA.
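One of these outputs, flagging redundant tests, can be illustrated with a purely coverage-based check. In the paper redundancy is a probabilistic notion (a test that can no longer change beliefs), so the subset test below, with invented test names and coverage, is a simplification.

```python
# Flag a test as redundant when the software actions it covers are all
# covered by some other single test. Test names and coverage sets are
# hypothetical; the paper's criterion is belief-based, not purely set-based.

tests = {
    "test_checkout":     {"validate_card", "charge_card", "send_receipt"},
    "test_charge_only":  {"charge_card"},
    "test_receipt_only": {"send_receipt"},
    "test_refund":       {"charge_card", "issue_refund"},
}

redundant = [
    name for name, covered in tests.items()
    if any(other != name and covered <= tests[other] for other in tests)
]
print(redundant)  # ['test_charge_only', 'test_receipt_only']
```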

What Does This Mean For V&V?
- Software producers can demand a level of confidence for each component from their testing, according to the role of the software and the potential financial impact of defects in specific components.
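A small sketch of how such a demand might be expressed in practice: each component gets its own required post-test confidence, higher where the financial impact of a defect is larger. The components and thresholds below are hypothetical.

```python
# Hypothetical per-component confidence thresholds for declaring testing done.
required_confidence = {
    "payment_processing": 0.999,  # high financial impact of a defect
    "report_formatting":  0.95,   # cosmetic impact only
}

def testing_complete(component, p_no_remaining_fault):
    """True if the model's confidence that no faults remain meets the bar."""
    return p_no_remaining_fault >= required_confidence[component]

print(testing_complete("payment_processing", 0.990))  # False: keep testing
print(testing_complete("report_formatting", 0.990))   # True
```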

Drawbacks
- Informed knowledge is required in order to build a reliable model
- This "informed knowledge" still consists of assumptions about relationships (though an assumption within an order of magnitude can still provide useful results)
- The amount of additional work required to formally track every SA may be prohibitive

Resources
- David A. Wooff, Michael Goldstein, Frank P.A. Coolen, "Bayesian Graphical Models for Software Testing". IEEE Transactions on Software Engineering, May 2002.
- Murray Cumming, "Bayesian Belief Networks". Date unknown.
- Kevin Murphy, "A brief introduction to Bayes' Rule". e.html, Jan 2004.