Center for Reliability Engineering
Integrating Software into PRA
B. Li, M. Li, A. Sinha, Y. Wei, C. Smidts
Presented by Bin Li
Center for Reliability Engineering, University of Maryland, College Park
July 20, 2004

Research Objectives
The objective of our research is to extend the current Probabilistic Risk Assessment (PRA) methodology to integrate software into the risk assessment process. Such an extension requires modeling the software, the computer platform on which it resides, and the interactions it has with other systems.

Framework

Software-related failure mode taxonomy

Software-related failure mode taxonomy (cont.)

Validation of the Failure Mode Taxonomy
Validation criteria:
– Completeness
– Consistency
– Repeatability
– Applicability
Validation process

Completeness and Applicability
Failure modes added by JSC

Repeatability and Consistency
Conflicts between the two classification rounds

Repeatability
Repeatability (R) is measured by the repeatability coefficient (Cohen's kappa). Kappa values below 0.45 indicate inadequate repeatability, values above 0.62 indicate good repeatability, and values above 0.78 indicate excellent repeatability. Here, R = 0.46.
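For reference, Cohen's kappa compares the observed inter-rater agreement against the agreement expected by chance. A minimal sketch of the computation (the failure-mode labels and ratings below are hypothetical, not the actual UMD/JSC classification data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: sum over categories of the product of marginal rates
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two analysts classifying ten failures into taxonomy categories (hypothetical)
a = ["input", "output", "support", "input", "output",
     "input", "support", "output", "input", "input"]
b = ["input", "output", "input", "input", "output",
     "input", "support", "support", "input", "output"]
print(round(cohens_kappa(a, b), 2))  # → 0.52
```

A kappa near 0.5, as in this toy example and on the slide, sits in the "adequate but not good" band of the scale quoted above.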

Results of the Validation of the Taxonomy
The UMD and JSC teams reached the following consensus:
– The taxonomy is complete and can be applied to aerospace systems of various natures;
– The taxonomy includes failure modes applicable to autonomous real-time systems and mission-critical systems;
– The taxonomy considers all the failure modes in software;
– There is sufficient data available for the validation and enough flexibility to use alternative data.
Repeatability and consistency are adequate.

Test-Based Approach: Procedure
– Identify events/components controlled by software in the MLD (Master Logic Diagram)
– Identify events/components controlled by software in accident scenarios
– Specify the functions involved
– Model the software component in ESDs/ETs and fault trees
– Quantification

Identify Software-Controlled Events/Components in the MLD

Identify Software-Controlled Events/Components in Accident Scenarios

Specify the Functions Involved
– Identify software behavior from the ESD/ET: identify stimuli and results
– Identify the software component from the requirements specifications: identify inputs and outputs
– Match stimuli to inputs and results to outputs

Modeling the Software Component in ESDs/ETs and FTs

Quantification
Testing is used to obtain the probability that the software leads to an unsafe state. The process is as follows:
– Define the test cases. These cover both normal and abnormal input. The testing strategy includes identifying the normal and abnormal input spaces; test cases are randomly sampled from these spaces.
– Build a finite state machine (FSM) model of the software component to represent its behavior (the oracle). The operational profile derived from the input tree is also embedded in this FSM model.
– Automate the testing using test scripts generated from the FSM model.
– Define and identify the software component's safe and unsafe conditions within the context of each ESD sequence.
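The steps above can be sketched as follows, assuming a toy two-state valve controller as the component under test. The states, stimuli, and 90/10 operational profile are illustrative inventions, not the actual model or tool chain (TestMaster/WinRunner):

```python
import random

# Toy FSM oracle: expected behavior of a valve controller (illustrative).
TRANSITIONS = {
    ("closed", "open_cmd"): "open",
    ("open", "close_cmd"): "closed",
}

def oracle(state, stimulus):
    """Expected next state; unrecognized (abnormal) input leaves the state unchanged."""
    return TRANSITIONS.get((state, stimulus), state)

def software_under_test(state, stimulus):
    # Stand-in for the real component; here it happens to match the oracle.
    return TRANSITIONS.get((state, stimulus), state)

NORMAL = ["open_cmd", "close_cmd"]   # normal input space
ABNORMAL = ["garbage", ""]           # abnormal input space

random.seed(0)
unsafe, trials = 0, 1000
for _ in range(trials):
    state = "closed"
    # Sample from normal vs. abnormal spaces per an assumed 90/10 operational profile
    stimulus = random.choice(NORMAL if random.random() < 0.9 else ABNORMAL)
    # Any divergence from the oracle counts as an unsafe outcome in this sketch
    if software_under_test(state, stimulus) != oracle(state, stimulus):
        unsafe += 1

p_unsafe = unsafe / trials  # estimate of Pr(software leads to an unsafe state)
print(p_unsafe)  # → 0.0, since the stand-in component matches the oracle
```

In the real method the safe/unsafe distinction is defined per ESD sequence rather than by simple oracle disagreement.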

Scalability
The test-based approach can be used for large-scale systems, because large finite state machines have been built and large systems can be tested with WinRunner. Scalability describes the relationship between the effort needed to apply this method to large systems and the effort needed for the smaller systems studied in this investigation. Contributors to the effort are:
– Modeling effort (time to build the finite state machine)
– Test case generation time (time to generate the test cases in TestMaster)
– Test execution time (time to execute the test cases in WinRunner)

Modeling Time
COCOMO II is used to estimate the time to construct the finite state machine model:
PM = A × (Size)^E × 27% × 25%
With A = 2.94, Emin = 0.91, Emax = 1.226:
– Size = 70 FP: PMmax = 1 and PMmin = 0.63
– Size = 700 FP: PMmax = 16.5 and PMmin = 5.4
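A sketch of this calculation, under two assumptions not stated on the slide: Size is first backfired from function points to KSLOC (53 SLOC/FP, a common ratio for C), and the 27% × 25% factors scale the nominal COCOMO II effort down to the share attributed to FSM modeling. With these assumptions the slide's figures are approximately reproduced:

```python
def fsm_modeling_effort(function_points, E, A=2.94, sloc_per_fp=53):
    """Person-months to build the FSM model: nominal COCOMO II effort
    A * Size^E (Size in KSLOC) scaled by 27% * 25%, the modeling share
    used on the slide. sloc_per_fp is an assumed backfiring ratio."""
    ksloc = function_points * sloc_per_fp / 1000.0
    return A * ksloc**E * 0.27 * 0.25

# Reproduce the slide's 70 FP and 700 FP cases (min/max exponents)
for fp in (70, 700):
    pm_min = fsm_modeling_effort(fp, E=0.91)
    pm_max = fsm_modeling_effort(fp, E=1.226)
    print(fp, round(pm_min, 2), round(pm_max, 2))
```

The outputs (about 0.65/0.99 PM at 70 FP and 5.3/16.7 PM at 700 FP) match the slide's 0.63/1 and 5.4/16.5 to within rounding, which supports the assumed conversion.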

Test Generation Time
Test generation time at full coverage is a function of the size of the model; empirical relations between model size and generation time can be fitted to measured data.

Calculation of FSM Model Size
The size of the model is a function of the function points and the operational profile. Procedure for calculating the size:
– Determine the basic size from function point calculations for the system.
– Determine the reliability requirement for the testing process.
– Calculate the number of iterations required for the target reliability.
– Calculate the size of the largest iterating sub-model.
– Calculate the modified size.

Test Execution Time
Test execution time (t_exec) is a linear function of the number of inputs/outputs (n_i/o), the number of check points (m), and the waiting time for responses (T_s). An empirical study provides the coefficients of this linear relation.

Summary of the Scalability Study
The results of the scalability study show that:
– Modeling time can be estimated using COCOMO II;
– Test generation time at full coverage is a function of the size of the model;
– Test execution time is a linear function of the number of inputs/outputs, the number of check points, and the waiting time for responses.

Ongoing and Future Research
– Continue the application to a large-scale system: the application chosen is CM1 from the NASA Metrics Data Program
– Finalize the scalability study
– Continue the support failure modes study
– Continue the output failure modes study
– Conduct the fault propagation study