Improving the Risk Management Capability of the Reliability and Maintainability Program: An Introduction to the Philosophy Behind the AIAA S-102 Performance-Based R&M Program Standards

Presentation transcript:

1 Improving the Risk Management Capability of the Reliability and Maintainability Program
An introduction to the philosophy behind the AIAA S-102 Performance-Based R&M Program Standards
Tyrone Jackson, The Aerospace Corporation
David Oberhettinger, Northrop Grumman
© 2004 Northrop Grumman and The Aerospace Corporation
20 February 2004

2 Topics
- Proposed Hypothesis
- Conventional Risk Management Versus Design Reliability Risk Avoidance
- S-102 Standards Establish FMECA Process as Focal Point of R&M Program
- Distinguishing Characteristics of S-102 FMECA Process Standard
- S-102 FMECA Process Standard Outline
- Case Study Application of S-102 FMECA Process Evaluation Criteria
- Differences Between Project “A” and Project “B” Satellites and FMECA Processes
- Project “A” & “B” Test Anomaly Event and Cost Summary
- Conclusions

3 Proposed Hypothesis
- The cost of the Failure Reporting, Analysis, and Corrective Action System (FRACAS) decreases as the capability of the Failure Mode, Effects and Criticality Analysis (FMECA) process to eliminate latent design concerns early increases
- Stated another way, contractors can significantly reduce the incidence of test anomalies by providing designers with tools that identify latent design concerns
[Chart: test anomalies decreasing as S-102 FMECA process capability level increases]

4 Example Latent Design Concern: Transistor Reverse Current Path

5 Benefit of Integrating Risk and Issue Item Management
- A risk is a potential problem; an issue is an existing problem
- Sometimes the same risk/problem items are worked in parallel by different 3-Letters
- The mission impact of some “likely” risk items is more severe than that of some issue items
- Limited SPO manpower and management reserve can be better used to assure Mission Success by racking and stacking all risk and issue items together
- Integrating risk and issue item management across all 3-Letters will minimize duplication of effort and maximize use of Management Reserve
[Diagram: SPO and contractor program concerns feed risk items and issue items to the SPO Risk Board, which racks and stacks all risk/issue items together]
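The "rack and stack" idea on this slide can be sketched as a single prioritized queue. This is an illustrative assumption only: the briefing does not define a scoring scheme, so the 1-5 scales and the rule that an existing issue scores maximum likelihood are invented here for demonstration.

```python
# Illustrative sketch (not from the briefing): merge risk items (potential
# problems) and issue items (existing problems) into one list ranked by a
# likelihood * severity score. Scales of 1-5 and the "issues are certain,
# so likelihood = 5" rule are assumptions for this example.

def rank_items(risks, issues):
    """Merge risk and issue items and sort by descending mission-impact score."""
    combined = []
    for name, likelihood, severity in risks:
        combined.append((name, likelihood * severity, "risk"))
    for name, severity in issues:
        combined.append((name, 5 * severity, "issue"))  # existing problem: likelihood maxed
    combined.sort(key=lambda item: item[1], reverse=True)
    return combined

risks = [("Thruster valve leak", 2, 5), ("Late FPGA delivery", 4, 2)]
issues = [("Battery cell anomaly", 4)]
for name, score, kind in rank_items(risks, issues):
    print(f"{score:3d}  {kind:5s}  {name}")
```

With one shared ranking, a severe "likely" risk can outrank a minor issue, which is exactly the argument the slide makes for working both lists together.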

6 S-102 Standards Establish FMECA Process as Focal Point of R&M Program
- There are 40 standards in the AIAA S-102 Performance-Based R&M Program document tree
- Most tasks in the S-102 R&M Program schema are impacted by the Product FMECA Process or supply data to it
- The quality and completeness of the Product FMECA can be the difference between a FRACAS that stays within its budget and one that over-runs it
- The S-102 FMECA Process Standard requires that an implementation plan be developed in conjunction with the R&M Program Plan
- The desired FMECA process capability level is to be specified in the contract as defined in the S-102 FMECA Process Standard

7 Distinguishing Characteristics of S-102 FMECA Process Standard
- It calls for use of knowledge-based approaches to identify, analyze, and manage design weaknesses
- It provides consistent criteria for rating the “capability” of an FMECA process
  – Defines a five-level capability rating for each R&M task
  – Capability level ratings can help an organization plan systems engineering process improvement strategies by determining the current capability levels of its R&M practices and the most critical areas for improvement
- It provides consistent criteria for rating the “maturity” of key FMECA data products
- It calls for use of predefined FMECA data parameters to facilitate the interchange of FMECA data products with computer-aided tools and other project databases
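One way to picture the per-task capability ratings is as a roll-up into an overall process level. To be clear, this is a hypothetical sketch: S-102 defines the actual rating criteria, and the "overall level equals the weakest task's level" rule used here is an assumption, not the standard's method.

```python
# Hypothetical sketch of rolling up per-task capability ratings (1-5) into
# an overall process capability level. The weakest-link rule below is an
# assumption for illustration; S-102 defines its own rating criteria.

def overall_capability(task_levels):
    """Return the overall process capability as the lowest task-level rating."""
    if not task_levels:
        raise ValueError("at least one task rating is required")
    if not all(1 <= lvl <= 5 for lvl in task_levels.values()):
        raise ValueError("capability levels must be in the range 1..5")
    return min(task_levels.values())

tasks = {
    "Failure Modes and Effects Analysis": 3,
    "Criticality Analysis": 2,
    "Failure Detection Analysis": 2,
}
print(overall_capability(tasks))  # -> 2
```

A weakest-link roll-up highlights exactly which tasks gate the next capability level, which matches the slide's point about identifying the most critical areas for improvement.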

8 S-102 FMECA Process Standard Outline
1. System Design Data Collection
2. Failure Modes and Effects Analysis
3. Criticality Analysis
4. Failure Detection Analysis
5. Failure Isolation Analysis
6. Detection and Isolation Risk Priority Analyses
7. Reliability, System Safety, and Maintainability Critical Item Analyses
8. Failure Compensation Analysis
9. Product FMECA Database
10. Data Interchange Between Product FMECA Process and Other Activities
11. FMECA Data Product Residual Risk Assessment
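Criticality and risk priority analyses of the kind listed in steps 3 and 6 are often implemented with a risk priority number (RPN). The sketch below uses the generic FMEA convention of severity, occurrence, and detection each rated 1-10; this is offered as background only, not as the specific method S-102 prescribes, and the example failure modes and ratings are invented.

```python
# Generic FMEA-style risk priority number: RPN = severity * occurrence *
# detection, each rated 1-10 (the common industry convention, not taken
# from S-102 itself). Example failure modes and ratings are hypothetical.

def rpn(severity, occurrence, detection):
    """Compute a risk priority number from three 1-10 ratings."""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be in the range 1..10")
    return severity * occurrence * detection

# (name, severity, occurrence, detection) -- illustrative values only
failure_modes = [
    ("Transistor reverse current path", 8, 3, 6),
    ("Connector pin corrosion", 5, 4, 2),
]
ranked = sorted(failure_modes, key=lambda m: rpn(*m[1:]), reverse=True)
for name, s, o, d in ranked:
    print(f"RPN {rpn(s, o, d):4d}  {name}")
```

Ranking by RPN gives the analysis team a defensible order in which to burn down design weaknesses before test, which is the role the FMECA plays as the focal point of the R&M program.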

9 Case Study Application of S-102 FMECA Process Evaluation Criteria
- FMECA processes for two space vehicle acquisition projects were evaluated in 2003 using the S-102 FMECA Process evaluation criteria
- Project “A” implemented an FMECA process that was tailored down from Task 204 in MIL-STD-1543B
- Project “B” implemented an FMECA process that was tailored down from Task 101 in MIL-STD-1629A
- The S-102 FMECA Process Capability Level criteria were used to rate the FMECA processes implemented in Project “A” and Project “B”
- The FRACAS history of Project “A” and Project “B” was evaluated to determine the number of test anomalies caused by design concerns

10 Project “A” and Project “B” FMECA Processes Were Tailored Down from Military Standards
- Root causes were missing in the Project “A” FMECA
- Failure mechanisms were missing in both the Project “A” and Project “B” FMECAs
- Sneak circuit conditions were missing in both the Project “A” and Project “B” FMECAs
- Physical design failure modes were missing in the Project “A” FMECA
- Software design failure modes were missing in both the Project “A” and Project “B” FMECAs
- Cascading and multiple failure modes were missing in both the Project “A” and Project “B” FMECAs
- Human errors were missing in both the Project “A” and Project “B” FMECAs
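The coverage gaps this slide lists can be expressed as a simple set difference against the full category list. The category names below come from the slide; showing Project "B" as covering the two categories not listed as missing for it is an inference from the slide, labeled here as an assumption.

```python
# Completeness check sketch: required failure-mode categories (taken from
# the slide) minus the categories a project's FMECA actually covered.
# Project "B" coverage below is inferred from the slide's "missing" list.

REQUIRED_CATEGORIES = {
    "root causes",
    "failure mechanisms",
    "sneak circuit conditions",
    "physical design failure modes",
    "software design failure modes",
    "cascading and multiple failure modes",
    "human errors",
}

def missing_categories(covered):
    """Return required failure-mode categories absent from a project's FMECA."""
    return sorted(REQUIRED_CATEGORIES - set(covered))

project_b_covered = {"root causes", "physical design failure modes"}
for category in missing_categories(project_b_covered):
    print(category)
```

The same check run with an empty coverage set reproduces Project "A"'s situation, where all seven categories were missing.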

11 Differences Between Project “A” and Project “B” Satellites and FMECA Processes
- Project “A” satellite is a payload only
- Project “B” satellite is a full space vehicle
- Project “A” used 4 engineering man-years to complete its FMECA
- Project “B” used 12 engineering man-years to complete its FMECA
- The Project “A” FMECA process is approximately equivalent to a Capability Level 1 S-102 FMECA Process
- The Project “B” FMECA process is approximately equivalent to a Capability Level 2 S-102 FMECA Process
- The Project “B” satellite is 19 times heavier than the Project “A” satellite

12 Project “A” Test Anomaly Event Summary

13 Project “A” Test Anomaly Cost Summary

14 Project “B” Test Anomaly Event Summary

15 Project “B” Test Anomaly Cost Summary

16 Summary of Case Study Findings
- There were 55 test anomalies caused by design concerns in Project “A” versus 17 test anomalies caused by design concerns in Project “B”
- The ratio of satellite weight to the number of design-concern-initiated test anomalies is 9.6 lbs/anomaly for Project “A” and 588 lbs/anomaly for Project “B”
- The estimated cost impact on Project “A” and Project “B” was $2,915,000 and $901,000, respectively, based on an average cost of $53,000 per anomaly analysis
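The cost figures on this slide follow directly from the anomaly counts and the $53,000 average cost per anomaly analysis. As a quick check of the arithmetic (the satellite weights themselves are not stated in the transcript, so the Project "A" weight below is back-computed from the 9.6 lbs/anomaly ratio and is an assumption):

```python
# Quick check of the case-study arithmetic from the slide above.
COST_PER_ANOMALY = 53_000  # average cost per anomaly analysis, USD (from the slide)

anomalies = {"A": 55, "B": 17}
cost = {project: count * COST_PER_ANOMALY for project, count in anomalies.items()}
print(cost["A"])  # -> 2915000
print(cost["B"])  # -> 901000

# Weight-per-anomaly check: Project "B" is stated to be 19x heavier than
# Project "A". Project "A" weight is back-computed (assumption), since
# 9.6 lbs/anomaly * 55 anomalies = 528 lbs.
weight_a = 528
weight_b = 19 * weight_a
print(round(weight_a / anomalies["A"], 1))  # -> 9.6
print(round(weight_b / anomalies["B"]))     # -> 590 (slide reports ~588)
```

The small difference between 590 and the slide's 588 lbs/anomaly is consistent with rounding in the reported ratios.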

17 Conclusions
- The case study shows that the incidence of test anomalies caused by design concerns may be significantly decreased by implementing a Capability Level 2 S-102 Product FMECA Process
- Validation of the proposed hypothesis would require analyzing production FRACAS data from several more satellite projects
- If the hypothesis proves valid, then application of S-102 would save millions of dollars and thousands of labor hours in a typical satellite development project