
Fault Tolerance Benchmarking

2 Overview
- What is benchmarking?
- What is dependability?
- What is dependability benchmarking?
- What is the relation between dependability and fault tolerance benchmarking?
- The fault tolerance benchmark of Tsai et al.

3 What is Benchmarking?
Benchmarking is the testing of several computer systems or peripheral devices with the aim of comparing their performance-cost relationship. Benchmarking means measuring something in an automated, reproducible, portable, and truthful way; this can probably be achieved only by relying on direct measurements, not on model analysis.

4 What is a Benchmark?
A benchmark is a specification of all the elements necessary for performing the benchmark tests. Which elements? See later!

5 Benchmarking Context
The definition of the benchmark elements depends on:
- the system under benchmark
- the benchmark scope (internal/external)
- the application area
- the phase of the system life cycle in mind: design, implementation, integration, test, production, maintenance

6 What is Dependability?
Dependability is the trustworthiness of a computer system such that reliance can justifiably be placed on the service it delivers. Dependability is normally described by a set of dependability attributes.

7 Dependability Attributes
- Reliability: measure of continuous correct service delivery (dependability with respect to continuity of service).
- Availability: measure of correct service delivery with respect to the alternation of correct and incorrect service (dependability with respect to readiness for usage).
- Safety: measure of continuous delivery of either correct service or incorrect service after a benign failure (dependability with respect to the non-occurrence of catastrophic failures).
- Security: dependability with respect to the prevention of unauthorized access and/or handling of information.
- Robustness: the degree to which a system or component can function correctly in the presence of invalid inputs or stressful environmental conditions.

8 Why Benchmark Availability?
- System availability is a pressing problem:
  - modern applications demand near-100% availability (e-commerce, enterprise apps, online services, ISPs) at all scales and price points
  - we don't know how to build highly available systems, except at the very high end
- Few tools exist to provide insight into system availability:
  - most existing benchmarks ignore availability and focus on performance, under ideal conditions
  - there are no comprehensive, well-defined metrics for availability

9 Availability Metrics
- Traditionally, the percentage of time the system is up: a time-averaged, binary view of system state (up/down).
- This metric is inflexible:
  - it does not capture degraded states, the non-binary spectrum between "up" and "down"
  - time-averaging discards important temporal behavior; compare two systems with 96.7% traditional availability (made concrete in the sketch below):
    - system A is down for 2 seconds per minute
    - system B is down for 1 day per month
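A minimal sketch of the comparison, using the numbers from the slide (the 30-day month is an assumption for illustration): the time-averaged metric rates both systems identically even though their outage patterns differ radically.

```python
# Time-averaged availability: 1 - downtime / total time.
def availability(downtime_s: float, period_s: float) -> float:
    return 1.0 - downtime_s / period_s

MINUTE = 60
MONTH = 30 * 24 * 3600  # a 30-day month, assumed for illustration

# System A: down 2 seconds per minute.
a = availability(downtime_s=2, period_s=MINUTE)
# System B: down 1 full day per month.
b = availability(downtime_s=24 * 3600, period_s=MONTH)

print(f"A: {a:.1%}, B: {b:.1%}")  # A: 96.7%, B: 96.7% -- identical scores,
# yet A fails briefly and often while B suffers a full-day outage.
```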

10 Dependability Attributes
How do we achieve these attributes?
- start by understanding them
- figure out how to measure them
- evaluate existing systems and techniques
- develop new approaches based on what we have learned, and measure them as well!
Dependability benchmarks make these tasks possible!

11 What is Dependability Benchmarking?
Dependability benchmarking is the execution of a set of tests to quantify computer dependability. This quantification is supported by the evaluation of dependability attributes (e.g., reliability, availability, safety) through the assessment of direct measures related to the behavior of a computer in the presence of faults. Examples of these direct measures are failure modes, error detection coverage, error latency, diagnosis efficiency, recovery time, and recovery losses (a computation over such measures is sketched below).
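As an illustration, a minimal sketch of how some of these direct measures might be computed from the raw results of a fault injection campaign. The record format and the sample data are assumptions, not from the slides.

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class InjectionResult:
    detected: bool          # was the injected fault's error detected?
    latency_s: float        # time from fault activation to detection
    recovered: bool         # did the system recover afterwards?
    recovery_time_s: float  # time taken by the recovery mechanism

def summarize(results: list[InjectionResult]) -> dict:
    detected = [r for r in results if r.detected]
    recovered = [r for r in results if r.recovered]
    return {
        # error detection coverage: detected errors / injected faults
        "error_detection_coverage": len(detected) / len(results),
        "mean_error_latency_s": mean(r.latency_s for r in detected),
        "mean_recovery_time_s": mean(r.recovery_time_s for r in recovered),
    }

# Hypothetical campaign of three injections.
campaign = [
    InjectionResult(True, 0.004, True, 1.2),
    InjectionResult(True, 0.011, False, 0.0),
    InjectionResult(False, 0.0, False, 0.0),  # undetected error
]
print(summarize(campaign))
```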

12 Dependability Benchmarking
The goal of dependability benchmarking is to quantify the dependability features of a computer or a computer component in a truthful and reproducible way. Unlike functionality and performance features, which are normally available to customers and can be certified (functionality) or measured (performance), component and system dependability cannot be easily assessed today. The objective of dependability benchmarking is to change this picture, especially for COTS components and COTS-based systems.

13 What is Expected from a Software Dependability Benchmark?
- Software characterization with respect to:
  - internal faults
  - external faults: other software component(s), hardware
- Aims:
  - properties + quantification of some specific measures
  - avoidance of undesirable behavior → "environment" modification
  - enhancement: correction or wrapping (sketched below)
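The "wrapping" enhancement can be pictured as a thin layer that screens calls into a component whose behavior on bad input is unknown. A minimal sketch; the wrapped routine and its failure mode are hypothetical, not from the slides.

```python
def fragile_parse(s):
    # Stand-in for a COTS routine whose behavior on bad input is unknown.
    return int(s)

def wrapped_parse(s):
    # The wrapper rejects inputs the component is known to mishandle,
    # turning undefined behavior into a defined, reportable error.
    if not isinstance(s, str) or not s.strip().lstrip("+-").isdigit():
        raise ValueError(f"wrapped_parse: rejected input {s!r}")
    return fragile_parse(s)

print(wrapped_parse("42"))    # 42
try:
    wrapped_parse(None)       # rejected by the wrapper, not by the COTS code
except ValueError as e:
    print(e)
```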

14 Availability Benchmark

15 Software Testing
- Aims:
  - activate and identify faults → correction
  - characterize software behavior
- Various kinds of tests for characterization (toy examples below):
  - Functional (statistical/operational) testing: validates functionality under ordinary operating conditions (typical load); supports software reliability evaluation.
  - Load testing: performance under heavy load (peak, worst case).
  - Robustness/stress testing: pushes the software beyond its specified limits (invalid inputs, stressful environmental conditions).
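To make the three categories tangible, a toy sketch exercising one routine in all three modes. The function under test, its specification, and the test values are all hypothetical.

```python
def parse_port(s: str) -> int:
    # Toy function under test: parse a TCP port number (spec: 1..65535).
    n = int(s)
    if not 0 < n < 65536:
        raise ValueError(f"port out of range: {n}")
    return n

# Functional testing: ordinary operating conditions, typical inputs.
assert parse_port("8080") == 8080

# Load testing: heavy but valid use (every port in the specified range).
for i in range(1, 65536):
    parse_port(str(i))

# Robustness/stress testing: invalid inputs beyond the specification.
for bad in ["-1", "70000", "http", ""]:
    try:
        parse_port(bad)
        raise AssertionError(f"accepted invalid input {bad!r}")
    except ValueError:
        pass  # graceful rejection is the robust outcome
```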

16 Software Reliability
- Aim: evaluate software reliability measures (MTTF, failure intensity).
- Means: failure data collection and processing from functional testing (sketched below).
- Advantage: early estimation.
- Limitations: requires a representative operational profile; accuracy depends on test duration.
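A minimal sketch of the "means" above: a naive point estimate of MTTF and failure intensity from inter-failure times collected during functional testing. The data are hypothetical.

```python
# Hours of test execution between successive failures (hypothetical data
# collected during functional testing).
interfailure_h = [12.0, 18.5, 25.0, 31.0, 44.5]

mttf_h = sum(interfailure_h) / len(interfailure_h)  # mean time to failure
failure_intensity = 1.0 / mttf_h                    # failures per hour

print(f"MTTF ~ {mttf_h:.1f} h, failure intensity ~ {failure_intensity:.3f} /h")
# Growing inter-failure times suggest reliability growth; a reliability
# growth model would refine this naive averaged estimate.
```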

17 Software Performance Benchmark
Most of the time, performance benchmark = system benchmark.
Aims of a software performance benchmark: measure system performance (or price/performance) to compare:
- different software products on the same machine (competition between software vendors)
- different releases/versions of the same product on the same machine (performance improvement?)
Two categories of benchmarks (sketched below):
- coarse-grain benchmarks → execution time of the entire application
- fine-grain benchmarks → execution time (rate) of specific operations
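The coarse-grain/fine-grain distinction can be sketched as follows; the two workloads are placeholders, not a real benchmark suite.

```python
import time

def run_application():
    # Placeholder for the entire application workload (coarse grain).
    sum(i * i for i in range(1_000_000))

def specific_operation():
    # Placeholder for one specific operation (fine grain).
    sorted(range(10_000, 0, -1))

def elapsed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def best_time(fn, repeats=5):
    # Report the best of several runs to reduce timing noise.
    return min(elapsed(fn) for _ in range(repeats))

print(f"coarse grain: {best_time(run_application):.4f} s per run")
t = best_time(specific_operation)
print(f"fine grain:   {t:.6f} s per op ({1 / t:.0f} ops/s)")
```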

18 Fault Tolerance & Robustness Benchmarks
- Fault tolerance benchmarks measure:
  - the ability to tolerate faults
  - the effectiveness of error detection and recovery mechanisms
  - the performance degradation due to faults
- Robustness benchmarks measure the ability to tolerate or resist unexpected conditions caused by:
  - hardware failures
  - other user programs (e.g., a system call with illegal parameters; see the probe below)
- Characterize failure modes: a robustness test made repeatable → fault injection
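In the spirit of the "system call with illegal parameters" example, a minimal, repeatable robustness probe. The target call (os.stat) and the set of illegal inputs are illustrative choices, not from the slides.

```python
import os

# Illegal or boundary inputs for a file-status probe.
BAD_PATHS = ["", "\0", "/nonexistent/\0dir", "x" * 10_000, None, 3.14]

def probe(path):
    # Classify the outcome the way a robustness benchmark would:
    # a graceful, typed error is acceptable; anything else escaping this
    # handler would count as a robustness failure of the API under test.
    try:
        os.stat(path)
        return "accepted"
    except (OSError, ValueError, TypeError) as e:
        return f"graceful: {type(e).__name__}"

for p in BAD_PATHS:
    print(repr(p)[:30], "->", probe(p))
```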

19 The Fault Tolerance Benchmark (Tsai et al., 1996)
This fault tolerance benchmark uses a two-phase procedure:
1) Determine whether the system tolerates the faults that it is intended to tolerate, and evaluate the effect these faults have on the fault tolerance mechanisms.
2) Evaluate the reaction of the system to faults that it is not designed to handle (a demonstration of the degree of fault tolerance beyond what is expected).
For phase 1 injections, three types of measures are obtained (see the sketch below):
- Error/fault ratio: the ratio of the number of errors detected to the number of faults injected.
- Performance degradation: two times are measured, the time to execute the benchmark with faults and the time to execute the benchmark without faults.
- Number of catastrophic incidents.
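A minimal sketch of the phase 1 measures. The function, field names, and sample values are assumptions for illustration; only the three measures themselves come from the slide.

```python
def phase1_measures(faults_injected, errors_detected,
                    time_with_faults_s, time_without_faults_s,
                    catastrophic_incidents):
    return {
        # Ratio of errors detected to faults injected.
        "error_fault_ratio": errors_detected / faults_injected,
        # Relative slowdown derived from the two measured times.
        "performance_degradation":
            time_with_faults_s / time_without_faults_s - 1.0,
        "catastrophic_incidents": catastrophic_incidents,
    }

# Hypothetical campaign results.
print(phase1_measures(faults_injected=1000, errors_detected=640,
                      time_with_faults_s=130.0, time_without_faults_s=120.0,
                      catastrophic_incidents=2))
```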

20 Conclusion
Dependability benchmarks rely on the development/validation process and field information, plus additional specific work. What about COTS components? They are well tested, but the necessary information is not always available.