CSCI 5801: Software Engineering


Software Reliability

Software Reliability
What you know after testing: the software passes all cases in the test suite. What the customer wants to know:
- Is the code well written in general?
- How often will it fail?
- What has to happen for it to fail?
- What happens when it fails?

Larger Context of Reliability
- Fault detection (testing and validation): detect faults before the system is put into operation
- Fault avoidance: build systems with the objective of creating fault-free software
- Fault tolerance: build systems that continue to operate when faults occur

Code Reviews
Examining code without running it:
- Removes the dependency on test cases
- Methodology: look for typical flaws
- Best done by others, who bring a different point of view
Common forms:
- Code walkthroughs done by other programmers
- Pair programming in XP
- Static analysis tools
Goal: detect flaws before they become faults (fault avoidance)

Code Walkthroughs
Going through code by hand, statement by statement:
- Estimated to find 60% to 90% of code errors
- 90-125 statements/hour covered on average
Team of roughly 4 members, with specific roles:
- Moderator: runs the session and ensures it proceeds smoothly
- Code author
- Inspectors (at least 2)
- Scribe: writes down results and suggestions

Code Walkthroughs
Preparation:
- Developer provides colleagues with the code listing and documentation
- Participants study the documentation in advance
Meeting:
- Developer leads reviewers through the code, describing what each section does and encouraging questions
- Inspectors look for possible flaws and suggest improvements

Code Walkthroughs
Example checklist (a code fragment seeded with several of these faults follows below):
- Data faults: initialization, constants, array bounds, character strings
- Control faults: conditions, loop termination, compound statements, case statements
- Input/output faults: all inputs used; all outputs assigned a value
- Interface faults: parameter numbers, types, and order; structures and shared memory
- Storage management faults: modification of links, allocation and de-allocation of memory
- Exceptions: possible errors, error handlers
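To make the checklist concrete, here is a small hypothetical Java fragment (invented for illustration, not from the course project) seeded with the kinds of faults an inspector should flag:

```java
// Hypothetical fragment seeded with walkthrough-checklist faults.
import java.io.BufferedReader;
import java.io.IOException;

public class GradeBuffer {
    private final int[] scores = new int[10];

    public int sum(int count) {
        int total = 0;
        for (int i = 0; i <= count; i++) {   // Control fault: off-by-one loop bound
            total += scores[i];              // Data fault: possible array-bounds overrun
        }
        return total;
    }

    public void load(BufferedReader in) throws IOException {
        String line = in.readLine();         // Input fault: result may be null, never checked
        scores[0] = Integer.parseInt(line);  // Exception fault: NumberFormatException unhandled
    }
}
```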

Static Analysis Tools
Scan source code for possible faults and anomalies (e.g., Lint for C programs, PMD for Java). Examples:
- Control flow: loops with multiple exit or entry points
- Data use: undeclared or uninitialized variables, unused variables, multiple assignments, array bounds
- Interface faults: parameter mismatches, non-use of function results, uncalled procedures
- Storage management: unassigned pointers, pointer arithmetic
Good programming practice eliminates all warnings from source code.

PMD Example (screenshot not reproduced in transcript)
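In place of the screenshot, a hedged sketch of the kind of code PMD flags; the rule names in the comments come from PMD's standard Java rule sets:

```java
// Hypothetical class containing anomalies a PMD-style analyzer reports.
public class RosterReport {
    public String format(String name, int credits) {
        int unused = credits * 2;           // UnusedLocalVariable
        try {
            return name.trim();
        } catch (NullPointerException e) {  // AvoidCatchingNPE, EmptyCatchBlock
        }
        return name;
    }

    private void neverCalled() { }          // UnusedPrivateMethod
}
```

With a recent PMD release, the check is invoked roughly as `pmd check -d src -R rulesets/java/quickstart.xml` (exact flags vary by version).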

Static Analysis Tools
- Cross-reference table: shows every use of a variable, procedure, object, etc.
- Information flow analysis: identifies the input variables on which an output depends
- Path analysis: identifies all possible paths through the program

Software Reliability
Definition: the probability that the system will not fail during a certain period of time in a certain environment (measured in failures/CPU hour, etc.)
Questions:
- How much more testing is needed to reach the required reliability?
- What is the expected reliability gain from further testing?
(a worked example of the basic metrics follows below)
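A minimal worked example of the basic metrics, with invented observation numbers:

```java
// Estimating rate of occurrence of failures (ROCOF) and mean time to
// failure (MTTF) from observed test data. All numbers are invented.
public class ReliabilityEstimate {
    public static void main(String[] args) {
        double cpuHours = 2000.0;  // total observed execution time
        int failures = 4;          // failures observed in that period

        double rocof = failures / cpuHours;  // failures per CPU hour
        double mttf = cpuHours / failures;   // average CPU hours between failures

        System.out.printf("ROCOF = %.4f failures/CPU hour%n", rocof);  // 0.0020
        System.out.printf("MTTF  = %.0f CPU hours%n", mttf);           // 500
    }
}
```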

Statistical Testing
Testing software for reliability rather than fault detection:
- Measuring the number of errors per transaction allows the reliability of the software to be predicted
Key problem: software will never be 100% reliable!
- An acceptable level of reliability should be specified in the requirements specification document (RSD), and the software tested and modified until that level of reliability is reached

Reliability Prediction
Reliability growth model: a mathematical model of how system reliability is predicted to change over time as faults are found and removed
- Extrapolated from current data about failures (mean time to failure, average failures per transaction)
- Can be used to determine whether the system meets its reliability requirements
- Can be used to predict when testing will be complete and what level of reliability is feasible

Operational Profile
Problem: statistical testing requires a large number of test cases for statistical significance (thousands). Where do such test cases come from?
- Often too many to create by hand
- Random generation is not sufficient

Operational Profile
Operational profile: a set of test data whose frequency matches the actual frequency of those inputs in 'normal' usage of the system
- A close match with actual usage is necessary, or the measured reliability will not reflect the system's actual use
- Can be generated from real data collected from an existing system, or (more often) depends on assumptions made about the pattern of usage of a system
(a sketch of profile-driven input generation follows below)
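A minimal sketch of drawing test inputs so their frequencies match a profile; the input classes and weights below are invented for illustration:

```java
import java.util.List;
import java.util.Random;

// Generate test inputs whose frequencies match an assumed operational profile.
public class OperationalProfileGenerator {
    record InputClass(String name, double probability) { }

    static final List<InputClass> PROFILE = List.of(
            new InputClass("view schedule", 0.60),
            new InputClass("add course",    0.25),
            new InputClass("drop course",   0.10),
            new InputClass("print roster",  0.05));

    static String nextInput(Random rng) {
        double r = rng.nextDouble();      // uniform in [0, 1)
        double cumulative = 0.0;
        for (InputClass c : PROFILE) {    // walk the cumulative distribution
            cumulative += c.probability();
            if (r < cumulative) return c.name();
        }
        return PROFILE.get(PROFILE.size() - 1).name();  // guard against rounding
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        for (int i = 0; i < 10; i++) {
            System.out.println(nextInput(rng));  // "view schedule" dominates
        }
    }
}
```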

Example Operational Profile (figure not reproduced). Note that some types of inputs are much more likely than others.

LPM Estimates
Logarithmic Poisson execution time model (LPM):
- Major bugs are found quickly
- Those major bugs cause most failures
- The effectiveness of fault correction decreases over time
- There is a point at which further testing has little gain
(a worked sketch of the model follows below)
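A worked sketch using the Musa-Okumoto logarithmic Poisson formulation, in which expected failures by execution time t are mu(t) = (1/theta) ln(lambda0 * theta * t + 1) and failure intensity is lambda(t) = lambda0 / (lambda0 * theta * t + 1); the parameter values below are invented:

```java
// Logarithmic Poisson (Musa-Okumoto) reliability growth, illustrating
// diminishing returns from continued testing. Parameter values invented.
public class LpmEstimate {
    public static void main(String[] args) {
        double lambda0 = 10.0;  // initial failure intensity (failures/CPU hour)
        double theta = 0.05;    // decay parameter: how fast intensity drops
        double target = 0.1;    // required failure intensity

        for (double t : new double[] {0, 10, 100, 1000}) {
            double intensity = lambda0 / (lambda0 * theta * t + 1);
            double failuresSoFar = Math.log(lambda0 * theta * t + 1) / theta;
            System.out.printf("t=%6.0f h: intensity=%6.3f, expected failures=%5.1f%n",
                    t, intensity, failuresSoFar);
        }

        // Solving lambda(t) = target for t shows the cost of the last gains:
        double tNeeded = (lambda0 / target - 1) / (lambda0 * theta);
        System.out.printf("CPU hours of testing to reach %.2f failures/hour: %.0f%n",
                target, tNeeded);  // 198 hours for these parameters
    }
}
```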

Reliability Prediction (figure not reproduced)

Reliability Measurement Problems
- Operational profile uncertainty: the operational profile may not be an accurate reflection of the real use of the system
- High costs of test data generation: costs can be very high if the test data for the system cannot be generated automatically
- Statistical uncertainty: you need a statistically significant number of failures to compute the reliability, but highly reliable systems rarely fail

Stress Testing
Goal of stress testing: determine what it will take to "break" the system, where "break" means it no longer meets its requirements in some way:
- Functional: fails to perform required functions
- Reliability: fails more often than specified
- Performance: runs slower than required
Approaches:
- Increase load/decrease resources until the system breaks
- Perform "attacks" designed to produce an undesirable result

Stress Testing
Increase the load on the system in different ways:
- Number of students simultaneously adding courses
- Size of files/databases that must be read
- …
Decrease the resources available to the system (may require fault-injection software):
- Increase the number of other processes running on the system
- Increase the lag time of networked resources
Goal: the point at which the system fails should be much greater than the scenarios listed in the RSD (a load-stepping sketch follows below)
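A minimal load-stepping sketch: double the number of concurrent requests until an assumed response-time requirement breaks; handleRequest() is a hypothetical stand-in for the system under test:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Step up the load until the response-time requirement is no longer met.
public class StressHarness {
    static final long MAX_MILLIS = 200;  // assumed response-time requirement

    static void handleRequest() {        // placeholder for a real system call
        try { Thread.sleep(5); } catch (InterruptedException ignored) { }
    }

    public static void main(String[] args) throws InterruptedException {
        for (int load = 1; load <= 4096; load *= 2) {
            ExecutorService pool = Executors.newFixedThreadPool(16);  // fixed capacity
            long start = System.nanoTime();
            for (int i = 0; i < load; i++) {
                pool.submit(StressHarness::handleRequest);
            }
            pool.shutdown();
            pool.awaitTermination(1, TimeUnit.MINUTES);
            long elapsedMillis = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("load=%5d -> %d ms%n", load, elapsedMillis);
            if (elapsedMillis > MAX_MILLIS) {
                System.out.println("Breaking point: requirement no longer met");
                break;
            }
        }
    }
}
```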

Stress Testing
"Attack" testing is common in security:
- Goal of normal testing: for the input of a specific test case, the system produces the desired response for that test case
- Goal of secure programming: for any input, the system does not produce an undesirable result

Stress Testing
Based on the risk analysis from the design stage:
- Can the roster database be deleted?
- Can an intruder read files (in violation of FERPA)?
- Can a student add a course but not be added to the roster?

Fault Tolerance
Goals:
- The system continues to operate when problems occur
- The system avoids critical failures (data loss, etc.)
Problems can occur from many sources:
- Anticipated at the design stage
- Unanticipated (hardware faults, etc.)
Cannot prevent all failures!

Fault Tolerance
Usually based on the idea of "backward recovery":
- Record the system state at specific events (checkpoints); after a failure, recreate the state at the last checkpoint
- Combine checkpoints with a system log (audit trail of transactions) that allows transactions since the last checkpoint to be replayed automatically
Note that the backward recovery software must also be thoroughly tested! (a sketch follows below)
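A minimal in-memory sketch of checkpoint-plus-log recovery; all names are invented, and a real implementation would persist both the checkpoint and the log to stable storage:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Backward recovery: restore the last checkpoint, then replay the
// transaction log to rebuild state after a failure.
public class BackwardRecovery {
    private Map<String, Integer> state = new HashMap<>();       // live state
    private Map<String, Integer> checkpoint = new HashMap<>();  // last snapshot
    private final List<String[]> log = new ArrayList<>();       // since checkpoint

    void apply(String key, int value) {       // one transaction
        state.put(key, value);
        log.add(new String[] {key, String.valueOf(value)});  // append to audit trail
    }

    void takeCheckpoint() {                   // at specific events
        checkpoint = new HashMap<>(state);
        log.clear();                          // log restarts from the snapshot
    }

    void recover() {                          // after a failure
        state = new HashMap<>(checkpoint);    // recreate checkpointed state
        for (String[] record : log) {         // replay logged transactions
            state.put(record[0], Integer.parseInt(record[1]));
        }
    }

    public static void main(String[] args) {
        BackwardRecovery sys = new BackwardRecovery();
        sys.apply("CSCI5801.enrolled", 30);
        sys.takeCheckpoint();
        sys.apply("CSCI5801.enrolled", 31);   // transaction after the checkpoint
        sys.state.clear();                    // simulate losing live state
        sys.recover();
        System.out.println(sys.state);        // {CSCI5801.enrolled=31}
    }
}
```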