1 Software Engineering II Software Reliability

2 Dependable and Reliable Systems: The Royal Majesty From the report of the National Transportation Safety Board: "On June 10, 1995, the Panamanian passenger ship Royal Majesty grounded on Rose and Crown Shoal about 10 miles east of Nantucket Island, Massachusetts, and about 17 miles from where the watch officers thought the vessel was. The vessel, with 1,509 persons on board, was en route from St. George’s, Bermuda, to Boston, Massachusetts." "The Raytheon GPS unit installed on the Royal Majesty had been designed as a standalone navigation device in the mid- to late 1980s,...The Royal Majesty’s GPS was configured by Majesty Cruise Line to automatically default to the Dead Reckoning mode when satellite data were not available."

3 The Royal Majesty: Analysis The ship was steered by an autopilot that relied on position information from the Global Positioning System (GPS). If the GPS could not obtain a position from satellites, it provided an estimated position based on Dead Reckoning (distance and direction traveled from a known point). The GPS failed one hour after leaving Bermuda. The crew failed to see the warning message on the display (or to check the instruments). 34 hours and 600 miles later, the Dead Reckoning error was 17 miles.

4 The Royal Majesty: Software Lessons All the software worked as specified (no bugs), but... Since the GPS software had been specified, the requirements had changed (from a stand-alone system to part of an integrated system). The manufacturers of the autopilot and GPS adopted different design philosophies about the communication of mode changes. The autopilot was not programmed to recognize valid/invalid status bits in messages from the GPS (NMEA 0183). The warnings provided by the user interface were not sufficiently conspicuous to alert the crew. The officers had not been properly trained on this equipment.

5 Reliability Reliability: the probability of a failure occurring in operational use. Perceived reliability depends upon: user behavior, the set of inputs, the pain of failure.
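(A hedged aside, not from the slides: under the common constant-failure-rate model, reliability over an operating period t is R(t) = e^(-λt), where λ is the failure rate, and MTBF = 1/λ. For example, with λ = 0.001 failures per hour, R(1000 hours) = e^(-1) ≈ 0.37 — a little over a one-in-three chance of running 1,000 hours without failure.)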

6 User Perception of Reliability 1. A personal computer that crashes frequently v. a machine that is out of service for two days. 2. A database system that crashes frequently but comes back quickly with no loss of data v. a system that fails once in three years but data has to be restored from backup. 3. A system that does not fail but has unpredictable periods when it runs very slowly.

7 Reliability Metrics Traditional Measures Mean time between failures Availability (up time) Mean time to repair Market Measures Complaints Customer retention User Perception is Influenced by Distribution of failures Hypothetical example: Cars are less safe than airplanes in accidents per hour, but safer in accidents per mile.
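As a small illustration of how the traditional measures combine (a sketch with invented figures, not data from the lecture), steady-state availability can be computed from mean time between failures (MTBF) and mean time to repair (MTTR):

    #include <stdio.h>

    /* Steady-state availability from MTBF and MTTR:
       availability = MTBF / (MTBF + MTTR).
       The figures below are hypothetical. */
    int main(void) {
        double mtbf_hours = 720.0;   /* hypothetical: one failure a month */
        double mttr_hours = 2.0;     /* hypothetical: two hours to repair */
        double availability = mtbf_hours / (mtbf_hours + mttr_hours);
        printf("Availability: %.4f (%.2f%% up time)\n",
               availability, availability * 100.0);
        return 0;
    }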

8 Reliability Metrics for Distributed Systems Traditional metrics are hard to apply in multi-component systems: In a big network, at any given moment something will be giving trouble, but very few users will see it. A system that has excellent average reliability may give terrible service to certain users. There are so many components that system administrators rely on automatic reporting systems to identify problem areas.

9 Requirements Specification of System Reliability Example: ATM card reader

Failure class              Example                                         Metric
Permanent, non-corrupting  System fails to operate with any card; reboot   1 per 1,000 days
Transient, non-corrupting  System cannot read an undamaged card            1 in 1,000 transactions
Corrupting                 A pattern of transactions corrupts database     Never

10 Cost of Improved Reliability [Graph: cost ($) rises steeply as up time improves from 99% toward 100%] Will you spend your money on new functionality or improved reliability?

11 Example: Central Computing System A central computer serves the entire organization. Any failure is serious. Step 1: Gather data on every failure 10 years of data in a simple database Every failure analyzed: hardware software (the default category) environment (e.g., power, air conditioning) human (e.g., operator error)

12 Example: Central Computing System Step 2: Analyze the data Weekly, monthly, and annual statistics Number of failures and interruptions Mean time to repair Graphs of trends by component, e.g., Failure rates of disk drives Hardware failures after power failures Crashes caused by software bugs in each module

13 Example: Central Computing System Step 3: Invest resources where benefit will be maximum, e.g., Orderly shut down after power failure Priority order for software improvements Changed procedures for operators Replacement hardware

14 Building Dependable Systems: Three Principles For a software system to be dependable: Each stage of development must be done well. Changes should be incorporated into the structure as carefully as the original system development. Testing and correction do not ensure quality, but dependable systems are not possible without systematic testing.

15 Reliability: Modified Waterfall Model [Diagram: feasibility study → requirements → system design → program design → coding → testing → acceptance → operation & maintenance, with changes feeding back into the earlier stages]

16 Key Factors for Reliable Software Organization culture that expects quality Approach to software design and implementation that hides complexity (e.g., structured design, object-oriented programming) Precise, unambiguous specification Use of software tools that restrict or detect errors (e.g., strongly typed languages, source control systems, debuggers) Programming style that emphasizes simplicity, readability, and avoidance of dangerous constructs Incremental validation

17 Building Dependable Systems: Organizational Culture Good organizations create good systems: Acceptance of the group's style of work (e.g., meetings, preparation, support for juniors) Visibility Completion of a task before moving to the next (e.g., documentation, comments in code)

18 Building Dependable Systems: Complexity The human mind can encompass only limited complexity: Comprehensibility Simplicity Partitioning of complexity A simple system or subsystem is easier to get right than a complex one.

19 Building Dependable Systems: Specifications for the Client Specifications are of no value if they do not meet the client's needs. The client must understand and review the requirements specification in detail. Appropriate members of the client's staff must review relevant areas of the design (e.g., operations, training materials, system administration). The acceptance tests must belong to the client.

20 Building Dependable Systems: Quality Management Processes Assumption: Good processes lead to good software The importance of routine: Standard terminology (requirements, specification, design, etc.) Software standards (naming conventions, etc.) Internal and external documentation Reporting procedures

21 Building Dependable Systems: Change Change management: Source code management and version control Tracking of change requests and bug reports Procedures for changing requirements specifications, designs and other documentation Regression testing Release control

22 Reviews: Process (Plan) Objectives: To review progress against plan (formal or informal). To adjust plan (schedule, team assignments, functionality, etc.). Impact on quality: Good quality systems usually result from plans that are demanding but realistic. Good people like to be stretched and to work hard, but must not be pressed beyond their capabilities.

23 Reviews: Design and Code DESIGN AND CODE REVIEWS ARE A FUNDAMENTAL PART OF GOOD SOFTWARE DEVELOPMENT Concept Colleagues review each other's work: can be applied to any stage of software development can be formal or informal

24 Benefits of Design and Code Reviews Benefits: Extra eyes spot mistakes, suggest improvements Colleagues share expertise; helps with training An occasion to tidy loose ends Incompatibilities between components can be identified Helps scheduling and management control Fundamental requirements: Senior team members must show leadership Good reviews require good preparation Everybody must be helpful, not threatening

25 Review Team (Full Version) A review is a structured meeting, with the following people: Moderator -- ensures that the meeting moves ahead steadily Scribe -- records the discussion in a constructive manner Developer -- person(s) whose work is being reviewed Interested parties -- people above and below in the software process Outside experts -- knowledgeable people who are not working on this project Client -- representatives of the client who are knowledgeable about this part of the process

26 Example: Program Design Moderator Scribe Developer -- the design team Interested parties -- people who created the system design and/or requirements specification, and the programmers who will implement the system Outside experts -- knowledgeable people who are not working on this project Client -- only if the client has a strong technical representative

27 Review Process Preparation The developer provides colleagues with documentation (e.g., specification or design), or code listing Participants study the documentation in advance Meeting The developer leads the reviewers through the documentation, describing what each section does and encouraging questions Must allow plenty of time and be prepared to continue on another day.

28 Static and Dynamic Verification Static verification: Techniques of verification that do not include execution of the software. May be manual or use computer tools. Dynamic verification: Testing the software with trial data. Debugging to remove errors.

29 Static Validation & Verification Carried out throughout the software development process. [Diagram: reviews apply validation & verification to the requirements specification, the design, and the program]

30 Static Verification: Program Inspections Formal program reviews whose objective is to detect faults Code may be read or reviewed line by line. 150 to 250 lines of code in a 2-hour meeting. Use a checklist of common errors. Requires team commitment, e.g., trained leaders. So effective that it is claimed it can replace unit testing.

31 Inspection Checklist: Common Errors Data faults: Initialization, constants, array bounds, character strings Control faults: Conditions, loop termination, compound statements, case statements Input/output faults: All inputs used; all outputs assigned a value Interface faults: Parameter numbers, types, and order; structures and shared memory Storage management faults: Modification of links, allocation and de-allocation of memory Exceptions: Possible errors, error handlers
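As an invented illustration, the fragment below is seeded with several faults from the checklist; an inspector working down the list should catch each commented line:

    #include <stdio.h>

    #define N 10

    /* Invented example for inspection practice: each comment marks a
       deliberate checklist fault. */
    void example(void) {
        int a[N];
        int sum;                       /* data fault: sum never initialized */
        for (int i = 0; i <= N; i++)   /* control fault: loop runs to i == N
                                          and writes past the array bound */
            a[i] = i;
        for (int i = 0; i < N; i++)
            sum += a[i];               /* accumulates into uninitialized sum */
        printf("%d\n", sum);           /* output computed from bad data */
    }

    int main(void) { example(); return 0; }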

32 Static Analysis Tools Program analyzers scan the source of a program for possible faults and anomalies (e.g., Lint for C programs). Control flow: loops with multiple exit or entry points Data use: Undeclared or uninitialized variables, unused variables, multiple assignments, array bounds Interface faults: Parameter mismatches, non-use of function results, uncalled procedures Storage management: Unassigned pointers, pointer arithmetic
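A small invented fragment showing the sorts of anomalies such a tool reports (the faults are deliberate):

    #include <string.h>

    /* Deliberately anomalous code of the kind Lint-style tools flag. */
    int anomalies(const char *s) {
        int unused;          /* data use: declared but never used */
        int n;               /* data use: read before being assigned */
        if (n > 0)
            strlen(s);       /* interface: function result not used */
        return n;
    }

    int main(void) { return anomalies("abc") > 0; }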

33 Static Analysis Tools (continued) Static analysis tools Cross-reference table: Shows every use of a variable, procedure, object, etc. Information flow analysis: Identifies input variables on which an output depends. Path analysis: Identifies all possible paths through the program.

34 Failures and Faults Failure: Software does not deliver the service expected by the user (e.g., mistake in requirements, confusing user interface) Fault (BUG): Programming or design error whereby the delivered system does not conform to specification (e.g., coding error, interface error)

35 Faults and Failures? Actual examples (a) A mathematical function loops forever because of a rounding error. (b) A distributed system hangs because of a concurrency problem. (c) After a network is hit by lightning, it crashes on restart. (d) A program dies because the programmer typed x = 1 instead of x == 1. (e) The head of an organization is paid $5 a month instead of $10,005 because the maximum salary allowed by the program is $10,000. (f) An operating system fails because of a page-boundary error in the firmware.
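Example (d) in miniature, as it might look in C; the first test contains the fault, the second the intent:

    #include <stdio.h>

    int main(void) {
        int x = 0;
        if (x = 1)             /* fault: assigns 1 to x, so the branch
                                  is always taken */
            printf("taken (the fault)\n");
        x = 0;
        if (x == 1)            /* intent: compares x with 1 */
            printf("never printed here\n");
        return 0;
    }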

36 Terminology Fault avoidance Build systems with the objective of creating fault-free (bug-free) software Fault tolerance Build systems that continue to operate when faults (bugs) occur Fault detection (testing and validation) Detect faults (bugs) before the system is put into operation.

37 Fault Avoidance Software development process that aims to develop zero-defect software. Formal specification Incremental development with customer input Constrained programming options Static verification Statistical testing It is always better to prevent defects than to remove them later. Example: The four color problem.

38 Defensive Programming Murphy's Law: If anything can go wrong, it will. Defensive Programming: Redundant code is incorporated to check system state after modifications. Implicit assumptions are tested explicitly. Risky programming constructs are avoided.

39 Defensive Programming: Error Avoidance Risky programming constructs Pointers Dynamic memory allocation Floating-point numbers Parallelism Recursion Interrupts All are valuable in certain circumstances, but should be used with discretion

40 Defensive Programming Examples Use a boolean variable, not an integer Test i <= n, not i == n Assertion checking (e.g., validate parameters) Build debugging code into the program with a switch to display values at interfaces Error-checking codes in data (e.g., checksum or hash)
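A minimal sketch (invented, not from the lecture) that pulls several of these tactics together: a boolean flag, a less-than loop guard, assertion-checked parameters, and a compile-time debugging switch:

    #include <assert.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define DEBUG 0   /* hypothetical build switch for debugging output */

    /* Defensive table lookup: assumptions about the parameters are
       tested explicitly rather than taken on trust. */
    int table_lookup(const int *table, int size, int i) {
        assert(table != NULL);        /* test the implicit assumption */
        assert(i >= 0 && i < size);   /* validate the parameters */
    #if DEBUG
        printf("table_lookup: i=%d size=%d\n", i, size);  /* interface trace */
    #endif
        return table[i];
    }

    int main(void) {
        int t[4] = {1, 2, 3, 4};
        bool found = false;            /* boolean flag, not a 0/1 int */
        for (int i = 0; i < 4; i++)    /* i < n, not i == n */
            if (table_lookup(t, 4, i) == 3)
                found = true;
        printf(found ? "found\n" : "not found\n");
        return 0;
    }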

41 Maintenance Most production programs are maintained by people other than the programmers who originally wrote them. (a) What factors make a program easy for somebody else to maintain? (b) What factors make a program hard for somebody else to maintain?

42 Fault Tolerance General Approach: Failure detection Damage assessment Fault recovery Fault repair N-version programming -- Execute independent implementations in parallel, compare results, accept the most probable.
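A minimal sketch of the voting step, assuming three independently built implementations of the same specification (all names invented; one version is seeded with a fault):

    #include <stdio.h>

    /* Three hypothetical, independently developed implementations of
       the same function (square of x). version_c carries a seeded fault. */
    static int version_a(int x) { return x * x; }
    static int version_b(int x) { int r = 0; for (int i = 0; i < x; i++) r += x; return r; }
    static int version_c(int x) { return x * x + (x == 3); }  /* faulty at x == 3 */

    /* Accept a result that at least two of the three versions agree on. */
    static int vote(int a, int b, int c, int *out) {
        if (a == b || a == c) { *out = a; return 1; }
        if (b == c)           { *out = b; return 1; }
        return 0;   /* no majority: report failure upward */
    }

    int main(void) {
        int x = 3, result;
        if (vote(version_a(x), version_b(x), version_c(x), &result))
            printf("accepted result: %d\n", result);   /* prints 9 */
        else
            printf("no majority: raise an error\n");
        return 0;
    }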

43 Fault Tolerance Basic Techniques: After error continue with next transaction (e.g., drop packet) Timers and timeout in networked systems Error correcting codes in data Bad block tables on disk drives Forward and backward pointers in databases Report all errors for quality control
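To illustrate the error-checking idea only (a plain checksum detects corruption; error-correcting codes go further and repair it), here is a small invented example:

    #include <stdio.h>
    #include <stddef.h>

    /* Simple rotating checksum: stored with the data, recomputed and
       compared on every read. Detects most corruption; corrects none. */
    static unsigned checksum(const unsigned char *data, size_t len) {
        unsigned sum = 0;
        for (size_t i = 0; i < len; i++)
            sum = (sum << 1 | sum >> 31) ^ data[i];   /* rotate and mix */
        return sum;
    }

    int main(void) {
        unsigned char record[] = "balance=100";
        unsigned stored = checksum(record, sizeof record);
        record[8] = '9';   /* simulate corruption in storage */
        if (checksum(record, sizeof record) != stored)
            printf("corruption detected: reject the record and report it\n");
        return 0;
    }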

44 Fault Tolerance Backward Recovery: Record system state at specific events (checkpoints). After failure, recreate state at last checkpoint. Combine checkpoints with system log that allows transactions from last checkpoint to be repeated automatically. Test the restore software!
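A minimal sketch of checkpoint-plus-log recovery for a single counter, with invented file names; a real system must also make the writes atomic and, as the slide warns, the restore path must itself be tested:

    #include <stdio.h>

    #define CHECKPOINT "state.ckpt"   /* invented file names */
    #define LOG        "state.log"

    /* Record the full state and truncate the transaction log. */
    static void take_checkpoint(long state) {
        FILE *f = fopen(CHECKPOINT, "w");
        if (f) { fprintf(f, "%ld\n", state); fclose(f); }
        f = fopen(LOG, "w");
        if (f) fclose(f);
    }

    /* Append each transaction so it survives a crash. */
    static void log_transaction(long delta) {
        FILE *f = fopen(LOG, "a");
        if (f) { fprintf(f, "%ld\n", delta); fclose(f); }
    }

    /* Recovery: reload the last checkpoint, then replay the log. */
    static long recover(void) {
        long state = 0, delta;
        FILE *f = fopen(CHECKPOINT, "r");
        if (f) { fscanf(f, "%ld", &state); fclose(f); }
        f = fopen(LOG, "r");
        if (f) { while (fscanf(f, "%ld", &delta) == 1) state += delta; fclose(f); }
        return state;
    }

    int main(void) {
        take_checkpoint(100);
        log_transaction(25);
        log_transaction(-10);
        /* ...imagine a crash here: in-memory state is lost... */
        printf("recovered state: %ld\n", recover());   /* prints 115 */
        return 0;
    }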

45 Software Engineering for Real Time The special characteristics of real time computing require extra attention to good software engineering principles: Requirements analysis and specification Development of tools Modular design Exhaustive testing Heroic programming will fail!

46 Software Engineering for Real Time Testing and debugging need special tools and environments Debuggers, etc., cannot be used to test real-time performance Simulation of the environment may be needed to test interfaces -- e.g., adjustable clock speed General-purpose tools may not be available

47 Some Notable Bugs Built-in function in a Fortran compiler (e^0 = 0) Japanese microcode for Honeywell DPS virtual memory The microfilm plotter with the missing byte (1:1023) The Sun 3 page fault that IBM paid to fix Left-handed rotation in the graphics package Good people work around problems. The best people track them down and fix them!

48 End of Lecture 6