CS 501: Software Engineering Fall 1999 Lecture 16 Verification and Validation

2 Administration 

3 Reading

Sommerville: Chapters 22 to 25, pages 443 to 502.

4 Validation and Verification

Validation: Are we building the right product?
Verification: Are we building the product right?

In practice, it is sometimes difficult to distinguish between the two (e.g., Assignment 4).

That's not a bug. That's a feature!

5 Static and Dynamic Verification

Static verification: techniques of verification that do not include execution of the software.
- May be manual or use computer tools.

Dynamic verification:
- Testing the software with trial data.
- Debugging to remove errors.

6 Static Validation & Verification

Carried out throughout the software development process.

[Figure: validation & verification applied to the requirements specification, the design, and the program]

7 Cleanroom Software Development

Software development process that aims to develop zero-defect software.
- Formal specification
- Incremental development with customer input
- Constrained programming options
- Static verification
- Statistical testing

It is always better to prevent defects than to remove them later.

Example: the four color problem.

8 Static Verification: Program Inspections

Program reviews whose objective is to detect faults.
- Code may be read or reviewed line by line.
- 150 to 250 lines of code in a 2-hour meeting.
- Use a checklist of common errors.
- Requires team commitment, e.g., trained leaders.

So effective that it can replace unit testing.

9 Inspection Checklist: Common Errors

Data faults: Initialization, constants, array bounds, character strings
Control faults: Conditions, loop termination, compound statements, case statements
Input/output faults: All inputs used; all outputs assigned a value
Interface faults: Parameter numbers, types, and order; structures and shared memory
Storage management faults: Modification of links, allocation and de-allocation of memory
Exceptions: Possible errors, error handlers

A small C specimen containing several of these faults appears below.
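
To make the checklist concrete, here is a small, deliberately faulty C specimen (invented for illustration, not from the original slides). An inspector reading it line by line should reject it; the comments mark the checklist category of each fault.

    /* Deliberately faulty specimen for a line-by-line inspection. */
    int sum_scores(int scores[], int n)
    {
        int total;                    /* data fault: total is never initialized  */
        for (int i = 0; i <= n; i++)  /* control fault: <= runs one past the end */
            total += scores[i];       /* array bounds violated on the final pass */
        return total;
    }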

10 Static Analysis Tools

Program analyzers scan the source of a program for possible faults and anomalies (e.g., Lint for C programs).
- Control flow: Loops with multiple exit or entry points
- Data use: Undeclared or uninitialized variables, unused variables, multiple assignments, array bounds
- Interface faults: Parameter mismatches, non-use of function results, uncalled procedures
- Storage management: Unassigned pointers, pointer arithmetic

A fragment showing some of these anomalies appears below.
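
As an illustrative sketch (not from the slides), a lint-style analyzer would report several of the anomalies listed above when run over this short C fragment:

    /* Specimen of anomalies a lint-style checker reports. */
    int report(void)
    {
        int count;          /* data use: read before any assignment       */
        int unused = 42;    /* data use: assigned but never used          */
        count = count + 1;  /* uses the uninitialized value of count      */
        return count;
        count = 0;          /* control flow: unreachable after the return */
    }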

11 Static Analysis Tools (continued)

- Cross-reference table: Shows every use of a variable, procedure, object, etc.
- Information flow analysis: Identifies input variables on which an output depends.
- Path analysis: Identifies all possible paths through the program.

12 Testing and Debugging

Testing is most effective if divided into stages:
- Unit testing at various levels of granularity (a minimal example is sketched below)
  - tests by the developer
  - emphasis is on accuracy of actual code
- System and sub-system testing
  - uses trial data
  - emphasis is on integration and interfaces
- Acceptance testing
  - uses real data in realistic situations
  - emphasis is on meeting requirements
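
A minimal sketch of a developer's unit test in plain C, using the standard assert macro. The unit under test, leap_year, is a hypothetical example, not part of the course material:

    #include <assert.h>

    /* Hypothetical unit under test: returns 1 if y is a leap year. */
    static int leap_year(int y)
    {
        return (y % 4 == 0 && y % 100 != 0) || y % 400 == 0;
    }

    int main(void)
    {
        /* Unit tests check the accuracy of the actual code on known cases. */
        assert(leap_year(2000) == 1);   /* divisible by 400                 */
        assert(leap_year(1900) == 0);   /* divisible by 100 but not by 400  */
        assert(leap_year(1996) == 1);   /* ordinary leap year               */
        assert(leap_year(1999) == 0);   /* ordinary non-leap year           */
        return 0;
    }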

13 Acceptance Testing

Alpha testing: Clients operate the system in a realistic but non-production environment.
Beta testing: Clients operate the system in a carefully monitored production environment.
Parallel testing: Clients operate the new system alongside the old production system with the same data and compare results.

14 The Testing Process

System and acceptance testing is a major part of a software project.
- It requires time on the schedule.
- It may require substantial investment in datasets, equipment, and test software.
- Good testing requires good people!
- Management and client reports are important parts of testing.

What is the definition of "done"?

15 Testing Strategies

- Bottom-up testing. Each unit is tested with its own test environment.
- Top-down testing. Large components are tested with dummy stubs (see the sketch below).
  - user interfaces
  - work-flow
  - client and management demonstrations
- Stress testing. Tests the system at and beyond its limits.
  - real-time systems
  - transaction processing
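
To illustrate top-down testing, here is a hedged C sketch with invented names: the lower-level unit lookup_account does not exist yet, so a dummy stub stands in for it while the higher-level work-flow is exercised.

    #include <stdio.h>

    /* Dummy stub for a lower-level unit that is not yet written. */
    static int lookup_account(int id)
    {
        (void)id;
        return 1;   /* fixed result so the caller's work-flow can run */
    }

    /* Higher-level component under test; it calls through the stub. */
    static void process_request(int id)
    {
        if (lookup_account(id))
            printf("request %d accepted\n", id);
        else
            printf("request %d rejected\n", id);
    }

    int main(void)
    {
        process_request(17);   /* exercises the top-level flow */
        return 0;
    }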

16 Test Design

Testing can never prove that a system is correct. It can only show (a) that the system is correct in a special case, or (b) that it has a fault.
- The objective of testing is to find faults.
- Testing is never comprehensive.
- Testing is expensive.

17 Test Cases

Test cases are specific tests chosen because they are likely to find faults, balancing expense against the chance of finding serious faults.
- Cases chosen by the development team are effective in testing known vulnerable areas.
- Cases chosen by experienced outsiders and clients will be effective in finding gaps left by the developers.
- Cases chosen by inexperienced users will find other faults.

18 Test Case Selection: Coverage of Inputs

Objective is to test all classes of input (an example follows this list).
- Classes of data -- major categories of transaction and data inputs. Cornell example: (undergraduate, graduate, transfer, ...) by (college, school, program, ...) by (standing) by (...)
- Ranges of data -- typical values, extremes
- Invalid data, reversals, and special cases
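
As a hedged example (the grade_letter function and its ranges are invented, not from the course), the cases below cover a typical value in each class, the extremes of each range, and invalid data on both sides:

    #include <assert.h>

    /* Hypothetical unit: maps a 0-100 score to a letter, '?' if invalid. */
    static char grade_letter(int score)
    {
        if (score < 0 || score > 100) return '?';
        if (score >= 90) return 'A';
        if (score >= 80) return 'B';
        if (score >= 70) return 'C';
        return 'F';
    }

    int main(void)
    {
        assert(grade_letter(95)  == 'A');   /* typical value in one class     */
        assert(grade_letter(90)  == 'A');   /* extreme: bottom of the A range */
        assert(grade_letter(89)  == 'B');   /* extreme: top of the B range    */
        assert(grade_letter(0)   == 'F');   /* extreme: lowest valid input    */
        assert(grade_letter(100) == 'A');   /* extreme: highest valid input   */
        assert(grade_letter(-1)  == '?');   /* invalid data below the range   */
        assert(grade_letter(101) == '?');   /* invalid data above the range   */
        return 0;
    }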

19 Test Case Selection: Program

Objective is to test all functions of each computer program.
- Paths through the computer programs
  - program flow graph
  - check that every path is executed at least once (a sketch follows below)
- Dynamic program analyzers
  - count the number of times each path is executed
  - highlight or color source code
  - cannot be used with time-critical software
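
A small sketch of path coverage with invented names: the function below makes two decisions in sequence, so its flow graph has 2 x 2 = 4 paths, and four test cases execute every path at least once.

    #include <assert.h>

    /* Hypothetical unit with two decisions in sequence: 4 paths in total. */
    static int shipping_cost(int weight, int express)
    {
        int cost;
        if (weight > 10)    /* decision 1: heavy vs. light parcel */
            cost = 20;
        else
            cost = 10;
        if (express)        /* decision 2: express surcharge      */
            cost += 5;
        return cost;
    }

    int main(void)
    {
        /* One test case per path through the flow graph. */
        assert(shipping_cost(15, 1) == 25);   /* heavy, express  */
        assert(shipping_cost(15, 0) == 20);   /* heavy, standard */
        assert(shipping_cost( 5, 1) == 15);   /* light, express  */
        assert(shipping_cost( 5, 0) == 10);   /* light, standard */
        return 0;
    }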

20 Program Flow Graph

[Figure: program flow graphs for the if-then-else and loop-while constructs]

21 Fixing Bugs

- Isolate the bug
  - Intermittent --> repeatable
  - Complex example --> simple example
- Understand the bug
  - Root cause
  - Dependencies
  - Structural interactions
- Fix the bug
  - Design changes
  - Documentation changes
  - Code changes

22 Moving the Bugs Around

Fixing bugs is an error-prone process!
- When you fix a bug, fix its environment.
- Bug fixes need static and dynamic testing.
- Repeat all tests that have the slightest relevance (regression testing). Bugs have a habit of returning!
- When a bug is fixed, add the failure case to the test suite for the future (a sketch follows).
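
A hedged sketch of that last point (word_count and its history are invented): the input that once exposed a failure stays in the suite as a regression test, next to the tests that are repeated after every fix.

    #include <assert.h>
    #include <stddef.h>

    /* Hypothetical unit that, in this story, once mishandled "". */
    static size_t word_count(const char *s)
    {
        if (s == NULL || *s == '\0')   /* the fix: guard the empty case */
            return 0;
        size_t n = 1;
        for (; *s; s++)
            if (*s == ' ')
                n++;
        return n;
    }

    int main(void)
    {
        /* Regression test: the exact failure case, kept for the future. */
        assert(word_count("") == 0);
        /* Existing tests, repeated after the fix. */
        assert(word_count("one") == 1);
        assert(word_count("one two") == 2);
        return 0;
    }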