CSC444 Lecture 8: Testing
University of Toronto, Department of Computer Science
© 2001, Steve Easterbrook

Slide 1: Lecture 8: Testing
- Verification and Validation
  - testing vs. static analysis
- Testing: how to partition the space
  - black box testing
  - white box testing
- System level tests
  - integration testing
  - other system tests
- Automated testing

Slide 2: Verification and Validation
- Validation: does the software do what was wanted?
  - "Are we building the right system?"
  - This is difficult to determine and involves subjective judgements
- Verification: does the software meet its specification?
  - "Are we building the system right?"
  - This can be objective if the specifications are sufficiently precise
- Everything must be verified
  - ...including the verification process itself
- Three approaches to verification:
  - experiment with the program (testing)
  - reason about the program (formal verification)
  - inspect the program (reviews)

Slide 3: Goals of Testing
- Goal: show a program meets its specification
  - But: testing can never be complete for non-trivial programs
- What is a successful test?
  - One in which no errors were found?
  - One in which one or more errors were found?
- Testing should be:
  - repeatable
    - if you find an error, you'll want to repeat the test to show others
    - if you correct an error, you'll want to repeat the test to check you did fix it
  - systematic
    - random testing is not enough
    - select test sets that cover the range of behaviors of the program
    - select test sets that are representative of real uses
  - documented
    - keep track of what tests were performed, and what the results were
Source: Adapted from van Vliet, 2000, Section 13.1
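
To make "repeatable, systematic, documented" concrete, here is a minimal sketch (not from the slides) of a table-driven test harness: every case is written down with its inputs and expected result, and the whole table can be rerun unchanged after a fix. The equal routine is borrowed from the next slide.

    // Hypothetical table-driven test harness: each case is documented as
    // (inputs, expected output), and the whole table can be rerun verbatim
    // after any fix, which makes the tests repeatable and their results traceable.
    public class EqualTest {
        static boolean equal(int x, int y) { return x == y; }   // unit under test

        public static void main(String[] args) {
            int[][] inputs   = {{0, 0}, {1, 1}, {1, 2}, {-1, 1}, {Integer.MAX_VALUE, Integer.MAX_VALUE}};
            boolean[] expect = {true,   true,   false,  false,   true};

            int failures = 0;
            for (int i = 0; i < inputs.length; i++) {
                boolean actual = equal(inputs[i][0], inputs[i][1]);
                System.out.printf("case %d: equal(%d,%d) = %b (expected %b)%n",
                        i, inputs[i][0], inputs[i][1], actual, expect[i]);
                if (actual != expect[i]) failures++;
            }
            System.out.println(failures == 0 ? "all cases passed" : failures + " case(s) failed");
        }
    }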

Slide 4: Random testing isn't enough
Structurally...
- Test strategy: pick random values for x and y and test 'equal' on them
- But: ...we might never test the first branch of the 'if' statement

    boolean equal (int x, y) {
      /* effects: returns true if x=y, false otherwise */
      if (x == y) return(TRUE)
      else return(FALSE)
    }

Functionally...
- Try these test cases: ... Why is this not enough?

    int maximum (list a)
      /* requires: a is a list of integers
         effects: returns the maximum element in the list */

Source: Adapted from Horton, 1999
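
A small illustration of the structural point above (a sketch, not from the slides): with uniformly random 32-bit integers, the chance that x == y on any single trial is about 1 in 4 billion, so a random test set almost never exercises the first branch of equal, while a deliberately chosen case such as (5, 5) covers it immediately.

    import java.util.Random;

    // Demonstrates why random inputs are a poor way to cover the x == y branch.
    public class RandomVsChosen {
        static boolean equal(int x, int y) { return x == y; }

        public static void main(String[] args) {
            Random rng = new Random(42);
            int trials = 1_000_000, hits = 0;
            for (int i = 0; i < trials; i++) {
                if (equal(rng.nextInt(), rng.nextInt())) hits++;   // branch almost never taken
            }
            System.out.println("random trials that hit the x == y branch: " + hits);

            // A systematically chosen case covers the branch at once.
            System.out.println("chosen case equal(5, 5): " + equal(5, 5));
        }
    }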

Slide 5: Partitioning
- Systematic testing depends on partitioning
  - partition the set of possible behaviours of the system
  - choose representative samples from each partition
  - make sure we covered all partitions
- How do you identify suitable partitions?
  - That's what testing is all about!!!
  - Methods: black box, white box, ...
Source: Adapted from Horton, 1999
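
As an illustration of partitioning (my example, not the slides'), the input space of the maximum routine from slide 4 might be split into partitions such as: single element, all elements equal, maximum at the front, maximum at the end, negative values only, and (as an off-nominal case) the empty list. One representative sample per partition then covers each class of behaviour.

    import java.util.List;

    // One representative case per partition of maximum()'s input space.
    public class MaximumPartitions {
        static int maximum(List<Integer> a) {          // simple reference version of the unit under test
            return a.stream().max(Integer::compare).orElseThrow();
        }

        public static void main(String[] args) {
            // partition -> representative sample -> expected result
            check("single element",       List.of(7), 7);
            check("all elements equal",   List.of(3, 3, 3), 3);
            check("maximum at the front", List.of(9, 1, 2), 9);
            check("maximum at the end",   List.of(1, 2, 9), 9);
            check("negative values only", List.of(-5, -2, -9), -2);
            // the "empty list" partition would be probed separately as an off-nominal case
        }

        static void check(String partition, List<Integer> input, int expected) {
            int actual = maximum(input);
            System.out.printf("%-22s %s -> %d (expected %d)%n", partition, input, actual, expected);
        }
    }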

Slide 6: Black Box Testing
- Generate test cases from the specification only (i.e. don't look at the code)
- Advantages:
  - avoids making the same assumptions as the programmer
  - test data is independent of the implementation
  - results can be interpreted without knowing implementation details
- Three suggestions for selecting test cases:
  - Paths through the spec
    - e.g. choose test cases that cover each part of the 'requires', 'modifies' and 'effects' clauses
  - Boundary conditions
    - choose test cases that are at or close to boundaries for ranges of inputs
    - test for aliasing errors (e.g. two parameters referring to the same object)
  - Off-nominal cases
    - choose test cases that try out every type of invalid input (the program should degrade gracefully, without loss of data)

Slide 7: Example

    real sqrt (real x, epsilon) {
      /* requires: x >= 0 and (... < epsilon < 0.001)
         effects: returns y such that x-epsilon <= y^2 <= x+epsilon */

- paths through the spec:
  - "x >= 0" means "x > 0 or x = 0", so test both "paths"
  - it is not always possible to choose tests to cover the effects clause...
    - can't choose test cases for "x-epsilon = y^2" or "y^2 = x+epsilon"
    - if the algorithm always generates positive errors, can't even generate y^2 < x
- boundary conditions:
  - as "x >= 0", choose 1, 0, -1 as values for x
  - as "... < epsilon < 0.001", choose values at or close to the bounds: ..., 0.001, ... as values for epsilon
  - very large and very small values for x
- off-nominal cases:
  - negative values for x and epsilon
  - values for epsilon above 0.001, values for epsilon below its lower bound
Source: Adapted from Liskov & Guttag, 2000, pp. 224-5
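
A sketch of how these black-box cases might be written up as an executable test set (my own harness, not from the slides; the lower bound on epsilon is not recoverable from the transcript, so EPS_LO = 0.0001 is an assumed placeholder, and the sqrt body is a stand-in implementation). The only check a black-box test can make is the effects clause itself.

    // Black-box test set for sqrt, derived only from the spec above.
    public class SqrtSpecTests {
        static final double EPS_LO = 0.0001;   // assumption: the slide's actual lower bound is not shown
        static final double EPS_HI = 0.001;

        static double sqrt(double x, double epsilon) { return Math.sqrt(x); }   // stand-in implementation

        // The effects clause: x - epsilon <= y^2 <= x + epsilon
        static boolean satisfiesSpec(double x, double epsilon, double y) {
            return x - epsilon <= y * y && y * y <= x + epsilon;
        }

        public static void main(String[] args) {
            double eps = (EPS_LO + EPS_HI) / 2;
            double[] xs = {1.0, 0.0, 4.0, 1e12, 1e-12};   // boundary and extreme values for x
            for (double x : xs) {
                double y = sqrt(x, eps);
                System.out.printf("sqrt(%g, %g) = %g, spec satisfied: %b%n",
                        x, eps, y, satisfiesSpec(x, eps, y));
            }
            // Off-nominal cases (outside the requires clause), e.g. x = -1 or epsilon = 0.01,
            // would be exercised separately to check the program degrades gracefully.
        }
    }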

Slide 8: The classic example
- Consider the following program:

    char * triangle (unsigned x, y, z) {
      /* effects: If x, y and z are the lengths of the sides of a triangle,
         this function returns one of three strings, "scalene", "isosceles"
         or "equilateral" for the given three inputs. */

- How many test cases are enough?
  - expected cases (one for each type of triangle): (3,4,5), (4,4,5), (5,5,5)
  - boundary cases (only just not a triangle): (1,2,3)
  - off-nominal cases (not a valid triangle): (4,5,100)
  - vary the order of inputs for expected cases: (4,5,4), (5,4,4)
  - vary the order of inputs for the boundary case: (1,3,2), (2,1,3), (2,3,1), (3,2,1), (3,1,2)
  - vary the order of inputs for the off-nominal case: (100,4,5), (4,100,5)
  - choose two equal parameters for the off-nominal case: (100,4,4)
- Note: there is a bug in the specification!!
Source: Adapted from Blum, 1992
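
A sketch of these cases wired into a driver. The triangle implementation here is a plausible stand-in written for this example, not the code under discussion; note that the spec above says nothing about inputs that do not form a triangle (presumably the bug the slide alludes to), so the stand-in adds a fourth return value for that situation.

    // Driver for the triangle test cases listed above.
    public class TriangleTest {
        static String triangle(long x, long y, long z) {
            if (x + y <= z || y + z <= x || x + z <= y) return "not a triangle";   // triangle inequality
            if (x == y && y == z) return "equilateral";
            if (x == y || y == z || x == z) return "isosceles";
            return "scalene";
        }

        public static void main(String[] args) {
            long[][] cases = {
                {3,4,5}, {4,4,5}, {5,5,5},                       // expected cases
                {1,2,3},                                         // boundary: only just not a triangle
                {4,5,100},                                       // off-nominal: not a valid triangle
                {4,5,4}, {5,4,4},                                // order variations, expected cases
                {1,3,2}, {2,1,3}, {2,3,1}, {3,2,1}, {3,1,2},     // order variations, boundary case
                {100,4,5}, {4,100,5}, {100,4,4}                  // order variations / two equal, off-nominal
            };
            for (long[] c : cases)
                System.out.printf("triangle(%d,%d,%d) = %s%n", c[0], c[1], c[2], triangle(c[0], c[1], c[2]));
        }
    }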

Slide 9: White box testing
- Examine the code and test all paths
  - ...because black box testing can never guarantee we exercised all the code
- Path completeness:
  - A test set is path complete if each path through the code is exercised by at least one case in the test set
  - (not the same as saying each statement in the code is reached!!)
- Example:

    int midval (int x, y, z) {
      /* effects: returns median value of the three inputs */
      if (x > y) {
        if (x > z) return x
        else return z
      } else {
        if (y > z) return y
        else return z
      }
    }

- There are 4 paths through this code, so we need at least 4 test cases, e.g.:
  - x=3, y=2, z=1
  - x=3, y=2, z=4
  - x=2, y=3, z=2
  - x=2, y=3, z=4
Source: Adapted from Liskov & Guttag, 2000
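
A sketch of the slide's four test cases as a driver, with the code transliterated into Java and instrumented so that each call records which of the four paths it took. This only demonstrates path coverage; it deliberately does not judge whether the returned values match the spec.

    // Path-coverage check for the midval example above.
    public class MidvalPaths {
        static int path;   // records which of the 4 paths the last call took

        static int midval(int x, int y, int z) {
            if (x > y) {
                if (x > z) { path = 1; return x; }
                else       { path = 2; return z; }
            } else {
                if (y > z) { path = 3; return y; }
                else       { path = 4; return z; }
            }
        }

        public static void main(String[] args) {
            int[][] cases = {{3,2,1}, {3,2,4}, {2,3,2}, {2,3,4}};   // the slide's four cases
            boolean[] covered = new boolean[5];
            for (int[] c : cases) {
                int r = midval(c[0], c[1], c[2]);
                covered[path] = true;
                System.out.printf("midval(%d,%d,%d) = %d  (path %d)%n", c[0], c[1], c[2], r, path);
            }
            for (int p = 1; p <= 4; p++)
                System.out.println("path " + p + " covered: " + covered[p]);
        }
    }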

Slide 10: Weaknesses of path completeness
- White box testing is insufficient
  - e.g. for this version of midval, the single test case x=4, y=1, z=2 is path complete
  - the program performs correctly on this test case, but the program is still wrong!!

    int midval (int x, y, z) {
      /* effects: returns median value of the three inputs */
      return z;
    }

- Path completeness is usually infeasible
  - e.g. there are 2^100 paths through this program segment:

    for (j=0, i=0; i<100; i++)
      if a[i]=true then j=j+1

  - loops are problematic. Try: test 0, 1, 2, n-1, and n iterations (n is the maximum number of iterations possible)
  - or try formal analysis: find the "loop invariant"!!
Source: Adapted from Liskov & Guttag, 2000, and van Vliet, 1999, Section 13.5
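
A sketch of the "0, 1, 2, n-1, n iterations" heuristic (my example, not the slides'): a routine that counts true flags in an array is exercised with input sizes chosen at the loop-count boundaries rather than with one case per path, which would be hopeless.

    // Loop-boundary testing: instead of trying to cover all 2^n paths through the
    // loop, choose inputs that make it execute 0, 1, 2, n-1 and n times.
    public class LoopBoundaryTest {
        static final int N = 100;   // maximum number of iterations in this example

        static int countTrue(boolean[] a) {
            int j = 0;
            for (int i = 0; i < a.length; i++)
                if (a[i]) j++;
            return j;
        }

        public static void main(String[] args) {
            int[] sizes = {0, 1, 2, N - 1, N};   // iteration counts at the boundaries
            for (int size : sizes) {
                boolean[] a = new boolean[size];
                for (int i = 0; i < size; i++) a[i] = (i % 2 == 0);   // alternate true/false
                int expected = (size + 1) / 2;                        // number of even indices
                System.out.printf("size %3d: countTrue = %d (expected %d)%n",
                        size, countTrue(a), expected);
            }
        }
    }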

Slide 11: Integration Testing
- Unit testing
  - each unit is tested separately to check it meets its specification
- Integration testing
  - units are tested together to check they work together
  - two strategies:
    - Bottom up: for the dependency graph shown on the slide (units p, q, r, e, d), the test order is: 1) d; 2) e and r; 3) q; 4) p
    - Top down: for the structure chart shown on the slide (module a calling b, c and d, which in turn call e...k), the order is: 1) test a with stubs for b, c and d; 2) test a+b+c+d with stubs for e...k; 3) test the whole system
- Integration testing is hard:
  - much harder to identify equivalence classes
  - problems of scale
  - tends to reveal specification errors rather than integration errors
Source: Adapted from van Vliet, 1999, Section 13.9
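
A sketch of what one top-down step might look like in code (hypothetical interfaces, loosely named after the slide's structure chart): module a is tested against stubs for b, c and d that return canned values and record how they were called.

    import java.util.ArrayList;
    import java.util.List;

    // Top-down integration sketch: test module 'a' against stubs for b, c and d.
    // The interfaces and canned return values are invented for illustration only.
    public class TopDownIntegration {
        interface B { int compute(int input); }
        interface C { int compute(int input); }
        interface D { int compute(int input); }

        // Module under test: 'a' combines results from its three subordinates.
        static int a(int input, B b, C c, D d) {
            return b.compute(input) + c.compute(input) + d.compute(input);
        }

        public static void main(String[] args) {
            List<String> calls = new ArrayList<>();   // the stubs record how they were invoked

            B stubB = x -> { calls.add("b(" + x + ")"); return 10; };   // canned result
            C stubC = x -> { calls.add("c(" + x + ")"); return 20; };
            D stubD = x -> { calls.add("d(" + x + ")"); return 30; };

            int result = a(7, stubB, stubC, stubD);
            System.out.println("a(7) with stubs = " + result + " (expected 60)");
            System.out.println("calls made by a: " + calls);
        }
    }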

Slide 12: System testing
- Other types of test:
  - facility testing: does the system provide all the functions required?
  - volume testing: can the system cope with large data volumes?
  - stress testing: can the system cope with heavy loads?
  - endurance testing: will the system continue to work for long periods?
  - usability testing: can the users use the system easily?
  - security testing: can the system withstand attacks?
  - performance testing: how good is the response time?
  - storage testing: are there any unexpected data storage issues?
  - configuration testing: does the system work on all target hardware?
  - installability testing: can we install the system successfully?
  - reliability testing: how reliable is the system over time?
  - recovery testing: how well does the system recover from failure?
  - serviceability testing: how maintainable is the system?
  - documentation testing: is the documentation accurate, usable, etc.?
  - operations testing: are the operators' instructions right?
  - regression testing: repeat all testing every time we modify the system!
Source: Adapted from Blum, 1992

Slide 13: Automated Testing
- Ideally, testing should be automated
  - tests can be repeated whenever the code is modified ("regression testing")
  - takes the tedium out of extensive testing
  - makes more extensive testing possible
- Will need:
  - test driver: automates the process of running a test set
    - sets up the environment
    - makes a series of calls to the unit-under-test
    - saves results and checks they were right
    - generates a summary for the developers
  - test stub: simulates part of the program called by the unit-under-test
    - checks whether the UUT set up the environment correctly
    - checks whether the UUT passed sensible input parameters to the stub
    - passes back some return values to the UUT (according to the test case)
    - (stubs could be interactive: ask the user to supply return values)
Source: Adapted from Liskov & Guttag, 2000
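
A minimal sketch of a driver and a stub working together (hypothetical names invented for this example): the unit-under-test orderTotal calls a PriceService; the stub checks that it was passed sensible parameters and hands back canned prices, while the driver runs the test set, checks the results and prints a summary.

    // Test driver + test stub sketch. OrderTotal and PriceService are hypothetical.
    public class DriverAndStub {
        interface PriceService { double priceOf(String item); }

        // Unit under test: totals an order by asking the price service for each item.
        static double orderTotal(String[] items, PriceService prices) {
            double total = 0;
            for (String item : items) total += prices.priceOf(item);
            return total;
        }

        public static void main(String[] args) {
            // Test stub: checks the UUT passes sensible parameters, returns canned values.
            PriceService stub = item -> {
                if (item == null || item.isEmpty())
                    throw new AssertionError("UUT passed an invalid item name to the stub");
                return item.equals("apple") ? 1.50 : 2.00;   // canned prices for this test case
            };

            // Test driver: runs the test set, checks results, prints a summary.
            String[][] inputs = { {"apple"}, {"apple", "pear"}, {} };
            double[] expected = { 1.50,      3.50,              0.0 };
            int failures = 0;
            for (int i = 0; i < inputs.length; i++) {
                double actual = orderTotal(inputs[i], stub);
                boolean ok = Math.abs(actual - expected[i]) < 1e-9;
                System.out.printf("case %d: total = %.2f (expected %.2f) %s%n",
                        i, actual, expected[i], ok ? "PASS" : "FAIL");
                if (!ok) failures++;
            }
            System.out.println(failures == 0 ? "summary: all cases passed"
                                             : "summary: " + failures + " case(s) failed");
        }
    }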

Slide 14: References
- Horton, D., "Testing Software", course handout, University of Toronto.
  This excellent introduction to systematic testing is available from the readings page on the course website.
- van Vliet, H., "Software Engineering: Principles and Practice" (2nd edition), Wiley, 2000.
  Chapter 13 provides an excellent overview of the whole idea of testing software. van Vliet's treatment complements this lecture very nicely: he covers the philosophy of testing and the kinds of errors that occur in software. Instead of using black box vs. white box as his test selection criteria, he uses "coverage-based", "fault-based" and "error-based". As an exercise, try mapping one of these classifications onto the other, and see what you get!
- Liskov, B. and Guttag, J., "Program Development in Java: Abstraction, Specification and Object-Oriented Design", Addison-Wesley, 2000.
  Liskov and Guttag's chapter 10 is a nice introduction to testing of procedural and data abstractions.
- Blum, B., "Software Engineering: A Holistic View", Oxford University Press, 1992.
  Section 5.3 provides an excellent overview of the whole idea of testing software.