SE 433/333 Software Testing & Quality Assurance Dennis Mumaugh, Instructor Office: CDM, Room 428 Office Hours: Tuesday, 4:00 – 5:30


SE 433/333 Software Testing & Quality Assurance Dennis Mumaugh, Instructor Office: CDM, Room 428 Office Hours: Tuesday, 4:00 – 5:30 April 19, 2016, SE 433: Lecture 4, Slide 1 of 101

Administrivia  Comments and feedback  Announcements  Both parts of Assignment 1 have been graded.  Solutions to Assignments 1 and 2 have been posted to D2L.  Hints  Look at the Java documentation. For example:  Look at the examples (JUnit2.zip) mentioned on the reading list and provided in D2L.  In solving a problem, try getting the example working first.

SE 433 – Class 4 Topic:  Black Box Testing, JUnit Part 2 Reading:  Pezze and Young: Chapters 9-10  JUnit documentation  An example of a parameterized test: JUnit2.zip in D2L  See also the reading list

Assignments 4 and 5 Assignment 4: Parameterized Test  The objective of this assignment is to develop a parameterized JUnit test and run JUnit tests using the Eclipse IDE.  Due date: April 26, 2016, 11:59pm Assignment 5: Black Box Testing – Part 1: Test Case Design  The objective of this assignment is to design test suites using black-box techniques to adequately test the programs specified below. You may use any combination of black-box testing techniques to design the test cases.  Due date: May 3, 2016, 11:59pm

Thought for the Day “More than the act of testing, the act of designing tests is one of the best bug preventers known. The thinking that must be done to create a useful test can discover and eliminate bugs before they are coded – indeed, test-design thinking can discover and eliminate bugs at every stage in the creation of software, from conception to specification, to design, coding and the rest.” – Boris Beizer

Assignment 1 Lessons Learned  The assignment requirements may have some unwritten items:  Being able to test the program  Programs need to “defend” themselves from  Illegal input  Boundary conditions

Assignment 1 Part I, Grading rubric Part I: 10 Points  Compiles and runs and handles standard tests  Handles invalid cases -1  Not a triangle (a + b < c)  Degenerate triangle (a + b = c), aka a line  Handles illegal input -1  Negative numbers  Floating point numbers  Extra large integers and MAXINT [See the triangle test above]  Text  If the code handles most of the special conditions, give it the point.

Assignment 1 Part I, Grading rubric  For triangles with longest side c = 30:  a + b = 30, hence it is a line  a + b < 30, hence it is not a triangle  Negative length triangles are bad.  Also triangles with very large sides such as 2147483647 (this number is 2^31 − 1), i.e. Integer.MAX_VALUE: ”A constant holding the maximum value an int can have.” »The problem is that in testing for illegal triangles one must check a + b <= c, and we get arithmetic overflows. [See example on next slide.] »I got bit the first time myself.  The problem in using Integer.MAX_VALUE is simply that it is too large

Assignment 1 Lessons Learned  Consider (the slide’s literal values were lost in transcription; Integer.MAX_VALUE is an illustrative choice that reproduces the overflow):

public class Triangle {
    public static void main(String[] args) {
        int a = Integer.MAX_VALUE;  // illustrative values
        int b = Integer.MAX_VALUE;
        int c = Integer.MAX_VALUE;
        if (a + b <= c)  // a + b overflows int and wraps to -2
        { System.out.println("Not a triangle"); }
        else
        { System.out.println("Triangle"); }
    }
}

Result: Not a triangle

Case Study – Knight Capital High Frequency Trading (HFT)

Case Study – Knight Capital: High Frequency Trading (HFT)  The Knight Capital Group is an American global financial services firm.  Through its high-frequency trading algorithms, Knight was the largest trader in U.S. equities  with a market share of 17.3% on NYSE and 16.9% on NASDAQ.

Case Study – Knight Capital  Aug 1,  In the first 45 minutes, Knight Capital's computers executed a series of unusually large automatic orders. “… spit out duplicate buy and sell orders, jamming the market with high volumes of trades that caused the wild swings in stock prices.”  By the end of day: $460 million loss  “Trading Program Ran Amok, With No ‘Off’ Switch”  In two days, the company's market value plunged by 75% April 19, 2016SE 433: Lecture 4 12 of 101

Case Study – Knight Capital: What Happened?  "Zombie Software" Blamed for Knight Capital Trading Snafu  A new algorithmic trading program had just been installed, and began operation on Aug 1.  A dormant legacy program was somehow "inadvertently reactivated"  Once activated, the dormant system started multiplying stock trades by one thousand  Sent 4 million orders when attempting to fill just 212 customer orders  “Knight’s staff looked through eight sets of software before determining what happened.”

Case Study – Knight Capital: The Investigation and Findings  The SEC launched an investigation in Nov 2012. Findings:  Code changes in 2005 introduced defects. Although the defective function was not meant to be used, it was kept in.  New code was deployed in late July 2012. The defective function was triggered under the new rules. Unable to recognize when orders had been filled.  Ignored system-generated warnings.  Inadequate controls and procedures for code deployment and testing.  Charges filed in Oct 2013  Knight Capital settled the charges for $12 million

Regression Test

Software Evolution  Change happens throughout the software development life cycle.  Before and after delivery  Change can happen to every aspect of software  Changes can affect unchanged areas »break code, introduce new bugs »uncover previously unknown bugs »reintroduce old bugs

Regression Test  Testing of a previously tested program following modification to ensure that new defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made.  It should be performed whenever the software or its environment is changed.  It applies to testing at all levels. April 19, 2016SE 433: Lecture 4 17 of 101

Regression Test  Keep a test suite  Use the test suite after every change  Compare output with previous tests  Understand all changes  If new tests are needed, add to the test suite. April 19, 2016SE 433: Lecture 4 18 of 101

Test Driven Development (TDD)

Test Early  Testing should start as early as possible  design test cases  Testing early has several advantages  independence from design & code  discover inconsistencies and incompleteness of the specifications  serve as a compendium of the specifications

Test Driven Development  Test driven development (TDD) is one of the cornerstones of agile software development  Agile, iterative, incremental development  Small iterations, a few units  Verification and validation carried out for each iteration.  Design & implement test cases before implementing the functionality  Run automated regression tests of the whole system continuously

Process of Test Driven Development  Tests should be written first (before any code)  Execute all test cases => all fail  Implement some functions  Execute all test cases => some pass  Repeat implement and re-execute all test cases  Until all test cases => pass  Refactoring, to improve design & implementation  re-execute all test cases => all pass  Every time changes are made  re-execute all test cases => all pass
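The red-green cycle above can be compressed into a toy example. This is a plain-Java sketch (no JUnit, to keep it self-contained); the palindrome unit is hypothetical. The test method exists before the implementation; with only a stub it fails, and the real body makes it pass:

```java
public class TddCycle {
    // Step 1: the test is written first, against a not-yet-implemented unit.
    static boolean testIsPalindrome() {
        return isPalindrome("level") && !isPalindrome("levels") && isPalindrome("");
    }

    // Step 2: a stub (e.g. "return false;") makes the test fail ("red");
    // the real implementation below makes it pass ("green").
    static boolean isPalindrome(String s) {
        return new StringBuilder(s).reverse().toString().equals(s);
    }

    public static void main(String[] args) {
        // Re-executed after every change, exactly as in regression testing.
        System.out.println(testIsPalindrome() ? "all pass" : "some fail");
    }
}
```

Refactoring step: the implementation can be rewritten freely as long as re-executing testIsPalindrome() still reports all pass.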

Black Box Testing

Black Box View  The system is viewed as a black box  Provide some input  Observe the output

Functional Testing: A.k.a.: Black Box Testing  Derive test cases from the functional specifications  functional refers to the source of information  not to what is tested  Also known as:  specification-based testing (from specifications)  black-box testing (no view of the code)  Functional specification = description of intended program behavior  either formal or informal

Systematic vs. Random Testing  Random (uniform) testing  Pick possible inputs randomly and uniformly  Avoids designer bias  But treats all inputs as equally valuable  Systematic (non-uniform) testing  Select inputs that are especially valuable  Choose representatives  Black box testing is systematic testing

Why Not Random Testing?  Non-uniform distribution of defects  Example:  Program: solve a quadratic equation: ax² + bx + c = 0  Defect: incomplete implementation logic does not properly handle the special cases b² − 4ac = 0 and a = 0  Failing values are sparse in the input space — needles in a very big haystack.  Random sampling is unlikely to choose a = 0.0 and b = 0.0
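A sketch of a root-counting routine that does handle the special cases the slide describes (the function name and return convention are mine; -1 encodes "infinitely many solutions"):

```java
public class Quadratic {
    // Count the real roots of a*x^2 + b*x + c = 0, including the
    // special cases a defective implementation typically misses:
    // a == 0 (degenerate, linear) and b^2 - 4ac == 0 (repeated root).
    static int countRealRoots(double a, double b, double c) {
        if (a == 0.0) {                        // degenerate: b*x + c = 0
            if (b == 0.0) return c == 0.0 ? -1 : 0;  // -1: every x is a solution
            return 1;
        }
        double d = b * b - 4 * a * c;          // discriminant
        if (d > 0) return 2;
        if (d == 0) return 1;                  // repeated root
        return 0;                              // no real roots
    }

    public static void main(String[] args) {
        System.out.println(countRealRoots(1, -3, 2)); // 2
        System.out.println(countRealRoots(1, 2, 1));  // 1 (d == 0)
        System.out.println(countRealRoots(0, 2, 4));  // 1 (a == 0)
    }
}
```

A random tester drawing doubles uniformly would essentially never hit a == 0 or d == 0, which is why these branches need systematically chosen inputs.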

Systematic Partition of Input Space  [Diagram: the space of possible input values, partitioned into regions marked “Failure” and “No failure”]  Failures are sparse in the space of possible inputs but dense in some parts of the space  If we systematically test some cases from each part, we will include the dense parts  Functional testing is one way of drawing lines to isolate regions with likely failures

The Partition Principle  Exploit knowledge in the problem domain to choose samples for testing  Focus on “special” or trouble-prone regions of the input space  Failures are sparse in the whole input space... ... but we may find regions in which they are dense  (Quasi*-) Partition testing  Separates the input space into classes whose union is the entire space  *Quasi because the classes may overlap

The Partition Principle  Desirable case for partitioning  Input values that lead to failures are dense (easy to find) in some classes of input space  Sampling each class in the quasi-partition by selecting at least one input value that leads to a failure, revealing the defect  Seldom guaranteed, depend on experience- based heuristics April 19, 2016SE 433: Lecture 4 30 of 101

Black Box Testing Exploiting the functional specification  Uses the specification to partition the input space  e.g., specification of the “roots” program suggests division between cases with zero, one, and two real roots  Test each partition, and boundaries between partitions  No guarantees, but experience suggests failures often lie at the boundaries (as in the “roots” program)

Why Black Box Testing?  Early.  can start before code is written  Effective.  find some classes of defects, e.g., missing logic  Widely applicable  any description of program behavior as spec  at any level of granularity, from module to system testing.  Economical  less expensive than structural (white box) testing The base-line technique for designing test cases

Early Black Box Testing  Program code is not necessary  Only a description of intended behavior is needed  Even incomplete and informal specifications can be used »Although precise, complete specifications lead to better test suites  Early test design has side benefits  Often reveals ambiguities and inconsistency in the spec  Useful for assessing testability »And improving test schedule and budget by improving the spec  Useful explanation of the specification »or in the extreme case (as in XP), test cases are the spec

Functional versus Structural: Classes of faults  Different testing strategies (functional, structural, fault-based, model-based) are most effective for different classes of faults  Functional testing is best for missing logic faults  A common problem: Some program logic was simply forgotten  Structural (code-based) testing will never focus on code that isn’t there!

Functional vs. Structural Test  Functional test is applicable in testing at all granularity levels:  Unit test (from module interface spec)  Integration test (from API or subsystem spec)  System test (from system requirements spec)  Regression test (from system requirements + bug history)  Structural test is applicable in testing relatively small parts of a system:  Unit test

Steps: From specification to test cases 1. Decompose the specification If the specification is large, break it into independently testable features to be considered in testing 2. Select representatives  Representative values of each input, or  Representative behaviors of a model Often simple input/output transformations don’t describe a system. We use models in program specification, in program design, and in test design 3. Form test specifications Typically: combinations of input values, or model behaviors 4. Produce and execute actual tests

From specification to test cases

An Example: Postal Code Lookup  Input: ZIP code (5-digit US Postal code)  Output: List of cities What are some representative values to test?

Example: Representative Values  Correct zip code  With 0, 1, or many cities  Malformed zip code  Empty; 1-4 characters; 6 characters; very long  Non-digit characters  Non-character data Simple example with one input, one output Note prevalence of boundary values (0 cities, 6 characters) and error cases
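The malformed-input side of this example can be captured in a small well-formedness check plus the slide's representative values. The class and method are hypothetical (the real lookup would also need a ZIP-to-cities database):

```java
public class ZipValidator {
    // Hypothetical format check for a 5-digit US ZIP code string.
    static boolean isWellFormed(String zip) {
        if (zip == null || zip.length() != 5) return false; // empty, 1-4 chars, 6+ chars
        for (char ch : zip.toCharArray())
            if (ch < '0' || ch > '9') return false;         // non-digit characters
        return true;
    }

    public static void main(String[] args) {
        // Representative values from the slide: correct, empty,
        // too short, too long, non-digit.
        String[] samples = { "60604", "", "1234", "123456", "6060a" };
        for (String s : samples)
            System.out.println("\"" + s + "\" -> " + isWellFormed(s));
    }
}
```

Note that only one representative per malformed class is tested; by the partition principle, other values in the same class are expected to behave the same way.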

Summary  Functional testing, i.e., generation of test cases from specifications is a valuable and flexible approach to software testing  Applicable from very early system specs right through module specifications  (quasi-)Partition testing suggests dividing the input space into (quasi-)equivalent classes  Systematic testing is intentionally non-uniform to address special cases, error conditions, and other small places  Dividing a big haystack into small, hopefully uniform piles where the needles might be concentrated April 19, 2016SE 433: Lecture 4 40 of 101

Basic Techniques of Black Box Testing

Single Defect Assumption Failures are rarely the result of the simultaneous effects of two (or more) defects.

Functional Testing Concepts The four key concepts in functional testing are:  Precisely identify the domain of each input and each output variable  Select values from the data domain of each variable having important properties  Consider combinations of special values from different input domains to design test cases  Consider input values such that the program under test produces special values from the domains of the output variables

Developing Test Cases  Consider: Test cases for input box accepting numbers between 1 and 1000  If you are testing for an input box accepting numbers from 1 to 1000 then there is no use in writing thousand test cases for all 1000 valid input numbers plus other test cases for invalid data.  Using equivalence partitioning method, above test cases can be divided into three sets of input data called as classes. Each test case is a representative of respective class.  We can divide our test cases into three equivalence classes of some valid and invalid inputs. April 19, 2016SE 433: Lecture 4 44 of 101

Developing Test Cases 1. One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same. So one test case for valid input data should be sufficient. 2. Input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case. 3. Input data with any value greater than 1000 to represent the third, invalid input class. So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.
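The three classes above can be made concrete as one classifier plus one representative per class (the class name and label strings are mine):

```java
public class RangePartition {
    // The three equivalence classes for an input box accepting 1..1000.
    static String classify(int n) {
        if (n < 1) return "invalid: below lower limit";
        if (n > 1000) return "invalid: above upper limit";
        return "valid";
    }

    public static void main(String[] args) {
        // One representative per class is enough under equivalence partitioning.
        int[] representatives = { 500, 0, 1001 };
        for (int r : representatives)
            System.out.println(r + " -> " + classify(r));
    }
}
```

Three test cases replace 1000+; any other value in a class (say 7 or 999 for the valid class) is expected to exercise the same behavior as its representative.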

Equivalence Classes  Equivalence classes are the sets of values in a (quasi-) partition of the input, or output domain  Values in an equivalence class cause the program to behave in a similar way:  failure or success  Motivation:  gain a sense of complete testing and avoid redundancy  First determine the boundaries … then determine the equivalencies

Determining Equivalence Classes  Look for ranges of numbers or values  Look for memberships in groups  Some may be based on time  Include invalid inputs  Look for internal boundaries  Don’t worry if they overlap with each other:  better to be redundant than to miss something  However, test cases will easily overlap with boundary value test cases

Selecting Data Points  Determining equivalence classes for each input variable or field  Single input variable  Normal test »Select one data point from each valid equivalence class  Robustness test »Include invalid equivalence class April 19, 2016SE 433: Lecture 4 48 of 101

Selecting Data Points  Multiple input variables  Weak normal test: »Select one data point from each valid equivalence class  Strong normal test: »Select one data point from each combination of (the cross product of) the valid equivalence classes  Weak/strong robustness test: »Include invalid equivalence classes  How many test cases do we need? April 19, 2016SE 433: Lecture 4 49 of 101

Example of Selecting Data Points  Suppose a program has 2 input variables, x and y  Suppose x can lie in 3 valid equivalence classes:  a ≤ x < b  b ≤ x < c  c ≤ x ≤ d  Suppose y can lie in 2 valid equivalence classes:  e ≤ y < f  f ≤ y ≤ g
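For this example, weak normal testing needs max(3, 2) = 3 cases (each class covered at least once), while strong normal testing needs 3 × 2 = 6 (the cross product). A sketch with illustrative representative values (the concrete numbers stand in for one point per class; a..d and e..g are not specified on the slide):

```java
import java.util.ArrayList;
import java.util.List;

public class NormalTests {
    // One illustrative representative per valid class.
    static final int[] X_REPS = { 5, 15, 25 };  // one per x class
    static final int[] Y_REPS = { 5, 15 };      // one per y class

    // Strong normal: the cross product of the valid classes.
    static List<int[]> strongNormal() {
        List<int[]> cases = new ArrayList<>();
        for (int x : X_REPS)
            for (int y : Y_REPS)
                cases.add(new int[] { x, y });
        return cases;
    }

    // Weak normal: cover every class at least once by pairing classes up,
    // cycling the shorter list.
    static List<int[]> weakNormal() {
        List<int[]> cases = new ArrayList<>();
        int n = Math.max(X_REPS.length, Y_REPS.length);
        for (int i = 0; i < n; i++)
            cases.add(new int[] { X_REPS[i % X_REPS.length],
                                  Y_REPS[i % Y_REPS.length] });
        return cases;
    }

    public static void main(String[] args) {
        System.out.println("weak normal cases: " + weakNormal().size());     // 3
        System.out.println("strong normal cases: " + strongNormal().size()); // 6
    }
}
```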

Weak Normal Test  Every normal, i.e., valid, equivalence class of every input variable is tested in at least one test case.  A representative value of each normal equivalence class of each input variable appears in at least one test case.  Economical, requires few test cases if the values are selected prudently.  Complete. April 19, 2016SE 433: Lecture 4 51 of 101

Weak Normal Test  [Diagram: one test point covering each valid class of x and of y]

Strong Normal Test  Every combination of normal equivalence classes of every input variable is tested in at least one test cases.  More comprehensive.  Requires more test cases.  May not be practical for programs with large number of input variables. April 19, 2016SE 433: Lecture 4 53 of 101

Strong Normal Test  [Diagram: the x axis divided at a, b, c, d and the y axis at e, f, g; one test point in each of the six valid-class combinations]

Weak Robustness Test  Add robustness test cases to weak normal test suite.  Every invalid equivalence class of every input variable is tested in at least one robustness test case.  Each robustness test case include only one invalid input value.  No combination of invalid input values. April 19, 2016SE 433: Lecture 4 55 of 101

Weak Robustness Test

Strong Robustness Test  Add robustness test cases to strong normal test suite.  Every invalid equivalence class of an input variable is tested with all combinations of valid equivalence classes of other input variable.  Each robustness test case include only one invalid input value.  No combination of invalid input values. April 19, 2016SE 433: Lecture 4 57 of 101

Strong Robustness Test Cases  [Diagram: the x axis divided at a, b, c, d and the y axis at e, f, g; robustness test points lying outside the valid region]

Summary  For Multiple input variables  Weak normal test: »Select one data point from each valid equivalence class  Strong normal test: »Select one data point from each combination of (the cross product of) the valid equivalence classes  Weak/strong robustness test: »Include invalid equivalence classes April 19, 2016SE 433: Lecture 4 59 of 101

Example: nextDate() Function  This program reads a date in the format of mm/dd/yyyy and prints out the next date.  For example, an input of 03/31/2014 gives an output of 04/01/2014  A constraint (arbitrary, for illustration purposes only)  The year is between 1800 and 2200 inclusive
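A sketch of the specified behavior; this delegates the calendar arithmetic (month lengths, leap years) to java.time rather than hand-rolling it, which is one reasonable implementation but not necessarily the one the assignment intends:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class NextDate {
    static final DateTimeFormatter FMT = DateTimeFormatter.ofPattern("MM/dd/yyyy");

    // Parse mm/dd/yyyy, enforce the 1800..2200 constraint, return the next date.
    static String nextDate(String mmddyyyy) {
        LocalDate d = LocalDate.parse(mmddyyyy, FMT);
        if (d.getYear() < 1800 || d.getYear() > 2200)
            throw new IllegalArgumentException("year out of range");
        return d.plusDays(1).format(FMT);
    }

    public static void main(String[] args) {
        System.out.println(nextDate("03/31/2014")); // 04/01/2014
        System.out.println(nextDate("02/28/2008")); // 02/29/2008 (leap year)
    }
}
```

Even with the date logic delegated, the equivalence classes on the following slides still apply: they probe exactly the month-length, leap-year, and year-boundary behavior this function must get right.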

Example: nextDate(): Valid Equivalence Classes  The valid equivalence classes for the Day  { 1 ≤ Day ≤ 28 }  { Day = 29 }  { Day = 30 }  { Day = 31 }  The valid equivalence classes for the Month  { Month has 30 days }  { Month has 31 days }  { Month = February }  The valid equivalence classes for the Year  { Year is not a leap year }  { Year is a leap year }

Example: nextDate(): Invalid Equivalence Classes  The invalid equivalence classes for the Day { Day < 1 } { Day > 31 } { Incorrect format of Day } { Illegal characters of Day }  The invalid equivalence classes for the Month { Month < 1 } { Month > 12 } { Incorrect format of Month } { Illegal characters of Month }  The invalid equivalence classes for the Year { Year < 1800 } { Year > 2200 } { Incorrect format of Year } { Illegal characters of Year }  Other invalid equivalence classes { Incorrect order of Day, Month, Year } { Missing Day, Month, or Year } { Extra number or character }

Example: nextDate(): Test Cases: Weak Normal  Valid equivalence classes and data points  Day Data Points »{ 1 ≤ Day ≤ 28 } 10 »{ Day = 29 } 29 »{ Day = 30 } 30 »{ Day = 31 } 31  Month »{ Month has 30 days } 04 »{ Month has 31 days } 03 »{ Month = February } 02  Year »{ Year is not a leap year } 2009 »{ Year is a leap year } 2008  Weak normal test cases (4 cases) 1. 02/10/2009 2. 02/29/2008 3. 04/30/2009 4. 03/31/2008

Example: nextDate(): Test Cases: Strong Normal  Strong normal test cases (17 cases) 02/10/2008 02/10/2009 02/29/2008 03/10/2008 03/10/2009 03/29/2008 03/29/2009 03/30/2008 03/30/2009 03/31/2008 03/31/2009 04/10/2008 04/10/2009 04/29/2008 04/29/2009 04/30/2008 04/30/2009  Note: some combinations are invalid, thus excluded  e.g., 02/30/2008

Example: nextDate(): Test Cases: Weak Robustness  Add a test case for each invalid equivalence class  { Day < 1 } 02/00/2008  { Day > 31 } 03/36/2009  { Incorrect format of Day } 02/7/2008  { Illegal characters of Day } 02/First/2008  { Month < 1 } 00/10/2009  { Month > 12 } 15/10/2008  { Incorrect format of Month } 3/10/2008  { Illegal characters of Month } Mar/10/2009  { Year < 1800 } 02/10/1745  { Year > 2200 } 02/10/2350  { Incorrect format of Year } 02/10/10  { Illegal characters of Year } 02/10/’00  { Incorrect order of Day, Month, Year } 29/03/2008  { Missing Day, Month, or Year } 02/10  { Extra number or character } 02/20/2008/2009

Example: nextDate(): Test Cases: Strong Robustness  Add invalid test cases resulting from combinations of valid equivalence classes 04/31/2008 04/31/2009 02/29/2009 02/30/2008 02/30/2009 02/31/2008 02/31/2009  Ensure each invalid test case contains only one invalid value.  Single defect assumption

Boundary Value Testing  Test values, sizes, or quantities near the design limits  value limits, length limits, volume limits  null strings vs. empty strings  Errors tend to occur near the extreme values of inputs (off by one is an example)  Robustness:  How does the software react when boundaries are exceeded?  Use input values  at their minimum, just above the minimum  at a nominal value,  at the maximum, just below the maximum April 19, 2016SE 433: Lecture 4 67 of 101

Input Boundary Values  Test cases for a variable x, where a ≤ x ≤ b  Experience shows that errors occur more frequently for extreme values of a variable. a b x x(min)x(min+)x(nom)x(max -)x(max) April 19, 2016SE 433: Lecture 4 68 of 101

Input Boundary Values – 2 Variables Test cases for variables x1 and x2, where a ≤ x1 ≤ b and c ≤ x2 ≤ d  Single defect assumption  [Diagram: the rectangle a ≤ x1 ≤ b, c ≤ x2 ≤ d with boundary value test points along each axis through the nominal point]
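Under the single defect assumption, only one variable at a time leaves its nominal value, so n variables need 4n + 1 boundary value test cases (9 for two variables). A generator sketch; the concrete ranges in main are illustrative:

```java
import java.util.ArrayList;
import java.util.List;

public class BoundaryValues {
    // min, min+, nom, max-, max for a variable on [lo, hi].
    static int[] fiveValues(int lo, int hi) {
        return new int[] { lo, lo + 1, (lo + hi) / 2, hi - 1, hi };
    }

    // 4n+1 cases: each variable in turn takes its four extreme values
    // while the others stay nominal, plus the all-nominal case.
    static List<int[]> testCases(int[][] ranges) {
        int n = ranges.length;
        int[] nominal = new int[n];
        for (int i = 0; i < n; i++)
            nominal[i] = fiveValues(ranges[i][0], ranges[i][1])[2];
        List<int[]> cases = new ArrayList<>();
        cases.add(nominal.clone());
        for (int i = 0; i < n; i++) {
            int[] five = fiveValues(ranges[i][0], ranges[i][1]);
            for (int v : new int[] { five[0], five[1], five[3], five[4] }) {
                int[] tc = nominal.clone();
                tc[i] = v;
                cases.add(tc);
            }
        }
        return cases;
    }

    public static void main(String[] args) {
        // Two variables, illustrative ranges 1..100 and 1..31.
        System.out.println(testCases(new int[][] { { 1, 100 }, { 1, 31 } }).size()); // 9
    }
}
```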

Example: nextDate() – Test Cases: Boundary Values  Additional test cases, valid input 04/01/200904/30/ /01/200903/31/ /01/200902/28/ /29/ /01/200812/31/ /01/180012/31/2200 April 19, 2016SE 433: Lecture 4 70 of 101

Robustness Testing  Test cases for a variable x, where a ≤ x ≤ b  Stress input boundaries  Acceptable response for invalid inputs?  Leads to exploratory testing (test hackers)  Can discover hidden functionality ab x April 19, 2016SE 433: Lecture 4 71 of 101

Robustness Testing – 2 Variables  [Diagram: the rectangle a ≤ x1 ≤ b, c ≤ x2 ≤ d with test points just outside each boundary]

Example: nextDate() – Test Cases: Boundary Values  Additional robustness test cases, invalid input 04/00/200904/31/ /00/200903/32/ /00/200902/29/ /30/ /00/200812/32/ /31/179901/01/ /01/200913/01/2009 April 19, 2016SE 433: Lecture 4 73 of 101

Worst-Case Testing  Discard the single-defect assumption  Worst-case boundary testing:  Allow the input values to simultaneously approach their boundaries  Worst-case robustness testing:  Allow the input values to simultaneously approach and exceed their boundaries April 19, 2016SE 433: Lecture 4 74 of 101

Worst Case Boundary Testing – 2 Variables  [Diagram: the rectangle a ≤ x1 ≤ b, c ≤ x2 ≤ d with the cross product of the five boundary values of each variable, 25 test points]

Worst Case Robustness Testing – 2 Variables  [Diagram: as above, but also including values just outside each boundary, 49 test points]

Limitations of Boundary Value Testing  Doesn’t require much thought  May miss internal boundaries  Usually assumes the variables are independent  Values at the boundary may not have meaning

Special Value Testing  The most widely practiced form of functional testing  The tester uses his or her domain knowledge, experience, or intuition to probe areas of probable errors  Other terms: “hacking”, “out-of-box testing”, “ad hoc testing”, “seat of the pants testing”, “guerilla testing” April 19, 2016SE 433: Lecture 4 78 of 101

Uses of Special Value Testing  Complex mathematical (or algorithmic) calculations  Worst case situations (similar to robustness)  Problematic situations from past experience  “Second guess” the likely implementation

Characteristics of Special Value Testing  Experience really helps  Frequently done by the customer or user  Defies measurement  Highly intuitive  Seldom repeatable  Often very effective

Summary: Key Concepts  Black-box testing  vs. random testing, white-box testing  Partitioning principle  Black-box testing techniques  Equivalence class testing  Boundary value testing  Special value testing  Single-defect assumption  Normal vs. robustness testing  Weak and strong combinations

Guidelines and observations  Equivalence Class Testing is appropriate when input data is defined in terms of intervals and sets of discrete values.  Equivalence Class Testing is strengthened when combined with Boundary Value Testing.  Strong equivalence class testing presumes the variables are independent; if that is not the case, redundant test cases may be generated.
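The "intervals and sets of discrete values" case can be made concrete with a small classifier. The intervals below are illustrative (not from the course project); each return value names one equivalence class, and one representative input per class suffices under the partitioning principle:

```java
public class EquivalenceClasses {
    // Hypothetical example: partition an age input into equivalence classes.
    static String classify(int age) {
        if (age < 0)    return "invalid-low";
        if (age <= 17)  return "minor";
        if (age <= 64)  return "adult";
        if (age <= 120) return "senior";
        return "invalid-high";
    }

    public static void main(String[] args) {
        // One representative per class; boundary value testing would add
        // the values at and around -1/0, 17/18, 64/65, and 120/121.
        int[] representatives = { -5, 10, 40, 70, 200 };
        for (int r : representatives)
            System.out.println(r + " -> " + classify(r));
    }
}
```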

An Introduction to JUnit, Part 2

JUnit Best Practices  Each test case should be independent.  Test cases should be independent of execution order.  No dependencies on the state of previous tests.

JUnit Test Fixtures  The context in which a test case is executed.  Typically includes:  Common objects or resources available for use by any test case.  Activities to manage these objects:  Set-up: object and resource allocation  Tear-down: object and resource de-allocation

Set-Up  Tasks that must be done prior to each test case  Examples:  Create some objects to work with  Open a network connection  Open a file to read/write

Tear-Down  Tasks to clean up after execution of each test case.  Ensures:  Resources are released  The system is in a known state for the next test case  Clean-up should not be done at the end of a test case, since a failure ends execution of the test case at that point

Method Annotations for Set-Up and Tear-Down  @Before annotation: set-up  code to run before each test case.  @After annotation: tear-down  code to run after each test case.  @After methods run regardless of the verdict, even if exceptions are thrown in the test case or an assertion fails.  Multiple @Before/@After methods are allowed  all annotated methods will be run before/after each test case  but there is no guarantee of execution order

Example: Using a File as a Test Fixture

public class OutputTest {
    private File output;

    @Before
    public void createOutputFile() {
        output = new File(...);
    }

    @After
    public void deleteOutputFile() {
        output.delete();
    }

    @Test
    public void test1WithFile() {
        // code for test case
    }

    @Test
    public void test2WithFile() {
        // code for test case
    }
}

Method Execution Order 1. createOutputFile() 2. test1WithFile() 3. deleteOutputFile() 4. createOutputFile() 5. test2WithFile() 6. deleteOutputFile() Not guaranteed: test1WithFile runs before test2WithFile

Once-Only Set-Up  @BeforeClass annotation on a static method  one method only  Run once for the entire test class  before any of the tests, and before any @Before method(s)  Useful for starting servers, opening connections, etc.  No need to reset/restart for each test case  Shared fixture

@BeforeClass
public static void anyName() {
    // class setup code here
}

Once-Only Tear-Down  @AfterClass annotation on a static method  one method only  Run once for the entire test class  after all of the tests, and after any @After method(s)  Useful for stopping servers, closing connections, etc.

@AfterClass
public static void anyName() {
    // class clean-up code here
}

Timed Tests  Useful for simple performance tests  Network communication  Complex computation  The timeout parameter of the @Test annotation  in milliseconds  The test fails if the timeout occurs before the test method completes

@Test(timeout = ...)
public void testLengthyOperation() {
    ...
}

Parameterized Tests  Repeat a test case multiple times with different data  Define a parameterized test:  @RunWith(Parameterized.class) class annotation defines a parameterized test  Define a constructor  Input and expected output values for one data point  Define a static method annotated @Parameters that returns a Collection of data points  Each data point: an array whose elements match the constructor arguments

Running a Parameterized Test  Use the parameterized test runner  For each data point provided by the parameter method:  Construct an instance of the class with the data point  Execute all test methods defined in the class

Parameterized Test Example – Program Under Test

public class Calculator {
    public long factorial(int n) {
        ...
        return result;
    }
}

See JUnit2.zip

Parameterized Test Example – The Test

@RunWith(Parameterized.class)
public class CalculatorTest {
    private long expected;  // expected output
    private int value;      // input value

    public CalculatorTest(long expected, int value) {
        this.expected = expected;
        this.value = value;
    }
}

Parameterized Tests Example – The Parameters

@Parameters
public static Collection data() {
    return Arrays.asList(new Integer[][] {
        { 1, 0 },  // expected, value
        { 1, 1 },
        { 2, 2 },
        { 24, 4 },
        { 5040, 7 },
    });
}

Parameterized Tests Example – The Test Method

@Test
public void factorialTest() {
    Calculator calc = new Calculator();
    assertEquals(expected, calc.factorial(value));
}

Readings and References  Chapter 10 of the textbook.  JUnit documentation  An example of a parameterized test:  JUnit2.zip in D2L

Next Class Topic: Black Box Testing Part 2, JUnit & Ant Reading: Chapter 10 of the textbook. Articles on the class page and reading list Assignment 4 – Parameterized Test  Due April 26, 2016 Assignment 5 – Black Box Testing – Part 1: Test Case Design  Due May 3, 2016