1 Lecture 9: Testing. Topics: Testing. Readings: Chapter 8, Testing. Spring 2008, CSCE 492 Software Engineering

2 Overview. Last Time: Achieving quality attributes (nonfunctional requirements). Today's Lecture: Testing = achieving functional requirements. References: Chapter 8, Testing. Next Time: Requirements meetings with individual groups; start at 10:15. Sample test -

3 Testing. Why test? The earlier an error is found, the cheaper it is to fix. Error/bug terminology: A fault is a condition that causes the software to fail. A failure is the inability of a piece of software to perform according to its specifications.

4 Testing Approaches. Development-time techniques: automated tools (compilers, lint, etc.). Offline techniques: walkthroughs, inspections. Online techniques: black box testing (not looking at the code), white box testing.

5 Testing Levels: unit-level testing, integration testing, system testing, test cases/test suites, regression tests.

6 Simple Test for a Simple Function. Test cases for the function convertToFahrenheit. Formula: fahrenheit = celsius * scale + 32; // scale = 1.8. Test cases for f = convertToFahrenheit(input): convertToFahrenheit(0) // result should be 32; convertToFahrenheit(100) // result should be 212; convertToFahrenheit(-10) // result should be ???
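The slide's test cases can be sketched as a small runnable check. The class name and the plain if/throw checks (rather than a test framework) are illustrative assumptions; the expected value for -10, left as "???" on the slide, works out to 14 under the formula F = C * 1.8 + 32.

```java
// Sketch of the slide's test cases; class name and check() helper are assumptions.
public class TemperatureConverter {
    static final double SCALE = 1.8;

    static double convertToFahrenheit(double celsius) {
        return celsius * SCALE + 32;
    }

    public static void main(String[] args) {
        // Expected results follow from F = C * 1.8 + 32
        check(convertToFahrenheit(0), 32.0);    // freezing point of water
        check(convertToFahrenheit(100), 212.0); // boiling point of water
        check(convertToFahrenheit(-10), 14.0);  // negative input: -10 * 1.8 + 32
        System.out.println("all convertToFahrenheit tests passed");
    }

    static void check(double actual, double expected) {
        if (Math.abs(actual - expected) > 1e-9)
            throw new AssertionError("expected " + expected + " but got " + actual);
    }
}
```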

7 Principles of Object-Oriented Testing. Object-oriented systems are built out of two or more interrelated objects. Determining the correctness of O-O systems requires testing the methods that change or communicate the state of an object. Testing methods in an object-oriented system is similar to testing subprograms in process-oriented systems.

8 Testing Terminology. Error: any discrepancy between an actual, measured value and a theoretical, predicted value; error also refers to a human action that results in some sort of failure or fault in the software. Fault: a condition that causes the software to malfunction or fail. Failure: the inability of a piece of software to perform according to its specifications. Failures are caused by faults, but not all faults cause failures; a piece of software has failed if its actual behaviour differs in any way from its expected behaviour.

9 Code Inspections. A formal procedure in which a team of programmers reads through code, explaining what it does. Inspectors play "devil's advocate", trying to find bugs. A time-consuming process! Can be divisive and lead to interpersonal problems. Often used only for safety- or time-critical systems.

10 Walkthroughs. Similar to inspections, except that inspectors "mentally execute" the code using simple test data. Expensive in terms of human resources, and impossible for many systems. Usually used as a discussion aid.

11 Test Plan. A test plan specifies how we will demonstrate that the software is free of faults and behaves according to the requirements specification. A test plan breaks the testing process into specific tests, addressing specific data items and values. Each test has a test specification that documents the purpose of the test.

12 Test Plan. If a test is to be accomplished by a series of smaller tests, the test specification describes the relationship between the smaller and the larger tests. The test specification must describe the conditions that indicate when the test is complete and a means for evaluating the results.

13 Example Test Plan (Deliverable 8.1, p. 267). Test #15 Specification: addPatron() while checking out a resource. Requirement #3. Purpose: create a new Patron object when a new patron is attempting to check out a resource. Test Description: enter check-out screen; press new patron button; … (next slide). Test Messages. Evaluation: print the patron list to ensure uniqueness and that data was entered correctly.

14 Example Test Description of Test Plan. 3. Test Description: Enter check-out screen. Press new patron button. Enter Jill Smith in the Patron Name field. Enter New Boston Rd. in the Address field. Enter … Choose Student from the status choice box. A new Patron ID Number is generated if the name is new.

15 Test Oracle. A test oracle is the set of predicted results for a set of tests, and is used to determine the success of testing. Test oracles are extremely difficult to create and are ideally created from the requirements specification.

16 Test Cases. A test case is a set of inputs to the system. Successfully testing a system hinges on selecting representative test cases. Poorly chosen test cases may fail to illuminate the faults in a system. In most systems exhaustive testing is impossible, so a white box or black box testing strategy is typically selected.

17 Black Box Testing. The tester knows nothing about the internal structure of the code. Test cases are formulated based on the expected output of methods. The tester generates test cases to represent all possible situations in order to ensure that the observed and expected behaviour is the same.

18 Black Box Testing. In black box testing, we ignore the internals of the system and focus on the relationship between inputs and outputs. Exhaustive testing would mean examining the output of the system for every conceivable input; clearly not practical for any real system! Instead, we use equivalence partitioning and boundary analysis to identify characteristic inputs.

19 Equivalence Partitioning. Suppose the system asks for "a number between 100 and 999 inclusive". This gives three equivalence classes of input: less than 100; 100 to 999; greater than 999. We thus test the system against characteristic values from each equivalence class. Example: 50 (invalid), 500 (valid), 1500 (invalid).
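The three equivalence classes above can be exercised with one characteristic value each. This is a minimal sketch; the validator name isValid is an assumption for illustration, not from the course materials.

```java
// Equivalence partitioning for "a number between 100 and 999 inclusive".
// isValid is a hypothetical system-under-test for illustration.
public class EquivalencePartitioning {
    static boolean isValid(int n) {
        return n >= 100 && n <= 999;
    }

    public static void main(String[] args) {
        // One representative from each equivalence class:
        if (isValid(50))    throw new AssertionError("class 'less than 100' should be invalid");
        if (!isValid(500))  throw new AssertionError("class '100 to 999' should be valid");
        if (isValid(1500))  throw new AssertionError("class 'greater than 999' should be invalid");
        System.out.println("one representative per equivalence class behaves as expected");
    }
}
```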

20 Boundary Values. Arises from the fact that most programs fail at input boundaries. Suppose the system asks for "a number between 100 and 999 inclusive". The boundaries are 100 and 999. We therefore test the values 99, 100, 101 (the lower boundary) and 998, 999, 1000 (the upper boundary).
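The six boundary values above can be checked in a single table-driven test. As before, the isValid validator is an illustrative assumption standing in for the real system.

```java
// Boundary-value tests around the 100..999 inclusive range.
// isValid is a hypothetical system-under-test for illustration.
public class BoundaryValues {
    static boolean isValid(int n) {
        return n >= 100 && n <= 999;
    }

    public static void main(String[] args) {
        // Three values around each boundary: just below, on, just above.
        int[] inputs       = {   99,  100,  101,  998,  999, 1000 };
        boolean[] expected = { false, true, true, true, true, false };
        for (int i = 0; i < inputs.length; i++) {
            if (isValid(inputs[i]) != expected[i])
                throw new AssertionError("boundary failure at input " + inputs[i]);
        }
        System.out.println("all six boundary-value tests passed");
    }
}
```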

21 White Box Testing. The tester uses knowledge of the programming constructs to determine the test cases to use. If one or more loops exist in a method, the tester would wish to test the execution of each loop for 0, 1, max, and max + 1 iterations, where max represents the possible maximum number of iterations. Similarly, conditions would be tested for true and false.
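The loop-testing rule above (0, 1, max, and max + 1 iterations) can be sketched against a simple loop. The method sumFirst and the value of MAX are assumptions made for illustration.

```java
// White-box loop testing: drive a loop through 0, 1, max, and max + 1 iterations.
// sumFirst and MAX are hypothetical, for illustration only.
public class LoopTesting {
    static final int MAX = 1000; // assumed maximum iteration count

    // Sums the first n elements of data (the loop body runs up to n times).
    static long sumFirst(int[] data, int n) {
        long sum = 0;
        for (int i = 0; i < n && i < data.length; i++) {
            sum += data[i];
        }
        return sum;
    }

    public static void main(String[] args) {
        int[] data = new int[MAX + 1];
        java.util.Arrays.fill(data, 1); // each element contributes 1 to the sum
        if (sumFirst(data, 0) != 0)           throw new AssertionError("0 iterations");
        if (sumFirst(data, 1) != 1)           throw new AssertionError("1 iteration");
        if (sumFirst(data, MAX) != MAX)       throw new AssertionError("max iterations");
        if (sumFirst(data, MAX + 1) != MAX + 1) throw new AssertionError("max + 1 iterations");
        System.out.println("loop exercised at 0, 1, max, and max + 1 iterations");
    }
}
```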

22 White Box Testing. In white box testing, we use knowledge of the internal structure of systems to guide the development of tests. The ideal: examine every possible run of a system. Not possible in practice! Instead: aim to test every statement at least once! Example:

if (x > 5) {
    System.out.println("hello");
} else {
    System.out.println("bye");
}

There are two possible paths through this code, corresponding to x > 5 and x <= 5.
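Two test cases suffice to execute every statement in the conditional above: one with x > 5 and one with x <= 5. Wrapping the slide's snippet in a method that returns its output (greet, an illustrative assumption) makes the coverage checkable.

```java
// Statement coverage of the slide's if/else: two tests cover both branches.
// greet is a hypothetical wrapper around the slide's snippet.
public class StatementCoverage {
    static String greet(int x) {
        if (x > 5) {
            return "hello"; // path 1: condition true
        } else {
            return "bye";   // path 2: condition false
        }
    }

    public static void main(String[] args) {
        if (!greet(6).equals("hello")) throw new AssertionError("x > 5 path");
        if (!greet(5).equals("bye"))   throw new AssertionError("x <= 5 path");
        System.out.println("both paths through the conditional were executed");
    }
}
```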

23 Unit Testing. The units comprising a system are individually tested. The code is examined for faults in algorithms, data, and syntax. A set of test cases is formulated and input, and the results are evaluated. The module being tested should be reviewed in the context of the requirements specification.

24 Integration Testing. The goal is to ensure that groups of components work together as specified in the requirements document. Four kinds of integration tests exist: structure tests, functional tests, stress tests, and performance tests.

25 System Testing. The goal is to ensure that the system actually does what the customer expects it to do. Testing is carried out by customers mimicking real-world activities. Customers should also intentionally enter erroneous values to determine the system's behaviour in those instances.

26 Testing Steps. Determine what the test is supposed to measure. Decide how to carry out the tests. Develop the test cases. Determine the expected results of each test (the test oracle). Execute the tests. Compare the results to the test oracle.
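The last three steps above can be sketched as a tiny harness: run each test case and compare the actual result against the oracle's predicted result. The function under test (convertToFahrenheit, from the earlier slide) and the oracle entries are assumptions for illustration.

```java
// Sketch of "execute the tests, compare results to the test oracle".
// The oracle maps each input to its predicted result.
import java.util.LinkedHashMap;
import java.util.Map;

public class OracleComparison {
    static double convertToFahrenheit(double c) {
        return c * 1.8 + 32;
    }

    public static void main(String[] args) {
        // Test oracle: input -> predicted result (assumed values).
        Map<Double, Double> oracle = new LinkedHashMap<>();
        oracle.put(0.0, 32.0);
        oracle.put(100.0, 212.0);
        oracle.put(-10.0, 14.0);

        // Execute each test and compare the actual result to the oracle.
        for (Map.Entry<Double, Double> e : oracle.entrySet()) {
            double actual = convertToFahrenheit(e.getKey());
            if (Math.abs(actual - e.getValue()) > 1e-9)
                throw new AssertionError("input " + e.getKey()
                        + ": oracle predicted " + e.getValue() + ", got " + actual);
        }
        System.out.println("all results match the test oracle");
    }
}
```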

27 Analysis of Test Results. The test analysis report documents testing and provides information that allows a failure to be duplicated, found, and fixed. The test analysis report references the relevant sections of the requirements specification, the implementation plan, and the test plan, and connects these to each test.

28 Special Issues for Testing Object-Oriented Systems. Because object interaction is essential to O-O systems, integration testing must be more extensive. Inheritance makes testing more difficult by requiring more contexts (all subclasses) for testing an inherited module.

29 Configuration Management. Software systems often have multiple versions or releases. Configuration management is the process of controlling development that produces multiple software systems. An evolutionary development approach often results in multiple versions of the system. Regression testing is the process of retesting elements of the system that were tested in a previous version or release.

30 Alpha/Beta Testing. In-house testing is usually called alpha testing. For software products there is usually an additional stage, called beta testing, which involves distributing tested code to "beta test sites" (usually prospective customers) for evaluation and use. It typically involves a formal procedure for reporting bugs. Delivering buggy beta test code is embarrassing!

