Strategic Approach to Testing

Course Notes Set 10: Testing Strategies
Computer Science and Software Engineering, Auburn University

Strategic Approach to Testing
Testing begins at the unit level and works toward integrating the entire system. Various techniques for testing are appropriate at different times. Testing is conducted by the developer and by independent test groups.

Testing Strategies
Each level of testing corresponds to a level of specification and design: Requirements Specification pairs with System Testing, Preliminary Design with Integration Testing, and Detailed Design with Unit Testing, with Coding at the base. [Adapted from Software Testing: A Craftsman's Approach, by Jorgensen, CRC Press, 1995]

A Testing Strategy
Development proceeds from system engineering through requirements, design, and code; testing then works back outward through unit test, integration test, validation test, and system test. [Adapted from Software Engineering, 4th ed., by Pressman, McGraw-Hill, 1997]

Integration Testing
After individual components have passed unit testing, they are merged to form subsystems and, ultimately, one complete system. Integration testing is the process of exercising this "hierarchically accumulating" system.

Integration Testing
We will (normally) view the system as a hierarchy of components, represented as a call graph, structure chart, or design tree. Integration testing can begin at the top of this hierarchy and work downward, or it can begin at the bottom and work upward. It can also employ a combination of these two approaches.

Example Component Hierarchy
Component A is at the top and calls B, C, and D; B calls E and F; D calls G. This hierarchy is used in the integration examples that follow. [Figure and associated examples adapted from Pfleeger 2001]

Integration Testing Strategies
Big-bang integration
Bottom-up integration
Top-down integration
Sandwich integration

Big-bang Integration
All components are tested in isolation. Then the entire system is integrated in one step and testing occurs at the top level. Big-bang integration is often used (perhaps wrongly), particularly for small systems, but it does not scale, and it makes faults difficult or impossible to isolate.

Big-bang Integration
(Figure: each component A through G is tested in isolation (Test A, Test B, ..., Test G); then everything is combined at once in a single Test A,B,C,D,E,F,G.)

Bottom-up Integration
Test each unit at the bottom of the hierarchy first. Then test the components that call the previously tested ones (one layer up in the hierarchy). Repeat until all components have been tested. Component drivers are used to do the testing.
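For illustration, a driver is simply a small program that invokes the lower-level component directly, standing in for its not-yet-integrated caller. The sketch below is a minimal, hypothetical Python example; the module name component_e and the function summarize are assumptions, not part of the course example.

    # Hypothetical driver for a leaf-level component such as E in the example
    # hierarchy; "component_e" and "summarize" are assumed names.
    import unittest
    from component_e import summarize

    class ComponentEDriver(unittest.TestCase):
        """Driver: exercises E directly, playing the role of its caller B."""

        def test_nominal_input(self):
            self.assertEqual(summarize([1, 2, 3]), 6)

        def test_empty_input(self):
            self.assertEqual(summarize([]), 0)

    if __name__ == "__main__":
        unittest.main()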

Bottom-up Integration
(Figure: Test E and Test F feed into Test B,E,F; Test G feeds into Test D,G; Test C stands alone; all of these feed into the final Test A,B,C,D,E,F,G.)

Bottom-up Integration
The manner in which the software was designed will influence the appropriateness of bottom-up integration. While it is normally appropriate for object-oriented systems, bottom-up integration has disadvantages for functionally decomposed systems:
Top-level components are usually the most important, but they are the last to be tested. The upper levels are more general while the lower levels are more specific, so testing from the bottom up can delay the discovery of major faults.
Top-level faults are more likely to reflect design errors, which should obviously be discovered as soon as possible and are likely to have wide-ranging consequences.
In timing-based systems, the timing control is usually in the top-level components.

Top-down Integration
The top-level component is tested in isolation. Then all the components called by the one just tested are combined and tested as a subsystem. This is repeated until all components have been integrated and tested. Stubs are used to fill in for components that are called but are not yet included in the testing.
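To make the idea of a stub concrete, here is a minimal, hypothetical Python sketch: while A, B, C, and D are being exercised, the not-yet-integrated E is replaced by a canned implementation behind the same interface. The module name billing and its functions are assumptions, not part of the course example.

    # Hypothetical stub for component E during top-down integration.
    from unittest.mock import patch
    import billing   # assumed module containing the integrated A, B, C, D

    def stub_component_e(order_id):
        # No real computation: return a fixed, known result so that the
        # caller (B) can be driven through its control flow.
        return {"order_id": order_id, "status": "approved"}

    # Substitute the stub wherever the subsystem would call the real E.
    with patch.object(billing, "component_e", new=stub_component_e):
        result = billing.process_order("order-42")
        assert result["status"] == "approved"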

Top-down Integration
(Figure: Test A comes first; then Test A,B,C,D combines A with its immediate subordinates; then Test A,B,C,D,E,F,G exercises the full system.)

Top-down Integration
Again, the design of the system influences the appropriateness of the integration strategy. Top-down integration is obviously well suited to systems that have been created through top-down design. When major system functions are localized to components, top-down integration allows the testing to isolate one function at a time and follow its control flow from the highest levels of abstraction to the lowest. Also, design problems show up earlier rather than later.

Top-down Integration
A major disadvantage is the need for stubs. Writing stubs can be complex, since they must function under the same conditions as their real counterparts, and the correctness of the stubs will influence the validity of the test. A large number of stubs may be required, particularly when there are many general-purpose components in the lowest layer. Another criticism is the lack of individual testing of interior components. To address this concern, a modified top-down integration strategy can be used: instead of incorporating an entire layer at once, each component in a given layer is tested individually before the integration of that layer occurs. This introduces another problem, however: now both stubs and component drivers are needed.

Modified Top-down Integration
(Figure: each component is first tested individually (Test A, Test B, ..., Test G); the layers are then integrated as before, with Test A,B,C,D followed by Test A,B,C,D,E,F,G.)

Sandwich Integration
Top-down and bottom-up can be combined into what Myers calls "sandwich integration." The system is viewed as being composed of three major levels: the target layer in the middle, the layers above the target, and the layers below the target. A top-down approach is used for the top level while a bottom-up approach is used for the bottom level. Testing converges on the target level.

Sandwich Integration
(Figure: for the example hierarchy, the lower levels are integrated bottom-up, with Test E and Test F leading to Test B,E,F and Test G leading to Test D,G, while the top level is tested top-down; testing converges on the full system.)

Measures for Integration Testing
Recall that v(G) is an upper bound on the number of independent (basis) paths in a source module. Similarly, we would like to limit the number of subtrees that must be exercised in a structure chart or call graph.

Subtrees in Architecture vs. Paths in Units
A call graph (or equivalent architectural representation) corresponds to a design tree, just as the source code for a unit corresponds to a flowgraph. Executing the design tree means entering it at the root, executing modules in the subtrees, and eventually exiting at the root. Just as a program can have a finite (if it halts) but overwhelming number of paths, a design tree can have an inordinately large number of subtrees as a result of selection and iteration. We need a measure for design trees that is the analog of the basis set of independent paths for units.

Design Tree: Complexity of 1
(Figure: an example design tree of numbered modules whose complexity is 1.) [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Design Tree: Complexity > 1
(Figure: an example design tree of numbered modules whose complexity is greater than 1.) [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Design Tree Notation
(Figure: notation for design trees. For a module M calling subordinates A and B, each call symbol is shown with the corresponding possible call paths, for example both A and B, A only or B only, or neither, A, B, or both.)

Subtrees vs. Paths
(Figure: module M's flowgraph shown alongside the corresponding design tree, illustrating how M's calls to A and B appear in each representation.) [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Flowgraph Information
Flowgraph symbols: a black dot is a call to a subordinate module; a white dot is a sequential statement (or a collection of sequential statements).
Rules for reduction:
1. Sequential black dot: may not be reduced.
2. Sequential white dot: a sequential node may be reduced to a single edge.
3. Repetitive white dot: a logical repetition without a black dot can be reduced to a single node.
4. Conditional white dot: a logical decision with two paths and no black dot may be reduced to one path.

Reduction Rules
(Figure: the four reduction rules shown graphically: 1. sequential black dot, 2. sequential white dot, 3. repetitive white dot, 4. conditional or looping white-dot decisions.) [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Example Reduction
(Figure: successive reductions of a flowgraph with nodes 1 through 9.) From step 1 to step 2, nodes 5 and 7 are eliminated using Rule 2. From step 2 to step 3, an edge from node 2 to node 6 is removed using Rule 4. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Example Reduction
From step 3 to step 4, node 2 is eliminated using Rule 2. From step 4 to step 5, node 6 is eliminated using Rule 2. From step 5 to step 6, the edge from node 1 to node 8 is removed using Rule 4. From step 6 to step 7, node 8 is eliminated using Rule 2. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Architectural Design Measures
Number of subtrees: the set of all subtrees is not particularly useful, but a basis set would be.
Module design complexity, iv(G): the cyclomatic complexity of the reduced flowgraph of the module.
Design complexity, S0: for a module M, S0(M) = sum over j in D of iv(Gj), where D is the set of descendants of M unioned with M itself. Note: if a module is called several times, it is added only once.

Design Complexity Example
(Figure: a module hierarchy annotated with each module's iv and S0 values.) [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Design Complexity Example
(Figure: a hierarchy containing modules A through E.) The annotations show A with iv=2 and S0=6, C with iv=2 and S0=4, and B, D, and E each with iv=1 and S0=1, so that S0(A) = iv(A) + iv(C) + iv(D) + iv(E) = 2 + 2 + 1 + 1 = 6. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Architectural Design Measures
Integration complexity, S1: a measure of the number of integration tests required. S1 = S0 - n + 1, where S0 is the design complexity and n is the number of modules.
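To tie the two measures together, here is a small Python sketch (not from the slides; the call graph and iv values below are assumptions for illustration) that computes S0 and S1 from each module's design complexity iv and the call graph:

    # Hypothetical helper: S0 is the sum of iv over a module and its
    # descendants (each counted once), and S1 = S0 - n + 1.
    def design_complexity(root, calls, iv):
        reached = set()
        stack = [root]
        while stack:
            m = stack.pop()
            if m not in reached:
                reached.add(m)
                stack.extend(calls.get(m, []))
        s0 = sum(iv[m] for m in reached)
        s1 = s0 - len(reached) + 1
        return s0, s1

    # Assumed iv values for the earlier A-G hierarchy (A calls B, C, D;
    # B calls E, F; D calls G).
    calls = {"A": ["B", "C", "D"], "B": ["E", "F"], "D": ["G"]}
    iv = {"A": 3, "B": 2, "C": 1, "D": 2, "E": 1, "F": 1, "G": 1}
    print(design_complexity("A", calls, iv))   # (11, 5): S0 = 11, S1 = 5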

Integration Complexity
(Figure: two structurally identical hierarchies.) M calls A and B, and A calls C and D; N calls S and T, and S calls U and V. With iv(M) = iv(N) = 3, iv(A) = iv(S) = 3, and iv = 1 for the remaining modules, each hierarchy has S0 = 9, n = 5, and S1 = 9 - 5 + 1 = 5. [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Integrated Properties of M and N
(Figure: the two hierarchies joined at an integration point where B calls N.) B's design complexity becomes S0(B) = iv(B) + S0(N) = 1 + 9 = 10, while the iv values of the individual modules are unchanged.

Integration Testing
Module integration testing: the scope is a module and its immediate subordinates.
Testing steps:
1. Apply the reduction rules to the module. The cyclomatic complexity of the reduced subalgorithm is the module design complexity of the original algorithm; this determines the number of required tests.
2. Apply the baseline method to the subalgorithm to obtain the design subtrees and the module integration tests.

Integration Testing
Design integration testing: derived from integration complexity, which quantifies a basis set of integration tests.
Testing steps:
1. Calculate iv and S0 for each module.
2. Calculate S1 for the top module (the number of basis subtrees required).
3. Build a path matrix (S1 x n) to establish the basis set of subtrees.
4. Identify and label each predicate in the design tree, and place each label above the column of the path matrix corresponding to the module it influences.
5. Apply the baseline method to the design to complete the matrix (1: the module is executed; 0: the module is not executed).
6. Identify the subtrees in the matrix and the conditions that derive them.
7. Build corresponding test cases for each subtree.

Design Integration Example
(Figure: a design tree with top module M calling A and B, which in turn call leaf modules C, D, and E.) The iv and S0 annotations give a total S0 = 8 over n = 6 modules, so S1 = S0 - n + 1 = 8 - 6 + 1 = 3. Predicate P1 is the condition W = X (in M) and predicate P2 is the condition Y = Z (in B). [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Integration Path Test Matrix
(Figure: the S1 x n path matrix of basis subtrees for the example, completed using the baseline method.) [Adapted from McCabe and Butler, "Design Complexity Measurement and Testing," CACM 32(12)]

Example
What is an appropriate number of integration test cases, and what are those cases? (S1, S2, and S3 stand for basic statements; CM, CA, CB, CB2, and CC stand for conditions.)

    with ModuleA, ModuleB; use ModuleA, ModuleB;
    procedure Main is
    begin
       S1;
       while CM loop
          ProcA;
          ProcB;
       end loop;
    end Main;

    with ModuleC; use ModuleC;
    package body ModuleA is
       procedure ProcA is
       begin
          S1;
          if CA then
             null;
          else
             ProcC;
          end if;
       end ProcA;
    end ModuleA;

    with ModuleC; use ModuleC;
    package body ModuleB is
       procedure ProcB is
       begin
          S1;
          if CB then
             ProcC;
          else
             S2;
          end if;
          if CB2 then
             S3;
          end if;
       end ProcB;
    end ModuleB;

    package body ModuleC is
       procedure ProcC is
       begin
          S1;
          if CC then
             S2;
          else
             S3;
          end if;
       end ProcC;
    end ModuleC;

Example
(Figure: the structure chart for the example: Main calls ProcA and ProcB, and both ProcA and ProcB call ProcC.) [Adapted from Watson and McCabe, "Structured Testing: A Testing Methodology Using the Cyclomatic Complexity Metric," NIST 500-235, 1996]
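One way to work the answer, under the reading of the code above: the CM loop in Main controls calls, so iv(Main) = 2; CA controls ProcA's call to ProcC, so iv(ProcA) = 2; in ProcB, CB controls a call while CB2 does not and reduces away, so iv(ProcB) = 2; ProcC makes no calls, so iv(ProcC) = 1. That gives S0 = 2 + 2 + 2 + 1 = 7 and, with n = 4 modules, S1 = 7 - 4 + 1 = 4 integration tests: roughly, a baseline subtree plus one variation for each of the call-controlling decisions CM, CA, and CB.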

System Testing
The primary objective of unit and integration testing is to ensure that the design has been implemented properly, that is, that the programmers wrote code to do what the designers intended (verification). The primary objective of system testing is very different: we want to ensure that the system does what the customer wants it to do (validation). [Some notes adapted from Pfleeger 2001]

System Testing
Steps in system testing: function testing, performance testing, acceptance testing, and installation testing.
(Figure: integrated modules pass through the function test to yield a functioning system, through the performance test to yield verified and validated software, through the acceptance test to yield an accepted system, and through the installation test to yield the delivered system.)

Function Testing
Checks that the integrated system performs its functions as specified in the requirements. Common functional testing techniques (cause-effect graphs, boundary value analysis, etc.) are used here, viewing the entire system as a black box.
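As a small illustration of black-box boundary value analysis at this level, the sketch below exercises the system only through its external interface; the module banking_system, the function withdraw, and the valid range 1..500 are assumptions made up for this example.

    # Hypothetical function test using boundary value analysis.
    import unittest
    from banking_system import withdraw   # assumed system-level entry point

    class WithdrawFunctionTest(unittest.TestCase):
        def test_withdrawal_boundaries(self):
            self.assertFalse(withdraw(0))     # just below the valid range
            self.assertTrue(withdraw(1))      # lower boundary
            self.assertTrue(withdraw(500))    # upper boundary
            self.assertFalse(withdraw(501))   # just above the valid range

    if __name__ == "__main__":
        unittest.main()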

Performance Testing
Compares the behavior of the functionally verified system to its nonfunctional requirements. System performance is measured against the performance objectives set by the customer and expressed as nonfunctional requirements; this may involve hardware engineers. Since this stage and the previous one together constitute a complete review of the requirements, the software is now considered validated.

Types of Performance Tests
Stress tests
Configuration tests
Legacy/regression tests
Security tests
Timing tests
Environmental tests
Quality tests
Recovery tests
Documentation tests
Usability tests

Acceptance Testing
The customer now leads testing and defines the cases to test. The purpose of acceptance testing is to allow the customer and users to determine whether the system that was built actually meets their needs and expectations. Often, the customer representative involved in requirements gathering will specify the acceptance tests.

Types of Acceptance Tests
Benchmark tests: a subset of users operates the system under a set of predefined test cases.
Pilot tests: a subset of users operates the system in normal or "everyday" situations; called alpha testing if done at the developer's site and beta testing if done at the customer's site.
Parallel tests: the new system operates in parallel with the previous version, and users gradually transition to the new system.

Installation Testing
The last stage of testing. It may not be needed if acceptance testing was performed at the customer's site. The system is installed in the environment in which it will be used, and we verify that it works in the field as it did when tested previously.