Chapter 8 Testing the Programs Shari L. Pfleeger Joann M. Atlee 4th Edition

Contents
8.1 Software Faults and Failures
8.2 Testing Issues
8.3 Unit Testing
8.4 Integration Testing
8.5 Testing Object-Oriented Systems
8.6 Test Planning
8.7 Automated Testing Tools
8.8 When to Stop Testing
8.9 Information Systems Example
8.10 Real-Time Example
8.11 What this Chapter Means for You

Chapter 8 Objectives
- Types of faults and how to classify them
- The purpose of testing
- Unit testing
- Integration testing strategies
- Test planning
- When to stop testing

8.1 Software Faults and Failures: Why Does Software Fail?
- A wrong or missing requirement: not what the customer wants or needs
- A requirement that is impossible to implement, given the prescribed hardware and software
- A faulty system design: e.g., restrictions imposed by the database design
- A faulty program design
- Wrong program code

8.1 Software Faults and Failures: Objective of Testing
- The objective of testing is to discover faults
- A test is successful only when a fault is discovered
- Fault identification is the process of determining what fault(s) caused a failure
- Fault correction is the process of changing the system so that the faults are removed

8.1 Software Faults and Failures: Types of Faults
- Algorithmic or syntax faults: a component's logic does not produce the proper output
- Computation and precision faults: a formula's implementation is wrong
- Documentation faults: the documentation does not match what the program does
- Stress or overload faults: data structures are filled past their specified capacity
- Capacity or boundary faults: the system's performance becomes unacceptable as activity reaches its specified limit

8.1 Software Faults and Failures: Types of Faults (continued)
- Timing or coordination faults: code executes in the wrong order or at the wrong time
- Performance faults: the system does not perform at the speed prescribed
- Recovery faults: the system does not behave as required when a failure occurs
- Hardware and system software faults: supplied hardware and system software do not work according to the documented conditions and procedures
- Standards and procedures faults: code does not follow organizational or procedural standards

8.1 Software Faults and Failures: Typical Algorithmic Faults
An algorithmic fault occurs when a component's algorithm or logic does not produce the proper output:
- Branching too soon
- Branching too late
- Testing for the wrong condition
- Forgetting to initialize variables or set loop invariants
- Forgetting to test for a particular condition
- Comparing variables of inappropriate data types
- Syntax faults
A sketch of three of these faults follows this list.
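
To make these concrete, here is a minimal Python sketch (hypothetical code, not from the book) showing a computation fault, a wrong-condition fault, and a missing initialization:

```python
def mean_buggy(values):
    # Computation fault: floor division truncates, so mean_buggy([1, 2]) == 1.
    return sum(values) // len(values)      # should be: sum(values) / len(values)

def find_first_negative_buggy(values):
    # Fault: testing for the wrong condition (> instead of <).
    for i, v in enumerate(values):
        if v > 0:                          # should be: v < 0
            return i
    return -1

def running_total_buggy(values):
    # Fault: forgetting to initialize a variable before the loop;
    # calling this raises UnboundLocalError because total is never set.
    for v in values:
        total = total + v                  # should be preceded by: total = 0
    return total
```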

8.1 Software Faults and Failures: Orthogonal Defect Classification
- Historical fault information reveals trends
- Trends can lead to changes in designs or requirements, reducing the number of faults injected
- How: categorize faults using IBM's orthogonal defect classification
- Track faults of omission as well as faults of commission
- The classification scheme must be product- and organization-independent
- The scheme must be applicable to all development stages

8.1 Software Faults and Failures: Orthogonal Defect Classification Fault Types
- Function: fault that affects capability, end-user interface, product interface with the hardware architecture, or global data structure
- Interface: fault in interacting with other components or drivers via calls, macros, control blocks, or parameter lists
- Checking: fault in program logic that fails to validate data and values properly before they are used
- Assignment: fault in data structure or code block initialization
- Timing/serialization: fault in the timing of shared and real-time resources
- Build/package/merge: fault that occurs because of problems in repositories, management of changes, or version control
- Documentation: fault that affects publications and maintenance notes
- Algorithm: fault involving the efficiency or correctness of an algorithm or data structure, but not the design

8.1 Software Faults and Failures: Sidebar 8.1 Hewlett-Packard's Fault Classification

8.1 Software Faults and Failures: Sidebar 8.1 Faults for One Hewlett-Packard Division

8.2 Testing Issues: Test Organization
- Module testing, component testing, or unit testing
- Integration testing
- Function testing
- Performance testing
- Acceptance testing
- Installation testing
- System testing

8.2 Testing Issues Testing Organization Illustrated

8.2 Testing Issues: Attitude Toward Testing
The problem:
- In academia, students are graded on the correctness and operability of their programs
- Test cases are generated to show correctness
- Critiques of a program are taken as critiques of the programmer's ability
The solution:
- Egoless programming: programs are viewed as components of a larger system, not as the property of those who wrote them
- The development team focuses on correcting faults, not placing blame

8.2 Testing Issues: Who Performs the Test?
An independent test team:
- Avoids conflicts over personal responsibility for faults
- Improves objectivity about the design and implementation
- Allows testing and coding to proceed concurrently

8.2 Testing Issues: Views of the Test Objects
- Closed box or black box: tests the functionality of the test objects; no view of code or data structures, only inputs and outputs
- Clear box or white box: tests the structure of the test objects; an internal view of code and data structures
A black-box sketch follows this list.
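
As a sketch of the closed-box view, this hypothetical Python unittest (using the classic triangle-classification example) exercises a component purely through its inputs and outputs; a clear-box tester would instead derive cases from the function's branch structure:

```python
import unittest

def classify_triangle(a, b, c):
    # Component under test; its internals are invisible to a black-box tester.
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class BlackBoxTriangleTest(unittest.TestCase):
    # Black-box view: choose inputs and check outputs against the
    # specification, without looking at the branches inside the function.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 4), "isosceles")

    def test_scalene(self):
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")

if __name__ == "__main__":
    unittest.main()
```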

8.2 Testing Issues Clear Box Example of logic structure

8.2 Testing Issues: Factors Affecting the Choice of Test Philosophy
Choosing between white-box and black-box testing depends on:
- The number of possible logical paths
- The nature of the input data
- The amount of computation involved
- The complexity of algorithms
You don't have to choose! A combination of the two can be the right approach.

8.3 Unit Testing: Code Review
Code walkthrough:
- Present the code and documentation to a review team
- The team comments on correctness
- The focus is on the code, not the coder
- Results do not influence the developer's performance evaluation
Code inspection:
- Check the code and documentation against a prepared list of concerns
- Review the correctness and efficiency of algorithms
- Check comments for completeness
- A more formalized process than a walkthrough

8.3 Unit Testing: Typical Inspection Preparation and Meeting Times

Development artifact       Preparation time             Meeting time
Requirements document      25 pages per hour            12 pages per hour
Functional specification   45 pages per hour            15 pages per hour
Logic specification        50 pages per hour            20 pages per hour
Source code                150 lines of code per hour   75 lines of code per hour
User documents             35 pages per hour

8.3 Unit Testing: Fault Discovery Rate

Discovery activity     Faults found per thousand lines of code
Requirements review    2.5
Design review          5.0
Code inspection        10.0
Integration test       3.0
Acceptance test        2.0

8.3 Unit Testing: Sidebar 8.3 The Best Team Size for Inspections
- The preparation rate, not the team size, determines inspection effectiveness
- The team's effectiveness and efficiency depend on its familiarity with the product

8.3 Unit Testing: Proving Code Correct
Formal proof techniques:
- Write assertions that describe input/output conditions
- Draw a flow diagram depicting the logical flow
- Generate theorems to be proven: locate the loops and define if-then assertions for each, then identify all paths from the initial assertion A1 to the final assertion An
- Prove each path, so that each input assertion implies the corresponding output assertion
- Prove that the program terminates
Caveats: this proves that the design is correct, not the implementation, and it is expensive. A small assertion-based sketch follows.
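
A minimal sketch of the assertion idea in Python, using a hypothetical division routine: runtime assert statements state the input condition, a loop assertion, and the output condition. A real proof must show the implications hold for all inputs, not just the ones actually executed:

```python
def integer_divide(dividend, divisor):
    # Input assertion A1: dividend >= 0 and divisor > 0.
    assert dividend >= 0 and divisor > 0

    quotient, remainder = 0, dividend
    while remainder >= divisor:
        # Loop assertion: dividend == quotient * divisor + remainder.
        assert dividend == quotient * divisor + remainder
        quotient += 1
        remainder -= divisor
        # Termination: remainder strictly decreases by divisor > 0.

    # Output assertion An: the quotient/remainder decomposition holds
    # and the remainder is in range.
    assert dividend == quotient * divisor + remainder
    assert 0 <= remainder < divisor
    return quotient, remainder
```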

8.3 Unit Testing: Proving Code Correct (continued)
Symbolic execution:
- Execution using symbols rather than data values
- Execute each line, tracking the resulting state
- The program is viewed as a series of state changes
Automated theorem-proving:
- Tools that attempt to prove the software correct, given as input:
- The input data and conditions
- The output data and conditions
- The lines of code for the component to be tested

8.3 Unit Testing Proving Code Correct: An Illustration

8.3 Unit Testing: Testing versus Proving
Proving:
- A hypothetical environment
- Code is viewed in terms of classes of data and conditions
- A proof may not involve execution
Testing:
- The actual operating environment
- Demonstrates actual use of the program
- A series of experiments

8.3 Unit Testing: Steps in Choosing Test Cases
- Determine test objectives: e.g., coverage criteria
- Select test cases: inputs that demonstrate the behavior of the code
- Define a test: detail the execution instructions

8.3 Unit Testing: Test Thoroughness
- Statement testing
- Branch testing
- Path testing
- Definition-use testing
- All-uses testing
- All-predicate-uses/some-computational-uses testing
- All-computational-uses/some-predicate-uses testing
A small coverage example follows this list.
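
To illustrate the weakest criteria, consider this hypothetical Python function; the comments show how statement testing and branch testing differ in the number of cases they demand:

```python
def larger(x, y):
    m = x
    if y > x:
        m = y
    return m

# Statement testing: one case can execute every statement:
#   larger(1, 2)  -> the if-branch is taken, so all four lines run.
# Branch testing: every decision must take each outcome at least once,
# so a second case is required:
#   larger(2, 1)  -> y > x is False.
# Path testing: this function has only two paths, matching the two branch
# cases; with several decisions, paths multiply much faster than branches.
```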

8.3 Unit Testing Relative Strengths of Test Strategies

8.3 Unit Testing: Comparing Techniques
Table: fault discovery percentages by fault origin (requirements, design, coding, and documentation faults) for five discovery techniques: prototyping, requirements review, design review, code inspection, and unit testing.

8.3 Unit Testing: Comparing Techniques (continued)
Table: effectiveness of fault-discovery techniques (rated poor, fair, good, excellent, or not applicable) at finding requirements, design, code, and documentation faults, for reviews, prototypes, testing, and correctness proofs.

8.3 Unit Testing: Sidebar 8.4 Fault Discovery Efficiency at Contel IPC
- 17.3% during inspections of the system design
- 19.1% during component design inspection
- 15.1% during code inspection
- 29.4% during integration testing
- 16.6% during system and regression testing
- 0.1% after the system was placed in the field

8.4 Integration Testing
- Bottom-up
- Top-down
- Big-bang
- Sandwich testing
- Modified top-down
- Modified sandwich

8.4 Integration Testing: Terminology
- Component driver: a routine that calls a particular component and passes a test case to it
- Stub: a special-purpose program that simulates the activity of a missing component
The sketch after this slide shows both in miniature.
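
A miniature, hypothetical Python illustration of both terms: the driver feeds a test case to a finished component, while the stub stands in for a rate-table component that has not been written yet:

```python
def compute_tax(income, rate_lookup):
    # Finished component under test; it depends on a rate-table
    # component that does not exist yet.
    return income * rate_lookup(income)

def rate_table_stub(income):
    # Stub: simulates the missing rate-table component with a canned answer.
    return 0.25

def driver():
    # Driver: calls the component and passes it a test case.
    result = compute_tax(1000, rate_table_stub)
    assert result == 250, f"expected 250, got {result}"
    print("compute_tax test case passed")

if __name__ == "__main__":
    driver()
```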

8.4 Integration Testing View of a System System viewed as a hierarchy of components

8.4 Integration Testing Bottom-Up Integration Example The sequence of tests and their dependencies

8.4 Integration Testing Top-Down Integration Example Only A is tested by itself

8.4 Integration Testing Modified Top-Down Integration Example Each level's components are individually tested before being merged

8.4 Integration Testing Big-Bang Integration Example Requires both stubs and drivers to test the independent components

8.4 Integration Testing Sandwich Integration Example Views the system as three layers

8.4 Integration Testing Modified Sandwich Integration Example Allows upper-level components to be tested before merging them with others

8.4 Integration Testing: Comparison of Integration Strategies

                                      Bottom-up  Top-down  Modified top-down  Big-bang  Sandwich  Modified sandwich
Integration                           Early      Early     Early              Late      Early     Early
Time to basic working program         Late       Early     Early              Late      Early     Early
Component drivers needed              Yes        No        Yes                Yes       Yes       Yes
Stubs needed                          No         Yes       Yes                Yes       Yes       Yes
Work parallelism at beginning         Medium     Low       Medium             High      Medium    High
Ability to test particular paths      Easy       Hard      Easy               Easy      Medium    Easy
Ability to plan and control sequence  Easy       Hard      Hard               Easy      Hard      Hard

8.4 Integration Testing Sidebar 8.5 Builds at Microsoft The feature teams synchronize their work by building the product and finding and fixing faults on a daily basis

8.5 Testing Object-Oriented Systems: Testing the Code
Questions at the beginning of testing OO systems:
- Is there a path that generates a unique result?
- Is there a way to select a unique result?
- Are there useful cases that are not handled?
Check objects for excesses and deficiencies:
- Missing objects
- Unnecessary classes
- Missing or unnecessary associations
- Incorrect placement of associations or attributes

8.5 Testing Object-Oriented Systems: Testing the Code (continued)
Objects might be missing if:
- You find asymmetric associations or generalizations
- You find disparate attributes and operations on a class
- One class is playing two or more roles
- An operation has no good target class
- You find two associations with the same name and purpose
A class might be unnecessary if:
- It has no attributes, operations, or associations
An association might be unnecessary if:
- It carries redundant information, or no operation uses the path

8.5 Testing Object-Oriented Systems Easier and Harder Parts of Testing OO Systems OO unit testing is less difficult, but integration testing is more extensive

8.5 Testing Object-Oriented Systems Differences Between OO and Traditional Testing The farther out the gray line, the greater the difference

8.6 Test Planning
- Establish test objectives
- Design test cases
- Write test cases
- Test the test cases
- Execute the tests
- Evaluate the test results

8.6 Test Planning: Purpose of the Plan
The test plan explains:
- who does the testing
- why the tests are performed
- how the tests are conducted
- when the tests are scheduled
The test plan is used to demonstrate:
- that the software works correctly and is free of faults
- that the software performs its functions as specified
The test plan is a guide to the entire testing activity.

8.6 Test Planning: Contents of the Plan
- What the test objectives are
- How the tests will be run
- The criteria for determining that testing is complete
- A detailed list of test cases
- How test data are generated
- How output data will be captured
Together these give a complete picture of how and why testing will be performed.

8.7 Automated Testing Tools: Code Analysis
Static analysis:
- code analyzer
- structure checker
- data analyzer
- sequence checker
(Figure: output from static analysis)

8.7 Automated Testing Tools (continued)
Dynamic analysis:
- program monitors: watch and report a program's behavior as it runs
Test execution tools:
- capture and replay
- stubs and drivers
- automated testing environments
- test case generators
A toy program monitor follows this slide.
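
As a toy illustration of a program monitor, Python's built-in sys.settrace hook can watch and report a running program's function calls; real dynamic-analysis tools are far more elaborate, but the principle is the same:

```python
import sys

def monitor(frame, event, arg):
    # A tiny "program monitor": report each function call as it happens.
    if event == "call":
        print(f"calling {frame.f_code.co_name} at line {frame.f_lineno}")
    return monitor

def helper(n):
    return n * 2

def main():
    return helper(21)

sys.settrace(monitor)   # switch the monitor on
main()
sys.settrace(None)      # switch it off again
```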

8.8 When to Stop Testing
- The more faults found during testing, the more are likely to remain
- Figure: the probability of finding faults during development
- This makes it hard to tell when we are done testing

8.8 When to Stop Testing: Stopping Approaches
Fault seeding:
- Intentionally insert a known number of faults
- The proportion of seeded faults discovered estimates the proportion of nonseeded faults discovered:

    detected seeded faults / total seeded faults
        = detected nonseeded faults / total nonseeded faults

- Improved by basing the number and kind of seeded faults on historical data
- Improved by using two independent test groups and comparing their results
- A small sketch of the estimate follows this slide
Coverage criteria:
- Determine how many statement, path, or branch tests are required
- Use the total to track the completeness of the tests
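
A one-function sketch of the fault-seeding estimate (hypothetical helper name; it inherits the equal-detectability assumption noted on the next slide):

```python
def estimate_nonseeded_faults(detected_seeded, total_seeded, detected_nonseeded):
    # If seeded and indigenous faults are equally easy to find, the two
    # detection ratios should match, so:
    #   total_nonseeded = detected_nonseeded * total_seeded / detected_seeded
    return detected_nonseeded * total_seeded / detected_seeded

# Example: finding 10 of 25 seeded faults alongside 12 nonseeded faults
# suggests about 30 nonseeded faults in total, i.e., 18 still undetected.
print(estimate_nonseeded_faults(10, 25, 12))   # 30.0
```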

8.8 When to Stop Testing: Stopping Approaches (continued)
Confidence in the software:
- Use fault estimates to compute confidence, e.g., "95% confident that the software is fault-free"
- Seed S faults and claim the system has N actual faults; test until all S seeded faults are found, and let n be the number of actual (nonseeded) faults found by then:

    C = 1                   if n > N
    C = S / (S + N + 1)     if n <= N

- Assumes all faults have an equal probability of being detected
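
The same formula as a small Python sketch, using the plus sign in the denominator (required for C to stay at or below 1):

```python
def seeding_confidence(S, N, n):
    # Confidence that the system contains no more than N nonseeded faults,
    # given that all S seeded faults were found and n nonseeded faults
    # turned up along the way.
    if n > N:
        return 1.0
    return S / (S + N + 1)

# Example: to claim the software is fault-free (N = 0) with 95% confidence,
# seed 19 faults and find all of them without finding any others: 19/20.
print(seeding_confidence(19, 0, 0))   # 0.95
```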

8.8 When to Stop Testing: Identifying Fault-Prone Code
- Track the number of faults found in each component during development
- Collect measurements (e.g., size, number of decisions) about each component
- Classification trees: a statistical technique that sorts through large arrays of measurement information and builds a decision tree identifying the best predictors
- The tree helps in deciding which components are likely to have a large number of faults
A toy sketch follows this slide.
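
A toy sketch of the classification-tree idea using scikit-learn; the data and the two metrics here are invented for illustration, and real studies use many components and richer measurements:

```python
from sklearn.tree import DecisionTreeClassifier

# One row of measurements per component: [lines of code, number of decisions]
measurements = [[100, 5], [2500, 90], [300, 12], [4000, 150], [150, 7], [3200, 110]]
fault_prone  = [0, 1, 0, 1, 0, 1]    # 1 = historically had many faults

tree = DecisionTreeClassifier(max_depth=2).fit(measurements, fault_prone)

# Ask whether a new 2800-line component with 95 decisions looks fault-prone.
print(tree.predict([[2800, 95]]))    # expected: [1]
```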

8.8 When to Stop Testing An Example of a Classification Tree

8.9 Information Systems Example: Piccadilly System
- Suppose the test strategy exercises every path, with test scripts describing each input and its expected outcome
- Even if every actual outcome equals the expected outcome: Is the component fault-free? Are we done testing?
- Instead, use a data-flow testing strategy rather than a purely structural one: definition-use testing ensures each definition-use path is exercised
A definition-use sketch follows this slide.
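
A minimal, hypothetical Python example of a definition-use pair, showing why data-flow coverage demands more than statement coverage:

```python
def scale_reading(raw, calibrated):
    factor = 1.0          # definition d1 of factor
    if calibrated:
        factor = 0.98     # definition d2 of factor
    return raw * factor   # use u1 of factor

# Statement coverage needs only scale_reading(10, True), which executes
# every line but exercises only the d2 -> u1 pair. Definition-use testing
# also requires the d1 -> u1 pair, so scale_reading(10, False) must run too.
```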

8.10 Real-Time Example: The Ariane-5 System
The Ariane-5's flight control system was tested in four ways:
- equipment testing
- on-board computer software testing
- staged integration
- system validation tests
The Ariane-5 developers relied on insufficient reviews and test coverage.

8.11 What this Chapter Means for You
- It is important to understand the difference between faults and failures
- The goal of testing is to find faults, not to prove correctness
- This chapter describes a variety of techniques for testing
- The absence of discovered faults does not guarantee correctness