ISBN 0-13-146913-4, Prentice-Hall, 2006. Chapter 8: Testing the Programs. Copyright 2006 Pearson/Prentice Hall. All rights reserved.

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.2 © 2006 Pearson/Prentice Hall Contents 8.1 Software Faults and Failures 8.2 Testing Issues 8.3 Unit Testing 8.4 Integration Testing 8.5 Testing Object-Oriented Systems 8.6 Test Planning 8.7 Automated Testing Tools 8.8 When to Stop Testing 8.9 Information System Example 8.10 Real-Time Example 8.11 What this Chapter Means for You

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.3 © 2006 Pearson/Prentice Hall Chapter 8 Objectives Types of faults and how to classify them The purpose of testing Unit testing Integration testing strategies Test planning When to stop testing

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.4 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Why Does Software Fail? Wrong requirement: not what the customer wants Missing requirement Requirement impossible to implement Faulty design Faulty code Improperly implemented design

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.5 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Objective of Testing Objective of testing: discover faults A test is successful only when a fault is discovered –Fault identification is the process of determining what fault caused the failure –Fault correction is the process of making changes to the system so that the faults are removed

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.6 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Types of Faults Algorithmic fault Computation and precision fault –a formula’s implementation is wrong Documentation fault –Documentation doesn’t match what program does Capacity or boundary faults –System’s performance not acceptable when certain limits are reached Timing or coordination faults Performance faults –System does not perform at the speed prescribed Standard and procedure faults

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.7 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Typical Algorithmic Faults An algorithmic fault occurs when a component’s algorithm or logic does not produce proper output –Branching too soon –Branching too late –Testing for the wrong condition –Forgetting to initialize variable or set loop invariants –Forgetting to test for a particular condition –Comparing variables of inappropriate data types Syntax faults
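
To make these fault types concrete, here is a small illustrative sketch (my own example, not from the text) of an algorithmic fault of the "forgetting to initialize a variable" kind, together with its correction:

# Hypothetical illustration of an algorithmic fault: the variable holding the
# running maximum is initialized to 0 instead of being taken from the data.

def max_value_faulty(values):
    largest = 0                       # fault: wrong initialization
    for v in values:
        if v > largest:
            largest = v
    return largest                    # wrong answer for all-negative input

def max_value_fixed(values):
    largest = values[0]               # initialize from the data instead
    for v in values[1:]:
        if v > largest:
            largest = v
    return largest

assert max_value_faulty([-7, -3, -9]) == 0     # failure exposed by this test case
assert max_value_fixed([-7, -3, -9]) == -3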

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.8 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Orthogonal Defect Classification
Fault Type | Meaning
Function | Fault that affects capability, end-user interface, product interface with hardware architecture, or global data structure
Interface | Fault in interacting with other components or drivers via calls, macros, control blocks, or parameter lists
Checking | Fault in program logic that fails to validate data and values properly before they are used
Assignment | Fault in data structure or code block initialization
Timing/serialization | Fault in timing of shared and real-time resources
Build/package/merge | Fault that occurs because of problems in repositories, management changes, or version control
Documentation | Fault that affects publications and maintenance notes
Algorithm | Fault involving efficiency or correctness of an algorithm or data structure, but not design

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.9 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Sidebar 8.1 Hewlett-Packard’s Fault Classification

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.10 © 2006 Pearson/Prentice Hall 8.1 Software Faults and Failures Sidebar 8.1 Faults for one Hewlett-Packard Division

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.11 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Testing Organization Module testing, component testing, or unit testing Integration testing Function testing Performance testing Acceptance testing Installation testing

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.12 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Testing Organization Illustrated

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.13 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Attitude Toward Testing Egoless programming: programs are viewed as components of a larger system, not as the property of those who wrote them

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.14 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Who Performs the Test? Independent test team –avoid conflict –improve objectivity –allow testing and coding concurrently

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.15 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Views of the Test Objects Closed box or black box: functionality of the test objects Clear box or white box: structure of the test objects

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.16 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Closed Box (Black Box) Advantage –free of the internal structure's constraints Disadvantage –not possible to run a complete test

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.17 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Clear Box Example of logic structure

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.18 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Sidebar 8.2 Box Structures Black box: external behavior description State box: black box with state information White box: state box with a procedure

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.19 © 2006 Pearson/Prentice Hall 8.2 Testing Issues Factors Affecting the Choice of Test Philosophy The number of possible logical paths The nature of the input data The amount of computation involved The complexity of algorithms

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.20 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Code Review Code walkthrough Code inspection

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.21 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Typical Inspection Preparation and Meeting Times
Development Artifact | Preparation Time | Meeting Time
Requirements document | 25 pages per hour | 12 pages per hour
Functional specification | 45 pages per hour | 15 pages per hour
Logic specification | 50 pages per hour | 20 pages per hour
Source code | 150 lines of code per hour | 75 lines of code per hour
User documents | 35 pages per hour | 20 pages per hour

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.22 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Fault Discovery Rate
Discovery Activity | Faults Found per Thousand Lines of Code
Requirements review | 2.5
Design review | 5.0
Code inspection | 10.0
Integration test | 3.0
Acceptance test | 2.0

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.23 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Sidebar 8.3 The Best Team Size for Inspections The preparation rate, not the team size, determines inspection effectiveness The team’s effectiveness and efficiency depend on their familiarity with their product

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.24 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Proving Code Correct Formal proof techniques Symbolic execution Automated theorem-proving

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.25 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Proving Code Correct: An Illustration
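
As a minimal sketch of the idea (an assumed example; the book's own illustration differs), pre- and postconditions plus a loop invariant, written here as assertions, give the skeleton of a correctness argument for a small routine:

# Hypothetical sketch: pre/postconditions and a loop invariant supporting a
# correctness argument for integer division by repeated subtraction.

def integer_division(a, b):
    assert a >= 0 and b > 0                   # precondition
    q, r = 0, a
    while r >= b:
        # loop invariant: a == q * b + r and r >= 0
        assert a == q * b + r and r >= 0
        q, r = q + 1, r - b
    assert a == q * b + r and 0 <= r < b      # postcondition
    return q, r

assert integer_division(7, 2) == (3, 1)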

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.26 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Testing versus Proving Proving: hypothetical environment Testing: actual operating environment

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.27 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Steps in Choosing Test Cases Determining test objectives Selecting test cases Defining a test

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.28 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Test Thoroughness Statement testing Branch testing Path testing Definition-use testing All-uses testing All-predicate-uses/some-computational- uses testing All-computational-uses/some-predicate- uses testing
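
The following illustrative (non-book) example shows how the weaker and stronger criteria translate into test cases for a function with two decisions:

# Hypothetical example: test thoroughness criteria on a two-decision function.

def classify(x, y):
    result = "none"
    if x > 0:                 # decision 1
        result = "x-positive"
    if y > 0:                 # decision 2
        result += ",y-positive"
    return result

# Statement coverage: one case executing every statement, e.g. (1, 1).
# Branch coverage: each decision taken both ways, e.g. (1, 1) and (-1, -1).
# Path coverage: all four true/false combinations:
#   (1, 1), (1, -1), (-1, 1), (-1, -1).
# Definition-use coverage adds cases that exercise each path from a
# definition of `result` to one of its uses.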

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.29 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Relative Strengths of Test Strategies

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.30 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Comparing Techniques Fault discovery percentages by fault origin: a table comparing prototyping, requirements review, design review, code inspection, and unit testing against faults originating in requirements, design, coding, and documentation (the percentage values did not survive in this transcript)

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.31 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Comparing Techniques (continued) Effectiveness of fault-discovery techniques
Technique | Requirements Faults | Design Faults | Code Faults | Documentation Faults
Reviews | Fair | Excellent | Excellent | Good
Prototypes | Good | Fair | Fair | Not applicable
Testing | Poor | Poor | Good | Fair
Correctness proofs | Poor | Poor | Fair | Fair

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.32 © 2006 Pearson/Prentice Hall 8.3 Unit Testing Sidebar 8.4 Fault Discovery Efficiency at Contel IPC 17.3% during inspections of the system design 19.1% during component design inspection 15.1% during code inspection 29.4% during integration testing 16.6% during system and regression testing 0.1% after the system was placed in the field

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.33 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Bottom-up Top-down Big-bang Sandwich testing Modified top-down Modified sandwich

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.34 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Terminology Component Driver: a routine that calls a particular component and passes a test case to it Stub: a special-purpose program to simulate the activity of the missing component
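
A minimal sketch of both terms (component names and behavior are assumed for illustration): a driver exercises the component under test, while a stub stands in for a component that is not yet available:

# Hypothetical sketch of a driver and a stub for integration testing.

def compute_invoice(order, tax_service):
    """Component under test: needs a tax component that may not exist yet."""
    subtotal = sum(item["price"] * item["qty"] for item in order)
    return subtotal + tax_service.tax_for(subtotal)

class TaxServiceStub:
    """Stub: simulates the missing tax component with canned behavior."""
    def tax_for(self, amount):
        return round(amount * 0.10, 2)       # fixed 10% rate for testing

def driver():
    """Driver: calls the component under test and passes it a test case."""
    order = [{"price": 5.0, "qty": 2}, {"price": 1.5, "qty": 4}]
    total = compute_invoice(order, TaxServiceStub())
    assert abs(total - 17.6) < 0.005, total  # 16.0 subtotal + 1.6 tax

driver()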

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.35 © 2006 Pearson/Prentice Hall 8.4 Integration Testing View of a System System viewed as a hierarchy of components

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.36 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Bottom-Up Integration Example The sequence of tests and their dependencies

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.37 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Top-Down Integration Example Only A is tested by itself

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.38 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Modified Top-Down Integration Example Each level’s components individually tested before the merger takes place

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.39 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Big-Bang Integration Example Requires both stubs and drivers to test the independent components

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.40 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Sandwich Integration Example The system viewed as three layers

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.41 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Modified Sandwich Integration Example Allows upper-level components to be tested before merging them with others

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.42 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Comparison of Integration Strategies
Characteristic | Bottom-up | Top-down | Modified top-down | Big-bang | Sandwich | Modified sandwich
Integration | Early | Early | Early | Late | Early | Early
Time to basic working program | Late | Early | Early | Late | Early | Early
Component drivers needed | Yes | No | Yes | Yes | Yes | Yes
Stubs needed | No | Yes | Yes | Yes | Yes | Yes
Work parallelism at beginning | Medium | Low | Medium | High | Medium | High
Ability to test particular paths | Easy | Hard | Easy | Easy | Medium | Easy
Ability to plan and control sequence | Easy | Hard | Hard | Easy | Hard | Hard

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.43 © 2006 Pearson/Prentice Hall 8.4 Integration Testing Sidebar 8.5 Builds at Microsoft The feature teams synchronize their work by building the product and finding and fixing faults on a daily basis

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.44 © 2006 Pearson/Prentice Hall 8.5 Testing Object-Oriented Systems Questions at the Beginning of Testing OO System Is there a path that generates a unique result? Is there a way to select a unique result? Are there useful cases that are not handled?

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.45 © 2006 Pearson/Prentice Hall 8.5 Testing Object-Oriented Systems Easier and Harder Parts of Testing OO Systems OO unit testing is less difficult, but integration testing is more extensive

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.46 © 2006 Pearson/Prentice Hall 8.5 Testing Object-Oriented Systems Differences Between OO and Traditional Testing The farther out the gray line, the greater the difference between OO and traditional testing

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.47 © 2006 Pearson/Prentice Hall 8.6 Test Planning Establish test objectives Design test cases Write test cases Test test cases Execute tests Evaluate test results
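
To illustrate the middle steps, here is a hypothetical test case (the objective, input, and expected output are assumed, not taken from the text) written so that it can be executed directly:

# Hypothetical written test case for the objective
# "reject withdrawals that exceed the account balance".
import unittest

class Account:
    def __init__(self, balance):
        self.balance = balance
    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

class WithdrawalTests(unittest.TestCase):
    def test_withdrawal_larger_than_balance_is_rejected(self):
        account = Account(balance=100)
        with self.assertRaises(ValueError):      # expected output: rejection
            account.withdraw(150)                # input specified by the test case
        self.assertEqual(account.balance, 100)   # balance must be unchanged

if __name__ == "__main__":
    unittest.main()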

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.48 © 2006 Pearson/Prentice Hall 8.6 Test Planning Purpose of the Plan Test plan explains –who does the testing –why the tests are performed –how tests are conducted –when the tests are scheduled

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.49 © 2006 Pearson/Prentice Hall 8.6 Test Planning Contents of the Plan What the test objectives are How the test will be run What criteria will be used to determine when the testing is complete

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.50 © 2006 Pearson/Prentice Hall 8.7 Automated Testing Tools Code analysis –Static analysis: code analyzer, structure checker, data analyzer, sequence checker –Output from static analysis

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.51 © 2006 Pearson/Prentice Hall 8.7 Automated Testing Tools (continued) Dynamic analysis –program monitors: watch and report program’s behavior Test execution –Capture and replay –Stubs and drivers –Automated testing environments Test case generators
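
As a toy sketch of what a dynamic-analysis program monitor does (greatly simplified, not a real tool), the code below watches and reports how often a function is exercised while the program runs:

# Hypothetical, simplified program monitor: counts how often each monitored
# function runs while the program executes.
import functools
from collections import Counter

call_counts = Counter()

def monitored(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        call_counts[func.__name__] += 1      # record the call
        return func(*args, **kwargs)
    return wrapper

@monitored
def parse(line):
    return line.split(",")

for line in ["a,b", "c,d", "e,f"]:
    parse(line)

print(call_counts)   # e.g. Counter({'parse': 3})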

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.52 © 2006 Pearson/Prentice Hall 8.8 When to Stop Testing More faulty? Probability of finding faults during the development

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.53 © 2006 Pearson/Prentice Hall 8.8 When to Stop Testing Stopping Approaches
Coverage criteria
Fault seeding: (detected seeded faults) / (total seeded faults) = (detected nonseeded faults) / (total nonseeded faults)
Confidence in the software, where S faults are seeded, the program is claimed to contain no more than N nonseeded faults, and n nonseeded faults have been found by the time all S seeded faults are detected:
C = 1, if n > N
C = S / (S + N + 1), if n ≤ N
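
A worked sketch of both stopping measures (variable names are mine): if 10 faults are seeded and testing finds 8 of them along with 40 nonseeded faults, the seeding ratio estimates 40 * 10 / 8 = 50 nonseeded faults in total; the confidence measure assumes testing continued until every seeded fault was found.

# Hypothetical worked example of the fault-seeding estimate and the
# confidence measure (variable names are mine, not the book's).

def estimated_total_faults(detected_seeded, total_seeded, detected_nonseeded):
    # detected_seeded / total_seeded = detected_nonseeded / total_nonseeded
    return detected_nonseeded * total_seeded / detected_seeded

def confidence(S, N, n):
    """Confidence that the program holds at most N nonseeded faults, given
    S seeded faults (all found) and n nonseeded faults actually found."""
    return 1.0 if n > N else S / (S + N + 1)

print(estimated_total_faults(8, 10, 40))   # -> 50.0 nonseeded faults estimated
print(confidence(S=10, N=0, n=0))          # -> about 0.91 confidence no faults remain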

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.54 © 2006 Pearson/Prentice Hall 8.8 When to Stop Testing Identifying Fault-Prone Code Track the number of faults found in each component during development Collect measurements (e.g., size, number of decisions) about each component Classification trees: a statistical technique that sorts through large arrays of measurement information and creates a decision tree to show the best predictors –A tree helps in deciding which components are likely to have a large number of faults

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.55 © 2006 Pearson/Prentice Hall 8.8 When to Stop Testing An Example of a Classification Tree

Pfleeger and Atlee, Software Engineering: Theory and Practice, Page 8.56 © 2006 Pearson/Prentice Hall 8.9 Information System Example Piccadilly System Uses a data-flow testing strategy rather than a structural one –Definition-use testing
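
As an illustrative sketch of definition-use testing (the example is mine, not from the Piccadilly system), each test case exercises a path from a point where a variable is defined to a point where that definition is used:

# Hypothetical illustration of definition-use (data-flow) testing.

def discount(total, is_member):
    rate = 0.0                  # definition d1 of `rate`
    if is_member:
        rate = 0.05             # definition d2 of `rate`
    return total * (1 - rate)   # use of `rate`

# Definition-use pairs to cover:
#   d1 -> use : requires is_member == False, e.g. discount(100, False) == 100.0
#   d2 -> use : requires is_member == True,  e.g. discount(100, True)  == 95.0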

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.57 © 2006 Pearson/Prentice Hall 8.10 Real-Time Example The Ariane-5 System The Ariane-5’s flight control system was tested in four ways –equipment testing –on-board computer software testing –staged integration –system validation tests The Ariane-5 developers relied on insufficient reviews and test coverage

Pfleeger and Atlee, Software Engineering: Theory and PracticePage 8.58 © 2006 Pearson/Prentice Hall 8.11 What this Chapter Means for You It is important to understand the difference between faults and failures The goal of testing is to find faults, not to prove correctness