Chapter 22 Developer Testing

Chapter 22 Developer Testing
CSC-3004 Introduction to Software Development
Spring 2017
James Skon

Topics
- Role of Developer Testing in Software Quality
- Recommended Approach to Developer Testing
- Bag of Testing Tricks: Typical Errors
- Test-Support Tools
- Improving Your Testing
- Keeping Test Records

Developer Testing
- Unit testing: the execution of a complete class, routine, or small program that has been written by a single programmer or team of programmers and is tested in isolation from the more complete system.
- Component testing: the execution of a class, package, small program, or other program element that involves the work of multiple programmers or programming teams and is tested in isolation from the more complete system.
- Integration testing: the combined execution of two or more classes, packages, components, or subsystems that have been created by multiple programmers or programming teams. This kind of testing typically starts as soon as there are two classes to test and continues until the entire system is complete.
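
As a minimal sketch of what a unit test looks like in practice (assuming JUnit 4 and a hypothetical Account class, neither of which appears in the slides):

import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test
class Account {
    private int balanceCents = 0;
    void deposit(int cents) { balanceCents += cents; }
    int getBalanceCents()   { return balanceCents; }
}

// Unit test: exercises one class in isolation from the rest of the system
public class AccountTest {
    @Test
    public void depositIncreasesBalance() {
        Account account = new Account();
        account.deposit(500);
        assertEquals(500, account.getBalanceCents());
    }
}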

Developer Testing (continued)
- Regression testing: repetition of previously executed test cases for the purpose of finding defects in software that previously passed the same set of tests.
- System testing: the execution of the software in its final configuration, including integration with other software and hardware systems. It tests for security, performance, resource loss, timing problems, and other issues that can't be tested at lower levels of integration.

Testing
- Black-box testing
- White-box (or glass-box) testing

Black-Box Testing refers to tests in which the tester cannot see the inner workings of the item being tested. It does not apply when you test code that you have written yourself.

White-Box Testing refers to tests in which the tester is aware of the inner workings of the item being tested. This is the kind of testing that you, as a developer, use to test your own code.

Why Programmers Don't Like Testing
- Testing's goal runs counter to the goals of other development activities.
- Testing can never completely prove the absence of errors.
- Testing by itself does not improve software quality.
- Testing requires you to assume that you'll find errors in your code.

Testing time as a percent of overall project effort

Testing During Construction
- You generally write a routine or class, check it mentally, and then review it or test it.
- Test each unit thoroughly before you combine it with any others.
- If you're writing several routines, test them one at a time. If you throw several untested routines together at once and find an error, any of the several routines might be guilty.
- If you add one routine at a time to a collection of previously tested routines, you know that any new errors are likely to be in the new code.

Approach to Developer Testing
- Test each relevant requirement.
- Test each relevant design concern to make sure that the design has been implemented. Plan the test cases for this step at the design stage or as early as possible.
- Use "basis testing" to add detailed test cases to those that test the requirements and the design.
- Use a checklist of the kinds of errors you've made on the project to date or have made on previous projects.

Limitations of Developer Testing
- Developer tests tend to be "clean tests."
- Developer testing tends to have an optimistic view of test coverage.
- Developer testing tends to skip more sophisticated kinds of test coverage, such as "100% statement coverage" or "100% branch coverage."
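
A small illustration of why full statement coverage is weaker than full branch coverage (a sketch; the routine f is hypothetical):

public class CoverageExample {
    // Calling f(true) executes every statement (100% statement coverage),
    // but the false branch of the if is never taken, so branch coverage is incomplete.
    static int f(boolean a) {
        int x = 0;
        if (a) {
            x = 1;
        }
        return x;
    }

    public static void main(String[] args) {
        System.out.println(f(true));  // exercises every statement, only one branch
    }
}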

Exhaustive Testing?
- Name: 26^20 possibilities (20 characters, each with 26 possible choices)
- Address: 26^20 possibilities (20 characters, each with 26 possible choices)
- Phone number: 10^10 possibilities (10 digits, each with 10 possible choices)
- Total possibilities = 26^20 * 26^20 * 10^10 ≈ 10^66
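
A quick sanity check of the estimate using base-10 logarithms (a sketch):

public class ExhaustiveCheck {
    public static void main(String[] args) {
        // log10(26^20 * 26^20 * 10^10) = 40 * log10(26) + 10 ≈ 66.6,
        // so there are roughly 10^66 distinct inputs: far too many to enumerate
        double log10Total = 40 * Math.log10(26) + 10;
        System.out.println(log10Total);  // prints about 66.6
    }
}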

Data-Flow Testing
Data-flow testing is based on the idea that data usage is at least as error-prone as control flow. Data can exist in one of three states:
- Defined: The data has been initialized, but it hasn't been used yet.
- Used: The data has been used for computation, as an argument to a routine, or for something else.
- Killed: The data was once defined, but it has been undefined in some way.

Data-Flow Testing: Routine States
- Entered: The control flow enters the routine immediately before the variable is acted upon. A working variable is initialized at the top of a routine, for example.
- Exited: The control flow leaves the routine immediately after the variable is acted upon. A return value is assigned to a status variable at the end of a routine, for example.

Suspicious Combinations of Data States
- Defined-Defined
- Defined-Killed
- Entered-Killed
- Entered-Used
- Killed-Killed
- Killed-Used
- Used-Defined
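
Two of these combinations rendered as Java snippets (a sketch; fetchValue is a hypothetical helper, and the Killed state is loosely mapped onto a closed stream):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

public class DataStateExamples {
    static int fetchValue() { return 42; }  // hypothetical helper

    static void definedDefined() {
        int x = fetchValue();  // Defined
        x = fetchValue();      // Defined-Defined: the first definition is never used
        System.out.println(x);
    }

    static void killedUsed() throws IOException {
        BufferedReader reader = new BufferedReader(new FileReader("data.txt"));
        reader.close();        // Killed: the resource is destroyed
        reader.readLine();     // Killed-Used: throws IOException at run time
    }
}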

Data-Flow Testing: Structural Example

if (condition1) {
    x = a;
} else {
    x = b;
}
if (condition2) {
    y = x + 1;
} else {
    y = x - 1;
}

Structured (branch) testing needs only two cases:
1. Condition 1 true, Condition 2 true
2. Condition 1 false, Condition 2 false

How do we cover every path? Data-flow testing requires all four definition-use combinations:
1. Condition 1 true, Condition 2 true: x = a ... y = x + 1
2. Condition 1 true, Condition 2 false: x = a ... y = x - 1
3. Condition 1 false, Condition 2 true: x = b ... y = x + 1
4. Condition 1 false, Condition 2 false: x = b ... y = x - 1
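
A sketch of the four path tests in Java with JUnit 4, assuming the two-condition logic above is wrapped in a hypothetical compute routine:

import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class PathCoverageTest {
    // Hypothetical routine implementing the nested conditions above
    static int compute(boolean condition1, boolean condition2, int a, int b) {
        int x = condition1 ? a : b;
        return condition2 ? x + 1 : x - 1;
    }

    @Test public void trueTrue()   { assertEquals(11, compute(true,  true,  10, 20)); } // x = a, y = x + 1
    @Test public void trueFalse()  { assertEquals(9,  compute(true,  false, 10, 20)); } // x = a, y = x - 1
    @Test public void falseTrue()  { assertEquals(21, compute(false, true,  10, 20)); } // x = b, y = x + 1
    @Test public void falseFalse() { assertEquals(19, compute(false, false, 10, 20)); } // x = b, y = x - 1
}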

Equivalence Partitioning is a software testing technique that divides the input data of a software unit into partitions of equivalent data from which test cases can be derived. In principle, test cases are designed to cover each partition at least once.

Equivalence Partitioning
Password: must be a minimum of 8 characters and a maximum of 12 characters.

Equivalence Partitioning
Password: must be a minimum of 8 characters and a maximum of 12 characters.
- Test case 1: password of length less than 8.
- Test case 2: password of length exactly 8.
- Test case 3: password of length between 9 and 11.
- Test case 4: password of length exactly 12.
- Test case 5: password of length more than 12.
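
A sketch of these five partitions as JUnit 4 tests, assuming a hypothetical isValidPassword routine implementing the rule above:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class PasswordPartitionTest {
    // Hypothetical validator for the password rule above
    static boolean isValidPassword(String p) {
        return p.length() >= 8 && p.length() <= 12;
    }

    @Test public void lengthBelowMinimum()  { assertFalse(isValidPassword("abcdefg"));       } // 7 chars
    @Test public void lengthExactlyEight()  { assertTrue(isValidPassword("abcdefgh"));       } // 8 chars
    @Test public void lengthInMiddle()      { assertTrue(isValidPassword("abcdefghij"));     } // 10 chars
    @Test public void lengthExactlyTwelve() { assertTrue(isValidPassword("abcdefghijkl"));   } // 12 chars
    @Test public void lengthAboveMaximum()  { assertFalse(isValidPassword("abcdefghijklm")); } // 13 chars
}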

Equivalence Partitioning
Input box accepts numbers between 1 and 1000. Valid range: 1-1000. Invalid ranges: 0 or less, and 1001 or more.

Equivalence Partitioning
Input box accepts numbers between 1 and 1000. Valid range: 1-1000. Invalid ranges: 0 or less, and 1001 or more.
- Test case 1: test data exactly on the boundaries of the input domain, i.e., values 1 and 1000.
- Test case 2: test data with values just below the extreme edges of the input domain, i.e., values 0 and 999.
- Test case 3: test data with values just above the extreme edges of the input domain, i.e., values 2 and 1001.
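
The same boundary cases as a JUnit 4 sketch, assuming a hypothetical inRange validator for the input box:

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class RangeBoundaryTest {
    // Hypothetical validator for the 1-1000 input box
    static boolean inRange(int n) {
        return n >= 1 && n <= 1000;
    }

    @Test public void exactlyOnBoundaries() { assertTrue(inRange(1));  assertTrue(inRange(1000));  }
    @Test public void justBelowEdges()      { assertFalse(inRange(0)); assertTrue(inRange(999));   }
    @Test public void justAboveEdges()      { assertTrue(inRange(2));  assertFalse(inRange(1001)); }
}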

Error Guessing
The term "error guessing" is a lowbrow name for a sensible concept: creating test cases based upon guesses about where the program might have errors, although it implies a certain amount of sophistication in the guessing.

Boundary Analysis
One of the most fruitful areas for testing is boundary conditions, where off-by-one errors lurk. Saying num - 1 when you mean num and saying >= when you mean > are common mistakes.
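
A sketch of how a boundary test exposes the >= vs. > mistake; the isEligible routine and its 18-and-up rule are hypothetical, and the test is written so it fails against the buggy code:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class OffByOneTest {
    // Hypothetical routine; suppose the spec says ages 18 and up qualify
    static boolean isEligible(int age) {
        return age > 18;  // off-by-one bug: should be >= 18
    }

    @Test
    public void boundaryValueExposesBug() {
        assertTrue(isEligible(18));  // fails, catching the >= vs. > mistake
    }
}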

Classes of Bad Data
- Too little data (or no data)
- Too much data
- The wrong kind of data (invalid data)
- The wrong size of data
- Uninitialized data

Classes of Good Data
Test the nominal cases too; don't assume they are handled correctly. For example:
- Nominal cases: middle-of-the-road, expected values
- Minimum normal configuration
- Maximum normal configuration
- Compatibility with old data

Use Test Cases That Make Hand-Checks Convenient
Pick numbers you will recognize, not random numbers. For example, for a salary, pick $60,000 rather than $62,446. Why? Round values make it easy to verify expected results by hand.

Typical Errors
Errors are rarely evenly distributed; they often occur in clusters. Capers Jones reported that a focused quality-improvement program at IBM identified 31 of 425 classes in the IMS system as error-prone. The 31 classes were repaired or completely redeveloped, and, in less than a year, customer-reported defects against IMS were reduced ten to one. Total maintenance costs were reduced by about 45 percent. Customer satisfaction improved from "unacceptable" to "good" (Jones 2000).

Typical Errors
Most errors tend to be concentrated in a few highly defective routines. Here is the general relationship between errors and code:
- Eighty percent of the errors are found in 20 percent of a project's classes or routines.
- Fifty percent of the errors are found in 5 percent of a project's classes.

Errors by Classification

Occurrence   Error Type
25.18%       Structural
22.44%       Data
16.19%       Functionality as implemented
 9.88%       Construction
 8.98%       Integration
 8.12%       Functional requirements
 2.76%       Test definition or execution
 1.74%       System, software architecture
 4.71%       Unspecified

Errors by Classification: Suggestions
- The scope of most errors is fairly limited.
- Many errors are outside the domain of construction.
- Most construction errors are the programmers' fault.
- Clerical errors (typos) are a surprisingly common source of problems.
- Misunderstanding the design is a recurring theme in studies of programmer errors.
- Most errors are easy to fix.
- It's a good idea to measure your own organization's experiences with errors.

Proportion of Errors Resulting from Faulty Construction
On small projects, construction defects make up the vast bulk of all errors. In one study of coding errors on a small project (1,000 lines of code), 75% of defects resulted from coding, compared to 10% from requirements and 15% from design (Jones 1986a). This error breakdown appears to be representative of many small projects.

Proportion of Errors Resulting from Faulty Construction
Construction defects account for at least 35% of all defects regardless of project size. Although the proportion of construction defects is smaller on large projects, they still account for at least 35% of all defects (Beizer 1990, Jones 2000). Some researchers have reported proportions in the 75% range even on very large projects (Grady 1987). In general, the better the application area is understood, the better the overall architecture is. Errors then tend to be concentrated in detailed design and coding (Basili and Perricone 1984).

Proportion of Errors Resulting from Faulty Construction
Construction errors, although cheaper to fix than requirements and design errors, are still expensive. A study of two very large projects at Hewlett-Packard found that the average construction defect cost 25-50% as much to fix as the average design error (Grady 1987). When the greater number of construction defects was figured into the overall equation, the total cost to fix construction defects was one to two times as much as the cost attributed to design defects.

Proportion of Errors Resulting from Faulty Construction
As the size of the project increases, the proportion of errors committed during construction decreases. Nevertheless, construction errors account for 45-75% of all errors on even the largest projects.

Errors in Testing Itself
Test cases are often at least as likely to contain errors as the code being tested.

Test-Support Tools: Scaffolding
Build scaffolding to test individual classes. Build dummy objects to test other objects; a dummy object can:
- Return control immediately, having taken no action.
- Test the data fed to it.
- Print a diagnostic message, perhaps an echo of the input parameters, or log a message to a file.
- Get return values from interactive input.
- Return a standard answer regardless of the input.
- Burn up the number of clock cycles allocated to the real object or routine.
- Function as a slow, fat, simple, or less accurate version of the real object or routine.
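
A minimal sketch of such a dummy object in Java, assuming a hypothetical PaymentGateway interface; this stub echoes the data fed to it and returns a standard answer regardless of input:

// Hypothetical interface that the class under test depends on
interface PaymentGateway {
    boolean charge(String account, int amountCents);
}

// Dummy object standing in for the real gateway during testing
class StubPaymentGateway implements PaymentGateway {
    @Override
    public boolean charge(String account, int amountCents) {
        System.out.println("charge(" + account + ", " + amountCents + ")"); // diagnostic echo of the inputs
        return true;  // standard answer regardless of input
    }
}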

Test-Support Tools
- Test-data generators: a program that creates lots of random test cases (good and bad).
- Coverage monitors: a tool that monitors what code has (or hasn't) been tested.
- Data recorders/logging: log operations and events for review after an error.
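
A minimal sketch of a test-data generator in Java; randomString is a hypothetical helper producing both good and bad inputs (including the "no data" case) that could be fed to a validator such as the password checker above:

import java.util.Random;

public class TestDataGenerator {
    // Generates random strings of length 0..maxLen from printable ASCII
    static String randomString(Random rng, int maxLen) {
        int len = rng.nextInt(maxLen + 1);  // length 0 covers the "no data" case
        StringBuilder sb = new StringBuilder(len);
        for (int i = 0; i < len; i++) {
            sb.append((char) ('!' + rng.nextInt(94)));  // printable ASCII range
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Random rng = new Random(12345);  // fixed seed so failures are reproducible
        for (int i = 0; i < 10; i++) {
            System.out.println(randomString(rng, 20));
        }
    }
}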

Test-Support Tools
- Symbolic debuggers: step through code while monitoring program state, setting breakpoints, and watching variables.
- Error databases: one powerful test tool is a database of errors that have been reported (for example, Bugzilla).

Test-Support Tools: System Perturbers
System perturbers help find rarely occurring errors.
- Memory filling: Fill memory with arbitrary values before you run your program so that uninitialized variables aren't set to 0 accidentally, or set memory to a rare value that triggers a breakpoint when it is used.
- Memory shaking: In multitasking systems, some tools can rearrange memory as your program operates so that you can be sure you haven't written any code that depends on data being in absolute rather than relative locations.

Test-Support Tools: System Perturbers (continued)
- Selective memory failing: A memory driver can simulate low-memory conditions in which a program might be running out of memory, fail on a memory request, grant an arbitrary number of memory requests before failing, or fail on an arbitrary number of requests before granting one. This is especially useful for testing complicated programs that work with dynamically allocated memory.

Test-Support Tools: System Perturbers (continued)
- Memory-access checking (bounds checking): Bounds checkers watch pointer operations to make sure your pointers behave themselves. Such a tool is useful for detecting uninitialized or dangling pointers.