John D. McGregor Session 9 Testing Vocabulary


CPSC 372 John D. McGregor Session 9 Testing Vocabulary

End-to-end quality Quality cannot be tested into a product; it must be built in throughout the life cycle. [Diagram: quality activities span every phase – use cases during requirements, guided inspection of analysis models, ATAM evaluation of the architecture description during architectural design, design review during detailed design, and unit/feature, integration, and system testing during implementation.]

Test theory Testing is a search for faults, which manifest as defects in an implementation. A “successful” test is one that finds a defect and causes an observable failure. In this unit we will talk a bit about how to guide the search so that it is most successful. Read the following: http://www.computer.org/portal/web/swebok/html/contentsch5#ch5

Testability Part of being successful depends on how easily defects can be found. Software should be designed to be controllable and observable: our testing software must be able to control the software under test, putting it into a specific state so that the test result can be observed and evaluated.
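A minimal sketch of what controllability and observability look like in code; the Counter class and its methods are invented for illustration:

    public class Counter {
        private int count;
        public void setCount(int count) { this.count = count; }  // controllability: a test can force a known state
        public int getCount() { return count; }                  // observability: a test can inspect the result
        public void increment() { count = count + 1; }
    }

A test can then drive the unit into a specific state, exercise it, and evaluate the outcome:

    Counter c = new Counter();
    c.setCount(41);             // control: establish the pre-state
    c.increment();              // exercise the software under test
    assert c.getCount() == 42;  // observe and evaluate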

Fault models A fault is a defect that can cause a failure, and a single fault may give rise to multiple defects. A fault model is a catalog of the faults that are possible for a given technology. For example, consider the state machine pattern, which structures a system as a set of states and the means of moving from one state to another.

Fault models - 2 An application of that pattern can become faulty if the implementer:
Type 1: alters the tail state of a transition (a transfer fault)
Type 2: alters the output of a transition (an output fault)
Type 3: adds an extra transition
Type 4: adds an extra state
Type 5: removes a transition
Type 6: removes a state
Type 7: alters a guard
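A minimal sketch of a Type 1 transfer fault seeded in a Java state machine; the states and events are invented for illustration:

    enum State { IDLE, RUNNING, DONE }

    class Machine {
        State state = State.IDLE;
        void step(String event) {
            if (state == State.IDLE && event.equals("start"))
                state = State.RUNNING;
            else if (state == State.RUNNING && event.equals("finish"))
                state = State.IDLE;   // transfer fault: the tail state should be DONE
        }
    }

A test that drives the machine through "start" and "finish" and then checks the resulting state would expose this fault.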

Fault models - 3 Anyone who tests is using a fault model. It may be an implicit model, or they may write it down and provide it to others. The idea is to capture experience: where have you been successful finding faults? For example, people make little mistakes about a numeric value, so we usually test for the expected value +/- a small amount.
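For instance, a sketch of probing a numeric boundary with JUnit; the isAdult method and its cutoff of 18 are hypothetical:

    import static org.junit.Assert.*;
    import org.junit.Test;

    @Test
    public void checksTheAgeBoundary() {
        assertFalse(isAdult(17));  // just below the expected boundary
        assertTrue(isAdult(18));   // exactly at the boundary
        assertTrue(isAdult(19));   // just above the boundary
    }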

Test case A test case is a triple: <pre-conditions, input data, expected results>. Testing a piece of software involves:
the software being tested
software that executes the software being tested
software that specifies a particular scenario
In the next session we will consider JUnit, a software framework that executes tests. In this session we will focus on test cases.
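One way to represent the triple directly in Java; this class is a sketch for illustration, not part of any testing framework:

    // A test case bundles pre-conditions, input data, and expected results.
    public class TestCase<P, I, E> {
        final P preConditions;
        final I inputData;
        final E expectedResults;

        TestCase(P preConditions, I inputData, E expectedResults) {
            this.preConditions = preConditions;
            this.inputData = inputData;
            this.expectedResults = expectedResults;
        }
    }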

Black-box Test Case Here is pseudo-code for a method:
int average(int number, array list_of_numbers){ }
The implementation would go between the {}. When a tester creates test cases without seeing an implementation, it is referred to as specification-based or “black-box” testing.

Black-box Test Case - 2 For int average(int number, array list_of_numbers), a test case would include:
pre-conditions – there is no state for an algorithm, so no pre-conditions
the number of numbers to be averaged
a list of numbers to be averaged
Consider what could go wrong:
number might not match the number of numbers in the list
number might be entered as a negative value
there might not be any numbers in the list
We also want some tests that will succeed, so there should be some test cases in which we expect correct action.

Black-box Test Case - 3 Test cases:
<null, 6 (1,2,3,4,5,6), 3.5>
<null, 3 (10, 20, 30), 20>
<null, -3 (10, 20, 30), error>
<null, 4 (10, 20, 30), error>
<null, 3 (), error>
The first test case fails – any idea why?
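A sketch of the first two cases as JUnit tests (JUnit itself is covered in the next session; assume average is callable as a static method):

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AverageTest {
        @Test
        public void averagesSixNumbers() {
            // Fails: the method's return type is int, so it can never produce 3.5.
            assertEquals(3.5, average(6, new int[]{1, 2, 3, 4, 5, 6}), 0.0);
        }

        @Test
        public void averagesThreeNumbers() {
            assertEquals(20, average(3, new int[]{10, 20, 30}));  // passes
        }
    }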

White-box Test Case Here is the implementation, cleaned up into Java-like form (its flaws are deliberate and are discussed in later slides):

    int average(int number, int[] list_of_numbers) {
        int sum = 0;
        for (int i = 0; i < number; i++) {
            sum = sum + list_of_numbers[i];
        }
        if (number > 0)
            return sum / number;   // integer division truncates any real-valued average
        else
            return NAN;            // NAN is not an int – see the type-mismatch discussion later

    }

Structural (or white-box) testing defines test cases based on the structure of the code.

White-box Test Case - 2 Test cases:
<null, 6 (1,2,3,4,5,6), 3.5>
<null, -3 (10, 20, 30), error>
But these are test cases from the previous, black-box set. The test case definition does not look any different whether it is black-box or white-box; what differs is how the cases were derived.
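A sketch of why these two cases matter structurally: together they exercise both branches of the if in the average method sketched above.

    average(6, new int[]{1, 2, 3, 4, 5, 6});   // number > 0: loop runs, true branch returns sum/number
    average(-3, new int[]{10, 20, 30});        // number <= 0: loop never runs, else branch is taken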

Coverage We keep defining test cases as long as there are possible faults that have not been directly exercised. In black-box testing the coverage measures are based on the parameter types and the return type. In fact, the very first test case we defined in the black-box test suite violates the return type of the method average: an int method can never return 3.5.

Coverage - 2 Specification-based tests help us find out if the software can do everything it is supposed to do. Implementation-based tests help us find out if the software does anything it is not supposed to do. To do a thorough job we need both types of coverage.

Defect Density The average number of defects per 1 KLOC (1,000 lines of code). For a given application domain and development method it is fairly constant. For example, 15 defects found in 5,000 lines of code is a density of 3 defects per KLOC. http://www.softrel.com/Current%20defect%20density%20statistics.pdf

A bigger fault model Actually there is a bigger fault model than the one we first laid out: an underlying fault model that addresses the “routine” aspects of any program. For example, calculating an average (using division) may produce a real number even though the return type is specified as int (integer).

A bigger fault model - 2
Type mismatches
Incorrect conditions on iteration statements (while, for, do, etc.) or branching statements – for example:
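A sketch of such a fault, an off-by-one loop condition in Java (the variable names are invented):

    int sum = 0;
    for (int i = 0; i <= list.length; i++) {   // fault: should be i < list.length
        sum = sum + list[i];                   // last iteration reads past the end of the array
    }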

Relative fault model How something is implemented affects which fault model we use. Java, for example, finds the mismatch between the return type and the computation at compile time, so it is not a testing issue. Different language tools will find different kinds of defects and eliminate them before testing, so an abstract fault model has to be filtered by the implementation technology. Strongly typed languages such as Java and C++ will find more faults earlier than C or other loosely typed languages.
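For example, a sketch of how the earlier mismatch surfaces in Java; the compiler rejects this before any test runs:

    int average(int number, int[] list_of_numbers) {
        double sum = 0.0;
        for (int i = 0; i < number; i++) {
            sum = sum + list_of_numbers[i];
        }
        return sum / number;   // compile-time error, roughly: possible lossy conversion from double to int
    }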

Measuring test effectiveness Compute the coverage achieved from a set of tests:
Short-term – which faults in the fault model are being found in the implementation by testing
Long-term – metrics gathered after the fact, such as defects not found during testing but found by customers after delivery
Long-term – categories of defects that are being produced in the development process

Timing of tests Tests are conducted at a number of points in the software development life cycle. Each time a developer finishes an iteration on a unit of software (class, module, component), unit tests are conducted. The unit tests are based on both the specification and the implementation of the unit.

Integration When two or more pieces of software are joined together, particularly if they were created by two different teams, integration tests are conducted. These tests are created by focusing on the interactions (method calls) between the two pieces. Coverage is measured against the set of all possible interactions in the implementation.
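A minimal sketch of an interaction-focused integration test; the Billing and TaxService names are invented, and one piece is stubbed so the test can concentrate on the method call across the interface:

    interface TaxService {
        double rateFor(String region);
    }

    class Billing {
        private final TaxService taxes;
        Billing(TaxService taxes) { this.taxes = taxes; }
        double total(double amount, String region) {
            return amount * (1 + taxes.rateFor(region));   // the interaction under test
        }
    }

    // Stub one piece and drive the interaction between the two.
    TaxService stub = region -> 0.07;
    Billing billing = new Billing(stub);
    assert Math.abs(billing.total(100.0, "SC") - 107.0) < 1e-9;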

System testing System testing takes a somewhat different perspective: what was the program intended to do? The test cases for this approach come from the requirements. Coverage – test cases per requirement. By “system” here I mean the software, but system test might also be taken to mean hardware and software together if the software runs on specialized hardware.

Testing quality attributes System test cases must include coverage of non-functional requirements such as latency (how long it takes to accomplish a certain task). The test harnesses for this, and for other specific items such as the interactions of the user interface, may require human-in-the-loop testing.
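A sketch of a latency check; handleRequest and the 200 ms budget are assumed for illustration:

    long start = System.nanoTime();
    systemUnderTest.handleRequest();   // hypothetical end-to-end operation
    long elapsedMs = (System.nanoTime() - start) / 1_000_000;
    assert elapsedMs <= 200 : "latency budget exceeded: " + elapsedMs + " ms";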

Here’s what you are going to do
Read http://resources.sei.cmu.edu/asset_files/conferencepaper/2016_021_001_499333.pdf
Use the Verify language and its Help in OSATE to create test cases for wbs
Submit team work by 11:59 PM Oct 4th