
Test Inventory

A “successful” test effort may include:
–Finding “bugs”
–Ensuring the bugs are removed
–Showing that the system, or parts of the system, works
Goal of testing (Hutcheson’s):
–Establish a responsive, dependable system that satisfies and delights the users.
–Perform the above within the agreed-upon constraints of budget, schedule, and other resources.
(How would “satisfies and delights” be defined and measured?)

How do we achieve that goal? Use a process:
–Plan the test
–Establish test cases
–Organize resources
–Execute test cases; for each case, ask: Bug?
 –No: record success.
 –Yes: record failure, report the problem to the developers, and “wait” to receive a response.
–On a response, ask: Bug fix?
 –No: record the no-fix reason.
 –Yes: retest. Bug fixed?
  –No: report the problem back to the developers.
  –Yes: record the problem as fixed; integrate the fix and prepare for rebuild.
–Record data and produce reports: by test coverage, by test results, by fix results, etc.
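A minimal sketch of the execute-and-record loop, in Python; `run` is a hypothetical callback that executes one test case and reports whether it passed:

```python
def execute_test_cases(test_cases, run):
    """Run each test case and record successes and failures.

    Failures would then be reported to the developers, retested after a
    fix, and either integrated or recorded as no-fix (not modeled here).
    """
    results = {"passed": [], "failed": []}
    for case in test_cases:
        if run(case):                      # Bug? no -> record success
            results["passed"].append(case)
        else:                              # Bug? yes -> record failure
            results["failed"].append(case)
    return results

# Example: tc2 simulates a failing test case.
results = execute_test_cases(["tc1", "tc2", "tc3"], run=lambda c: c != "tc2")
print(results)  # {'passed': ['tc1', 'tc3'], 'failed': ['tc2']}
```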

Planning Test

Test Planning Coverage = # of test cases designed / # of scenarios
–Planning is mostly based on the requirements document.
–Test cases are designed from the requirements and design documents.
Test Execution Coverage = # of test cases run / # of designed test cases
–How do we decide how much to run?
–Why wouldn’t we run all the designed test cases?
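The two coverage ratios can be computed directly; the function names below are my own:

```python
def planning_coverage(num_designed, num_scenarios):
    """Test Planning Coverage = # of test cases designed / # of scenarios."""
    return num_designed / num_scenarios

def execution_coverage(num_run, num_designed):
    """Test Execution Coverage = # of test cases run / # of designed cases."""
    return num_run / num_designed

# Example: 100 scenarios, 90 test cases designed, 72 of them actually run.
print(planning_coverage(90, 100))   # 0.9
print(execution_coverage(72, 90))   # 0.8
```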

A Real Problem Is Getting All the Bugs Fixed

In large, complex systems that require several steps before one reaches the actual test case, a failure may not always be reproducible!
–This makes debugging difficult when the developer runs the test and it executes cleanly! (Consider an internal queue-size problem: when the queue is full, some external inputs get dropped, and you may not be able to fill the queue quickly.)
Under the gun of the schedule, not all problems can get fixed in time for rebuild and retest. (Low-priority ones get delayed and eventually forgotten!)
Products get released with both “known” bugs and some “unknown” bugs!

Successful Testing Needs
–A good test plan
–Good test execution
–Good bug fixing
–Good fix integration
–Good “accounting” of problems found, fixed, integrated, and released

Keeping a “List” or Table of Test Cases

We must quantitatively keep a list of test cases so we can ask:
–How many items are on the list?
–How long does it take to execute the complete list?
–Where are we in terms of the list (test status)?
–Can we prioritize the list?
–Can we arrange the list to show coverage in tabular form? For example:

Test Case   Funct. 1   Funct. 2   Funct. 3
# 1            X                     X
# 2                       X          X
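One lightweight way to keep such a list is a mapping from each test case to the functions it covers; the case and function names here are hypothetical:

```python
# Test inventory: each test case maps to the set of functions it exercises.
inventory = {
    "TC-1": {"Funct. 1", "Funct. 3"},
    "TC-2": {"Funct. 2", "Funct. 3"},
}

# How many items are on the list?
print(len(inventory))  # 2

# Which functions are covered by at least one test case?
covered = set().union(*inventory.values())
print(sorted(covered))  # ['Funct. 1', 'Funct. 2', 'Funct. 3']
```

Duration estimates, priorities, and execution status can be added as extra fields per test case to answer the remaining questions.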

How Do We Measure a Test?

Much like how we measure code: LOC?
–Number of lines of test script written in some language?
A test case may be measured by the number of steps involved in executing the test; e.g.:
–Step 1: input field x
–Step 2: press submit
–Step 3: choose from the displayed options
–Step 4: press submit
–(Note that not all 4-step test cases are the same, much like not all 4 LOC are the same.)
A test case is a comparison of an actual versus an expected result, no matter how many steps are needed to get the result.
–The test time required may be vastly different from case to case.
Should every keystroke and every mouse movement be counted? Your thoughts?
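Both measures, steps to execute and the single actual-vs-expected comparison, can be captured in a small record; the structure below is illustrative only:

```python
# A test case as a list of steps plus one expected-vs-actual comparison.
test_case = {
    "steps": ["input field x", "press submit",
              "choose from displayed options", "press submit"],
    "expected": "confirmation page",
}

def size_in_steps(tc):
    """One size measure: the number of steps needed to execute the test."""
    return len(tc["steps"])

def verdict(tc, actual):
    """However many steps, the test is one actual-vs-expected comparison."""
    return "pass" if actual == tc["expected"] else "fail"

print(size_in_steps(test_case))                  # 4
print(verdict(test_case, "confirmation page"))   # pass
```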

Some Typical Types of Test

Unit Test – testing at the “chunks of code” level; done by the module’s author
–Small number of keystrokes and mouse movements
Functional Test – testing a particular application function, usually stated as a requirement
–Often tested as a “black box” test
“System” Test – testing the system internals and internal structures (not to be confused with a total application system test)
–Often tested as a “white box” test

An Interesting Comparison of Two Tests

item                                      Prod. 1    Prod. 1.1
# of test scripts (actual)                   ?           ?
# of user functions (actual)                236          ?
# of verifications / test script              1          50
Total # of verifications performed         1,000       6,600
Average # of times a test is executed        ?           ?
Total # of tests attempted (computed)        ?           ?
Average duration of a test (known #)       20 min.     4 min.
Total time running the tests (from log)    383 hrs.    100 hrs.
# of verifications / hr of testing           2.6         66

Some More Interesting Numbers

Efficiency = work done / expended effort = verifications per hour of testing
–For Prod. 1, efficiency was 2.6; for Prod. 1.1, it was 66.
Cost is the inverse of efficiency: expended effort / work done
–For Prod. 1: 383 person-hrs / 1,000 verifications = 0.383 person-hrs per verification
–For Prod. 1: 383 person-hrs / 236 functions = 1.6 person-hrs per function verified
How big is the test? The number of test scripts identified.
Size of the test set – the number of test scripts that will be executed to completion
Size of the test effort – the total time required to perform all the test activities: plan, analyze, execute, track, retest, integrate fixes, etc.
Cost of the total test effort = size of the test effort in person-hours multiplied by dollars per person-hour
* The test schedule should be built from historical information on past efforts and an estimate of the current effort.
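The efficiency and cost figures reduce to two divisions; the Prod. 1 numbers reproduce the values quoted above:

```python
def efficiency(verifications, hours):
    """Efficiency = work done / expended effort (verifications per hour)."""
    return verifications / hours

def cost(hours, work_done):
    """Cost is the inverse of efficiency (person-hours per unit of work)."""
    return hours / work_done

# Prod. 1: 1,000 verifications, 236 functions, 383 person-hours of testing.
print(round(efficiency(1000, 383), 1))  # 2.6 verifications/hr
print(round(cost(383, 1000), 3))        # 0.383 p-hrs/verification
print(round(cost(383, 236), 1))         # 1.6 p-hrs/function verified
```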

How Do We Create a Test Inventory?

Data collected from inspections/reviews of requirements, design, etc.
Known analytical methods:
–Path analysis
–Data analysis
–Usage statistics profile
–Environmental catalog (executing environments)
Non-analytical methods:
–Experts’ “gorilla” testing
–Customer support’s past experience
–Intuition/brainstorming