Testing, Bug Fixing and Debugging the Code Yordan Dimitrov Telerik Corporation www.telerik.com.

 What Is Testing?
 Seven Testing Principles
 Developer Testing
 Developer vs. QA Testing
 Debugging vs. Testing
 Black-box vs. White-box Testing
 Role of Developer Testing in Software Quality
 Recommended Approach to Developer Testing

Testing is a means of detecting errors

Main Test Activities
 Testing is not just running tests, but also:
 Planning and control
 Choosing test conditions
 Designing and executing test cases
 Checking results
 Evaluating exit criteria
 Reporting on the testing process and the system under test
 Finalizing or completing closure activities

Main Objectives in Testing
 Testing pursues several objectives:
 Finding defects
 Gaining confidence about the level of quality
 Providing information for decision-making
 Preventing defects

 Objectives of testing differ according to the point of view:
 Development testing – cause as many failures as possible and fix them
 Acceptance testing – confirm that the system works as expected
 Assessment – assess the quality of the software and its readiness for release

 Objectives of testing differ according to the point of view:
 Maintenance testing – check for new defects introduced by changes
 Operational testing – assess system characteristics such as reliability or availability

1. Testing shows presence of defects
 Testing can show that defects are present
 It cannot prove that there are no defects
 Appropriate testing reduces the probability that undiscovered defects remain

2. Exhaustive testing is impossible
 The number of combinations of inputs and preconditions is usually practically infinite
 Testing everything is not feasible, except for trivial cases
 Risk analysis and priorities should be used to focus testing efforts

3. Early testing
 Testing activities should start as early as possible
 And should be focused on defined objectives
 The later a bug is found, the more it costs to fix!

4. Defect clustering
 Testing effort should be focused proportionally to the expected, and later observed, defect density of modules
 A small number of modules usually contains most of the defects discovered
 And is responsible for most of the operational failures

5. Pesticide paradox
 The same tests repeated over and over again tend to lose their effectiveness
 Previously undetected defects remain undiscovered
 New and modified test cases should be developed

6. Testing is context dependent
 Testing is done differently in different contexts
 Example: safety-critical software is tested differently from an e-commerce site

7. Absence-of-errors fallacy
 Finding and fixing defects does not help when:
 The system built is unusable
 The system does not fulfill the users' needs and expectations

Testing as a Priority of the Developer

 Software is tested in numerous ways
 Some tests are typically performed by developers
 Others are more commonly performed by specialized test personnel (QA engineers)

 Developer testing refers to testing performed by the developer
 Usually the following tests are a priority of developer testing:
 Unit tests
 Component tests
 Integration tests
 Sometimes regression tests and system tests are also included

 Numerous additional kinds of testing are performed by QA engineers and rarely by developers:
 Beta tests
 Customer-acceptance tests
 Performance tests
 Configuration tests
 Platform tests
 Stress tests
 Usability tests
 Etc.

 Testing
 A means of initial detection of errors
 Debugging
 A means of diagnosing and correcting the root causes of errors that have already been detected


 Testing is usually broken into two broad categories:
 Black-box testing
 White-box testing

 Black-box techniques are a way to derive and select test conditions, test cases, or test data
 Based on an analysis of the test basis documentation
 Also called specification-based or behavioral techniques
 Tests are based on the way the system is supposed to work

 White-box techniques
 Also called structural or glass-box techniques
 Based on an analysis of the structure of the component or system
 Use information about how the software is constructed
 E.g., code and detailed design information
 Usually a priority of developers

 Individual testing steps (unit test, component test, and integration test) typically each find less than 50 percent of the errors present
 The combination of testing steps often finds less than 60 percent of the errors present (Jones 1998)

 Testing's goal runs counter to the goals of other development activities
 Testing can never completely prove the absence of errors
 Testing by itself does not improve software quality
 Testing requires you to assume that you'll find errors in your code

 How much testing should be done is a matter of risk:
 Too much testing can delay the product release and increase the product price
 Insufficient testing hides risks of errors in the final product

Developer testing should probably take 8 to 25 percent of the total project time

Ground Rules and Tips for Effective Development and Testing

 Test for each relevant requirement
 Make sure that the requirements have been implemented
 Test for each relevant design concern
 Make sure that the design has been implemented

 Use "basis testing" to add detailed test cases to those that test the requirements and the design
 Use a checklist of the kinds of errors you've made on the project to date or have made on previous projects

 Effort is the same
 Detects defects earlier
 Forces you to think at least a little bit
 Exposes requirements problems sooner
 Run it when you want

 Developer tests tend to be "clean tests"
 Developer testing tends to have an optimistic view of test coverage
 Developer testing tends to skip more sophisticated kinds of test coverage

 Incomplete Testing
 Structured Basis Testing
 Data-Flow Testing
 Equivalence Partitioning
 Error Guessing
 Boundary Analysis
 Classes of Bad Data
 Classes of Good Data
 Use Test Cases That Make Hand-Checks Convenient

 Test each statement in a program at least once
 Compute the minimum number of test cases:
 Start with 1 for the straight path through the routine
 Add 1 for each of the following keywords, or their equivalents: if, while, repeat, for, and, and or
 Add 1 for each case in a case statement
 If the case statement doesn't have a default case, add 1 more

Statement1;        <-- 1
Statement2;
if ( x < 10 ) {    <-- 2
    Statement3;
}
Statement4;

(1) Count "1" for the routine itself.
(2) Count "2" for the if.
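The counting rule above can be sketched as a runnable example. The routine below is hypothetical (not from the slides), chosen so the rule can be applied by hand: 1 for the straight path, plus 1 for each if/elif and each and.

```python
# A sketch of structured basis testing applied to a hypothetical routine.
def classify(age, income):
    # Decision points: the first if (1), the "and" inside it (1),
    # and the elif (1) -- three additions to the straight path.
    if age >= 18 and income > 0:
        return "eligible"
    elif age >= 18:
        return "needs income"
    else:
        return "minor"

# Minimum test cases = 1 (straight path) + 1 (if) + 1 (and) + 1 (elif)
minimum_cases = 1 + 3

# Four cases that exercise every branch at least once:
assert classify(30, 50_000) == "eligible"   # if is true
assert classify(30, 0) == "needs income"    # "and" makes the if false
assert classify(10, 0) == "minor"           # elif is false
assert classify(18, 1) == "eligible"        # age exactly on the if boundary
```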

Case  Test Description
1     Nominal case: all boolean conditions are true
2     The initial for condition is false: numEmployees < 1
3     The first if is false: m_employee[ id ].governmentRetirementWithheld >= MAX_GOVT_RETIREMENT
4     The second if is false because the first part of the and is false: not m_employee[ id ].WantsRetirement
5     The second if is false because the second part of the and is false: not EligibleForRetirement( m_employee[ id ] )
6     The third if is false: not EligibleForPersonalRetirement( m_employee[ id ] )

 The normal combination of data states:
 A variable is defined, used one or more times, and perhaps killed

 The key to writing data-flow test cases is to exercise all possible defined-used paths:
 All definitions
 Test every definition of every variable
 I.e., every place at which any variable receives a value
 All defined-used combinations
 Test every combination of defining a variable in one place and using it in another

if ( Condition 1 ) {
    x = a;
} else {
    x = b;
}
if ( Condition 2 ) {
    y = x + 1;
} else {
    y = x - 1;
}

Case  Test Description
1     Condition 1 = true, Condition 2 = true
2     Condition 1 = false, Condition 2 = false

Defined-used combinations still uncovered:
?     x = a; y = x - 1;  (Condition 1 = true, Condition 2 = false)
?     x = b; y = x + 1;  (Condition 1 = false, Condition 2 = true)
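The pseudocode above can be mirrored in a small runnable sketch that enumerates all four defined-used combinations (the `compute` helper and its values are hypothetical illustrations):

```python
# Data-flow testing sketch: two definitions of x, two uses of x,
# so four defined-used combinations must be exercised.
def compute(condition1, condition2, a, b):
    x = a if condition1 else b           # definition of x (two places)
    y = x + 1 if condition2 else x - 1   # use of x (two places)
    return x, y

# Cases 1 and 2 from the table cover only two of the four pairs:
results = {
    (c1, c2): compute(c1, c2, a=10, b=20)
    for c1 in (True, False)
    for c2 in (True, False)
}
assert results[(True, True)] == (10, 11)    # x = a; y = x + 1
assert results[(False, False)] == (20, 19)  # x = b; y = x - 1
# The two combinations the first two cases miss:
assert results[(True, False)] == (10, 9)    # x = a; y = x - 1
assert results[(False, True)] == (20, 21)   # x = b; y = x + 1
```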

Case  Test Description
7     Define companyRetirement in line 12 and use it first in line 26; this isn't necessarily covered by any of the previous test cases
8     Define companyRetirement in line 12 and use it first in line 31; this isn't necessarily covered by any of the previous test cases

Case  Test Description
1     Case 1 is defined so that the true condition for m_employee[ ID ].governmentRetirementWithheld < MAX_GOVT_RETIREMENT is the first case on the true side of the boundary
3     Case 3 is defined so that the false condition for m_employee[ ID ].governmentRetirementWithheld < MAX_GOVT_RETIREMENT is on the false side of the boundary
9     An additional test case for the value directly on the boundary, in which m_employee[ ID ].governmentRetirementWithheld = MAX_GOVT_RETIREMENT
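The three boundary cases can be sketched as runnable checks. The limit value and the simplified predicate below are hypothetical stand-ins for the payroll example (the slides do not give a concrete MAX_GOVT_RETIREMENT):

```python
# Boundary analysis sketch: test just below, exactly on, and just
# above the boundary of the condition "withheld < MAX_GOVT_RETIREMENT".
MAX_GOVT_RETIREMENT = 6000.00  # hypothetical limit, for illustration only

def still_withholding(government_retirement_withheld):
    # The condition under test, isolated from the rest of the routine.
    return government_retirement_withheld < MAX_GOVT_RETIREMENT

assert still_withholding(MAX_GOVT_RETIREMENT - 0.01) is True   # true side
assert still_withholding(MAX_GOVT_RETIREMENT) is False         # on the boundary
assert still_withholding(MAX_GOVT_RETIREMENT + 0.01) is False  # false side
```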

 Minimum and maximum allowable values

Case  Test Description
10    A large group of employees, each of whom has a large salary (what constitutes "large" depends on the specific system being developed); for the sake of example, 1000 employees, each with a salary of $250,000, none of whom have had any social security tax withheld and all of whom want retirement withholding
11    A group of 10 employees, each of whom has a salary of $0.00

 Too little data (Cases 2-11)
 Too much data
 The wrong kind of data
 The wrong size of data
 Uninitialized data

Case  Test Description
12    An array of 100,000,000 employees; tests for too much data
13    A negative salary; wrong kind of data
14    A negative number of employees; wrong kind of data
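A routine hardened against the bad-data classes above can be sketched as follows. The `total_payroll` helper and its limits are hypothetical, chosen only to show each class being rejected rather than silently processed:

```python
# Sketch of guarding against classes of bad data (illustrative only).
MAX_EMPLOYEES = 1_000_000  # hypothetical "too much data" limit

def total_payroll(salaries):
    if len(salaries) < 1:
        raise ValueError("too little data")
    if len(salaries) > MAX_EMPLOYEES:
        raise ValueError("too much data")
    if any(s < 0 for s in salaries):
        raise ValueError("wrong kind of data: negative salary")
    return sum(salaries)

# Each bad-data class should raise, never return a bogus total:
for bad in ([],                                   # too little data
            [50_000.0] * (MAX_EMPLOYEES + 1),     # too much data
            [-1.0]):                              # wrong kind of data
    try:
        total_payroll(bad)
        assert False, "bad data was accepted"
    except ValueError:
        pass
```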

 Nominal cases: middle-of-the-road, expected values
 Minimum normal configuration
 Maximum normal configuration
 Compatibility with old data

Case  Test Description
16    A group of one employee; tests the minimum normal configuration
17    A group of 500 employees; tests the maximum normal configuration
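The good-data classes can be sketched as three concrete inputs. The `average_salary` helper and the salary figures are hypothetical; the group sizes mirror the table's cases:

```python
# Sketch of classes-of-good-data test inputs (illustrative helper).
def average_salary(salaries):
    return sum(salaries) / len(salaries)

nominal = [40_000.0, 50_000.0, 60_000.0]   # middle-of-the-road values
minimum_config = [50_000.0]                # a group of one employee (case 16)
maximum_config = [50_000.0] * 500          # maximum normal configuration (case 17)

# The routine should behave identically across all normal configurations:
assert average_salary(nominal) == 50_000.0
assert average_salary(minimum_config) == 50_000.0
assert average_salary(maximum_config) == 50_000.0
```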

 Which Classes Contain the Most Errors?
 Errors by Classification:
 The scope of most errors is fairly limited
 Many errors are outside the domain of construction
 Most construction errors are the programmers' fault
 Clerical errors (typos) are a surprisingly common source of problems

 Errors by Classification:
 Misunderstanding the design is a recurring theme in studies of programmer errors
 Most errors are easy to fix
 It's a good idea to measure your own organization's experiences with errors

 Planning to Test
 Retesting (Regression Testing)
 Automated Testing

1. Stabilize the error
2. Locate the source of the error:
   a) Gather the data
   b) Analyze the data and form a hypothesis
   c) Determine how to prove or disprove the hypothesis
   d) Prove or disprove the hypothesis as planned in 2(c)
3. Fix the defect
4. Test the fix
5. Look for similar errors

Demo

 Use all available data
 Refine the test cases
 Check unit tests
 Use available tools
 Reproduce the error several different ways
 Generate more data to generate more hypotheses
 Use the results of negative tests
 Brainstorm for possible hypotheses

 Narrow the suspicious region of the code
 Be suspicious of classes and routines that have had defects before
 Check code that's changed recently
 Expand the suspicious region of the code
 Integrate incrementally
 Check for common defects
 Talk to someone else about the problem
 Take a break from the problem

 Understand the problem before you fix it
 Understand the program, not just the problem
 Confirm the defect diagnosis
 Relax
 Save the original source code
 Fix the problem, not the symptom
 Make one change at a time
 Add a unit test that exposes the defect
 Look for similar defects

How "Psychological Set" Contributes to Debugging Blindness
 Your ego tells you that your code is good and doesn't have a defect, even when you've seen that it has one


 How "Psychological Distance" can help

First Variable  Second Variable  Psychological Distance
stoppt          stcppt           Almost invisible
shiftrn         shiftrm          Almost none
dcount          bcount           Small
claims1         claims2          Small
product         sum              Large
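Psychological distance is about how easily a human confuses two names, which no single metric captures. As a rough proxy only (our assumption, not the slide's definition), plain edit distance ranks these pairs in roughly the same order, though it cannot tell "almost invisible" from "small":

```python
# Levenshtein edit distance as a crude stand-in for "psychological
# distance" between variable names (illustrative sketch).
def edit_distance(a, b):
    # Classic dynamic-programming formulation, row by row.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

assert edit_distance("stoppt", "stcppt") == 1    # one character apart
assert edit_distance("shiftrn", "shiftrm") == 1
assert edit_distance("dcount", "bcount") == 1
assert edit_distance("product", "sum") > 2       # clearly distinct names
```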

 Building Scaffolding to Test Individual Classes
 Diff Tools
 Test-Data Generators
 Coverage Monitors
 Data Recorders/Logging
 Symbolic Debuggers
 System Perturbers
 Error Databases
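Two of the items above, scaffolding and a data recorder, can be sketched together: a throwaway driver that exercises one class directly, with logging as the recorder. The `Account` class is hypothetical, invented only for the sketch:

```python
# Sketch of test scaffolding with logging as a data recorder.
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("scaffold")

class Account:  # the class under test (illustrative)
    def __init__(self, balance=0):
        self.balance = balance

    def deposit(self, amount):
        if amount <= 0:
            raise ValueError("deposit must be positive")
        self.balance += amount
        return self.balance

def drive():
    # Scaffolding: call the class with hand-picked inputs,
    # logging every call and result for later diagnosis.
    acct = Account()
    for amount in (10, 25, 7):
        log.debug("deposit(%s) -> %s", amount, acct.deposit(amount))
    return acct.balance

assert drive() == 42
```

A driver like this is thrown away once the class is integrated; its value is that it can be run the moment the class compiles, long before the rest of the system exists.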

 Diff Tools
 Compiler Warning Messages
 Set your compiler's warning level to the highest
 Treat warnings as errors
 Initiate project-wide standards
 Extended Syntax and Logic Checking
 Profilers
 Test Frameworks/Scaffolding
 Debuggers

 Testing can give confidence in the quality of the software if it finds few or no defects
 If defects are found, the quality increases when those defects are fixed
 Lessons learned from previous mistakes improve future performance

Questions?