
1 Testing, Bug Fixing and Debugging the Code Yordan Dimitrov Telerik Corporation www.telerik.com

2  What Is Testing?  Seven Testing Principles  Developer Testing  Developer vs. QA Testing  Debugging vs. Testing  Black-box vs. White-box Testing  Role of Developer Testing in Software Quality  Recommended Approach to Developer Testing 2

3

4 Testing is a means of detecting errors 4

5 Main Test Activities  Testing is not just running tests, but also:  Planning and control  Choosing test conditions  Designing and executing test cases  Checking results  Evaluating exit criteria  Reporting on the testing process and system under test  Finalizing or completing closure activities 5

6 Main Objectives in Testing  Testing pursues several objectives:  Finding defects  Gaining confidence about the level of quality  Providing information for decision-making  Preventing defects 6

7  Objectives of testing differ according to the point of view:  Development testing – cause as many failures as possible and fix them  Acceptance testing – confirm that the system works as expected  Assessment – assess the quality of the software and its readiness for release 7

8  Objectives of testing differ according to the point of view:  Maintenance testing – check for new defects introduced while developing changes  Operational testing – assess system characteristics such as reliability or availability 8

9

10 1. Testing shows presence of defects  Testing can show that defects are present  It cannot prove that there are no defects  Appropriate testing reduces the probability of undiscovered defects remaining in the software 10

11 2. Exhaustive testing is impossible  The number of all combinations of inputs and preconditions is usually practically infinite  Testing everything is not feasible  Except for trivial cases  Risk analysis and priorities should be used to focus testing efforts 11

12 3. Early testing  Testing activities should be started as early as possible  And should be focused on defined objectives  The later a bug is found, the more it costs! 12

13 4. Defect clustering  Testing effort shall be focused proportionally  To the expected and later observed defect density of modules  A small number of modules usually contains most of the defects discovered  Responsible for most of the operational failures 13

14 5. Pesticide paradox  The same tests repeated over and over again tend to lose their effectiveness  Previously undetected defects remain undiscovered  New and modified test cases should be developed 14

15 6. Testing is context dependent  Testing is done differently in different contexts  Example: safety-critical software is tested differently from an e-commerce site 15

16 7. Absence-of-errors fallacy  Finding and fixing defects does not help by itself when:  The system built is unusable  The system does not fulfill the users’ needs and expectations 16

17 Testing as a Priority of the Developer

18  Software is tested in numerous ways  Some are typically performed by developers  Some are more commonly performed by specialized test personnel  QA Engineers 18

19  Developer testing refers to testing performed by the developers themselves  The following kinds of tests are usually a priority of developer testing:  Unit tests  Component tests  Integration tests  Sometimes regression tests and system tests are also included 19
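A minimal sketch of what a developer-level unit test might look like, assuming NUnit; the Calculator class and all names are invented for illustration only:

using NUnit.Framework;

// Hypothetical production code under test.
public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_TwoPositiveNumbers_ReturnsTheirSum()
    {
        int result = Calculator.Add(2, 3);    // act
        Assert.That(result, Is.EqualTo(5));   // assert the expected behavior
    }
}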

20  Numerous additional kinds of testing are performed by QA Engineers and rarely performed by developers:  Beta tests  Customer-acceptance tests  Performance tests  Configuration tests  Platform tests  Stress tests  Usability tests  Etc. 20

21  Testing  A means of initial detection of errors  Debugging  A means of diagnosing and correcting the root causes of errors that have already been detected 21

22 Source: http://www.allposters.com

23  Testing is usually broken into two broad categories:  Black-box testing  White-box testing 23

24  Black-box techniques are a way to derive and select test conditions, test cases, or test data  Based on an analysis of the test basis documentation  Also called specification-based or behavioral techniques  Tests are based on the way the system is supposed to work 24

25  White-box techniques  Also called structural or glass-box techniques  Based on an analysis of the structure of the component or system  Use information about how the software is constructed  E.g., code and detailed design information  Usually a priority of developers 25

26

27  Individual testing steps (unit test, component test, and integration test) each typically find less than 50 percent of the errors present  The combination of testing steps often finds less than 60 percent of the errors present (Jones 1998) 27

28  Testing's goal runs counter to the goals of other development activities  Testing can never completely prove the absence of errors  Testing by itself does not improve software quality  Testing requires you to assume that you'll find errors in your code 28

29  How much testing should be done is a matter of risk:  Too much testing can delay the product release and increase the product price  Insufficient testing hides risks of errors in the final product 29

30 Developer testing should probably take 8 to 25 percent of the total project time

31 Ground Rules and Tips for Effective Development and Testing

32  Test for each relevant requirement  Make sure that the requirements have been implemented  Test for each relevant design concern  Make sure that the design has been implemented 32

33  Use "basis testing" to add detailed test cases to those that test the requirements and the design  Use a checklist of the kinds of errors you've made on the project to date or have made on previous projects 33

34 Writing Test Cases First  The effort is the same  Defects are detected earlier  Forces you to think at least a little bit about the requirements and design  Exposes requirements problems sooner  You can run the tests whenever you want 34 Source: flickr

35  Developer tests tend to be "clean tests" that show the code works rather than try to break it  Developer testing tends to have an optimistic view of test coverage  Developer testing tends to skip more sophisticated kinds of test coverage 35

36  Incomplete Testing  Structured Basis Testing  Data-Flow Testing  Equivalence Partitioning  Error Guessing  Boundary Analysis  Classes of Bad Data  Classes of Good Data  Use Test Cases That Make Hand-Checks Convenient 36

37  Test each statement in a program at least once  Compute the minimum number of test cases:  Start with 1 for the straight path through the routine  Add 1 for each of the following keywords, or their equivalents: if, while, repeat, for, and, and or  Add 1 for each case in a case statement  If the case statement doesn't have a default case, add 1 more 37

38
Statement1;            <-- (1)
Statement2;
if ( x < 10 ) {        <-- (2)
    Statement3;
}
Statement4;
(1) Count "1" for the routine itself.
(2) Count "2" for the if.
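As a hedged illustration, the routine above can be restated as a small C# method together with the two basis test cases the count predicts (NUnit assumed; all names are invented):

using NUnit.Framework;

[TestFixture]
public class BasisPathTests
{
    // Mirrors the slide: a straight path plus one if,
    // so the minimum number of basis test cases is 1 + 1 = 2.
    private static int Routine(int x)
    {
        int result = 0;          // Statement1 / Statement2 stand-ins
        if (x < 10)
        {
            result += 1;         // Statement3
        }
        return result;           // Statement4
    }

    [Test]  // Case 1: the if condition is true (x < 10)
    public void Case1_IfBranchTaken() => Assert.That(Routine(5), Is.EqualTo(1));

    [Test]  // Case 2: the if condition is false (x >= 10)
    public void Case2_IfBranchSkipped() => Assert.That(Routine(15), Is.EqualTo(0));
}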

39
Case 1: Nominal case. All boolean conditions are true.
Case 2: The initial for condition is false: numEmployees < 1.
Case 3: The first if is false: m_employee[ id ].governmentRetirementWithheld >= MAX_GOVT_RETIREMENT.
Case 4: The second if is false because the first part of the and is false: not m_employee[ id ].WantsRetirement.
Case 5: The second if is false because the second part of the and is false: not EligibleForRetirement( m_employee[ id ] ).
Case 6: The third if is false: not EligibleForPersonalRetirement( m_employee[ id ] ).

40  The normal combination of data states  A variable is defined, used one or more times, and perhaps killed 40 Source: http://blog.radvision.com

41  The key to writing data-flow test cases is to exercise all possible defined-used paths:  All definitions  Test every definition of every variable  I.e., every place at which any variable receives a value  All defined-used combinations  Test every combination of defining a variable in one place and using it in another 41

42
if ( Condition 1 ) {
    x = a;
} else {
    x = b;
}
if ( Condition 2 ) {
    y = x + 1;
} else {
    y = x - 1;
}
Structured basis testing gives only two cases:
Case 1: Condition 1 = true, Condition 2 = true
Case 2: Condition 1 = false, Condition 2 = false
Covering all defined-used combinations adds two more:
Case 3: Condition 1 = true, Condition 2 = false (x = a; y = x - 1)
Case 4: Condition 1 = false, Condition 2 = true (x = b; y = x + 1)
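The four cases can be sketched as executable tests, assuming NUnit; Compute is a made-up stand-in for the two if statements above:

using NUnit.Framework;

[TestFixture]
public class DataFlowTests
{
    // x is defined in two places (from a or from b) and used in two places (x + 1 or x - 1).
    private static int Compute(bool condition1, bool condition2, int a, int b)
    {
        int x = condition1 ? a : b;
        return condition2 ? x + 1 : x - 1;
    }

    // Cases 1 and 2 come from structured basis testing...
    [Test] public void Case1_TrueTrue()   => Assert.That(Compute(true,  true,  10, 20), Is.EqualTo(11));
    [Test] public void Case2_FalseFalse() => Assert.That(Compute(false, false, 10, 20), Is.EqualTo(19));

    // ...cases 3 and 4 cover the remaining defined-used combinations.
    [Test] public void Case3_TrueFalse()  => Assert.That(Compute(true,  false, 10, 20), Is.EqualTo(9));
    [Test] public void Case4_FalseTrue()  => Assert.That(Compute(false, true,  10, 20), Is.EqualTo(21));
}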

43
Case 7: Define companyRetirement in line 12 and use it first in line 26. This isn't necessarily covered by any of the previous test cases.
Case 8: Define companyRetirement in line 12 and use it first in line 31. This isn't necessarily covered by any of the previous test cases.

44
Case 1: Defined so that the true condition for m_employee[ ID ].governmentRetirementWithheld < MAX_GOVT_RETIREMENT is the first case on the true side of the boundary.
Case 3: Defined so that the false condition for m_employee[ ID ].governmentRetirementWithheld < MAX_GOVT_RETIREMENT is on the false side of the boundary.
Case 9: An additional test case for the value directly on the boundary, in which m_employee[ ID ].governmentRetirementWithheld = MAX_GOVT_RETIREMENT.
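A minimal sketch of how such boundary cases might look as tests, assuming NUnit; MaxWithholding and ShouldWithhold are invented stand-ins for MAX_GOVT_RETIREMENT and the comparison above:

using NUnit.Framework;

[TestFixture]
public class BoundaryTests
{
    private const decimal MaxWithholding = 1000m;   // stand-in for MAX_GOVT_RETIREMENT

    private static bool ShouldWithhold(decimal withheldSoFar) =>
        withheldSoFar < MaxWithholding;

    [Test] public void JustBelowBoundary_StillWithholds()   => Assert.That(ShouldWithhold(999.99m), Is.True);
    [Test] public void ExactlyOnBoundary_StopsWithholding() => Assert.That(ShouldWithhold(1000.00m), Is.False);
    [Test] public void JustAboveBoundary_StopsWithholding() => Assert.That(ShouldWithhold(1000.01m), Is.False);
}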

45  Minimum and maximum allowable values
Case 10: A large group of employees, each of whom has a large salary (what constitutes "large" depends on the specific system being developed); for example, 1000 employees, each with a salary of $250,000, none of whom have had any social security tax withheld and all of whom want retirement withholding.
Case 11: A group of 10 employees, each of whom has a salary of $0.00.

46  Too little data (Cases 2 and 11)  Too much data  The wrong kind of data  The wrong size of data  Uninitialized data
Case 12: An array of 100,000,000 employees. Tests for too much data.
Case 13: A negative salary. The wrong kind of data.
Case 14: A negative number of employees. The wrong kind of data.
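A small bad-data sketch, assuming NUnit; ComputeNetPay is an invented routine used only to show Case 13 (a negative salary) as an executable check:

using System;
using NUnit.Framework;

[TestFixture]
public class BadDataTests
{
    // Hypothetical payroll routine that rejects the wrong kind of data.
    private static decimal ComputeNetPay(decimal salary)
    {
        if (salary < 0m)
            throw new ArgumentOutOfRangeException(nameof(salary), "Salary cannot be negative");
        return salary * 0.75m;   // made-up net-pay rule
    }

    [Test]  // Case 13: a negative salary is the wrong kind of data.
    public void NegativeSalary_IsRejected() =>
        Assert.Throws<ArgumentOutOfRangeException>(() => ComputeNetPay(-1m));
}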

47  Nominal cases (middle-of-the-road, expected values)  Minimum normal configuration  Maximum normal configuration  Compatibility with old data
Case 16: A group of one employee. Tests the minimum normal configuration.
Case 17: A group of 500 employees. Tests the maximum normal configuration.
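A sketch of good-data cases, assuming NUnit; TotalPayroll is an invented helper, and the group sizes follow Cases 16 and 17 above:

using System.Linq;
using NUnit.Framework;

[TestFixture]
public class GoodDataTests
{
    private static decimal TotalPayroll(params decimal[] salaries) => salaries.Sum();

    [Test]  // Nominal case: middle-of-the-road, expected values.
    public void NominalGroup() =>
        Assert.That(TotalPayroll(20000m, 30000m, 40000m), Is.EqualTo(90000m));

    [Test]  // Case 16: minimum normal configuration (a group of one employee).
    public void SingleEmployee() =>
        Assert.That(TotalPayroll(50000m), Is.EqualTo(50000m));

    [Test]  // Case 17: maximum normal configuration (a group of 500 employees).
    public void FiveHundredEmployees() =>
        Assert.That(TotalPayroll(Enumerable.Repeat(1000m, 500).ToArray()), Is.EqualTo(500000m));
}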

48  Which Classes Contain the Most Errors?  Errors by Classification  The scope of most errors is fairly limited  Many errors are outside the domain of construction  Most construction errors are the programmers' fault  Clerical errors (typos) are a surprisingly common source of problems 48

49  Errors by Classification  Misunderstanding the design is a recurring theme in studies of programmer errors  Most errors are easy to fix  It's a good idea to measure your own organization's experiences with errors 49 Source: http://kathrynvercillo.com

50  Planning to Test  Retesting (Regression Testing)  Automated Testing 50

51
1. Stabilize the error
2. Locate the source of the error:
   a) Gather the data
   b) Analyze the data and form a hypothesis
   c) Determine how to prove or disprove the hypothesis
   d) Prove or disprove the hypothesis using the approach from step 2(c)
3. Fix the defect
4. Test the fix
5. Look for similar errors

52 Demo

53  Use all available data  Refine the test cases  Check unit tests  Use available tools  Reproduce the error several different ways  Generate more data to generate more hypotheses  Use the results of negative tests  Brainstorm for possible hypotheses 53

54  Narrow the suspicious region of the code  Be suspicious of classes and routines that have had defects before  Check code that’s changed recently  Expand the suspicious region of the code  Integrate incrementally  Check for common defects  Talk to someone else about the problem  Take a break from the problem 54

55  Understand the problem before you fix it  Understand the program, not just the problem  Confirm the defect diagnosis  Relax  Save the original source code  Fix the problem, not the symptom  Make one change at a time  Add a unit test that exposes the defect (see the sketch below)  Look for similar defects 55 Source: http://www.movingseniorsbc.com
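As an illustration of "add a unit test that exposes the defect", the sketch below assumes NUnit and an invented PageCount helper with a hypothetical off-by-one defect that has just been fixed; the test reproduces the reported failure and stays in the suite as a regression guard:

using NUnit.Framework;

[TestFixture]
public class RegressionTests
{
    // Fixed version; the hypothetical defect was an extra empty page
    // when the item count was an exact multiple of the page size.
    private static int PageCount(int itemCount, int pageSize) =>
        (itemCount + pageSize - 1) / pageSize;

    [Test]  // Failed before the fix (the buggy code returned 3), passes afterwards.
    public void ExactMultiple_DoesNotAddAnEmptyPage() =>
        Assert.That(PageCount(20, 10), Is.EqualTo(2));
}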

56  Your ego tells you that your code is good and doesn't have a defect even when you've seen that it has one.  How "Psychological Set" Contributes to Debugging Blindness 56

57

58  How "Psychological Distance" can help
First Variable / Second Variable / Psychological Distance
stoppt / stcppt: Almost invisible
shiftrn / shiftrm: Almost none
dcount / bcount: Small
claims1 / claims2: Small
product / sum: Large

59  Building Scaffolding to Test Individual Classes  Diff Tools  Test-Data Generators  Coverage Monitors  Data Recorder/Logging  Symbolic Debuggers  System Perturbers  Error Databases 59

60  Diff Tools  Compiler Warning Messages  Set your compiler’s warning level to the highest  Treat warnings as errors  Establish project-wide standards  Extended Syntax and Logic Checking  Profilers  Test Frameworks/Scaffolding  Debuggers 60 Source: http://www.clker.com

61  Testing can give confidence in the quality of the software if it finds few or no defects  If defects are found, the quality increases when those defects are fixed  Lessons learnt from previous mistakes improve future performance 61

62 Questions? http://academy.telerik.com

