
1 Object-Oriented and Classical Software Engineering Eighth Edition, WCB/McGraw-Hill, 2011 Stephen R. Schach

2

3 TESTING

4 Overview Quality issues Non-execution-based testing Execution-based testing What should be tested? Testing versus correctness proofs Who should perform execution-based testing? When testing stops

5 Revise Terminology When humans make a MISTAKE, a FAULT is injected into the system. A FAILURE is the observed incorrect behavior of a product as a consequence of a fault. An ERROR is the amount by which a result is incorrect. A DEFECT is a generic term for fault. QUALITY is the extent to which a product satisfies its specifications.

6 IEEE Glossary of terms 610.12, 1990 (1) The difference between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition. For example, a difference of 30 meters between a computed result and the correct result. (2) An incorrect step, process, or data definition. For example, an incorrect instruction in a computer program.

7 IEEE Glossary of terms 610.12, 1990 A human action that produces an incorrect result. A defect in a hardware device or component; for example, a short circuit or a broken wire. The inability of a system or component to perform its required functions within specified performance requirements.

8 IEEE Glossary of terms 610.12, 1990 The fault tolerance discipline distinguishes between a human action (a mistake), its manifestation (a hardware or software fault), the result of the fault (a failure), and the amount by which the result is incorrect (the error).

9 Remember phases of Software Engineering

10 Phases of Classical Software Engineering Requirements phase: Explore the concept; Elicit the client’s requirements. Analysis (specification) phase: Analyze the client’s requirements; Draw up the specification document; Draw up the software project management plan (“What the product is supposed to do”). Design phase: Architectural design, followed by Detailed design (“How the product does it”). Implementation phase: Coding; Unit testing; Integration; Acceptance testing. Postdelivery maintenance: Corrective maintenance; Perfective maintenance; Adaptive maintenance. Retirement.

11 Testing There are two basic types of testing – Execution-based testing – Non-execution-based testing

12 Program testing Testing is intended to show that a program does what it is intended to do and to discover program defects before it is put into use. When you test software, you execute a program using artificial data. You check the results of the test run for errors, anomalies, or information about the program’s non-functional attributes. Testing can reveal the presence of errors, NOT their absence. Testing is part of a more general verification and validation process, which also includes static validation techniques.

13 Program testing goals To demonstrate to the developer and the customer that the software meets its requirements. – For custom software, this means that there should be at least one test for every requirement in the requirements document. For generic software products, it means that there should be tests for all of the system features, plus combinations of these features, that will be incorporated in the product release. To discover situations in which the behavior of the software is incorrect, undesirable or does not conform to its specification. – Defect testing is concerned with rooting out undesirable system behavior such as system crashes, unwanted interactions with other systems, incorrect computations and data corruption. 13

14 Validation and defect testing The first goal leads to validation testing – You expect the system to perform correctly using a given set of test cases that reflect the system’s expected use. The second goal leads to defect testing – The test cases are designed to expose defects. The test cases in defect testing can be deliberately obscure and need not reflect how the system is normally used. 14

15 Testing process goals Validation testing Validation testing – To demonstrate to the developer and the system customer that the software meets its requirements – A successful test shows that the system operates as intended. Defect testing Defect testing – To discover faults or defects in the software where its behaviour is incorrect or not in conformance with its specification – A successful test is a test that makes the system perform incorrectly and so exposes a defect in the system. 15

16 An input-output model of program testing 16

17 Verification vs validation Verification: "Are we building the product right?" The software should conform to its specification. Validation: "Are we building the right product?" The software should do what the user really requires.

18 Revise Terminology Verification – Process of determining whether a workflow has been correctly carried out – Takes place at the end of each workflow Validation – Intensive evaluation that determines if the product as a whole satisfies its requirements – Takes place just before the product is to be delivered to the client

19 V & V confidence Aim of V & V is to establish confidence that the system is ‘fit for purpose’. Depends on system’s purpose, user expectations and marketing environment – Software purpose The level of confidence depends on how critical the software is to an organisation. – User expectations Users may have low expectations of certain kinds of software. – Marketing environment Getting a product to market early may be more important than finding defects in the program. 19

20 Inspections and testing Software inspections: concerned with analysis of the static system representation to discover problems (static verification). May be supplemented by tool-based document and code analysis. Software testing: concerned with exercising and observing product behaviour (dynamic verification). The system is executed with test data and its operational behaviour is observed.

21 Inspections and testing 21

22 Software inspections These involve people examining the source representation with the aim of discovering anomalies and defects. Inspections do not require execution of a system, so they may be used before implementation. They may be applied to any representation of the system (requirements, design, configuration data, test data, etc.). They have been shown to be an effective technique for discovering program errors.

23 Advantages of inspections During testing, errors can mask (hide) other errors. Because inspection is a static process, you don’t have to be concerned with interactions between errors. Incomplete versions of a system can be inspected without additional costs. If a program is incomplete, then you need to develop specialized test harnesses to test the parts that are available. As well as searching for program defects, an inspection can also consider broader quality attributes of a program, such as compliance with standards, portability and maintainability. 23

24 Inspections and testing Inspections and testing are complementary and not opposing verification techniques. Both should be used during the V & V process. Inspections can check conformance with a specification but NOT conformance with the customer’s real requirements. Inspections CANNOT check non-functional characteristics such as performance, usability, etc. 24

25 Software Quality Not “excellence” The extent to which software satisfies its specifications Every software professional is responsible for ensuring that his or her work is correct – Quality must be built in from the beginning

26 Software Quality Assurance The members of the SQA group must ensure that the developers are doing high-quality work – At the end of each workflow – When the product is complete In addition, quality assurance must be applied to – The process itself Example: Standards

27 Responsibilities of SQA Group Development of standards to which software must conform Establishment of monitoring procedures for ensuring compliance with standards Ensure the quality of the software process Ensure the quality of the product

28 Managerial Independence There must be managerial independence between – The development team – The SQA group Neither group should have power over the other Problem: SQA team finds serious defects as the deadline approaches! What to do? Who decides?

29 Managerial Independence (contd) More senior management must decide whether to – Deliver the product on time but with faults, or – Test further and deliver the product late The decision must take into account the interests of the client and the development organization

30 Non-Execution-Based Testing Testing software without running test cases – Reviewing software, carefully reading through it – Analyzing software mathematically Underlying principles – We should not review our own work: other people should do the review, because we cannot see our own mistakes – Group synergy: review using a team of software professionals with a broad range of skills

31 Walkthroughs A walkthrough team consists of four to six members It includes representatives of – The team responsible for the current workflow – The team responsible for the next workflow – The SQA group – The client The walkthrough is preceded by preparation – Lists of items Items not understood Items that appear to be incorrect

32 Managing Walkthroughs The walkthrough team is chaired by the SQA representative In a walkthrough we detect faults, not correct them – A correction produced by a committee is likely to be of low quality – The cost of a committee correction is too high – Not all items flagged are actually incorrect – A walkthrough should not last longer than 2 hours There is no time to correct faults as well

33 Managing Walkthroughs (contd) A walkthrough can be – Participant driven: reviewers present their lists – Document driven: the person/team responsible for the document walks through it A walkthrough is usually document-driven, rather than participant-driven Walkthrough leader elicits questions and facilitates discussion Verbalization leads to fault finding A walkthrough should never be used for performance appraisal

34 Inspections An inspection has five formal steps 1. Overview – Given by the person responsible for producing the document – Document distributed at the end of the session 2. Preparation – Understand the document in detail – Use statistics of fault types 3. Inspection – One participant walks through the document Every item must be covered Every branch must be taken at least once – Fault finding begins – Within one day the moderator (team leader) produces a written report 4. Rework – The person responsible resolves all faults and problems in the written document 5. Follow-up – Moderator ensures that all issues mentioned in the report are resolved – List faults that were fixed – Clarify incorrectly flagged items

36 Inspections (contd) An inspection team has four members – Moderator – A member of the team performing the current workflow – A member of the team performing the next workflow – A member of the SQA group Special roles are played by the – Moderator – Reader – Recorder

37 Fault Statistics Faults are recorded by severity – Example: Major or minor Faults are recorded by fault type – Examples of design faults: Not all specification items have been addressed Actual and formal arguments do not correspond In general interface and logic errors

38 Fault Statistics (contd) For a given workflow, we compare current fault rates with those of previous products – Early warning!!!! If a disproportionate number of a certain fault type is discovered in certain code artifacts – Check other artifacts for the same fault type We take action if there are a disproportionate number of faults in an artifact – Redesigning from scratch is a good alternative We carry forward fault statistics to the next workflow – We may not detect all faults of a particular type in the current inspection

39 Statistics on Inspections IBM inspections showed up – 82% of all detected faults (1976) – 70% of all detected faults (1978) – 93% of all detected faults (1986) Switching system – 90% decrease in the cost of detecting faults (1986) JPL – Four major faults, 14 minor faults per 2 hours (1990) – Savings of $25,000 per inspection – The number of faults decreased exponentially by phase (1992)

40 Statistics on Inspections (contd) Warning Fault statistics should never be used for performance appraisal – “Killing the goose that lays the golden eggs” Another problem:

41 Comparison of Inspections and Walkthroughs Walkthrough – Two-step, informal process Preparation Analysis Inspection – Five-step, formal process Overview Preparation Inspection Rework Follow-up

42 6.2.5 Strengths and Weaknesses of Reviews Reviews can be effective – Faults are detected early in the process Reviews are less effective if the process is inadequate – Large-scale software should consist of smaller, largely independent pieces – The documentation of the previous workflows has to be complete and available online

43 Metrics for Inspections Inspection rate (e.g., design pages inspected per hour) Fault density (e.g., faults per KLOC inspected) Fault detection rate (e.g., faults detected per hour) Fault detection efficiency (e.g., number of major, minor faults detected per hour)

44 Metrics for Inspections (contd) Does a 50% increase in the fault detection rate mean that – Quality has decreased? Or – The inspection process is more efficient?

45 Execution-Based Testing Organizations spend up to 50% of their software budget on testing – But delivered software is frequently unreliable Dijkstra (1972) – “Program testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence”

46 What Should Be Tested? Definition of execution-based testing – “The process of inferring certain behavioral properties of the product based, in part, on the results of executing the product in a known environment with selected inputs” This definition has troubling implications

47 What Should Be Tested? (contd) “Inference” – We have a fault report, the source code, and — often — nothing else Input data => Desirable output “Known environment” – We never can really know our environment Is the problem due to memory? OS? “Selected inputs” – Sometimes we cannot provide the inputs we want – Simulation is needed Ex: avionics software: how well can you simulate the physical behavior outside the aircraft?

48 What Should Be Tested? (contd) We need to test – correctness (of course), and also – Utility – Reliability – Robustness, and – Performance

49 Utility The extent to which the product meets the user’s needs – Examples: Ease of use Useful functions Cost effectiveness – Compare with competitors

50 Reliability A measure of the frequency and criticality of failure – Mean time between failures – Mean time to repair – Time (and cost) to repair the results of a failure Recover gracefully and quickly

51 Robustness A function of – The range of operating conditions – The possibility of unacceptable results with valid input – The effect of invalid input A robust product should NOT crash even when it is NOT used under valid conditions. – Ex: enter ?@#$?@#$ as a student id!

52 Performance The extent to which space and time constraints are met – Response time, main memory requirements Real-time software is characterized by hard real-time constraints – Nuclear reactor control system. If data are lost because the system is too slow – Ex: weather measurement stations: There is no way to recover those data

53 Correctness A product is correct if it satisfies its output specifications, – independent of its use of computer resources and – when operated under permitted conditions

54 Correctness of specifications Incorrect specification for a sort: Function trickSort which satisfies this specification:

55 Correctness of specifications (contd) Incorrect specification for a sort: Corrected specification for the sort:

56 Correctness (contd) Technically, correctness is Not necessary – Example: C++ compiler Not sufficient – Example: trickSort

57 Testing versus Correctness Proofs A correctness proof is an alternative to execution-based testing A correctness proof is a mathematical technique for showing that a product is correct – Correct = satisfies its specification

58 Example of a Correctness Proof The code segment to be proven correct Correct: After the code is executed the variable s will contain the sum of the n elements of the array y.

59 Example of a Correctness Proof (contd) A flowchart equivalent of the code segment Figure 6.5

60 Example of a Correctness Proof (contd) – Input specification – Output specification – Loop invariant – Assertions Add an assertion before and after each statement.
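
The code segment itself appears only as a figure in the original slides. A minimal Java sketch of what such a summation loop might look like, with the input specification, loop invariant, and output specification expressed as assert statements, is given below; everything beyond the names s, y, and n is an assumption made for illustration.

    // Hedged sketch: sum the first n elements of array y into s, with the
    // loop invariant checked at run time via assert statements.
    // Run with assertions enabled: java -ea Sum
    public class Sum {
        static int sum(int[] y, int n) {
            // Input specification: y is non-null, has at least n elements, n >= 0
            assert y != null && n >= 0 && n <= y.length;
            int s = 0;
            int k = 0;
            while (k < n) {
                // Loop invariant: s == y[0] + y[1] + ... + y[k-1]
                assert s == partialSum(y, k);
                s = s + y[k];
                k = k + 1;
                assert s == partialSum(y, k);   // invariant restored after the body
            }
            // Output specification: s is the sum of the n elements of y
            assert s == partialSum(y, n);
            return s;
        }

        // Reference helper used only inside the assertions.
        private static int partialSum(int[] y, int k) {
            int t = 0;
            for (int i = 0; i < k; i++) t += y[i];
            return t;
        }

        public static void main(String[] args) {
            System.out.println(sum(new int[] {3, 1, 4, 1, 5}, 5));   // prints 14
        }
    }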

61 Example of a Correctness Proof (contd)

62 Three Myths of Correctness Proving Software engineers do not have enough mathematics for proofs – Most computer science majors either know or can learn the mathematics needed for proofs Proving is too expensive to be practical – Economic viability is determined from cost–benefit analysis Proving is too hard – Many nontrivial products have been successfully proven – Tools like theorem provers can assist us

63 Difficulties with Correctness Proving Can we trust a theorem prover ?

64 Difficulties with Correctness Proving (contd) How do we find input–output specifications, loop invariants? What if the specifications are wrong? We can never be sure that specifications or a verification system are correct

65 Correctness Proofs and Software Engineering (contd) Correctness proofs are a vital software engineering tool, where appropriate: – When human lives are at stake – When indicated by cost–benefit analysis – When the risk of not proving is too great Also, informal proofs can improve software quality – Use the assert statement Model checking is a new technology that may eventually take the place of correctness proving

66 Who Should Perform Execution-Based Testing? Programming is constructive Testing is destructive – A successful test finds a fault So, programmers should not test their own code artifacts

67 Who Should Perform Execution-Based Testing? (contd) Solution: – The programmer does informal testing – The SQA group then does systematic testing – The programmer debugs the module All test cases must be – Planned beforehand, including the expected output, and – Retained afterwards

68 When Testing Stops Only when the product has been irrevocably discarded

69 Software Testing - objective 1. Execute a program to find errors 2. A good test case has a high probability of finding errors 3. A successful test finds a new error (Diagram: software specs, test specs, testing, results, and test reports, with checks between them.)

70 A model of the software testing process 70

71 Stages of testing Development testing the system is tested during development to discover bugs and defects. Release testing a separate testing team test a complete version of the system before it is released to users. User testing users or potential users of a system test the system in their own environment. 71

72 Development testing Development testing includes all testing activities that are carried out by the team developing the system. Unit testing: individual program units or object classes are tested; should focus on testing the functionality of objects or methods. Component testing: several individual units are integrated to create composite components; should focus on testing component interfaces. System testing: some or all of the components in a system are integrated and the system is tested as a whole; should focus on testing component interactions.

73 Unit testing Unit testing is the process of testing individual components in isolation. It is a defect testing process. Units may be: – Individual functions or methods within an object – Object classes with several attributes and methods – Composite components with defined interfaces used to access their functionality. 73

74 Object class testing Complete test coverage of a class involves – Testing all operations associated with an object – Setting and interrogating all object attributes – Exercising the object in all possible states. Inheritance makes it more difficult to design object class tests as the information to be tested is not localised. 74

75 The weather station object interface testing Need to define test cases for reportWeather, calibrate, test, startup and shutdown. Using a state model, identify sequences of state transitions to be tested and the event sequences to cause these transitions. For example: – Shutdown -> Waiting -> Shutdown – Waiting -> Calibrating -> Testing -> Transmitting -> Waiting – Waiting -> Collecting -> Waiting -> Summarizing -> Transmitting -> Waiting
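
A state-transition test for the first sequence above might look like the JUnit sketch below. The WeatherStation stub, its State values, and getState() are invented here purely for illustration; only the operation and state names come from the slide.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class WeatherStationTest {
        enum State { SHUTDOWN, WAITING }

        // Minimal stub so the transition sequence can be exercised.
        static class WeatherStation {
            private State state = State.SHUTDOWN;
            void startup()  { state = State.WAITING; }
            void shutdown() { state = State.SHUTDOWN; }
            State getState() { return state; }
        }

        @Test
        public void shutdownToWaitingToShutdown() {
            WeatherStation ws = new WeatherStation();
            ws.startup();                          // Shutdown -> Waiting
            assertEquals(State.WAITING, ws.getState());
            ws.shutdown();                         // Waiting -> Shutdown
            assertEquals(State.SHUTDOWN, ws.getState());
        }
    }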

76 Automated testing Whenever possible, unit testing should be automated so that tests are run and checked without manual intervention. In automated unit testing, you make use of a test automation framework (such as JUnit) to write and run your program tests. Unit testing frameworks provide generic test classes that you extend to create specific test cases. They can then run all of the tests that you have implemented and report, often through some GUI, on the success or otherwise of the tests. 76

77 Automated test components A setup part, where you initialize the system with the test case, namely the inputs and expected outputs. A call part, where you call the object or method to be tested. An assertion part where you compare the result of the call with the expected result. If the assertion evaluates to true, the test has been successful if false, then it has failed. 77
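
A minimal JUnit sketch showing these three parts follows; the method under test (a trivial max) is a stand-in chosen only for illustration.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class MaxTest {
        // Code under test (illustrative).
        static int max(int a, int b) { return a > b ? a : b; }

        @Test
        public void maxOfTwoNumbers() {
            // Setup part: the inputs and the expected output
            int a = 3, b = 7, expected = 7;
            // Call part: invoke the method being tested
            int actual = max(a, b);
            // Assertion part: compare the result with the expected result
            assertEquals(expected, actual);
        }
    }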

78 Unit test effectiveness The test cases should show that, when used as expected, the component that you are testing does what it is supposed to do. If there are defects in the component, these should be revealed by test cases. This leads to 2 types of unit test case: – The first of these should reflect normal operation of a program and should show that the component works as expected. – The other kind of test case should be based on testing experience of where common problems arise. It should use abnormal inputs to check that these are properly processed and do not crash the component. 78

79 Testing strategies Partition testing Partition testing, where you identify groups of inputs that have common characteristics and should be processed in the same way. – You should choose tests from within each of these groups. Guideline-based testing Guideline-based testing, where you use testing guidelines to choose test cases. – These guidelines reflect previous experience of the kinds of errors that programmers often make when developing components. 79

80 Partition testing Input data and output results often fall into different classes where all members of a class are related. Each of these classes is an equivalence partition or domain where the program behaves in an equivalent way for each class member. Test cases should be chosen from each partition. 80

81 Equivalence partitioning 81

82 Equivalence Partitioning Typically the universe of all possible test cases is so large that you cannot try them all You have to select a relatively small number of test cases to actually run Which test cases should you choose? Equivalence partitioning helps answer this question

83 Equivalence Partitioning Partition the test cases into "equivalence classes" Each equivalence class contains a set of "equivalent" test cases Two test cases are considered to be equivalent if we expect the program to process them both in the same way (i.e., follow the same path through the code) If you expect the program to process two test cases in the same way, only test one of them, thus reducing the number of test cases you have to run

84 Equivalence Partitioning First-level partitioning: Valid vs. Invalid test cases Valid Invalid

85 Equivalence Partitioning Partition valid and invalid test cases into equivalence classes

86 Equivalence Partitioning Create a test case for at least one value from each equivalence class

87 Equivalence Partitioning When designing test cases, you may use different definitions of “equivalence”, each of which will partition the test case space differently – Example: int Add(n1, n2, n3, …) Equivalence Definition 1: partition test cases by the number of inputs (1, 2, 3, etc.) Equivalence Definition 2: partition test cases by the number signs they contain (positive, negative, both) Equivalence Definition 3: partition test cases by the magnitude of operands (large numbers, small numbers, both) Etc.

88 Equivalence Partitioning When designing test cases, you may use different definitions of “equivalence”, each of which will partition the test case space differently – Example: string Fetch(URL) Equivalence Definition 1: partition test cases by URL protocol (“http”, “https”, “ftp”, “file”, etc.) Equivalence Definition 2: partition test cases by type of file being retrieved (HTML, GIF, JPEG, Plain Text, etc.) Equivalence Definition 3: partition test cases by length of URL (very short, short, medium, long, very long, etc.) Etc.

89 Equivalence Partitioning If an oracle is available, the test values in each equivalence class can be randomly generated. This is more useful than always testing the same static values. – Oracle: something that can tell you whether a test passed or failed Test multiple values in each equivalence class. Often you’re not sure if you have defined the equivalence classes correctly or completely, and testing multiple values in each class is more thorough than relying on a single value.
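
A sketch of this idea is shown below: random values are drawn from one equivalence class and every result from the code under test is checked against an independent oracle. Both the validator and the oracle here are invented for illustration.

    import java.util.Random;

    public class RandomPartitionTest {
        // Code under test (hypothetical): accepts integers in [-99, 99].
        static boolean accepts(int n) { return n >= -99 && n <= 99; }

        // Oracle: an independent way of deciding the expected result.
        static boolean oracle(int n) { return Math.abs(n) <= 99; }

        public static void main(String[] args) {
            Random rnd = new Random();
            // Draw many random values from the valid class [10, 99].
            for (int i = 0; i < 100; i++) {
                int n = 10 + rnd.nextInt(90);     // 10 .. 99
                if (accepts(n) != oracle(n)) {
                    throw new AssertionError("Mismatch for input " + n);
                }
            }
            System.out.println("All random values in [10, 99] agreed with the oracle");
        }
    }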

90 Equivalence Partitioning - examples Input: An integer N such that -99 <= N <= 99. Valid and invalid equivalence classes: ?? Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Valid and invalid equivalence classes: ??

91 Equivalence Partitioning - examples Input: An integer N such that -99 <= N <= 99. Valid equivalence classes: [-99, -10], [-9, -1], 0, [1, 9], [10, 99]. Invalid equivalence classes: ? Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Valid and invalid equivalence classes: ??

92 Equivalence Partitioning - examples Input: An integer N such that -99 <= N <= 99. Valid equivalence classes: [-99, -10], [-9, -1], 0, [1, 9], [10, 99]. Invalid equivalence classes: < -99; > 99; malformed numbers {12-, 1-2-3, …}; non-numeric strings {junk, 1E2, $13}; empty value. Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Valid and invalid equivalence classes: ??

93 Equivalence Partitioning - examples Input: An integer N such that -99 <= N <= 99. Valid equivalence classes: [-99, -10], [-9, -1], 0, [1, 9], [10, 99]. Invalid equivalence classes: < -99; > 99; malformed numbers {12-, 1-2-3, …}; non-numeric strings {junk, 1E2, $13}; empty value. Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Valid equivalence classes: 555-5555; (555)555-5555; 555-555-5555; 200 <= Area code <= 999; 200 < Prefix <= 999. Invalid equivalence classes: ?

94 Equivalence Partitioning - examples Input: An integer N such that -99 <= N <= 99. Valid equivalence classes: [-99, -10], [-9, -1], 0, [1, 9], [10, 99]. Invalid equivalence classes: < -99; > 99; malformed numbers {12-, 1-2-3, …}; non-numeric strings {junk, 1E2, $13}; empty value. Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Valid equivalence classes: 555-5555; (555)555-5555; 555-555-5555; 200 <= Area code <= 999; 200 < Prefix <= 999. Invalid equivalence classes: invalid format (5555555, (555)(555)5555, etc.); area code outside [200, 999]; area code with non-numeric characters; similar for Prefix and Suffix.
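
Picking at least one value from each class in the integer-N row gives a test set like the JUnit sketch below. The validator isValid() is an illustrative stand-in that parses a string and range-checks the result; it is not part of the slides.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class PartitionTest {
        // Illustrative code under test: accepts an integer N with -99 <= N <= 99.
        static boolean isValid(String s) {
            try {
                int n = Integer.parseInt(s.trim());
                return n >= -99 && n <= 99;
            } catch (NumberFormatException e) {
                return false;   // malformed, non-numeric, or empty input
            }
        }

        @Test
        public void oneValuePerValidClass() {
            assertTrue(isValid("-50"));    // [-99, -10]
            assertTrue(isValid("-5"));     // [-9, -1]
            assertTrue(isValid("0"));      // 0
            assertTrue(isValid("5"));      // [1, 9]
            assertTrue(isValid("50"));     // [10, 99]
        }

        @Test
        public void oneValuePerInvalidClass() {
            assertFalse(isValid("-150"));  // < -99
            assertFalse(isValid("150"));   // > 99
            assertFalse(isValid("1-2-3")); // malformed number
            assertFalse(isValid("junk"));  // non-numeric string
            assertFalse(isValid(""));      // empty value
        }
    }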

95 Boundary Value Analysis When choosing values from an equivalence class to test, use the values that are most likely to cause the program to fail Errors tend to occur at the boundaries of equivalence classes rather than at the "center" – if (200 < areaCode && areaCode < 999) { // valid area code } – Wrong! – if (200 <= areaCode && areaCode <= 999) { // valid area code } – Correct – Testing area codes 200 and 999 would catch this error, but a center value like 770 would not In addition to testing center values, we should also test boundary values – Right on a boundary – Very close to a boundary on either side
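
Boundary-value test cases for exactly this defect are sketched below; isValidAreaCode() is an illustrative helper written with the corrected comparison, and the tests on 200 and 999 are the ones that would fail against the wrong version.

    import org.junit.Test;
    import static org.junit.Assert.*;

    public class BoundaryTest {
        // Illustrative helper, using the corrected <= comparisons.
        static boolean isValidAreaCode(int areaCode) {
            return 200 <= areaCode && areaCode <= 999;
        }

        @Test
        public void boundariesOfTheValidRange() {
            assertFalse(isValidAreaCode(199));   // just below the lower boundary
            assertTrue(isValidAreaCode(200));    // on the boundary: exposes "200 <"
            assertTrue(isValidAreaCode(999));    // on the boundary: exposes "< 999"
            assertFalse(isValidAreaCode(1000));  // just above the upper boundary
        }

        @Test
        public void centerValueAloneWouldMissTheDefect() {
            assertTrue(isValidAreaCode(770));    // passes for both versions
        }
    }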

96 Boundary Value Analysis Create test cases to test boundaries of equivalence classes

97 Boundary Value Analysis - examples Input: A number N such that -99 <= N <= 99. Boundary cases: ? Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Boundary cases: ?

98 Boundary Value Analysis - examples Input: A number N such that -99 <= N <= 99. Boundary cases: -100, -99, -98; -10, -9; -1, 0, 1; 9, 10; 98, 99, 100. Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Boundary cases: ?

99 Boundary Value Analysis - examples Input: A number N such that -99 <= N <= 99. Boundary cases: -100, -99, -98; -10, -9; -1, 0, 1; 9, 10; 98, 99, 100. Input: Phone Number (Area code: [200, 999]; Prefix: (200, 999]; Suffix: any 4 digits). Boundary cases: Area code 199, 200, 201; Area code 998, 999, 1000; Prefix 200, 199, 198; Prefix 998, 999, 1000; Suffix 3 digits, 5 digits.

100 Boundary Value Analysis - examples Numeric values are often entered as strings which are then converted to numbers internally [int x = atoi(str);] This conversion requires the program to distinguish between digits and non-digits A boundary case to consider: will the program accept / and : as digits? Char: / 0 1 2 3 4 5 6 7 8 9 : ASCII: 47 48 49 50 51 52 53 54 55 56 57 58
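
The point is that '/' (ASCII 47) and ':' (ASCII 58) sit immediately outside the digit range '0'..'9' (ASCII 48..57), so they are natural boundary cases for a hand-written digit check. A small illustrative Java sketch:

    public class DigitBoundary {
        // Illustrative digit check of the kind a hand-written converter might use.
        static boolean isAsciiDigit(char c) {
            return c >= '0' && c <= '9';
        }

        public static void main(String[] args) {
            System.out.println(isAsciiDigit('/'));  // false: 47 is just below '0' (48)
            System.out.println(isAsciiDigit('0'));  // true:  lower boundary
            System.out.println(isAsciiDigit('9'));  // true:  upper boundary
            System.out.println(isAsciiDigit(':'));  // false: 58 is just above '9' (57)
        }
    }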

101 Equivalence partitions

102 Testing guidelines (sequences) Test software with sequences which have only a single value. Use sequences of different sizes in different tests. Derive tests so that the first, middle and last elements of the sequence are accessed. Test with sequences of zero length. 102
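
Applied to a simple illustrative sequence operation, these four guidelines translate directly into test cases, as in the JUnit sketch below (largest() is an invented method under test, not from the slides).

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class SequenceGuidelineTest {
        // Illustrative code under test: the largest element of a sequence.
        static int largest(int[] a) {
            if (a.length == 0) throw new IllegalArgumentException("empty sequence");
            int max = a[0];
            for (int x : a) if (x > max) max = x;
            return max;
        }

        @Test
        public void singleValueSequence() {
            assertEquals(7, largest(new int[] {7}));
        }

        @Test
        public void sequencesOfDifferentSizes() {
            assertEquals(9, largest(new int[] {9, 2}));
            assertEquals(9, largest(new int[] {2, 5, 9, 1, 3}));
        }

        @Test
        public void firstMiddleAndLastElementsAccessed() {
            assertEquals(9, largest(new int[] {9, 1, 2}));  // largest at the first position
            assertEquals(9, largest(new int[] {1, 9, 2}));  // largest in the middle
            assertEquals(9, largest(new int[] {1, 2, 9}));  // largest at the last position
        }

        @Test(expected = IllegalArgumentException.class)
        public void zeroLengthSequence() {
            largest(new int[] {});
        }
    }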

103 General testing guidelines Choose inputs that force the system to generate all error messages Design inputs that cause input buffers to overflow Repeat the same input or series of inputs numerous times Force invalid outputs to be generated Force computation results to be too large or too small. 103

104 Testing policies Exhaustive system testing is impossible so testing policies which define the required system test coverage may be developed. Examples of testing policies: – All system functions that are accessed through menus should be tested. – Combinations of functions (e.g. text formatting) that are accessed through the same menu must be tested. – Where user input is provided, all functions must be tested with both correct and incorrect input. 104

105 Test-driven development Test-driven development (TDD) is an approach to program development in which you inter-leave testing and code development. Tests are written before code and ‘passing’ the tests is the critical driver of development. You develop code incrementally, along with a test for that increment. You don’t move on to the next increment until the code that you have developed passes its test. TDD was introduced as part of agile methods such as Extreme Programming. However, it can also be used in plan-driven development processes. 105

106 Test-driven development 106

107 TDD process activities Start by identifying the increment of functionality that is required. This should normally be small and implementable in a few lines of code. Write a test for this functionality and implement this as an automated test. Run the test, along with all other tests that have been implemented. Initially, you have not implemented the functionality so the new test will fail. Implement the functionality and re-run the test. Once all tests run successfully, you move on to implementing the next chunk of functionality. 107
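
A small TDD step might look like the sketch below: the test for the next tiny increment (an invented toRoman() that must map 1 to "I") is written and run first, then just enough code is added to make it pass before moving on to the next increment. The example is illustrative, not from the slides.

    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class RomanNumeralTest {
        // Step 1: write the test for the increment before the code exists,
        // so the first run fails.
        @Test
        public void oneIsI() {
            assertEquals("I", RomanNumeral.toRoman(1));
        }
    }

    // Step 2: write just enough code to make this test (and all earlier tests)
    // pass, then move on to the next increment (2 -> "II", 4 -> "IV", ...).
    class RomanNumeral {
        static String toRoman(int n) {
            return "I";   // simplest code that passes the current test
        }
    }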

108 Benefits of test-driven development Code coverage – Every code segment that you write has at least one associated test so all code written has at least one test. Regression testing – A regression test suite is developed incrementally as a program is developed. Simplified debugging – When a test fails, it should be obvious where the problem lies. The newly written code needs to be checked and modified. System documentation – The tests themselves are a form of documentation that describe what the code should be doing. 108

109 Regression testing Regression testing is testing the system to check that changes have not ‘broken’ previously working code. In a manual testing process, regression testing is expensive but, with automated testing, it is simple and straightforward. All tests are rerun every time a change is made to the program. Tests must run ‘successfully’ before the change is committed. 109

110 Release testing Release testing is the process of testing a particular release of a system that is intended for use outside of the development team. The primary goal of the release testing process is to convince the supplier of the system that it is good enough for use. – Release testing, therefore, has to show that the system delivers its specified functionality, performance and dependability, and that it does not fail during normal use. Release testing is usually a black-box testing process where tests are only derived from the system specification. 110

111 Release testing and system testing Release testing is a form of system testing. Important differences: – A separate team that has not been involved in the system development, should be responsible for release testing. – System testing by the development team should focus on discovering bugs in the system (defect testing). The objective of release testing is to check that the system meets its requirements and is good enough for external use (validation testing). 111

112 Requirements based testing Requirements-based testing involves examining each requirement and developing a test or tests for it. MHC-PMS requirements: – If a patient is known to be allergic to any particular medication, then prescription of that medication shall result in a warning message being issued to the system user. – If a prescriber chooses to ignore an allergy warning, they shall provide a reason why this has been ignored. 112

113 Requirements tests Set up a patient record with no known allergies. Prescribe medication for allergies that are known to exist. Check that a warning message is not issued by the system. Set up a patient record with a known allergy. Prescribe the medication that the patient is allergic to, and check that the warning is issued by the system. Set up a patient record in which allergies to two or more drugs are recorded. Prescribe both of these drugs separately and check that the correct warning for each drug is issued. Prescribe two drugs that the patient is allergic to. Check that two warnings are correctly issued. Prescribe a drug that issues a warning and overrule that warning. Check that the system requires the user to provide information explaining why the warning was overruled.
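
The first two of these requirements tests could be automated along the lines of the JUnit sketch below. The PatientRecord API is invented purely for illustration; only the requirement itself comes from the slides.

    import org.junit.Test;
    import static org.junit.Assert.*;
    import java.util.HashSet;
    import java.util.Optional;
    import java.util.Set;

    public class AllergyWarningTest {
        // Minimal stand-in for the prescribing component under test.
        static class PatientRecord {
            private final Set<String> allergies = new HashSet<>();
            void addAllergy(String drug) { allergies.add(drug); }
            // Returns a warning message if the drug conflicts with a known allergy.
            Optional<String> prescribe(String drug) {
                return allergies.contains(drug)
                        ? Optional.of("Warning: patient is allergic to " + drug)
                        : Optional.empty();
            }
        }

        @Test
        public void noWarningWhenThereIsNoKnownAllergy() {
            PatientRecord record = new PatientRecord();        // no known allergies
            assertFalse(record.prescribe("penicillin").isPresent());
        }

        @Test
        public void warningWhenAnAllergicDrugIsPrescribed() {
            PatientRecord record = new PatientRecord();
            record.addAllergy("penicillin");                    // known allergy
            assertTrue(record.prescribe("penicillin").isPresent());
        }
    }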

114 Features tested by scenario Authentication by logging on to the system. Downloading and uploading of specified patient records to a laptop. Home visit scheduling. Encryption and decryption of patient records on a mobile device. Record retrieval and modification. Links with the drugs database that maintains side-effect information. The system for call prompting. 114

115 Performance testing Part of release testing may involve testing the emergent properties of a system, such as performance and reliability. Tests should reflect the profile of use of the system. Performance tests usually involve planning a series of tests where the load is steadily increased until the system performance becomes unacceptable. Stress testing is a form of performance testing where the system is deliberately overloaded to test its failure behaviour. 115

116 User testing User or customer testing is a stage in the testing process in which users or customers provide input and advice on system testing. User testing is essential, even when comprehensive system and release testing have been carried out. – The reason for this is that influences from the user’s working environment have a major effect on the reliability, performance, usability and robustness of a system. These cannot be replicated in a testing environment. 116

117 Types of user testing Alpha testing – Users of the software work with the development team to test the software at the developer’s site. Beta testing – A release of the software is made available to users to allow them to experiment and to raise problems that they discover with the system developers. Acceptance testing – Customers test a system to decide whether or not it is ready to be accepted from the system developers and deployed in the customer environment. Primarily for custom systems. 117

118 The acceptance testing process

119 Stages in the acceptance testing process Define acceptance criteria Plan acceptance testing Derive acceptance tests Run acceptance tests Negotiate test results Reject/accept system 119

120 Software Testing There are two main types of Software Testing Black Box White Box

121 Black Box Black box testing... You know the functionality – Given that you know what it is supposed to do, you design tests that make it do what you think that it should do – From the outside, you are testing its functionality against the specs/requirements – For software this is testing the interface What is input to the system? What can you do from the outside to change the system? What is output from the system? – Tests the functionality of the system by observing its external behavior – No knowledge of how it goes about meeting the goals

122 White Box White box testing... You know the code – Given knowledge of the internal workings, you thoroughly test what is happening on the inside – Close examination of procedural level of detail – Logical paths through code are tested Conditionals Loops Branches – Status is examined in terms of expected values – Impossible to thoroughly exercise all paths Exhaustive testing grows without bound – Can be practical if a limited number of “important” paths are evaluated – Can be practical to examine and test important data structures

123 When & What to Test? Level of Detail Project Time Low High Acceptance Testing Requirements Specifications Analysis Design System Testing Object Design Unit Testing Integration Testing

124 Types of Testing Unit Testing – Done by programmer(s) – Generally all white box Integration/Component Testing – Done by programmer as they integrate their code into code base – Generally white box, maybe some black box Functional/System Testing – It is recommended that this be done by an external test group – Mostly black box so that testing is not ‘corrupted’ by too much knowledge Acceptance Testing – Generally done by customer/customer representative in their environment through the GUI... Definitely black box

125 The Testing Process

126 Planning a Black Box Test Case Look at the requirements/problem statement to generate test cases. Said another way: test cases should be traceable to requirements. The “Test Case Grid” contains: ID of test case Description of test input conditions Expected/Predicted results Actual Results

127 Test Case Grid For your analysis reports, please use the following format:
Id | Input | Expected Result | Actual Result

128 Black Box Test Planning The inputs must be very specific The expected results must be very specific You must write the test case so anyone on the team can run the exact test case and get the exact same result/sequence of events Example: “Passing grade?” Input field: – Correct input: Grade = 90; Grade =20 – Incorrect input: “a passing grade” “a failing grade”

129 Example Test Case Grid Your test case grid (last section of your analysis document) should identify at least 15 test cases. Example: “Passing grade?”
Id | Input | Expected Result | Actual Result
1 | Grade < 70% | Fail the class with less than a C | (Leave blank until tested)
2 | Grade > 70% | Pass the class with at least a C |

130 Bad Test Case Example What is a failing and passing grade? Problem: The “input” value is too vague.
Id | Input | Expected Result | Actual Result
1 | A failing grade | Fail the class with less than a C | (Leave blank until tested)
2 | A passing grade | Pass the class with at least a C |

131 Failure Test Cases What if the input type is wrong (You’re expecting an integer, they input a float. You’re expecting a character, you get an integer.)? What if the customer takes an illogical path through your functionality? What if mandatory fields are not entered? What if the program is aborted abruptly or input or output devices are unplugged?

132 Using a Flow Chart (Flowchart key: decision statements. The decisions "if x >= 0" and "if x <= 100" branch to either check = true or check = false.) Mapping functionality in a flow chart makes the test case generation process much easier.
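
Reading the flowchart as the usual range check (check becomes true only when both decisions take their true branch; this reading is an assumption about the figure), the equivalent code and the test cases suggested by its decision points look like this:

    public class RangeCheck {
        // Assumed reading of the flowchart: true only for 0 <= x <= 100.
        static boolean check(int x) {
            boolean check = false;
            if (x >= 0) {
                if (x <= 100) {
                    check = true;
                }
            }
            return check;
        }

        public static void main(String[] args) {
            // Each decision in the flowchart suggests boundary test cases.
            System.out.println(check(-1));    // false: first decision takes the false branch
            System.out.println(check(0));     // true:  boundary of the first decision
            System.out.println(check(100));   // true:  boundary of the second decision
            System.out.println(check(101));   // false: second decision takes the false branch
        }
    }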

133 One input leads to One output A piece of code with inputs a, b, and c. It produces the outputs x, y, and z.

134 One-to-One Testing Each input only has one valid expected result. To check for a valid ATM Card the following is NOT correct.
Id | Input | Expected Result | Actual Result
1 | Read ATM Card | If card is valid, accept card and ask for pin. If card is invalid, “No ATM Card” exception is thrown and card is returned to the user. |

135 The correct way… Test for ATM card – Input: Read ATM Card – Expected result: Accept card and ask for pin # – Input: Read invalid card – Expected result: “No ATM Card” exception is thrown and card is returned to the user.
Id | Input | Expected Result | Actual Result
1 | Read ATM Card | Accept card and ask for pin # |
2 | Read Invalid Card | “No ATM Card” exception is thrown and card is returned to the user. |

136 Another test… Test for “get PIN” Input: 4 digit entry of a stolen card Expected result: “Stolen Card” exception is thrown
Id | Input | Expected Result | Actual Result
1 | Read ATM Card | Accept card and ask for pin # |
2 | Read Invalid Card | “No ATM Card” exception is thrown and card is returned to the user. |
3 | Invalid PIN Entered | “Stolen Card” exception is thrown and card is destroyed. |

137 What’s Next? After tests are performed, the results are recorded.
Id | Input | Expected Result | Actual Result
1 | Read ATM Card | Accept card and ask for pin # | Accepted the card and asked for a pin.
2 | Read Invalid Card | “No ATM Card” exception is thrown and card is returned to the user. |
3 | Invalid PIN Entered | “Stolen Card” exception is thrown and card is destroyed. |

138 What’s Next? After results are recorded, the testing report is created.
Id | Input | Expected Result | Actual Result | Status
1 | Read ATM Card | Accept card and ask for pin # | Accepted the card and asked for a pin. | Passed
2 | Read Invalid Card | “No ATM Card” exception is thrown and card is returned to the user. | | Failed
3 | Invalid PIN Entered | “Stolen Card” exception is thrown and card is destroyed. | |

139 Verification & Validation Testing is performed during the system implementation stage and the results are delivered in the Final Report. The Test Report provides validation and verification for the software program. Verification: "Are we building the product right?" The software should conform to its specification Validation: "Are we building the right product?" The software should do what the user really requires

