
1 CSE 7314 Software Testing and Reliability Robert Oshana Lectures 5 - 8 oshana@airmail.net

2 Chapter 3 Master test planning

3 Introduction
Test planning is key to successful software testing, yet it is often omitted due to time constraints, lack of training, or cultural bias.
A test schedule is not a test plan! Too often, testing is measured only by the number of test cases run.

4 Levels of test planning
Master test plan: orchestrates testing at all levels
– Unit
– Integration
– System
– Acceptance
– Others: alpha, beta, customer acceptance, build, string, and development testing

5 Levels of test planning

6 Importance of test planning
The goal is to address the important issues of testing:
– Strategy
– Resource utilization
– Responsibilities
– Risks
– Priorities
The planning process is more important than the resulting document.

7 Importance of test planning

8 Audience analysis
The audience for a unit test plan is very different from the audience for a system test plan.
Different audiences have different tolerances for what they will read.

9 Activity timing
Test planning should start as early as possible
– at the same time the requirements specification and project plan are being developed
Test planning can have a significant impact on the project plan.
Use TBDs for details that are not yet known!

10 Timing of test planning

11 Standard templates
It is important to have a template; IEEE Std 829-1998, IEEE Standard for Software Test Documentation, is a good starting point.
Customize it as necessary and strive to improve its usability over time.

12 IEEE test plan template
– Test plan identifier
– Table of contents
– References
– Glossary
– Introduction
– Test items
– Software risk issues
– Features to be tested
– Features not to be tested
– Approach
– Item pass/fail criteria
– Suspension criteria and resumption requirements
– Test deliverables
– Testing tasks
– Environmental needs
– Responsibilities
– Staffing and training needs
– Schedule
– Risks and contingencies
– Approvals

13 1.0 Test Plan Identifier
A unique identifier for keeping track of the most current version of the plan.
Put the plan under configuration management; it must be kept up to date!
No "post-implementation" test planning!

14 2.0 Table of Contents
Lists each topic in the plan, including references, glossaries, and appendices.
Readers can use it to quickly find the topics that interest them.

15 Introduction or Scope
Covers the scope of the project and the scope of the plan.
A master test plan may cover an entire project (e.g., an embedded system), or a large project may have several MTPs.

16 Scope of test and evaluation plans

17 Test items
Describes what is to be tested from the developer's point of view, oriented to the level of the test plan.
Must reference the requirements spec, design spec, user's guide, operations guide, etc.

18 Risk issues section
Used to determine where the primary focus of testing should be; it is hard to test everything in a given release.
Software risks that drive testing:
– Interfaces to other systems
– Modules with a history of defects
– Security, performance, and reliability software
– Features that are difficult to change or test

19 Features to be tested
What is to be tested from the customer's point of view (as opposed to test items, which take the developer's viewpoint).
Prioritizing the features may help determine which can be dropped from testing if time runs short.

20 Prioritized list with cut line
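
As a rough illustration of the cut-line idea, here is a minimal Python sketch; all feature names, risk scores, and hour estimates are hypothetical. Features are ranked by risk, and whatever fits within the available test hours falls above the cut line.

```python
# Minimal sketch (hypothetical data): rank features by risk and draw a
# cut line where the available test hours run out.

features = [
    # (name, risk score 1-10, estimated test hours)
    ("login/security", 9, 40),
    ("billing interface", 8, 60),
    ("report export", 5, 30),
    ("help screens", 2, 20),
]

budget_hours = 100

# Highest-risk features are tested first
ranked = sorted(features, key=lambda f: f[1], reverse=True)

spent = 0
for name, risk, hours in ranked:
    spent += hours
    status = "TEST" if spent <= budget_hours else "BELOW CUT LINE"
    print(f"{name:20s} risk={risk:2d} hours={hours:3d} -> {status}")
```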

21 Features not to be tested
Some features do not need to be tested:
– Not changed in this release
– Not available for use
Listing them reduces risk by raising awareness
– and can help you obtain additional resources
The list may grow if the project falls behind schedule.

22 Approach/strategy section
A description of how testing will be performed (approach).
Explains any issues that have a major impact on the success of testing and of the project (strategy).
Include entry and exit criteria for each level of testing.
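
Entry and exit criteria work best when they are measurable. A minimal sketch, assuming hypothetical criteria and threshold values:

```python
# Sketch (illustrative criteria): evaluate entry criteria for a test
# level against current project measurements.

entry_criteria = {
    "unit_tests_passed_pct": lambda v: v >= 100,   # all unit tests pass
    "open_blocking_defects": lambda v: v == 0,
    "test_environment_ready": lambda v: v is True,
}

current = {
    "unit_tests_passed_pct": 100,
    "open_blocking_defects": 2,
    "test_environment_ready": True,
}

failed = [name for name, ok in entry_criteria.items() if not ok(current[name])]
if failed:
    print("Entry criteria not met:", ", ".join(failed))
else:
    print("Entry criteria met; this test level can begin.")
```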

23 Influences on strategy decisions

24 Methodology decisions
Use an "off the shelf" methodology or create your own:
– How many testers, and when?
– How many beta sites?
– Which testing techniques?
– Which testing levels?
The test environment becomes more realistic the higher the test level.

25 Test level decisions

26 Typical test levels

27 Model for testing
[Figure: a testing process flow. Fully specified and compiled code undergoes static analysis (code inspection, correctness verification, tools such as lint) and, optionally, unit test (code coverage, oracle comparison) and run-time code analysis (leak detectors, instrumentation, field data). Operational profiles drive statistical testing with usage models; supporting activities include function-theoretic sequence enumeration, a testing oracle, and test grammar and usage model creation via function mapping rules.]
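
The statistical-testing branch of this model can be sketched with a small example. The Markov usage model below is hypothetical; in practice the states and transition probabilities would come from operational profiles and field data.

```python
import random

# Sketch: generate a test sequence from a Markov usage model. States and
# transition probabilities are hypothetical stand-ins for an operational
# profile.

usage_model = {
    "start":    [("login", 1.0)],
    "login":    [("browse", 0.7), ("logout", 0.3)],
    "browse":   [("browse", 0.5), ("purchase", 0.2), ("logout", 0.3)],
    "purchase": [("logout", 1.0)],
}

def generate_sequence(model, start="start", end="logout"):
    """Walk the usage model, choosing each next step by its probability."""
    state, path = start, [start]
    while state != end:
        steps, weights = zip(*model[state])
        state = random.choices(steps, weights=weights)[0]
        path.append(state)
    return path

print(generate_sequence(usage_model))
```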

28 Automating the Testing Process
[Figure: a software test station architecture. A modeling tool produces usage models; a random test case generator and crafted test cases feed test scripting software; test station control software handles control and sequencing of scripts against the software under test; an oracle supplies expected results, and output data/responses flow to test results verification by the certification team.]
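
A toy version of that loop, with stand-in functions for the software under test and the oracle (neither is from the lecture):

```python
import random

# Sketch of the automation loop: a random test case generator drives the
# software under test (SUT), and an oracle supplies expected results for
# test results verification. Both functions are illustrative stand-ins.

def sut_abs(x):      # software under test (deliberately buggy)
    return x if x > 0 else -x - (1 if x < -100 else 0)

def oracle_abs(x):   # trusted reference implementation
    return abs(x)

random.seed(42)
failures = []
for _ in range(1000):
    x = random.randint(-1000, 1000)      # random test case generation
    if sut_abs(x) != oracle_abs(x):      # compare SUT output to oracle
        failures.append(x)

print(f"{len(failures)} failing inputs, e.g. {failures[:5]}")
```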

29 Resources
The best-laid plans can be ruined:
– Development runs late (testers wait)
– The ship date is moved forward
The testing schedule should contain contingencies.
Finding testing resources is a strategy decision, and it can also become a political issue.

30 Test coverage decisions
Several types of coverage:
– Code coverage
– Requirements coverage
– Design and interface coverage
– Model coverage
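
Requirements coverage, for example, can be tracked with a simple traceability matrix. A sketch with hypothetical requirement and test case IDs:

```python
# Sketch of requirements coverage (hypothetical IDs): map each
# requirement to the test cases that exercise it and flag the gaps.

traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],            # no test case yet -> coverage gap
    "REQ-004": ["TC-04", "TC-05"],
}

covered = [req for req, tcs in traceability.items() if tcs]
print(f"Requirements coverage: {len(covered)}/{len(traceability)} "
      f"({100 * len(covered) / len(traceability):.0f}%)")
for req, tcs in traceability.items():
    if not tcs:
        print(f"  uncovered: {req}")
```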

31 Walkthroughs and inspections
Reviews of requirements, design, code, etc. are an important verification technique, complementary to testing.
Walkthrough: an informal peer review.
Inspection: a formal evaluation technique.

32 Software evaluation process

33 Walkthroughs vs inspections

                    Walkthroughs                    Inspections
Participants        Peer(s) led by the author       Peers in designated roles
Rigor               Informal to formal              Formal
Training required   None, informal, or structured   Structured, preferably by teams
Purpose             Judge quality, find defects,    Measure/improve quality of the
                    training                        product and process
Effectiveness       Low to medium                   Low to very high, depending on
                                                    training and commitment

34 Configuration management
Describes how CM should be handled during testing:
– Change management
– The process for reviewing, prioritizing, fixing, and re-testing bugs
There is a tradeoff between fixing too many bugs and freezing the code too early.
Use a change control board (CCB).

35 Collection and validation of metrics
Metrics collection and validation can add significant overhead:
– What metrics should be collected?
– What will they be used for?
– How will they be validated?
You need a way to measure testing status, effectiveness, quality, etc.
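
A minimal sketch of collecting and validating a few such metrics; the numbers are illustrative:

```python
# Sketch (hypothetical numbers): a few simple testing-status metrics
# with a basic validation step before they are reported.

run = {"planned": 200, "executed": 150, "passed": 120, "failed": 30}

# Validate before use: an unvalidated metric is worse than none.
assert run["executed"] <= run["planned"], "executed cannot exceed planned"
assert run["passed"] + run["failed"] == run["executed"], "pass/fail must sum"

progress = run["executed"] / run["planned"]    # testing status
pass_rate = run["passed"] / run["executed"]    # quality indicator
print(f"progress {progress:.0%}, pass rate {pass_rate:.0%}")
```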

36 Tools and automation
Testing tools can be a big advantage but, if not used correctly, can also be a disadvantage.
Automated testing may initially take more time than manual testing, though regression runs usually take less.
Tool use should be well planned; plan for training and integration.
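
To illustrate the regression payoff, a tiny pytest suite (the function under test is hypothetical): writing it costs more than one manual check, but every later regression run is nearly free.

```python
# test_pricing.py -- run with: pytest test_pricing.py

def apply_discount(price, pct):
    """Function under test (illustrative): discount a price by pct percent."""
    return round(price * (1 - pct / 100), 2)

def test_typical_discount():
    assert apply_discount(100.0, 15) == 85.0

def test_no_discount():
    assert apply_discount(19.99, 0) == 19.99

def test_full_discount():
    assert apply_discount(50.0, 100) == 0.0
```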

37 Changes to the test plan
Include all key stakeholders in the development and review cycles.
Small changes may not have to go through the full approval cycle.
Decide how often the plan will be updated (weekly, monthly) and how it should be reviewed.

38 Meetings and communication
Standard meetings should be described here:
– CCB meetings
– Presentations to users and management
Also describe status reporting guidelines and the chain of command for conflict resolution.

39 Other strategy issues
– Multiple production environments
– Beta testing
– Test environment setup and maintenance
– Use of contractual support
– Unknown quality of the software
– Feature creep

40 Item pass/fail criteria
Each test item must have an expected result. Pass/fail criteria may be based on:
– Test cases passed and failed
– The number, type, severity, and location of bugs
– Usability
– Reliability and stability
Remember that all test cases are not created equal.

41 Suspension criteria and resumption requirements
Conditions that warrant a temporary suspension of testing; sometimes continuing on is wasted effort.
Metrics can be used to flag these conditions; for example, testing may be halted if a certain number of bugs are found.
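
A sketch of metric-driven suspension checks; the thresholds are illustrative, not from the lecture:

```python
# Sketch (illustrative thresholds): flag conditions that warrant
# suspending testing until the software is fixed and re-delivered.

def should_suspend(metrics):
    """Return the first suspension condition triggered, or None."""
    if metrics["blocking_defects"] >= 3:
        return "too many blocking defects"
    if metrics["smoke_test_pass_rate"] < 0.80:
        return "smoke tests failing: build is not testable"
    if metrics["env_outage_hours"] > 8:
        return "test environment unavailable"
    return None

reason = should_suspend(
    {"blocking_defects": 5, "smoke_test_pass_rate": 0.95, "env_outage_hours": 0}
)
print(f"SUSPEND TESTING: {reason}" if reason else "continue testing")
```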

42 Simple Gantt chart

43 Test deliverables
A list of documents, tools, and other components to be developed and maintained:
– Test plans
– Test design specs
– Test cases
– Custom tools
– Defect reports
– Simulators

44 Testing tasks
Identifies the set of tasks necessary to prepare for and perform testing, along with intertask dependencies and any special skills required.

45 Environmental needs
The hardware, software, data, interfaces, facilities, publications, and security access pertaining to the testing effort.
Attempt to configure the test environment to resemble the real world where possible.
Where will the test data come from?

46 Environmental needs

47 Responsibilities
Define the major responsibilities:
– Establishing the test environment
– Configuration management
– Unit testing
Putting a name next to a task helps get it done!

48 Responsibilities matrix

49 Staffing and training needs
The number and type of people required depend on the scope of the effort.
Training needs may include:
– Tools
– Methodologies
– Management systems (e.g., defect tracking)
– Basic business knowledge

50 Schedule
The testing schedule should be built around milestones in the project plan, and milestones around major events.
An initial generic schedule (without dates) is useful.
Plan for risks and contingencies, and record the schedule to provide an audit trail.

51 Risks and contingencies
Planning risks:
– Unrealistic delivery dates
– Staff availability
– Budget
– Tool inventory
– Training needs
– Scope of testing
– Usage assumptions
– Feature creep
– Poor quality software

52 Risks and contingencies
Contingencies:
– Reducing the scope of the application
– Delaying implementation
– Adding resources
– Reducing quality processes

53 Approvals
The approver should be the person who can declare the software ready for the next stage:
– Unit test plan: the developer
– Acceptance test plan: the customer
Approvers should be technical or business experts, not managers.

54 CSE 7314 Software Testing and Reliability Robert Oshana End of Lecture oshana@airmail.net

