Chapter 4. Test Management


1 Chapter 4. Test Management
Objectives:
• Design and execute test cases.
• Understand the test report process for recommending the product.
• Understand the process of test planning.
• Identify resources for test plan implementation and decide the staffing for release.
Additional special objectives:
• Understand the test management process and the test planning process.
• Design the test summary report after execution of the test cases.

2 4.1 Test Planning: Preparing a Test Plan
A test plan guides the execution, tracking, and reporting of the entire testing project. It should cover:
• The scope of testing
• How the testing is going to be performed (methodology)
• What resources are needed for testing (requirements)
• Criteria for pass/fail
• The timeline (schedule)
• Risk factors
Preparing the plan also requires considerable cooperation from business units and agreed protocols.
Types of test plans:
• Master plan
• Testing-level-specific test plans: unit, integration, system, acceptance
• Testing-type-specific test plans: performance test plan, security test plan

3 Test plan types: Scope Management
• Objectives and requirements of the project
• Constraints or limitations: time, budget, resources, legal, technological, management concerns
• Assumptions needed
• Risks involved: likelihood of occurrence, overall impact, mitigation/justification of the action plan
Deciding the Test Approach:
• Analytical
• Model-based
• Methodological
• Process/standard-compliant
• Dynamic strategies
• Regression

4 Test planning

5 Sample test plan

6 Scope Management
Scope Management pertains to specifying the scope of a project. For testing, scope management entails:
1. Understanding what constitutes a release of a product;
2. Breaking down the release into features;
3. Prioritizing the features for testing;
4. Deciding which features will be tested and which will not be;
5. Gathering details to prepare for estimation of resources for testing.

7 The following factors drive the choice and prioritization of features to be tested.
• Features that are new and critical for the release
• Features whose failures can be catastrophic
• Features that are expected to be complex to test
• Features that are extensions of earlier features that have been defect-prone
Deciding the Test Approach:
1. What type of testing would you use for testing the functionality?
2. What are the configurations or scenarios for testing the features?
3. What integration testing would you do to ensure these features work together?
4. What localization validations would be needed?
5. What “non-functional” tests would you need to do?

8 Setting Up Criteria for Testing
Pass/fail criteria, suspension criteria, and resumption criteria need to be set up. A test cycle or test activity is not an isolated, continuous activity that can always be carried out at one go; it may need to be suspended and later resumed. Some typical suspension criteria include:
• Encountering more than a certain number of defects
• Hitting a showstopper that prevents further progress of testing
• Developers releasing a new version which they advise should be used in lieu of the product under test
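The suspension criteria above can be sketched as a simple decision check. This is an illustrative sketch only: the function name, the flag names, and the default threshold are assumptions for the example, not part of any testing standard.

```python
def should_suspend(open_defects: int, showstopper_hit: bool,
                   new_build_released: bool, defect_limit: int = 50) -> bool:
    """Return True if the test cycle should be suspended."""
    if open_defects > defect_limit:   # more than the agreed number of defects
        return True
    if showstopper_hit:               # a blocker prevents further progress
        return True
    if new_build_released:            # developers advise testing a newer build
        return True
    return False

# 10 open defects, no showstopper, no new build: keep testing
print(should_suspend(10, False, False))  # False
# 60 open defects exceeds the limit: suspend
print(should_suspend(60, False, False))  # True
```

In practice, the defect limit and the definition of a showstopper would come from the baselined test plan rather than from code defaults.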

9 Identifying Responsibilities, Staffing and Training Needs
Test planning also addresses the "who" part of the project: identifying group responsibilities for designing, preparing, executing, checking, and managing tests, along with staffing, training, and environmental needs. A testing project requires different people to play different roles, so the role definitions should:
• Ensure clear accountability for a given task;
• Clearly assign the responsibilities for various functions to various people;
• Complement each other;
• Supplement each other.

10 Staffing and Training Needs
Specific training and staffing needs are essential inputs for the test plan; they must be communicated and the resources approved.
Training: product training; use of test execution and reporting tools.
Staffing needs: test team size; number of resources; description and distribution of each task, covering both individual and multiple roles; and when and for how long each resource will be required.

11 Role / Responsibility / Candidate / Timing (table)

12 Role / Responsibility / Candidate / Timing (table, continued)

13 Resource Requirements
The project manager should provide estimates for the various hardware and software resources required. Some of the factors that need to be considered:
• PC/machine configurations
• Overheads required by the test automation tool (supporting software, operating system, number of license copies)
• Supporting tools such as compilers, test data generators, and configuration management tools
• The different configurations of the supporting software
• Special requirements for running machine-intensive tests
• An appropriate number of licenses for all the software

14 Test Deliverables
Deliverables are generated at every phase of the SDLC. They include:
• The test plan
• Test incident reports
• Test case specifications
• Test traceability matrix
• Test design specification documents
• Test results/reports
• Testing strategy
• Install/configuration guides
• Testing scripts/procedures
• Test logs produced
• Test data
• Defect report/release report

15 Testing Tasks (Size and Effort Estimation)
Estimation happens broadly in three phases: size estimation, effort estimation, and schedule estimation.
Size estimation considers:
• Number of test cases
• Number of test scenarios
• Number of configurations to be tested
Effort estimation considers:
• Productivity data
• Reuse opportunities
• Robustness of processes

16 4.2 Test Management: Choice of Standards
Traditional tools used for test management are:
• Pen and paper
• Word processors
• Spreadsheets
Internal standards:
• Simplify communication
• Promote consistency and uniformity
• Eliminate the need to invent another solution
• Provide continuity
• Present a way of preserving proven practices
• Provide benchmarks and a framework
Internal standards include:
1. Naming and storage conventions for test artifacts;
2. Document standards;
3. Test coding standards;
4. Test reporting standards.

17 External standards include:
• Customer standards
• National standards
• International standards
Several IEEE standards are devoted to software. Standards bodies include:
• ISO: International Organization for Standardization
• SPICE: Software Process Improvement and Capability Determination
• NIST: National Institute of Standards and Technology
• DoD: Department of Defense

18 IEEE Standards

19 Test Infrastructure Management
Testing requires a robust infrastructure to be planned upfront. This infrastructure is made up of three essential elements.
1. A test case database (TCDB): captures all the relevant information about the test cases in an organization. Some of the entities and their attributes:
• Test case (records all static information about tests): test case ID, test case name (file name), test case owner, associated files for the test case.
• Test case / product cross-reference (provides a mapping between the tests and the corresponding product features; enables identification of test cases for a given feature): module ID.
• Test case run history (gives the history of when the test case was run and what the result was; provides inputs for the selection of tests for regression runs): run date, time taken, run status (success/failure).
• Test case / defect cross-reference (gives details of test cases introduced to test certain specific defects detected in the product; provides inputs for the selection of tests for regression runs): defect reference.
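The TCDB entities in the table above can be sketched with simple records. The field names follow the table; the dataclass layout, identifiers, and sample values are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:                      # static information about a test
    test_case_id: str
    name: str                        # file name
    owner: str
    associated_files: list = field(default_factory=list)

@dataclass
class TestRun:                       # one entry in the run history
    test_case_id: str
    run_date: str
    time_taken_min: float
    run_status: str                  # "Success" or "Failure"

# The cross-references can be plain mappings:
feature_to_tests = {"MOD-01": ["TC-001", "TC-002"]}   # product cross-reference
defect_to_tests = {"DEF-17": ["TC-002"]}              # defect cross-reference

tc = TestCase("TC-001", "login_boundary.txt", "alice")
run = TestRun("TC-001", "2024-01-15", 3.5, "Success")
print(run.run_status)  # Success
```

A real TCDB would live in a database or a test management tool; the point here is only which entities and attributes it tracks.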

20 2. A defect repository: captures all the relevant details of defects reported for a product. It is an important vehicle of communication that influences the workflow within a software organization, and it provides the base data for arriving at several metrics, such as testing defect metrics and development defect metrics.

21 3. Configuration management:
A software configuration management (SCM) repository holds the collection of software elements that comprise a major business function. It is concerned with labeling, tracking, and controlling changes in the software elements of a system: identifying all the interrelated components of the software and controlling their evolution throughout the various life cycles. It applies to activities like software development, document control, problem tracking, change control, and maintenance.
Change control ensures that:
• Changes to test files are made in a controlled fashion and only with proper approvals.
• Changes made by one test engineer are not accidentally lost or overwritten by other changes.
• Each change produces a distinct version of the file that is recreatable at any point of time, and everyone gets access to only the most recent version of the files.

22 Sub-elements of the test infrastructure may include:
• Test plan document
• Test cases (planned, designed)
• Baselined test data, with a process to refresh or roll back the baseline data
• Test environment: front end, back end, and test lab
• Test case database, to track and update tests
• A method to prioritize test cases per test cycle
• Coverage analysis metrics
• Defect tracking database
• Risk management metrics
• Version control system
• Configuration management system
• A requirements tracking tool
• Metrics required to measure improvement

23 Configuration management
Configuration management serves many purposes:
• Allows automated tests to be run
• Provides a test environment that avoids conflicts between manual and automated testing
• Helps to track the results of test cases against pass/fail criteria
• Gives reports of test coverage levels
• Keeps expected results consistent across test runs
• Provides a test lab to conduct multiuser and stress testing

24 A defect repository captures all the relevant details of defects reported for a product. The information that a defect repository includes is given in the accompanying table. The TCDB, the defect repository, and the SCM repository should complement each other and work together in an integrated fashion.

25 Test People Management
People management is an integral part of any project management. It requires the ability to hire, motivate, and retain the right people, and these skills are seldom formally taught. Testing projects present several additional challenges, and the success of a testing organization depends vitally on judicious people management skills.

26 Test Lead responsibilities and activities:
• Identify how the test teams are formed and aligned within the organization
• Decide the roadmap for the project
• Identify the scope of testing using the SRS documents
• Discuss the test plan and have it reviewed and approved by management and the development team
• Identify the required metrics
• Calculate the size of the project, estimate the effort, and prepare the corresponding plan
• Identify skill gaps, balance resources, and determine the need for training and education
• Identify the tools for test reporting, test management, and test automation
• Create a healthy environment for all resources to gain maximum throughput

27 Test team responsibilities and activities:
• Initiate the test plan for test case design
• Conduct review meetings
• Monitor test progress; check for resource balancing and allocation
• Check for delays in the schedule; discuss and resolve risks, if any
• Communicate status to stakeholders and management
• Bridge the gap between the test team and management
Consider the following when managing tests:
• Understanding the testers
• The test work environment
• The role of the test team

28 Integrating with Product Release
The success of a product depends on the effectiveness of the integration of development and testing activities; the schedules of testing have to be linked directly to product release. The following points need to be decided in this planning:
• Sync points between development and testing as to when the different types of testing can commence
• Service level agreements between development and testing as to how long the testing team would take to complete the testing
• Consistent definitions of the various priorities and severities of the defects
• Communication mechanisms to the documentation group to ensure that the documentation is kept in sync with the product in terms of known defects and workarounds

29 Test Process: Baselining a Test Plan
The fundamental test process is divided into the following steps:
1. Test planning and control
2. Test analysis and design
3. Test implementation and execution
4. Evaluation of exit criteria and test reporting
5. Test closure activities
Baselining a test plan: a test plan combines all of the above points into a single document that acts as an anchor point for the entire testing project. An organization normally arrives at a template that is to be used across the board, and each testing project puts together a test plan based on that template. The test plan is reviewed by a designated set of competent people in the organization and is then baselined into the configuration management repository. From then on, the baselined test plan becomes the basis for running the testing project. In addition, any changes needed to the test plan template are discussed periodically among the different stakeholders, so that it is kept current and applicable to the testing teams.

30 Test Case Specification
Using the test plan as the basis, the testing team designs test case specifications, which then become the basis for preparing individual test cases. A test case specification should clearly identify:
• The purpose of the test: what features or parts the test is intended for.
• The items being tested, along with their version/release numbers as appropriate.
• The environment that needs to be set up for running the test case: the hardware environment setup, the supporting software environment setup, and the setup of the product under test.
• The input data to be used for the test case: the choice of input data depends on the test case itself and the technique followed in the test case.
• The steps to be followed to execute the test: if automated testing is used, these steps are translated to the scripting language of the tool.
• The expected results that are considered to be the “correct results.”
• A step to compare the actual results produced with the expected results: this step should do an “intelligent” comparison of the expected and actual results to highlight any discrepancies.
• Any relationships between this test and other tests: these can be in the form of dependencies among the tests or possibilities of reuse across the tests.
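The specification fields listed above can be captured in a simple record. This is a minimal sketch under stated assumptions: the structure and the exact-match comparison are illustrative, and real organizations use their own templates or test management tools.

```python
from dataclasses import dataclass, field

@dataclass
class TestCaseSpec:
    purpose: str                     # feature or part the test is intended for
    items_under_test: list           # items with version/release numbers
    environment: str                 # hardware/software setup needed
    input_data: str                  # inputs chosen per the test technique
    steps: list                      # steps to execute (or a tool script)
    expected_result: str             # the "correct result"
    related_tests: list = field(default_factory=list)  # dependencies / reuse

    def compare(self, actual: str) -> bool:
        """The "intelligent" comparison is domain-specific; a plain
        whitespace-insensitive match stands in for it here."""
        return actual.strip() == self.expected_result.strip()

spec = TestCaseSpec("Verify login", ["app v1.2"], "Windows, MySQL 8",
                    "valid user/password", ["open page", "submit form"],
                    "Welcome page shown")
print(spec.compare("Welcome page shown"))  # True
```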

31 Update of Traceability Matrix
In black box testing, a requirements traceability matrix ensures that the requirements make it through the subsequent life cycle phases and do not get orphaned mid-course. The traceability matrix is a tool to validate that every requirement is tested. It is created during the requirements gathering phase itself, by filling in the unique identifier for each requirement, and it ensures that there is a two-way mapping between requirements and test cases.

32 Test Reporting: Recommending Product Release
A test report is any description, explanation, or justification of the status of a test project.
• Based on the test summary report, an organization can take a decision on whether to release the product or not.
• Ideally, an organization would like to release a product with zero defects.
• However, market pressures may cause the product to be released with known defects, provided that senior management is convinced that there is no major risk of customer dissatisfaction.
• Such a decision should be taken by senior management after consultation with the customer support team, the development team, and the testing team.

33 Matrix
A matrix is a concise organizer of simple tests, especially useful for function tests and domain tests. It groups test cases that are essentially the same. For example, for most input fields, you will do a series of the same tests, checking how the field handles boundaries, unexpected characters, function keys, and so on.
To create a test matrix:
• Put the objects that you are testing on the rows.
• Show the tests on the columns.
• Check off the tests that you actually completed in the cells.
Collecting and analyzing metrics: when tests are executed, information about test execution gets collected in test logs and other files. The basic measurements from running the tests are then converted to meaningful metrics by the use of appropriate transformations and formulae.
A traceability matrix typically has columns such as: unique requirement number, source of the requirement, SRS or functional requirement specification, design specification, program module, test specification, test cases, successful test validation, modification of the requirement, and remarks.
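The three steps above can be sketched as a small grid. The field and test names are illustrative; in practice the matrix usually lives in a spreadsheet.

```python
fields = ["username", "password", "email"]                   # rows: objects under test
tests = ["boundaries", "unexpected chars", "function keys"]  # columns: tests

# Initialize an empty matrix, then check off tests as they are completed.
matrix = {f: {t: False for t in tests} for f in fields}
matrix["username"]["boundaries"] = True
matrix["email"]["unexpected chars"] = True

# Cells still unchecked show at a glance what remains to be run.
pending = [(f, t) for f in fields for t in tests if not matrix[f][t]]
print(len(pending))  # 7
```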

34 Executing Test Cases
The prepared test cases have to be executed at the appropriate times during a project. As the test cases are executed during a test cycle, the defect repository is updated with:
• Defects from the earlier test cycles that are fixed in the current build;
• New defects that get uncovered in the current run of the tests.
The major tasks are:
• Follow the test procedure to execute the test suites and test cases.
• Confirm tests and re-test: re-execute after fixes.
• Log the results of test execution and the pass/fail status of each test case.
• Compare actual results with expected results; defect occurrences and differences should be updated in the report.
• Decide on suspension and resumption of further test cases.
• Keep the traceability matrix current: as and when tests get designed and executed successfully, the traceability matrix should be updated.
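The execution-and-logging tasks above can be sketched as a loop. The runner function, the dictionary keys, and the sample cases are all hypothetical; real execution would invoke automated scripts or manual procedures.

```python
def run_cycle(test_cases, defect_repository):
    """Execute each test, compare actual to expected, log pass/fail,
    and record a defect entry for every difference."""
    log = []
    for tc in test_cases:
        actual = tc["run"]()                       # execute the test step(s)
        status = "PASS" if actual == tc["expected"] else "FAIL"
        log.append((tc["id"], status))
        if status == "FAIL":                       # difference => defect entry
            defect_repository.append({"test": tc["id"], "actual": actual,
                                      "expected": tc["expected"]})
    return log

defects = []
cases = [
    {"id": "TC-1", "run": lambda: 4, "expected": 4},
    {"id": "TC-2", "run": lambda: 5, "expected": 6},
]
print(run_cycle(cases, defects))  # [('TC-1', 'PASS'), ('TC-2', 'FAIL')]
print(len(defects))               # 1
```

The `defects` list here stands in for the defect repository, the communication channel between the development and testing teams.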

35 Preparing Test Summary Report
At the completion of a test cycle, a test summary report is produced. This report gives insights to senior management about the fitness of the product for release. Two types of reports are required:
1. The incident report: communication that happens throughout the testing cycle as and when defects are encountered. An entry is made in the defect repository, and high-impact test incidents are highlighted in the test summary report.
2. The test cycle report, which summarizes:
• The activities during the test cycle
• Defects uncovered, based on their severity and impact
• Progress from the previous cycle to the current cycle in terms of defect fixing
• Outstanding defects that are yet to be fixed
• Any variations observed in effort or schedule, useful for future planning

36 3. The test summary report: the final report of a test cycle, summarizing the activities carried out during the cycle. The variance of the activities carried out from the activity plan includes:
• Tests that were planned to be run but could not be run
• Modifications of tests from what was in the original test specifications
• Additional tests that were run that were not in the original test plan
• Differences in effort and time between what was planned and what was executed
• Any other deviations from the plan
The summary of results should include tests that failed, with any root cause descriptions, and the severity of the impact of the defects uncovered by the tests.
The comprehensive assessment and recommendation for release should include a "fit for release" assessment and a release recommendation.

37 Executing Test Cases: Major Tasks
Test cases are prepared and then executed at the appropriate times. The major tasks are:
• Follow the test procedure (execute test suites and individual test cases).
• Confirm tests, re-test, and re-execute tests.
• Maintain the test log, which consists of the test cases, the order of execution, who executed each test, and the pass/fail status of each test case.
• Compare actual results with expected results; if there are differences, a defect occurrence is reported.
During the execution of test cases, the defect database is the communication mechanism between the development and testing teams.

38 Collecting and analyzing metrics
Requirement volatility:
Formula = {(No. of requirements added + No. of requirements deleted + No. of requirements modified) / No. of initially approved requirements} * 100
Unit of measure = percentage
Review efficiency:
Components: number of critical, major, and minor review defects; effort spent on review, in hours; weightage factors for defects: critical = 1, major = 0.4, minor = 0.1
Formula = (No. of weighted review defects / effort spent on reviews)
Unit of measure = defects per person-hour
Productivity in test execution:
Formula = (No. of test cases executed / time spent in test execution)
Unit of measure = test cases per person per day
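The first two formulas above translate directly into code. The function names and the input numbers below are made-up samples for illustration; only the formulas and weightage factors come from the slide.

```python
def requirement_volatility(added, deleted, modified, initial_approved):
    """Percentage of requirement churn relative to the approved baseline."""
    return (added + deleted + modified) / initial_approved * 100

def review_efficiency(critical, major, minor, effort_hours):
    """Weighted review defects found per person-hour of review
    (weights: critical = 1, major = 0.4, minor = 0.1)."""
    weighted = critical * 1.0 + major * 0.4 + minor * 0.1
    return weighted / effort_hours

# Sample: 5 added, 2 deleted, 3 modified against 50 approved requirements.
print(requirement_volatility(5, 2, 3, 50))   # 20.0 (percent)
# Sample: 2 critical, 5 major, 10 minor defects in 10 review hours.
print(review_efficiency(2, 5, 10, 10))       # 0.5 (defects per person-hour)
```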

39 Example: 1500 test cases executed in a cycle with 5 resources.
R-1 executes 150 test cases in 2 days; R-2 executes 400 test cases in 4 days; R-3 executes 50 test cases in 1 day; R-4 executes 550 test cases in 5 days; R-5 executes 350 test cases in 3 days.
Cumulative: 1500 test cases in 15 person-days.
Productivity of test execution = 1500 / 15 = 100 test cases per person per day.

40 Collecting and analyzing metrics
Defect rejection ratio:
Formula = (No. of defects rejected / total no. of defects raised) * 100
Unit of measure = percentage
Defect fix rejection ratio:
Formula = (No. of defect fixes rejected / No. of defects fixed) * 100
Delivered defect density:
Components: number of critical, major, and minor defects; weightage factors for defects: critical = 1, major = 0.4, minor = 0.1
Formula = {(No. of weighted defects found during validation, customer review, and acceptance testing) / (size of the work product)}
Unit of measure = defects per work product / cycle
Outstanding defect ratio:
Formula = (Total no. of open defects / total no. of defects found) * 100
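These ratios also translate directly into code; the sample values are made up, and only the formulas and weightage factors come from the slide.

```python
def defect_rejection_ratio(rejected, raised):
    """Percentage of raised defects that were rejected."""
    return rejected / raised * 100

def defect_fix_rejection_ratio(fixes_rejected, fixed):
    """Percentage of defect fixes that were rejected."""
    return fixes_rejected / fixed * 100

def delivered_defect_density(critical, major, minor, size):
    """Weighted defects found during validation per unit of
    work-product size (weights: 1 / 0.4 / 0.1)."""
    weighted = critical * 1.0 + major * 0.4 + minor * 0.1
    return weighted / size

def outstanding_defect_ratio(open_defects, total_found):
    """Percentage of found defects still open."""
    return open_defects / total_found * 100

print(defect_rejection_ratio(5, 100))    # 5.0
print(outstanding_defect_ratio(20, 80))  # 25.0
```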

41 A summary report should present
• A summary of the activities carried out during the test cycle;
• Variance of the activities carried out from the activities planned;
• A summary of results, including tests that failed and the severity of the impact of the defects;
• A comprehensive assessment and recommendation for release, including a "fit for release" assessment and a release recommendation.
It should answer:
• What kinds of defects does the product have?
• What is the impact/severity of each of these defects?
• What could be the risks of releasing the product with the existing defects?

42 Preparing a Test Summary Report Template
(It may vary from company to company, and even from project to project.)
Step 1. Purpose of the document: the objective.
Step 2. Overview of the application: a brief description of the application to be tested.
Step 3. Testing scope: in scope, out of scope, items not tested.
Step 4. Metrics: help to understand the test execution results and the status of test cases and defects, e.g. test cases planned, executed, passed, and failed (sample values: 100 planned, 95 executed, 90 passed), and the number of defects identified, with their status (open/closed) and severity (critical, major, medium, cosmetic).

43 Preparing a Test Summary Report Template (continued)
Step 5. Types of testing performed: smoke, system integration, regression testing.
Step 6. Information on the test environment and tools, e.g. the application URL, application server, database (MySQL), and test management tool (HP QC), together with a module-wise defect distribution table (defects by severity across modules such as registration, booking, payment, and reports).

44 Preparing a Test Summary Report Template (continued)
Step 7. Lessons learned: describes crucial issues that occurred and were faced, gives the solutions used to resolve them during testing, and enables proactive decisions in the next phase to avoid those mistakes. Examples:
• Issue: smoke test cases had to be executed manually each time. Solution: the smoke test cases were automated, and the scripts were run to speed up execution and save time.
• Issue: initially, a few testers did not have rights to change defect status in HP QC/ALM, so the test lead had to perform this task. Solution: the rights were obtained from the client by explaining the difficulty.

45 Test Reporting: Recommendation for Product Release
Once testing is completed, the testers generate metrics and make final reports on their test effort and on whether or not the software is ready to release.
The recommendation for product release decides the fitness of a product for release. Testing can never prove the absence of defects; it can only reveal the presence of defects, their severity, and their impact. The report to senior management and the product release team answers:
• What kinds of defects does the product have?
• What is the impact/severity of each of the defects?
• What would be the risks of releasing the product with the existing defects?
Based on these observations, senior management can then take a meaningful business decision about whether to release a given version or not.

46 Preparing a Test Summary Report Template (continued)
Step 8. Recommendation: suggestions about different activities can be mentioned.
Step 9. Best practices: the testing phase comprises many different activities, some of which could have saved time or proved especially good and efficient; these are documented and presented to the stakeholders.
Step 10. Exit criteria: provide information about when testing is considered complete, concluded after verifying the fulfillment of certain conditions, such as: all planned test cases are executed; all critical defects are closed; etc.
Step 11. Conclusion/sign-off: the exit criteria are set and checked before release. The testing team verifies the application against the set of criteria; if they are not met, the decision about the release is taken by senior management or the stakeholders.

