Presentation on theme: "GREY BOX TESTING Web Apps & Networking"— Presentation transcript:
1 GREY BOX TESTING: Web Apps & Networking. Session 10. Boris Grinberg
2 Session 10 (4 Hours). Here are some things that we'll cover:
- Test Conditions (application-specific and environment-specific)
- Test Types
- Risk Management
- Improving the QA Process
- Defect Management
3 Test Conditions. Test conditions are critically important factors in Web application testing: they are the circumstances in which an application under test operates. There are two categories of test conditions, application-specific and environment-specific, which are described in the following slides.
4 Application-specific conditions. An example of an application-specific condition: run the same word-processor spell-checking test in Normal View and then again in Page View mode. If one of the tests generates an error and the other does not, you can deduce that a condition specific to the application is causing the error. Many applications have multiple modes: view, edit, print preview, etc.
5 Environment-specific conditions. When an error is generated by conditions outside of the application under test, the conditions are considered environment-specific. In general, I find it useful to think in terms of two classes of operating environments (static and dynamic), each having its own unique testing implications:
6 Two classes of operating environments.
- Static environments (configuration and compatibility errors): an operating environment in which incompatibility issues may exist regardless of variable conditions such as processing speed and available memory. The compatibility differences between Firefox and IE illustrate a static environment.
- Dynamic environments (RAM, disk space, network bandwidth, etc.): an operating environment in which otherwise compatible components may exhibit errors due to resource-related or latency conditions.
7 Configuration and compatibility issues. Configuration and compatibility issues may occur at any point within a Web system: client, server, or network. Configuration issues involve various server software and hardware setups, browser settings, network connections, and TCP/IP stack setups. The figures on the next slide illustrate two of the many possible physical server configurations, one-box and two-box, respectively.
8 Configuration Illustrations. One-box configuration. Two-box configuration.
9 Dynamic Operating Environments. When the value of a specific environment attribute does not stay constant each time a test procedure is executed, the operating environment becomes dynamic. The attribute can be anything from resource-specific (available RAM, disk space, etc.) to timing-specific (network latency, the order of transactions being submitted, etc.). Examples:
- RAM/disk space of one or two boxes
- Network latency: two boxes on different LANs
- Order of transactions: think about UDP
- Different OS versions
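Because dynamic attributes change between runs, it helps to record them alongside each test result so a failure can later be correlated with resource conditions. A minimal sketch of that idea follows; the helper names are my own, and the snapshot covers only disk space and time (a real harness might also record memory and network latency):

```python
import shutil
import time

def environment_snapshot(path="/"):
    """Capture dynamic environment attributes before a test run."""
    usage = shutil.disk_usage(path)
    return {
        "timestamp": time.time(),
        "disk_free_bytes": usage.free,
        "disk_total_bytes": usage.total,
    }

def run_with_snapshot(test_fn, path="/"):
    """Run a test callable and attach the environment snapshot to the result,
    so a flaky failure can be matched against resource conditions later."""
    snapshot = environment_snapshot(path)
    try:
        test_fn()
        return {"passed": True, "env": snapshot}
    except AssertionError as exc:
        return {"passed": False, "error": str(exc), "env": snapshot}

result = run_with_snapshot(lambda: None)  # a trivially passing "test"
```

If the same test passes with 10 GB free and fails with 50 MB free, the snapshot makes that dynamic-environment pattern visible.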
10 Test Types. Test types are categories of tests that are designed to expose a certain class of error or verify the accuracy of related behaviors. The analysis of test types is a good way to divide the testing of an application methodically into logical and manageable groups of tasks. Test types are also helpful in communicating required testing time and resources to other members of the product team.
11 Acceptance Testing. The two common types of acceptance tests are development acceptance tests and deployment acceptance tests. Release acceptance tests and functional acceptance simple tests are the two common classes of test used during the development process. There are subtle differences in the application of these two classes of tests.
12 Release Acceptance Test (RAT). The release acceptance test (RAT), or build acceptance test (BAT), is run on each build release to check that the build is stable enough for further testing. Typically, this test suite consists of entrance and exit test cases, plus test cases that check mainstream functions of the program with mainstream data. Copies of the BAT can be distributed to developers so that they can run the tests before submitting builds to the testing group.
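The gating logic of a BAT can be sketched as a tiny smoke suite: a few entrance, exit, and mainstream checks, where any single failure rejects the build. This is an illustrative skeleton, not a real suite; the check functions are stand-ins for actual launch/shutdown/feature probes:

```python
def check_app_starts():       # entrance criterion: the build launches
    return True

def check_app_exits():        # exit criterion: the build shuts down cleanly
    return True

def check_mainstream_flow():  # a mainstream function with mainstream data
    return 2 + 2 == 4

SMOKE_SUITE = [check_app_starts, check_app_exits, check_mainstream_flow]

def build_acceptance(suite):
    """Return (accepted, failures). One failing check rejects the build:
    testing would then be suspended on it and a new build requested."""
    failures = [check.__name__ for check in suite if not check()]
    return (len(failures) == 0, failures)

accepted, failures = build_acceptance(SMOKE_SUITE)
```

Developers can run the same suite locally before submitting a build, which is exactly the reason the slide suggests distributing BAT copies to the development team.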
13 QA procedures if the BAT fails. If a build does not pass the BAT, it is reasonable to do the following:
- Suspend testing on the new build and resume testing on the prior build until another build is received.
- Report the failing criteria (there should be at least one test-stopper or catastrophic bug) to the development team.
- Request a new build.
14 Functional Acceptance Simple Test (FAST). The functional acceptance simple test (FAST), also known as the minimum acceptance test (MAT), is run on each build release to check that key features of the program are appropriately accessible and functioning properly on at least one test configuration (preferably the minimum or common configuration). This test suite consists of simple test cases that check the lowest level of functionality for each command, to ensure that task-oriented functional tests can then be performed on the program. The objective is to decompose the functionality of a program down to the command level and then apply test cases to check that each command works as intended. No attention is paid to combinations of these basic commands, the context of the feature formed by the combined commands, or the end result of the overall feature. For example, a MAT for a File/Save As menu command checks that the Save As dialog box displays; it does not validate that the overall file-saving feature works, nor does it validate the integrity of saved files.
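The Save As example can be made concrete with a minimal sketch. The `Editor` class below is a hypothetical stand-in for the application under test (a real FAST would drive the actual UI); the point is what the check deliberately does not verify:

```python
class Editor:
    """Stand-in for the application under test (hypothetical API)."""
    def open_save_as_dialog(self):
        # Simulates the File/Save As command producing its dialog.
        return {"dialog": "Save As", "visible": True}

def fast_save_as(app):
    """MAT-style check: the Save As dialog displays. It deliberately does
    NOT save a file or validate the saved file's integrity -- that belongs
    to task-oriented functional testing, not FAST."""
    dialog = app.open_save_as_dialog()
    return dialog["dialog"] == "Save As" and dialog["visible"]

ok = fast_save_as(Editor())
```

Command-level checks like this confirm the program is testable at all before deeper feature-level suites are run against it.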
15 Deployment Acceptance Test. The configurations on which the Web system will be deployed are often much different from the develop-and-test configurations. Testers must consider this when preparing and writing test cases for installation-time acceptance tests. This type of test usually includes a full installation of the application to the targeted environments or configurations. Think about a single-box deployment versus a multiple-box deployment with firewalls, switches, routers, etc.
16 Functionality and Feature-Level Testing. This is where we begin to do some serious testing, including boundary testing and other difficult but valid test circumstances:
- Task-Oriented Functional Test
- Forced-Error Test
- Boundary Test
- System-Level Test
- Exploratory Test
- Load/Volume Test
- Stress Test
- Performance Test
- Fail-over Test
- Availability Test
- Regression Test
- Compatibility and Configuration Test
- Documentation Test
- Online Help Test
- Utilities/Toolkits and Collateral Test
- Install/Uninstall Test
- User Interface Tests
- Usability Tests
- Accessibility Tests
- DB tests (stress, load, etc.)
19 Risk Management. Web site risk management is a process that helps determine how an organization will be affected by exposure to risk on the Internet. Risk management can be used to minimize, control, or eliminate exposure to risks, and it is essential for all Web development projects. There are two kinds of risks examined when evaluating a project: opportunity risk, which is the loss from avoiding a risk, and failure risk, which is the loss from taking a risk but failing to achieve the expected goal.
20 Risk Management and Loss. Loss may be financial, due to downtime of a Web server, or it may be competitiveness in the Web market; it may even involve reusable software components under development or acquisition, or other valuable aspects of the Web site. Managing risks requires that you, as the tester, set up clear guidelines for how risks related to QA activities should be documented and tracked.
21 Risk Management Guidelines. These guidelines should be a work in progress; the individuals responsible for the risk management assessment should be able to access and update them as needed. Risk management can be addressed throughout the Web planning phase. You need to think about the risks before testing, during testing, after testing, and then again when the Web site is actually deployed.
22 Several Risk Factors (slide 1 of 2). Probability: probability is used to determine the likelihood of the occurrence of a particular risk. Risk probabilities are categorized as very low, low, medium, high, or very high. For example, server issues may be examined for their level of risk to the Web site: if the server goes down, it may have serious impact, which would make the risk very high.
23 Several Risk Factors (slide 2 of 2). Impact: impact is used to determine the effect a risk would have on the project and how to handle the estimated risk. Impact can be determined by categorizing each risk as negligible, critical, or catastrophic. Overall risk: the overall risk to the project can be determined by combining estimates of risk probabilities and impacts. In calculating the overall risk, consider how each risk may affect other risks on the project, and make a note of them.
24 Risk Matrix Table. A matrix can be used to determine the overall risk for each of effort, performance, and schedule.
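One common way to build such a matrix is to score each probability and impact category and combine them into an overall rating. The scores and thresholds below are illustrative assumptions (the slides give the category names but not the numbers):

```python
# Category scores -- the names come from the slides, the numbers are assumed.
PROBABILITY = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}
IMPACT = {"negligible": 1, "critical": 2, "catastrophic": 3}

def overall_risk(probability, impact):
    """Combine the two factor scores into an overall rating.
    The thresholds are illustrative, not from the slides."""
    score = PROBABILITY[probability] * IMPACT[impact]
    if score >= 10:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# The slide's example: the server going down has serious impact,
# so a likely outage rates as overall high risk.
rating = overall_risk("very high", "catastrophic")
```

The same function could be evaluated once each for effort, performance, and schedule to fill the three columns of the matrix.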
25 Planning for Risks. A plan should be developed to address each risk. A good rule of thumb for your risk toolkit is to ask the questions who, what, when, where, why, and how:
- Who is responsible for the risk management activity?
- What information is needed to track the status of the risk?
- When will resources be needed to perform the activity?
- Where will the resources be used?
- Why is the risk important?
- How will the risk plan be implemented?
26 Action and Contingency Planning. There are two different types of risk planning. Action planning mitigates the risk by way of an immediate response: the probability of the risk occurring and/or its potential impact may be reduced by dealing with the problem early in the project. This type of planning is proactive. Contingency planning monitors the risk and invokes a predetermined response: a trigger is set up, and if the trigger is reached, the contingency plan is put into effect.
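The trigger mechanism of contingency planning can be sketched as a small monitor: watch a metric, and once it crosses a predetermined threshold, invoke the planned response exactly once. The class and metric below are illustrative assumptions, not from the slides:

```python
class ContingencyPlan:
    """Monitor a metric against a predetermined trigger; when the trigger
    is reached, invoke the contingency response (illustrative sketch)."""
    def __init__(self, trigger_threshold, response):
        self.trigger_threshold = trigger_threshold
        self.response = response
        self.invoked = False

    def observe(self, metric_value):
        """Record one observation; fire the plan once if the trigger is hit."""
        if not self.invoked and metric_value >= self.trigger_threshold:
            self.invoked = True
            self.response()
        return self.invoked

events = []
plan = ContingencyPlan(trigger_threshold=3,  # e.g. 3 missed milestones
                       response=lambda: events.append("contingency mode"))
plan.observe(1)  # below the trigger: keep monitoring
plan.observe(3)  # trigger reached: the plan goes into effect
```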
27 Concerns in planning for risks (1 of 3). Anticipate risks: when you are testing the Web site, you should have some preconceived idea of which parts of the application may cause you problems. An example is testing whether your Web application generates the correct calculated results from the shopping basket. Eliminate risks: potential problems can be identified before the testing process, so that the developer and programmer can deal with those issues during unit testing. It is important to make sure that the hardware you are using will work with the software before you begin any testing.
28 Concerns in planning for risks (2 of 3). Reduce the impact of risk: you can do several things here. Make sure you know everything there is to know about the Web application and its previous releases. To lower project risk, make sure the testing team understands the basic components of the application and how the testing process should progress, and make sure unit testing is done after each phase of the coding process. Put a complete test plan into effect and document each phase of the software development. Testers should have prewritten scripts, with the anticipated outcome of each test, to follow.
29 Concerns in planning for risks (3 of 3). Stay in control when things do not go as expected: as you test your Web site, expect that something will go wrong. Do not panic; instead, take control of the process and plan the next course of action as it pertains to the Web test process. Analyze the Web testing process, then revise and rerun anything that did not go according to plan. The best defense against certain key types of risks is to prepare a contingency and tracking plan that can be used to update your plan as you go.
30 Tracking Risks. Tracking risks is essential to the risk management process; if triggers go off, the entire team needs to be informed so that contingency plans can go into effect. Tracking is also useful as the project comes to the end of its development phase: past knowledge may increase the chances of risk prevention and improvement in future projects. Resources are an important part of the risk tracking process. Tracking risks enables you to identify risks and to follow through on the likelihood that they will occur in your Web application.
31 Risk Tracking Document. Risks can be tracked by creating a tracking document. Each member of the team should submit a risk document for his or her particular responsibility. The risk document should include:
- Name of the risk
- Description of the risk
- Steps that would cause the risk to happen
- Results
- Probability of the risk
- Resources affected
- Comments
- Related risks
- Alternate plan
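The fields listed above map naturally onto a structured record; a tracking tool could collect one such record per team member per risk. A minimal sketch (field names follow the slide; the example values are invented):

```python
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    """One entry in the team's risk tracking document, mirroring the
    fields listed on the slide."""
    name: str
    description: str
    trigger_steps: str        # steps that would cause the risk to happen
    results: str
    probability: str          # e.g. very low .. very high
    resources_affected: str
    comments: str = ""
    related_risks: list = field(default_factory=list)
    alternate_plan: str = ""

risk = RiskRecord(
    name="Server outage",
    description="Primary Web server becomes unavailable",
    trigger_steps="Hardware failure or traffic spike",
    results="Site downtime, lost transactions",
    probability="medium",
    resources_affected="Web server, checkout flow",
    alternate_plan="Fail over to the standby box",
)
```

Keeping the records structured (rather than free-form prose) makes it easy to sort by probability or filter by affected resource when the team reviews the document.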
32 Handling the Risk (slide 1 of 4). There are different ways to monitor how you would like to handle risks. The following methods will help you analyze and address your risks. Decide on the specific components of the Web application that appear to carry high risk: will you be looking at the entire Web site, a single component, or a list of components? Determine the severity of concern: use a scale of low, normal, and high to rank severity. Everything is presumed to be a normal risk unless there is reason for a higher or lower assessment. Selecting a scale of concern that is meaningful to your business is critical in assessing the level of severity.
33 Handling the Risk (slide 2 of 4). Make individual input from your team key to identifying and foreseeing risks: understanding the situation in which the Web site operates will help in developing the risk assessment, and team members will determine the different levels of risk they foresee in their parts of the Web site project. After each risk is identified, decide on its importance and severity: for each area of development, decide whether there are risks in that area, then determine the level of severity. Record how you think this will affect your risk assessment, and determine how critical this type of risk is to the advancement of the Web site project.
34 Handling the Risk (slide 3 of 4). Set up a plan that can handle other risks as they occur: there will surely be risks that you do not even know about, and it is critical to be able to deal with these uncertainties as well as the planned or foreseeable risks. Record unknowns that will affect your ability to analyze the risk: during the process, you may feel unable to assess the risk probability; if a certain portion of the Web application is complex, you may be unable to determine the type of risk involved. For example, you may have to test your application with different firewalls, but limited knowledge of those firewalls will not allow you to identify the risks or their magnitude. A risk is anything that may have a negative impact on your business or its performance; as you progress through the risk analysis phase, it helps to make a list of risk items that are critical to your business.
35 Handling the Risk (slide 4 of 4). Double-check the risk distribution. It is common to end up with a list of risks in which everything is considered equally risky. That may indeed be the case; on the other hand, your distribution of concerns may be skewed because you are not willing to make tough choices about what to test and what not to test. Once you have a list of distributed risks, double-check it: take a few examples of equal risks and ask whether those risks really are equal, and take some examples of risks that differ in magnitude and ask whether it really makes sense to spend more time testing the higher risk and less time testing the lower one. Confirm that the distribution of risk magnitudes is correct.
36 Contingency Planning (slide 1 of 2). Contingency planning is a vital part of software development. All contingency plans should address the following areas:
- Objectives of the plan (for example, continue normal operations, continue in a degraded mode, abort the function as quickly and as safely as possible, and so on).
- Criteria for invoking the plan (for example, missing a renovation milestone, experiencing serious system failures, and so on).
- Expected life of the plan (how long can operations continue in contingency operating mode?).
- Roles, responsibilities, and authority.
37 Contingency Planning (slide 2 of 2).
- Plan creation and checkout of resource constraints for each contingency and objective.
- Training on and testing of the plans.
- Procedures for invoking contingency mode.
- Procedures for operating in contingency mode.
- Resource plan for operating in contingency mode (for example, staffing, scheduling, materials, supplies, facilities, temporary hardware and software, communications, and so on).
- Criteria for returning to normal operating mode.
- Procedures for returning to normal operating mode.
- Procedures for recovering lost or damaged data.
38 Improving the QA Process. The performance of the QA process can be improved by applying three readily understood and executed steps:
- Define the process.
- Measure process performance.
- Improve the process.
Repeat this cycle continuously, and measure progress with reporting solutions.
39 Types of Reporting Solutions.
- QA tool built-in reports. Pros: the reports come with the tool. Cons: very limited or too generic.
- Custom-built reports. Pros: designed for specific needs. Cons: extremely expensive (e.g., hiring developers).
- Third-party reporting solutions. Pros: variety, flexibility, affordable support. Cons: infrastructure changes (e.g., setting up a report server).
40 Examples of QA-Oriented Reports.
- Daily reports (Bug Summary, Regression Status)
- Weekly reports (Project Bug Progress)
- Test Plan Status Report
- Test Cases Readiness Report
- Test Plan Execution Report
- Test Case Priority Report
- Execution History Report
- Quality Index Report
- Bug Triage Report, Bug Tracking Report
- Team Activity Report
41 Defect Assignee Report. The report shows: Engineer Names, Not Resolved, Not Verified. The screenshot is posted with permission of Reliable Business Reporting Inc.
42 Team Progress Report. The report shows: Engineer Names, Bug Statistics, Test Case Execution Status. The screenshot is posted with permission of Reliable Business Reporting Inc.
43 Report Generation. Access online; print; save as Excel or PDF; receive by e-mail; save on a file server. Flexibility, parameter customization (Reliable Business Reporting Inc.).
44 Defect Management. Defects determine the effectiveness of the testing we do; if there were no defects, we would not have a job. There are two points worth considering here: either the developers are so strong that no defects arise, or the test engineer is weak. In many situations, the latter proves correct.
45 What is a Defect? For a test engineer, a defect is any of the following:
- Any deviation from specification
- Anything that causes user dissatisfaction
- Incorrect output
- Software that does not do what it is intended to do
46 Bug / Defect / Error. Software is said to have a bug if its features deviate from the specification. Software is said to have a defect if it has unwanted side effects. Software is said to have an error if it gives incorrect output. For a test engineer, however, all three are the same; the distinction is only for documentation or indicative purposes.
47 Defect Taxonomies (1 of 2). Categories of defects: all software defects can be broadly categorized into the following types:
- Errors of commission: something wrong is done
- Errors of omission: something is left out by accident
- Errors of clarity and ambiguity: different interpretations are possible
- Errors of speed and capacity
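These four broad categories can be encoded directly, for example as an enum that a triage script tags incoming defect notes with. The keyword-matching tagger below is a deliberately toy illustration (the categories come from the slide; the keywords and helper are invented):

```python
from enum import Enum

class DefectCategory(Enum):
    COMMISSION = "something wrong is done"
    OMISSION = "something left out by accident"
    CLARITY = "different interpretations possible"
    SPEED_CAPACITY = "too slow or insufficient capacity"

def categorize(defect_note, keyword_map):
    """Toy tagger: map the first matching keyword in a defect note to a
    broad category. The keyword map is illustrative only."""
    for keyword, category in keyword_map.items():
        if keyword in defect_note.lower():
            return category
    return None

KEYWORDS = {
    "missing": DefectCategory.OMISSION,
    "wrong": DefectCategory.COMMISSION,
    "ambiguous": DefectCategory.CLARITY,
    "slow": DefectCategory.SPEED_CAPACITY,
}

cat = categorize("Checkout total is wrong for discounted items", KEYWORDS)
```

In practice the category would be chosen by a human during triage; the enum just keeps the taxonomy consistent across the defect-tracking data.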
48 Defect Taxonomies (2 of 2). The previous slide is a broad categorization; here is a host of varied defect types that can be identified in different software applications:
1. Conceptual bugs / design bugs
2. Coding bugs
3. Integration bugs
4. User interface errors
5. Functionality
6. Communication
7. Command structure
8. Missing commands
9. Performance
10. Output
11. Error-handling errors
12. Boundary-related errors
13. Calculation errors
14. Initial and later states
15. Control flow errors
16. Errors in handling data
17. Race condition errors
18. Load condition errors
19. Hardware errors
20. Source and version control errors
21. Documentation errors
22. Testing errors
49 Life Cycle of a Defect. The following self-explanatory figure explains the life cycle of a defect:
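Such a lifecycle figure is essentially a state machine, and it can be sketched as one. The states and transitions below (New, Assigned, Fixed, Verified, Closed, Reopened, Rejected) are a common convention assumed here, not read from the figure itself:

```python
# Assumed defect lifecycle: New -> Assigned -> Fixed -> Verified -> Closed,
# with Reopened looping back and Rejected as an alternate exit.
TRANSITIONS = {
    "New": {"Assigned"},
    "Assigned": {"Fixed", "Rejected"},
    "Fixed": {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Rejected": {"Closed"},
    "Closed": set(),
}

class Defect:
    def __init__(self, summary):
        self.summary = summary
        self.state = "New"

    def move_to(self, new_state):
        """Advance the defect, rejecting illegal jumps (e.g. New -> Closed)."""
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

bug = Defect("Save As dialog does not open")
for step in ("Assigned", "Fixed", "Verified", "Closed"):
    bug.move_to(step)
```

Enforcing the transition table in the tracking tool is what keeps the team's defect reports (assignee, progress, triage) consistent with the lifecycle the figure describes.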