Presentation on theme: "Test Logging and Automated Failure Analysis: Why Weak Automation Is Worse Than No Automation" by Geoff Staneff — Presentation transcript:

1 Test Logging and Automated Failure Analysis: Why Weak Automation Is Worse Than No Automation. Geoff Staneff (gstaneff@microsoft.com)

2 Overview ◦ Background ◦ Problem ◦ How we dealt with it ◦ Lessons Learned

3 Background Started at MSFT in '05, working on the Windows Event Log ◦ Service ◦ API ◦ UI. Test Environment ◦ Hundreds of automated tests ◦ Several platforms and languages ◦ Regular execution of test passes

4 Problem: Not enough time and too many failures. Analysis Paralysis ◦ Our weak automation was time-consuming to maintain. New Features ◦ Require new automation. Wide test matrix ◦ One set of code runs against many machines

5 What we did about it Automated Failure Analysis will save us! In Reality: ◦ Improving our test code to support AFA saved us – Logging practices – Test code quality

6 What is Automated Failure Analysis? Automated Failure Analysis is a means of determining whether a specific observed failure has previously been diagnosed.
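For illustration, a minimal C++ sketch of that idea: reduce an observed failure to a stable signature and look it up against failures that have already been diagnosed. The FailureSignature struct, the known_issues store, and the bug number are hypothetical stand-ins, not the tooling used by the team.

    // Sketch: match an observed failure against previously diagnosed failures.
    // All names (FailureSignature, known_issues) are hypothetical.
    #include <iostream>
    #include <map>
    #include <string>

    struct FailureSignature {
        std::string api;        // e.g. "Win32BoolAPI"
        std::string result;     // e.g. "returned 0, last error 0x57"
        std::string testPoint;  // test case + context that produced it

        // Lookup key; deliberately excludes dynamic data such as timestamps or paths.
        std::string Key() const { return api + "|" + result; }
    };

    int main() {
        // Previously diagnosed failures, keyed by signature -> bug number.
        std::map<std::string, int> known_issues = {
            {"Win32BoolAPI|returned 0, last error 0x57", 12345},
        };

        FailureSignature observed{"Win32BoolAPI", "returned 0, last error 0x57",
                                  "EventLogQuery / x86 / en-us"};

        auto it = known_issues.find(observed.Key());
        if (it != known_issues.end())
            std::cout << "Known failure, already diagnosed as bug " << it->second << "\n";
        else
            std::cout << "New failure, needs manual diagnosis\n";
    }

The lookup only works if the logged failure text is consistent from run to run, which is what the rest of the talk is about.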

7 Purpose of Effective Test Logging Effective Test Logging provides a record of what a test observed, in sufficient detail to identify the defect(s) observed.

8 Test Logging What does it consist of? ◦ File loggers: text, xml, csv, etc. ◦ Log Sources: ETW, EventLog, etc. ◦ Other Data: e.g. Code Profiling. Why do we log? ◦ To support diagnosis ◦ To support identification

9 Logging Consequences Test logging decisions made early in the product cycle outlast their authors and management Certain failure analysis methods inspire or shatter confidence in the test process Certain logging practices enable or preclude various analysis methodologies

10 Methods of Failure Identification ◦ Logged Details ◦ Rollup Rules ◦ External Factors ◦ Summary Results ◦ Blended Rules ◦ Re-Run ◦ Ad-hoc

11 Logging Taxonomy Many advantages accrue when teams use the same names for the same data ◦ Team members can read test logs for tests they didn’t author ◦ Disciplines outside test can understand test failures ◦ Easier for new employees to produce good logs ◦ Wider product, test or lab issues can be identified across component boundaries

12 Trace Failure Context Knowing more about how the failure was computed will assist in diagnosis of the underlying defect. The following is an example of how one instance of a Windows API failure could be traced, with each message adding context: ◦ Test Failed. ◦ Expected 1. ◦ Found 0, expected 1. ◦ Win32BoolAPI returned 0, expected 1. ◦ Win32BoolAPI with arguments Arg1, Arg2, Arg3 returned 0, expected 1. ◦ Win32BoolAPI with arguments Arg1, Arg2, Arg3 returned 0 and set the last error to 0x57, expected 1 and 0x0.
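As a hedged sketch of what the last, richest message in that progression might look like at the failure site: the API name, its arguments, the return value, and the last error are all traced together. Win32BoolAPI and the error value are stubbed here so the example stands alone; the real test would call the API under test.

    // Sketch: log enough context at the failure site to identify the defect.
    // Win32BoolAPI and g_lastError are stand-in stubs for the API under test.
    #include <cstdio>

    static unsigned g_lastError = 0x57;            // stand-in for GetLastError()
    int Win32BoolAPI(int, int, int) { return 0; }  // stub: fails for illustration

    bool TestWin32BoolAPI(int arg1, int arg2, int arg3) {
        int ret = Win32BoolAPI(arg1, arg2, arg3);
        unsigned err = g_lastError;
        if (ret != 1 || err != 0x0) {
            // Verbose trace lives in the failure branch only.
            std::printf("FAIL: Win32BoolAPI(%d, %d, %d) returned %d and set the last "
                        "error to 0x%X; expected 1 and 0x0\n",
                        arg1, arg2, arg3, ret, err);
            return false;
        }
        std::printf("PASS: Win32BoolAPI\n");  // terse trace on the success path
        return true;
    }

    int main() { return TestWin32BoolAPI(1, 2, 3) ? 0 : 1; }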

13 Avoid Unnecessary Data 6,000 lines of trace for 3 minutes of test execution is rarely a good idea. ◦ Move trace to failure branches ◦ Eliminate ambiguous trace ◦ Avoid looped trace ◦ Some counting trace may be useful; consider reporting only the count at failure
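A small sketch of the last point, assuming a hypothetical event-writing loop: rather than tracing every iteration, keep the count and emit it only in the failure branch.

    // Sketch: replace per-iteration trace with a single count reported at failure.
    // WriteEvent is a hypothetical stub that fails on one iteration.
    #include <cstdio>

    bool WriteEvent(int i) { return i != 4321; }

    int main() {
        const int total = 6000;
        for (int i = 0; i < total; ++i) {
            if (!WriteEvent(i)) {
                // One line of trace, only on failure, carrying the count.
                std::printf("FAIL: WriteEvent failed on iteration %d of %d\n", i, total);
                return 1;
            }
            // No "iteration N succeeded" trace here: 6,000 such lines help no one.
        }
        std::printf("PASS: wrote %d events\n", total);
        return 0;
    }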

14 Sections Use Sections to group related information ◦ A Section is simply a container element ◦ WTT's version of a Section is a Context. Without Sections ◦ Individual authors often attempt to create their own on-the-fly sections by prepending a characteristic string to the test log output ◦ Unrelated information may match unintentionally
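A minimal sketch of a Section as a container element, assuming a hypothetical logger that writes XML-style output; it is not the WTT Context API.

    // Sketch: group related trace into named sections instead of ad-hoc prefixes.
    // The Section class and its XML-ish output are hypothetical.
    #include <iostream>
    #include <string>
    #include <utility>

    class Section {
        std::string name_;
    public:
        explicit Section(std::string name) : name_(std::move(name)) {
            std::cout << "<section name=\"" << name_ << "\">\n";
        }
        void Trace(const std::string& msg) { std::cout << "  " << msg << "\n"; }
        ~Section() { std::cout << "</section>\n"; }
    };

    int main() {
        {
            Section setup("Setup");
            setup.Trace("created test log 'AppTest'");
        }
        {
            Section validation("Validation");
            validation.Trace("expected 1 event, found 1");
        }
    }

Because the container has a name, unrelated lines that happen to share a prefix no longer match against it.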

15 The Assert ◦ Avoid the Simple Assert ◦ Use Named Asserts ◦ Use Assert Sections ◦ Replace Asserts ◦ Use a Custom Trace Level for Asserts
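A sketch of a named assert, using a hypothetical helper rather than any particular framework: the name ties the failure to the specific check being made, which a bare assert(x) cannot do.

    // Sketch: a named assert that traces what was verified, not just that it failed.
    // AssertEqual is a hypothetical helper, not a specific framework's API.
    #include <iostream>
    #include <string>

    bool AssertEqual(const std::string& name, long expected, long actual) {
        if (expected == actual) return true;
        std::cout << "ASSERT FAIL [" << name << "]: expected " << expected
                  << ", found " << actual << "\n";
        return false;
    }

    int main() {
        long eventCount = 0;  // pretend result from the component under test
        bool passed = AssertEqual("EventLog.QueryReturnsOneEvent", 1, eventCount);
        return passed ? 0 : 1;
    }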

16 Validation & Status Trace Validation trace keeps terse statements in the main execution branch and verbose statements in the failure branches. Knowing the last good operation is often necessary. Limit status trace whenever possible. Log status trace to a named section.
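A sketch of combining terse validation trace with a "last good operation" status marker; the Step helper and operation names are illustrative only.

    // Sketch: terse trace on the main path, plus the last good operation
    // reported alongside any failure. Names are illustrative.
    #include <iostream>
    #include <string>

    std::string g_lastGoodOperation = "none";

    bool Step(const std::string& name, bool ok) {
        if (ok) {
            g_lastGoodOperation = name;  // status kept out of the main log
            return true;
        }
        // Verbose trace only on failure, including the last operation known to work.
        std::cout << "FAIL: " << name << " (last good operation: "
                  << g_lastGoodOperation << ")\n";
        return false;
    }

    int main() {
        if (!Step("open event log", true))    return 1;
        if (!Step("write test event", true))  return 1;
        if (!Step("query test event", false)) return 1;  // simulated failure
        std::cout << "PASS\n";
        return 0;
    }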

17 Setup Code Failures in setup code are often lab or test issues. Test logs rarely classify setup trace any differently from product-related trace. Consider modeling setup steps as a distinct test result. Use a Setup section and standard names.
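A sketch of modeling setup failures as a distinct result, so a lab problem is not counted as a product defect; the result values and stub functions are hypothetical.

    // Sketch: report setup failures as a distinct result so they are not
    // mistaken for product defects. Result values and stubs are illustrative.
    #include <iostream>

    enum class TestResult { Pass, Fail, SetupFailed };

    bool CreateTestLog() { return false; }      // stub: simulate a lab/setup problem
    bool VerifyEventWritten() { return true; }  // stub: product validation

    TestResult RunTest() {
        if (!CreateTestLog()) {
            std::cout << "[Setup] could not create test log; test blocked\n";
            return TestResult::SetupFailed;     // not a product failure
        }
        if (!VerifyEventWritten()) {
            std::cout << "[Validation] expected event not found\n";
            return TestResult::Fail;
        }
        return TestResult::Pass;
    }

    int main() { return RunTest() == TestResult::Pass ? 0 : 1; }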

18 Parameter Trace Often represents the dynamic data fed into the test case at the start of the test. Parameter trace can also identify the local variables passed to a function that fails. Initial parameters should have a Parameters section. Function parameters should have their own sections.
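A sketch of both kinds of parameter trace: the test's initial parameters in their own Parameters section, and a failing function's arguments traced next to the failure. The section format and the WriteEvents stub are hypothetical.

    // Sketch: initial parameters in a Parameters section; function arguments
    // traced with the failure they belong to. All names are illustrative.
    #include <iostream>
    #include <string>

    struct TestParameters { std::string logName; int eventCount; };

    bool WriteEvents(const std::string&, int) { return false; }  // stub failure

    int main() {
        TestParameters p{"Application", 10};

        std::cout << "<section name=\"Parameters\">\n"
                  << "  logName=" << p.logName << " eventCount=" << p.eventCount << "\n"
                  << "</section>\n";

        if (!WriteEvents(p.logName, p.eventCount)) {
            std::cout << "FAIL: WriteEvents(logName=" << p.logName
                      << ", count=" << p.eventCount << ") returned false\n";
            return 1;
        }
        return 0;
    }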

19 Library Code Opportunity for cross-test and cross-team failure resolution. Logging changes made to library code affect all tests that reference that code. Consider using either a named section or the Validation Trace model.

20 Dynamic Data Dynamic data should be marked in a consistent way and kept separate from other types of information.
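A small sketch of the idea, with a hypothetical per-run temp path: the stable part of the message stays constant so identical failures match across runs, and the dynamic data is tagged separately.

    // Sketch: keep dynamic data (here, a per-run temp path) in a marked field
    // so the stable part of the message matches from run to run.
    #include <iostream>
    #include <string>

    int main() {
        std::string tempPath = "C:\\Temp\\run-0042";  // changes on every execution

        // Stable message text first; dynamic data tagged separately.
        std::cout << "FAIL: could not open backup log file"
                  << " [dynamic path=\"" << tempPath << "\"]\n";
        return 1;
    }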

21 Timestamps Avoid tracing timestamps. When you have to trace a timestamp ◦ Trace durations, offsets, or elapsed time. If necessary ◦ Separate the data from the timestamp.
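A sketch of tracing a duration instead of a raw timestamp, and keeping the measurement out of the message text; the operation here is simulated with a short sleep.

    // Sketch: trace elapsed time rather than a wall-clock timestamp, and keep
    // the measurement separate from the message text.
    #include <chrono>
    #include <iostream>
    #include <thread>

    int main() {
        auto start = std::chrono::steady_clock::now();
        std::this_thread::sleep_for(std::chrono::milliseconds(50));  // stand-in for the operation
        auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now() - start);

        // The duration is the data; no timestamp embedded in the message.
        std::cout << "query completed [elapsedMs=" << elapsed.count() << "]\n";
        return 0;
    }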

22 Essential Information High quality logging practices generally share many of the following qualities ◦ Results are marked explicitly ◦ Partial results are marked ◦ Contains rich local data ◦ Contains rich global data ◦ Maintains clear information relationships ◦ Shares a common format ◦ Separates or marks dynamic data ◦ Uses consistent data tagging ◦ Produces consistent logs across test executions!

23 Questions And Answers Geoff Staneff (gstaneff@microsoft.com)

24 Glossary Teams have their own definitions for many test terms. To simplify conversations between different groups and organizations, the following terms are used here. Test Context ◦ A set of variables that describes the execution environment and input data for a test pass. Test Case ◦ A blueprint for verification or validation of an observable behavior. Test Point ◦ The combination of a Test Case with a Test Context such that a result is obtained. Test Pass ◦ A collection of Test Cases or Test Points.

