Test Logging and Automated Failure Analysis: Why Weak Automation Is Worse Than No Automation
Geoff Staneff


Overview ◦ Background ◦ Problem ◦ How we dealt with it ◦ Lessons Learned

Background Started at MSFT in '05, working on the Windows Event Log ◦ Service ◦ API ◦ UI. Test Environment ◦ Hundreds of automated tests ◦ Several platforms and languages ◦ Regular execution of test passes

Problem: Not enough time and too many failures. Analysis Paralysis ◦ Our weak automation was time-consuming to maintain. New Features ◦ Require new automation. Wide test matrix ◦ One set of code runs against many machines.

What we did about it Automated Failure Analysis will save us! In reality: ◦ Improving our test code to support AFA saved us: logging practices and test code quality.

What is Automated Failure Analysis? Automated Failure Analysis is a means of determining whether a specific observed failure has previously been diagnosed.

Purpose of Effective Test Logging Effective Test Logging provides a record of what a test observed, in sufficient detail to identify the defect(s) observed.

Test Logging What does it consist of? ◦ File loggers: text, xml, csv, etc. ◦ Log sources: ETW, EventLog, etc. ◦ Other data: e.g., code profiling. Why do we log? ◦ To support diagnosis ◦ To support identification

Logging Consequences ◦ Test logging decisions made early in the product cycle outlast their authors and management ◦ Certain failure analysis methods inspire or shatter confidence in the test process ◦ Certain logging practices enable or preclude various analysis methodologies

Methods of Failure Identification ◦ Logged Details ◦ Rollup Rules ◦ External Factors ◦ Summary Results ◦ Blended Rules ◦ Re-Run ◦ Ad-hoc

Logging Taxonomy Many advantages accrue when teams use the same names for the same data ◦ Team members can read test logs for tests they didn’t author ◦ Disciplines outside test can understand test failures ◦ Easier for new employees to produce good logs ◦ Wider product, test or lab issues can be identified across component boundaries

Trace Failure Context Knowing more about how the failure was computed will assist in diagnosis of the underlying defect. The following is an example of how one instance of a Windows API failure could be traced, from least to most informative: ◦ Test Failed. ◦ Expected 1. ◦ Found 0, expected 1. ◦ Win32BoolAPI returned 0, expected 1. ◦ Win32BoolAPI with arguments Arg1, Arg2, Arg3 returned 0, expected 1. ◦ Win32BoolAPI with arguments Arg1, Arg2, Arg3 returned 0 and set the last error to 0x57, expected 1 and 0x0.
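Below is a minimal C++ sketch of what the most detailed trace level above might look like in test code. Win32BoolAPI is the placeholder name from the slide, and the fprintf call stands in for whatever logger the test harness actually provides; both are assumptions for illustration.

```cpp
#include <windows.h>
#include <cstdio>

// Placeholder declaration for the API under test (name taken from the slide).
BOOL Win32BoolAPI(HANDLE arg1, DWORD arg2, LPCWSTR arg3);

bool VerifyWin32BoolApi(HANDLE arg1, DWORD arg2, LPCWSTR arg3)
{
    SetLastError(ERROR_SUCCESS);
    BOOL result = Win32BoolAPI(arg1, arg2, arg3);
    DWORD lastError = GetLastError();

    if (result == TRUE && lastError == ERROR_SUCCESS)
        return true;

    // Failure branch: log the call, its inputs, and observed vs. expected
    // outputs, so the defect can be identified from the log alone.
    std::fprintf(stderr,
        "FAIL: Win32BoolAPI(%p, %lu, %ls) returned %d and set the last error "
        "to 0x%lX; expected 1 and 0x0.\n",
        static_cast<void*>(arg1), arg2, arg3, result, lastError);
    return false;
}
```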

Avoid Unnecessary Data 6,000 lines of trace for 3 minutes of test execution is rarely a good idea. ◦ Move trace to failure branches ◦ Eliminate ambiguous trace ◦ Avoid looped trace ◦ Some counting trace may be useful; consider reporting only the count at failure
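As a hedged illustration of the last point, the sketch below counts iterations silently and emits a single line only on failure; ProcessRecord and the record list are hypothetical stand-ins, not anything from the original deck.

```cpp
#include <cstdio>
#include <vector>

// Hypothetical operation under test; declared only so the sketch is self-contained.
bool ProcessRecord(int record);

bool ProcessAll(const std::vector<int>& records)
{
    std::size_t processed = 0;
    for (int record : records)
    {
        if (!ProcessRecord(record))
        {
            // One line carries the loop context at failure,
            // instead of one trace line per iteration.
            std::fprintf(stderr,
                "FAIL: ProcessRecord(%d) failed after %zu of %zu records.\n",
                record, processed, records.size());
            return false;
        }
        ++processed; // counting only; nothing is logged on the success path
    }
    return true;
}
```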

Sections Use Sections to group related information ◦ A Section is simply a container element ◦ WTT’s version of a Section is a Context. Without Sections ◦ Individual authors often attempt to create their own on-the-fly sections by prepending a characteristic string to the test log output ◦ Unrelated information may match unintentionally
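The sketch below shows one way a section could be emitted as a container element in an XML-style log. The LogSection class and the exact XML shape are illustrative assumptions, not WTT's actual Context API.

```cpp
#include <cstdio>
#include <string>

// A scope-bound container element: everything traced through it lands
// inside the same named section of the log.
class LogSection
{
public:
    explicit LogSection(std::string name) : name_(std::move(name))
    {
        std::printf("<Section name=\"%s\">\n", name_.c_str());
    }
    ~LogSection()
    {
        std::printf("</Section>\n");
    }
    void Trace(const std::string& message) const
    {
        std::printf("  <Trace>%s</Trace>\n", message.c_str());
    }
private:
    std::string name_;
};

int main()
{
    LogSection setup("Setup");
    setup.Trace("Created temporary event log channel");
    setup.Trace("Registered test event provider");
    // The closing tag is emitted when the section goes out of scope.
}
```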

The Assert ◦ Avoid the Simple Assert ◦ Use Named Asserts ◦ Use Assert Sections ◦ Replace Asserts ◦ Use a Custom Trace Level for Asserts
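One possible reading of "named asserts" is sketched below: the assert carries a human-readable name that lands in the log alongside the failing expression. ASSERT_NAMED is a hypothetical macro invented for this example, not any particular framework's API.

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Hypothetical named assert: logs what was being verified, not just the expression.
#define ASSERT_NAMED(name, cond)                                          \
    do {                                                                   \
        if (!(cond)) {                                                     \
            std::fprintf(stderr, "ASSERT FAILED [%s]: %s (%s:%d)\n",       \
                         name, #cond, __FILE__, __LINE__);                 \
            std::abort();                                                  \
        }                                                                  \
    } while (0)

int main()
{
    int handleCount = 3;

    // Simple assert: a failure report shows only "handleCount == 3".
    assert(handleCount == 3);

    // Named assert: the log also says what the check means, so the reader
    // does not need the test source to interpret the failure.
    ASSERT_NAMED("EventLog service handle count after open", handleCount == 3);
    return 0;
}
```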

Validation & Status Trace ◦ Validation trace keeps terse statements in the main execution branch and verbose statements in the failure branches ◦ Knowing the last good operation is often necessary ◦ Limit status trace whenever possible ◦ Log status trace to a named section
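As a rough sketch of "terse in the main branch, verbose on failure, and know the last good operation", the code below records only a step name on success and expands into a full failure message when a step fails. DoStep and the step names are made up for the example.

```cpp
#include <cstdio>
#include <string>

// Hypothetical single operation of the scenario under test.
bool DoStep(const std::string& step);

bool RunScenario()
{
    const char* steps[] = { "OpenChannel", "WriteEvent", "QueryEvent", "CloseChannel" };
    std::string lastGood = "<none>";

    for (const char* step : steps)
    {
        if (!DoStep(step))
        {
            // Verbose trace lives only in the failure branch, and it names
            // the last operation known to have succeeded.
            std::fprintf(stderr,
                "FAIL: step '%s' failed; last good operation was '%s'.\n",
                step, lastGood.c_str());
            return false;
        }
        lastGood = step; // terse bookkeeping on the success path
    }
    std::printf("PASS: scenario completed.\n");
    return true;
}
```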

Setup Code ◦ Failures in setup code are often lab or test issues ◦ Test logs rarely classify setup trace any differently from product-related trace ◦ Consider modeling setup steps as a distinct test result ◦ Use a Setup section and standard names
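The sketch below illustrates one way setup failures could be reported as a distinct outcome (here called Blocked) under a Setup section. The Result enum, the section label, and the helper functions are assumptions for the example, not the team's actual harness.

```cpp
#include <cstdio>

// Blocked = setup/lab issue rather than a product defect.
enum class Result { Pass, Fail, Blocked };

bool PrepareTestChannel(); // hypothetical setup step
bool RunEventLogTest();    // hypothetical verification under test

Result ExecuteTest()
{
    if (!PrepareTestChannel())
    {
        // Logged under a Setup section with a distinct outcome, so automated
        // analysis can route the failure to lab/infrastructure owners.
        std::fprintf(stderr, "[Setup] BLOCKED: could not create test channel.\n");
        return Result::Blocked;
    }
    return RunEventLogTest() ? Result::Pass : Result::Fail;
}
```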

Parameter Trace ◦ Often represents the dynamic data fed into the test case at the start of the test ◦ Parameter trace can also identify the local variables passed to a function that fails ◦ Initial parameters belong in a Parameters section ◦ Function parameters should have their own sections
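A minimal sketch of logging the initial parameters into their own section follows; the XML shape and the specific parameter names (ChannelName, EventCount, Platform) are invented for illustration.

```cpp
#include <cstdio>
#include <map>
#include <string>

// Emit the test's input parameters as a dedicated Parameters section.
void LogParameters(const std::map<std::string, std::string>& params)
{
    std::printf("<Section name=\"Parameters\">\n");
    for (const auto& [name, value] : params)
        std::printf("  <Parameter name=\"%s\" value=\"%s\"/>\n",
                    name.c_str(), value.c_str());
    std::printf("</Section>\n");
}

int main()
{
    LogParameters({
        { "ChannelName", "Application" },
        { "EventCount",  "100" },
        { "Platform",    "x64" },
    });
}
```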

Library Code ◦ Opportunity for cross-test and cross-team failure resolution ◦ Logging changes made to library code impact all tests that reference that code ◦ Consider using either a named section or Validation Trace model

Dynamic Data Dynamic data should be marked in a consistent way and kept separate from other types of information.
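As one hedged illustration, the sketch below keeps the stable message text constant and moves the per-run values (a generated channel name, a process id) into marked data fields, which makes it easier for automated analysis to recognize the same failure across runs. The log shape is an assumption.

```cpp
#include <cstdio>

int main()
{
    const char* channel = "TestChannel-7f3a"; // generated per run
    unsigned long pid = 4242;                 // varies per run

    // Embedded in free text, every run would produce a unique string, e.g.
    //   "Open failed for TestChannel-7f3a in process 4242"
    // Marked as data fields, the message itself stays constant:
    std::printf(
        "<Trace msg=\"Open failed for test channel\">"
        "<Data name=\"Channel\">%s</Data>"
        "<Data name=\"ProcessId\">%lu</Data></Trace>\n",
        channel, pid);
}
```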

Timestamps Avoid tracing timestamps. When you have to trace a timestamp ◦ Trace durations, offsets, or elapsed time. If necessary ◦ Separate the data from the timestamp
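The sketch below traces an elapsed duration rather than raw timestamps and keeps the measured value separate from the message text; the log format and the sleep standing in for the operation under test are assumptions.

```cpp
#include <chrono>
#include <cstdio>
#include <thread>

int main()
{
    using namespace std::chrono;

    auto start = steady_clock::now();
    std::this_thread::sleep_for(milliseconds(25)); // stand-in for the operation under test
    auto elapsedMs =
        duration_cast<milliseconds>(steady_clock::now() - start).count();

    // The message stays constant across runs; only the marked value varies.
    std::printf(
        "<Trace msg=\"EvtQuery completed\"><Data name=\"ElapsedMs\">%lld</Data></Trace>\n",
        static_cast<long long>(elapsedMs));
}
```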

Essential Information High-quality logging practices generally share many of the following qualities ◦ Results are marked explicitly ◦ Partial results are marked ◦ Contains rich local data ◦ Contains rich global data ◦ Maintains clear information relationships ◦ Shares a common format ◦ Separates or marks dynamic data ◦ Uses consistent data tagging ◦ Produces consistent logs across test executions!

Questions And Answers Geoff Staneff

Glossary Teams have their own definitions for many test terms. To simplify conversations between different groups and organizations, the following terms will be used here: Test Context ◦ A set of variables that describes the execution environment and input data for a test pass. Test Case ◦ A blueprint for verification or validation of an observable behavior. Test Point ◦ The combination of a Test Case with a Test Context such that a result is obtained. Test Pass ◦ A collection of Test Cases or Test Points.