High Volume Test Automation in Practice Andy Tinkham Principal Lead Consultant, QAT Magenic Technologies

Acknowledgements » This presentation draws on the knowledge shared by the attendees of WTST 12 in Melbourne, FL (Jan 25-27, 2013, hosted by the Harris Institute for Assured Information at the Florida Institute of Technology and Kaner, Fiedler & Associates, LLC) » Cem Kaner, Catherine Karena, Michael Kelly, Rebecca Fiedler, Janaka Balasooriyi, Thomas Bedran, Jared Demott, Keith Gallagher, Doug Hoffman, Dan Hoffman, Harry Robinson, Rob Sabourin, Andy Tinkham, Thomas Vaniotis, Tao Xie, Casey Doran, Mark Fioravanti, Michal Frystacky, Scott Fuller, Nawwar Kabbani, Carol Oliver, Vadym Tereschenko » This material draws heavily on Cem Kaner's blog posts at kaner.com and context-driven-testing.com, referenced at the end of this slide deck

About me » 17 years in the testing industry » Principal Lead Consultant at Magenic Technologies » Doctoral student at Florida Tech » Host free virtual office hours roughly weekly

What is High Volume Test Automation (HiVAT)? A family of test techniques that enable a tester to run & evaluate arbitrarily many computer-assisted tests -- WTST 12 working definition

Let's break that down… » Family of test techniques: many ways to do HiVAT; different ways for different goals » Enable a tester: not replacing a human; augmenting a tester's skill set » Run & evaluate: need executable tests; need some sort of oracle » Arbitrarily many: easy to change the number of tests; not a 1:1 matchup with manual tests » Computer-assisted tests: a continuum of manual & automated; different tests at different spots

Manual & automated tests » Every test has manual elements: a human designs it, a human wrote the code, a human analyzes the results » Every test has automated elements: the computer transforms the inputs to outputs » Every test falls somewhere on a continuum between the two extremes

HiVAT tests tend toward the automated side » Human still designs overall tests (possibly very high-level) » Computer may determine inputs, paths, and expected results » Computer evaluates individual results » Human determines stopping criteria (number of tests, time, first bug) » Human analyzes overall results

…but are different from traditional automation » Include many iterations of execution » May run for longer periods of time » Sometimes involve more randomness » Can be focused on looking for unknown risks rather than identified risks

Why do HiVAT? » Find problems that occur in only a small subset of input values » Find difficult-to-encounter bugs like race conditions or corrupted state » Catch intermittent failures » Leverage idle hardware » Address risks and provide value in ways that traditional automation & manual testing don't normally do

How do we do HiVAT? » Lots of ways! » Kaner gives a classification scheme that covers many techniques (including the ones we're about to talk about): focus on inputs; exploit an available oracle; exploit existing tests or tools

Methods that focus on inputs » Testers usually divide inputs into equivalence classes and pick high-value representative values » For reasonably sized datasets, automation doesn't need to do this! » Run all (or at least many of) the values through the automation » Alternatively, use random input generation to get a stream of input values to use for testing
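As a minimal illustration, the Python sketch below runs an exhaustive sweep over a small domain and a random stream over a large one; the function under test, format_basis_points, is hypothetical and stands in for whatever your system computes.

```python
import random

def format_basis_points(value):
    """Hypothetical function under test: renders basis points as a percent string."""
    return f"{value / 100:.2f}%"

# Small domain: skip equivalence classes entirely and run every value,
# checking a cheap property of each result.
for bp in range(0, 10001):
    result = format_basis_points(bp)
    assert result.endswith("%"), f"bad output for {bp}: {result!r}"

# Huge domain: generate a stream of random inputs instead.
for _ in range(1_000_000):
    bp = random.randint(-10**12, 10**12)
    result = format_basis_points(bp)
    assert result.endswith("%"), f"bad output for {bp}: {result!r}"
```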

Parametric Variation » Replace small sets of equivalence-class representatives with far more values » Some input sets may allow running the total set of inputs » Doug Hoffman's MASPAR example » Others may still require sampling » Valid passwords example » Sampling can be optimized if the data is well understood » Can generate random values
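Here is a sketch of the sampling case using the valid-passwords example; the rule in system_accepts is invented for illustration and would be replaced by a driver against the real system.

```python
import random
import string

ALPHABET = string.ascii_letters + string.digits

def system_accepts(password):
    """Stand-in for the real system; this validity rule is invented for illustration."""
    return 8 <= len(password) <= 64 and any(c.isdigit() for c in password)

def random_valid_password():
    """Sample the space of passwords the (hypothetical) spec says must be accepted."""
    chars = random.choices(ALPHABET, k=random.randint(8, 64) - 1)
    chars.append(random.choice(string.digits))   # guarantee the required digit
    random.shuffle(chars)
    return "".join(chars)

# The space of valid passwords is far too large to enumerate, so sample it
# heavily instead of picking one representative per equivalence class.
for _ in range(250_000):
    candidate = random_valid_password()
    if not system_accepts(candidate):
        print(f"spec-valid password rejected: {candidate!r}")
```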

High-Volume Combination Testing » Testers often use combinatorial test techniques to get a workable set of combinations to cover interactions » These techniques leave combinations uncovered » If we know which uncovered combinations are more important or risky, we can add them to the test set » What about when we don't know which ones are of interest? » HiVAT tests can push many more combinations through the system than are usually tried » Sampling can work the same as in Parametric Variation » Retail POS system example
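A sketch of the idea, with invented POS-style parameters and a stub run_transaction standing in for the real driver: run every combination when the space is small, or a large random sample when it is not.

```python
import itertools
import random

# Hypothetical POS-style configuration parameters.
parameters = {
    "payment":  ["cash", "credit", "debit", "gift_card"],
    "discount": ["none", "percent", "fixed", "loyalty"],
    "receipt":  ["print", "email", "none"],
    "tax_zone": ["A", "B", "C"],
}

def run_transaction(combo):
    """Stand-in for driving the system under test with one combination."""
    return True

names = list(parameters)
all_combos = list(itertools.product(*parameters.values()))

# A pairwise tool would select a small covering subset; a HiVAT run instead
# covers every combination, or a large random sample when the space is too big.
budget = 100_000
combos = all_combos if len(all_combos) <= budget else random.sample(all_combos, budget)
for values in combos:
    assert run_transaction(dict(zip(names, values)))
```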

Input Fuzzing/Hostile Data Stream Testing » Given a known-good set of inputs » Make changes to the input and run each changed value through the system » Watch for buffer overruns, stack corruption, crashes, and other system-level problems » Expression Blend example » Alan Jorgensen's Acrobat Reader work
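A minimal mutation-fuzzing sketch; the reader binary and good.pdf seed file are placeholders for whatever program and known-good input you are targeting.

```python
import random
import subprocess

# Start from a known-good file, flip a few random bytes per iteration, and
# watch the consuming program for crashes or hangs.
with open("good.pdf", "rb") as f:
    seed = f.read()

for i in range(10_000):
    data = bytearray(seed)
    for _ in range(random.randint(1, 16)):
        data[random.randrange(len(data))] = random.randrange(256)
    with open("fuzzed.pdf", "wb") as f:
        f.write(data)
    try:
        proc = subprocess.run(["reader", "fuzzed.pdf"], capture_output=True, timeout=30)
        if proc.returncode < 0:          # killed by a signal: likely a crash
            print(f"iteration {i}: died with signal {-proc.returncode}")
    except subprocess.TimeoutExpired:
        print(f"iteration {i}: hung")
```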

Automated Security Vulnerability Checking » Scan an application for input fields » For each input field, try a variety of common SQL injection and cross-site scripting attacks to detect vulnerabilities » Mark Fioravanti's WTST paper (see references)
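A rough sketch of such a scanner, assuming a hypothetical form at http://localhost:8000/search; the payloads are classic probe strings, and real scanners use far richer detection than the naive checks shown here.

```python
import requests
from html.parser import HTMLParser

TARGET = "http://localhost:8000/search"        # hypothetical page with a form
PAYLOADS = ["' OR '1'='1", "<script>alert(1)</script>"]

class FieldFinder(HTMLParser):
    """Collect the names of <input> fields on the page."""
    def __init__(self):
        super().__init__()
        self.fields = []
    def handle_starttag(self, tag, attrs):
        if tag == "input":
            name = dict(attrs).get("name")
            if name:
                self.fields.append(name)

finder = FieldFinder()
finder.feed(requests.get(TARGET, timeout=10).text)

for field in finder.fields:
    for payload in PAYLOADS:
        resp = requests.post(TARGET, data={field: payload}, timeout=10)
        # A reflected payload or database error text suggests a vulnerability.
        if payload in resp.text or "SQL syntax" in resp.text:
            print(f"possible vulnerability: field={field!r}, payload={payload!r}")
```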

One problem with input-focused tests » We need an oracle! » It can be hard to verify the correctness of the results without duplicating the functionality we're testing » Input-focused tests may look only for the more obvious errors » Crashes » Memory problems » Simple calculations

Methods that exploit oracles » Sometimes we already have an oracle available » If so, we can take advantage of it!

Functional Equivalence » Run lots of inputs through the SUT and another system that does the same thing, then compare outputs » FIT Testing 2 exam example
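For example (a sketch: the hand-rolled sut_sqrt plays the part of the system under test, and math.sqrt is the trusted second system that does the same thing):

```python
import math
import random

def sut_sqrt(x):
    """Hand-rolled Newton's method, playing the role of the system under test."""
    guess = x or 1.0
    for _ in range(60):
        guess = (guess + x / guess) / 2
    return guess

# math.sqrt serves as the oracle: run lots of inputs through both and compare.
for _ in range(200_000):
    x = random.uniform(1e-6, 1e12)
    expected = math.sqrt(x)
    actual = sut_sqrt(x)
    if not math.isclose(actual, expected, rel_tol=1e-9):
        print(f"mismatch for {x}: got {actual}, expected {expected}")
```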

Constraint Checks » Look for obviously bad data » US ZIP codes that aren't 5 or 9 digits long » End dates that occur before start dates » Pictures that don't look right
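A small sketch of a constraint-check oracle; the field names and rules are illustrative. The point is that the check needs no exact expected value, only a definition of obviously bad.

```python
import re
from datetime import date

ZIP_RE = re.compile(r"^\d{5}(-\d{4})?$")

def check_record(record):
    """Flag obviously bad data without knowing the exactly-correct output."""
    problems = []
    if not ZIP_RE.match(record["zip"]):
        problems.append(f"malformed ZIP: {record['zip']!r}")
    if record["end_date"] < record["start_date"]:
        problems.append("end date precedes start date")
    return problems

# Example: both constraints fail for this (illustrative) record.
bad = {"zip": "3290", "start_date": date(2013, 5, 1), "end_date": date(2013, 4, 1)}
print(check_record(bad))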

State-Model Walking » Three things are required » A state model of the application » A way to drive the application » A way to determine what state we're in
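A toy sketch of the walking loop, with a three-state media-player model; drive and current_state are stubs where a real harness would talk to the application, so all three required pieces are visible.

```python
import random

# Toy state model of a media player: state -> {action: next_state}.
MODEL = {
    "stopped": {"play": "playing"},
    "playing": {"pause": "paused", "stop": "stopped"},
    "paused":  {"play": "playing", "stop": "stopped"},
}

state = "stopped"

def drive(action):
    pass                      # placeholder: send the action to the application

def current_state():
    return state              # placeholder: ask the application where it is

for step in range(100_000):
    action, expected = random.choice(sorted(MODEL[state].items()))
    drive(action)
    state = expected
    if current_state() != expected:
        print(f"step {step}: model expected {expected!r}, app reports {current_state()!r}")
        break
```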

Methods that exploit existing tests or tools » Existing artifacts can be used in high-volume testing » Tests » Load Generators

Long-Sequence Regression Testing » Take a set of individually passing automated regression tests » Run them together in long chains over extended periods of time » Watch for failures » Actions may leave corrupted state that only appears later » The sequence of actions may be important » Mentsville example
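A sketch of the core loop, with placeholder test functions; note the deliberate absence of per-test cleanup, so corrupted state can accumulate and surface later in the chain.

```python
import random
import time

# Placeholder regression tests; each passes on its own in a clean session.
def test_open_file(): ...
def test_edit_cell(): ...
def test_print_preview(): ...

TESTS = [test_open_file, test_edit_cell, test_print_preview]
SOAK_SECONDS = 8 * 60 * 60          # soak length; long runs last hours

start = time.monotonic()
step = 0
while time.monotonic() - start < SOAK_SECONDS:
    step += 1
    test = random.choice(TESTS)
    try:
        test()                      # no per-test cleanup: let state accumulate
    except Exception as exc:
        print(f"step {step}: {test.__name__} failed deep in the sequence: {exc}")
        break
```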

High-Volume Protocol Testing » Send a string of commands to a protocol handler » Web service method calls » API calls » Protocols with defined order
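A sketch with a toy in-process handler; in practice the calls would go to a real web service or API, and the ordering rule here is invented for illustration. Defined protocol errors are acceptable; anything else is a finding.

```python
import random

# Toy protocol handler with one ordering rule: send() is only legal
# between connect() and disconnect(). Random call sequences probe that rule.
class Handler:
    def __init__(self):
        self.connected = False
    def connect(self):
        self.connected = True
    def send(self, data):
        if not self.connected:
            raise RuntimeError("send before connect")   # defined protocol error
    def disconnect(self):
        self.connected = False

for trial in range(100_000):
    handler = Handler()
    for call in random.choices(["connect", "send", "disconnect"], k=20):
        try:
            if call == "send":
                handler.send("x")
            else:
                getattr(handler, call)()
        except RuntimeError:
            pass        # documented protocol error: acceptable
        except Exception as exc:
            print(f"trial {trial}: unexpected {type(exc).__name__} on {call}: {exc}")
```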

Load-enhanced Functional Testing » Run your existing automated functional tests AND your automated load generation at the same time » Add in additional diagnostic monitoring if available » Systems behave differently under load » System resource problems may not be visible when resources are plentiful » Timing issues
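A sketch of the combination, with placeholder load and test functions: the existing functional suite runs unchanged while background threads keep the system busy.

```python
import threading
import time

def background_load(stop):
    """Placeholder load generator: replace the sleep with real traffic."""
    while not stop.is_set():
        time.sleep(0.001)        # stand-in for issuing one request

def run_functional_suite():
    """Placeholder for your existing automated functional tests."""
    time.sleep(1)

stop = threading.Event()
workers = [threading.Thread(target=background_load, args=(stop,)) for _ in range(8)]
for worker in workers:
    worker.start()
try:
    run_functional_suite()       # same tests as always, now under sustained load
finally:
    stop.set()
    for worker in workers:
        worker.join()
```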

Starting HiVAT in your organization » Inventory what you already have » Existing tests you can chain together (preferably without intervening clean-up code) » Tools you can put to additional uses » Oracles you can use » Places where small samples have been chosen from a larger data set » Hardware that is sometimes sitting idle

Starting HiVAT in your organization » Match your inventory up to techniques that can take advantage of it » Think about what sorts of risks and problems a technique could reveal in your application » For each risk, do you have other tests that can reasonably be expected to cover that issue? » How much value is there in getting information about the risk? » How much effort is required to get the information? » What other tasks could you do in the same time? » Is the value of the information greater than the cost to implement plus the value of the other tasks?

Summary » High volume automated testing is a family of test techniques focused on running an arbitrary number of tests » The number of tests is often defined by an amount of time or coverage of a set of values rather than trying for a minimal set » Some high-volume techniques focus on covering a set of inputs » Some take advantage of an accessible oracle » Some reuse existing artifacts in new ways » Determining what makes sense for you is a matter of risk and value

References » Cem Kaner's High Volume Test Automation Overview » Cem's WTST 12 write-up » WTST 12 home page (with links to papers and slides, including Mark Fioravanti's) » Doug Hoffman's MASPAR example » Alan Jorgensen's Testing With Hostile Data Streams paper » Pat McGee & Cem Kaner's Long-Sequence Regression Test (Mentsville) plan

Contact Information Andy Tinkham Magenic Technologies