
Slide 1: James Bach, Satisfice, Inc.
Copyright © 1996-2003, Satisfice, Inc. V1.6.1
james@satisfice.com | www.satisfice.com | (540) 631-0600

Slide 2: Copyright Notice
These slides are distributed under the Creative Commons License. In brief: you may make and distribute copies of these slides so long as you give the original author credit, and, if you alter, transform, or build upon this work, you distribute the resulting work only under a license identical to this one. For the full details of the license, see http://creativecommons.org/licenses/by-sa/2.0/legalcode.

Slide 3: Acknowledgements
- Some of this material was developed in collaboration with Dr. Cem Kaner of the Florida Institute of Technology.
- Many of the ideas in this presentation were inspired by or contributed by colleagues including Bret Pettichord, Brian Marick, Doug Hoffman, Dave Gelperin, Elisabeth Hendrickson, and Noel Nyman.
- This class is under continuous development. Many ideas were improved or contributed by students in versions of the class taught since 1996.

Slide 4: Assumptions
- You test software.
- You have at least some control over the design of your tests and some time to create new tests.
- One of your goals is to find important bugs fast.
- You test things under conditions of uncertainty and time pressure.
- You have control over how you think and what you think about.
- You want to get very good at software testing.

Slide 5: Primary Goal of this Class
To teach you how to test a product when you have to test it right now, under conditions of uncertainty, in a way that stands up to scrutiny.

Slide 6: Background

Slide 7: Your Moves: The Rapid Testing Cycle
(Diagram: a loop from START to STOP.) Make sense of your status; focus on what needs doing; do a burst of testing; compare status against your mission; report.

Slide 8: What About “Slow” Testing?
(Diagram: “Rapid” testing shown as a region within “All Testing”; beyond it lies “Rigorous or Thorough” testing, which requires automation, extensive preparation, super testability, and super skill.)
Management likes to talk about rigorous testing… but they don’t fund it. Rapid testing you can do, no matter what.

Slide 9: What is quality? What is a bug?
Quality is value to some person. A bug is anything that threatens the value of the product.
- These definitions are designed to be inclusive.
- Inclusive definitions minimize the chance that you will inadvertently overlook an important problem.

Slide 10: Testing is in Your Head
The important parts of testing don’t take place in the computer or on your desk.
(Diagram labels: technical knowledge, domain knowledge, critical thinking, experience; specifications and product feed coverage; “Ah! Problem!” leads to a problem report and communication.)

Slide 11: A Test (or Test Suite) is Like a Question You Ask the Product
- A tester’s questions seek potentially valuable information.
- To some degree, good tests have these attributes:
  - Power. When a problem exists, the test will reveal it.
  - Validity. When the test reveals a problem, it is a genuine problem.
  - Value. It reveals things your clients want to know about the product or project.
  - Pop (short for Karl Popper). It reveals things about our basic or critical assumptions.
  - Coverage. It exercises the product in some way.
  - Performability. It can be performed as designed and repeated as needed.
  - Accountability. You can explain, justify, and prove you ran it.
  - Cost. This includes time and effort, as well as direct costs.
  - Opportunity Cost. Performing it may prevent you from doing other tests.
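
A minimal sketch (not from the original slides) of how these attributes could be used as a checklist when comparing candidate tests. The class, attribute names as identifiers, and the 0-2 rating scale are assumptions for illustration; real weighing is a human judgment.

    from dataclasses import dataclass, field

    ATTRIBUTES = ["power", "validity", "value", "pop", "coverage",
                  "performability", "accountability", "cost", "opportunity_cost"]

    @dataclass
    class TestIdea:
        name: str
        ratings: dict = field(default_factory=dict)  # attribute -> 0 (poor) .. 2 (strong)

        def weigh(self):
            # Unrated attributes count as 0; the sum is a comparison aid only.
            return sum(self.ratings.get(a, 0) for a in ATTRIBUTES)

    idea = TestIdea("stress clip-art insertion",
                    {"power": 2, "validity": 2, "cost": 1})
    print(idea.name, "scores", idea.weigh())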

Slide 12: Contrasting Approaches
In scripted testing, tests are first designed and recorded. Then they may be executed at some later time or by a different tester.
In exploratory testing, tests are designed and executed at the same time, and they often are not recorded.

Slide 13: Contrasting Approaches
Scripted testing emphasizes accountability and decidability. Exploratory testing emphasizes adaptability and learning.

Slide 14: Exploratory Testing Defined
Exploratory testing is simultaneous learning, test design, and test execution.
(Diagram: a continuum from pure scripted, through vague scripts, fragmentary test cases, charters, and roles, to freestyle exploratory.)
When I say “exploratory testing” and don’t qualify it, I mean anything on the exploratory side of this continuum.

Slide 15: ET Done Well is a Structured Process
- Exploratory testing, as I teach it, is a structured process conducted by a skilled tester, or by lesser-skilled testers or users working under reasonable supervision.
- The structure of ET comes from:
  - test design heuristics
  - chartering
  - time boxing
  - perceived product risks
  - the nature of specific tests
  - the structure of the product being tested
  - the process of learning the product
  - development activities
  - constraints and resources afforded by the project
  - the skills, talents, and interests of the tester
  - the overall mission of testing
In other words, it’s not “random,” but reasoned.

Slide 16: ET is an Adaptive Process
Exploratory testing decentralizes the testing problem. Instead of trying to solve it:
- only before test execution begins,
- by investing in expensive test documentation that tends to reduce the total number of tests that can be created,
- only via a designer who is not necessarily the tester,
- while trying to eliminate the variations among testers,
- completely, and all at once…
…it is solved:
- over the course of the project,
- by minimizing the need for expensive test documentation so that more tests and more complex tests can be created with the same effort,
- via testers who may also be test designers,
- by taking maximum advantage of variations among testers,
- incrementally and cyclically.

Slide 17: Exploratory Forks
New test ideas occur continually during an ET session. (Diagram: a test path repeatedly forking at each new test idea.)

Slide 18: Lateral Thinking
Let yourself be distracted… ’cause you never know what you’ll find… but periodically take stock of your status against your mission.

Slide 19: Exploratory Testing Tasks
(Diagram: three interacting tasks, Learning, Design Tests, and Execute Tests, spanning the product (coverage), techniques, and quality (oracles); outputs are testing notes, tests, and problems found. A sketch of the interleaving follows.)
- Learning: discover the elements of the product; discover how the product should work; discover test design techniques that can be used.
- Design tests: decide which elements to test; speculate about possible quality problems; select and apply test design techniques.
- Execute tests: configure and operate the product; observe product behavior; evaluate behavior against expectations.
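
A minimal sketch, in loop form, of how these three tasks interleave within one session. Every function here is a hypothetical stand-in for human tester activity; the randomized “suspicious” flag is only a placeholder for a real oracle judgment.

    import random
    import time

    # Hypothetical stand-ins for human tester activity:
    def learn_about_product(charter):
        return f"observation about {charter}"

    def design_next_test(observation):
        return f"test derived from {observation!r}"

    def execute(test):
        return {"test": test, "suspicious": random.random() < 0.1}

    def run_session(charter, minutes=90, max_bursts=25):
        notes, tests, problems = [], [], []
        deadline = time.monotonic() + minutes * 60
        while time.monotonic() < deadline and len(tests) < max_bursts:
            obs = learn_about_product(charter)   # learning
            notes.append(obs)
            test = design_next_test(obs)         # test design
            tests.append(test)
            result = execute(test)               # test execution
            if result["suspicious"]:             # crude stand-in for an oracle
                problems.append(result)
        return notes, tests, problems

    notes, tests, problems = run_session("Insert Picture function")
    print(len(tests), "tests;", len(problems), "possible problems")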

Slide 20: Taking Notes
- Test coverage outline/matrix
- Oracle notes
- Risk/strategy list
- Test execution log
- Issues, questions, and anomalies:
  - “It would be easier to test if you changed/added…”
  - “How does … work?”
  - “Is this important to test? How should I test it?”
  - “I saw something strange…”

Slide 21: KEY IDEA

Slide 22: Models
A model is…
- a map of a territory
- a simplified perspective
- a relationship of ideas
- an incomplete representation of reality
- a diagram, list, outline, matrix…
No good test design has ever been done without models. The trick is to become aware of how you model the product, and to learn different ways of modeling.

Slide 23: The Universal Test Procedure
“Try it and see if it works.”
(Diagram columns: Models, Coverage, Oracles.)
- Learn about it
- Model it
- Speculate about it
- Configure it
- Operate it
- Know what to look for
- See what’s there
- Understand the requirements
- Identify problems
- Distinguish bad problems from not-so-bad problems

Slide 24: All Product Testing is Something Like This
(Diagram: the project environment, product elements, and quality criteria inform test techniques, which yield perceived quality.)

Slide 25: Seven Big Problems of Testing
- Logistics problem
- Coverage problem
- Oracle problem
- Reporting problem
- Stopping problem
- Pesticide problem
- Agency problem

Slide 26: Coverage
Product coverage is the proportion of the product that has been tested.
There are as many kinds of coverage as there are ways to model the product:
- structural
- functional
- data
- platform
- operations
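
A minimal sketch of the “proportion of the product that has been tested” idea, assuming the product is modeled as a simple outline. The model and element names are invented for illustration; the key point is that the percentage is only meaningful relative to a chosen model.

    # A toy product model, one list per kind of coverage:
    product_model = {
        "structural": ["module A", "module B"],
        "functional": ["insert picture", "print"],
        "data":       ["empty file", "huge file"],
        "platform":   ["Windows", "Mac"],
        "operations": ["first-time use", "expert workflow"],
    }

    tested = {"insert picture", "print", "empty file", "Windows"}

    all_elements = {e for group in product_model.values() for e in group}
    proportion = len(tested & all_elements) / len(all_elements)
    print(f"product coverage (by this model): {proportion:.0%}")

A different model of the same product would give a different number, which is why coverage claims should always name the model behind them.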

Slide 27: Sometimes your coverage is disputed…
“No user would do that.” This usually means: “No user I can think of, who I like, would do that on purpose.”
- Who aren’t you thinking of?
- Who don’t you like who might really use this product?
- What might good users do by accident?

Slide 28: Useful Oracle Heuristics
- Consistent with history: present function behavior is consistent with past behavior.
- Consistent with our image: function behavior is consistent with an image that the organization wants to project.
- Consistent with comparable products: function behavior is consistent with that of similar functions in comparable products.
- Consistent with claims: function behavior is consistent with what people say it’s supposed to be.
- Consistent with user’s expectations: function behavior is consistent with what we think users want.
- Consistent within product: function behavior is consistent with the behavior of comparable functions or functional patterns within the product.
- Consistent with purpose: function behavior is consistent with its apparent purpose.
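
A minimal sketch of just the first heuristic, “consistent with history,” implemented as a golden-master comparison. The file location, key format, and the function under test are assumptions for illustration only.

    import json
    import pathlib

    HISTORY = pathlib.Path("past_behavior.json")  # assumed location

    def consistent_with_history(function_name, inputs, current_output):
        """Return False when behavior differs from the recorded past behavior.
        A mismatch is a reason to investigate, not proof of a bug: the
        recorded history itself may be wrong."""
        history = json.loads(HISTORY.read_text()) if HISTORY.exists() else {}
        key = f"{function_name}:{inputs!r}"
        if key not in history:                    # first sighting: record it
            history[key] = current_output
            HISTORY.write_text(json.dumps(history))
            return True
        return history[key] == current_output

    if not consistent_with_history("round_price", (2.675,), 2.67):
        print("Behavior changed since last run; investigate.")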

Slide 29: Rapid, Frequent Feedback to Clients
(Diagram: the test cycle.) Receive the build → sanity check (is it testable?) → fix verifications (were they fixed?) → new stuff (is it functional?) → common and critical tests → complex tests → general regression tests.
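
A minimal sketch of this cycle as a gated pipeline: later stages run only if earlier gates pass. The stage functions and the build dictionary are hypothetical placeholders; in practice each gate is real testing work, not a lookup.

    def sanity_check(build):
        return build.get("installs", False)       # is it testable?

    def verify_fixes(build):
        # Return the claimed-fixed bugs that did not actually verify.
        return [bug for bug in build.get("claimed_fixed", []) if not bug["fixed"]]

    def test_cycle(build):
        if not sanity_check(build):
            return "reject build: not testable"
        reopened = verify_fixes(build)            # were they fixed?
        if reopened:
            print("reopen:", [b["id"] for b in reopened])
        # New functionality first, then widen out:
        order = ["new stuff (is it functional?)",
                 "common and critical tests",
                 "complex tests",
                 "general regression tests"]
        return "run in order: " + "; ".join(order)

    print(test_cycle({"installs": True,
                      "claimed_fixed": [{"id": 101, "fixed": True}]}))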

Slide 30: Risk Focus: Common and Critical Cases
- Core functions: the critical and the popular.
- Capabilities: can the functions work at all?
- Common situations: popular data and pathways.
- Common threats: likely stress and error situations.
- User impact: failures that would do a lot of damage.
- Most wanted: problems of special interest to someone else on the team.

Slide 31: Rapid Bug Investigation
- Identification
  - Notice a problem.
  - Recall what you were doing just prior to the problem.
  - Examine symptoms of the problem without disturbing system state.
  - Consider the possibility of tester error.
- Investigation
  - How can the problem be reproduced?
  - What are the symptoms of the problem?
  - How severe could the problem be?
  - What might be causing the problem?
  - What might be a workaround?
- Reality check
  - Do we know enough about the problem to report it?
  - Is it important to investigate this problem right now?
  - Is this problem, or any variant of it, already known?
  - How do we know this is really a problem?
  - Is there someone else who can help us?

Slide 32: KEY IDEA

Slide 33: Test Strategy
- Strategy: “the set of ideas that guide your test design.”
- Logistics: “the set of ideas that guide your application of resources to fulfilling the test strategy.”
- Plan: “the set of ideas that guide your test project.”
A good test strategy is:
- product-specific
- risk-focused
- diversified
- practical

Slide 34: Test Strategy
- “Test approach” and “test architecture” are other terms commonly used to describe what I’m calling test strategy.
- Example of a poorly stated (and probably poorly conceived) test strategy:
  - “We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification.”

Slide 35: Test Strategy
- Not to be confused with test logistics, which involve the details of bringing resources to bear on the test strategy at the right time and place.
- You don’t have to know the entire strategy in advance. The strategy should change as you learn more about the product and its problems.

Slide 36: One way to make a strategy…
1. Learn the product.
2. Think of important potential problems.
3. Think of ways to test that will cover the product and look for those important problems.
4. Make sure you are taking advantage of available resources.
5. Make sure that your strategy is reasonably practical.

Slide 37: Test Strategy Heuristic: Diverse Half-Measures
- There is no single technique that finds all bugs.
- We can’t do any technique perfectly.
- We can’t do all conceivable techniques.
So use “diverse half-measures”: lots of different points of view, approaches, and techniques, even if no one strategy is performed completely.

Slide 38: Strategy Heuristic: The Function/Data Square
(Diagram: a square with functions on one axis and data on the other; regions are labeled smoke testing, function testing, reliability testing, and risk testing.)

Slide 39: Test Techniques
A test technique is a recipe for performing these tasks in a way that will reveal something worth reporting:
- Analyze the situation.
- Model the test space.
- Select what to cover.
- Define your oracles.
- Configure the test system.
- Operate the test system.
- Observe the test system.
- Evaluate the test results.

Slide 40: Dynamic Quality Paradigm
(Diagram: a quality scale from Awful to Perfect with a floating “good enough” bar. Above the bar lies unnecessary quality; below it, unacceptable quality. Item A sits above the bar: further improvement would not be a good use of resources. Item B sits below it: further improvement is necessary. It’s more important to work on Item B.)

Slide 41: A Heuristic for Good Enough
All of these conditions must apply:
1. X has sufficient benefits.
2. X has no critical problems.
3. The benefits of X sufficiently outweigh its problems.
4. In the present situation, and all things considered, improving X would be more harmful than helpful.
(Diagram: a balance weighing benefits against problems.)
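
A minimal sketch of the heuristic’s logic only. Each condition is a human judgment, not something a program can decide; the code merely encodes the all-must-apply rule from the slide.

    def good_enough(sufficient_benefits, no_critical_problems,
                    benefits_outweigh_problems, improvement_would_harm):
        # All four conditions must apply; any single failure means
        # "not good enough yet".
        return all([sufficient_benefits, no_critical_problems,
                    benefits_outweigh_problems, improvement_would_harm])

    print(good_enough(True, True, True, False))  # False: keep improving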

Slide 42: Good Enough…
Perspective is everything:
- …with what level of confidence?
- …to meet ethical obligations?
- …in what time frame?
- …compared to what?
- …for what purpose?
- …or else what?
- …for whom?

Slide 43: MISSION: The Most Important Part
- Find important problems
- Assess quality
- Certify to standard
- Fulfill process mandates
- Satisfy stakeholders
- Assure accountability
- Advise about QA
- Advise about testing
- Advise about quality
- Maximize efficiency
- Minimize time
- Minimize cost
The quality of testing depends on which of these possible missions matter and how they relate. Many debates about the goodness of testing are really debates over missions and givens.

Slide 44: Testability
- Controllability (e.g., a scriptable interface!)
- Observability (e.g., log files!)
- Availability
- Simplicity
- Stability
- Information
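
A minimal sketch of the “log files!” point: structured, timestamped logging that a tester (or tool) can scan after a session. The operation being logged and the log file name are invented for illustration.

    import logging

    logging.basicConfig(
        filename="product.log",
        level=logging.INFO,
        format="%(asctime)s %(levelname)s %(message)s",
    )

    def insert_picture(path):
        # Log entry and exit so a tester can reconstruct what happened.
        logging.info("insert_picture start path=%s", path)
        try:
            # ... real work would happen here ...
            logging.info("insert_picture ok path=%s", path)
        except Exception:
            logging.exception("insert_picture failed path=%s", path)
            raise

    insert_picture("clipart/cat.png")

Even this much gives a tester observability that a silent product denies: after a crash or a long soak test, the log shows what the product was doing and when.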

Slide 45: It Boils Down To…
- YOU: skills, equipment, experience, attitude.
- THE BALL: the product, testing tasks, bugs.
- YOUR TEAM: coordination, roles, support.
- THE GAME: risks, rewards, project environment, corporate environment, your mission as a tester.
- YOUR MOVES: how you spend your attention and energy to help your team win the game.

Slide 46: Rapid Testing
- Develop your scientific mind.
- Use exploratory testing.
- Know your coverage and oracles.
- Run crisp test cycles that focus first on areas of risk.
- Use a diversified test strategy that serves the mission.
- Assure that your testing fits the logistics of the project.

Slide 47: Exploratory Process

Slide 48: Introducing the Test Session
1) Charter
2) Time box
3) Reviewable result
4) Debriefing

Slide 49: Charter: A clear mission for the session
- A charter may suggest what should be tested, how it should be tested, and what problems to look for.
- A charter is not meant to be a detailed plan.
- General charters may be necessary at first:
  - “Analyze the Insert Picture function.”
- Specific charters provide better focus, but take more effort to design:
  - “Test clip art insertion. Focus on stress and flow techniques, and make sure to insert into a variety of documents. We’re concerned about resource leaks or anything else that might degrade performance over time.”

Slide 50: Time Box: Focused test effort of fixed duration
Short: 60 minutes (±15). Normal: 90 minutes (±15). Long: 120 minutes (±15).
- Brief enough for accurate reporting.
- Brief enough to allow flexible scheduling.
- Brief enough to allow course correction.
- Long enough to get solid testing done.
- Long enough for efficient debriefings.
- Beware of overly precise timing.
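
A minimal sketch of the box sizes and tolerances as a classifier for a finished session’s actual duration. The box names and sizes come from the slide; the classifier itself is an invented illustration.

    BOXES = {"short": 60, "normal": 90, "long": 120}
    TOLERANCE = 15

    def classify(minutes):
        # Note: at exactly 75 or 105 minutes two boxes match and the first
        # wins, which echoes the slide's warning against overly precise timing.
        for name, target in BOXES.items():
            if abs(minutes - target) <= TOLERANCE:
                return name
        return "outside any box: consider splitting or extending"

    print(classify(70))   # short
    print(classify(100))  # normal
    print(classify(150))  # outside any box: consider splitting or extending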

Slide 51: Debriefing: Measurement begins with observation
- The manager reviews the session sheet to assure that he understands it and that it follows the protocol.
- The tester answers any questions.
- Session metrics are checked.
- The charter may be adjusted.
- The session may be extended.
- New sessions may be chartered.
- Coaching happens.

Slide 52: Reviewable Result: A scannable session sheet
Sheet fields:
- Charter (#AREAS)
- Start time
- Tester name(s)
- Task breakdown (#DURATION, #TEST DESIGN AND EXECUTION, #BUG INVESTIGATION AND REPORTING, #SESSION SETUP, #CHARTER/OPPORTUNITY)
- Data files
- Test notes
- Bugs (#BUG)
- Issues (#ISSUE)

Example (excerpt):

    CHARTER
    -----------------------------------------------
    Analyze MapMaker’s View menu functionality and report on
    areas of potential risk.

    #AREAS
    OS | Windows 2000
    Menu | View
    Strategy | Function Testing
    Strategy | Functional Analysis

    START
    -----------------------------------------------
    5/30/00 03:20 pm

    TESTER
    -----------------------------------------------
    Jonathan Bach

    TASK BREAKDOWN
    -----------------------------------------------
    #DURATION
    short
    #TEST DESIGN AND EXECUTION
    65
    #BUG INVESTIGATION AND REPORTING
    25
    #SESSION SETUP
    20
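
A minimal sketch of why the #TAG layout makes sheets machine-scannable: a few lines of parsing recover the task-breakdown metrics from text like the example above. The parsing rules here are an assumption inferred from the sample, not the official SBTM tooling.

    sheet = """\
    TASK BREAKDOWN
    -----------------------------------------------
    #DURATION
    short
    #TEST DESIGN AND EXECUTION
    65
    #BUG INVESTIGATION AND REPORTING
    25
    #SESSION SETUP
    20
    """

    metrics, tag = {}, None
    for raw in sheet.splitlines():
        line = raw.strip()
        if line.startswith("#"):
            tag = line[1:]                 # a tag names the next value
        elif tag and line and not line.startswith("-"):
            metrics[tag] = line            # first non-blank line is the value
            tag = None

    print(metrics)
    # {'DURATION': 'short', 'TEST DESIGN AND EXECUTION': '65',
    #  'BUG INVESTIGATION AND REPORTING': '25', 'SESSION SETUP': '20'}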

Slide 53: Coverage: Specifying coverage areas
- These are text labels listed in the charter section of the session sheet (e.g., “insert picture”).
- Coverage areas can include anything:
  - areas of the product
  - test configuration
  - test strategies
  - system configuration parameters
- Use the debriefings to check the validity of the specified coverage areas.

Slide 54: Closing Concepts

Slide 55: Common Concerns About ET
- Concerns:
  - We have a long-life product and many versions, and we want a good corporate memory of key tests and techniques. Corporate memory is at risk because of the lack of documentation.
  - The regulators would excommunicate us. The lawyers would massacre us. The auditors would reject us.
  - We have specific tests that should be rerun regularly.
- Replies:
  - So use a balanced approach, not a purely exploratory one.
  - Even if you script all tests, you needn’t outlaw exploratory behavior.
  - Let no regulation or formalism be an excuse for bad testing.

Slide 56: Common Concerns About ET
- Concern:
  - Some tests are too complex to be kept in the tester’s head. The tester has to write stuff down or he will not do a thorough or deep job.
- Replies:
  - There is no inherent conflict between ET and documentation.
  - There is often a conflict between writing high-quality documentation and doing ET when both must be done at the same moment. But why do that?
  - Automatic logging tools can solve part of the problem.
  - Exploratory testing can be aided by any manner of test tool, document, or checklist. It can even be done from detailed test scripts.

Slide 57: Common Concerns About ET
- Concern:
  - ET works well for expert testers, but we don’t have any.
- Replies:
  - Detailed test procedures do not solve that problem; they merely obscure it, like perfume on a rotten egg.
  - Our goal as test managers should be to develop skilled testers so that this problem disappears over time.
  - Since ET requires test design skill in some measure, ET management must constrain the testing problem to fit the level and type of test design skill possessed by the tester.
  - I constrain the testing problem by personally supervising the testers and making use of concise documentation, NOT by using detailed test scripts. Humans make poor robots.

Slide 58: Common Concerns About ET
- Concerns:
  - How do I tell the difference between bluffing and exploratory testing?
  - If I send a scout and he comes back without finding anything, how do I know he didn’t just go to sleep behind some tree?
- Replies:
  - You never know for sure, just as you don’t know whether a tester truly followed a test procedure.
  - It’s about reputation and relationships. Managing testers is like managing executives, not factory workers.
  - Give novice testers short leashes and better testers long leashes. An expert tester may not need a leash at all.
  - Work closely with your testers, and these problems go away.

Slide 59: Challenges of High-Accountability Exploratory Testing
- Architecting the system of charters (test planning)
- Making time for debriefings
- Getting the metrics right
- Creating good test notes
- Keeping the technique from dominating the testing
- Maintaining commitment to the approach
For example session sheets and metrics, see http://www.satisfice.com/sbtm.

