The Testing of Axioms: Advancing Testing Using Axioms
Paul Gerrard
Axioms – a Brief Introduction
Advancing Testing Using Axioms
First Equation of Testing
Test Strategy and Approach
Testing Improvement
A Skills Framework for Testers
Quantum Theory for Testing
Close
Surely, there must be SOME things that ALL testers can AGREE ON? Or are we destined to argue FOREVER?
Started as a ‘thought experiment’ on my blog in February 2008. Some quite vigorous debate on the web: ‘great idea’, ‘axioms don’t exist’, ‘Paul has his own testing school’. The initial 12 ideas evolved into 16 test axioms.
Testers Pocketbook: testers-pocketbook.com
Test Axioms website: test-axioms.com
Some very useful by-products: test strategy, improvement, a skills framework. Interesting research areas! The First Equation of Testing, the Testing Uncertainty Principle, Quantum Theory, Relativity, the Exclusion Principle... You can tell I like physics.
There are no agreed definitions of test or testing!
The words software, IT, program, technology, methodology, v-model, entry/exit criteria and risk do not appear in the definitions.
American Heritage Dictionary: Test: (noun) A procedure for critical evaluation; A means of determining the presence, quality, or truth of something; A trial.
A testing stakeholder is someone who is interested in the outcome of testing. You can be your OWN stakeholder (e.g. developers and users).
Let’s look at a few of the test axioms
Testing needs stakeholders
Test design is based on models
Testers need sources of knowledge to select things to test
Testing needs a test coverage model or models
Our sources of knowledge are fallible and incomplete
The value of testing is measured by the confidence of stakeholder decision making
Testing never goes as planned; evidence arrives in discrete quanta.
(Cartoon caption: “Ohhhhh... Look at that, Schuster... Dogs are so cute when they try to comprehend quantum mechanics.”)
Testing never finishes; it stops
Consider Axioms as thinking tools
Axioms + Context + Values + Thinking = Approach
Separation of axioms, context, values and thinking. Tools, methodologies, certification and maturity models promote approaches without reference to your context or values: no thinking is required! Without a unifying test theory you have no objective way of assessing these products.
Strategy is a thought process not a document
Test Strategy (mind map): Risks, Goals, Constraints, Human resources, Environment, Timescales, Process (lack of?), Contract, Culture, Opportunities, User involvement, Automation, De-duplication, Early testing, Skills, Communication, Axioms, Artefacts.
Summary: identify and engage the people or organisations that will use and benefit from the test evidence we are to provide.
Consequence if ignored or violated: there will be no mandate or authority for testing; reports of passes, fails or enquiries have no audience.
Questions: Who are they? Whose interests do they represent? What evidence do they want? What do they need it for? When do they want it? In what format? How often?
Summary: choose test models to derive tests that are meaningful to stakeholders. Recognise the models’ limitations and the assumptions they make.
Consequence if ignored or violated: test design will be meaningless and not credible to stakeholders.
Questions: Are design models available to use as test models? Are they mandatory? What test models could be used to derive tests from the test basis? Which test models will be used? Are test models to be documented, or are they purely mental models? What are the benefits of using these models? What simplifying assumptions do these models make? How will these models contribute to the delivery of evidence useful to the acceptance decision makers? How will these models combine to provide sufficient evidence without excessive duplication? How will the number of tests derived from models be bounded?
1. Test Plan Identifier
2. Introduction
3. Test Items
4. Features to be Tested
5. Features not to be Tested
6. Approach
7. Item Pass/Fail Criteria
8. Suspension Criteria and Resumption Requirements
9. Test Deliverables
10. Testing Tasks
11. Environmental Needs
12. Responsibilities
13. Staffing and Training Needs
14. Schedule
15. Risks and Contingencies
16. Approvals
Based on the IEEE 829 standard test plan outline.
Items 1, 2 – Administration
Items 3–5 – Scope management, prioritisation
Item 6 – all the Axioms are relevant
Items 7, 8 – Good-Enough, Value
Item 9 – Stakeholder, Value, Confidence
Item 10 – all the Axioms are relevant
Item 11 – Environment
Item 12 – Stakeholder
Item 13 – all the Axioms are relevant
Item 14 – all the Axioms are relevant
Item 15 – Fallibility, Event
Item 16 – Stakeholder
1. Stakeholder objectives: stakeholder management; goal and risk management; decisions to be made and how (acceptance); how testing will provide confidence and be assessed; how scope will be determined.
2. Design approach: sources of knowledge (bases and oracles); sources of uncertainty; models to be used for design and coverage; prioritisation approach.
3. Delivery approach: test sequencing policy; repeat-test policies; environment requirements; information delivery approach; incident management approach; execution and end-game approach.
4. Plan (high- or low-level): scope; tasks; responsibilities; schedule; approvals; risks and contingencies.
Test process improvement is a waste of time
There are no “practice Olympics” to determine the best. There is no consensus about which practices are best, unless consensus means “people I respect also say they like it”. There are practices that are more likely to be considered good and useful than others, within a certain community and assuming a certain context. Good practice is not a matter of popularity; it’s a matter of skill and context. (Derived from “No Best Practices”, James Bach.)
Actually it’s 11 (most were not software-related).
Google search:
“CMM” – 22,300,000 hits
“CMM training” – 48,200
“CMM improves quality” – 74 (but really 11 – most of these have NOTHING to do with software)
A Gerrard Consulting client: CMM level 3 and proud of it (chaotic, hero culture). They hired us to assess their overall software process and make recommendations (quality and time to deliver were slipping). Of 40+ recommendations, only 7 were adopted; they couldn’t change. How on earth did they get through the CMM level 3 audit?
Using process change to fix cultural or organisational problems is never going to work. Improving test in isolation is never going to work either. We need to look at changing context rather than values…
Context + Values + Thinking = Approach
(your context + your values + your thinking = your approach)
Axioms + Context + Values + Thinking = Approach
(Axioms: recognise them. Context: hard to change. Values: could change? Thinking: just do some. The result is your approach.)
The Axioms represent the critical things to think about. The associated questions act as checklists to: assess your current approach; identify gaps and inconsistencies in the current approach; QA your new approach in the future. The Axioms represent the WHAT; your approach specifies the HOW.
Mission, Coalition, Vision, Communication, Action, Wins, Consolidation, Anchoring. Changes are identified here. If you must use one, this is where your ‘test model’ comes into play.
Axioms indicate WHAT to think about... so the Axioms point to SKILLS.
Summary: choose test models to derive tests that are meaningful to stakeholders. Recognise the models’ limitations and the assumptions they make.
Consequence if ignored or violated: test design will be meaningless and not credible to stakeholders.
Questions: Are design models available to use as test models? Are they mandatory? What test models could be used to derive tests from the test basis? Which test models will be used? Are test models to be documented, or are they purely mental models? What are the benefits of using these models? What simplifying assumptions do these models make? How will these models contribute to the delivery of evidence useful to the acceptance decision makers? How will these models combine to provide sufficient evidence without excessive duplication? How will the number of tests derived from models be bounded?
A tester needs to understand: test models and how to use them; how to select test models from fallible sources of knowledge; how to design test models from fallible sources of knowledge; the significance, authority and precedence of test models; how to use models to communicate; the limitations of test models; and needs familiarity with common models. Is this all that current certification provides?
Functional testers are endangered: certification covers process and clerical skills, and functional testing is becoming a commodity that is easy to outsource. To survive, testers need to specialise: management; test automation; test strategy and design (goal- and risk-based); stakeholder management; non-functional testing; business domain specialisms...
Intellectual skills and capabilities are more important than clerical skills. We need to re-focus on: testing thought processes (the Axioms); real-world examples, not theory; testing as information provision; goal- and risk-based testing; testing as a service (to stakeholders); practical, hands-on, real-world training, exercises and coaching.
If evidence arrives in discrete quanta......can we assign a value to it?
Tests are usually run one by one. Every individual test has some significance. Some tests expose failures, but ultimately we want all tests to PASS. When all tests pass, the stakeholders are happy, aren’t they? Can we measure confidence? But...
Testers cannot usually: prepare all the tests they COULD do; run ALL the tests in the plan; re-test ALL fixes; regression-test as much or as often as required. So how do we judge the significance of tests: to include them in scope for planning (or not)? To execute them in the right order? To ensure the most significant tests are run?
What stakeholders ultimately want is for every test to pass. The ideal situation: we have run all our tests; all our tests pass; acceptance is a formality. Not all tests pass, though. We track incidents, severity and priority – great. But how do we track the significance or value of the tests that pass?
Significance varies by objective: the criticality of the business goal or risk the test covers.
Significance varies by precedent: the first end-to-end test pass is significant; subsequent e2e passes are less significant.
Significance varies by functional dependence: a test of shared functionality is more important than one of standalone functionality.
Significance varies by stakeholder: customer and sponsor tests are more significant than developer tests.
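The four factors above could be combined into one relative score per test. This is a minimal Python sketch, not part of the Axioms themselves: every field name, weight and multiplier here is an illustrative assumption.

```python
from dataclasses import dataclass

# Hypothetical weights; the slide names the factors but gives no numbers.
STAKEHOLDER_WEIGHT = {"customer": 3.0, "sponsor": 3.0, "developer": 1.0}

@dataclass
class TestCase:
    name: str
    goal_criticality: int       # 1 (low) .. 5 (critical business goal or risk)
    stakeholder: str            # who asked for this evidence
    shared_functionality: bool  # covers functionality other features depend on
    precedent_rank: int         # 1 = first of its kind (e.g. first e2e pass)

def significance(t: TestCase) -> float:
    """Combine the four factors into a single relative score."""
    score = float(t.goal_criticality)                 # objective
    score *= STAKEHOLDER_WEIGHT.get(t.stakeholder, 1.0)  # stakeholder
    if t.shared_functionality:                        # functional dependence
        score *= 2.0
    return score / t.precedent_rank                   # precedent: repeats count less

tests = [
    TestCase("first e2e order flow", 5, "customer", True, 1),
    TestCase("repeat e2e order flow", 5, "customer", True, 3),
    TestCase("dev unit check", 2, "developer", False, 1),
]
# Run the most significant evidence first.
for t in sorted(tests, key=significance, reverse=True):
    print(f"{t.name}: {significance(t):.1f}")
```

The point is not the particular numbers but that the ranking is explicit and can be discussed with stakeholders before execution; significance stays relative, as the caveat later in the deck insists.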
Stakeholders usually know how to judge the significance of failures when tests FAIL. So why don’t we assess the significance of tests BEFORE we run them? If we did: we could scope and prioritise more effectively; we would know exactly which tests provide enough information for an acceptance decision; acceptance criteria would be taken seriously.
Using business goals, risks and coverage to drive testing is ‘advanced’, but it is still VERY CRUDE. The Quantum Testing proposal: assign a micro-significance to every test; assess the macro-significance of collections of tests; as tests are created and executed, evidence increases incrementally; manage progress by monitoring EVIDENCE rather than by counting test cases.
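Monitoring evidence rather than counting test cases can be sketched in a few lines. Assuming each test already carries a significance weight (the micro-significance above), progress becomes the fraction of total evidence delivered; the weights and test names below are invented for illustration.

```python
def evidence_progress(weights, passed):
    """Fraction of total evidence delivered.

    weights: dict of test name -> significance weight (micro-significance)
    passed:  set of test names that have passed so far
    """
    total = sum(weights.values())
    delivered = sum(w for name, w in weights.items() if name in passed)
    return delivered / total if total else 0.0

# Illustrative weights only: two high-significance tests, one trivial one.
weights = {"e2e smoke": 30.0, "payment risk": 20.0, "ui cosmetics": 2.0}
passed = {"e2e smoke", "payment risk"}

# By test count we are at 2 of 3, yet most of the evidence is already in.
print(f"tests run: {len(passed)}/{len(weights)}")
print(f"evidence delivered: {evidence_progress(weights, passed):.0%}")
```

The same run reads very differently through the two lenses: 2 of 3 tests executed, but roughly 96% of the evidence delivered, which is the kind of report a stakeholder can actually act on.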
We and our stakeholders could know the value of tests BEFORE we run them. Stakeholders would understand WHAT we are doing and WHY. The problem of ‘enough testing’ becomes a shared challenge (testers and stakeholders). Caveats: we assign significance qualitatively rather than numerically, and significance is RELATIVE rather than absolute!
Axioms are context-neutral rules for testing. The Equation of Testing separates axioms, context, values and thinking, so we can have sensible conversations about process. The Axioms and their associated questions provide context-neutral checklists for test strategy, assessment/improvement and skills. Quantum Testing aims to address the question, “how much testing is enough?”
Thank You!
testaxioms.com
testers-pocketbook.com
gerrardconsulting.com