Related Topics
- Continuous Integration
- Personas
- CRUD Analysis
- MoSCoW
- Software Test Automation
- TDD
- Agile Testing
- SW Design Patterns
- Walkthrough
- V&V Standards
- Defect Tracking Tools
- Code Inspection
- When a "Bug" Is Found
- Software Reliability
- Software Metrics
- HCI
Positive and Negative Testing
Positive Testing
- Do "normal" user actions.
- Find cases where the program does not do what it is supposed to do.
- Test with valid input.
Negative Testing
- Do "abnormal" user actions.
- Find cases where the program does things it is not supposed to do.
- Test with invalid input.
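The distinction above can be sketched in code. This is a minimal illustration, assuming a hypothetical `parse_age()` function; the name and its validation rules are inventions for the example, not part of the slides.

```python
# Hypothetical function under test (illustrative only).
def parse_age(text):
    """Parse a non-negative, plausible integer age from a string."""
    value = int(text)              # raises ValueError on non-numeric input
    if value < 0 or value > 150:
        raise ValueError("age out of range")
    return value

# Positive test: valid input, the program does what it is supposed to do.
assert parse_age("42") == 42

# Negative tests: invalid input must be rejected, not silently accepted.
for bad in ["-1", "abc", "999"]:
    try:
        parse_age(bad)
    except ValueError:
        pass                       # expected: invalid input is rejected
    else:
        raise AssertionError(f"accepted invalid input: {bad!r}")
```

Note that a passing negative test asserts the *presence* of an error response, which is exactly the behaviour positive tests never exercise.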
Outline
- Software Test in General
- Value-based Software Test
Most Common Software Problems
- Incorrect calculation
- Incorrect or ineffective data edits
- Incorrect matching and merging of data
- Data searches that yield incorrect results
- Incorrect processing of data relationships
- Incorrect coding/implementation of business rules
- Inadequate software performance
- Confusing or misleading data
- Poor usability for end users & obsolete software
- Inconsistent processing
- Unreliable results or performance
- Inadequate support of business needs
- Incorrect or inadequate interfaces with other systems
- Inadequate performance and security controls
- Incorrect file handling
Cost to Fix Faults
- During definition: 1x
- During development: 1.5x to 6x
- Post-release: 60x to 100x
Objectives of Testing
- Executing a program with the intent of finding an error.
- To check whether the system meets the requirements and can be executed successfully in the intended environment.
- To check whether the system is "fit for purpose".
- To check whether the system does what it is expected to do.
A Good Test
- A good test case is one that has a high probability of finding an as-yet-undiscovered error.
- A successful test is one that uncovers an as-yet-undiscovered error.
- A good test is not redundant.
- A good test should be "best of breed".
- A good test should be neither too simple nor too complex.
Objectives of a Software Tester
- Find bugs as early as possible and make sure they get fixed.
- Understand the application well.
- Study the functionality in detail to find where bugs are likely to occur.
- Study the code to ensure that each and every line of code is tested.
- Create test cases that uncover hidden bugs and also ensure that the software is usable and reliable.
How do you know you are a good tester?
Signs that you are dating a tester:
- Your love letters get returned to you marked up with red ink, highlighting your grammar and spelling mistakes.
- When you ask him how you look in a dress, he'll actually tell you.
- He won't help you change a broken light bulb because his job is simply to report and not to fix.
- He'll keep bringing up old problems that you've since resolved, just to make sure they're truly gone.
Static and Dynamic Verification
- Software reviews, inspections, and walkthroughs: concerned with analysis of the static system representation to discover problems (static verification).
- Software testing with test cases: concerned with exercising and observing product behaviour (dynamic verification). The system is executed with test data and its operational behaviour is observed.
Inspections and Testing
- Inspections and testing are complementary, not opposing, verification techniques.
- Both should be used during the V&V process.
- Inspections can check conformance with a specification, but not conformance with the customer's real requirements.
- Inspections cannot check non-functional characteristics such as performance, usability, etc.
Test Data and Test Cases
- Test data: inputs which have been devised to test the system.
- Test cases: inputs to test the system, together with the predicted outputs from those inputs if the system operates according to its specification.
Methods of Testing
- Test to specification: black box, data-driven, functional testing. The code is ignored; only the specification document is used to develop test cases.
- Test to code: glass box/white box, logic-driven testing. The specification is ignored; only the code is examined.
Types of Testing (Jokes)
- Aggression Testing: If this doesn't work, I'm gonna kill somebody.
- Compression Testing:
- Confession Testing: Okay, okay, I did program that bug.
- Congressional Testing: Are you now, or have you ever been, a bug?
- Depression Testing: If this doesn't work, I'm gonna kill myself.
- Egression Testing: Uh-oh, a bug… I'm outta here.
- Digression Testing: Well, it works, but can I tell you about my truck…
- Expression Testing: a bug.
- Obsession Testing: I'll find this bug if it's the last thing I do.
- Oppression Testing: Test this now!
- Poission Testing: Alors! Regardez le poission!
- Repression Testing: It's not a bug, it's a feature.
- Secession Testing: The bug is dead! Long live the bug!
- Suggestion Testing: Well, it works, but wouldn't it be better if…
Ref: netfunny.com
Testing Levels
- Unit testing
- Integration testing
- System testing
- Acceptance testing
Unit Testing
- The most "micro" scale of testing; tests done on particular functions or code modules.
- Requires knowledge of the internal program design and code.
- Done by programmers (not by testers).
- Supported by unit testing tools.
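As a concrete sketch, a unit test exercises one function in isolation. The `leap_year()` helper below is a hypothetical example, using Python's standard `unittest` module as the unit testing tool.

```python
import unittest

def leap_year(year):
    """Return True if the given year is a leap year (Gregorian rules)."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_four(self):
        self.assertTrue(leap_year(2024))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))   # divisible by 100 but not 400

    def test_divisible_by_400(self):
        self.assertTrue(leap_year(2000))

# Run with: python -m unittest <module_name>
```

Each test method checks a single boundary of the function's logic, which is why writing these tests requires knowledge of the internal design.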
Integration Testing
- Testing of combined parts of an application to determine their functional correctness.
- "Parts" can be: code modules, individual applications, client/server applications on a network.
Types of Integration Testing
- Top-down
- Bottom-up
- Sandwich
- Big-bang
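Top-down integration can be sketched with a stub: the high-level module is tested before the lower-level module it depends on exists. All names here are illustrative assumptions, not from the slides.

```python
def tax_stub(amount):
    """Stub standing in for the not-yet-integrated tax module."""
    return 0.0                     # canned response

def total_price(amount, tax_fn):
    """High-level module under test; depends on a lower-level tax module."""
    return amount + tax_fn(amount)

# Top-down step 1: exercise the high-level module against the stub...
assert total_price(100.0, tax_stub) == 100.0

# ...step 2: swap in the real lower-level module once it is ready.
def real_tax(amount):
    return amount * 0.20

assert total_price(100.0, real_tax) == 120.0
```

Bottom-up integration reverses this: the real `real_tax` would be tested first via a throwaway driver, and `total_price` integrated on top of it later.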
Systems Testing
- To test the co-existence of products and applications that are required to perform together in a production-like operational environment (hardware, software, network).
- To ensure that the system functions together with all the components of its environment as a total system.
- To ensure that system releases can be deployed in the current environment.
Acceptance Testing
Objective:
- To verify that the system meets the user requirements.
When:
- After system testing.
Input:
- Business needs & detailed requirements
- Master test plan
- User acceptance test plan
Output:
- User acceptance test report
Load Testing
- Testing an application under heavy loads.
- E.g., testing a web site under a range of loads to determine when the system's response time degrades or the system fails.
Stress Testing
- Testing under unusually heavy loads: heavy repetition of certain actions or inputs, input of large numerical values, large complex queries to a database, etc.
- The term is often used interchangeably with "load" and "performance" testing.
Performance Testing
- Testing how well an application complies with performance requirements.
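A minimal sketch of the idea behind a load test, under the assumption of a toy in-process `handle_request()` function (real load tests drive a deployed system over the network with dedicated tools):

```python
import time

def handle_request(payload):
    """Stand-in for the system under test."""
    return sum(payload)

def load_test(n_requests, budget_seconds):
    """Repeatedly hit the system; fail if any response exceeds the budget."""
    worst = 0.0
    for _ in range(n_requests):
        start = time.perf_counter()
        handle_request(list(range(1000)))
        worst = max(worst, time.perf_counter() - start)
    return worst <= budget_seconds

print(load_test(1000, budget_seconds=0.05))
```

Raising `n_requests` or the payload size until the budget check fails is the stress-testing variant: the point is to find where response time degrades, not just whether the output is correct.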
Alpha Testing
- Testing done when development is nearing completion; minor design changes may still be made as a result of such testing.
Beta Testing
- Testing done when development and testing are essentially completed and final bugs and problems need to be found before release.
Good Test Plans (1/2)
- Developed and reviewed early.
- Clear, complete, and specific.
- Specify tangible deliverables that can be inspected.
- Staff knows what to expect and when to expect it.
Good Test Plans (2/2)
- Set realistic quality levels for goals.
- Include time for planning.
- Can be monitored and updated.
- Include user responsibilities.
- Based on past experience.
- Recognize learning curves.
Test Cases
Contents:
- Test plan reference id
- Test case
- Test condition
- Expected behavior
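The four contents listed above can be captured as a record type. This is an illustrative sketch only; the field names mirror the slide, while the toy addition function and its expected values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    test_plan_ref: str    # test plan reference id
    test_case_id: str     # test case
    condition: str        # test condition being exercised
    input_data: tuple     # the test data
    expected: object      # expected behavior per the specification

cases = [
    TestCase("TP-1", "TC-01", "sum of two positives", (2, 3), 5),
    TestCase("TP-1", "TC-02", "sum with a negative", (-1, 4), 3),
]

def run(fn, case):
    """Compare actual output against the case's expected behavior."""
    return fn(*case.input_data) == case.expected

results = {c.test_case_id: run(lambda a, b: a + b, c) for c in cases}
```

Recording the expected behavior alongside the input is what makes each case's pass/fail result unambiguous and inspectable.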
Good Test Cases (1/2)
- Find defects.
- Have a high probability of finding a new defect.
- Produce an unambiguous, tangible result that can be inspected.
- Repeatable and predictable.
Good Test Cases (2/2)
- Traceable to requirements or design documents.
- Push systems to their limits.
- Execution and tracking can be automated.
- Do not mislead.
- Feasible.
Outline
- Software Test in General
- Value-based Software Test
Tester's Attitude and Mindset
"The job of tests, and the people that develop and run tests, is to prevent defects, not to find them."
- Mary Poppendieck
Pareto 80-20 Distribution of Test Case Value [Bullock, 2000]
[Chart omitted: plots % of value against customer type for a "Correct Customer Billing" case study, comparing the actual business value curve (a Pareto distribution) with the output of an automated test generation tool, under which all tests have equal value*.]
*Usual SwE assumption for all requirements, objects, defects, …
How much test is enough?
Li, Q., Yang, Y., Li, M., Wang, Q., Boehm, B. W. and Hu, C., "Improving software testing process: feature prioritization to make winners of success-critical stakeholders." Journal of Software Maintenance and Evolution: Research and Practice. doi: /smr.512
Value-based Test Order Logic
- Value first: test the case with the highest value.
- Dependency second: if the test case with the highest value is not "Ready-to-Test" (i.e., at least one test case in its Dependencies Set is "Not-Tested-Yet"), prioritize the "Not-Tested-Yet" cases within that Dependencies Set by the same "Value First" rule and test them until every case in the Dependencies Set is "Passed". The highest-value test case is then "Ready-to-Test".
- Shrink the prioritization set ASAP: exclude each tested case from the prioritization set.