Approaches to Testing Software (presentation transcript)

1 Approaches to Testing Software
Some of us “hope” that our software works as opposed to “ensuring” that our software works. Why?
– Just foolish
– Lazy
– Believe that it is too costly (time, resources, effort, etc.)
– Lack of knowledge
DO NOT use the “I feel lucky” or “I feel confident” approach to testing, although you may feel that way sometimes. Use a methodical approach to testing to back up the “I feel lucky/confident” feeling.
– The methods and metrics utilized must show VALUE
– Value, unfortunately, is often expressed in negative terms:
  - Severe problems that cost lives or the business itself
  - Problems that cost more than the testing expense and effort

2 Perspective on Testing
Today we test because we know that systems have problems; we are fallible. We test:
1. To find problems and the parts that do not work
2. To understand and demonstrate the parts that do work
3. To assess the quality of the overall product (a major QA and release-management responsibility)
You are asked to do this as part of your Assignment 1 – Part II report.

3 Some Working Definitions (based on IEEE terminology)
Error
– A mistake made by a human
– The mistake may be in the requirements, design, code, fix, integration, or installation
Fault
– A defect (or defects) in the artifact that resulted from an error
– Some faults caused by errors may or may not be detectable (e.g., an error of omission in a requirement)
Failure
– The manifestation of a fault when the software is “executed” (running code)
– May show up in several places
– May be non-code related (e.g., in the reference manual)
Incident
– The detectable symptom of a failure (not in the text; includes errors of omission and non-code artifacts)
Example: a bank account (see the sketch below).
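A minimal sketch in Python (hypothetical; the slide only hints at a bank-account example) showing how one human error becomes a fault in the code, a failure at run time, and a detectable incident:

```python
# Hypothetical bank-account sketch illustrating the chain:
# error -> fault -> failure -> incident.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        # The FAULT: the programmer's ERROR (misreading "reject withdrawals
        # that exceed the balance") produced a reversed comparison; it
        # should be `amount > self.balance`.
        if amount < self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

acct = Account(balance=100)
acct.withdraw(500)     # The FAILURE: the fault manifests when the code runs;
                       # an over-draw is silently allowed.
print(acct.balance)    # The INCIDENT: the customer sees -400, the detectable symptom.
```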

4 Testing in the “Large”
Testing is concerned with all of the following, but may not be able to detect all of them:
– Errors
– Faults
– Failures
– Incidents
Testing utilizes the notion of test cases to perform the testing activities:
– Inspecting the non-executables
– Executing the code
– Analyzing the results and formally “proving” both the non-executables and the executables in a business-workflow (or user) setting

5 Software Activities and Error Injection, Fault Passing, and Fault Removal
[Diagram: errors committed during Requirements, Design, Code, and Fixing inject faults that pass downstream; Inspection and Testing remove faults.]
Note that in “fixing” faults/failures, one can commit new errors and introduce new faults.

6 Specification vs Implementation
[Venn diagram: the Specification circle is what is Expected; the Implementation circle is what is Actual. Their overlap is the ideal region where expectation and actual “match”.]
The other areas are of concern, especially to testers.

7 Specification vs Implementation vs Test Cases
[Venn diagram: three overlapping circles, Specification (Expected), Implementation (Actual), and Test Cases (Tested), forming seven numbered regions.]
What do these numbered regions mean to you?

8 Black Box vs White Box Code Testing
Black-box testing (functional testing)
– Looks mainly at the inputs and outputs
– Mainly uses the specification (requirements) as the source for designing test cases
– The internals of the implementation are not included in the test-case design
– Hard to detect “missing” specification
White-box testing (structural testing)
– Looks at the internals of the implementation
– Test cases are designed from the design and the code implementation
– Hard to detect “extraneous” implementation that was never specified
We need both black-box and white-box testing. (A sketch contrasting the two follows.)
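A minimal sketch contrasting the two approaches, assuming a hypothetical classify_triangle function as the unit under test:

```python
# Hypothetical function under test: classify a triangle by its side lengths.
def classify_triangle(a, b, c):
    if a + b <= c or a + c <= b or b + c <= a:
        return "not a triangle"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: derived from the specification alone, with no
# knowledge of the code's internal structure.
assert classify_triangle(3, 3, 3) == "equilateral"
assert classify_triangle(3, 4, 5) == "scalene"

# White-box tests: derived from the code itself, aiming to execute each
# clause of the triangle-inequality check at least once.
assert classify_triangle(1, 2, 3) == "not a triangle"  # a + b <= c
assert classify_triangle(2, 3, 1) == "not a triangle"  # a + c <= b
assert classify_triangle(3, 1, 2) == "not a triangle"  # b + c <= a
assert classify_triangle(3, 3, 5) == "isosceles"       # a == b clause
```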

9 A Sample “Document Form” for Tracking Each Test Case
– Test case number
– Test case author
– A general description of the test purpose
– Pre-condition
– Test inputs
– Expected outputs (if any)
– Post-condition
– Test case history:
  - Test execution date
  - Test execution person
  - Test execution result(s)
(A sketch of this form as a record type follows.)
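A minimal sketch of the document form as record types; all names (TestCase, ExecutionRecord, the field names) are assumptions, not from the slide:

```python
from dataclasses import dataclass, field
from datetime import date

# Hypothetical record mirroring the "test case history" entries.
@dataclass
class ExecutionRecord:
    executed_on: date
    executed_by: str
    result: str                 # e.g. "Pass" or "Fail"

# Hypothetical record mirroring the slide's "document form".
@dataclass
class TestCase:
    number: int
    author: str
    purpose: str
    precondition: str
    inputs: list
    expected_outputs: list
    postcondition: str
    history: list = field(default_factory=list)   # ExecutionRecord entries

tc = TestCase(
    number=17,
    author="J. Tester",
    purpose="Withdrawal must be rejected when it exceeds the balance",
    precondition="Account exists with balance 100",
    inputs=[500],
    expected_outputs=["insufficient funds"],
    postcondition="Balance unchanged at 100",
)
tc.history.append(ExecutionRecord(date(2024, 1, 15), "J. Tester", "Fail"))
```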

10 Recording Test Results
Use the same “form” that describes the test case (see the earlier slide on the “document form” test case) and expand the “results” to include:
– Pass or Fail on the execution-result line
– If “Failed”:
  1. Show output or some other indicator to demonstrate the fault or failure
  2. Assess and record the severity of the fault or failure found

11 Fault/Failure Classification (Tsui)
– Very high severity: brings the system down, or a function is non-operational and there is no workaround
– High severity: a function is not operational, but there is a manual workaround
– Medium severity: a function is partially operational, but the work can be completed with some workaround
– Low severity: minor inconveniences, but the work can be completed
(A sketch of this scale as an enumeration follows.)
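A minimal sketch of the Tsui scale as a Python enumeration; attaching it to a test result, as shown, is an assumption about how the form from the earlier slides might use it:

```python
from enum import Enum

# The four Tsui severity levels from the slide.
class Severity(Enum):
    VERY_HIGH = "system down or function non-operational, no workaround"
    HIGH = "function not operational, manual workaround exists"
    MEDIUM = "function partially operational, workaround completes the work"
    LOW = "minor inconvenience, work can be completed"

# Hypothetical usage: record a severity alongside a failed execution result.
failed_result = {"result": "Fail", "severity": Severity.HIGH}
```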

12 Fault Classification (Beizer), in order of increasing severity
– Mild: misspelled word
– Moderate: misleading or redundant information
– Annoying: truncated names; billing for $0.00
– Disturbing: some transactions not processed
– Serious: a transaction is lost
– Very serious: incorrect transaction execution
– Extreme: frequent and very serious errors
– Intolerable: database corruption
– Catastrophic: system shutdown
– Infectious: a shutdown that spreads to other systems

13 IEEE List of “Anomalies” (Faults)
– Input/output faults
– Logic faults
– Computation faults
– Interface faults
– Data faults
Why do you care about these “types” of faults (the results of errors made)? Because they give us some idea of what to look for in inspections and in designing future test cases.

14 Different Levels of Testing
[Diagram: program units A, B, …, T are exercised by Unit Testing; functions 1–8 by Functional Testing; components by Component Testing; and the whole system by System Testing.]
(A sketch exercising two of these levels follows.)
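A minimal sketch of two of these levels, using an assumed, corrected Account class and a hypothetical transfer_funds workflow:

```python
import unittest

# Hypothetical units under test (an assumed, corrected Account).
class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def transfer_funds(src, dst, amount):
    # A workflow that spans more than one unit.
    src.withdraw(amount)
    dst.balance += amount

# Unit testing: one program unit (Account.withdraw) in isolation.
class WithdrawUnitTest(unittest.TestCase):
    def test_rejects_overdraw(self):
        acct = Account(balance=100)
        with self.assertRaises(ValueError):
            acct.withdraw(500)

# Functional testing: a user-visible workflow spanning several units.
class TransferFunctionalTest(unittest.TestCase):
    def test_transfer_moves_money(self):
        src, dst = Account(balance=100), Account(balance=0)
        transfer_funds(src, dst, 40)
        self.assertEqual((src.balance, dst.balance), (60, 40))

if __name__ == "__main__":
    unittest.main()
```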

15 Still Need to Demonstrate the Value of Testing
“Catastrophic” problems (e.g., life-ending or business-ending ones) do not need any measurements, but others do:
– Measure the cost of problems found by customers:
  - Cost of problem reporting/recording
  - Cost of problem re-creation
  - Cost of problem fix and retest
  - Cost of solution packaging and distribution
  - Cost of managing the customer problem-to-resolution steps
– Measure the cost of discovering the problems and fixing them prior to release:
  - Cost of planning reviews (inspections) and testing
  - Cost of executing reviews (inspections) and testing
  - Cost of fixing the problems found and retesting
  - Cost of inserting fixes and updates
  - Cost of managing problem-to-resolution steps
– Compare the above two costs AND include the loss of customer “good will” (a worked sketch follows)
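A worked sketch of the comparison; every dollar figure below is invented purely for illustration:

```python
# Hypothetical per-problem costs; all figures are invented for illustration.
found_by_customer = {
    "reporting/recording": 500,
    "re-creation": 2_000,
    "fix and retest": 4_000,
    "packaging and distribution": 3_000,
    "managing problem-to-resolution": 1_500,
}
found_before_release = {
    "planning reviews and testing": 800,
    "executing reviews and testing": 1_200,
    "fixing and retesting": 1_000,
    "inserting fixes and updates": 300,
    "managing problem-to-resolution": 400,
}

post_release = sum(found_by_customer.values())      # 11,000
pre_release = sum(found_before_release.values())    #  3,700
print(f"Per problem: ${post_release:,} after release vs ${pre_release:,} before")
print(f"Savings per problem caught early: ${post_release - pre_release:,}")
# ...and the after-release figure still omits the loss of customer good will.
```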

16 Goals of Testing?
– Test as much as time allows: execute as many test cases as the schedule allows?
– Validate all the “key” areas: test only the designated “key” requirements?
– Find as many problems as possible: test all the likely error-prone areas and maximize the number of problems found?
– Validate the requirements: test all the requirements?
– Test to reach a “quality” target: what is the quality target?
State your goal(s) for testing. What would you like people to say about your system? Your goals may dictate your testing process.

