
1 SOFTWARE TESTING Presented By: C.Jackulin Sugirtha (10mx15), R.Jeyaramar (10mx17), K.Kanagalakshmi (10mx20), J.A.Linda (10mx25), P.B.Vahedha (10mx53)

2  Linda

3 What is Testing?  “Testing is the process of executing a program with the intent of finding errors”  “Testing is the process of evaluating a system by manual or automatic means and verifying that it satisfies specified requirements”

4 What is Software Testing? Executing software in a simulated or real environment, using inputs selected somehow.

5 Most Common Software Problems  Incorrect calculations  Incorrect or ineffective data edits  Incorrect matching and merging of data  Data searches that yield incorrect results  Incorrect processing of data relationships  Incorrect coding/implementation of business rules  Inadequate software performance

6 Cont….  Confusing or misleading data  Poor software usability for end users  Obsolete software  Inconsistent processing  Unreliable results or performance  Inadequate support of business needs  Incorrect or inadequate interfaces with other systems  Inadequate performance and security controls  Incorrect file handling

7 Ultimate goal of software testing: Quality Assurance

8 Goals of Testing  Detect faults  Establish confidence in software  Evaluate properties of software  Reliability  Performance  Memory Usage  Security  Usability

9 Objectives of testing  Executing a program with the intent of finding an error  To check if the system meets its requirements and can be executed successfully in the intended environment  To check if the system is “fit for purpose”  To check if the system does what it is expected to do

10 Objectives of testing  A good test case is one that has a high probability of finding an as-yet-undiscovered error  A successful test is one that uncovers an as-yet-undiscovered error  A good test is not redundant  A good test should be “best of breed”  A good test should be neither too simple nor too complex

11 Objective of a Software Tester  Find bugs as early as possible and make sure they get fixed.  To understand the application well.  Study the functionality in detail to find where the bugs are likely to occur.  Study the code to ensure that each and every line of code is tested.  Create test cases in such a way that testing is done to uncover the hidden bugs and also ensure that the software is usable and reliable

12  Jack

13 Why Software Testing?  When defects are found only during operation, the result is high maintenance cost and user dissatisfaction  Undetected defects may cause mission failure  They also degrade operational performance and reliability

14 What Testing Shows?  Errors  Requirements conformance  Performance  An indication of quality

15 What exactly does a Software Tester do?  “The goal of a software tester is to find bugs”  “The goal of a software tester is to find bugs, and find them as early as possible”  “The goal of a software tester is to find bugs, find them as early as possible, and make sure they get fixed”

16 Testing Principles  All tests should be traceable to customer requirements. The most severe defects from the customer’s point of view are those that cause the program to fail to meet its requirements.  Tests should be planned long before testing begins; never plan testing under the assumption that no errors will be found.

17 Cont..  Testing should begin by focusing on individual components; as testing progresses, the focus shifts to finding errors in integrated clusters of components.  Exhaustive testing is not practical, so testing focuses on choosing a subset of test cases that maximizes the chance of revealing errors.

18 Cont..  The most effective testing is conducted by an independent third party. A programmer should avoid testing his or her own program (programmers may misunderstand the specification or be unable to critique their own work).  The Pareto principle applies to software testing: 80% of all errors uncovered during testing will likely be traceable to 20% of all program components, so the problem is to isolate these suspect components and test them thoroughly.

19 Cont..  A test case consists of two components: a description of the input data and the correct output for that input. Both components must be prepared in advance, as the eye sees what it wants to see.  Test cases must be written for invalid and unexpected input conditions as well as valid and expected ones.
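As an illustrative sketch (not part of the original slides), a test case for Java’s Integer.parseInt pairs input data with its expected output decided in advance, and an invalid input is tested alongside the valid one. JUnit 5 on the classpath is assumed.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.junit.jupiter.api.Assertions.assertThrows;

    import org.junit.jupiter.api.Test;

    class ParseIntTest {

        // Valid, expected input: the expected output is fixed before the test runs.
        @Test
        void parsesValidDecimalString() {
            assertEquals(42, Integer.parseInt("42"));
        }

        // Invalid, unexpected input: the expected behaviour (an exception) is
        // also decided in advance, not inferred from the observed result.
        @Test
        void rejectsNonNumericString() {
            assertThrows(NumberFormatException.class, () -> Integer.parseInt("4x2"));
        }
    }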

20 Test Planning and Documentation  Testers should specify the expected result of every test, in advance.  There should be at least one thoroughly documented test for every requirement item or specification item.  Testers should design most or all tests early in development.  Testers should design all tests for reuse as regression tests.

21 The Test Life Cycle  Establish the test objectives  Design the test cases  Write the test cases  Test the test cases  Execute the tests  Evaluate the test results

22 Test Types  Functional tests  Algorithmic tests  Positive tests  Negative tests  Usability tests  Boundary tests  Startup/shutdown tests  Platform tests  Load/stress tests

23 Vahedha

24 Software Testing Difficulties  Most of the software testing literature equates test-case selection to software testing, but that is just one difficult part. Other difficult issues include:  Determining whether or not outputs are correct

25 Cont..  Comparing resulting internal states to expected states.  Determining whether adequate testing has been done.  Determining what you can say about the software when testing is completed.  Measuring performance characteristics.  Comparing testing strategies.

26 Stages of Testing  System Testing  End-to-End Testing  Operations Readiness Testing  Beta Testing  Load Testing  Stress Testing  Performance Testing  Reliability Testing  Regression Testing

27 Verification & Validation goals  Verification and validation should establish confidence that the software is fit for purpose  This does NOT mean completely free of defects  Rather, it must be good enough for its intended use and the type of use will determine the degree of confidence that is needed

28 Static and dynamic verification  Software inspections and walkthroughs - concerned with analysis of the static system representation to discover problems (static verification)  Software testing - concerned with exercising and observing product behaviour (dynamic verification)

29 Static and dynamic V&V

30  Jeyaramar

31 Methods of testing  Test to specification:  Black box,  Data driven  Functional testing  Code is ignored: only use specification document to develop test cases  Test to code:  Glass box/White box  Logic driven testing  Ignore specification and only examine the code.

32 Black-box testing  An approach to testing where the program is considered as a ‘black-box’  The program test cases are based on the system specification  Test planning can begin early in the software process
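As a hedged illustration (not from the original deck), a black-box test derives its checks only from the specification — here, that java.util.Arrays.sort leaves the array in ascending order — with no knowledge of the sorting algorithm used internally. JUnit 5 is assumed.

    import static org.junit.jupiter.api.Assertions.assertArrayEquals;

    import java.util.Arrays;
    import org.junit.jupiter.api.Test;

    class SortBlackBoxTest {

        // The test case comes from the specification ("sorts the array into
        // ascending order"), not from the implementation, which is treated
        // as a black box.
        @Test
        void sortsIntoAscendingOrder() {
            int[] input = {17, 3, 25, 3, -8};
            Arrays.sort(input);
            assertArrayEquals(new int[] {-8, 3, 3, 17, 25}, input);
        }
    }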

33 Black-box testing

34 White-box testing  Sometimes called structural testing or glass-box testing  Derivation of test cases according to program structure  Knowledge of the program is used to identify additional test cases  Objective is to exercise all program statements (not all path combinations)

35

36 White box testing - binary search example

    int search(int key, int[] elemArray) {
        int bottom = 0;
        int top = elemArray.length - 1;
        int mid;
        int result = -1;
        while (bottom <= top) {
            mid = (top + bottom) / 2;
            if (elemArray[mid] == key) {
                result = mid;
                return result;
            } // if part

37 Cont..

            else {
                if (elemArray[mid] < key)
                    bottom = mid + 1;
                else
                    top = mid - 1;
            }
        } // while loop
        return result;
    } // search

38 Binary search equivalence partitions  Pre-conditions satisfied, key element in array  Pre-conditions satisfied, key element not in array  Pre-conditions unsatisfied, key element in array  Pre-conditions unsatisfied, key element not in array  Input array has a single value  Input array has an even number of values  Input array has an odd number of values

39

40 Binary search - test cases
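The test-case table for this slide does not survive in the transcript. As an illustrative reconstruction (the input values are mine, not the original authors’), a JUnit 5 sketch covering several of the equivalence partitions from slide 38 might look like this, assuming the search method from slides 36-37 sits in a class named BinarySearch (a name chosen here for illustration):

    import static org.junit.jupiter.api.Assertions.assertEquals;

    import org.junit.jupiter.api.Test;

    class BinarySearchTest {

        private final BinarySearch bs = new BinarySearch();

        // Key present, odd number of elements: exercises the "== key" branch.
        @Test
        void findsKeyInOddLengthArray() {
            assertEquals(3, bs.search(29, new int[] {17, 21, 23, 29, 31}));
        }

        // Key absent, even number of elements: the loop terminates and -1 is returned.
        @Test
        void returnsMinusOneWhenKeyAbsent() {
            assertEquals(-1, bs.search(25, new int[] {17, 21, 23, 29}));
        }

        // Single-element array, key present: boundary partition.
        @Test
        void findsKeyInSingleElementArray() {
            assertEquals(0, bs.search(17, new int[] {17}));
        }

        // Key larger than every element: exercises the "bottom = mid + 1" branch.
        @Test
        void returnsMinusOneWhenKeyAboveRange() {
            assertEquals(-1, bs.search(99, new int[] {17, 21, 23}));
        }
    }

Together these cases also execute every statement of the search method, which is the stated objective of white-box testing.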

41  Kanagalakshmi

42 Software testing metrics  Defect rates  Error rates  Number of errors  Number of errors found per person-hour expended  Measured per individual, per module, and during development  Errors should be categorized by origin, type and cost

43 More metrics  Direct measures - cost, effort, LOC, etc.  Indirect Measures - functionality, quality, complexity, reliability, maintainability  Size Oriented:  Lines of code - LOC  Effort - person months  errors/KLOC  defects/KLOC  cost/KLOC
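For example (illustrative numbers only, not from the slides): if a 12,000-line (12 KLOC) module takes 24 person-months of effort, costs $168,000 and records 30 defects, the size-oriented metrics are 30 / 12 = 2.5 defects/KLOC, $168,000 / 12 = $14,000 per KLOC, and 12,000 / 24 = 500 LOC per person-month.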

44 Case study-I

45 Case study-II

46

