
CS 360 Lecture 16

For a software system to be reliable:
- Each stage of development must be done well, with incremental verification and testing.
- Testing and correction do not ensure quality, but reliable systems are not possible without testing.

Static verification:
- Verification techniques that do not involve executing the software.
- May be manual or may use computer tools.

Dynamic verification (testing):
- Testing the software with trial data.
- Debugging to remove errors.

Reviews are a form of static verification that is carried out throughout the software development process.

Reviews are a fundamental part of good software development. Team members review each other's work:
- Can be applied to any stage of software development, but particularly valuable for reviewing program designs or code.
- Can be formal or informal.

Preparation:
- The developers provide information on the components they are building: models, specifications and designs, code.
- Participants should study the materials in advance.

Meeting:
- The developer leads the reviewers through the materials, describing what each section does and encouraging questions.

A review is a structured meeting. Participants and their roles:
- Developer(s): person(s) whose work is being reviewed.
- Moderator: ensures that the meeting moves ahead steadily.
- Scribe: records the discussion in a constructive manner.
- Interested parties: other developers on the same project.
- Client: representatives of the client who are knowledgeable about this part of the process.

Benefits:
- Multiple people look at the design and code, uncovering mistakes and suggesting improvements.
- Developers share expertise, which helps with training.
- Incompatibilities between components can be identified.
- Gives developers an incentive to tidy up loose ends.
- Helps with scheduling and management control.

To make a review a success:
- Senior team members must show leadership.
- Good reviews require good preparation by everybody.
- Everybody must be helpful, not threatening.
- Allow plenty of time, and be prepared to continue on another day.

Pair programming:
- Concept: achieve the benefits of review through shared development.
- Two people work together as a team on:
  - design and/or coding
  - testing and system integration
  - documentation and hand-over
- Benefits include:
  - two people create better software with fewer mistakes
  - cross-training
- Many software houses report excellent productivity.

Code inspections:
- Formal program reviews whose objective is to detect faults.
- Code is read or reviewed line by line.
- 150 to 250 lines of code are covered in a two-hour meeting.
- A checklist of common errors is used.
- Requires team commitment and trained leaders.
- So effective that it is claimed inspections can replace unit testing.

Inspection checklist of common faults:
- Data faults: initialization, constants, array bounds.
- Control faults: conditions, loop termination, compound statements, case statements.
- Input/output faults: all inputs used, all outputs assigned a value.
- Interface faults: parameter numbers, types, and order; structures and shared memory.
- Storage management faults: modification of links, allocation and deallocation of memory.
- Exceptions: possible errors, error handlers.
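As a sketch of what the checklist catches, consider this hypothetical fragment (not code from the lecture); it contains three deliberate faults, labeled in the comments:

```python
def average_of_positives(values):
    """Deliberately faulty example for inspection practice."""
    total = 0
    count = 0
    i = 0
    while i <= len(values):      # control fault: loop termination; should be i < len(values)
        if values[i] > 0:        # data fault: array bounds; reads past the end on the last pass
            total += values[i]
            count += 1
        i += 1
    return total / count         # data fault: division by zero when no element is positive
```

A line-by-line reading against the checklist items (array bounds, loop termination, all outputs assigned a value) finds all three faults without ever running the code.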

Program analyzers scan the code for possible errors and anomalies:
- Control flow: loops with multiple exit or entry points.
- Data use: undeclared or uninitialized variables, unused variables, multiple assignments, array bounds.
- Interface faults: parameter mismatches, non-use of function return values, uncalled procedures.
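A small illustration of the data-use category; the fragment is hypothetical, and a static analyzer for Python such as pylint would flag the anomalies noted in the comments without executing the code:

```python
def report(items):
    header = "Report"    # data-use anomaly: variable assigned but never used
    for item in items:
        print(item)
    return totl          # data-use anomaly: 'totl' is never assigned (typo for a missing total)
```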

Static analysis tools:
- Cross-reference table: shows every use of a variable, procedure, object, etc.
- Information flow analysis: identifies the input variables on which an output depends.
- Path analysis: identifies all possible paths through the program.

Murphy's Law: if anything can go wrong, it will.

Defensive programming:
- Write simple code.
- Avoid risky programming constructs.
- If code is difficult to read, rewrite it.
- Incorporate redundant code to check the system state after modifications.
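A minimal sketch of the last point, a redundant state check after a modification; the account example and its invariant are assumptions for illustration:

```python
class Account:
    def __init__(self, balance):
        assert balance >= 0, "invariant: balance is never negative"
        self.balance = balance

    def withdraw(self, amount):
        if amount <= 0:
            raise ValueError("withdrawal amount must be positive")
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        # Redundant check: re-verify the invariant after modifying state.
        assert self.balance >= 0, "invariant violated after withdrawal"
```

The assertion is redundant today (the guards above already ensure it), but it catches future edits that break the invariant.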

Testing is most effective if divided into stages:
- User interface testing (carried out separately)
- Unit testing
  - unit test
- System testing
  - integration test
  - function test
  - performance test
  - installation test
- Acceptance testing (carried out separately)

Unit testing:
- Tests small sections of a system: a single class or function.
- The emphasis is on the accuracy of the code against its specification.
- If unit testing is not thorough, system testing becomes almost impossible.
- If you are working on a project that is behind schedule, do not rush the unit testing.
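A minimal unit test sketch using Python's standard unittest module; the function under test and its specification are assumptions for illustration:

```python
import unittest

def clamp(x, low, high):
    """Return x limited to the range [low, high]."""
    return max(low, min(x, high))

class TestClamp(unittest.TestCase):
    def test_value_in_range_is_unchanged(self):
        self.assertEqual(clamp(5, 0, 10), 5)

    def test_value_below_range_is_raised_to_low(self):
        self.assertEqual(clamp(-3, 0, 10), 0)

    def test_value_above_range_is_lowered_to_high(self):
        self.assertEqual(clamp(42, 0, 10), 10)

if __name__ == "__main__":
    unittest.main()
```

Each test checks one clause of the specification, so a failure points directly at the code that violates it.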

A subset of user interface testing guidelines:

General:
- Every action that alters user data can be undone.
- All application settings can be restored to their defaults.
- The most frequently used functions are found at the top level of the menu structure.

Keyboard:
- Efficient keyboard access is provided to all application features.
- No awkward reaches are required for frequently performed keyboard operations.
- Keyboard operations are provided for all mouse operations.

Mouse:
- No operations depend on input from the middle or right mouse buttons.
- The mouse pointer is never restricted to part of the screen by the application.

Integration testing:
- Takes place when modules or sub-systems are integrated to create larger systems.
- The objective is to detect faults due to interface errors or invalid assumptions about interfaces.

Interface types:
- Parameter interfaces: data passed from one procedure to another.
- Shared memory interfaces: a block of memory is shared between procedures.
- Procedural interfaces: a sub-system encapsulates a set of procedures to be called by other sub-systems.
- Message passing interfaces: sub-systems request services from other sub-systems.

Interface errors:
- Interface misuse: a calling component calls another component and makes an error in its use of the interface, e.g. passing parameters in the wrong order.
- Interface misunderstanding: a calling component embeds incorrect assumptions about the behaviour of the called component.
- Timing errors: the called and the calling component operate at different speeds, and out-of-date information is accessed.
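A sketch of interface misuse with parameters in the wrong order; the transfer function is a hypothetical example, not from the lecture:

```python
from types import SimpleNamespace

def transfer(source, destination, amount):
    """Move 'amount' from the source account to the destination account."""
    source.balance -= amount
    destination.balance += amount

savings = SimpleNamespace(balance=500)
checking = SimpleNamespace(balance=100)

# Interface misuse: the caller swaps source and destination.
# The call runs without error but moves money the wrong way.
transfer(checking, savings, 100)     # misuse if the intent was savings -> checking

# Keyword arguments make the caller's intent explicit and reviewable:
transfer(source=savings, destination=checking, amount=100)
```

This is exactly the class of fault integration testing targets: each component is correct in isolation, and the defect only appears at the interface.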

Path testing:
- The objective is to build a set of test cases so that each path through the program is executed at least once; this ensures statement and branch coverage.
- If every condition in a compound condition is considered, condition coverage can be achieved as well.

Steps for basis path testing:
1. Draw a control flow graph from the source code; line numbers can label the nodes in the graph.
2. Calculate the cyclomatic complexity from the flow graph.
3. Determine the basis set of linearly independent paths.
4. Design test cases to exercise each path in the basis set.

Flow graphs:
- Used to depict program control structure.
- Can be drawn from a piece of source code.
- Notation: a flow graph is composed of edges and nodes; an edge starts at one node and ends at another.

(Figure: binary search flow graph.)

Calculating cyclomatic complexity:
- E = number of edges
- N = number of nodes
- P = number of connected components
- CC = E − N + 2P
- For the flow graph above: CC = 11 − 10 + 2(1) = 3

Independent paths through the program:
- 12, 1, 2, 3, 5, 6, 7, 10
- 12, 1, 2, 3, 5, 6, 7, 8, 7, 10
- 12, 1, 2, 3, 5, 6, 7, 8, 9, 7, 10

Test cases should be derived so that all of these paths are executed.
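A quick check of the arithmetic, with the edge set reconstructed from the three independent paths above (an assumption, since the slide's figure is not reproduced here):

```python
# Edges read off the paths: 12->1->2->3->5->6->7->10,
# plus the loop edges 7->8, 8->7, 8->9, 9->7.
edges = {(12, 1), (1, 2), (2, 3), (3, 5), (5, 6), (6, 7),
         (7, 8), (8, 7), (8, 9), (9, 7), (7, 10)}
nodes = {n for edge in edges for n in edge}

E, N, P = len(edges), len(nodes), 1   # one connected component
cc = E - N + 2 * P
print(E, N, cc)                        # 11 10 3
```

The result, CC = 3, matches the number of linearly independent paths in the basis set.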

Designing the test cases:
- Path 1 (12, 1, 2, 3, 5, 6, 7, 10): input data [4]; expected output 4.
- Path 2 (12, 1, 2, 3, 5, 6, 7, 8, 7, 10): input data [6, 2, 5, 1, 3]; expected output 6.
- Path 3 (12, 1, 2, 3, 5, 6, 7, 8, 9, 7, 10): input data [5, 2, 1, 1, 8, 3, 4]; expected output 8.
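The expected outputs are consistent with a program that returns the largest element of its input; assuming that is the program under test (the slide's source code is not reproduced here), the three path test cases can be written as a minimal sketch:

```python
import unittest

def find_max(values):
    """Assumed stand-in for the program behind the flow graph."""
    largest = values[0]
    i = 1
    while i < len(values):           # loop test (node 7, under our assumption)
        if values[i] > largest:      # comparison (node 8)
            largest = values[i]      # update (node 9)
        i += 1
    return largest                   # exit (node 10)

class TestBasisPaths(unittest.TestCase):
    def test_path_1_single_element(self):
        self.assertEqual(find_max([4]), 4)

    def test_path_2_loop_without_update(self):
        self.assertEqual(find_max([6, 2, 5, 1, 3]), 6)

    def test_path_3_loop_with_update(self):
        self.assertEqual(find_max([5, 2, 1, 1, 8, 3, 4]), 8)

if __name__ == "__main__":
    unittest.main()
```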

System testing:
- Tests complete systems or sub-systems composed of integrated components.
- The main difficulty is localising errors: some errors do not surface until components rely on each other to function properly.
- Incremental integration testing reduces this problem.

Stress testing:
- Exercises the system beyond its maximum design load; stressing the system often brings defects to light.
- Stress testing also checks the system's failure behaviour: systems should not fail catastrophically, and there should be no unacceptable loss of service or data.
- Particularly relevant to distributed systems, which can exhibit severe degradation as a network becomes overloaded.
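A minimal stress-test sketch using only the standard library; handle_request and the load parameters are assumptions standing in for the real system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(n):
    """Hypothetical stand-in for the operation under test."""
    time.sleep(0.001)
    return n * 2

def stress(workers=100, requests=10_000):
    """Drive the system well beyond its expected concurrency and observe failure behaviour."""
    failures = 0
    start = time.time()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(handle_request, i) for i in range(requests)]
        for future in futures:
            try:
                future.result(timeout=5)   # a slow or lost response counts as a failure
            except Exception:
                failures += 1
    print(f"{requests} requests, {failures} failures, {time.time() - start:.1f}s")

if __name__ == "__main__":
    stress()
```

The point is not the throughput numbers but what happens past the design load: service should degrade gracefully rather than lose data or crash.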

Acceptance testing:
- Used to determine whether the requirements of a software product are met.
- Gather the key acceptance criteria: the list of features and functions that will be evaluated.
- Determine the testing approach:
  - types of acceptance tests: stress, timing, compliance, capacity
  - testing levels: system level, component level, integration level
  - test methods and tools
- Test data recording: a description of how the acceptance tests will be recorded.

(Figure: the testing process. Developer tests take the requirements through a performance test to a validated system; an acceptance test by the client, against the client's understanding of the requirements, yields an accepted system; an installation test in the user environment yields a usable system.)

Error prevention (before the system is released):
- Use good programming methodology to reduce complexity.
- Use version control to prevent inconsistent versions of the system.
- Apply verification to prevent algorithmic bugs.

Error detection (while the system is running):
- Testing: create failures in a planned way.
- Debugging: start from an unplanned failure.
- Monitoring: deliver information about system state; find performance bugs.

Error recovery (recovering from failure once the system is released):
- Database systems (atomic transactions).
- Modular redundancy.
- Recovery blocks.
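A minimal sketch of error recovery through atomic transactions, using Python's built-in sqlite3 module; the accounts schema and the simulated crash are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INTEGER)")
conn.execute("INSERT INTO accounts VALUES ('alice', 500), ('bob', 100)")
conn.commit()

def transfer(conn, amount, fail_midway=False):
    with conn:  # transaction: commits on success, rolls back on exception
        conn.execute("UPDATE accounts SET balance = balance - ? WHERE name = 'alice'",
                     (amount,))
        if fail_midway:
            raise RuntimeError("simulated crash mid-transfer")
        conn.execute("UPDATE accounts SET balance = balance + ? WHERE name = 'bob'",
                     (amount,))

try:
    transfer(conn, 200, fail_midway=True)
except RuntimeError:
    pass

# The rollback restores the consistent state: no money was lost or created.
print(dict(conn.execute("SELECT name, balance FROM accounts")))  # {'alice': 500, 'bob': 100}
```

Either both updates take effect or neither does, so a failure mid-operation leaves the data in a recoverable, consistent state.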

Every project needs a test plan that documents the testing procedures, for thoroughness, visibility, and future maintenance. It should include:
- A description of the testing approach.
- A list of test cases and related bugs.
- Procedures for running the tests.
- A test analysis report.

User interfaces need several categories of testing:
- During the design phase, user interface testing is carried out with trial users. Design testing is also used to develop graphical elements and to validate the requirements.
- During the implementation phase, the user interface goes through the standard steps of unit and system testing to check the reliability of the implementation.
- Finally, acceptance testing is carried out with users on the complete system.

Summary:
- Test the parts of a system that are commonly used rather than those that are rarely executed.
- Acceptance testing is based on the system specifications and requirements.
- Flow graphs identify test cases that cause all paths through the program to be executed.
- Interface defects arise from specification misreading, misunderstandings, errors, or invalid timing assumptions.