System Implementation Testing. By: Rifiana Arief, SKom, MMSI


1 System Implementation Testing
Meeting 2: Testing Strategies and Techniques. By: Rifiana Arief, SKom, MMSI

2 Outline
What Testing Is
Testing in the Development Process
Types of Testing and Definitions
Verification & Validation
Purpose and Goal of Testing
Who Tests Software
Testing Techniques
Testing Steps
Testing Strategy

3 What's Wrong?
[Flowchart: A = 0; then A = A + 0.1; test "A = 2?"; if false, loop back to the addition; if true, print A]
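Rendered as runnable code, the defect becomes visible. This is a minimal Python sketch of the flowchart (not part of the original slide): because 0.1 has no exact binary floating-point representation, the running sum never equals exactly 2.0, so the loop as drawn never terminates.

```python
import math

a = 0.0
steps = 0
while a != 2.0:          # exact float equality: this is the defect
    a += 0.1
    steps += 1
    if steps > 25:       # safety guard so this demo itself stops
        break
print(a)                 # slightly off from 2.0; the loop overshot it

# A fix: compare floating-point values with a tolerance instead.
a = 0.0
while not math.isclose(a, 2.0, abs_tol=1e-9):
    a += 0.1
print(round(a, 1))       # 2.0
```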

4 What Testing Is
1) Common definition: testing is to execute a program with the purpose of finding defects.
2) Wider definition: "Testing is a technical investigation of a product, done to expose quality-related information."

5 Testing in the Development Process
Testing activities take place in all parts of software development, from requirements elicitation to final shipment.
Testing is part of the development process and of the company's business process.

6 Testing in the Development Process
Testing during implementation: test to verify that the software behaves as the designer intended.
Testing after implementation: test for conformance with requirements, reliability, and other non-functional requirements.

7 Most Common Software Problems
Incorrect calculations
Incorrect or ineffective data edits
Incorrect matching and merging of data
Data searches that yield incorrect results
Incorrect processing of data relationships
Incorrect coding/implementation of business rules
Inadequate software performance

8 Most Common Software Problems (cont.)
Confusing or misleading data
Poor software usability for end users
Obsolete software
Inconsistent processing
Unreliable results or performance
Inadequate support of business needs
Incorrect or inadequate interfaces with other systems
Inadequate performance and security controls
Incorrect file handling

9 Types of Testing and Definitions
Validation: validate correctness or suitability; vertical (domain) experts confirm master results.
Verification: confirm the software operates as required; double-check that results match those previously validated, and if not, re-validate them.

10 Testing in the Rational Unified Process (RUP)
[Figure: RUP phases (Inception, Elaboration, Construction, Transition) against core workflows (Requirements, Analysis, Design, Development, Testing, Maintenance)]
Testing can take place as part of each phase of development.

11 [Figure: the same RUP phase/workflow matrix]
Testing can take place as part of each core workflow involved in the development organization.

12 Verification & Validation
Software V & V defined as a systems engineering methodology to ensure that quality is built into the software during development. Software V & V is complementary to and supportive of quality assurance, project management, systems engineering, and development.

13 Verification & Validation versus Debugging
a process that establish the existence of defects in a system Debugging a process that locates and corrects these defects

14 Verification versus Validation
Software verification: a process for determining whether the software products of an activity fulfill the requirements or conditions imposed on them in previous activities.
Software validation: a process for determining whether the final, as-built system or software product fulfills its specific intended use.
V & V is a whole life-cycle process: it must be applied during each phase of the software development process.
Verification and validation should establish confidence that the software is fit for purpose. This does NOT mean completely free of defects; rather, the software must be good enough for its intended use, and the type of use determines the degree of confidence needed.

15 Verification versus Validation
Verification: "Are we building the system in the right way?" The system should conform to the specification; it does what you specified it should do.
Validation: "Are we building the right system?" The system should do what the users really require.

16 Verification versus Validation
Sometimes one of these words is used to mean both verification and validation: "verification" in the sense of verification and validation, or "validation" in the sense of verification and validation.

17 The V & V Objectives There are two principal objectives:
To discover and rectify defects in a system To assess whether or not the system is usable in an operational situation. Is a whole life-cycle process - V & V must be applied during each phase of the software development process. Verification and validation should establish confidence that the software is fit for purpose This does NOT mean completely free of defects Rather, it must be good enough for its intended use and the type of use will determine the degree of confidence that is needed

18 The V & V Objectives Software V & V determines that the software performs its intended functions correctly. Ensure that the software performs no unintended functions Measure and assess the quality and reliability of software.

19 The V & V Objectives As a software engineering discipline, software V & V also assesses, analyzes, and tests the software on how it interfaces with systems elements Influences the performance, or reacts to stimuli from system elements

20 The V & V process V & V Is a whole life-cycle process
V & V should be applied at each stage in the software process.

21 Static and Dynamic V&V
[Figure: static verification spans requirements, formal specification, high-level design, detailed design, and code/program; dynamic validation applies to the prototype and the executing program]
Static verification asks "Are we building the system in the right way?" and checks the correspondence between a program and its specification.
Dynamic validation asks "Are we building the right system?" and is execution-based testing.

22 Static and Dynamic V&V
Static verification:
Concerned with analysis of the static system representation to discover problems
Analysis of all documents produced to represent the system
Can be applied during all stages of the software process

23 V & V
Static: inspect artifacts to discover problems (static verification).
Dynamic ("testing"): execute systems and observe product behaviour (dynamic validation).
[Speaker notes, translated from Swedish] Static versus dynamic testing: dynamic testing executes the software in order to find defects; static testing reviews artifacts in order to find defects. Static and dynamic testing are complementary verification techniques; neither can replace the other, and both should be used to verify conformance with requirements.

24 V & V
Static (inspect artifacts) and dynamic ("testing": execute systems) verification complement each other.

25 V & V Static Dynamic = ”Testing” Review Inspection Walkthrough
Unit test Integration test Acceptance test System test

26 Static Verification
Review (desk checking): code reading done by a single person; informal. Less effective than a walkthrough or inspection.
Walkthrough: the programmer(s) "walk through"/"execute" the code while invited participants ask questions and make comments. Relatively informal.
Inspection: usually a checklist of common errors is used to compare the code against.

27 Purpose and Goal of Testing Are Situation Dependent
Find defects
Maximize bug count
Block premature product releases
Help managers make ship/no-ship decisions
Assess quality
Minimize technical support costs
[Speaker notes, translated from Swedish] Assessing the quality of a product: the bug report need not be easy to understand; bug fixing is not what we are after [Kaner 2004]. Minimizing technical support costs: it is important to catch the faults that would generate many support calls, regardless of how serious they otherwise are [Kaner 2004]. Minimizing the risk of personal injury: everything else is out of scope under that goal. Finding scenarios that work (despite bugs), e.g.: "We are demoing the system tomorrow; what can we do?"

28 Purpose and Goal of Testing Are Situation Dependent
Conform to regulations
Minimize safety-related lawsuit risk
Assess conformance to specification
Find safe scenarios for use of the product (find ways to get it to work, in spite of the bugs)
Verify correctness of the product
Assure quality

29 Purpose and Goal of Testing Are Situation Dependent
Testing cannot show the absence of errors, only their presence.
We test a program to find errors; if we find no errors, we have been unsuccessful.
If an error is found, debugging should follow.

30 Unsuitable Objectives for Testing
Showing that a system does what it is supposed to do
Showing that a system is without errors

31 Testing Levels
Unit testing
Integration testing
System testing
Acceptance testing

32 Unit Testing
The most "micro" scale of testing.
Tests done on particular functions or code modules.
Requires knowledge of the internal program design and code.
Done by programmers (not by testers).

33 Unit Testing
Objectives:
To test the function of a program or unit of code, such as a program or module
To test internal logic
To verify internal design
To test path & condition coverage
To test exception conditions & error handling
When: after modules are coded
Input: internal application design; master test plan; unit test plan
Output: unit test report
A minimal unit-test sketch follows.
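To make the slide concrete, here is a minimal unit-test sketch using Python's built-in unittest module; the apply_discount function and its rules are hypothetical, not from the original material. It exercises normal logic, a boundary, and an error-handling path, matching the objectives listed above.

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical unit under test: apply a percentage discount."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_discount_boundary(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_raises(self):
        # exception conditions & error handling, as the slide lists
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main()
```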

34 Unit Testing
Who: developers
Methods: white-box testing techniques; test coverage techniques
Tools: debuggers; code restructuring tools; code analyzers; path/statement coverage tools
Education: testing methodology; effective use of tools

35 Incremental Integration Testing
Continuous testing of an application as new functionality is added.
Requires that aspects of the application's functionality be independent enough to work separately before development is complete.
Done by programmers or testers.

36 Integration Testing
Testing of combined parts of an application to determine their functional correctness.
"Parts" can be: code modules, individual applications, or client/server applications on a network.

37 Types of Integration Testing
Big Bang testing
Top-down integration testing
Bottom-up integration testing

38 Integration Testing
Objectives: to technically verify proper interfacing between modules and within subsystems
When: after modules are unit tested
Input: internal & external application design; master test plan; integration test plan
Output: integration test report

39 Integration Testing
Who: developers
Methods: white- and black-box techniques; problem/configuration management
Tools: debuggers; code restructuring tools; code analyzers
Education: testing methodology; effective use of tools

40 System Testing
Objectives:
To verify that the system components perform control functions
To perform inter-system tests
To demonstrate that the system performs both functionally and operationally as specified
To perform appropriate types of tests relating to transaction flow, installation, reliability, regression, etc.
When: after integration testing
Input: detailed requirements & external application design; master test plan; system test plan
Output: system test report

41 System Testing
Who: development team and users
Methods: problem/configuration management
Tools: recommended set of tools
Education: testing methodology; effective use of tools

42 Systems Integration Testing
Objectives:
To test the coexistence of products and applications that are required to perform together in a production-like operational environment (hardware, software, network)
To ensure that the system functions together with all the components of its environment as a total system
To ensure that system releases can be deployed in the current environment
When: after system testing; often performed outside the project life cycle
Input: test strategy; master test plan; systems integration test plan
Output: systems integration test report

43 Systems Integration Testing
Who: system testers
Methods: white- and black-box techniques; problem/configuration management
Tools: recommended set of tools
Education: testing methodology; effective use of tools

44 Acceptance Testing
Objectives: to verify that the system meets the user requirements
When: after system testing
Input: business needs & detailed requirements; master test plan; user acceptance test plan
Output: user acceptance test report

45 Acceptance Testing
Who: users / end users
Methods: black-box techniques; problem/configuration management
Tools: comparators; keystroke capture & playback; regression testing tools
Education: testing methodology; effective use of tools; product knowledge; business release strategy

46 Testing Techniques
Two views on software testing:
White-box testing
Black-box testing

47 Testing Techniques
White-box testing tests what the program does. Test sets are developed using knowledge of the algorithms, data structures, and control statements.

48 Testing Techniques
Black-box testing tests what the program is supposed to do. Test sets are developed and evaluated solely from the specification; there is no knowledge of the algorithms, data structures, or control statements.

49 White-Box Testing
Also known as: structure-based (structural) testing, code-based testing, glass-box testing, clear-box testing, logic-driven testing.

50 White-Box Testing
White-box (structural) testing: use knowledge of the program to derive test cases that provide more complete coverage.
Problem: what criteria to use?
[Speaker notes] There are two basic strategies for devising a test plan: black-box or white-box (with points in between); neither is better than the other. Black-box testing uses boundary analysis, input equivalence partitioning, etc. We will focus on white-box and walk through a quick example of the differences.

51 White-Box Testing
The goal is to ensure that all statements, decisions, conditions, and paths have been executed at least once.

52 White-Box Testing
The system is looked upon as an open box: test cases are based on the internal structure of the system (the code).
Exercising all paths of the code is a theoretically desirable but impossible and insufficient goal; see the sketch below for a smaller structural criterion.
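As an illustrative sketch (not from the original slides) of deriving tests from the code's structure: the hypothetical classify function below has two decisions, and the test set takes each branch of each decision at least once (branch coverage).

```python
def classify(x: int) -> str:
    """Hypothetical unit under test, with two decisions."""
    if x < 0:
        return "negative"
    if x % 2 == 0:
        return "even"
    return "odd"

# White-box test set chosen from the code's structure: every branch
# of both decisions is exercised at least once.
assert classify(-3) == "negative"   # first decision: true branch
assert classify(4) == "even"        # first false, second true
assert classify(7) == "odd"         # first false, second false
```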

53 Black-Box Testing
Also known as:
Functional testing, because it tests the program's functions
Behavioral testing, because the program is tested against its expected behavior (described by requirements and/or design)

54 Black-Box Testing
[Figure: requirements and input events flow into a black box, which produces output]
The software is viewed as a black box that transforms input into output according to the specification of what the software is supposed to do.

55 Black-Box Testing
Checks the conformity of the tested software against its established behaviour, and detects errors generated by faults.
A software fault is a part of the software that does not match its definition in the development documents.

56 Black-Box Testing
Functional tests examine the observable behavior of software as evidenced by its outputs, without reference to internal functions. If the program consistently provides the desired features with acceptable performance, then specific source-code details are irrelevant.

57 Black-Box Testing
The software should be considered only from the standpoint of its input data and output data; knowledge of its internal structure should not be used.
It is very often impossible to test all input data, so a subset of the possible inputs must be selected, as sketched below.
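A common way to select that subset is equivalence partitioning with boundary values. The sketch below is illustrative and not from the original slides; the grade function and its 60-point cut-off are hypothetical. The tests are derived from the specification alone: one representative per input class, plus the boundaries between classes.

```python
def grade(score: int) -> str:
    """Hypothetical unit under test: map a 0-100 score to a result."""
    if not 0 <= score <= 100:
        raise ValueError("score out of range")
    return "pass" if score >= 60 else "fail"

# Valid equivalence classes and their boundaries, from the spec alone.
cases = [
    (0, "fail"), (59, "fail"),      # "fail" class and its upper boundary
    (60, "pass"), (100, "pass"),    # "pass" class and both boundaries
]
for score, expected in cases:
    assert grade(score) == expected

# Invalid classes: inputs outside the specified range must be rejected.
for bad in (-1, 101):
    try:
        grade(bad)
        assert False, "expected ValueError"
    except ValueError:
        pass
```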

58 Testing Steps
[Figure: unit code passes through unit test, integration test, function test, performance test, acceptance test, and installation test; the artifacts progress from unit code to integrated modules, a functioning system, verified and validated software, an accepted system, and finally SYSTEM IN USE! The steps are driven by design specifications, system functional requirements, other customer specifications, and the user environment]

59 Testing Steps: Acceptance Test
[Figure: the customer tests the software at the developer's site]
The type of acceptance testing performed by the customer at the developer's site is usually called alpha testing.

60 Testing Steps: Acceptance Test
[Figure: the customer tests the software at the customer's own site]
Beta testing is a type of acceptance testing for a software product to be marketed to many users: selected users receive the system first and report problems back to the developer.
Users like it: they usually receive large discounts and feel important.
Developers like it: it exposes the product to real use and often reveals unanticipated errors.

61 Testing Strategy
[Diagram: Big Bang! (non-incremental); Top-down, Bottom-up, and Sandwich (incremental); Sandwich is a compromise]

62 Testing Strategy
Big bang integration: all components together.
Bottom-up integration: from the lower levels; no test stubs necessary.
Top-down integration: from the higher levels; no test drivers needed.
Sandwich testing: a combination of bottom-up and top-down; no test stubs or drivers needed.
A sketch of a stub and a driver follows.
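To make "stub" and "driver" concrete, here is an illustrative Python sketch (not from the original slides; all names are hypothetical). A stub stands in for a lower-level module that does not exist yet, so a higher-level module can be tested top-down; a driver is throwaway code that calls a module before its real callers exist, as in bottom-up testing.

```python
# Lower-level module: not built yet during top-down integration.
def fetch_exchange_rate(currency: str) -> float:
    raise NotImplementedError("lower level not implemented yet")

# Stub: replaces the missing lower-level module with canned answers.
def fetch_exchange_rate_stub(currency: str) -> float:
    return {"EUR": 1.1, "IDR": 0.000065}[currency]

# Higher-level module under test, parameterized on its dependency.
def price_in_usd(amount: float, currency: str, rate_source) -> float:
    return round(amount * rate_source(currency), 2)

# Driver: throwaway code that exercises the module under test
# before its real callers exist.
def driver() -> None:
    assert price_in_usd(100, "EUR", fetch_exchange_rate_stub) == 110.0
    assert price_in_usd(1_000_000, "IDR", fetch_exchange_rate_stub) == 65.0
    print("integration step passed using a stub")

driver()
```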

63 2. What Should You Test?
Quality and Quality Risks

64 Software Quality
As explained in Meeting 1, the goal of software testing is to obtain software whose quality conforms to the design that was made (quality of conformance). In other words, software testing is a way to determine the quality of a software product.

65 Defining Quality
"...features [that] are decisive as to product performance and as to 'product satisfaction' ... freedom from deficiencies ... [that] result in complaints, claims, returns, rework and other damage."

66 Defining Quality (cont.)
Users and customers become the arbiters of quality when they experience product dissatisfaction and then make complaints, return merchandise, or call technical support.
Testing looks for situations in which a product fails to meet customers' or users' reasonable expectations in specific areas.

67 Software Quality Standards
The international standard used to evaluate software quality is ISO 9126, which defines the characteristics of quality software.

68 Software Quality Characteristics

69 The standard is divided into four parts, which address, respectively: the quality model; external metrics; internal metrics; and quality-in-use metrics. The quality model established in the first part of the standard, ISO 9126-1, classifies software quality in a structured set of characteristics and sub-characteristics, as follows:
Functionality: a set of attributes that bear on the existence of a set of functions and their specified properties; the functions are those that satisfy stated or implied needs. Sub-characteristics: suitability, accuracy, interoperability, compliance, security.
Reliability: a set of attributes that bear on the capability of software to maintain its level of performance under stated conditions for a stated period of time. Sub-characteristics: maturity, recoverability, fault tolerance.

70 Usability: a set of attributes that bear on the effort needed for use, and on the individual assessment of such use, by a stated or implied set of users. Sub-characteristics: learnability, understandability, operability.
Efficiency: a set of attributes that bear on the relationship between the level of performance of the software and the amount of resources used, under stated conditions. Sub-characteristics: time behaviour, resource behaviour.
Maintainability: a set of attributes that bear on the effort needed to make specified modifications. Sub-characteristics: stability, analyzability, changeability, testability.
Portability: a set of attributes that bear on the ability of software to be transferred from one environment to another. Sub-characteristics: installability, replaceability, adaptability.

71 Who Tests Software?
Developers, independent testers, and users.

72 Who Tests Software? Users
Users test while using the software, although not on purpose: an indirect test.

73 Who Tests Software?
Software developer: understands the system; tests gently; driven by delivery.
Independent tester: does not understand the system; will try to break it; driven by quality.

