
Slide 1: Testing Object-Oriented Software – Testing Models
Software Engineering of Standalone Programs
University of Colorado, ECEN 5033
January 20, 2002

Slide 2: Adapting Inspections to Models
1. Specify the scope – a body of material or a set of use cases (for a small project, the scope may be the entire model).
2. Specify the depth – the level of detail to be covered.
3. Identify the basis from which the model under test (MUT) was created – the set of models from the previous phase.
4. Develop test cases for each of the evaluation criteria to be applied, using the contents of the basis model as input – scenarios from the use-case model are a good starting point for test cases for many models.
5. Establish criteria for measuring test coverage – for example, sufficient use cases to touch every class in a class diagram.

Slide 3: Adapting Inspections to Models – continued
6. Perform the static analysis with an appropriate checklist; ensure consistency between the MUT and the basis model.
7. "Execute" the test cases.
8. Evaluate the effectiveness of the tests using the coverage measurement: calculate the coverage percentage (see the sketch below). Testing of analysis and design models is so high-level that 100% coverage is necessary to achieve good results.
9. If coverage is insufficient, expand the test suite and apply the additional test cases, or terminate the testing session if additional test cases (e.g., use cases) need to be written first.
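A minimal sketch, in Java, of the bookkeeping behind steps 5 and 8. Everything here – the ModelCoverage class, recording model elements as strings, the element names – is invented for illustration; the slides do not prescribe any particular representation.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ModelCoverage {
    private final Set<String> allElements = new HashSet<>(); // every element in the MUT
    private final Set<String> covered = new HashSet<>();     // elements touched by tests

    public void declareElement(String element) { allElements.add(element); }

    // Record the elements a single test case used while it was "executed".
    public void recordTestCase(List<String> elementsUsed) { covered.addAll(elementsUsed); }

    // Step 8: calculate the coverage percentage over the model under test.
    public double coveragePercent() {
        if (allElements.isEmpty()) return 100.0;
        long hit = allElements.stream().filter(covered::contains).count();
        return 100.0 * hit / allElements.size();
    }

    public static void main(String[] args) {
        ModelCoverage mc = new ModelCoverage();
        mc.declareElement("Account");
        mc.declareElement("Customer");
        mc.declareElement("Account-Customer association");
        mc.recordTestCase(List.of("Account", "Customer"));
        System.out.printf("Coverage: %.0f%%%n", mc.coveragePercent()); // 67% -> expand the suite
    }
}
```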

Slide 4: Coverage in Models
The model elements are classes, relationships, objects, and messages. A test case "covers" an element if it uses that element.
- A single test case that uses a particular element probably does not exhaust all possible values of that element's attributes.
- For example, using an object of a class to receive a single message does not test the other methods of the same class.
Later in the development cycle, model detail increases, and the required coverage detail increases with it.

Slide 5: Levels of Coverage
For a domain model, creating a single object from a class is enough to consider that class "covered".
- Coverage at this level is the percentage of classes and relationships covered.
At the design level, every method in a class's interface must be used before the class is considered covered.
- Coverage may then be stated by counting all of the methods in the model rather than all of the classes (see the sketch below).
The more abstract the classes, the higher the level at which coverage is measured.
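A sketch of the same coverage data scored at the two levels just described. The class names, method names, and map-of-sets representation are invented for the example.

```java
import java.util.Map;
import java.util.Set;

public class LevelledCoverage {
    // Class name -> methods declared in its interface (illustrative data).
    static final Map<String, Set<String>> MODEL = Map.of(
        "Account", Set.of("open", "deposit", "withdraw", "close"),
        "Customer", Set.of("register", "rename"));
    // Methods actually used by the guided-inspection test cases.
    static final Set<String> USED = Set.of("Account.deposit", "Customer.register");

    public static void main(String[] args) {
        // Domain level: a class is covered if any of its methods was exercised.
        long classesHit = MODEL.keySet().stream()
            .filter(c -> MODEL.get(c).stream().anyMatch(m -> USED.contains(c + "." + m)))
            .count();
        // Design level: every method is counted individually.
        int methodsTotal = MODEL.values().stream().mapToInt(Set::size).sum();
        System.out.printf("Domain level: %d/%d classes covered%n", classesHit, MODEL.size());
        System.out.printf("Design level: %d/%d methods covered%n", USED.size(), methodsTotal);
    }
}
```

The same two tests cover 100% of the classes at the domain level but only a third of the methods at the design level, which is the point of the slide.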

Slide 6: Selecting Test Cases
Usually many test cases can be developed from a single use case. Test-case selectors include:
- equivalence classes (see the sketch below)
- logical paths – the paths through a use case
- Orthogonal Defect Classification (ODC) triggers
- use profiles – see the notes on operational profiles
- risk – appropriate during development, when actively searching for defects; after development, when the goal is to demonstrate a level of reliability, the weight shifts to use profiles
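A hedged illustration of equivalence classes as a test-case selector: one representative value is drawn from each class rather than many values from the same class. The withdraw() rule and the dollar amounts are invented for the example.

```java
public class WithdrawalTest {
    // Equivalence classes for a withdrawal amount against a $500 balance:
    // negative (invalid), zero (invalid), 1..500 (valid), above 500 (overdraft).
    static final double[] REPRESENTATIVES = { -10.0, 0.0, 250.0, 600.0 };
    static final boolean[] EXPECTED_OK    = { false, false, true, false };

    static boolean withdraw(double balance, double amount) {
        return amount > 0 && amount <= balance;
    }

    public static void main(String[] args) {
        for (int i = 0; i < REPRESENTATIVES.length; i++) {
            boolean ok = withdraw(500.0, REPRESENTATIVES[i]);
            System.out.printf("amount %6.1f -> %s (expected: %s)%n",
                REPRESENTATIVES[i],
                ok ? "accepted" : "rejected",
                EXPECTED_OK[i] ? "accepted" : "rejected");
        }
    }
}
```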

Slide 7: ODC
Orthogonal Defect Classification was developed at IBM:
- "Triggers" are activities that cause a defect to be detected.
- IBM analyzed a large amount of defect data and identified the triggers.
- The triggers were grouped by when they occur, such as during reviews and inspections.
- If the guided inspection is structured to encounter as many triggers as possible, the tests are more likely to expose as many failures as possible.

Slide 8: ODC "Review & Inspection" Triggers
- Design conformance – comparing the basis model with the current model, and the current model with the requirements
- Operation semantics – tracing the logic
- Concurrency – examining synchronization between threads/processes
- Backward compatibility – comparing with previous products
- Lateral compatibility – comparing with the interfaces that use this one
- Rare situation – examining unspecified system behavior
- Side effects – behavior outside the scope of the current product
- Document consistency/completeness
- Language dependencies – examining language-specific details
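One way to make the trigger list operational is to track it as a checklist during the inspection; a minimal sketch, where the enum and the EnumSet bookkeeping are assumptions of this example rather than anything prescribed by ODC.

```java
import java.util.EnumSet;

public class TriggerChecklist {
    enum Trigger {
        DESIGN_CONFORMANCE, OPERATION_SEMANTICS, CONCURRENCY,
        BACKWARD_COMPATIBILITY, LATERAL_COMPATIBILITY, RARE_SITUATION,
        SIDE_EFFECTS, DOCUMENT_CONSISTENCY, LANGUAGE_DEPENDENCIES
    }

    public static void main(String[] args) {
        // Triggers the inspection session has exercised so far.
        EnumSet<Trigger> exercised =
            EnumSet.of(Trigger.DESIGN_CONFORMANCE, Trigger.OPERATION_SEMANTICS);
        // Structure the remaining test cases around what is still missing.
        EnumSet<Trigger> missing = EnumSet.complementOf(exercised);
        System.out.println("Triggers not yet exercised: " + missing);
    }
}
```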

Slide 9: "Executing" Test Cases
If a prototype exists, create executable test cases. If not, hold an interactive session with the testers and developers:
- Perform a symbolic execution to simulate the processing.
- Walk the testers through the scenarios provided by the test cases, using the documents available – state diagrams, sequence diagrams, class diagrams, etc.
Resist the urge to let this turn into another design session:
- As problems are uncovered, developers want to change the MUT – that stops the testing and turns it into debugging.
- It also diverts attention from finding other defects.
- Problems are recorded, not fixed on the spot (see the sketch below).
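A minimal sketch of that discipline: the session keeps an append-only log of problems and deliberately has no "fix" operation. The InspectionLog class and its field names are invented for illustration.

```java
import java.util.ArrayList;
import java.util.List;

public class InspectionLog {
    record Problem(String testCase, String modelElement, String description) {}

    private final List<Problem> problems = new ArrayList<>();

    // Record the problem and move on; changing the MUT now would turn the
    // testing session into a debugging session.
    public void record(String testCase, String element, String description) {
        problems.add(new Problem(testCase, element, description));
    }

    public List<Problem> problems() { return List.copyOf(problems); }

    public static void main(String[] args) {
        InspectionLog log = new InspectionLog();
        log.record("UC-3, scenario 2", "Order.cancel()",
                   "sequence diagram sends cancel() before the order is confirmed");
        log.problems().forEach(System.out::println);
    }
}
```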

Slide 10: Criteria for a Requirements Inspection
Completeness – in this model means:
- The use cases represent all of the functionality needed for a satisfactory product, and no use case is included that is not required functionality.
- If possible, this is judged by an independent group of domain experts and product-definition people.
Correctness – each use case accurately represents a requirement.
- The act of writing test cases for a guided inspection identifies many requirements that are not precise enough to yield a test case.
Consistency – any piece of system functionality is specified in the same manner everywhere it is described.
- End-to-end scenarios help locate inconsistencies.

Slide 11: Testing the Requirements
1. Rank the use cases.
2. Determine the total number of test cases that can be constructed with the resources available.
3. Ration the tests based on the ranking (see the sketch below).
4. Write scenarios based only on the knowledge of a domain expert (not a developer and, if possible, not those who wrote the use cases).
5. Perform the inspection – see the next slide.
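A sketch of steps 2 and 3, with the budget, the use-case names, and the rank weights all invented: the test budget is rationed across the use cases in proportion to their rank.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TestRationing {
    public static void main(String[] args) {
        int budget = 20; // total test cases the available resources allow (step 2)

        // Use case -> rank weight from the ranking (step 1).
        Map<String, Integer> rankWeight = new LinkedHashMap<>();
        rankWeight.put("UC-1 Place order", 5);    // critical
        rankWeight.put("UC-2 Cancel order", 3);   // important
        rankWeight.put("UC-3 Browse catalog", 2); // routine

        // Step 3: ration the budget in proportion to rank.
        int totalWeight = rankWeight.values().stream().mapToInt(Integer::intValue).sum();
        rankWeight.forEach((useCase, w) ->
            System.out.printf("%-22s -> %d test cases%n", useCase, budget * w / totalWeight));
    }
}
```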

Slide 12: Testing the Requirements – continued
5. The inspection:
- The writer presents a scenario.
- The requirements modelers identify the use cases that contain the test scenario as a main scenario, extension, exception, or alternate path.
- If there is no match, identify this as an incompleteness defect.
- If the scenario could be represented by two or more use cases at the same level of abstraction, identify this as an inconsistency defect (see the sketch below).
- In either case, ask whether a use case is incorrect in a limited way such that, if corrected, it would handle the scenario.
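A sketch of the defect-classification rule in those bullets, assuming the matching use cases have already been collected and are at the same level of abstraction; the class and method names are invented.

```java
import java.util.List;

public class ScenarioMatch {
    // Zero matches -> incompleteness; two or more matches at the same level
    // of abstraction -> inconsistency; exactly one match -> covered.
    static String classify(List<String> matchingUseCases) {
        if (matchingUseCases.isEmpty())  return "incompleteness defect";
        if (matchingUseCases.size() > 1) return "inconsistency defect";
        return "scenario covered by " + matchingUseCases.get(0);
    }

    public static void main(String[] args) {
        System.out.println(classify(List.of()));               // incompleteness defect
        System.out.println(classify(List.of("UC-4", "UC-7"))); // inconsistency defect
        System.out.println(classify(List.of("UC-2")));         // scenario covered by UC-2
    }
}
```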

Slide 13: Test Reuse
Reusable assets:
- the ranking of the use cases
- the constructed test cases
The requirements model will be the basis for testing several other models.

Slide 14: Domain Analysis Model
It is most helpful if the domain model can be created by one group of domain experts and tested by a different group of domain experts.

Slide 15: Criteria for a Domain Model Inspection
Completeness – the concepts are sufficient to cover the specified scope of the content, and sufficient detail is given to describe the concepts to the required depth.
Correctness – the descriptions of the domain concepts are accurate; the algorithms will produce the expected results.
Consistency – model elements should be consistent with the company's definitions and meanings.
Note: a test case at this level only states details down to the level of the domain concepts.

Slide 16: Detailed Class Design Model
Detailed class design populates the architectural model with the classes that will implement the interfaces defined in the architecture:
- a set of class diagrams
- pre- and postconditions for every method of every class (see the sketch below)
- state diagrams for each class
- activity diagrams are suggested for significant algorithms
The focus is on compliance with the architecture.
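A hedged example of the pre- and postcondition item: the contract is stated in the design model and checked here with Java assertions (run with java -ea to enable them). The Account class and its rules are invented for illustration.

```java
public class Account {
    private double balance;

    // Precondition:  amount > 0 and amount <= balance.
    // Postcondition: balance has decreased by exactly amount.
    public void withdraw(double amount) {
        assert amount > 0 && amount <= balance : "precondition violated";
        double before = balance;
        balance -= amount;
        assert balance == before - amount : "postcondition violated";
    }

    public static void main(String[] args) {
        Account a = new Account();
        a.balance = 100.0;
        a.withdraw(40.0);              // satisfies the precondition
        System.out.println(a.balance); // 60.0
        a.withdraw(500.0);             // violates the precondition: AssertionError under -ea
    }
}
```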

Slide 17: Criteria for a Class Design Model Inspection
Completeness – classes are defined for each interface in the architecture; the preconditions of each method specify sufficient information that the user can safely use the method; the postconditions of a method show error conditions as well as the normally expected result.
Correctness – each class accurately implements the semantics of an interface; for those classes that correspond to interfaces in the architecture, the class's specification must correspond to that interface.
Consistency – the behaviors in each class's interface either provide a single way to accomplish a task or, if there are multiple ways to accomplish a task, provide the same behavior with different preconditions.

Slide 18: Guided Inspection of the Design Model
Test execution in this context is an interactive session:
- Construct a message-sequence diagram that includes the preconditions for a test case.
Verification of results:
- When the output from the tests is in the form of diagrams, the resulting diagrams must be verified by domain experts.
Evaluating quality attributes:
- The test cases used for the basic inspection can also be used to analyze the expected performance – more on this later.

Slide 19: Incremental, Iterative Development
If the product is being developed in multiple iterations per increment, over multiple increments:
- Tests must be repeatable.
- Write down the test cases used in the various inspections.
- On successive iterations, reapply all tests that failed the last time and some tests that passed, and add or enhance tests to cover new features (see the sketch below).
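A sketch of keeping those tests repeatable once a prototype makes them executable, written here as a JUnit 5 class; the tool choice, the tag names, and the placeholder assertions are all assumptions of this example, not something the slides specify.

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertTrue;

class IterationRegressionTest {

    @Test
    @Tag("failed-last-iteration") // reapply: failed in iteration 3, must now pass
    void cancelOrderScenario() {
        assertTrue(true); // placeholder for the recorded inspection scenario
    }

    @Test
    @Tag("new-feature") // added in iteration 4 to cover a new feature
    void giftWrapScenario() {
        assertTrue(true); // placeholder for the new scenario
    }
}
```

With Maven Surefire, for example, `mvn test -Dgroups=failed-last-iteration` reruns just the previously failing tests.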

Slide 20: Special Inspections for Extensibility, etc.
- The "charter" may be to achieve more aggressive goals, such as developing an extensible design or a reusable framework.
- The products of the analysis and design phases are the most critical for achieving these types of objectives.
- The test scenarios are developer actions, not user actions.
- The question is, "How must the classes of the system be changed to provide the newly required behavior?"
- Maintain these as change cases – a change case is a use case that is not a requirement but an anticipated change.

Slide 21: Testing Special Objectives
- Explicitly state the objective.
- Construct a "change case" that includes a specific scenario illustrating the objective.
- Create test cases by sampling from the range permitted by the change case.
- Enumerate the work needed to achieve the objective by specifying the differences in state and behavior required for the new objective; this can be accomplished by identifying the new subclasses that must be defined (see the sketch below).
- Evaluate the current design relative to the design required to achieve the objective.
- Repeat with additional test scenarios until all proposed changes have been examined.
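A sketch of what "identifying the new subclasses" can look like for a change case such as "accept money orders as payment". The Payment hierarchy and its rules are invented; the point is that if the existing interface absorbs the change as one new subclass, the design passes this extensibility test.

```java
abstract class Payment {
    abstract boolean authorize(double amount);
}

class CreditCardPayment extends Payment { // exists in the current design
    boolean authorize(double amount) { return amount < 5_000; }
}

// The change case: if Payment's interface is sufficient, the anticipated
// behavior is one new subclass; if not, the inspection has found an
// extensibility defect in the current design.
class MoneyOrderPayment extends Payment {
    boolean authorize(double amount) { return true; } // prepaid, always authorized
}

public class ChangeCaseDemo {
    public static void main(String[] args) {
        Payment p = new MoneyOrderPayment();
        System.out.println(p.authorize(100.0)); // existing client code is unchanged
    }
}
```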

Slide 22: Test Process
In chapter 3, McGregor and Sykes deal extensively with the testing process and test planning:
- Five dimensions determine a testing process:
  - Who performs the testing?
  - Which pieces will be tested?
  - When will testing be performed?
  - How will testing be performed?
  - How much testing is adequate?
- A risk analysis is used to rank the importance of the use cases to the development effort.

Slide 23: Test Planning
Test-planning activities:
- scheduling the testing activities
- estimating based on a use-case unit
- selecting an organizational model for the testing staff
- test-plan templates for the project test plan, component test plans, integration test plans, use-case test plans, and the system test plan
- the IEEE 829 standard test plan

Slide 24: IEEE 829 Standard Test Plan Outline
1. Introduction
2. Test Items
3. Tested Features
4. Features Not Tested (per test cycle)
5. Testing Strategy and Approach
   5.1 Syntax
   5.2 Description of Functionality
   5.3 Arguments for Tests
   5.4 Expected Output
   5.5 Specific Exclusions
   5.6 Dependencies
   5.7 Test Case Success/Failure Criteria

Slide 25: IEEE 829 Standard Test Plan Outline – continued
6. Pass/Fail Criteria for the Complete Test Cycle
7. Entrance Criteria/Exit Criteria
8. Test-Suspension Criteria and Resumption Requirements
9. Test Deliverables/Status Communication Vehicles
10. Testing Tasks
11. Hardware and Software Requirements
12. Problem Determination and Correction Responsibilities
13. Staffing and Training Needs/Assignments
14. Test Schedules
15. Risks and Contingencies
16. Approvals

Slide 26: Topics in This Section
- Testing models
- Coverage
- Selecting test cases
- Execution of test cases
- Criteria for, and testing of:
  - the requirements
  - the domain model
  - the detailed class design model
- Incremental, iterative development of test cases
- Extensibility inspections
- Special-objective inspections
- Test process, test planning, and the IEEE standard test plan




