
1 Systems V & V, Quality and Standards
Dr Sita Ramakrishnan, School of CSSE, Monash University

2 © S Ramakrishnan
Testing Program Components

Topics:
- Introduction
- Fault Classification
- Testing Program Components
- Integration Testing

References:
- Pfleeger S. L. (2001), Software Engineering: Theory and Practice, 2nd Ed., Prentice-Hall.
- Sommerville I. (1995), Software Engineering, 5th Ed., Addison-Wesley.
- Other references (books included in the slides, conference and journal papers) are given in the handout for the unit.

3 © S Ramakrishnan
Testing Program Components: Introduction

- Testing is not the first place where faults are detected; requirement and design reviews help as well.
- In this lecture, we examine techniques to reduce the occurrence of faults in code.
- Types of faults:
  - Algorithmic faults: branching too soon or too late; testing for the wrong condition; not testing for a condition such as division by zero.
  - Computation and precision faults; stress faults; boundary faults; timing faults; performance faults; recovery faults; hardware and software faults; standards and procedure faults.
  - Documentation faults, where the documentation does not match what the program does; for example, intent defined in the design (say, with assertions) may not be implemented correctly.
- It is useful to categorise and track types of faults not just in code but anywhere in the software system; a small algorithmic example is sketched below.
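As a minimal illustration (hypothetical code, not from the text), an algorithmic fault of the "not testing for a condition" kind, and its corrected form:

```python
def average(total, count):
    # Algorithmic fault: the count == 0 condition is never tested,
    # so an empty data set raises ZeroDivisionError.
    return total / count

def average_fixed(total, count):
    # Corrected version: the boundary condition is tested explicitly.
    if count == 0:
        return 0.0
    return total / count
```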

4 © S Ramakrishnan
Testing Program Components: Fault Classification

- Historical information can help in predicting what types of faults we are likely to have in code, and certain types of faults can make us rethink the design or even the requirements.
- IBM, HP and others have published work on software fault classification. Software fault modelling and causal analysis depend on understanding the number and distribution of types of faults.
- IBM's defect prevention process (Mays et al. 1990) finds and documents the root cause of faults and identifies what types of faults testers should look for; it has reduced the number of faults injected into software.
- Chillarege et al. (1992) at IBM developed an approach to fault tracking called orthogonal defect classification, where faults are put in categories that together indicate which parts of the system need a closer look because they spawn more faults.

5 © S Ramakrishnan
Testing Program Components - Fault Classification

- Fault classifications such as IBM's and HP's (see Pfleeger's text for details) improve the development process by stating which types of faults are found in which development activities.
  - E.g., for each fault-identification or testing technique used during development, we can build a profile of the types of faults located.
  - Different methods will yield different profiles.
  - We can build a fault detection and prevention strategy based on the kinds of faults expected in a system.
  - Chillarege et al. (1992) have shown how the fault profile for design review is very different from that for code inspection.
  - Grady (1997) shows that HP developers use a fault classification model by selecting three descriptors for each fault found: the origin of the fault (where the fault was injected into the product), the type of fault, and the mode (whether information was missing, wrong, unclear, or changed). Each HP division tracks its faults separately and summary statistics are produced. Different divisions have very different fault profiles, and these profiles help developers devise SDLC activities that address the particular kinds of faults they encounter. A sketch of such a fault record appears below.
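A sketch of how a fault record with the three HP descriptors might be captured and summarised (the field names and sample values are assumptions for illustration, not HP's actual scheme):

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class FaultRecord:
    origin: str      # where the fault was injected, e.g. "requirements", "design", "code"
    fault_type: str  # kind of fault, e.g. "logic", "interface", "documentation"
    mode: str        # whether information was missing, wrong, unclear, or changed

# Hypothetical faults logged by one division
faults = [
    FaultRecord("design", "interface", "missing"),
    FaultRecord("code", "logic", "wrong"),
    FaultRecord("code", "logic", "missing"),
]

# Summary statistics: the division's fault profile by (origin, type)
profile = Counter((f.origin, f.fault_type) for f in faults)
print(profile.most_common())
```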

6 © S Ramakrishnan
Testing Program Components

- We have looked at testing vs proving, black-box and structural testing, etc. in another lecture.
- Test thoroughness: to test code thoroughly, choose test cases using one of the approaches based on the data manipulated by the code. Refer to the lecture notes on dataflow coverage criteria for statement testing, branch, path, du-path, all-uses and more.
- Fig. Relative strengths of test strategies (Beizer 1990), from strongest to weakest: all-paths; all-du-paths; all-uses; all-c-uses/some-p-uses and all-p-uses/some-c-uses; all-c-uses and all-p-uses; branch; all-defs; statement.
- E.g., testing all paths is stronger than testing all du-paths (paths from a definition to a use). The stronger the strategy, the more test cases it requires; think about the trade-off between the resources available and the thoroughness of the testing strategy we choose. A small def/use example follows.
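For intuition, a small fragment (illustrative, not from the text) marking the definitions (defs), computation uses (c-uses) and predicate uses (p-uses) that the dataflow criteria above are defined over:

```python
def scale(values, k):
    total = 0                  # def of total
    for v in values:
        total = total + v      # c-use of total in a computation, followed by a new def
    if total > k:              # p-use of total in a predicate
        return total / k       # c-uses of total and k
    return 0
# all-defs requires each def to reach at least one use;
# all-uses requires every feasible def-use pair to be exercised.
```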

7 © S Ramakrishnan
Testing Program Components

An example of how test strategy affects the number of test cases, from Pfleeger's text.

Fig. Logic flow chart with seven numbered nodes, containing: Pointer = false; the decision x > k?; Pointer = true; Call sub(x, Pointer, Result); x = x + 1; the decision Result > 0?; and Print Result. The "no" branch of Result > 0? loops back to node 1.

- Statement testing requires test cases that execute statements 1 through 7. By choosing x > k such that sub produces a positive Result, we can execute statements 1-2-3-4-5-6-7 in order, so one test case is enough.
- For branch testing, identify the decision points: two test cases exercising paths 1-2-3-4-5-6-7 and 1-2-4-5-6-1 traverse each branch at least once.
- For exercising every possible path through the program, we need more test cases: the paths 1-2-3-4-5-6-7, 1-2-3-4-5-6-1, 1-2-4-5-6-7 and 1-2-4-5-6-1 cover all combinations of the two decision points, with two choices at each branch.
- In this example, statement testing requires fewer test cases than branch testing, which requires fewer test cases than path testing. This relationship holds in general. A runnable reconstruction is sketched below.
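A Python reconstruction of the example (a sketch, not Pfleeger's code: the sub routine, the value of k, and the exact ordering of nodes 4 and 5 are assumptions inferred from the path descriptions), showing how few cases each strategy needs:

```python
def process(x, k, sub):
    """Reconstruction of the logic flow chart; node numbers appear in comments."""
    while True:
        pointer = False              # node 1
        if x > k:                    # node 2 (first decision point)
            pointer = True           # node 3 (taken only on the "yes" branch)
        result = sub(x, pointer)     # node 4: Call sub(x, Pointer, Result)
        x = x + 1                    # node 5
        if result > 0:               # node 6 (second decision point)
            print(result)            # node 7: Print Result
            return x
        # "no" branch of node 6 loops back to node 1

def sub(x, pointer):
    # Hypothetical called component, chosen so both outcomes of node 6 are reachable.
    return x if pointer else -x

# Statement coverage: one case (x > k, positive Result) drives 1-2-3-4-5-6-7.
process(5, 3, sub)
# Branch coverage: add a case that also takes the "no" branches (path 1-2-4-5-6-1-...).
process(1, 3, sub)
# Path coverage would need cases for all four combinations of the two decisions.
```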

8 © S Ramakrishnan
Testing Program Components

An example of how test strategy affects the number of test cases, from Pfleeger's text.

Fig. Logic flow graph for the flow chart on the previous slide (same nodes 1-7).

In this example, statement testing requires fewer test cases than branch testing, which requires fewer test cases than path testing. This relationship holds in general.

9 © S Ramakrishnan
Testing Program Components: Integration Testing

- When integrating individually tested components, the order in which components are tested affects our choice of test cases and tools.
- For large systems, some components may be in the coding phase, others in the unit test phase, and others in the integration testing phase. The strategy chosen affects integration timing and coding order, and also has an impact on the cost and thoroughness of testing.
- The system is viewed as a hierarchy of components, where each component belongs to a layer of the design.
- Use top-down, bottom-up, or a combination of the two for testing.

10 © S Ramakrishnan
Testing Program Components: Bottom-up Integration Testing

- A strategy for merging components to test the larger system.
- Each component at the lowest level of the hierarchy is tested individually first.
- Next, test the components that call the previously tested components, and repeat until all components are tested.
- The bottom-up approach is useful when the design is object-oriented, when low-level components are general-purpose routines called by others, or for system integration of a number of stand-alone reusable components.
- With bottom-up testing, we need to write test drivers to exercise the lower-level components (a minimal driver sketch follows).
- Disadvantage: architectural faults are unlikely to be discovered until much of the system has been tested.
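A minimal sketch of a test driver for one low-level component (the lookup routine and its expected results are hypothetical, standing in for component E of the hierarchy on the next slide):

```python
# Hypothetical low-level component E: a general-purpose lookup routine.
def component_e(table, key):
    return table.get(key)

# Test driver: no higher-level component is ready to call E yet, so the
# driver supplies the test cases and checks the results itself.
def driver_for_e():
    cases = [
        ({"a": 1}, "a", 1),     # key present
        ({"a": 1}, "b", None),  # key absent (boundary case)
        ({}, "a", None),        # empty table
    ]
    for table, key, expected in cases:
        actual = component_e(table, key)
        assert actual == expected, f"E({table}, {key!r}) -> {actual}, expected {expected}"
    print("component E passed", len(cases), "driver cases")

driver_for_e()
```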

11 © S Ramakrishnan
Testing Program Components - Bottom-up Integration Testing

Fig. Test drivers attached at level N and level N-1 of the component hierarchy (Source: Sommerville text, p. 455, adapted).

Fig. Three-level hierarchy with A at the top; B, C and D below it; and E, F (called by B) and G (called by D) at the bottom. Test sequence: Test E; Test F; Test G; Test B,E,F; Test D,G; Test C; Test A,B,C,D,E,F,G (Source: Pfleeger text, p. 357).

In Pfleeger's example, testing bottom-up, we test the lowest-level components E, F and G first. As no components are ready to call them, we write driver code to call each component and pass test cases to it. Once these low-level components are tested, we move to the next level: B is combined with the components it calls (E and F) and tested together, and so on.

12 © S Ramakrishnan
Top-down Integration Testing

- The top-level component is tested by itself. Then the components called by the tested component(s) are combined and tested as a larger unit. This is repeated until all components are integrated and tested.
- A higher-level component may call another that has not been tested yet, so we need to write a stub to simulate the action of the missing lower-level component. The stub works by returning a suitable output so that the testing process can continue (a minimal stub sketch follows).
- Testing top-down allows the test team to exercise one function at a time, following control from the higher levels down to the lowest-level component, provided functions are localised in components through top-down design. Test cases can be defined in terms of the functions being checked.
- Design faults, functional feasibility, etc. can be addressed at the beginning of testing rather than at the end. No drivers are needed, but we may need to write a large number of stubs.

Fig. Test sequence: Test A; Test A,B,C,D; Test A,B,C,D,E,F,G (Source: Pfleeger, p. 358).
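A minimal sketch of a stub (names and values are hypothetical): the higher-level component under test calls the stub in place of an untested lower-level component, and the stub simply returns a suitable canned output so testing can continue:

```python
# Stub simulating a lower-level component that is not yet tested (or not yet written).
def fetch_discount_stub(customer_id):
    return 0.10  # canned answer; the real component would look this up

# Higher-level component under test; in top-down testing it is exercised
# with the stub standing in for the real lower-level routine.
def invoice_total(amount, customer_id, fetch_discount=fetch_discount_stub):
    discount = fetch_discount(customer_id)
    return round(amount * (1 - discount), 2)

assert invoice_total(100.0, "c42") == 90.0
print("invoice_total exercised against the stubbed lower-level component")
```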

13 © S Ramakrishnan
Modified Top-down Integration Testing

- We may need to write a large number of stubs with top-down testing.
- Instead of testing an entire level at a time, a modified top-down approach tests each level's components individually before merging them.
- E.g., test A, then test B, C and D individually, then merge the four to test levels 1 and 2; then E, F and G are tested separately, then the entire system is tested. But this leads to more coding, as both stubs and drivers are needed for each component.

Fig. Test sequence with individual tests Test A, Test B, Test C, Test D, Test E, Test F, Test G and merged tests Test A,B,C,D and Test A,B,C,D,E,F,G (Source: Pfleeger, p. 359).

14 © S Ramakrishnan
Big-bang Testing and Sandwich Testing

- Myers (1979), on big-bang integration: when all components have been tested in isolation, it is tempting to put them together as the final system and test whether it works.
- Big-bang integration is not recommended for any system:
  1. it requires stubs and drivers to test the individual components;
  2. it is difficult to find the cause of any failure when all components are integrated at once;
  3. interface faults cannot be distinguished from other types of faults.
- Myers (1979) combines the top-down strategy with bottom-up to form the sandwich testing approach.
- The system is viewed as three layers, like a sandwich: a target layer in the middle, with levels above and below it. The top-down approach is used in the top layer and bottom-up in the lower layer, and testing converges on the target layer. Choose the target layer based on the structure of the component hierarchy and the characteristics of the system.

15 © S Ramakrishnan
Sandwich Testing

- Testing converges on the target layer in the sandwich testing approach.
- If the bottom layer contains general utility functions, the target layer may be the one above it, containing the components that use the utilities. The sandwich approach allows bottom-up testing to check the correctness of the utilities at the beginning of testing; no stubs are needed for them because the utilities are there to use.
- Sandwich testing allows integration testing to begin early in the testing process.
- It combines the advantages of top-down and bottom-up by testing control and utilities from the beginning.
- But it does not test individual components thoroughly before integration.
- A modified sandwich testing approach allows upper-level components to be tested before merging them with others.
- Homework: see p. 362 of Pfleeger's text for a comparison of integration strategies, and check whether Microsoft's synch-and-stabilize approach would work in the CSE4431 or CSE4002 project.

