Static Techniques (on code) (Static Analysis of code) performed on the code without executing the code.


1 Static Techniques (on code) (Static Analysis of code) performed on the code without executing the code

2 Categories of Static Techniques: Manual; (Semi-)Mechanised

3 Static techniques vs. code testing
Code testing characterizes one set of executions per test case; (minimal) coverage (of similar executions via classes of input data, of paths, and so on) is therefore the most important issue.
Static techniques characterize a whole set of executions at once; that is the reason they are qualified as static.
Static techniques are usually used within a verification activity (i.e., they may come before testing).
Static techniques and testing have complementary advantages and disadvantages; additionally, some static techniques are used during testing to support test case design.

4 Informal analysis techniques: Code walkthroughs
Recommended prescriptions:
–Small number of people (three to five)
–Participants receive written documentation from the designer a few days before the meeting
–Predefined duration of the meeting (a few hours)
–Focus on the discovery of defects, not on fixing them
–Participants: designer, moderator, and a secretary
–Foster cooperation; no evaluation of people
Experience shows that most defects are discovered by the designer during the presentation, while trying to explain the design to other people.

5 Informal analysis techniques: Code inspection
A code reading technique aiming at defect discovery
Based on a checklist (also called defect guessing), e.g.:
–use of uninitialized variables;
–jumps into loops;
–nonterminating loops;
–array indexes out of bounds;
–…
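As an illustrative sketch (not taken from the slides), here is a small Python function exhibiting one classic checklist defect, an array index out of bounds, which a runtime check exposes during an inspection-style dry run:

```python
# Hypothetical example of a checklist defect: an off-by-one loop bound
# produces an array index out of bounds.

def sum_items(items):
    total = 0
    # Defect: range should stop at len(items); the +1 walks past the end.
    for i in range(len(items) + 1):
        total += items[i]
    return total

def inspect_run():
    # A reviewer (or a runtime check) catches the out-of-bounds access.
    try:
        sum_items([1, 2, 3])
    except IndexError:
        return "index out of bounds"
    return "no defect observed"
```

An inspection checklist would flag the `+ 1` bound before the code is ever executed.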

6 Defect Guessing
From intuition and experience, enumerate a list of possible defects or defect-prone situations
Defect guessing can also be used to write test cases that expose those defects

7 Defect Guessing: Example
For arrays or strings, is every index within the bounds of the corresponding dimension?
Is an uninitialized variable referenced?
For references through a pointer/reference, has the corresponding memory area been allocated (dangling reference problem)?
Does a variable (possibly referenced through a pointer) have a type different from the one the program uses?
Are there variables with similar names (a dangerous practice)?

8 Defect Guessing: Example
Do computations involve different, inconsistent types (e.g., strings and integers)?
Are there inconsistencies in mixed computations (e.g., integers and reals)?
Are there computations involving compatible types of different precision?
In an assignment (x := exp, x = exp), does the left-hand value have a less precise representation than the right-hand value?
Is an overflow or underflow condition possible (e.g., in intermediate computations)?
Division by zero?

9 Defect Guessing: Example
In expressions containing several arithmetic operators, are the assumptions about evaluation order and operator precedence correct?
Does the value of a variable fall outside its reasonable range (e.g., a weight should be positive, …)?
Are there wrong uses of integer arithmetic, in particular of division?
Are we adequately taking into account the finite precision of reals?

10 Defect Guessing: Example
Are comparison operators used correctly?
Are boolean expressions correct (appropriate use of and, or, not)?
In expressions containing several boolean operators, are the assumptions about evaluation order and operator precedence correct?
Are the operands of a boolean expression boolean?
Does the way the boolean expression is evaluated affect the meaning of the expression itself (a dangerous practice)?

11 Correctness proofs

12 A program and its specification (Hoare notation)
{true}
begin
  read (a); read (b);
  x := a + b;
  write (x);
end
{output = input1 + input2}
Proof by backwards substitution
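A sketch of the backwards-substitution argument (my reconstruction; the slide only names the technique). Reading read(a) and read(b) as binding a = input1 and b = input2:

```latex
\begin{aligned}
wp(\mathtt{write(x)},\; \mathit{output} = \mathit{input1} + \mathit{input2}) &\equiv (x = \mathit{input1} + \mathit{input2})\\
wp(\mathtt{x := a + b},\; x = \mathit{input1} + \mathit{input2}) &\equiv (a + b = \mathit{input1} + \mathit{input2})\\
wp(\mathtt{read(b)},\; a + b = \mathit{input1} + \mathit{input2}) &\equiv (a = \mathit{input1})\\
wp(\mathtt{read(a)},\; a = \mathit{input1}) &\equiv \mathit{true}
\end{aligned}
```

Since the weakest precondition of the whole program is true, the triple {true} program {output = input1 + input2} holds.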

13 Proof rules
  Claim1, Claim2
  --------------
      Claim3
Notation: if Claim1 and Claim2 have been proven, one can deduce Claim3

14 Proof rules for a language
sequence:
  {F1} S1 {F2}, {F2} S2 {F3}
  --------------------------
  {F1} S1; S2 {F3}
if-then-else:
  {Pre and cond} S1 {Post}, {Pre and not cond} S2 {Post}
  ------------------------------------------------------
  {Pre} if cond then S1; else S2; end if; {Post}
while-do:
  {I and cond} S {I}
  -------------------------------------------------
  {I} while cond loop S; end loop; {I and not cond}
I is called the loop invariant

15 Correctness proof
Partial correctness proof
–validity of {Pre} program {Post} guarantees that if Pre holds before the execution of the program, and if the program ever terminates, then Post will be achieved
Total correctness proof
–Pre guarantees program termination and the truth of Post
These problems are undecidable!!!

16 Example
{input1 > 0 and input2 > 0}
begin
  read (input1); read (input2);
  x := input1; y := input2;
  div := 0;
  while x >= y loop
    div := div + 1;
    x := x - y;
  end loop;
  write (div); write (x);
end;
{input1 = div * input2 + x}

17 Discovery of the loop invariant
Difficult and creative work, because invariants cannot be constructed automatically
In the previous example: input1 = div * y + x and y = input2
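A minimal Python rendering of the division example of slides 16 and 17 (identifiers mirror the slides); the assertions check the precondition, the loop invariant, and the postcondition at run time:

```python
def divide(input1: int, input2: int):
    """Division by repeated subtraction, as in the slides' example."""
    assert input1 > 0 and input2 > 0              # precondition
    x, y, div = input1, input2, 0
    while x >= y:
        # loop invariant from the slide: input1 = div * y + x and y = input2
        assert input1 == div * y + x and y == input2
        div += 1
        x -= y
    assert input1 == div * input2 + x             # postcondition
    return div, x
```

For example, divide(7, 3) returns (2, 1), since 7 = 2 * 3 + 1.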

18 Combining correctness proofs
We can prove the correctness of operations (e.g., operations on a class)
Then use the result of each single proof to build proofs for complex modules containing these operations, or for complex combinations of these operations

19 Example
module TABLE;
exports
  type Table_Type (max_size: NATURAL): ?;
    no more than max_size entries may be stored in a table;
    user modules must guarantee this
  procedure Insert (Table: in out Table_Type; Element: in Element_Type);
  procedure Delete (Table: in out Table_Type; Element: in Element_Type);
  function Size (Table: in Table_Type) return NATURAL;
    provides the current size of a table
  …
end TABLE

20
{true} Delete (Table, Element); {Element ∉ Table}
{Size (Table) < max_size} Insert (Table, Element); {Element ∈ Table}
Having proved these, we can then prove properties of programs using tables.
For example, that after executing the sequence Insert(T, x); Delete(T, x); x is not present in T

21 An assessment of correctness proofs
Still not used in practice
However
–possibly used for very critical parts of code or in high-risk software!
–assertions (any intermediate property) may be the basis for a systematic way of inserting runtime checks (instead of checking values of variables)
–proofs may become more practical as more powerful support tools are developed
–knowledge of correctness theory helps programmers be rigorous
–postconditions can be used to design test cases

22 Symbolic execution Can be viewed as a middle way between testing and pure verification (but it is anyway a verification technique) Executes the program on symbolic values (symbolic expressions) One symbolic execution corresponds to many usual program executions

23 Example (1)
Consider executing the following program with x = X, y = Y, a = A (bindings):
x := y + 2;
if x > a then
  a := a + 2;
else
  y := x + 3;
end if;
x := x + a + y;
(The slide shows the corresponding control graph, with branches x > a and x <= a.)

24 Example (2)
When control reaches a decision, symbolic values in general do not allow selecting a branch
One can choose a branch, and record the choice in a path condition
The result is a pair: execution path, path condition; e.g., after the first statement, x is bound to Y + 2
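The choices above can be mimicked with a hand-rolled sketch in Python, tracking bindings and the path condition as plain strings (X, Y, A stand for the symbolic initial values; this is an illustration, not tooling from the slides):

```python
# A hand-rolled symbolic execution of the slides' example program.
# Bindings and the path condition are tracked as plain strings; the
# symbolic names X, Y, A stand for the initial values of x, y, a.

state = {"x": "X", "y": "Y", "a": "A"}
path_condition = []

# x := y + 2
state["x"] = f"({state['y']}) + 2"
# decision x > a: choose the true branch and record the choice
path_condition.append(f"{state['x']} > {state['a']}")
# a := a + 2 (true branch)
state["a"] = f"({state['a']}) + 2"
# x := x + a + y
state["x"] = f"({state['x']}) + ({state['a']}) + ({state['y']})"
```

After this run, the path condition is "(Y) + 2 > A" and x is bound to the symbolic expression built from Y and A; one such run stands for every concrete execution taking that branch.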

25 Symbolic execution rules (1)
read (x)
–removes any existing binding for x and adds the binding x = X, where X is a newly introduced symbolic value
write (expression)
–output(n) = the computed symbolic value of the expression (n is a counter initialized to 1 and automatically incremented after each write statement)
The symbolic state pairs the current bindings with the path condition.

26 Symbolic execution rules (2)
x := expression
–construct the symbolic value of the expression, SV; replace the existing binding of x with x = SV
After executing a statement that corresponds to an edge of the control graph, append the edge to the execution path

27 Symbolic execution rules (3)
if cond then S1; else S2; end if
while cond loop … end loop
–the condition is symbolically evaluated: eval (cond)
–if it is possible to state eval (cond) ≡ true or false, then execution proceeds by following the appropriate branch
–otherwise, make a choice of true or false, and conjoin eval (cond) (resp., not eval (cond)) to the path condition

28 Symbolic execution and testing
The path condition describes the data that traverse a certain path
Use in testing:
–select a path
–symbolically execute it (if possible)
–synthesize data that satisfy the path condition
These data will execute that path and are a test case for the path.

29 Example (1)
found := false;
counter := 1;
while (not found) and counter <= number_of_items loop
  if table (counter) = desired_element then
    found := true;
  end if;
  counter := counter + 1;
end loop;
if found then
  write ("the desired element exists in the table");
else
  write ("the desired element does not exist in the table");
end if;

30 Example (2)
(The slide shows the control graph of the program, with numbered nodes.)
The path 1,2,3,5,6,2,4… is not feasible!

31 Why so many approaches to testing and analysis? Testing versus static techniques Formal versus informal techniques White-box versus black-box techniques Fully mechanised vs. semi-mechanised techniques (for undecidable properties) … view all these as complementary

32 OO Unit Code Defect Testing

33 OO Unit: Test Case Design
Coverage is always the key point (what things are covered by the test cases).
Minimality of this coverage is the other key point (do not write two distinct test cases covering the same things).
In OO software, what is covered can be:
–states, interactions of classes
–further partitions can be based on states, interactions and the structure of classes (like attributes)
This confirms that black box and white box are methods comprising several techniques

34 Testing Objectives in OO General defect testing – The tester looks for plausible defects (i.e., aspects of the implementation of the system that may result in defects). To determine whether these defects exist, test cases are designed to exercise the code. Specialized defect testing: Class Testing and Class Hierarchy –Inheritance does not obviate the need for thorough testing of all derived classes. In fact, it can actually complicate the testing process. Scenario-Based Test Design (defect but also acceptance testing) –Scenario-based testing concentrates on what the user does, not what the product does. This means capturing the tasks (via use-cases) that the user has to perform, then applying them and their variants as test cases.

35 OO Software: Random Testing
Random testing to test a single class:
–identify the operations applicable to the class
–define constraints on their use
–identify a minimal test sequence of operations: a sequence of operations that defines the minimum life history of the objects of the class
–generate a variety of random (but possible) test sequences of operations: exercise other (more complex) life histories of class objects
White box if based on class code! Black box if based on transition diagrams or other models!
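The recipe above can be sketched in a few lines of Python. The Stack class and the constraint are hypothetical, chosen only to show constrained random sequence generation:

```python
import random

class Stack:
    """Hypothetical class under test (not from the slides)."""
    def __init__(self):
        self.items = []
    def push(self, x):
        self.items.append(x)
    def pop(self):
        return self.items.pop()
    def size(self):
        return len(self.items)

def random_sequence(length, seed=0):
    # Generate a random but legal operation sequence: the constraint is
    # that pop is only applied to a non-empty stack.
    rng = random.Random(seed)
    s = Stack()
    ops = []
    for _ in range(length):
        op = rng.choice(["push", "pop"]) if s.size() > 0 else "push"
        ops.append(op)
        if op == "push":
            s.push(1)
        else:
            s.pop()
    return ops, s.size()
```

Seeding the generator makes each random life history reproducible, which matters when a sequence exposes a failure.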

36 Example (elevator): ev_piano
Pre: the request R is not served
Post: the request R is marked served

37 OO Software: Behavior Testing
The tests to be designed should achieve all-state coverage [KIR94]. That is, the operation sequences should cause the Account class to transition through all allowable states
Black box!

38 OO Software: Inter-Class Testing Inter-class testing to exercise interactions between classes: –For each class, use the list of class operations to generate a series of random test sequences of operations. The operations will send messages to other classes. –For each message that is generated, determine the destination class and the corresponding operations. –For each operation in the destination class (that has been invoked by incoming messages), determine the messages that it transmits. –For each of the messages, determine the next level of operations that are invoked and incorporate these into the test sequence White box if based on class code! Black box if based on sequence diagrams!


40 OO Software: Additional tests New issues –inheritance –genericity –polymorphism –dynamic binding Open problems still exist White box!

41 How to test classes of a hierarchy?
“Flattening” the whole hierarchy and considering every class as a totally independent unit
–does not exploit incremental class definition
Finding an ad-hoc way to take advantage of the hierarchy

42 A simple technique: test case design for a class hierarchy
A test case that does not have to be repeated for any heir
A test case that must be performed for heir class X and all of its further heirs
A test case that must be redone by applying the same input data, but verifying that the output is not (or is) changed
A test case that must be modified by adding other input parameters and verifying that the output changes accordingly

43 Black-box testing of concurrent and real-time systems
Non-determinism (of the events driving the control flow) inherent in concurrency affects the repeatability of failures
For real-time systems, a test case consists not only of input data, but also of the times when such data are supplied (events)
(The slide shows a timeline of events with their inputs and outputs.)

44 Software Testing in the large

45 What should be tested?
Which is the product under test?
We have tested modules according to their expected behavior and (some forms of) unexpected behavior
This is largely related to functional requirements and to discovering defects in modules
What about non-functional requirements, and software quality attributes in general?
To talk about the software running in its environment, we need to talk about a software system: not just the software, but the software installed and running!
The software system is part of the whole system, and is therefore a subsystem of it (as many others are).
The software product to be delivered to the customer comprises additional things (e.g., user documentation)

46 Software System and Software
(The slide shows a diagram: the code sits inside the software system, which is a subsystem of the whole system. Around the code are operating systems, middleware, compilers, interpreters, the number of installations of the same module, and environment parameters. BB/WB testing applies both to the software and to the software (sub)system. Beware the perfect technology assumption, and wrong assumptions made during requirements engineering!)

47 Separate concerns in testing
Testing for functionality is not enough, e.g.:
–Overload (stress) testing (reliability)
–Robustness testing (testing unexpected situations) (safety)
–Performance testing (testing response time)
–…
These tests are typically related to software quality attributes and non-functional requirements, and are usually performed on the system (or subsystems), not on single modules
Other tests concern additional software quality attributes and non-functional requirements, such as maintainability, portability, and so on
Still other tests concern complementary quality attributes and non-functional requirements related to the software product in general (e.g., the user manual)

48 Software Architectures and Testing
A software architecture provides the structure of a complex software system; therefore, it is natural to perform testing by following the architecture:
Call-return
Layered
Object-oriented

49 Testing Activities in the Software Process
(The slide maps testing activities onto the process: Software Requirements Engineering produces the SRS (analysis model); Architecture Design and Detailed Design produce the design model and module designs; Coding produces code. Unit testing yields tested modules; Integration testing yields integrated modules; Software System testing yields the tested software system; Acceptance testing, planned against the SRS and the user manual, follows. System Requirements drive the System Test.)
The word “system” refers both to the “whole system” and to the “software system”. The software system implicitly encompasses hardware and the allocation of software onto hardware.

50 Levels of Testing
Type of Testing          Performed By
Unit (module) testing    Programmer
Integration testing      Development team
System testing           Independent Test Group
Acceptance testing       Customer/End Users
(Low-level testing: unit and integration; high-level testing: system and acceptance.)

51 Unit Testing
–done on individual units (modules)
–test the unit with white-box and black-box techniques
–mostly done by the programmer
–requires stubs and drivers
–unit testing of several units can be done in parallel

52 What are Stubs and Drivers?
Stub
–a dummy module which simulates the functionality of a module called by the module under test
Driver
–a module which transmits test cases, in the form of input arguments, to the module under test, and either prints or interprets the results it produces
(The slide shows a module call hierarchy A, B, C: to test B in isolation, a driver replaces A and a stub replaces C.)
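A toy Python sketch of the slide's picture (module B calls module C; all names are illustrative): a stub stands in for C, and a driver feeds B with test inputs and interprets the results.

```python
def stub_c(n):
    # Stub: canned behavior standing in for the real module C.
    return 10

def b(n, c=stub_c):
    # Module under test: B combines its input with whatever C returns.
    return n + c(n)

def driver():
    # Driver: transmits test cases to B and interprets the results.
    cases = [(0, 10), (1, 11), (5, 15)]
    return all(b(n) == expected for n, expected in cases)
```

Passing the collaborator as a parameter is one simple way to swap the stub for the real C later, without changing B.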

53 Integration Testing
–tests a group of modules
–tests structure w.r.t. expected behavior (i.e., typically black box)
–usually focuses on the interfaces (i.e., calls and parameter passing) between modules (defects lie in the way modules are called)
–largely architecture-dependent
–done by one developer or a group of developers

54 Top-down Integration
Begin with the top module in the module call hierarchy (represented as a structure chart)
Stub modules are produced
–but stubs are often complicated
The next module to be tested is any module with at least one previously tested superordinate (calling) module (in a depth-first or breadth-first order)
After a module has been tested, one of its stubs is replaced by the actual module (the next one to be tested), together with that module's required stubs

55 Example of a Module Hierarchy
(The slide shows a module call hierarchy with modules A, B, C, D, E, F, H.)

56 Top-down Integration Testing
Example: A is tested with Stub B, Stub C, and Stub D.

57 Top-down Integration Testing
Example: A is integrated with the real B; stubs remain for C, D, and for B's subordinates E and F.

58 Bottom-Up Integration
Begin with the terminal (leaf) modules (those that do not call other modules) of the module call hierarchy
A driver module is produced for every module
The next module to be tested is any module whose subordinate modules (the modules it calls) have all been tested
After a module has been tested, its driver is replaced by the actual calling module (the next one to be tested), together with that module's driver

59 Example of a Module Hierarchy
(The slide repeats the hierarchy with modules A, B, C, D, E, F, H.)

60 Bottom-Up Integration Testing
Example: E is tested with Driver E; F is tested with Driver F.

61 Bottom-Up Integration Testing
Example: B is integrated with E and F, driven by Driver B.

62 Comparison
Top-down Integration
–Advantage: a skeletal version of the program exists early
–Disadvantage: required stubs could be expensive
Bottom-up Integration
–Disadvantage: the program as a whole does not exist until the last module is added
No clear winner. An effective alternative is a hybrid of bottom-up and top-down:
–prioritize the integration of modules based on risk
–the highest-risk modules are integration tested earlier than modules with low risk

63 Regression Testing
Re-run of previous test cases to ensure that the software or system already tested has not regressed to an earlier defect level after making changes to (or integrating) the software or the system
Regression testing can also be performed during the entire life of a software product
Reusability of test cases is the key point!
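Test-case reuse can be made concrete with a saved suite that is replayed after every change. A minimal Python sketch (the function area and its saved cases are hypothetical):

```python
# Sketch of a reusable regression suite; the function `area` and its
# saved cases are hypothetical.

def area(w, h):
    return w * h

# Saved test cases: ((arguments), expected output)
regression_suite = [((2, 3), 6), ((1, 1), 1), ((4, 0), 0)]

def run_regression(fn, suite):
    # Re-run every saved case; return the list of regressions found.
    return [(args, expected, fn(*args))
            for args, expected in suite
            if fn(*args) != expected]
```

An empty result means no regression was detected by the saved cases; any non-empty result pinpoints which old behavior a change has broken.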

64 Integration Test Approaches
Non-incremental (big-bang integration)
–tests each module independently
–combines all the modules to form the integrated code in one step, and tests it (black box)
Incremental
–instead of testing each module in isolation, the next module to be tested is combined with the set of modules that have already been tested
–with two possible approaches: top-down and bottom-up

65 Comparison
Non-incremental
–requires more stubs and drivers
–module interfacing defects are detected late
–debugging defects is difficult
Incremental
–requires fewer stubs and drivers
–module interfacing defects are detected early
–debugging defects is easier
–testing can start before all modules are implemented
–results in more thorough testing of modules

66 Example
x := y + 2;
if x > a then
  a := a + 2;
else
  y := x + 3;
  a := F(x,y)
end if;
x := x + a + y;
How do we test this unit? Symbolic execution tells us that before the call to F(x,y) we have x = Y+2, a = A and y = Y+5 (with Y+2 <= a).
Definition of test cases (for example black box): e.g. A=5, Y=1, X=7, for which F(3,6) must be known.

67 Example P(x,y)
x := y + 2;
if x > a then
  a := a + 2;
else
  y := x + 3;
  a := F(x, y)
end if;
x := x + a + y;
How do we perform integration testing? (The slide shows a driver calling P and a stub replacing F.)
Stub for F: If xy then y=x+7
Driver: Read (x,y,a); call P(x,y); if x=…then write (test ok) else write (test not ok)
P with the stub: as above, but with a := STUB_F(x, y)

68 Example P(x,y)
x := y + 2;
if x > a then
  a := a + 2;
else
  y := x + 3;
  a := F(x, y)
end if;
x := x + a + y;
How do we perform integration testing? Identify the test cases for P (input, output). (The slide shows a driver and a stub for F.)

69 Example P(x,y)
x := y + 2;
if x > a then
  a := a + 2;
else
  y := x + 3;
  a := F(x, y)
end if;
x := x + a + y;
How do we perform integration testing? (The slide shows a driver and a stub for F.)
Stub for F: Write (x); Read (y)
Driver: Read (x,y,a); call P(x,y); if x=…then write (test ok) else write (test not ok)
P with the stub: as above, but with a := STUB_F(x, y)

70 Levels of Testing
Type of Testing          Performed By
Unit (module) testing    Programmer
Integration testing      Development team
System testing           Independent Test Group
Acceptance testing       Customer
(Low-level testing: unit and integration; high-level testing: system and acceptance.)

71 (Sub)System Testing
The process of attempting to demonstrate that the system (or subsystem) does not meet its original requirements and objectives, as stated in the requirements specification document
Usually it is not only code testing
It is a defect testing
Test cases derived from
–software requirements specification (analysis model)
–system objectives, user scenarios

72 Types of Software System Testing Volume testing –to determine whether the system can handle the required volumes of data, requests, etc. Load/Stress testing –to identify peak load conditions at which the system will fail to handle required processing loads within required time spans Usability (human factors) testing –to identify discrepancies between the user interfaces of the system (software) and the human engineering requirements of its potential users. Security Testing –to show that the system’s security requirements can be subverted

73 Types of Software System Testing
Performance testing (also code testing)
–to determine whether the system meets its performance requirements (e.g., response times, throughput rates, etc.)
Reliability/availability testing
–to determine whether the system meets its reliability and availability requirements (here availability is related to failure; however, availability may only be related to “out of service” situations, not necessarily related to failures)
Recovery testing
–to determine whether the system meets its requirements for recovery after a failure

74 Types of Software System Testing Installability testing –to identify ways in which the installation procedures lead to incorrect results Configuration Testing –to determine whether the system operates properly when the software or hardware is configured in a required manner Compatibility testing –to determine whether the compatibility (interoperability) objectives of the system have been met Resource usage testing –to determine whether the system uses resources (memory, disk space, etc.) at levels which exceed requirements Others

75 Alpha and Beta Testing
Acceptance testing performed on the developed software before it is released to the whole user community.
Alpha testing
–conducted at the developer's site by end users (who will use the software once delivered)
–tests conducted in a controlled environment
Beta testing
–conducted at one or more customer sites by the end users
–a “live” use of the delivered software, in an environment over which the developer has no control

76 When to Stop Testing?
Testing is potentially a never-ending activity!
However, an “exit condition” should be defined, e.g.:
–stop when the scheduled time for testing expires
–stop when all the test cases execute without detecting failures
Neither criterion is good.

77 Better Code Testing Stop condition
Stop on the use of specific test-case design techniques. Example: test cases derived from
–1) satisfying multiple-condition coverage and
–2) boundary-value analysis and
–3) ….
–….
and all resultant test cases are eventually unsuccessful (i.e., they do not lead to failures)

78 Better Code Testing Stop condition
Let ND be the number of defects
Seed a set of NDI defects into the unit
Have the test executed (by someone who does not know the seeded defects) using a certain number of techniques
The efficacy of our test is then:
–NumberOfSeededDefectsFound / NumberOfSeededDefects (NDIS/NDI)
–Under the hypothesis that native defects are similar to seeded ones, (NDS/ND) = (NDIS/NDI), and therefore ND = NDI * NDS / NDIS, where NDS is the NumberOfDefectsFound
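The defect-seeding estimate is a one-liner in Python (ND = NDI * NDS / NDIS, under the slide's assumption that seeded and native defects are equally easy to find):

```python
def estimated_total_defects(ndi, ndis, nds):
    # ndi: seeded defects; ndis: seeded defects found; nds: native defects found
    # Test efficacy is ndis / ndi; assuming native defects behave the same,
    # nds / nd = ndis / ndi, hence nd = ndi * nds / ndis.
    return ndi * nds / ndis
```

For example, seeding 20 defects and finding 10 of them, plus 7 native ones, estimates 14 native defects in total; 7 of them would still be undiscovered.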

79 Better Code Testing Stop condition
An improvement: two independent test groups find x and y defects respectively, of which q are common
The total number n of defects is then estimated as n = x * y / q (since one hypothesizes that x/n = q/y)
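The two-independent-groups (capture-recapture style) estimate from the slide, as a Python one-liner:

```python
def two_group_estimate(x, y, q):
    # x, y: defects found by each of the two independent groups;
    # q: defects found by both. Under x/n = q/y, total n = x * y / q.
    return x * y / q
```

For example, if the groups find 20 and 15 defects with 10 in common, the estimated total is 30.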

80 System Testing Stop condition
Stop in terms of the failure rate still to be found
It tries to characterize the test stop in terms of the time to be spent in testing
This stop condition is closely related to the reliability of the software system

81 Steps in Test Cases definition and Execution

82 Testing automation tools

83 Debugging
The activity of locating and correcting defects in software code
It can start once a failure has been detected
It usually follows testing (possibly testing performed during maintenance)
It sometimes needs the definition of an intermediate concept, error, i.e., a situation leading to the failure and due to the defect
The goal is closing up the gap between a fault and a failure, using:
–watch points
–intermediate assertions
defect (fault) → error → failure: the defect is the cause; the error is what is recognized as an incorrect situation; the failure is what is seen by an external observer

84 Autodebugging, System Management and Fault Tolerance
Detect errors and alert on them; may or may not stop the execution (which would otherwise lead to a failure)
Detect errors and undertake a fault management strategy (recovery, alternatives, etc.) that allows the fault to be tolerated!

85 Testing and verifying quality attributes in general

86 Performance
Worst-case analysis (a static technique: verification)
–focus is on proving that the (sub)system response time is bounded by some function of the external requests and parameters
–can be applied to code (without referring to any system)
vs. average behavior
Analytical vs. experimental approaches, and both may concern the (software) system:
–queueing models, statistics and probability theory (Markov chains)
–simulation

87 Correctness review
Correctness is an absolute feature of software, with a binary result (the software is correct, or it is not)
Typically, correctness is expressed in terms of functional requirements or component specifications derived from functional requirements
It is less important for real systems, where the hypotheses on which these systems are built (made during requirements engineering, or as the perfect technology assumption) are only probabilistically true, or not true at all
Correctness can be reformulated as:
–Reliability (probability of working without failures)
–Robustness (management of unexpected situations, i.e., failures elsewhere)
–Safety (probability that something specific does not happen)

88 Reliability (1)
There are approaches to measure reliability on a probabilistic basis, as in other engineering fields, i.e., the probability that the (software) system will work without failure for a period of time and under some conditions (in short, the probability of not failing within a time frame)
Unfortunately, there are some difficulties with this approach:
–independence of failures does not hold for software. Consider:
if x > 0 then write(y) else (write(x); write(z);)
with two defects: x wrongly assigned (7 instead of 6, or 0 instead of 1) and z wrongly assigned. The failure due to the wrong z can only show up when the (wrong) value of x sends control to the else branch, so the two failures are not independent.

89 Reliability (2)
Reliability is concerned with measuring the probability of the occurrence of failures
Meaningful parameters include:
–average total number of failures observed at time t: AF(t)
–failure intensity: FI(t) = AF'(t)
–mean time to fail at time t: MTTF(t) = 1/FI(t)
–mean time between failures: MTBF(t) = MTTF(t) + MTTR(t) (after a failure there is the need to repair)
Time is the execution time, but also the calendar time (because part of the software system can be shared with other software systems)

90 Basic reliability model
Assumes that the decrement per failure experienced (i.e., the derivative with respect to the number of observed failures) of the failure intensity function is constant, i.e., FI is a function of AF:
FI(AF) = FI0 * (1 - AF/AF∞)
where FI0 is the initial failure intensity and AF∞ is the total number of failures
The model is based on the optimistic hypothesis that a decrease in failures is due to the fixing of the defects that were the sources of those failures

91 Basic model
(The slide plots AF(t) against time t, together with the failure intensity FI(t) starting from FI(0).)
AF law: AF(t) = AF∞ * (1 - exp(-t * φ))
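A sketch of the basic model's two laws in Python. FI0 and AF∞ are the model parameters; taking φ = FI0/AF∞, so that the initial failure intensity AF'(0) equals FI0, is my reading of the slides:

```python
import math

def failure_intensity(af, fi0, af_inf):
    # FI(AF) = FI0 * (1 - AF / AF_inf)
    return fi0 * (1 - af / af_inf)

def expected_failures(t, fi0, af_inf):
    # AF(t) = AF_inf * (1 - exp(-t * phi)) with phi = FI0 / AF_inf
    return af_inf * (1 - math.exp(-t * fi0 / af_inf))
```

As t grows, expected_failures approaches AF∞ and the failure intensity decays toward zero, which is the exponential shape sketched on the slide.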

92 Estimation
AF(ti) = AF∞ * (1 - exp(-ti * φ))
Estimate AF∞ and φ from the observed failures
Compute the time t for which AF(t) = AF(T) + 1, where T is the time reached so far in testing, during which AF(T) failures have been observed
Run the test for at least t - T more, so as to observe one further failure

93 Testing subjective (less factual) quality attributes
Quality assessment on code of subjective quality attributes
Consider quality attributes like simplicity, reusability, understandability, …
There is a need for metrics

94 Internal attributes of quality
Software quality attributes (also called external attributes) vs. internal quality attributes

95 McCabe's source code metric
Cyclomatic complexity of the control graph:
C = e - n + 2p, where e is the number of edges, n the number of nodes, and p the number of connected components
McCabe contends that well-structured modules have C in the range 3..7, and that C = 10 is a reasonable upper limit for the complexity of a single module
–confirmed by empirical evidence
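The metric itself is a one-liner once the control graph has been counted:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe: C = e - n + 2p (p = 1 for a single connected control graph)
    return edges - nodes + 2 * components
```

For example, a lone if-then-else (decision, two branch nodes, join; four edges) gives C = 4 - 4 + 2 = 2, matching the two independent paths through it.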

96 Halstead's software science
Tries to measure some software qualities, such as abstraction level, and other features, such as effort, by measuring some quantities on code, such as:
–n1, number of distinct operators in the program
–n2, number of distinct operands in the program
–N1, number of occurrences of operators in the program
–N2, number of occurrences of operands in the program
N = n1 log n1 + n2 log n2 (length of the program; log is log base 2)
V = N log (n1 + n2) (volume of the program)

97 Halstead's software science
Can be used to estimate interesting quantities, like the number of expected defects in a piece of code!
Estimated number of defects B = E^(2/3) / 3000
where the mental effort E = [n1 * N2 * (N1 + N2) * ln(n1 + n2)] / (2 * n2)
is an indication of the effort required to understand and further develop a program.
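A sketch of the slides' formulas in Python. One reading note: I take N in the volume formula to be the actual program length N1 + N2 (the slides' notation leaves this ambiguous between actual and estimated length):

```python
import math

def halstead(n1, n2, N1, N2):
    # Formulas as given on the slides (log base 2 for length and volume).
    length = n1 * math.log2(n1) + n2 * math.log2(n2)   # estimated length
    volume = (N1 + N2) * math.log2(n1 + n2)            # V = N log(n1 + n2)
    effort = (n1 * N2 * (N1 + N2) * math.log(n1 + n2)) / (2 * n2)
    defects = effort ** (2 / 3) / 3000                 # B = E^(2/3) / 3000
    return length, volume, effort, defects
```

For a toy fragment with n1 = 4, n2 = 3, N1 = 10, N2 = 8, the volume comes out around 50.5 and the defect estimate is a small fraction of one defect, as expected for such a short program.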

98 Conclusions
Testing in general (also in relation to verification and validation, and to quality assurance at large)
Conventional unit code testing (black box, white box)
Static (verification) techniques for conventional unit code (inspection, walkthrough, symbolic execution, correctness proof)
OO unit code testing
Testing in the large (integration and system testing)
Testing quality attributes other than correctness, reliability, robustness, safety and performance
