
1 Dr.S.Sridhar, Ph.D., RACI(Paris, NICE), RMR(USA), RZFM(Germany) LMCSI, LMISTE, RIEEEProc., RIETCom PRINCIPAL, CREC, TIRUPATHI (AP)

2 What is Software? Software is a set of items or objects that form a “configuration” that includes programs, documents, and data...

3 What is Software?  software is engineered  software doesn’t wear out  software is complex  software is a ‘differentiator’  software is like an ‘aging factory’

4 Wear vs. Deterioration

5 The Cost of Change

6 Software Applications  system software  real-time software  business software  engineering/scientific software  embedded software  PC software  AI software  WebApps (Web applications)

7 Software Poses Challenges

8 Chapter 2 The Process

9 Software Engineering A Layered Technology Software Engineering a “quality” focus process model methods tools

10 A Common Process Framework Common process framework Framework activities work tasks work products milestones & deliverables QA checkpoints Umbrella Activities

11  Software project management  Formal technical reviews  Software quality assurance  Software configuration management  Document preparation and production  Reusability management  Measurement  Risk management

12 Process as Problem Solving

13 The Process Model: Adaptability  the framework activities will always be applied on every project... BUT  the tasks (and degree of rigor) for each activity will vary based on:  the type of project (an “entry point” to the model)  characteristics of the project  common sense judgment; concurrence of the project team

14 The Primary Goal: High Quality Remember: High quality = project timeliness Why? Less rework!

15 The Linear Model

16 Iterative Models Prototyping RAD

17 The Incremental Model

18 An Evolutionary (Spiral) Model

19 Still Other Process Models  Component assembly model—the process to apply when reuse is a development objective  Concurrent process model—recognizes that different parts of the project will be at different places in the process  Formal methods—the process to apply when a mathematical specification is to be developed  Cleanroom software engineering—emphasizes error detection before testing

20 Chapter 13 Design Concepts and Principles

21 Analysis to Design

22 Where Do We Begin? Spec Prototype Design modeling

23 Design Principles  The design process should not suffer from ‘tunnel vision.’  The design should be traceable to the analysis model.  The design should not reinvent the wheel.  The design should “minimize the intellectual distance” [DAV95] between the software and the problem as it exists in the real world.  The design should exhibit uniformity and integration.  The design should be structured to accommodate change.  The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered.  Design is not coding, coding is not design.  The design should be assessed for quality as it is being created, not after the fact.  The design should be reviewed to minimize conceptual (semantic) errors. From Davis [DAV95]

24 Fundamental Concepts  abstraction—data, procedure, control  refinement—elaboration of detail for all abstractions  modularity—compartmentalization of data and function  architecture—overall structure of the software  Structural properties  Extra-functional properties  Styles and patterns  procedure—the algorithms that achieve function  hiding—controlled interfaces

25 Data Abstraction door implemented as a data structure: manufacturer, model number, type, swing direction, inserts, lights (type, number), weight, opening mechanism

26 Procedural Abstraction open — implemented with a "knowledge" of the object that is associated with enter (the details of the enter algorithm)

27 Stepwise Refinement
open
    walk to door;
    reach for knob;
    open door;        (this step is refined below)
    walk through;
    close door.

open door
    repeat until door opens
        turn knob clockwise;
        if knob doesn't turn, then
            take key out;
            find correct key;
            insert in lock;
        endif
        pull/push door;
        move out of way;
    end repeat
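
To show the same refinement in code rather than prose, here is a minimal Python sketch; the Door class and all helper names are hypothetical stand-ins, not from the slides. Each function body elaborates one abstraction from the level above.

class Door:
    def __init__(self, locked=False):
        self.locked = locked
        self.is_open = False

def open_door(door):
    # top-level abstraction: "open"
    walk_to_door()
    reach_for_knob()
    open_the_door(door)      # this single step is refined below
    walk_through()
    close_door(door)

def open_the_door(door):
    # first refinement of the "open door" step
    while not door.is_open:
        if door.locked:
            key = find_correct_key()
            insert_in_lock(door, key)
        turn_knob_clockwise()
        push_or_pull(door)
        door.is_open = True
        move_out_of_way()

# trivial stand-ins so the sketch actually runs
def walk_to_door(): pass
def reach_for_knob(): pass
def walk_through(): pass
def close_door(door): door.is_open = False
def find_correct_key(): return "front-door key"
def insert_in_lock(door, key): door.locked = False
def turn_knob_clockwise(): pass
def push_or_pull(door): pass
def move_out_of_way(): pass

open_door(Door(locked=True))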

28 Modular Design

29 Modularity: Trade-offs What is the "right" number of modules for a specific software design? (Figure: cost of software plotted against number of modules; module development cost falls and module integration cost rises as the number of modules grows, giving an optimal number of modules where total cost is lowest.)
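
The U-shaped total-cost curve on this slide can be sketched numerically. The model below is illustrative only — the cost functions and constants are assumptions, not data from the slide; the point is simply that development cost falls while integration cost rises with module count, so the total has a minimum in between.

# Illustrative cost model only: the shapes, not the numbers, carry the point.
def development_cost(n_modules):
    return 5000.0 / n_modules        # smaller modules are individually cheaper to build

def integration_cost(n_modules):
    return 50.0 * n_modules          # but more modules mean more interfaces to integrate

def total_cost(n_modules):
    return development_cost(n_modules) + integration_cost(n_modules)

costs = {n: total_cost(n) for n in range(1, 41)}
optimal = min(costs, key=costs.get)
print(f"cheapest module count in this toy model: {optimal}")   # prints 10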

30 Sizing Modules: Two Views

31 Functional Independence

32 Architecture “The overall structure of the software and the ways in which that structure provides conceptual integrity for a system.” [SHA95a] Structural properties. This aspect of the architectural design representation defines the components of a system (e.g., modules, objects, filters) and the manner in which those components are packaged and interact with one another. For example, objects are packaged to encapsulate both data and the processing that manipulates the data and interact via the invocation of methods. Extra-functional properties. The architectural design description should address how the design architecture achieves requirements for performance, capacity, reliability, security, adaptability, and other system characteristics. Families of related systems. The architectural design should draw upon repeatable patterns that are commonly encountered in the design of families of similar systems. In essence, the design should have the ability to reuse architectural building blocks.

33 Information Hiding (Figure: clients see only a module's controlled interface; the module's "secrets" — its algorithm, data structure, details of the external interface, and resource allocation policy — each represent a specific design decision hidden from clients.)

34 Why Information Hiding?  reduces the likelihood of “side effects”  limits the global impact of local design decisions  emphasizes communication through controlled interfaces  discourages the use of global data  leads to encapsulation—an attribute of high quality design  results in higher quality software

35 Chapter 15 User Interface Design

36 Interface Design Easy to use? Easy to understand? Easy to learn?

37 Interface Design Typical design errors: lack of consistency, too much memorization, no guidance/help, no context sensitivity, poor response, arcane/unfriendly interfaces.

38 Golden Rules  Place the user in control  Reduce the user’s memory load  Make the interface consistent

39 Place the User in Control Define interaction modes in a way that does not force a user into unnecessary or undesired actions. Provide for flexible interaction. Allow user interaction to be interruptible and undoable. Streamline interaction as skill levels advance and allow the interaction to be customized. Hide technical internals from the casual user. Design for direct interaction with objects that appear on the screen.

40 Reduce the User’s Memory Load Reduce demand on short-term memory. Establish meaningful defaults. Define shortcuts that are intuitive. The visual layout of the interface should be based on a real world metaphor. Disclose information in a progressive fashion.

41 Make the Interface Consistent Allow the user to put the current task into a meaningful context. Maintain consistency across a family of applications. If past interactive models have created user expectations, do not make changes unless there is a compelling reason to do so.

42 User Interface Design Models  System perception — the user’s mental image of what the interface is  User model — a profile of all end users of the system  System image — the “presentation” of the system projected by the complete interface  Design model — data, architectural, interface and procedural representations of the software

43 User Interface Design Process

44 Task Analysis and Modeling  All human tasks required to do the job (of the interface) are defined and classified  Objects (to be manipulated) and actions (functions applied to objects) are identified for each task  Tasks are refined iteratively until the job is completely defined

45 Interface Design Activities 1. Establish the goals and intentions for each task. 2. Map each goal/intention to a sequence of specific actions. 3. Specify the action sequence of tasks and subtasks, also called a user scenario, as it will be executed at the interface level. 4. Indicate the state of the system, i.e., what does the interface look like at the time that a user scenario is performed? 5. Define control mechanisms, i.e., the objects and actions available to the user to alter the system state. 6. Show how control mechanisms affect the state of the system. 7. Indicate how the user interprets the state of the system from information provided through the interface.

46 Design Evaluation Cycle

47 Chapter 17 Software Testing Techniques

48 Software Testing Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.

49 Testability  Operability—it operates cleanly  Observability—the results of each test case are readily observed  Controllability—the degree to which testing can be automated and optimized  Decomposability—testing can be targeted  Simplicity—reduce complex architecture and logic to simplify tests  Stability—few changes are requested during testing  Understandability—of the design

50 What Testing Shows: errors, requirements conformance, performance, and an indication of quality.

51 Who Tests the Software? The developer understands the system but will test "gently" and is driven by "delivery"; the independent tester must learn about the system, but will attempt to break it and is driven by quality.

52 Exhaustive Testing (Figure: a small flow graph containing a loop that executes up to 20 times.) There are 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
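
The 3,170-year figure follows from simple arithmetic. A quick check, assuming the 10^14 paths and one test per millisecond stated on the slide:

paths = 10 ** 14                        # possible paths stated on the slide
seconds = paths / 1_000                 # one test per millisecond
years = seconds / (60 * 60 * 24 * 365)
print(f"about {years:,.0f} years")      # about 3,171 years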

53 Selective Testing (Figure: the same flow graph with a single selected path highlighted.)

54 Software Testing Methods (white-box methods, black-box methods) and strategies

55 Test Case Design "Bugs lurk in corners and congregate at boundaries..." — Boris Beizer. Objective: to uncover errors. Criteria: in a complete manner. Constraint: with a minimum of effort and time.

56 White-Box Testing... our goal is to ensure that all statements and conditions have been executed at least once...

57 Why Cover? logic errors and incorrect assumptions are inversely proportional to a path's execution probability; we often believe that a path is not likely to be executed when, in fact, reality is often counterintuitive; typographical errors are random, so it's likely that untested paths will contain some

58 Basis Path Testing First, we compute the cyclomatic complexity: number of simple decisions + 1 or number of enclosed areas + 1 In this case, V(G) = 4

59 Cyclomatic Complexity A number of industry studies have indicated that the higher V(G), the higher the probability of errors. (Figure: distribution of modules by V(G); modules in the high-V(G) range are more error prone.)

60 Basis Path Testing Next, we derive the independent paths: Since V(G) = 4, there are four paths Path 1: 1,2,3,6,7,8 Path 2: 1,2,3,5,7,8 Path 3: 1,2,4,7,8 Path 4: 1,2,4,7,2,4,...7,8 Finally, we derive test cases to exercise these paths
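
The transcript does not reproduce the flow graph behind these paths, so the sketch below uses a hypothetical function with three simple decisions, giving V(G) = 3 + 1 = 4, and one test case per independent path:

def classify(x):
    # three simple decisions => V(G) = 3 + 1 = 4
    if x < 0:            # decision 1
        return "negative"
    if x == 0:           # decision 2
        return "zero"
    if x > 100:          # decision 3
        return "large"
    return "small"

def test_basis_paths():
    assert classify(-5) == "negative"    # path taken when decision 1 is true
    assert classify(0) == "zero"         # decision 1 false, decision 2 true
    assert classify(500) == "large"      # decisions 1-2 false, decision 3 true
    assert classify(7) == "small"        # all three decisions false

test_basis_paths()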

61 Basis Path Testing Notes you don't need a flow chart, but the picture will help when you trace program paths count each simple logical test, compound tests count as 2 or more basis path testing should be applied to critical modules

62 Loop Testing: simple loops, nested loops, concatenated loops, unstructured loops

63 Loop Testing: Simple Loops Minimum conditions—Simple Loops 1. skip the loop entirely 2. only one pass through the loop 3. two passes through the loop 4. m passes through the loop (m < n) 5. (n-1), n, and (n+1) passes through the loop, where n is the maximum number of allowable passes
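
A minimal Python sketch of these five conditions, applied to a hypothetical function whose loop executes at most n = 10 times:

def sum_first(values, n=10):
    total = 0
    for v in values[:n]:                 # the loop runs at most n times
        total += v
    return total

def test_simple_loop(n=10):
    # 0 passes (skip), 1 pass, 2 passes, m < n passes, n-1, n, and n+1 passes
    for passes in (0, 1, 2, 5, n - 1, n, n + 1):
        data = list(range(passes))
        assert sum_first(data, n) == sum(data[:n])

test_simple_loop()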

64 Loop Testing: Nested Loops Start at the innermost loop. Set all outer loops to their minimum iteration parameter values. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested. Concatenated loops: if the loops are independent of one another, treat each as a simple loop; else* treat them as nested loops. (*for example, when the final loop counter value of loop 1 is used to initialize loop 2.)

65 Black-Box Testing requirements events input output

66 Equivalence Partitioning (Figure: classes of input — user queries, mouse picks, output formats, prompts, FK input, data.)

67 Sample Equivalence Classes Valid data: user supplied commands, responses to system prompts, file names, computational data (physical parameters, bounding values, initiation values), output data formatting, responses to error messages, graphical data (e.g., mouse picks). Invalid data: data outside bounds of the program, physically impossible data, proper value supplied in wrong place.

68 Boundary Value Analysis (Figure: the same input classes — user queries, mouse picks, output formats, prompts, FK input, data — with test cases selected at the boundaries of the input domain and the output domain.)
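
As a concrete (hypothetical) example of both techniques, consider an input field that accepts integers from 1 to 100; the validator and test values below are assumptions for illustration only.

def accept_quantity(value):
    # hypothetical input rule: an integer from 1 to 100 inclusive
    return isinstance(value, int) and 1 <= value <= 100

# Equivalence partitioning: one representative per class.
assert accept_quantity(50)           # valid class: 1..100
assert not accept_quantity(-3)       # invalid class: below the range
assert not accept_quantity(250)      # invalid class: above the range
assert not accept_quantity("ten")    # invalid class: not an integer at all

# Boundary value analysis: values at and just beyond each edge of the domain.
for value, expected in [(0, False), (1, True), (2, True),
                        (99, True), (100, True), (101, False)]:
    assert accept_quantity(value) == expected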

69 Other Black Box Techniques  error guessing methods  decision table techniques  cause effect graphing

70 Chapter 18 Software Testing Strategies

71 Testing Strategy unit test, integration test, validation test, system test

72 Unit Testing (Figure: the software engineer applies test cases to the module to be tested and examines the results.)

73 Unit Testing Test cases for the module to be tested exercise its interface, local data structures, boundary conditions, independent paths, and error handling paths.

74 Unit Test Environment (Figure: a driver invokes the module under test with test cases; stubs replace the module's subordinates; the interface, local data structures, boundary conditions, independent paths, and error handling paths are exercised and the results collected.)
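
A minimal sketch of the driver/stub idea, assuming a hypothetical module that depends on an unfinished "tax" subordinate:

def compute_total(amount, tax_service):
    # module under test: relies on a subordinate module through tax_service
    return amount + tax_service(amount)

def tax_stub(amount):
    # stub: stands in for the unfinished subordinate, giving predictable answers
    return round(amount * 0.10, 2)

def driver():
    # driver: feeds test cases to the module under test and checks the results
    for amount, expected in [(100.0, 110.0), (0.0, 0.0), (19.99, 21.99)]:
        assert abs(compute_total(amount, tax_stub) - expected) < 0.01
    print("unit tests passed")

driver()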

75 Integration Testing Strategies Options: the "big bang" approach; an incremental construction strategy

76 Top Down Integration top module is tested with stubs; stubs are replaced one at a time, "depth first"; as new modules are integrated, some subset of tests is re-run (Figure: module hierarchy A through G.)

77 Bottom-Up Integration drivers are replaced one at a time, "depth first"; worker modules are grouped into builds (clusters) and integrated (Figure: module hierarchy A through G.)

78 Sandwich Testing top modules are tested with stubs; worker modules are grouped into builds (clusters) and integrated (Figure: module hierarchy A through G.)

79 High Order Testing validation test system test alpha and beta test other specialized testing

80 Debugging: A Diagnostic Process

81 The Debugging Process (Figure: test cases produce results; debugging moves from suspected causes to identified causes, then to corrections, regression tests, and new test cases.)

82 Debugging Effort time required to diagnose the symptom and determine the cause time required to correct the error and conduct regression tests

83 Symptoms & Causes symptom cause symptom and cause may be geographically separated symptom may disappear when another problem is fixed cause may be due to a combination of non-errors cause may be due to a system or compiler error cause may be due to assumptions that everyone believes symptom may be intermittent

84 Consequences of Bugs Damage ranges from mild, annoying, and disturbing through serious, extreme, and catastrophic to infectious, depending on bug type. Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.

85 Debugging Techniques brute force / testing backtracking induction deduction

86 Debugging: Final Thoughts Don't run off half-cocked; think about the symptom you're seeing. Use tools (e.g., a dynamic debugger) to gain more insight. If at an impasse, get help from someone else. Be absolutely sure to conduct regression tests when you do "fix" the bug.

87 Chapter 20 Object-Oriented Concepts and Principles

88 The OO Process Model

89 The OO Mindset problem domain objects

90 Key Concepts classes and class hierarchies: instances, inheritance, abstraction and hiding; objects: attributes, methods, encapsulation, polymorphism; messages

91 Classes object-oriented thinking begins with the definition of a class, often defined as: a template, a generalized description, a pattern, a "blueprint"... describing a collection of similar items; a metaclass (also called a superclass) is a collection of classes; once a class of items is defined, a specific instance of the class can be defined

92 Building a Class

93 What is a Class? Candidate classes are drawn from external entities, things, occurrences, roles, organizational units, places, and structures. (Figure: a class is represented by its name, attributes, and operations.)

94 Encapsulation/Hiding The object encapsulates both data and the logical procedures required to manipulate the data, achieving "information hiding". (Figure: methods surround and protect the object's data.)

95 Class Hierarchy (Figure: furniture is the superclass; chair, table, desk, and "chable" are subclasses of the furniture superclass; individual chairs are instances of the chair class.)

96 Methods (a.k.a. Operations, Services) An executable procedure that is encapsulated in a class and is designed to operate on one or more data attributes that are defined as part of the class. A method is invoked via message passing.
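
A short Python sketch of the preceding slides — a furniture superclass, subclasses such as chair, encapsulated attributes, and a method invoked by sending a message. The attribute names and values are illustrative, not from the slides.

class Furniture:                              # superclass
    def __init__(self, cost, location):
        self._cost = cost                     # attributes are encapsulated with the methods
        self._location = location

    def move_to(self, new_location):          # a method (operation/service)
        self._location = new_location

    def describe(self):
        return f"{type(self).__name__} at {self._location}, cost {self._cost}"

class Chair(Furniture):                       # subclass: inherits attributes and methods
    def __init__(self, cost, location, legs=4):
        super().__init__(cost, location)
        self.legs = legs

class Table(Furniture):                       # another subclass of the same superclass
    pass

office_chair = Chair(cost=120, location="room 12")    # an instance (object) of chair
office_chair.move_to("room 14")                       # a message invokes a method
print(office_chair.describe())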

97 Messages

98 Chapter 21 Object-Oriented Analysis

99 Domain Analysis Sources of domain knowledge (technical literature, existing applications, customer surveys, expert advice, current/future requirements) feed the domain analysis activity, which produces the domain analysis model (class taxonomies, reuse standards, functional models, domain languages).

100 OOA- A Generic View define use cases extract candidate classes establish basic class relationships define a class hierarchy identify attributes for each class specify methods that service the attributes indicate how classes/objects are related build a behavioral model iterate on the first five steps

101 Use Cases  a scenario that describes a “thread of usage” for a system  actors represent roles people or devices play as the system functions  users can play a number of different roles for a given scenario

102 Developing a Use Case  What are the main tasks or functions that are performed by the actor?  What system information will the actor acquire, produce or change?  Will the actor have to inform the system about changes in the external environment?  What information does the actor desire from the system?  Does the actor wish to be informed about unexpected changes?

103 Selecting Classes—Criteria needed services, multiple attributes, common attributes, common operations, essential requirements, retained information

104 Unified Modeling Language (UML) User model view. This view represents the system (product) from the user’s (called “actors” in UML) perspective. Structural model view. Data and functionality are viewed from inside the system. That is, static structure (classes, objects, and relationships) is modeled. Behavioral model view. This part of the analysis model represents the dynamic or behavioral aspects of the system. Implementation model view. The structural and behavioral aspects of the system are represented as they are to be built. Environment model view. The structural and behavioral aspects of the environment in which the system is to be implemented are represented.

105 UML: Use-Case Diagram

106 CRC Modeling

107 Guidelines for Allocating Responsibilities to Classes 1. System intelligence should be evenly distributed. 2. Each responsibility should be stated as generally as possible. 3. Information and the behavior that is related to it should reside within the same class. 4. Information about one thing should be localized with a single class, not distributed across multiple classes. 5. Responsibilities should be shared among related classes, when appropriate.

108 Reviewing the CRC Model 1. All participants in the review (of the CRC model) are given a subset of the CRC model index cards. 2. All use-case scenarios (and corresponding use-case diagrams) should be organized into categories. 3. The review leader reads the use-case deliberately. As the review leader comes to a named object, she passes the token to the person holding the corresponding class index card. 4. When the token is passed, the holder of the class card is asked to describe the responsibilities noted on the card. The group determines whether one (or more) of the responsibilities satisfies the use-case requirement. 5. If the responsibilities and collaborations noted on the index cards cannot accommodate the use-case, modifications are made to the cards.

109 UML: Class Diagrams Generalization–specialization; composite aggregates

110 UML: Package Reference

111 Relationships between Objects

112 Object-Behavior Model 1. Evaluate all use-cases to fully understand the sequence of interaction within the system. 2. Identify events that drive the interaction sequence and understand how these events relate to specific objects. 3. Create an event trace [RUM91] for each use-case. 4. Build a state transition diagram for the system 5. Review the object-behavior model to verify accuracy and consistency

113 UML: State Transition

114 UML: Event Trace

115 Chapter 22 Object-Oriented Design

116 Object-Oriented Design

117 OOA and OOD

118

119 Design Issues  decomposability—the facility with which a design method helps the designer to decompose a large problem into subproblems that are easier to solve;  composability—the degree to which a design method ensures that program components (modules), once designed and built, can be reused to create other systems;  understandability—the ease with which a program component can be understood without reference to other information or other modules;  continuity—the ability to make small changes in a program and have these changes manifest themselves with corresponding changes in just one or a very few modules;  protection—an architectural characteristic that will reduce the propagation of side effects if an error does occur in a given module.

120 Generic Components for OOD  Problem domain component—the subsystems that are responsible for implementing customer requirements directly;  Human interaction component—the subsystems that implement the user interface (this includes reusable GUI subsystems);  Task management component—the subsystems that are responsible for controlling and coordinating concurrent tasks that may be packaged within a subsystem or among different subsystems;  Data management component—the subsystem that is responsible for the storage and retrieval of objects.

121 Process Flow for OOD

122 System Design Process Partition the analysis model into subsystems. Identify concurrency that is dictated by the problem. Allocate subsystems to processors and tasks. Develop a design for the user interface. Choose a basic strategy for implementing data management. Identify global resources and the control mechanisms required to access them. Design an appropriate control mechanism for the system, including task management. Consider how boundary conditions should be handled. Review and consider trade-offs.

123 System Design

124 Subsystem Example

125 Subsystem Design Criteria The subsystem should have a well-defined interface through which all communication with the rest of the system occurs. With the exception of a small number of “communication classes,” the classes within a subsystem should collaborate only with other classes within the subsystem. The number of subsystems should be kept small. A subsystem can be partitioned internally to help reduce complexity.

126 Subsystem Collaboration Table

127 Object Design  A protocol description establishes the interface of an object by defining each message that the object can receive and the related operation that the object performs  An implementation description shows implementation details for each operation implied by a message that is passed to an object.  information about the object's private part  internal details about the data structures that describe the object’s attributes  procedural details that describe operations
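
A minimal sketch of the protocol/implementation split, using Python's abc module; the SensorProtocol and ThermometerImpl names and operations are hypothetical.

from abc import ABC, abstractmethod

class SensorProtocol(ABC):
    # protocol description: every message the object can receive
    @abstractmethod
    def read(self) -> float: ...
    @abstractmethod
    def calibrate(self, offset: float) -> None: ...

class ThermometerImpl(SensorProtocol):
    # implementation description: the private part and procedural detail
    def __init__(self):
        self._offset = 0.0                    # hidden data structure

    def read(self) -> float:
        raw = 21.5                            # stand-in for a real device read
        return raw + self._offset

    def calibrate(self, offset: float) -> None:
        self._offset = offset

t = ThermometerImpl()
t.calibrate(0.5)
print(t.read())                               # 22.0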

128 Design Patterns... you’ll find recurring patterns of classes and communicating objects in many object-oriented systems. These patterns solve specific design problems and make object-oriented design more flexible, elegant, and ultimately reusable. They help designers reuse successful designs by basing new designs on prior experience. A designer who is familiar with such patterns can apply them immediately to design problems without having to rediscover them. Gamma and his colleagues [GAM95]

129 Design Pattern Attributes  The design pattern name is an abstraction that conveys significant meaning about its applicability and intent.  The problem description indicates the environment and conditions that must exist to make the design pattern applicable.  The pattern characteristics indicate the attributes of the design that may be adjusted to enable the pattern to accommodate a variety of problems.  The consequences associated with the use of a design pattern provide an indication of the ramifications of design decisions.
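
To make these attributes concrete, here is a minimal sketch of one well-known pattern from [GAM95], Strategy; the shipping example is hypothetical.

from typing import Callable

# Name: Strategy. Problem: a computation must vary independently of the
# clients that use it. Characteristic: each variant sits behind a common
# interface. Consequence: new variants can be added without touching clients.

def flat_rate(weight_kg: float) -> float:
    return 7.50

def by_weight(weight_kg: float) -> float:
    return 1.20 * weight_kg

def shipping_cost(weight_kg: float, strategy: Callable[[float], float]) -> float:
    # the client depends only on the strategy interface, not on any variant
    return strategy(weight_kg)

print(shipping_cost(4.0, flat_rate))   # 7.5
print(shipping_cost(4.0, by_weight))   # 4.8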

130 Chapter 23 Object-Oriented Testing

131 Object-Oriented Testing  begins by evaluating the correctness and consistency of the OOA and OOD models  testing strategy changes  the concept of the ‘unit’ broadens due to encapsulation  integration focuses on classes and their execution across a ‘thread’ or in the context of a usage scenario  validation uses conventional black box methods  test case design draws on conventional methods, but also encompasses special features

132 Broadening the View of “Testing” It can be argued that the review of OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level. Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).

133 Testing the CRC Model 1. Revisit the CRC model and the object-relationship model. 2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator’s definition. 3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source. 4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes. 5. Determine whether widely requested responsibilities might be combined into a single responsibility. 6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.

134 OOT Strategy  class testing is the equivalent of unit testing  operations within the class are tested  the state behavior of the class is examined  integration applies three different strategies  thread-based testing—integrates the set of classes required to respond to one input or event  use-based testing—integrates the set of classes required to respond to one use case  cluster testing—integrates the set of classes required to demonstrate one collaboration

135 OOT—Test Case Design Berard [BER93] proposes the following approach: 1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested, 2. The purpose of the test should be stated, 3. A list of testing steps should be developed for each test and should contain [BER94]: a. a list of specified states for the object that is to be tested b. a list of messages and operations that will be exercised as a consequence of the test c. a list of exceptions that may occur as the object is tested d. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test) e. supplementary information that will aid in understanding or implementing the test.

136 OOT Methods: Random Testing  Random testing  identify operations applicable to a class  define constraints on their use  identify a minimum test sequence  an operation sequence that defines the minimum life history of the class (object)  generate a variety of random (but valid) test sequences  exercise other (more complex) class instance life histories
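
A hypothetical sketch of this approach for a simple Account class: first the minimum life history, then random (but valid) operation sequences that check the class invariant after every message.

import random

class Account:                                 # hypothetical class under test
    def __init__(self):
        self.balance = 0
        self.closed = False
    def deposit(self, amount):
        assert not self.closed and amount > 0
        self.balance += amount
    def withdraw(self, amount):
        assert not self.closed and 0 < amount <= self.balance
        self.balance -= amount
    def close(self):
        self.closed = True

def minimum_life_history():
    account = Account()                        # open -> deposit -> close
    account.deposit(10)
    account.close()

def random_valid_sequence(steps=20):
    account = Account()
    for _ in range(steps):
        if account.balance > 0 and random.random() < 0.5:
            account.withdraw(random.randint(1, account.balance))
        else:
            account.deposit(random.randint(1, 100))
        assert account.balance >= 0            # class invariant after every message
    account.close()

minimum_life_history()
for _ in range(5):
    random_valid_sequence()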

137 OOT Methods: Partition Testing  Partition Testing  reduces the number of test cases required to test a class in much the same way as equivalence partitioning for conventional software  state-based partitioning  categorize and test operations based on their ability to change the state of a class  attribute-based partitioning  categorize and test operations based on the attributes that they use  category-based partitioning  categorize and test operations based on the generic function each performs

138 OOT Methods: Inter-Class Testing  Inter-class testing  For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.  For each message that is generated, determine the collaborator class and the corresponding operator in the server object.  For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.  For each of the messages, determine the next level of operators that are invoked and incorporate these into the test sequence

148 Supplementary Slides for Software Engineering: A Practitioner's Approach, 5/e copyright © 1996, 2001 R.S. Pressman & Associates, Inc. For University Use Only May be reproduced ONLY for student use at the university level when used in conjunction with Software Engineering: A Practitioner's Approach. Any other reproduction or use is expressly prohibited. This presentation, slides, or hardcopy may NOT be used for short courses, industry seminars, or consulting purposes.

149 Chapter 29 Web Engineering

150 Attributes of Web-Based Applications Network intensive. By its nature, a WebApp is network intensive. It resides on a network and must serve the needs of a diverse community of clients. Content-Driven. In many cases, the primary function of a WebApp is to use hypermedia to present text, graphics, audio, and video content to the end-user. Continuous evolution. Unlike conventional application software that evolves over a series of planned, chronologically-spaced releases, Web applications evolve continuously.

151 WebApp Characteristics Immediacy. Web-based applications have an immediacy [NOR99] that is not found in any other type of software. That is, the time to market for a complete Web-site can be a matter of a few days or weeks. Security. In order to protect sensitive content and provide secure modes of data transmission, strong security measures must be implemented throughout the infrastructure that supports a WebApp and within the application itself. Aesthetics. An undeniable part of the appeal of a WebApp is its look and feel. When an application has been designed to market or sell products or ideas, aesthetics may have as much to do with success as technical design.

152 WebApp Quality Factors

153 The WebE Process

154 Formulation  Allows the customer and developer to establish a common set of goals  Addresses three questions:  What is the main motivation for the WebApp?  Why is the WebApp needed?  Who will use the WebApp?  Defines two categories of goals:  Informational goals—indicate an intention to provide specific content and/or information to the end user  Applicative goals—indicate the ability to perform some task within the WebApp

155 Analysis for WebE Content Analysis. The full spectrum of content to be provided by the WebApp is identified, including text, graphics and images, video, and audio data. Data modeling can be used to identify and describe each of the data objects. Interaction Analysis. The manner in which the user interacts with the WebApp is described in detail. Use-cases can be developed to provide detailed descriptions of this interaction. Functional Analysis. The usage scenarios (use-cases) created as part of interaction analysis define the operations that will be applied to WebApp content and imply other processing functions. All operations and functions are described in detail. Configuration Analysis. The environment and infrastructure in which the WebApp resides are described in detail.

156 Design for WebE  Architectural design — laying out the page structure of the WebApp  Navigation design — defining the manner in which pages will be navigated  Interface design — establishing consistent and effective user interaction mechanisms

157 Architectural Styles Hierarchical structure Grid structure Linear structure Network structure

158 Navigation Design  identify the semantics of navigation for different users of the site  User roles must be defined  Semantics of navigation for each role must be identified  A semantic navigation unit (SNU) should be defined for each goal associated with each user  Ways of navigating (WoN) are defined  define the mechanics (syntax) of achieving the navigation  options are text-based links, icons, buttons and switches, and graphical metaphors

159 Interface Design Guidelines Server errors, even minor ones, are likely to cause a user to leave the Web site and look elsewhere for information or services. Reading speed on a computer monitor is approximately 25 percent slower than reading speed for hardcopy. Therefore, do not force the user to read voluminous amounts of text. Avoid “under construction” signs—they raise expectations and cause an unnecessary link that is sure to disappoint. Users prefer not to scroll. Important information should be placed within the dimensions of a typical browser window. Navigation menus and headbars should be designed consistently and should be available on all pages that are available to the user. The design should not rely on browser functions to assist in navigation. Aesthetics should never supersede functionality. Navigation options should be obvious, even to the casual user. The user should not have to search the screen to determine how to link to other content or services.

160 Testing for WebE – I 1. The content model for the WebApp is reviewed to uncover errors. This ‘testing’ activity is similar in many respects to copy-editing for a written document. 2. The design model for the WebApp is reviewed to uncover navigation errors. Use-cases, derived as part of the analysis activity, allow a Web engineer to exercise each usage scenario against the architectural and navigation design. 3. Selected processing components and Web pages are unit tested. When WebApps are considered, the concept of the unit changes. Each Web page encapsulates content, navigation links and processing elements (forms, scripts, applets). 4. The architecture is constructed and integration tests are conducted. The strategy for integration testing depends on the architecture that has been chosen: for a linear, grid, or simple hierarchical structure, integration is similar to conventional software; for a mixed hierarchy or network (Web) architecture, integration testing is similar to the approach used for OO systems.

161 Testing for WebApps – II 5. The assembled WebApp is tested for overall functionality and content delivery. Like conventional validation, the validation of Web-based systems and applications focuses on user visible actions and user recognizable outputs from the system. 6. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration. A cross-reference matrix that defines all probable operating systems, browsers, hardware platforms, and communications protocols is created. Tests are then conducted to uncover errors associated with each possible configuration. 7. The WebApp is tested by a controlled and monitored population of end-users. A population of users that encompasses every possible user role is chosen. The WebApp is exercised by these users and the results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and WebApp reliability and performance.

162 Project Management for WebE  Initiate the project  Many of the analysis activities should be performed internally even if the project is outsourced  A rough design for the WebApp should be developed internally.  A rough project schedule, including not only final delivery dates, but also milestone dates should be developed.  The degree of oversight and interaction by the contractor with the vendor should be identified.

163 Project Management for WebE  Select candidate outsourcing vendors  interview past clients to determine the Web vendor’s professionalism, ability to meet schedule and cost commitments, and ability to communicate effectively  determine the name of the vendor’s chief Web engineer(s) for successful past projects (and later, be certain that this person is contractually obligated to be involved in your project)  carefully examine samples of the vendor’s work that are similar in look and feel (and business area) to the WebApp that is to be contracted.

164 Project Management for WebE  Assess the validity of price quotes and the reliability of estimates  Does the quoted cost of the WebApp provide a direct or indirect return-on-investment that justifies the project?  Does the vendor that has provided the quote exhibit the professionalism and experience we require?  Establish the degree of project management expected from both parties  Assess the development schedule  WBS should have high granularity  Milestones should be defined at tight intervals

165 SCM for WebE  WebApp content is extremely varied  SCOs must be defined  The “longevity” of each SCO must be identified  Many different people participate in content creation  Determine who “owns” the WebApp  Establish who can make changes and who approves them  Manage scale  As a small WebApp grows, the impact of a seemingly insignificant change can be magnified

