13 The Process Model: Adaptability
The framework activities will always be applied on every project ... BUT the tasks (and degree of rigor) for each activity will vary based on:
- the type of project (an “entry point” to the model)
- characteristics of the project
- common sense judgment; concurrence of the project team
14 The Primary Goal: High Quality
Remember: high quality = project timeliness. Why? Less rework!
19 Still Other Process Models
- Component assembly model—the process to apply when reuse is a development objective
- Concurrent process model—recognizes that different parts of the project will be at different places in the process
- Formal methods—the process to apply when a mathematical specification is to be developed
- Cleanroom software engineering—emphasizes error detection before testing
23 Design Principles
- The design process should not suffer from ‘tunnel vision.’
- The design should be traceable to the analysis model.
- The design should not reinvent the wheel.
- The design should “minimize the intellectual distance” [DAV95] between the software and the problem as it exists in the real world.
- The design should exhibit uniformity and integration.
- The design should be structured to accommodate change.
- The design should be structured to degrade gently, even when aberrant data, events, or operating conditions are encountered.
- Design is not coding; coding is not design.
- The design should be assessed for quality as it is being created, not after the fact.
- The design should be reviewed to minimize conceptual (semantic) errors.
From Davis [DAV95]
24 Fundamental Concepts
- abstraction—data, procedure, control
- refinement—elaboration of detail for all abstractions
- modularity—compartmentalization of data and function
- architecture—overall structure of the software (structural properties, extra-structural properties, styles and patterns)
- procedure—the algorithms that achieve function
- hiding—controlled interfaces
25 Data Abstraction
A door, implemented as a data structure with attributes:
- manufacturer
- model number
- type
- swing direction
- inserts
- lights (type, number)
- weight
- opening mechanism
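The door abstraction above can be sketched as a data structure. This is a minimal illustration; the field types and sample values are assumptions, not part of the original slide.

```python
# Sketch of the "door" data abstraction: attribute names mirror the slide;
# the types and the sample values are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class Door:
    manufacturer: str
    model_number: str
    door_type: str          # e.g., "panel", "sliding"
    swing_direction: str    # e.g., "inward", "outward"
    inserts: int
    lights_type: str
    lights_number: int
    weight: float           # in kilograms (assumed unit)
    opening_mechanism: str  # e.g., "knob", "lever"

# A specific instance of the abstraction:
d = Door("Acme", "D-100", "panel", "inward", 2, "frosted", 1, 18.5, "knob")
print(d.opening_mechanism)
```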
26 Procedural Abstraction
open — the details of the algorithm are hidden behind the name; the abstraction is implemented with a “knowledge” of the object that is associated with it.
27 Stepwise Refinement
open
  walk to door;
  reach for knob;
  open door;
  walk through;
  close door.

open door, refined:
  repeat until door opens
    turn knob clockwise;
    if knob doesn't turn, then
      take key out;
      find correct key;
      insert in lock;
    endif
    pull/push door
    move out of way;
  end repeat
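The refinement above can be sketched as executable pseudocode. The door representation (a dict with `open` and `locked` flags) and the recorded step strings are illustrative assumptions, not part of the original slide.

```python
# Hedged sketch of the refined "open" procedure; the door stand-in object
# and the step strings are assumptions for illustration.
def open_door(door):
    steps = []
    while not door["open"]:                       # repeat until door opens
        steps.append("turn knob clockwise")
        if door["locked"]:                        # knob doesn't turn
            steps.append("take key out; find correct key; insert in lock")
            door["locked"] = False                # endif
        else:
            steps.append("pull/push door; move out of way")
            door["open"] = True
    steps.append("walk through")                  # back at the top level
    steps.append("close door")
    return steps

steps = open_door({"open": False, "locked": True})
print(len(steps))  # two loop passes, then walk through and close door
```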
32 Architecture
“The overall structure of the software and the ways in which that structure provides conceptual integrity for a system.” [SHA95a]
- Structural properties. This aspect of the architectural design representation defines the components of a system (e.g., modules, objects, filters) and the manner in which those components are packaged and interact with one another. For example, objects are packaged to encapsulate both data and the processing that manipulates the data, and interact via the invocation of methods.
- Extra-functional properties. The architectural design description should address how the design architecture achieves requirements for performance, capacity, reliability, security, adaptability, and other system characteristics.
- Families of related systems. The architectural design should draw upon repeatable patterns that are commonly encountered in the design of families of similar systems. In essence, the design should have the ability to reuse architectural building blocks.
33 Information Hiding
A module hides a “secret” (a specific design decision) behind a controlled interface; clients see only the interface, never the secret:
- algorithm
- data structure
- details of external interface
- resource allocation policy
34 Why Information Hiding?
- reduces the likelihood of “side effects”
- limits the global impact of local design decisions
- emphasizes communication through controlled interfaces
- discourages the use of global data
- leads to encapsulation—an attribute of high quality design
- results in higher quality software
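Information hiding can be sketched in a few lines: the concrete data structure is the module's "secret," and clients communicate only through a controlled interface. The `Stack` example is hypothetical, not from the slides.

```python
# Sketch of information hiding: the "secret" (the concrete data structure)
# is hidden behind a controlled interface. The Stack class is illustrative.
class Stack:
    def __init__(self):
        self.__items = []      # "secret": clients never touch this directly

    def push(self, x):         # controlled interface
        self.__items.append(x)

    def pop(self):             # controlled interface
        return self.__items.pop()

s = Stack()
s.push(1)
s.push(2)
print(s.pop())
# Python's name mangling (s.__items -> s._Stack__items) discourages clients
# from depending on the hidden data structure.
```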
36 Interface Design
Easy to learn? Easy to use? Easy to understand?
37 Interface Design: Typical Design Errors
- lack of consistency
- too much memorization
- no guidance / help
- no context sensitivity
- poor response
- arcane/unfriendly
38 Golden Rules Place the user in control Reduce the user’s memory load Make the interface consistent
39 Place the User in Control
- Define interaction modes in a way that does not force a user into unnecessary or undesired actions.
- Provide for flexible interaction.
- Allow user interaction to be interruptible and undoable.
- Streamline interaction as skill levels advance and allow the interaction to be customized.
- Hide technical internals from the casual user.
- Design for direct interaction with objects that appear on the screen.
40 Reduce the User’s Memory Load
- Reduce demand on short-term memory.
- Establish meaningful defaults.
- Define shortcuts that are intuitive.
- The visual layout of the interface should be based on a real-world metaphor.
- Disclose information in a progressive fashion.
41 Make the Interface Consistent
- Allow the user to put the current task into a meaningful context.
- Maintain consistency across a family of applications.
- If past interactive models have created user expectations, do not make changes unless there is a compelling reason to do so.
42 User Interface Design Models
- System perception—the user’s mental image of what the interface is
- User model—a profile of all end users of the system
- System image—the “presentation” of the system projected by the complete interface
- Design model—data, architectural, interface and procedural representations of the software
44 Task Analysis and Modeling
- All human tasks required to do the job (of the interface) are defined and classified.
- Objects (to be manipulated) and actions (functions applied to objects) are identified for each task.
- Tasks are refined iteratively until the job is completely defined.
45 Interface Design Activities
1. Establish the goals and intentions for each task.
2. Map each goal/intention to a sequence of specific actions.
3. Specify the action sequence of tasks and subtasks, also called a user scenario, as it will be executed at the interface level.
4. Indicate the state of the system, i.e., what does the interface look like at the time that a user scenario is performed?
5. Define control mechanisms, i.e., the objects and actions available to the user to alter the system state.
6. Show how control mechanisms affect the state of the system.
7. Indicate how the user interprets the state of the system from information provided through the interface.
48 Software Testing
Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user.
49 Testability
- Operability—it operates cleanly
- Observability—the results of each test case are readily observed
- Controllability—the degree to which testing can be automated and optimized
- Decomposability—testing can be targeted
- Simplicity—reduce complex architecture and logic to simplify tests
- Stability—few changes are requested during testing
- Understandability—of the design
50 What Testing Shows
Testing reveals errors, requirements conformance, and performance, and provides an indication of quality.
51 Who Tests the Software?
Developer: understands the system, but will test "gently" and is driven by "delivery".
Independent tester: must learn about the system, but will attempt to break it and is driven by quality.
52 Exhaustive Testing
A small program containing a loop that can execute up to 20 times can have on the order of 10^14 possible paths! If we execute one test per millisecond, it would take 3,170 years to test this program!!
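The arithmetic behind the slide's "3,170 years" claim can be checked directly, assuming the intended path count is 10^14 (the exponent is lost in the garbled slide text):

```python
# Back-of-envelope check of the exhaustive-testing claim:
# 10**14 paths, one test executed per millisecond.
paths = 10 ** 14
seconds = paths / 1000                  # 1 ms per test -> total seconds
years = seconds / (60 * 60 * 24 * 365)  # non-leap years
print(round(years))  # roughly 3,170 years, matching the slide
```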
55 Test Case Design
"Bugs lurk in corners and congregate at boundaries ..." — Boris Beizer
OBJECTIVE: to uncover errors
CRITERIA: in a complete manner
CONSTRAINT: with a minimum of effort and time
56 White-Box Testing
... our goal is to ensure that all statements and conditions have been executed at least once ...
57 Why Cover?
- logic errors and incorrect assumptions are inversely proportional to a path's execution probability
- we often believe that a path is not likely to be executed; in fact, reality is often counterintuitive
- typographical errors are random; it's likely that untested paths will contain some
58 Basis Path Testing
First, we compute the cyclomatic complexity:
  number of simple decisions + 1, or
  number of enclosed areas + 1
In this case, V(G) = 4.
59 Cyclomatic Complexity
A number of industry studies have indicated that the higher V(G), the higher the probability of errors; modules whose V(G) falls in the high range are more error prone.
60 Basis Path Testing
Next, we derive the independent paths. Since V(G) = 4, there are four paths:
Path 1: 1,2,3,6,7,8
Path 2: 1,2,3,5,7,8
Path 3: 1,2,4,7,8
Path 4: 1,2,4,7,2,4,...,7,8
Finally, we derive test cases to exercise these paths.
61 Basis Path Testing Notes
- you don't need a flow chart, but the picture will help when you trace program paths
- count each simple logical test; compound tests count as 2 or more
- basis path testing should be applied to critical modules
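As a small sketch of the technique, a function with three simple decisions has V(G) = 3 + 1 = 4, so four test cases suffice to exercise a basis set of independent paths. The function itself is hypothetical, not the flow graph from the slides.

```python
# Hypothetical function with three simple decisions: V(G) = 3 + 1 = 4,
# so a basis set needs four independent paths.
def classify(value, limit):
    if value < 0:            # decision 1
        return "negative"
    if value > limit:        # decision 2
        return "too big"
    if value == limit:       # decision 3
        return "at limit"
    return "in range"

# One test case per independent path:
assert classify(-1, 10) == "negative"
assert classify(11, 10) == "too big"
assert classify(10, 10) == "at limit"
assert classify(5, 10) == "in range"
print("4 basis paths exercised")
```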
63 Loop Testing: Simple Loops
Minimum conditions—simple loops:
1. skip the loop entirely
2. only one pass through the loop
3. two passes through the loop
4. m passes through the loop, m < n
5. (n-1), n, and (n+1) passes through the loop
where n is the maximum number of allowable passes
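The simple-loop conditions above can be sketched as a test loop over pass counts. The summing function and the choice of n = 5 are assumptions for illustration.

```python
# Sketch of the simple-loop test conditions applied to a summing loop
# with at most n passes; the function and n = 5 are assumed for illustration.
def sum_first(values, n):
    total = 0
    for i, v in enumerate(values):
        if i >= n:            # the loop allows at most n passes
            break
        total += v
    return total

n = 5
data = list(range(1, 10))
# skip, one pass, two passes, m < n, then (n-1), n, and (n+1) passes:
for passes in (0, 1, 2, 3, n - 1, n, n + 1):
    result = sum_first(data[:passes], n)
    assert result == sum(data[:min(passes, n)])
print("simple-loop conditions pass")
```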
64 Loop Testing: Nested Loops
Nested loops:
1. Start at the innermost loop. Set all outer loops to their minimum iteration parameter values.
2. Test the min+1, typical, max-1 and max values for the innermost loop, while holding the outer loops at their minimum values.
3. Move out one loop and set it up as in step 2, holding all other loops at typical values. Continue this step until the outermost loop has been tested.
Concatenated loops:
If the loops are independent of one another, treat each as a simple loop; else treat them as nested loops (for example, when the final loop counter value of loop 1 is used to initialize loop 2).
67 Sample Equivalence Classes
Valid data:
- user supplied commands
- responses to system prompts
- file names
- computational data (physical parameters, bounding values, initiation values)
- output data formatting
- responses to error messages
- graphical data (e.g., mouse picks)
Invalid data:
- data outside bounds of the program
- physically impossible data
- proper value supplied in wrong place
68 Boundary Value Analysis
Test cases are selected at the edges of both the input domain (data, prompts, user queries, mouse picks, function keys) and the output domain (output formats).
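Boundary value analysis can be sketched for a validator with an assumed valid range of 1..100: test exactly at, just inside, and just outside each boundary.

```python
# Sketch of boundary value analysis; the valid range 1..100 is an
# assumption chosen for illustration.
def accept(x):
    return 1 <= x <= 100

# Cases at, just inside, and just outside each boundary of the input domain:
boundary_cases = {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
for value, expected in boundary_cases.items():
    assert accept(value) == expected
print("boundary cases pass")
```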
69 Other Black Box Techniques
- error guessing methods
- decision table techniques
- cause-effect graphing
81 The Debugging Process
Execution of test cases yields results; debugging of those results yields suspected causes; suspected causes are narrowed to identified causes, which lead to corrections; corrections are verified with regression tests and new test cases.
82 Debugging Effort
Debugging effort splits into the time required to diagnose the symptom and determine the cause, and the time required to correct the error and conduct regression tests.
83 Symptoms & Causes
- symptom and cause may be geographically separated
- symptom may disappear when another problem is fixed
- cause may be due to a combination of non-errors
- cause may be due to a system or compiler error
- cause may be due to assumptions that everyone believes
- symptom may be intermittent
84 Consequences of Bugs
Damage ranges from mild, annoying, disturbing, and serious to extreme, catastrophic, and infectious.
Bug categories: function-related bugs, system-related bugs, data bugs, coding bugs, design bugs, documentation bugs, standards violations, etc.
85 Debugging Techniques brute force / testing backtracking induction deduction
86 Debugging: Final Thoughts
1. Don't run off half-cocked; think about the symptom you're seeing.
2. Use tools (e.g., a dynamic debugger) to gain more insight.
3. If at an impasse, get help from someone else.
4. Be absolutely sure to conduct regression tests when you do "fix" the bug.
87 Chapter 20 Object-Oriented Concepts and Principles
90 Key Concepts
classes and class hierarchies, instances, inheritance, abstraction and hiding, objects, attributes, methods, encapsulation, polymorphism, messages
91 Classes
Object-oriented thinking begins with the definition of a class, often defined as a template, a generalized description, a pattern, or a “blueprint” ... describing a collection of similar items.
A metaclass (also called a superclass) is a collection of classes.
Once a class of items is defined, a specific instance of the class can be defined.
93 What is a Class?
Classes are drawn from external entities, things, occurrences, roles, organizational units, places, and structures. A class is described by its class name, attributes, and operations.
94 Encapsulation/Hiding
The object encapsulates both data and the logical procedures (methods) required to manipulate the data; this achieves “information hiding.”
95 Class Hierarchy
The furniture superclass has subclasses table, chair, desk, and "chable"; individual chairs are instances of the chair class.
96 Methods (a.k.a. Operations, Services)
An executable procedure that is encapsulated in a class and is designed to operate on one or more data attributes that are defined as part of the class. A method is invoked via message passing.
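Message passing can be sketched in Python, where sending a message to an object dispatches to the corresponding method of its class. The `Circle` class is hypothetical, chosen only to illustrate the mechanism.

```python
# Sketch of method invocation via message passing; the Circle class is
# an illustrative assumption, not from the slides.
class Circle:
    def __init__(self, radius):
        self.radius = radius     # data attribute the method operates on

    def scaled_area(self, factor):
        # operates on the encapsulated attribute self.radius
        return 3.14159 * (self.radius * factor) ** 2

c = Circle(1.0)
# Sending the message scaled_area(2) to c invokes the class's method:
print(round(c.scaled_area(2), 2))  # -> 12.57
```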
99 Domain Analysis
Sources of domain knowledge (technical literature, existing applications, customer surveys, expert advice, current/future requirements) feed domain analysis, which produces the domain analysis model: class taxonomies, reuse standards, functional models, and domain languages.
100 OOA: A Generic View
- define use cases
- extract candidate classes
- establish basic class relationships
- define a class hierarchy
- identify attributes for each class
- specify methods that service the attributes
- indicate how classes/objects are related
- build a behavioral model
- iterate on the first five steps
101 Use Cases
- a scenario that describes a “thread of usage” for a system
- actors represent roles people or devices play as the system functions
- users can play a number of different roles for a given scenario
102 Developing a Use Case
- What are the main tasks or functions that are performed by the actor?
- What system information will the actor acquire, produce or change?
- Will the actor have to inform the system about changes in the external environment?
- What information does the actor desire from the system?
- Does the actor wish to be informed about unexpected changes?
104 Unified Modeling Language (UML)
- User model view. This view represents the system (product) from the user’s (called “actors” in UML) perspective.
- Structural model view. Data and functionality are viewed from inside the system. That is, static structure (classes, objects, and relationships) is modeled.
- Behavioral model view. This part of the analysis model represents the dynamic or behavioral aspects of the system.
- Implementation model view. The structural and behavioral aspects of the system are represented as they are to be built.
- Environment model view. The structural and behavioral aspects of the environment in which the system is to be implemented are represented.
107 Guidelines for Allocating Responsibilities to Classes 1. System intelligence should be evenly distributed.2. Each responsibility should be stated as generally as possible.3. Information and the behavior that is related to it should reside within the same class.4. Information about one thing should be localized with a single class, not distributed across multiple classes.5. Responsibilities should be shared among related classes, when appropriate.
108 Reviewing the CRC Model 1. All participants in the review (of the CRC model) are given a subset of the CRC model index cards.2. All use-case scenarios (and corresponding use-case diagrams) should be organized into categories.3. The review leader reads the use-case deliberately. As the review leader comes to a named object, she passes the token to the person holding the corresponding class index card.4. When the token is passed, the holder of the class card is asked to describe the responsibilities noted on the card. The group determines whether one (or more) of the responsibilities satisfies the use-case requirement.5. If the responsibilities and collaborations noted on the index cards cannot accommodate the use-case, modifications are made to the cards.
109 UML: Class Diagrams
- generalization-specialization
- composite aggregates
112 Object-Behavior Model
1. Evaluate all use-cases to fully understand the sequence of interaction within the system.
2. Identify events that drive the interaction sequence and understand how these events relate to specific objects.
3. Create an event trace [RUM91] for each use-case.
4. Build a state transition diagram for the system.
5. Review the object-behavior model to verify accuracy and consistency.
119 Design Issues
- decomposability—the facility with which a design method helps the designer to decompose a large problem into subproblems that are easier to solve
- composability—the degree to which a design method ensures that program components (modules), once designed and built, can be reused to create other systems
- understandability—the ease with which a program component can be understood without reference to other information or other modules
- continuity—the ability to make small changes in a program and have these changes manifest themselves with corresponding changes in just one or a very few modules
- protection—an architectural characteristic that will reduce the propagation of side effects if an error does occur in a given module
120 Generic Components for OOD
- Problem domain component—the subsystems that are responsible for implementing customer requirements directly
- Human interaction component—the subsystems that implement the user interface (this includes reusable GUI subsystems)
- Task management component—the subsystems that are responsible for controlling and coordinating concurrent tasks that may be packaged within a subsystem or among different subsystems
- Data management component—the subsystem that is responsible for the storage and retrieval of objects
122 System Design Process • Partition the analysis model into subsystems. • Identify concurrency that is dictated by the problem.• Allocate subsystems to processors and tasks.• Develop a design for the user interface.• Choose a basic strategy for implementing data management.• Identify global resources and the control mechanisms required to access them.• Design an appropriate control mechanism for the system, including task management.• Consider how boundary conditions should be handled.• Review and consider trade-offs.
125 Subsystem Design Criteria • The subsystem should have a well-defined interface through which all communication with the rest of the system occurs.• With the exception of a small number of “communication classes,” the classes within a subsystem should collaborate only with other classes within the subsystem.• The number of subsystems should be kept small.• A subsystem can be partitioned internally to help reduce complexity.
127 Object Design
A protocol description establishes the interface of an object by defining each message that the object can receive and the related operation that the object performs.
An implementation description shows implementation details for each operation implied by a message that is passed to an object:
- information about the object's private part
- internal details about the data structures that describe the object’s attributes
- procedural details that describe operations
128 Design Patterns... you’ll find recurring patterns of classes and communicating objects in many object-oriented systems. These patterns solve specific design problems and make object-oriented design more flexible, elegant, and ultimately reusable. They help designers reuse successful designs by basing new designs on prior experience. A designer who is familiar with such patterns can apply them immediately to design problems without having to rediscover them.Gamma and his colleagues [GAM95]
129 Design Pattern Attributes
- The design pattern name is an abstraction that conveys significant meaning about its applicability and intent.
- The problem description indicates the environment and conditions that must exist to make the design pattern applicable.
- The pattern characteristics indicate the attributes of the design that may be adjusted to enable the pattern to accommodate a variety of problems.
- The consequences associated with the use of a design pattern provide an indication of the ramifications of design decisions.
131 Object-Oriented Testing
- begins by evaluating the correctness and consistency of the OOA and OOD models
- testing strategy changes:
  - the concept of the ‘unit’ broadens due to encapsulation
  - integration focuses on classes and their execution across a ‘thread’ or in the context of a usage scenario
  - validation uses conventional black box methods
- test case design draws on conventional methods, but also encompasses special features
132 Broadening the View of “Testing” It can be argued that the review of OO analysis and design models is especially useful because the same semantic constructs (e.g., classes, attributes, operations, messages) appear at the analysis, design, and code level. Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).
133 Testing the CRC Model1. Revisit the CRC model and the object-relationship model.2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator’s definition.3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.5. Determine whether widely requested responsibilities might be combined into a single responsibility.6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.
134 OOT Strategy
- class testing is the equivalent of unit testing: operations within the class are tested, and the state behavior of the class is examined
- integration applies three different strategies:
  - thread-based testing—integrates the set of classes required to respond to one input or event
  - use-based testing—integrates the set of classes required to respond to one use case
  - cluster testing—integrates the set of classes required to demonstrate one collaboration
135 OOT—Test Case Design Berard [BER93] proposes the following approach: 1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested,2. The purpose of the test should be stated,3. A list of testing steps should be developed for each test and should contain [BER94]:a. a list of specified states for the object that is to be testedb. a list of messages and operations that will be exercised as a consequence of the testc. a list of exceptions that may occur as the object is testedd. a list of external conditions (i.e., changes in the environment external to the software that must exist in order to properly conduct the test)e. supplementary information that will aid in understanding or implementing the test.
136 OOT Methods: Random Testing
- identify operations applicable to a class
- define constraints on their use
- identify a minimum test sequence—an operation sequence that defines the minimum life history of the class (object)
- generate a variety of random (but valid) test sequences to exercise other (more complex) class instance life histories
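The steps above can be sketched for a hypothetical `Account` class: its minimum life history is open ... close, and random (but valid) operation sequences exercise more complex life histories while checking a class invariant. The class, the invariant, and the seed are assumptions for illustration.

```python
# Hedged sketch of random OO testing; the Account class and its
# non-negative-balance invariant are illustrative assumptions.
import random

class Account:
    def __init__(self):
        self.balance = 0
        self.open = True

    def deposit(self, n):
        self.balance += n

    def withdraw(self, n):
        if n <= self.balance:      # constraint on use: no overdraft
            self.balance -= n

    def close(self):
        self.open = False

random.seed(42)                    # reproducible random sequences
for _ in range(100):               # many random life histories
    a = Account()                  # minimum life history begins: open ...
    for _ in range(random.randint(0, 10)):
        op = random.choice([a.deposit, a.withdraw])
        op(random.randint(1, 50))
        assert a.balance >= 0      # class invariant holds throughout
    a.close()                      # ... and ends: close
print("random sequences preserved the invariant")
```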
137 OOT Methods: Partition Testing
Reduces the number of test cases required to test a class, in much the same way as equivalence partitioning for conventional software:
- state-based partitioning—categorize and test operations based on their ability to change the state of a class
- attribute-based partitioning—categorize and test operations based on the attributes that they use
- category-based partitioning—categorize and test operations based on the generic function each performs
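State-based partitioning can be sketched with a hypothetical `Counter` class: operations are partitioned into those that change the class's state and those that only read it, and each partition is tested for its expected effect.

```python
# Sketch of state-based partition testing; the Counter class is an
# illustrative assumption, not from the slides.
class Counter:
    def __init__(self):
        self.n = 0

    def increment(self):   # state-changing partition
        self.n += 1

    def reset(self):       # state-changing partition
        self.n = 0

    def value(self):       # read-only partition
        return self.n

c = Counter()
before = c.value()
c.value()                          # read-only operations leave state unchanged
assert c.value() == before
c.increment()                      # state-changing operations alter state
assert c.value() == before + 1
print("partitions behave as categorized")
```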
138 OOT Methods: Inter-Class Testing For each client class, use the list of class operators to generate a series of random test sequences. The operators will send messages to other server classes.For each message that is generated, determine the collaborator class and the corresponding operator in the server object.For each operator in the server object (that has been invoked by messages sent from the client object), determine the messages that it transmits.For each of the messages, determine the next level of operators that are invoked and incorporate these into the test sequence
150 Attributes of Web-Based Applications Network intensive. By its nature, a WebApp is network intensive. It resides on a network and must serve the needs of a diverse community of clients.Content-Driven. In many cases, the primary function of a WebApp is to use hypermedia to present text, graphics, audio, and video content to the end-user.Continuous evolution. Unlike conventional application software that evolves over a series of planned, chronologically-spaced releases, Web applications evolve continuously.
151 WebApp Characteristics Immediacy. Web-based applications have an immediacy [NOR99] that is not found in any other type of software. That is, the time to market for a complete Web-site can be a matter of a few days or weeks.Security. In order to protect sensitive content and provide secure modes of data transmission, strong security measures must be implemented throughout the infrastructure that supports a WebApp and within the application itself.Aesthetics. An undeniable part of the appeal of a WebApp is its look and feel. When an application has been designed to market or sell products or ideas, aesthetics may have as much to do with success as technical design.
154 Formulation
Allows the customer and developer to establish a common set of goals.
Addresses three questions:
- What is the main motivation for the WebApp?
- Why is the WebApp needed?
- Who will use the WebApp?
Defines two categories of goals:
- Informational goals—indicate an intention to provide specific content and/or information to the end user
- Applicative goals—indicate the ability to perform some task within the WebApp
155 Analysis for WebEContent Analysis. The full spectrum of content to be provided by the WebApp is identified, including text, graphics and images, video, and audio data. Data modeling can be used to identify and describe each of the data objects.Interaction Analysis. The manner in which the user interacts with the WebApp is described in detail. Use-cases can be developed to provide detailed descriptions of this interaction.Functional Analysis. The usage scenarios (use-cases) created as part of interaction analysis define the operations that will be applied to WebApp content and imply other processing functions. All operations and functions are described in detail.Configuration Analysis. The environment and infrastructure in which the WebApp resides are described in detail.
156 Design for WebEArchitectural design — laying out the page structure of the WebAppNavigation design — defining the manner in which pages will be navigatedInterface design — establishing consistent and effective user interaction mechanisms
157 Architectural Styles
Linear structure, grid structure, network structure, hierarchical structure.
158 Navigation Design
Identify the semantics of navigation for different users of the site:
- user roles must be defined
- semantics of navigation for each role must be identified
- a semantic navigation unit (SNU) should be defined for each goal associated with each user
- ways of navigating (WoN) are defined
Define the mechanics (syntax) of achieving the navigation: options are text-based links, icons, buttons and switches, and graphical metaphors.
159 Interface Design Guidelines
• Server errors, even minor ones, are likely to cause a user to leave the Web site and look elsewhere for information or services.
• Reading speed on a computer monitor is approximately 25 percent slower than reading speed for hardcopy. Therefore, do not force the user to read voluminous amounts of text.
• Avoid “under construction” signs—they raise expectations and cause an unnecessary link that is sure to disappoint.
• Users prefer not to scroll. Important information should be placed within the dimensions of a typical browser window.
• Navigation menus and headbars should be designed consistently and should be available on all pages that are available to the user. The design should not rely on browser functions to assist in navigation.
• Aesthetics should never supersede functionality.
• Navigation options should be obvious, even to the casual user. The user should not have to search the screen to determine how to link to other content or services.
160 Testing for WebE – I
1. The content model for the WebApp is reviewed to uncover errors. This “testing” activity is similar in many respects to copy-editing a written document.
2. The design model for the WebApp is reviewed to uncover navigation errors. Use-cases, derived as part of the analysis activity, allow a Web engineer to exercise each usage scenario against the architectural and navigation design.
3. Selected processing components and Web pages are unit tested. When WebApps are considered, the concept of the unit changes. Each Web page encapsulates content, navigation links, and processing elements (forms, scripts, applets).
4. The architecture is constructed and integration tests are conducted. The strategy for integration testing depends on the architecture that has been chosen:
• a linear, grid, or simple hierarchical structure — integration is similar to conventional software
• a mixed hierarchy or network (Web) architecture — integration testing is similar to the approach used for OO systems
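Step 2 above, exercising a usage scenario against the navigation design, can be automated in miniature. The site map and the purchase scenario below are hypothetical; the check confirms that every step of the use-case path follows a link that actually exists in the design.

```python
# Hypothetical navigation design: each page maps to the pages it links to.
site_map = {
    "home": {"catalog", "contact"},
    "catalog": {"item", "home"},
    "item": {"cart", "catalog"},
    "cart": {"checkout", "catalog"},
    "checkout": set(),
    "contact": {"home"},
}

def path_is_navigable(site, path):
    """True if every consecutive step in the use-case path is a real link."""
    return all(dst in site[src] for src, dst in zip(path, path[1:]))

# Use-case path derived from a (hypothetical) purchase scenario.
purchase_scenario = ["home", "catalog", "item", "cart", "checkout"]
```

A scenario that assumes a shortcut the design does not provide, say jumping from `home` straight to `cart`, fails this check, which is exactly the class of navigation error the review is meant to uncover.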
161 Testing for WebE – II
5. The assembled WebApp is tested for overall functionality and content delivery. Like conventional validation, the validation of Web-based systems and applications focuses on user-visible actions and user-recognizable outputs from the system.
6. The WebApp is implemented in a variety of different environmental configurations and is tested for compatibility with each configuration. A cross-reference matrix that defines all probable operating systems, browsers, hardware platforms, and communications protocols is created. Tests are then conducted to uncover errors associated with each possible configuration.
7. The WebApp is tested by a controlled and monitored population of end-users. A population of users that encompasses every possible user role is chosen. The WebApp is exercised by these users, and the results of their interaction with the system are evaluated for content and navigation errors, usability concerns, compatibility concerns, and WebApp reliability and performance.
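The cross-reference matrix in step 6 is just the Cartesian product of the environmental dimensions. A minimal sketch, with illustrative operating systems, browsers, and platforms standing in for a real project's list:

```python
from itertools import product

# Hypothetical dimensions of the configuration matrix.
operating_systems = ["Windows", "Linux", "macOS"]
browsers = ["Firefox", "Chrome"]
platforms = ["desktop", "mobile"]

# Every combination is one configuration to test for compatibility.
configurations = list(product(operating_systems, browsers, platforms))
```

Even this tiny example yields 3 × 2 × 2 = 12 configurations, which is why real projects usually prune the matrix to the "probable" combinations the slide mentions rather than testing every cell.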
162 Project Management for WebE
Initiate the project
Many of the analysis activities should be performed internally, even if the project is outsourced.
A rough design for the WebApp should be developed internally.
A rough project schedule, including not only final delivery dates but also milestone dates, should be developed.
The degree of oversight and interaction by the contracting organization with the vendor should be identified.
163 Project Management for WebE
Select candidate outsourcing vendors
Interview past clients to determine the Web vendor’s professionalism, ability to meet schedule and cost commitments, and ability to communicate effectively.
Determine the name of the vendor’s chief Web engineer(s) for successful past projects (and later, be certain that this person is contractually obligated to be involved in your project).
Carefully examine samples of the vendor’s work that are similar in look and feel (and business area) to the WebApp that is to be contracted.
164 Project Management for WebE
Assess the validity of price quotes and the reliability of estimates
Does the quoted cost of the WebApp provide a direct or indirect return on investment that justifies the project?
Does the vendor that has provided the quote exhibit the professionalism and experience we require?
Establish the degree of project management expected from both parties
Assess the development schedule
The WBS should have high granularity.
Milestones should be defined at tight intervals.
165 SCM for WebE
WebApp content is extremely varied
SCOs must be defined
The “longevity” of each SCO must be identified
Many different people participate in content creation
Determine who “owns” the WebApp
Establish who can make changes and who approves them
Manage scale
As a small WebApp grows, the impact of a seemingly insignificant change can be magnified
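The ownership and approval points above can be sketched as a tiny configuration record. The class name, field names, and the example page are assumptions for illustration; the idea is that each software configuration object (SCO) carries its owner, its longevity, and the people allowed to approve changes to it.

```python
from dataclasses import dataclass

# Hypothetical SCO record for a piece of WebApp content.
@dataclass
class SCO:
    name: str
    owner: str
    longevity: str          # e.g. "stable" vs "volatile" content
    approvers: tuple = ()   # roles allowed to approve changes

def can_approve_change(sco, person):
    """Only the owner or a designated approver may approve a change."""
    return person == sco.owner or person in sco.approvers

# Example: a frequently changing page owned by marketing,
# with the webmaster as an additional approver.
pricing_page = SCO("pricing.html", owner="marketing",
                   longevity="volatile", approvers=("webmaster",))
```

Recording longevity per SCO matters because volatile content (prices, news items) needs a lighter-weight change process than stable content, while still keeping the ownership and approval trail the slide calls for.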