
1 Predictive Metrics To Guide SOA-Based System Development
John Salasin
Visiting Researcher, National Institute of Standards and Technology
jsalasin@nist.gov
301 975-2923

2 Contents
Background
 – SOA now
 – Predictive metrics
 – Status
Assumptions
 – “Unique” aspects of SOA
 – Foci of concern
Illustrative metrics / indicators
 – Early stage
 – Architecture / Construction
 – Operations
 – Evolution

3 Background – SOA Now
Need – business rate of change

4 Background – SOA Now
Opportunity – enabling technology

5 Background – SOA Now
Challenge – emerging standards

6 Background – Predictive Metrics
Definition – early life cycle measures of the application and technology that can:
 – Suggest likely problems / opportunities at future stages of development / deployment
 – Suggest corrective action
Example – the design concept plans to use ESB run-time transformation to reconcile data held in multiple “stovepipes”. Useful metrics (computed in the sketch below):
 – Overhead as a function of the number of separate ontologies
 – Percent of transformations that can be expressed in a simple scripting language
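A minimal sketch of how these two example metrics might be computed, assuming hypothetical ESB timing data and a hand-labeled transformation catalog; the data, names, and structure are illustrative assumptions, not part of the draft report.

```python
"""Sketch: the two illustrative predictive metrics from slide 6,
computed from hypothetical measurement data."""
from statistics import mean

# Hypothetical timings (ms) of ESB run-time transformations, grouped by
# how many separate ontologies ("stovepipes") each request had to reconcile.
timings_by_ontology_count = {
    1: [2.1, 2.3, 2.0],
    2: [4.8, 5.1, 4.9],
    3: [9.7, 10.2, 9.9],
}

# Hypothetical catalog of transformations, flagged by whether each can be
# expressed in a simple scripting language (vs. requiring custom code).
transformations = [
    {"name": "normalize_date", "simple_script": True},
    {"name": "merge_customer_records", "simple_script": False},
    {"name": "map_product_codes", "simple_script": True},
]

def overhead_by_ontology_count(timings):
    """Mean transformation overhead as a function of ontology count."""
    return {n: mean(ts) for n, ts in sorted(timings.items())}

def percent_simple_transformations(catalog):
    """Percent of transformations expressible in a simple scripting language."""
    simple = sum(1 for t in catalog if t["simple_script"])
    return 100.0 * simple / len(catalog)

if __name__ == "__main__":
    print("Overhead (ms) by ontology count:",
          overhead_by_ontology_count(timings_by_ontology_count))
    print(f"Simple-script transformations: "
          f"{percent_simple_transformations(transformations):.0f}%")
```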

7 Background – Status
Initial set of indicators / metrics identified
 – Being reviewed by Government / Industry
 – Still working on improving quantification and refining the set of measures (adding or deleting)
Initial (partial) specification of applicability to V&V documented
Looking for interested people to provide comments and suggest uses
Planning to develop guidance / standards for metrics and their use in V&V of SOA-based systems

8 Assumptions – “Unique” Aspects of SOA
Continuous evolution
 – Dynamic orchestration / choreography
 – Development is evolution (adding services, data sources, …)
Synchronization with business processes
 – “Business Process Reengineering” every 12-18 months
 – Expensive without “mappable” representations
Need to “think strategically, implement tactically”
 – “Big Bang” is high risk
 – Need to show:
   You can reuse existing assets
   Others can reuse SOA services / components

9 Assumptions – Foci of Concern / Technical Factors
“Buy-in”
Total Cost of Ownership
“Time to market” for introducing or revising services
“Traditional” concerns, e.g.:
 – Efficiency, e.g., response time, interoperability, user accessibility
   Throughput / response time (measured in the sketch below)
 – Reliability & Availability
 – Comprehensiveness
Notation compatibility
 – Horizontally
 – Vertically
Integration over task
 – Fraction of the targeted task addressed
Effort and skill required to, e.g.:
 – Specify and develop rules
 – Develop protocols to enforce rules
Automatic change detection and analysis
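A minimal sketch of quantifying the throughput / response-time concerns listed above from a hypothetical request log; the log format and percentile choice are assumptions for illustration only.

```python
"""Sketch: throughput and 95th-percentile response time from a
hypothetical request log (timestamp in seconds, response time in ms)."""
from statistics import quantiles

request_log = [
    (0.1, 120), (0.4, 95), (1.2, 210), (1.9, 130),
    (2.5, 88), (3.1, 400), (3.8, 150), (4.6, 105),
]

def throughput_per_second(log):
    """Requests handled per second over the observed window."""
    start = min(t for t, _ in log)
    end = max(t for t, _ in log)
    return len(log) / (end - start)

def p95_response_time(log):
    """95th-percentile response time (ms)."""
    times = sorted(rt for _, rt in log)
    return quantiles(times, n=100)[94]  # index 94 = 95th percentile cut point

if __name__ == "__main__":
    print(f"Throughput: {throughput_per_second(request_log):.2f} req/s")
    print(f"p95 response time: {p95_response_time(request_log):.0f} ms")
```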

10 Organization
“Soft” life cycle stages – no specific methodology assumed
 – Early stage – concept development, obtaining management support
   General specification of functions
   Preliminary estimates of the system’s value by initial implementation sites
 – Architecture / Construction – specification of:
   Services
   Architecture (service orchestration / choreography)
   Infrastructure components, e.g.:
    – Enterprise Service Bus or portal development tools
    – Data / information and process (service) reconciliation and integration / coordination
    – Inter-service contractual agreements (e.g., SLAs regarding Quality of Service (QoS)) – see the sketch below
 – Operations
 – Evolution – includes system expansion
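One way an inter-service contractual agreement could be represented and checked against observed behavior; the field names, QoS targets, and service names are hypothetical, and real SLA languages carry far more detail.

```python
"""Sketch: a simplified inter-service SLA (QoS targets) plus a compliance check."""
from dataclasses import dataclass

@dataclass
class ServiceLevelAgreement:
    provider: str
    consumer: str
    max_p95_latency_ms: float
    min_availability: float  # e.g., 0.999

@dataclass
class ObservedQoS:
    p95_latency_ms: float
    availability: float

def sla_violations(sla: ServiceLevelAgreement, observed: ObservedQoS) -> list[str]:
    """Return a human-readable list of violated SLA clauses, if any."""
    violations = []
    if observed.p95_latency_ms > sla.max_p95_latency_ms:
        violations.append(f"p95 latency {observed.p95_latency_ms} ms exceeds "
                          f"{sla.max_p95_latency_ms} ms")
    if observed.availability < sla.min_availability:
        violations.append(f"availability {observed.availability:.4f} below "
                          f"{sla.min_availability:.4f}")
    return violations

if __name__ == "__main__":
    sla = ServiceLevelAgreement("OrderService", "BillingService",
                                max_p95_latency_ms=250.0, min_availability=0.999)
    print(sla_violations(sla, ObservedQoS(p95_latency_ms=310.0, availability=0.9995)))
```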

11 Organization (cont.)
Within stages, measures are organized by Federal Enterprise Architecture (FEA) Performance Reference Model (PRM) categories (one possible organization is sketched below), e.g.:
 – Mission and Business Results – outcomes that agencies seek to achieve and quantitative measures of these outcomes
 – Customer Results – how well customers are being served and ways of measuring these results, e.g., using measures related to:
   Service Coverage
   Timeliness & Responsiveness
   Service Quality
 – Processes and Activities –
   Specify desired outcomes
   Identify quantitative measures for outcomes
   Ability to capture and analyze data that contribute to Mission and Business Results and Customer Results outcomes
 – Evolution (added to PRM categories) – ease with which a system can be modified; cost / time to expand the system to include additional organizational units, functions, etc.
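A possible way to keep indicators tagged by life cycle stage and PRM category so they can be tracked across the stages on slides 12-15; the data structure and the abbreviated entries are assumptions, not taken from the draft report.

```python
"""Sketch: grouping indicators by (life cycle stage, FEA PRM category)."""

indicators = [
    {"stage": "Early", "prm_category": "Mission and Business Results",
     "indicator": "Extent of explicit mappings to business processes"},
    {"stage": "Early", "prm_category": "Customer Results",
     "indicator": "Planned service coverage"},
    {"stage": "Operations", "prm_category": "Processes and Activities",
     "indicator": "MTTF / MTTR"},
    {"stage": "Evolution", "prm_category": "Evolution",
     "indicator": "Effort to integrate a new service component"},
]

def by_stage_and_category(items):
    """Group indicator names under (stage, PRM category) keys."""
    grouped = {}
    for item in items:
        key = (item["stage"], item["prm_category"])
        grouped.setdefault(key, []).append(item["indicator"])
    return grouped

if __name__ == "__main__":
    for (stage, category), names in by_stage_and_category(indicators).items():
        print(f"{stage} / {category}: {names}")
```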

12 Illustrative Metrics / Indicators – Early Stage
Foci: estimates of the system’s value, obtaining support / coordination, adequacy of predictive models, ease of “metastasis”
Examples:
 – How well does the system concept map to business processes? E.g., extent of:
   Explicit mappings
   Formal documentation of business processes allowing automated analyses
   Ability to effectively capture and react to business events of interest
 – Quality of customer service, e.g.:
   Planned Service Coverage
   Timeliness & Responsiveness
   Usability measures (from prototyping?)
 – Potential for growth, e.g. (reuse fraction sketched below):
   Number of components and services from reuse of existing assets
   Effort required to “wrap” and incorporate legacy systems
   Estimates of reusability of new services / components
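A minimal sketch of one "potential for growth" indicator: the fraction of planned components / services expected to come from reuse of existing assets. The component inventory is hypothetical.

```python
"""Sketch: early-stage reuse-fraction indicator over a hypothetical inventory."""

planned_components = [
    {"name": "customer_lookup", "source": "reused"},   # wrapped legacy system
    {"name": "order_entry", "source": "new"},
    {"name": "billing_adapter", "source": "reused"},
    {"name": "notification", "source": "new"},
]

def reuse_fraction(components):
    """Fraction of planned components sourced from existing assets."""
    reused = sum(1 for c in components if c["source"] == "reused")
    return reused / len(components)

if __name__ == "__main__":
    print(f"Planned reuse: {reuse_fraction(planned_components):.0%}")
```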

13 Illustrative Metrics / Indicators – Architecture / Construction
Foci: revisit early-stage metrics with refined requirements, designs, specifications, and models; analyses of COTS infrastructure(s), user-interface prototyping, and some results from early implementations. Support for co-evolution of business and technical processes, policy / security, robustness, performance, and scalability.
Examples:
 – Change transfer across architectural layers based on linked designs and refinement rules
 – Effort required to identify needed changes and to change processes or process orchestration
 – Tools for monitoring rule conformance and the percent of rules that can be monitored using these tools (sketched below)
 – Recovery time and amount of information that can be recovered under various failure scenarios
 – Impact of various configurations and environments on performance
 – Scalability of mechanisms for data movement and storage (caching), response times, and the variety of systems with which the system can collaborate / interoperate
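A minimal sketch of the "percent of rules that can be monitored using these tools" indicator; the rule catalog and tool capabilities below are hypothetical placeholders.

```python
"""Sketch: percent of rules whose required monitoring the tool set can perform."""

# Each rule declares the kind of monitoring it needs.
rules = [
    {"id": "R1", "needs": "message_schema_check"},
    {"id": "R2", "needs": "latency_threshold"},
    {"id": "R3", "needs": "access_policy_audit"},
    {"id": "R4", "needs": "latency_threshold"},
]

# Monitoring kinds the current tool set can enforce automatically.
tool_capabilities = {"message_schema_check", "latency_threshold"}

def monitorable_percent(rule_catalog, capabilities):
    """Percent of rules covered by at least one available monitoring capability."""
    covered = sum(1 for r in rule_catalog if r["needs"] in capabilities)
    return 100.0 * covered / len(rule_catalog)

if __name__ == "__main__":
    print(f"Rules monitorable by tools: "
          f"{monitorable_percent(rules, tool_capabilities):.0f}%")
```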

14 Illustrative Metrics / Indicators – Operations
Foci: refining estimates and models from previous stages; measuring actual user satisfaction, system performance, and usage.
Examples:
 – Is the “consumption of services” by other organizational elements tracking earlier predictions?
 – Is there a change in the use of separate services?
 – How often are tools used to identify conflicting policies within and across services?
 – How long does it take an employee to become self-sufficient in performing their SOA-related task(s), including writing scripts?
 – What are error rates for such specifications?
 – Effort required to specify and monitor compliance, and to define / invoke corrective action for Level of Service contracts
 – Ability and effort to specify (re)deployment of services to nodes for load balancing
 – Performance, MTTF, MTTR (computed in the sketch below)
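A minimal sketch of deriving MTTF, MTTR, and availability from an outage log, using the standard definitions; the observation window and the outage records are assumed for illustration.

```python
"""Sketch: MTTF, MTTR, and availability from a hypothetical outage log.
Timestamps are hours since the start of observation."""

OBSERVATION_HOURS = 720.0  # 30-day window (assumption)

# (failure_start, restored) pairs, in hours.
outages = [(100.0, 101.5), (350.0, 350.5), (600.0, 604.0)]

def mttr(outage_log):
    """Mean time to repair: average outage duration."""
    return sum(end - start for start, end in outage_log) / len(outage_log)

def mttf(outage_log, observed_hours):
    """Mean time to failure: total uptime divided by number of failures."""
    downtime = sum(end - start for start, end in outage_log)
    return (observed_hours - downtime) / len(outage_log)

def availability(outage_log, observed_hours):
    """Fraction of the observation window during which the service was up."""
    downtime = sum(end - start for start, end in outage_log)
    return 1.0 - downtime / observed_hours

if __name__ == "__main__":
    print(f"MTTR: {mttr(outages):.2f} h, "
          f"MTTF: {mttf(outages, OBSERVATION_HOURS):.1f} h, "
          f"availability: {availability(outages, OBSERVATION_HOURS):.4%}")
```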

15 Illustrative Metrics / Indicators – Evolution
Foci: validate previous estimates / model results to determine whether the system representation(s) used and the tools provided make it easy to modify the system or to expand it to include more functions in the “enterprise”.
Examples:
 – How much effort is required to: specify a new monitoring capability; integrate a new service component; or mirror a change in business processes?
 – Are there checks on policy consistency / applicability when system changes are made? Are these (partially) automated? (a simple check is sketched below)
 – Is there a scripting language? How much effort is required to learn and apply it?
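A minimal sketch of a (partially) automated policy-consistency check; policies are reduced here to (service, resource, effect) triples, which is an assumption for illustration, since real policy languages are far richer.

```python
"""Sketch: detecting allow/deny conflicts in a simplified policy set."""

policies = [
    {"service": "OrderService",   "resource": "customer_record", "effect": "allow"},
    {"service": "OrderService",   "resource": "customer_record", "effect": "deny"},
    {"service": "BillingService", "resource": "invoice",         "effect": "allow"},
]

def conflicting_policies(policy_list):
    """Return (service, resource) pairs that carry both allow and deny policies."""
    effects = {}
    for p in policy_list:
        key = (p["service"], p["resource"])
        effects.setdefault(key, set()).add(p["effect"])
    return [key for key, seen in effects.items() if {"allow", "deny"} <= seen]

if __name__ == "__main__":
    print("Conflicts:", conflicting_policies(policies))
```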

16 To get a copy of the draft report:
Ask for it: jsalasin@nist.gov
AND promise to provide a few technical comments on, e.g.:
 – The validity of the indicators – are they predictive? Are some irrelevant?
 – Better ways of quantifying the indicators
 – Anecdotal examples of situations where the existence of a condition has (or has not) had a favorable (or unfavorable) impact
 – How these indicators / metrics might be most useful in Program / Project management – particularly with respect to ongoing Verification and Validation (V&V)
 – Experiments that might provide benchmark data indicating expected ranges for different values and predicting their impact on system performance, utility, and TCO

