Predictive Metrics To Guide SOA-Based System Development
John Salasin
Visiting Researcher, National Institute of Standards and Technology

Contents
Background
  – SOA now
  – Predictive Metrics
  – Status
Assumptions
  – “Unique” aspects of SOA
  – Foci of concern
Illustrative Metrics / Indicators
  – Early stage
  – Architecture / Construction
  – Operations
  – Evolution

Background – SOA Now
Need – the rate of business change

Background – SOA Now
Opportunity – enabling technology

Background – SOA Now
Challenge – emerging standards

Background – Predictive Metrics
Definition – early life cycle measures of the application and technology that can:
  – Suggest likely problems / opportunities at future stages of development / deployment
  – Suggest corrective action
Example – the design concept plans to use ESB run-time transformation to reconcile data held in multiple “stovepipes”. Useful metrics (sketched below):
  – Overhead as a function of the number of separate ontologies
  – Percent of transformations that can be expressed in a simple scripting language
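As a purely illustrative sketch of how these two metrics might be computed at design time (the transformation inventory, field names, and per-message cost figures below are hypothetical, not part of the NIST material):

```python
# Hypothetical sketch: estimating the two predictive metrics above from a design-stage
# inventory of planned ESB data transformations. All names and numbers are invented.
from dataclasses import dataclass

@dataclass
class PlannedTransformation:
    source_ontology: str
    target_ontology: str
    simple_scriptable: bool        # expressible in a simple scripting language?
    est_millis_per_message: float  # estimated run-time cost per message

def distinct_ontologies(transforms):
    names = {t.source_ontology for t in transforms} | {t.target_ontology for t in transforms}
    return len(names)

def overhead_summary(transforms, msgs_per_hour):
    """Estimated transformation overhead (CPU-hours per hour of traffic) alongside the
    number of separate ontologies being reconciled."""
    total_ms = sum(t.est_millis_per_message for t in transforms) * msgs_per_hour
    return distinct_ontologies(transforms), total_ms / 3_600_000.0

def percent_simply_scriptable(transforms):
    return 100.0 * sum(t.simple_scriptable for t in transforms) / len(transforms) if transforms else 0.0

plan = [
    PlannedTransformation("HR", "Finance", True, 2.0),
    PlannedTransformation("Finance", "Logistics", False, 9.5),
    PlannedTransformation("HR", "Logistics", True, 1.5),
]
n, cpu_hours = overhead_summary(plan, msgs_per_hour=50_000)
print(f"{n} ontologies; ~{cpu_hours:.2f} CPU-hours of transformation per hour of traffic")
print(f"{percent_simply_scriptable(plan):.0f}% of transformations are simply scriptable")
```

Re-running the overhead figure as the number of separate ontologies grows gives an early warning if the ESB reconciliation approach is unlikely to scale.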

Background – Status
Initial set of indicators / metrics identified
  – Being reviewed by Government / Industry
  – Still improving quantification and refining (adding or deleting) measures
Initial (partial) specification of applicability to V&V documented
Looking for interested people to provide comments and suggest uses
Planning to develop guidance / standards for the metrics and their use in V&V of SOA-based systems

Assumptions – “Unique” aspects of SOA
Continuous evolution
  – Dynamic orchestration / choreography
  – Development is evolution (adding services, data sources, …)
Synchronization with business processes
  – Recurring “Business Process Reengineering” cycles
  – Expensive without “mappable” representations
Need to “think strategically, implement tactically”
  – “Big Bang” deployment is high risk
  – Need to show that:
      You can reuse existing assets
      Others can reuse the SOA services / components

Assumptions – Foci of concern / technical factors
“Buy-in”
Total Cost of Ownership
“Time to market” for introducing or revising services
“Traditional” concerns, e.g.:
  – Efficiency, e.g., response time, throughput, interoperability, user accessibility
  – Reliability & Availability
  – Comprehensiveness
Notation compatibility
  – Horizontally
  – Vertically
Integration over task
  – Fraction of the targeted task addressed
Effort and skill required to, e.g.:
  – Specify and develop rules
  – Develop protocols to enforce rules
Automatic change detection and analysis
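One way to make these foci actionable is to record each one as a quantified indicator with a target value that can be tracked across life cycle stages. A minimal sketch, assuming invented measures and thresholds (they are not recommendations from the slides):

```python
# Hypothetical sketch: a small catalog pairing foci of concern with quantified indicators.
# The measures and target values are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indicator:
    focus: str              # focus of concern from the list above
    measure: str            # how the focus is quantified
    unit: str
    target: float           # assumed "lower is better" threshold
    observed: Optional[float] = None

    def on_target(self) -> Optional[bool]:
        return None if self.observed is None else self.observed <= self.target

catalog = [
    Indicator("Time to market", "elapsed days to introduce or revise a service", "days", 30),
    Indicator("Efficiency", "95th-percentile response time", "ms", 500, observed=420),
    Indicator("Effort / skill", "person-days to specify and develop a new rule", "person-days", 2),
]

for ind in catalog:
    status = {True: "on target", False: "off target", None: "not yet measured"}[ind.on_target()]
    print(f"{ind.focus}: {ind.measure} (target <= {ind.target} {ind.unit}) -- {status}")
```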

Organization
“Soft” life cycle stages – no specific methodology assumed
  – Early stage – concept development, obtaining management support:
      General specification of functions
      Preliminary estimates of the system’s value by initial implementation sites
  – Architecture / Construction – specification of:
      Services
      Architecture (service orchestration / choreography)
      Infrastructure components, e.g.:
        – Enterprise Service Bus or Portal development tools
        – data / information and process (service) reconciliation and integration / coordination
        – inter-service contractual agreements (e.g., SLAs regarding Quality of Service (QoS))
  – Operations
  – Evolution – includes system expansion

Organization (cont.)
Within stages, measures are organized by Federal Enterprise Architecture (FEA) Performance Reference Model (PRM) categories, e.g.:
  – Mission and Business Results – outcomes that agencies seek to achieve, and quantitative measures of these outcomes
  – Customer Results – how well customers are being served, and ways of measuring these results, e.g., using measures related to:
      Service Coverage
      Timeliness & Responsiveness
      Service Quality
  – Processes and Activities –
      Specify desired outcomes
      Identify quantitative measures for the outcomes
      Ability to capture and analyze data that contribute to Mission and Business Results and Customer Results outcomes
  – Evolution (added to the PRM categories) – ease with which the system can be modified; cost / time to expand the system to include additional organizational units, functions, etc.

Illustrative Metrics / Indicators – Early stage
Foci: estimates of the system’s value, obtaining support / coordination, adequacy of predictive models, ease of “metastasis”
Examples (a computational sketch follows):
  – How well does the system concept map to business processes? E.g., extent of:
      Explicit mappings
      Formal documentation of business processes allowing automated analyses
      Ability to capture and react to business events of interest
  – Quality of customer service, e.g.:
      Planned Service Coverage, Timeliness & Responsiveness
      Usability measures (from prototyping?)
  – Potential for growth, e.g.:
      Number of components and services obtained by reusing existing assets
      Effort required to “wrap” and incorporate legacy systems
      Estimates of the reusability of new services / components
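A hypothetical sketch of two of the early-stage indicators above: the extent of explicit mappings from business-process steps to planned services, and the share of planned services sourced by reusing or wrapping existing assets (the process steps, service names, and categories are invented):

```python
# Hypothetical sketch: two early-stage indicators -- process-to-service mapping coverage
# and the fraction of planned services obtained from existing assets. Data are invented.

def mapping_coverage(process_steps, step_to_service):
    """Percent of business-process steps with an explicit mapping to a planned service."""
    mapped = [s for s in process_steps if step_to_service.get(s)]
    return 100.0 * len(mapped) / len(process_steps) if process_steps else 0.0

def reuse_fraction(planned_services):
    """Percent of planned services sourced by reusing or wrapping existing assets."""
    reused = [s for s, origin in planned_services.items() if origin in ("reused", "wrapped-legacy")]
    return 100.0 * len(reused) / len(planned_services) if planned_services else 0.0

steps = ["receive-order", "check-credit", "schedule-shipment", "invoice"]
mapping = {"receive-order": "OrderIntakeService", "check-credit": "CreditService", "invoice": "BillingService"}
services = {"OrderIntakeService": "new", "CreditService": "reused", "BillingService": "wrapped-legacy"}

print(f"Process-to-service mapping coverage: {mapping_coverage(steps, mapping):.0f}%")
print(f"Services from reuse of existing assets: {reuse_fraction(services):.0f}%")
```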

Illustrative Metrics / Indicators – Architecture / Construction
Foci: revisit early-stage metrics against refined requirements, designs, specifications, and models; analyses of COTS infrastructure(s), user-interface prototyping, and early implementation results. Support for co-evolution of business and technical processes, policy / security, robustness, performance, and scalability.
Examples (a computational sketch follows):
  – Change transfer across architectural layers, based on linked designs and refinement rules
  – Effort required to identify needed changes and to change processes or process orchestration
  – Tools for monitoring rule conformance, and the percent of rules that can be monitored with those tools
  – Recovery time and the amount of information that can be recovered under various failure scenarios
  – Impact of various configurations and environments on performance
  – Scalability of mechanisms for data movement and storage (caching), response times, and the variety of systems with which the system can collaborate / interoperate
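A hypothetical sketch of two of the architecture-stage indicators above: the percent of governance rules that available tooling can monitor automatically, and a summary of recovery behavior under named failure scenarios (the rule categories, scenarios, and numbers are invented placeholders for measurements or model estimates):

```python
# Hypothetical sketch: rule-monitoring coverage and recovery-scenario summary.
# Rule categories, scenarios, and values are invented for illustration.

monitorable_categories = {"message-schema", "sla-latency", "access-control"}  # what tools can check

rules = [
    {"id": "R1", "category": "message-schema"},
    {"id": "R2", "category": "sla-latency"},
    {"id": "R3", "category": "data-retention"},   # no automated check available
    {"id": "R4", "category": "access-control"},
]

auto = [r for r in rules if r["category"] in monitorable_categories]
print(f"Rules monitorable by tooling: {100.0 * len(auto) / len(rules):.0f}%")

# Recovery under named failure scenarios: (recovery minutes, percent of in-flight
# information recoverable). Values are placeholders for measurements or model estimates.
scenarios = {"ESB node crash": (5, 100), "registry outage": (20, 95), "data-center failover": (60, 90)}
worst_time = max(t for t, _ in scenarios.values())
worst_loss = min(p for _, p in scenarios.values())
print(f"Worst-case recovery: {worst_time} min; minimum information recovered: {worst_loss}%")
```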

Illustrative Metrics / Indicators – Operations
Foci: refining estimates and models from previous stages; measuring actual user satisfaction, system performance, and usage.
Examples (a computational sketch follows):
  – Is the “consumption of services” by other organizational elements tracking earlier predictions?
  – Is there a change in the use of separate services?
  – How often are tools used to identify conflicting policies within and across services?
  – How long does it take an employee to become self-sufficient in performing their SOA-related task(s), including writing scripts?
  – What are the error rates for such specifications?
  – Effort required to specify and monitor compliance, and to define / invoke corrective action for Level of Service contracts
  – Ability and effort to specify (re)deployment of services to nodes for load balancing
  – Performance, MTTF, MTTR
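A hypothetical sketch of two of the operations-stage indicators above: checking whether actual service consumption tracks earlier predictions, and deriving MTTF / MTTR (and availability) from a failure log (the call counts, the 20% deviation threshold, and the log entries are invented):

```python
# Hypothetical sketch: actual vs. predicted service consumption, plus MTTF / MTTR.
# Call counts, the deviation threshold, and the failure log are invented.
from statistics import mean

predicted_calls = {"CreditService": 40_000, "BillingService": 25_000}
actual_calls    = {"CreditService": 52_000, "BillingService": 24_000}

for svc, predicted in predicted_calls.items():
    deviation = (actual_calls.get(svc, 0) - predicted) / predicted
    flag = "REVIEW" if abs(deviation) > 0.20 else "on track"
    print(f"{svc}: {deviation:+.0%} vs. prediction ({flag})")

# Failure log entries: (hours of uptime before failure, hours to repair)
failures = [(700, 1.5), (450, 0.5), (900, 2.0)]
mttf = mean(up for up, _ in failures)
mttr = mean(rep for _, rep in failures)
availability = mttf / (mttf + mttr)
print(f"MTTF ~ {mttf:.0f} h, MTTR ~ {mttr:.1f} h, availability ~ {availability:.3%}")
```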

Illustrative Metrics / Indicators – Evolution
Foci: validate previous estimates / model results to determine whether the system representation(s) and tools provided make it easy to modify the system or expand it to cover more functions in the “enterprise”.
Examples (a computational sketch follows):
  – How much effort is required to: specify a new monitoring capability; integrate a new service component; or mirror a change in business processes?
  – Are there checks on policy consistency / applicability when system changes are made? Are these (partially) automated? Is there a scripting language? How much effort is required to learn and apply it?
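A hypothetical sketch of a (partially) automated policy-consistency check that could run whenever the system changes, as asked above; the policy model of (service, resource, effect) triples and the conflict rule ("allow" and "deny" on the same pair) are deliberately simplified assumptions:

```python
# Hypothetical sketch: detecting conflicting policies across services after a change.
# The policy model and the example entries are invented simplifications.
from collections import defaultdict

policies = [
    ("OrderIntakeService", "customer-record", "allow"),
    ("BillingService", "customer-record", "allow"),
    ("ArchiveService", "customer-record", "deny"),
    ("ArchiveService", "customer-record", "allow"),   # introduced by the latest change
]

def find_conflicts(policies):
    effects = defaultdict(set)
    for service, resource, effect in policies:
        effects[(service, resource)].add(effect)
    return [pair for pair, eff in effects.items() if {"allow", "deny"} <= eff]

for service, resource in find_conflicts(policies):
    print(f"Conflict: {service} both allows and denies access to {resource}")
```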

To get a copy of the draft report:
Ask for it
Promise to provide a few technical comments on, e.g.:
  – The validity of the indicators – are they predictive? Are some irrelevant?
  – Better ways of quantifying the indicators
  – Anecdotal examples of situations where the existence of a condition has (or has not) had a favorable (or unfavorable) impact
  – How these indicators / metrics might be most useful in Program / Project management, particularly with respect to ongoing Verification and Validation (V&V)
  – Experiments that might provide benchmark data indicating expected ranges for different values and predicting their impact on system performance, utility, and TCO