
1 Evaluating the Correctness and Effectiveness of a Middleware QoS Configuration Process in DRE Systems Institute for Software Integrated Systems Dept of EECS, Vanderbilt University Nashville, TN, USA Amogh Kavimandan, Anantha Narayanan, Aniruddha Gokhale, Gabor Karsai a.gokhale@vanderbilt.edu www.dre.vanderbilt.edu/~gokhale Presented at ISORC 2008, Orlando, FL May 5-7, 2008

2 Distributed Real-time and Embedded (DRE) Systems. DRE system traits: composed from diverse, complex sub-systems; stringent requirements on resources; multiple, simultaneous QoS requirements at the local (sub-system) and global (application) levels; heterogeneous platforms; increasing scale. Trend towards component-based application development: functionality realized via composition, deployment & configuration of application components on execution platforms.

3 DRE Systems Software Development Processes. DRE system development involves the following (non-exhaustive) stages: Specification – functional description, interface definition, implementation.

4 DRE Systems Software Development Processes. DRE system development involves the following (non-exhaustive) stages: Specification – functional description, interface definition, implementation; Composition – functional integration, hierarchical organization & packaging.

5 DRE Systems Software Development Processes. DRE system development involves the following (non-exhaustive) stages: Specification – functional description, interface definition, implementation; Composition – functional integration, hierarchical organization & packaging; Deployment – computing resource allocation, node placement.

6 DRE Systems Software Development Processes. DRE system development involves the following (non-exhaustive) stages: Specification – functional description, interface definition, implementation; Composition – functional integration, hierarchical organization & packaging; Deployment – computing resource allocation, node placement; Configuration – choosing the right set of parameters for the hosting infrastructure (e.g., middleware) to actuate application QoS requirements. Focus: DRE system middleware QoS configuration.

7 Middleware QoS Configuration: Hard Challenges. What vs. how: middleware platforms provide what is required to achieve system QoS, but not always how it can be achieved, and there is no centralized orchestrator that realizes end-to-end QoS from options that provide individual QoS control. Performing the QoS configuration activity manually is non-trivial: appropriate configuration mechanisms must be chosen in an application-specific manner, particularly for large applications, and the middleware does not prevent developers from choosing semantically invalid configurations to achieve QoS. The lack of effective QoS configuration tools results in QoS policy mis-configurations that are hard to analyze & debug.

8 Automated Middleware QoS Configuration: Hard Challenges. Goal: use DRE system QoS requirements to automate middleware QoS configuration. Different DRE systems (e.g., shipboard computing environments, emergency response services) exhibit variability in domain-specific QoS requirements. How do we deal with this variability? We must also deal with the plethora of configuration mechanisms offered by multiple hosting middleware platforms. How do we bridge the gap between domain-specific requirements and configuration mechanisms? A tool must address the variability and bridge this gap.

9 Solution Approach: QUICKER. QUality of service pICKER (QUICKER): domain-independent QoS modeling languages that express system QoS in terms of requirements semantics – easier to model and evolve, requiring less modeling effort and shielding developers from configuration semantics. Automated translation using model transformation to generate system QoS configurations – a reusable, one-step translation that encodes best practices in QoS mapping. (ISORC 2007: general idea of QUICKER; RTAS 2008: model transformation algorithms for CCM/RT.)

10 ISORC 2008: Evaluating QUICKER. (1) Is the transformation process correct? We use structural correspondence to prove the transformations correct. (2) Are the generated artifacts correct? We use model checking. (3) Do the generated configurations deliver the desired QoS? We use empirical validation. Focus: the correctness & effectiveness of our QoS configuration process.

11 (1) Verifying the Correctness of QoS Mapping Algorithms. Verification framework: specify correctness properties at the meta-level; add annotations for each instance (correspondence rules); use the annotations to automatically verify whether the instances satisfy the correctness properties. We do not attempt to prove the general correctness of the transformation itself. [Figure: source & target metamodels, source & target models, correctness specification, model transformation, correctness checker, annotations, and certificate.]

12 (1) Verifying the Correctness of QoS Mapping Algorithms. Cross links are added to identify corresponding elements, and rules specify correspondence conditions for selected types. At the end of the transformation, the instance models are checked to see whether they satisfy all the correspondence conditions. [Figure: input model and output model related by crosslinks, checked against the correspondence rules.]
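
As a concrete illustration, the minimal Python sketch below shows the shape of such a correspondence check; the element kinds, attribute names, and the single rule are hypothetical stand-ins, since the actual QUICKER checker operates on the transformation's model instances rather than on Python objects.

```python
# Sketch of structural-correspondence checking (hypothetical data model;
# the real checker works on the source/target model instances themselves).
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Element:
    kind: str                              # e.g., "QoSPolicy" or "ThreadPoolLane"
    attrs: Dict[str, object] = field(default_factory=dict)
    crosslink: "Element" = None            # target element this source element maps to

# One correspondence rule per source type; every (source, crosslinked target)
# pair must satisfy its rules after the transformation runs.
Rule = Callable[[Element, Element], bool]
rules: Dict[str, List[Rule]] = {
    "QoSPolicy": [
        # Example condition: the priority requested at the domain level must
        # reappear as the lane priority in the generated configuration.
        lambda src, tgt: src.attrs.get("priority") == tgt.attrs.get("lane_priority"),
    ],
}

def check_correspondence(source_model: List[Element]) -> List[str]:
    """Return a list of violations; an empty list plays the role of the 'certificate'."""
    violations = []
    for src in source_model:
        for rule in rules.get(src.kind, []):
            if src.crosslink is None or not rule(src, src.crosslink):
                violations.append(f"{src.kind}: correspondence condition failed")
    return violations
```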

13 (2) Verifying the Generated QoS Configurations. [Figure: two component servers, each with a container hosting component executors, a component home, a POA, callback & internal interfaces, facets, receptacles, event sources & sinks, a component context, and a component reference; ORB-level QoS options span end-to-end priority propagation, thread pools, portable priorities, protocol properties, priority models, and priority bands across Assembly 1 ... Assembly n.] Dependencies may span beyond “immediate neighbors”, e.g., along the application execution path or between components belonging to separate assemblies. Empirically validating configuration changes slows down the development & QA process considerably; several iterations are needed before the desired QoS is achieved (if at all).

14 (2) Verifying the Generated QoS Configurations. Leveraging the Bogor model checking framework: a dependency structure maintained in Bogor is used to track dependencies between the QoS options of components, e.g., Analysis & Comm are connected, and Gizmo & Comm are dependent. A change in the QoS options of dependent component(s) triggers detection of potential mismatches, e.g., the dependency between Gizmo's invocation priority & Comm's lane priority: a mismatch is detected if either value changes.
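
A small Python sketch of this dependency-tracking idea follows; the component and option names are taken from the slide's example, but the encoding is purely illustrative, since in QUICKER the dependency structure lives inside the Bogor model checker.

```python
# Illustrative QoS-option dependency tracking (the real structure is in Bogor).
dependencies = [
    # (option, option, compatibility predicate)
    ("Gizmo.invocation_priority", "Comm.lane_priority", lambda a, b: a == b),
]

options = {"Gizmo.invocation_priority": 50, "Comm.lane_priority": 50}

def set_option(name: str, value) -> list[str]:
    """Change one QoS option and re-check every dependency touching it."""
    options[name] = value
    mismatches = []
    for left, right, compatible in dependencies:
        if name in (left, right) and not compatible(options[left], options[right]):
            mismatches.append(f"mismatch: {left}={options[left]} vs {right}={options[right]}")
    return mismatches

# Lowering Comm's lane priority alone triggers a mismatch report:
print(set_option("Comm.lane_priority", 30))
```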

15 (2) Verifying the Generated QoS Configurations. Representation of middleware QoS options in the Bogor model checker: BIR extensions allow domain-level concepts to be represented in a system model, and QUICKER defines new BIR extensions for QoS options. This allows QoS options & domain entities to be represented directly in a Bogor input model (e.g., CCM components and Real-time CORBA lanes/bands become first-class Bogor data types) and reduces the size of the system model by avoiding multiple low-level variables for domain concepts & QoS options.

16 (2) Verifying the Generated QoS Configurations. Representation of the properties that a system should satisfy in Bogor: BIR primitives define language constructs to access & manipulate the domain-level data types, and these are used to define rules that validate QoS options & check whether a property is satisfied. The BIR of a DRE system is generated automatically from QUICKER-generated output models: model interpreters auto-generate the Bogor Input Representation of a system from its model.
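
The rules themselves are written as Bogor/BIR extensions; the Python sketch below only illustrates the kind of well-formedness property they express over first-class QoS records. The record fields and the specific checks are assumptions for illustration, apart from the RT-CORBA priority range 0..32767.

```python
# Illustrative only: the kind of property evaluated over first-class QoS
# records (field names and checks are hypothetical; real rules are BIR, not Python).
from dataclasses import dataclass
from typing import List

@dataclass
class Lane:
    priority: int
    static_threads: int
    dynamic_threads: int

@dataclass
class ThreadPool:
    stacksize: int
    lanes: List[Lane]

def lanes_well_formed(pool: ThreadPool) -> bool:
    """Each lane has at least one static thread, a valid RT-CORBA priority
    (0..32767), and lane priorities are distinct within the pool."""
    prios = [lane.priority for lane in pool.lanes]
    return (all(lane.static_threads >= 1 for lane in pool.lanes)
            and all(0 <= p <= 32767 for p in prios)
            and len(prios) == len(set(prios)))

pool = ThreadPool(stacksize=1024, lanes=[Lane(50, 2, 0), Lane(25, 1, 1)])
assert lanes_well_formed(pool)
```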

17 DRE System Case Study: Basic Single Processor (BasicSP) scenario. Components use an event-based communication paradigm, and the position is updated periodically at 20 Hz. GPS generates data which is ultimately consumed by NavDisplay in an event-push, data-pull fashion.
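
A minimal sketch of the event-push, data-pull interaction is shown below; the class and method names are hypothetical, not the CIAO-generated CCM API, and the GPS-to-NavDisplay path is simplified to the two endpoints the slide names.

```python
# Illustrative event-push, data-pull interaction (names are hypothetical;
# the real components are CCM executors deployed in CIAO).
import time

class GPS:
    def __init__(self):
        self._reading = (0.0, 0.0)
        self.subscribers = []              # event sinks, e.g., NavDisplay

    def on_timeout(self):                  # driven at 20 Hz by a timer
        self._reading = self._sample_hardware()
        for sink in self.subscribers:      # event push: "new data available"
            sink.push_data_available(self)

    def get_reading(self):                 # data pull: consumer fetches on demand
        return self._reading

    def _sample_hardware(self):
        return (time.time() % 90, time.time() % 180)   # stand-in values

class NavDisplay:
    def push_data_available(self, source: GPS):
        lat, lon = source.get_reading()    # pull the data, then render it
        print(f"position: {lat:.4f}, {lon:.4f}")

gps, display = GPS(), NavDisplay()
gps.subscribers.append(display)
gps.on_timeout()                           # one 50 ms period's worth of work
```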

18 (3) Empirically Evaluating QoS Configurations. Evaluation conducted on ISISlab: each node had dual 2.8 GHz Intel Xeon processors, 1 GB physical memory, a 1 Gbps network interface, and a 40 GB hard disk. We used the CIAO 0.6 middleware platform and applied QUICKER to BasicSP to generate the configurations used in our evaluations.

19 (3) Empirically Evaluating QoS Configurations. Average latency was ~1925 us, and the variation in standard deviation was quite small.
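
For reference, a measurement loop of the following shape would produce such latency statistics; this harness is hypothetical and does not reproduce the published figures, which were taken on the deployed BasicSP assembly on ISISlab.

```python
# Hypothetical latency-measurement harness (not the paper's benchmark code).
import statistics, time

def measure_latency(invoke, iterations=10000):
    """Time `invoke` (one end-to-end operation) and summarize in microseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        invoke()
        samples.append((time.perf_counter() - start) * 1e6)
    return statistics.mean(samples), statistics.stdev(samples)

avg_us, std_us = measure_latency(lambda: None)   # substitute the real invocation
print(f"avg latency = {avg_us:.0f} us, std dev = {std_us:.0f} us")
```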

20 Concluding Remarks. We discussed verification of the correctness of QUICKER's QoS configuration process. The QUICKER toolchain provides QoS requirements modeling languages and QoS mapping algorithms for mapping requirements to middleware QoS options. We verified the correctness of the QoS mapping algorithms through structural correspondence, verified the generated QoS configurations through model checking, and empirically validated the configurations by applying the QUICKER process to a representative DRE system case study. Future work is based on capturing variability in requirements & middleware. QUICKER can be downloaded from www.dre.vanderbilt.edu/CoSMIC/

21 Questions?

22 BACKUP

23 Overview of QUICKER: Specifying QoS Requirements. Challenge 1: QoS requirements specification. DRE developers are domain experts who understand domain-level issues; system QoS specification must be expressible at the same level of abstraction.

24 Overview of QUICKER: Specifying QoS Requirements. Challenge 1: QoS requirements specification. DRE developers are domain experts who understand domain-level issues; system QoS specification must be expressible at the same level of abstraction. There is a large gap between what is required (by the application) and how it can be achieved (by the middleware platform); configurations cannot be reused and are difficult to scale to large systems.

25 Overview of QUICKER: Specifying QoS Requirements. Application requirements are expressed as QUICKER QoS policy models, and QUICKER captures policies in a platform-independent manner. Specifying QoS is tantamount to answering questions about the application, rather than using low-level mechanisms (such as the type of publisher proxy collection, the event dispatching mechanism, etc.) to achieve QoS.

26 Overview of QUICKER: Specifying QoS Requirements. Application requirements are expressed as QUICKER QoS policy models, and QUICKER captures policies in a platform-independent manner. Specifying QoS is tantamount to answering questions about the application, rather than using low-level mechanisms (such as the type of publisher proxy collection, the event dispatching mechanism, etc.) to achieve QoS. Requirements can be represented at multiple levels of granularity, e.g., component- or assembly-level.

27 Overview of QUICKER: Specifying QoS Requirements. Benefits of QUICKER QoS policy modeling: QoS policy specifications can be inherited through containment and reused; a QoS policy is inherited by all contained objects, and more than one connection can share a QoS policy. The result is scalable, flexible QoS policy models.
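
A small sketch of this containment-inheritance idea follows; the model classes and the example policy are hypothetical, since QUICKER expresses this in its QoS policy modeling language rather than in Python.

```python
# Sketch of QoS policy inheritance through containment (hypothetical classes).
class ModelElement:
    def __init__(self, name, parent=None, qos_policy=None):
        self.name, self.parent, self.qos_policy = name, parent, qos_policy

    def effective_policy(self):
        """Walk up the containment hierarchy until a policy is found."""
        node = self
        while node is not None:
            if node.qos_policy is not None:
                return node.qos_policy
            node = node.parent
        return None

assembly = ModelElement("SensorAssembly", qos_policy={"priority_model": "SERVER_DECLARED"})
conn_a = ModelElement("GPS->Airframe", parent=assembly)         # inherits the policy
conn_b = ModelElement("Airframe->NavDisplay", parent=assembly)  # shares the same policy
assert conn_a.effective_policy() is conn_b.effective_policy()
```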

28 Overview of QUICKER: Realizing System QoS. Challenge 2: QoS realization. The middleware exposes a very large configuration space providing a high degree of flexibility and configurability, e.g.: iterator (COPY_ON_READ, COPY_ON_WRITE, DELAYED, IMMEDIATE); dispatching (REACTIVE, PRIORITY, MT); scheduling (NULL, PRIORITY); bands (low_prio, high_prio); fltrgrp (DISJUNCTION, CONJUNCTION, LOGICAL_AND); TPool (stacksize, lane_borrowing, request_buffering); lanes (static_thrds, dyna_thrds). Semantic compatibility of QoS configurations is enforced via low-level mechanisms, which is tedious & error-prone. The approach: prune the middleware configuration space, instantiate the selected configuration set, and validate its values.
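
A sketch of what that pruning and validation step amounts to is shown below; the option names and values are copied from the slide, while the validation logic itself is an illustrative assumption rather than QUICKER's actual checks.

```python
# Option-space pruning and validation sketch (allowed values from the slide;
# the consistency checks are illustrative assumptions).
OPTION_SPACE = {
    "iterator":    {"COPY_ON_READ", "COPY_ON_WRITE", "DELAYED", "IMMEDIATE"},
    "dispatching": {"REACTIVE", "PRIORITY", "MT"},
    "scheduling":  {"NULL", "PRIORITY"},
    "fltrgrp":     {"DISJUNCTION", "CONJUNCTION", "LOGICAL_AND"},
}

def validate(selected: dict) -> list:
    """Check a pruned configuration set: only known options, only legal values."""
    errors = [f"unknown option: {opt}" for opt in selected if opt not in OPTION_SPACE]
    errors += [f"illegal value {val!r} for {opt}"
               for opt, val in selected.items()
               if opt in OPTION_SPACE and val not in OPTION_SPACE[opt]]
    return errors

print(validate({"dispatching": "PRIORITY", "iterator": "IMMEDIATE"}))  # []
print(validate({"dispatching": "FIFO"}))   # flags the out-of-range value
```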

29 Overview of QUICKER: Realizing System QoS. Application QoS policies are mapped onto configuration options using model transformations developed in GReAT. Semantic translation algorithms are specified in terms of the input & output languages, e.g., rules that translate multiple application service requests & service-level policies to the corresponding QoS options. The transformation output is itself a system model, allowing further analysis & translation; this simplifies application development & enhances traceability. [Figure: multiple provider service requests and service levels (Level 1, 2, 3) mapped to a priority model policy and thread pool lanes.]
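
A compact sketch of what such a translation rule produces is given below; QUICKER encodes these rules as GReAT graph transformations, so the Python encoding and the priority values chosen here are assumptions for illustration.

```python
# Sketch of one QoS mapping rule: service levels -> thread-pool lanes
# (illustrative; the real rules are GReAT graph transformations).
def map_service_levels_to_lanes(service_levels, base_priority=10, step=10):
    """Give each declared service level its own lane at an increasing priority."""
    return [
        {"lane": level, "priority": base_priority + i * step, "static_threads": 1}
        for i, level in enumerate(service_levels)
    ]

print(map_service_levels_to_lanes(["Level 1", "Level 2", "Level 3"]))
```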

30 Challenge 2 Resolved: Realizing System QoS. The algorithm shown is the RT-CCM QoS mapping, which uses application structural properties to automatically deduce configurations; the specified QoS policies are used for the remaining configurations. Lines 7-16 show the thread resource allocation scheme: the service invocation profile is used to assign thread resources, and the number of client components and interface operations is used to calculate the number of threads required. Line 27 shows the client-side QoS configurations, and line 28 resolves the priority dependency of connected components.
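
A minimal sketch of those two steps follows; the thread-sizing formula and the configuration fields are simplifying assumptions, not the paper's exact algorithm.

```python
# Sketch of thread-resource sizing and priority-dependency resolution
# (simplified assumptions; see the paper's RT-CCM mapping for the exact scheme).
def size_thread_pool(num_clients: int, ops_per_client: dict) -> int:
    """Derive the thread count from connected clients and their interface operations."""
    return sum(ops_per_client.get(c, 1) for c in range(num_clients))

def resolve_priority_dependency(server_lane_priority: int, client_cfg: dict) -> dict:
    """The client-side priority band must match the server's lane priority."""
    client_cfg = dict(client_cfg)
    client_cfg["band_low"] = client_cfg["band_high"] = server_lane_priority
    return client_cfg

threads = size_thread_pool(2, {0: 3, 1: 2})   # 2 clients, 5 interface operations total
client = resolve_priority_dependency(50, {"band_low": 0, "band_high": 32767})
print(threads, client)
```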

31 Overview of QUICKER: Realizing System QoS. Challenge 3: QoS option dependency resolution across application sub-systems. Configurations of connected components may exhibit dependency relationships, e.g., the server-side priority model and client-side priority bands must match. Manually tracking dependencies between components is hard.

32 Evaluating QUICKER: three issues. Is the transformation correct? Is the generated artifact correct? Is the QoS realized? Verification of the correctness of the transformation algorithms uses structural correspondence techniques. Design-time verification of the generated QoS configurations uses model checking, which resolves sub-system non-functional dependencies and verifies the correctness of configurations, enabling considerably faster system QoS design and evolution than a manual approach. We focus on the correctness & effectiveness of our QoS configuration process.

