1 Six Lectures on Predictable Assembly (from Certifiable Components). Kurt C. Wallnau. Carnegie Mellon University, Pittsburgh, PA 15213-3890. Sponsored by the U.S. Department of Defense. © 2003 by Carnegie Mellon University, Version 1.0.

2 Organization and Objectives, Day 1 (1pm-4pm): Background Ideas & Context (25 min); Principles of Prediction-Enabled Component Technology (45 min); COMTEK Illustration, Q & A (20 min); Methods for Developing PECTs (1. Co-refinement, 2. Empirical Validation) (45 min).

3 Organization and Objectives, Day 2 (1pm-4pm): Construction & Composition Language (CCL) – Structure; CCL – Behavior; CCL – Processor; Pin – Component Technology; Projects – Ideas and Options (sessions of 20-45 min each).

4 Giving Credit Where Credit Is Due. Scott Hissam, James Ivers, Mark Klein, Paulo Merson, Gabriel Moreno, Daniel Plakosh, Natasha Sharygina, Kurt Wallnau. SEI contributors: Paul Clements, John Hudak, Linda Northrop. CMU contributors: Prof. Edmund Clarke, Prof. John Lehoczky, Joël Ouaknine, Sagar Chaki, Nishant Sinha. MdH contributors: Magnus Larsson. (Summary)

5 What is a Component? -1. Intrinsic properties: i-1. Implementation; i-2. Encapsulation; i-3. Interface. These are fundamental intrinsic properties; they constitute a basic categorical definition of a component. (Fundamentals)

6 What is a Component? -2. Extrinsic properties: e-1. Independently deployed/substituted (component supplier vs. component integrator); e-2. Third-party composition. These are commonly cited extrinsic properties and are not universally accepted. (Fundamentals)

7 What is a Component? -3. e-3. Adheres to a component standard: compliance with a component standard is also sometimes cited as a key criterion; the standard defines component types and patterns of interaction (IComponentStd, IApplication). (Fundamentals)

8 What is a Component? -4. A component runtime environment (aka container, framework) implements parts of the component standard: deployment, resource sharing, and constraint enforcement (Component Framework, IComponentStd). (Fundamentals)

9 Component Method Space. Different configurations of extrinsic properties lead to different classes of design problem; different classes of design problem call for different design methods. Four configurations of extrinsic properties are particularly significant: these comprise the component method space. (Method Space)

10 Control Dimension -1. Do the components exist prior to (and/or independent of) a design problem? Systems dominated by one or the other pose fundamentally different design problems: pre-existing components lead to selection problems; custom components lead to optimization problems. (Method Space)

11 Control Dimension -2. Optimization decisions arise when there is a continuous (often implicit) range of options; for example, choosing a setting on a sixteen-channel graphical equalizer. The design problem: generate an optimal design from a continuous range of options. In component-based design: generate and specify the scope of a component, its interface, its interaction model, etc. (Custom Components; Method Space)

12 Control Dimension -3. Selection decisions arise when there are N discrete options, for small N; for example, choosing among six popular manufacturers of graphical equalizer equipment. The design problem: select the design that best achieves the design goals. In component-based design: select and integrate components, component features, component repairs, and patterns of component interaction. (Pre-existing Components; Method Space)

13 Control Dimension -4. Today, the commercial market is the primary source of pre-existing components; their use involves a loss of control to the "market regime," and market-induced complications also arise (complexity, idiosyncrasy, instability). (Pre-existing Components, Commercial; Method Space)

14 Deployment Dimension -1. Will components be deployed to a component framework or a native platform? A framework is a "designed" environment for component-based applications; an operating system views components in more primitive terms. (Operating System Platform vs. Component Framework Platform; Method Space)

15 Deployment Dimension -2. Operating system platform: many, incompatible composition primitives; "architectural mismatch." Component framework platform: restricted, compatible composition primitives; simplified integration. (Method Space)

16 Deployment Dimension -3. Operating system platform: supports a wide variety of application requirements (e.g., challenging security infrastructure design). Component framework platform: focuses on application-specific requirements (e.g., security "by construction"). (Method Space)

17 Deployment Dimension -4. Operating system platform: significant flexibility to meet design goals (can design a round peg for a round hole). Component framework platform: limited repertoire of design options (stuck with "one size fits all" pegs regardless of holes). (Method Space)

18 Component Space -1. Two dimensions span the space: commercial supplier vs. internal supplier, and component standard platform vs. operating system platform. Control over a component is "orthogonal" to its intended deployment environment; these dimensions describe a space of software engineering concerns. (Method Space)

19 Component Space -2. In this space, internal (custom) suppliers call for specification & optimization, while commercial suppliers call for selection & adaptation, on either the component standard platform or the operating system platform. (Method Space)

20 Component Space -3. Operating system platform: total application design, including infrastructure and quality attributes. Component standard platform: rapid functional partitioning and/or application assembly. (Specification & optimization vs. selection & adaptation; Method Space)

21 Method Space -1. Across the quadrants (commercial vs. internal supplier; component standard vs. operating system platform), the space ranges from mature methods through emerging methods to uncharted waters. (Method Space)

22 Method Space -2. Methods in the space include Catalysis, Rational Unified Process (RUP), UML Components, and Business Component Factory (largely internal-supplier methods), and Building Systems from Commercial Components (commercial-supplier), which evolved... (and wants to be here). (Method Space)

23 Presentation Overview. Introduction: What is PACC? Why is PACC important to industry? The three key principles of PACC. How we apply the principles. Summary and status. (Roadmap)

24 A Simple Idea. Construction land (components and assemblies) is linked by interpretation to analysis land (a model in a theory: performance theory, reliability theory, safety theory), yielding models and predictions. Construction rules (restrictions) are imposed to satisfy analysis assumptions, to make analysis tractable, and to simplify interpretation. An objective basis for trust is established by requiring sound theories, by requiring empirical validation, and by encouraging 3rd-party validation. (Introduction to PACC)

25 Why Is PACC Important to Industry? For systems with critical runtime properties, industry can: establish design and implementation standards that lead to software systems with predictable runtime behavior; use automation to enforce these standards, with the result that systems become predictable by construction; define objective standards and measures for components, whether internally developed or supplied by 3rd parties; certify software components to these standards and measures; trust the engineering predictions based on certified components; and incrementally and systematically introduce state-of-the-art prediction for a broader class of systems and properties. (Introduction to PACC)

26 What Is (and Is Not) New Here? The PACC vision is not revolutionary; stripped to its essentials, we want to know how a system will behave before we build it. What makes our approach noteworthy is how we combine software component technology, software architecture technology, and analysis technologies, with an aim to achieve practical, sound, and objectively trustworthy results. Three key principles underlie our approach to PACC; if these are understood, then the specifics of our technology approach become clear. (Introduction to PACC)

27 Presentation Overview. Introduction. The three key principles of PACC: Principle of Restriction, Principle of Enforced Discipline, Principle of Objective Confidence. How we apply the principles. Summary and status. (Roadmap)

28 Principle of Restriction. Key idea: restrict design and implementation to only those patterns that are analyzable, and that yield objectively predictable behavior. This principle leads to predictability by design. (Figure: unrestricted and unpredictable assemblies vs. assemblies restricted for predictable timing and restricted for predictable safety.) (Three Principles of PACC)

29 Principle of Enforced Discipline. Key idea: an assembly is predictable with respect to a property theory if and only if it satisfies the invariants of that theory. Enforcing these invariants leads to predictability by construction. (Here the property theory, denoted *, is a timing theory; the assembly is restricted for predictable timing.) (Three Principles of PACC)

30 Principle of Enforced Discipline (continued). Workflow: does the assembly satisfy the invariants of *? If no, refine the assembly specification; if yes, the assembly is predictable with * (here, a timing theory). (Three Principles of PACC)

31 Principle of Enforced Discipline (continued). The ability to check these design invariants often implies other abilities: generate models in a property theory, and generate well-formed code fragments. These have the net effect of hiding much complexity. (Three Principles of PACC)

32 Principle of Objective Confidence. Key idea: assertions (about components and assemblies) must be mathematically demonstrable and/or independently verifiable. Applying this principle leads to economies of trust: certified components have trusted descriptions, giving faster, better component acceptance testing and reduced discovery cost; trusted predictions lead to reduced integration cost, reduced validation cost, and reduced optimization cost. (Three Principles of PACC)

33 Presentation Overview. Introduction. The three key principles of PACC. How we apply the principles: building Prediction-Enabled Component Technology (PECT); the Starter Kit: making PACC real in industry. Summary and status. (Roadmap)

34 Applying the Principle of Restriction 1. Restriction is applied in two complementary ways: restriction for predictable assembly = constructive restriction + analytic restriction. 1. Constructive restriction imposes the rules of a target architecture on developers: if these rules are followed, then components will "plug," and components will "play." 2. Analytic restriction imposes structural and behavioral rules that make specific assembly behaviors predictable. (Our Approach)

35 Applying the Principle of Restriction 2. We have defined the minimal requirements for a prediction-enabled component technology. 1. Constructive restriction to a component technology: a) specified component interface and context dependencies; b) behavior restricted to specified interface(s); c) a well-specified and standard runtime environment (component runtime on a platform/OS); d) well-specified interaction protocols; e) well-specified component deployment and lifecycle contracts. (Our Approach)

36 Applying the Principle of Restriction 3. Reasoning frameworks are the nexus of predictability. 2. Analytic restriction via reasoning frameworks: a) demonstrably sound analytic theories (model theory); b) a defined mapping from the constructive to the analytic view (interpretation); c) reality and assumptions are preserved under interpretation (model representation); d) analysis is fully automated (decision procedure). (Our Approach)

37 Principle of Enforced Discipline, applied. Component and assembly descriptions in a uniform specification language pass through a construction rule checker and an analysis rule checker; checking cannot be bypassed. If all constraints are satisfied, a model generator feeds analysis and prediction, and a component and assembly generator produces the implementation. Much of this complexity can be hidden. (Key: automation.) (Our Approach)

38 The Result: A PECT. A Prediction-Enabled Component Technology (PECT) combines a component technology (enforced constructive constraints), which provides "plug-ability," with one or more reasoning frameworks (enforced analytic constraints), which provide "play-ability." The interpretation enforces analysis invariants, guaranteeing consistency with the analysis assumptions. (Our Approach)

39 To Illustrate… 1. Given a specification of this Pin assembly. (Our Approach)

40 To Illustrate… 2. Apply the interpretation to the λ-RMA reasoning framework (also for λ-RMA/QT and λ-MAGIC). (λ-RMA interpretation; Our Approach)

41 To Illustrate… 3. If the assembly is well-formed: a) generate the simulation model; b) apply the decision procedure; c) predict the average execution latency of assembly paths. (λ-RMA interpretation; Our Approach)

42 To Illustrate… BUT… why should we believe this prediction? How much confidence should we have in this belief? (Our Approach)

43 Applying the Principle of Confidence 1. Objective confidence for predictable assembly = confidence in prediction + confidence in components. Objective confidence takes two forms in our approach: 1. Confidence in prediction is achieved by rigorous verification and validation of reasoning frameworks; all predictions must be observable (i.e., refutable). 2. Confidence in components is achieved by linking component properties to reasoning frameworks; all properties must be independently verifiable. (Our Approach)

44 Applying the Principle of Confidence 2. 1. The quality of predictions made by a reasoning framework must be objectively established: reasoning frameworks must provide evidence of their validity; all reasoning frameworks must be demonstrably (mathematically) sound; reasoning frameworks based on measures (e.g., observed time) must in addition demonstrate empirical validity. We have developed tools and methods for assigning meaningful statistical labels to "empirical" reasoning frameworks. Example statistical label (property theory, version 1): mean MRE 0.5%; confidence level 99.29%; proportion 80%; upper bound (UB) 1%. (Goal: confidence 99%, proportion 80%, UB 5%. Values based on a sample of 75 assemblies, 156 tasks, and 2,952 jobs.) (Our Approach)

45 Applying the Principle of Confidence 3. 2. Those component properties that are required by a reasoning framework can be objectively validated. Reasoning frameworks specify the required component properties: the value set and the validation procedure. Certified properties compose the analytic interface of a component ((a) s() exec SR; (b) C.dll sat SR). (Our Approach)

46 The Starter PECT. ABB and the SEI have developed a laboratory-grade basis PECT; emphasis to date has been on developing sound theories, and emphasis this year will be on getting trial use early (the Open Robot Controller lab for starters). The PECT comprises the Construction & Composition Language (CCL), the Pin component technology (WindowsCE, Windows/RTX), and reasoning frameworks: λ-RMA (timing, deterministic; theory: RMA); λ-RMA/QT (timing, semi-stochastic; theory: RMA + queuing); λ-MAGIC (safety, liveness; theory: model checking). (* Lawrence Summers, President, Harvard University.) (Our Approach)

47 For More Information on PACC. See:
- www.sei.cmu.edu/pacc for general information about SEI research in predictable assembly from certifiable components;
- www.sei.cmu.edu/publications/documents/02.reports/02tr031.html for a detailed report on an experiment in predictable assembly for substation automation systems (a synopsis of the above is also available);
- www.sei.cmu.edu/publications/documents/03.reports/03tr009.html for an in-depth treatment of the technical foundations of PECT;
- www.sei.cmu.edu/publications/documents/00.reports/00tr008.html for background on the key technical concepts underlying software component technology;
- www.sei.cmu.edu/publications/documents/03.reports/03tn025 for a high-level introduction to the Construction and Composition Language.
(Summary)

48 Takeaway Points. Assembling systems with predictable runtime behavior, from certified software components, is possible today. Three core principles define our approach: Restriction (build only what is predictable), Enforced Discipline (make it practical for developers), and Objective Confidence (establish a sound basis for trust). The result is Prediction-Enabled Component Technology. The Starter Kit is how we plan to transition PACC into practice, but its development requires much more work.

49 PECT Illustration: COMTEK-λ. COMTEK was an SEI-developed prototype (a PACC precursor).

50 Component Model. Classes: Application/Assembly (fullyConnected: Boolean), Component (fullyConnected: Boolean), inputPort (dataType: String, required: Boolean), outputPort (dataType: String), and connection (source, sink). OCL invariants:
context a : Application inv: a.fullyConnected = a.component->forAll(c | c.fullyConnected)
context c : Component inv: c.fullyConnected = c.inputPort->forAll(i | i.required implies i.connection->notEmpty())
context c : Connection inv: c.source <> c.sink and c.sink.inputPort->exists(i | c.input = i) and c.source.outputPort->exists(o | c.output = o) and c.output.dataType = c.input.dataType
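
These invariants are mechanically checkable. The following is a minimal Python sketch (not the SEI tooling) of the same three checks; the class and attribute names mirror the diagram, but the representation is otherwise hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InputPort:
    data_type: str
    required: bool
    connection: Optional["Connection"] = None  # at most one incoming connection

@dataclass
class OutputPort:
    data_type: str

@dataclass
class Component:
    name: str
    input_ports: List[InputPort] = field(default_factory=list)
    output_ports: List[OutputPort] = field(default_factory=list)

    @property
    def fully_connected(self) -> bool:
        # context c : Component inv: every required input port is connected
        return all(i.connection is not None for i in self.input_ports if i.required)

@dataclass
class Connection:
    source: Component
    sink: Component
    output: OutputPort   # port on the source component
    input: InputPort     # port on the sink component

    def well_formed(self) -> bool:
        # context c : Connection inv: distinct ends, ports belong to the right
        # components, and the data types of the connected ports match
        return (self.source is not self.sink
                and self.input in self.sink.input_ports
                and self.output in self.source.output_ports
                and self.output.data_type == self.input.data_type)

@dataclass
class Assembly:
    components: List[Component]
    connections: List[Connection]

    @property
    def fully_connected(self) -> bool:
        # context a : Application inv: all components are fully connected
        return all(c.fully_connected for c in self.components)
```

An assembly would then be accepted only if `assembly.fully_connected` holds and every connection satisfies `well_formed()`.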

51 Audio Component Assembly. To preserve playback quality, an execution cycle of the assembly A (A.latency) must consume less than 0.046 seconds.

52 λ-Property Theory -1. A property theory… parameterized by analytic component types.

53 λ-Property Theory -2. Instances of the periodic analytic type block on a periodic external event; instances of the aperiodic analytic type depend only on pipes and never block.

54 Analytic Extensions. The component model is extended with stereotypes: <<periodic>> components carry e, p: Time; <<aperiodic>> components carry e: Time; and connections carry <<connection>>. The structural model (component, inputPort (dataType: String, required: Boolean), outputPort (dataType: String), source, sink) is otherwise as before.

55 Assumptions/Restrictions -1. Analytic restrictions: the COMTEK component model, plus certifiable interfaces periodic.{e, p} and aperiodic.{e}, where e is execution time, p is the period of the external event, and p, e range over time T = {j : Real | j >= 0.0}; e and p are constant; periodic and aperiodic components are un-buffered; and for all periodic components j, k in an assembly A, p_j = p_k (uniform period).

56 Assumptions/Restrictions -2. The analytic restrictions fall into categories: the execution times and the period (e, p) are certifiable component properties; the un-buffered restriction is a property of the component or a guarantee by the component runtime environment; and the uniform period is a property guaranteed by the component assembly environment.
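
To make the restricted analytic view concrete, here is a small Python sketch of one plausible latency calculation over these types: a single periodic (clock-driven) component plus un-buffered aperiodic components whose execution times simply sum within one cycle. The summation rule and the deadline check are illustrative assumptions, not the published COMTEK theory.

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class Periodic:
    e: float  # execution time in seconds (constant, certifiable)
    p: float  # period of the external event in seconds (constant)

@dataclass(frozen=True)
class Aperiodic:
    e: float  # execution time in seconds (constant, certifiable)

def predicted_cycle_latency(clock: Periodic, others: List[Aperiodic]) -> float:
    """Assumed theory: with un-buffered, non-preemptive components driven by a
    single uniform period, one execution cycle costs the sum of execution times."""
    return clock.e + sum(c.e for c in others)

def meets_cycle_deadline(clock: Periodic, others: List[Aperiodic],
                         deadline: float) -> bool:
    # e.g., the audio assembly on slide 51 requires a cycle latency below 0.046 s
    return predicted_cycle_latency(clock, others) < deadline

# Hypothetical assembly: one clock-driven source plus three pipe-driven filters
print(meets_cycle_deadline(Periodic(e=0.004, p=0.046),
                           [Aperiodic(0.010), Aperiodic(0.012), Aperiodic(0.008)],
                           deadline=0.046))
```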

57 COMTEK-λ Interpretation -1. (Figure: a component assembly.)

58 COMTEK-λ Interpretation -2. The interpretation maps the component assembly to an analytic assembly, the set {c1, c7, c2, c3, c4, c5, c6} of analytic components; execution order and data flow are irrelevant to the property theory.

59 COMTEK-λ Validation. Empirical property theories will be objectively validated; this leads to the use of statistical models. Property theories and components can each be validated (property theory validation; component certification), and we have some ideas about standard models and labels for both.

60 Co-refinement. Creating a PECT involves negotiation between the construction model (restricted vs. relaxed) and the reasoning framework (general vs. specific): relaxing the construction model offers more utility, restricting it less, while the pair must remain interpretable, validatable, and useful. (Illustration)

61 Predicting Latency of Controller Operations. The Substation Automation System (SAS) comprises an operator station, switch, and controller. Validation norms: 1. Operator PECT: MRE 0.10, proportion 0.80, confidence 0.90; 2. Controller PECT: MRE 0.05, proportion 0.80, confidence 0.99; 3. SAS PECT: MRE 0.10, proportion 0.80, confidence 0.90. (Illustration)

62 Switch Controller: Pin + IEC 61850. (Figure: the switch-controller assembly built from Pin components for IEC 61850 logical nodes (XCBR, CSWI, PIOC, MMXU, TCTR, TVTR) plus an OPC gateway, connected through switch/breaker position, select, and trip pins and driven by a clock signal.) (Illustration)

63 Start with the GRMA Property Theory. Generalized rate monotonic analysis (GRMA) is often used to predict schedulability under worst-case assumptions. Key concepts include priority-based scheduling, priority inversion, jobs, periods, work, latency, and various algebraic theorems relating these concepts. GRMA is quite mature and widely used. (Illustration)
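
For readers who have not used it, the sketch below shows the core of classical rate-monotonic response-time analysis in Python, one of the algebraic results GRMA builds on; the task set is hypothetical.

```python
import math
from typing import List, Optional, Tuple

def response_time(tasks: List[Tuple[float, float]], i: int) -> Optional[float]:
    """Worst-case response time of task i under preemptive, fixed-priority
    (rate-monotonic) scheduling. tasks[k] = (C_k, T_k), sorted by priority
    (index 0 = highest). Returns None if the deadline (= period) is missed."""
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        # Interference from all higher-priority tasks released during [0, r)
        interference = sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        r_next = c_i + interference
        if r_next > t_i:      # deadline missed
            return None
        if r_next == r:       # fixed point reached
            return r
        r = r_next

# Hypothetical task set: (execution time C, period T)
tasks = [(1.0, 4.0), (2.0, 6.0), (3.0, 12.0)]
for i in range(len(tasks)):
    print(f"task {i}: worst-case latency = {response_time(tasks, i)}")
```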

64 Controller Co-Refinement (basis). Construction rules: no mutex pins, no asynchronous pins, no topological cycles, a periodic clock (topology), and execution duration (property); basic CCL, basic RMA. Procedure: λW, worst-case latency, no preemption. (Illustration)

65 Controller Co-Refinement (λA) +1. Construction rules: mutex pins, no synchronization allowed. Procedure: λA, average-case latency over the hyperperiod. (Illustration)

66 Controller Co-Refinement (λWB) +2. Construction rules: synchronization allowed, super-ceiling protocol, unique thread priority. Analysis: λWB, worst-case latency, allowing blocking. (Illustration)

67 Controller Co-Refinement (λABA) +7. Construction rules: asynchronous pins, unordered asynchrony. Analysis: λABA, average latency, allowing blocking and asynchrony. (Illustration)

68 Predictive Strength of λABA (all assemblies). Results: confidence 99.04%, proportion 90%, upper bound < 2%; confidence 99.29%, proportion 80%, upper bound 1%; confidence 99.69%, proportion 95%, upper bound < 8%. Goal: N = 75, mean MRE N/A, confidence 99%, proportion 80%, upper bound 5%. Outperforms our 5% MRE goal for all assemblies. (Illustration)

69 Validation: Empirical vs. Formal. Validation is the process for trusting the soundness of a property theory. Formal validation is the technique for proving asserted claims about a property theory (mathematical proof; demonstrable evidence). Empirical validation is the technique for measuring the correlation of predicted and observed properties (direct observation; plausible evidence). (Introduction)

70 Empirical Validation in Short. 1. We predict properties of assemblies. 2. We measure the actual behavior of assemblies. 3. We compare prediction and actual behavior. 4. Then we can trust predicted behavior for assemblies in general.

71 Lambda* Performance Validation. Predicted property: latency of a task. Known property: fixed execution time of a component. We don't want to validate assembly A or B; we want to validate the PECT.

72 Outline. Setting the Context. Empirical Validation Workflow. More About Statistical Labels. Summary.

73 Step 1: Obtain Components. Options: use existing components; create them manually (write code); or specify them in CCL and generate code automatically (component specs (ccl) are run through the CCL code generator to produce components (dll)).

74 Step 1: Obtain Components (continued). Synthetic components: simulate various execution loads; specified in CCL; C code automatically generated; have both synchronous and asynchronous interfaces. Sporadic Server components: implement the Sporadic Server algorithm; an extension of the synthetic components; support only the asynchronous interface.

75 Step 2: Measure Components. Equivalent to "certification" of components: the mean and standard deviation are annotated back into the CCL specs. (Workflow: component specs (ccl) -> CCL code generator -> components (dll) -> measure component -> specs annotated with measures.)

76 Step 2: Measure Components (continued). The component C is instrumented at its receive (r) and send (s) pins; enter and leave events are timestamped, the intervals t1 through t6 are computed as differences of those timestamps, and t1 is taken as the component execution time.
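
A hedged sketch of that measurement idea: pair the instrumented enter/leave timestamps, difference them, and summarize the sample. The event names and trace format are hypothetical, not the Pin instrumentation API.

```python
from statistics import mean, stdev
from typing import Dict, List, Tuple

# A trace is a list of (timestamp_seconds, event) pairs, e.g. produced by
# instrumenting a component's receive (r) and send (s) pins.
Trace = List[Tuple[float, str]]

def execution_times(trace: Trace, enter: str = "s:enter",
                    leave: str = "s:leave") -> List[float]:
    """Pair each enter event with the next leave event and return the deltas."""
    times, pending = [], None
    for t, event in trace:
        if event == enter:
            pending = t
        elif event == leave and pending is not None:
            times.append(t - pending)
            pending = None
    return times

def certify(trace: Trace) -> Dict[str, float]:
    """Mean and standard deviation, as annotated back into the CCL spec."""
    samples = execution_times(trace)
    return {"mean": mean(samples),
            "stdev": stdev(samples) if len(samples) > 1 else 0.0}
```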

77 Step 3: Create Random Assemblies. We don't want to validate prediction for assembly A or B; we want to validate the PECT, i.e., the ability to create predictions for assemblies in general. We use a stratified sample space of assemblies.

78 Step 3: Create Random Assemblies (continued). Workflow: a sample space definition (xml) drives an assembly generator, which produces assembly specs (ccl); the component specs (ccl), annotated with measures, and the components (dll) come from the earlier steps.

79 Step 3: Create Random Assemblies (continued). The sample space definition is a table whose columns are variation points and whose rows characterize strata: assembly id, # clocks, # periodics, exec. time, clock period, comm. type, connections/src, harmonic?, plus an optional configuration of aperiodic tasks (clock id, distribution, interval mean, interval std. dev., aperiodic exec. time, priority rule, replenishment period, budget).
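
Read as a data structure, each row of that table fixes one stratum from which random assembly specifications are drawn. A minimal Python sketch under assumed field names (the real definition is an XML file whose schema is not shown here):

```python
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Stratum:
    clocks: int
    periodics: int
    exec_time_ms: Tuple[float, float]  # (low, high) range to draw from
    clock_period_ms: int
    harmonic: bool

def draw_assembly(stratum: Stratum, seed: int) -> dict:
    """Draw one random assembly description from a stratum."""
    rng = random.Random(seed)
    components = [
        {"kind": "periodic",
         "exec_ms": rng.uniform(*stratum.exec_time_ms),
         "period_ms": stratum.clock_period_ms}
        for _ in range(stratum.periodics)
    ]
    return {"clocks": stratum.clocks, "harmonic": stratum.harmonic,
            "components": components}

# Hypothetical strata; the SEI sample space used more variation points
sample_space: List[Stratum] = [
    Stratum(clocks=1, periodics=3, exec_time_ms=(1, 5), clock_period_ms=100, harmonic=True),
    Stratum(clocks=2, periodics=5, exec_time_ms=(1, 10), clock_period_ms=200, harmonic=False),
]
assemblies = [draw_assembly(s, seed=i) for i, s in enumerate(sample_space)]
```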

80 Step 4: Interpretation. The assembly specs (ccl) are interpreted into tasks and subtasks (the analytic view), extending the workflow of the previous steps.

81 Step 5: Prediction. The tasks and subtasks produced by interpretation are fed to the prediction step, which outputs predictions (csv).

82 Step 6: Assembly Code Generation. The CCL code generator produces executable assemblies (exe) from the assembly specs (ccl) and the components (dll).

83 Step 7: Run and Measure Assemblies. The assemblies (exe) are executed and measured, producing actual measures (csv) alongside the predictions (csv).

84 Step 7: Run and Measure Assemblies (continued). Measurement apparatus: for all assemblies, each task is executed in the runtime while observing the actual execution latency for each job; the measurement apparatus computes and records average job latency. (Setup: Win2K/RTX; the Pin runtime and the Pin measurement process run as RTX processes exchanging events and traces through shared memory.) (λABA validation)

85 Step 7: Run and Measure Assemblies (continued). Defining task latency: latency for task n = C3.s2 – clock.r1, i.e., measured from the clock's r1 event (start) to component C3's s2 event (stop); the figure shows components C1, C2, C3, each with pins r1, r2, s1, s2. (λABA validation)

86 Step 8: Statistical Analysis. The predictions (csv) and the actual measures (csv) from the previous steps are compared by statistical analysis.

87 Step 8: Statistical Analysis (continued). Results are checked against the validation goal. If not acceptable, investigate and repeat the process: does the property theory need to be refined? Are there problems in the measurement apparatus? Are there mismatched assumptions about the test environment? If acceptable, the output is a statistical label. Example (property theory, version 1): mean MRE 0.5%; confidence level 99.29%; proportion 80%; upper bound (UB) 1%. (Goal: confidence 99%, proportion 80%, UB 5%. Values based on a sample of 75 assemblies, 150 tasks, and 2,952 jobs.)

88 Outline. Setting the Context. Empirical Validation Workflow. More About Statistical Labels. Summary.

89 Validation Goal. Example: 8 out of 10 predictions for task latency will fall within 5% MRE, and we will have 99% confidence in this upper bound (confidence level 99%, proportion 80%, upper bound 5%). The validation goal defines statistical confidence. The norms were selected as a plot device for our validation, based on model problems posed by [Preiss 01]: Preiss, O. & Wegmann, A., "Towards a Composition Model Problem Based on IEC61850." (λABA validation)
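
One way to check such a goal from a sample of observed MREs is a one-sided lower confidence bound on the proportion of predictions that fall within the upper bound. The Clopper-Pearson bound used below is an assumption for illustration, not necessarily the procedure the SEI used.

```python
from scipy.stats import beta

def meets_goal(mres, ub=0.05, proportion=0.80, confidence=0.99) -> bool:
    """True if, with the given confidence, at least `proportion` of predictions
    have MRE <= ub, judged from the observed sample of MREs."""
    n = len(mres)
    k = sum(1 for m in mres if m <= ub)  # predictions within the bound
    if k == 0:
        return False
    # One-sided Clopper-Pearson lower confidence bound on the true proportion
    lower = beta.ppf(1 - confidence, k, n - k + 1)
    return lower >= proportion

# Example with hypothetical MRE values
print(meets_goal([0.01, 0.02, 0.004, 0.03, 0.01, 0.06, 0.02, 0.015, 0.01, 0.02]))
```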

90 Labeling Property Theories: Component Measure. Repeat a measurement on an item (a component) and average the sample to obtain the mean component latency. The true value of latency is not obtainable; the mean of a sample is an estimator of the population mean.

91 Labeling Property Theories: Assembly Measures. Repeat a measurement on an assembly (of some number of components) and average the sample to obtain the observed mean assembly latency; again, the mean value is considered the true value. Assemblies with aperiodic tasks require much more time.

92 Labeling Property Theories: Assembly Predictions. A prediction of an assembly property is the result of applying a property theory; it is an aggregate or derivative of component properties. The difference between observed and predicted values indicates the predictive power of the property theory.

93 Property Theory Labels (1). Compare predicted and observed mean assembly latency to assess the "goodness" of the property theory: error, relative error, and magnitude of relative error (MRE). Goal: predictions close to observed values (small MRE), and therefore trust in predictions. (Labeling Property Theories)
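
For reference, the three measures in Python (values are hypothetical):

```python
def error(predicted: float, observed: float) -> float:
    return predicted - observed

def relative_error(predicted: float, observed: float) -> float:
    return (predicted - observed) / observed

def mre(predicted: float, observed: float) -> float:
    """Magnitude of relative error: |predicted - observed| / observed."""
    return abs(predicted - observed) / observed

# Example: predicted vs. observed mean assembly latency (hypothetical values)
predictions = [10.2, 5.1, 7.7]
observations = [10.0, 5.0, 8.0]
mean_mre = sum(mre(p, o) for p, o in zip(predictions, observations)) / len(predictions)
print(mean_mre)
```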

94 Property Theory Labels (2). Correlations allow us to talk about the performance of the property theory for the sample: linear (Pearson's) and Spearman correlation. Inferential intervals allow us to infer the future performance of the property theory: one-tailed and two-tailed. (Labeling Property Theories)
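
Both correlation measures are standard; for example, with SciPy (hypothetical values):

```python
from scipy.stats import pearsonr, spearmanr

predicted = [10.2, 5.1, 7.7, 12.3, 3.4]
observed  = [10.0, 5.0, 8.0, 12.0, 3.5]

r_linear, _ = pearsonr(predicted, observed)   # Pearson's (linear) correlation
r_rank, _ = spearmanr(predicted, observed)    # Spearman's rank correlation
print(r_linear, r_rank)
```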

95 Generating a Statistical Label. The statistical label is the output of the empirical validation process: it describes the predictive powers of a PECT, is property-theory specific, and may contain both descriptive and inferential values (e.g., the sample of per-assembly MREs from repeated measurements, confidence intervals, and correlation analysis). (Labeling Property Theories)

96 Final Validation Data (by Job). (Figure: final validation data plotted by job; λABA validation.)

97 Reported Statistical Label (λABA validation). Property theory, version 1: mean MRE 0.5%; confidence level 99.29%; proportion 80%; upper bound (UB) 1%. (Goal: confidence 99%, proportion 80%, UB 5%; values based on a sample of 75 assemblies, 150 tasks, and 2,952 jobs.) That is, 8 out of 10 predictions will fall within 1% MRE, and we have 99% confidence in this upper bound; for a proportion of 90%, the UB MRE is 2% at > 99% confidence. This outperforms our 5% MRE goal for all assemblies.

98 Outline. Setting the Context. Empirical Validation Workflow. More About Statistical Labels. Summary.

99 Summary. Validation is one step in developing a PECT and supports co-refinement. Validation is the process for trusting the soundness of a property theory: formal validation and empirical validation. A statistical label is the end product of empirical validation for a property theory; it is how PECTs are labeled. Empirical validation for a PECT follows six general steps: define the validation goal, define the measures, define the sampling procedure, define the measurement apparatus, collect the validation data, and analyze the results. The empirical validation workflow enforces rigor and repeatability.

