1 Evaluation and Scalability of Goal Models URN Meeting Ottawa, January 16-18, 2008 Jennifer Horkoff PhD Candidate, Department of Computer Science, U of T Supervisor: Eric Yu

2 Introduction 2nd Year Ph.D. student in the Department of Computer Science. Research Interests: Requirements Modeling, Intentional (Goal) Modeling, Intentional Model Analysis/Evaluation, Model Scalability. PhD Topic (in progress): Analysis of i* models. This presentation will briefly outline current and past research topics potentially relevant to the definition of GRL and URN.

3 Outline Background i* Analysis –jUCMNav Evaluation –Interactive Forward Analysis of i* Models –Interactive Backward Analysis of i* Models –Using Satisfaction Arguments in i* Analysis Scalability –i* Scalability –Reusable i* Technology “Patterns” Conclusions

4 Background URN: User Requirements Notation –Connecting business processes with requirements concepts –Combines two notations: Use Case Maps (UCM) Goal-Oriented Requirement Language (GRL) –GRL is based on the i* and NFR Frameworks –Agent-Oriented, Intentional modeling framework (“why?” as well as “what?” and “how?”) –Captures stakeholders, their goals, and how these goals are achieved, including dependencies amongst stakeholders.

5 i* Syntax and Example

6 Analysis/Evaluation of i* Models The creation of i* models is in and of itself a useful activity: –explicitly captures the reasoning process –aids in communication –helps discover requirements. Can make further use of models by evaluating them. The purpose is to determine to what degree stakeholder goals will be satisfied or denied, given a particular situation or scenario. Claim: Evaluation is an important part of i* modeling; it allows users to determine whether goals are met and guides revision of the model and design, improving quality. Other modeling techniques (process, data models) do not have this capability.

7 jUCMNav Evaluation Overview Quantitative links between business processes in UCM and business goals in GRL. Initial satisfaction levels in GRL evaluation define strategies. Key Performance Indicators (KPIs) are used as part of Business Activity Monitoring to measure how well a process satisfies goals. –Four main dimensions of indicators: time, cost, quality and flexibility A KPI model connects business process monitors to GRL goals using quantified contribution links.

8 jUCMNav Evaluation Overview KPI models are kept separate from GRL models; individual KPI models are defined for different stakeholders. KPI values are mapped to evaluation levels: –Target Value, Threshold Value, Worst Value Figure From: A. Pourshahid, D. Amyot, P. Chen, M. Weiss, A. J. Forester, “Business Process Monitoring and Alignment: An Approach Based on the User Requirements Notation and Business Intelligence Tools”, IX Workshop on Requirements Engineering, WER’06, 2006.
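
To make the KPI-to-evaluation mapping concrete, here is a minimal Python sketch of one plausible linear interpolation between the worst, threshold and target values; the function name, the clamping behaviour and the exact formula are illustrative assumptions, not necessarily what jUCMNav implements.

```python
def kpi_to_evaluation(value, worst, threshold, target):
    """Map a measured KPI value onto a [-100, 100] evaluation level.

    Illustrative assumption: values at or beyond the target map to 100,
    values at or beyond the worst value map to -100, and the threshold is
    the neutral point (0); in between, interpolate linearly. Handles both
    'higher is better' and 'lower is better' indicators.
    """
    if target >= worst:                  # higher is better
        lo, hi, v = worst, target, value
    else:                                # lower is better: mirror the scale
        lo, hi = target, worst
        v = lo + hi - value
        threshold = lo + hi - threshold

    v = max(lo, min(hi, v))              # clamp into [worst, target]
    if v >= threshold:
        return 100.0 * (v - threshold) / ((hi - threshold) or 1.0)
    return -100.0 * (threshold - v) / ((threshold - lo) or 1.0)


# Example: a response-time KPI where lower is better
# (target = 2s, threshold = 5s, worst = 10s); a measured 4s is mildly positive.
print(kpi_to_evaluation(4.0, worst=10.0, threshold=5.0, target=2.0))  # ~33.3
```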

9 Analysis/Evaluation of i* Models Our approach: Qualitative, Forward/Backward, Interactive i* Evaluation Can evaluate models in two directions: –Forwards (bottom-up): asking “What if…?” questions –Backwards (top-down): asking “Is this possible?” “What is needed to achieve…?” Our approach to analysis is interactive and qualitative.

10 Interactive Analysis of i* Models Our approach to analysis is interactive. –The procedure prompts the user for input at various points. –Because i*/GRL models attempt to capture the desires of and interactions between stakeholders, they are inherently incomplete. –Decisions must be supplemented by expert knowledge. –We aim to make models which are “complete enough” to facilitate understanding and useful analysis.

11 Qualitative Analysis of i* Models Our approach to analysis is qualitative. –At the early requirements stage (“Early-RE”) where i* models are used, concrete quantitative information is often not available, yet we want to be able to model, understand and analyze the domain. –We use coarse-grained qualitative analysis values, based on evaluation from the NFR Framework. –Our approach does not exclude expansion to quantitative measures, if available.

12 Qualitative Analysis of i* Models Approximate mapping from jUCMNav Evaluation values to Qualitative values
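
A minimal sketch of the idea behind such a mapping (the figure itself is not reproduced here): numeric evaluation values are thresholded into the qualitative labels used by the procedure. The cut-off points below are illustrative assumptions, not the exact values from the slide.

```python
def to_qualitative(evaluation):
    """Map a numeric evaluation in [-100, 100] onto an NFR-style qualitative
    label. The cut-offs used here are assumptions for illustration only."""
    if evaluation >= 100:
        return "Satisfied"
    if evaluation > 0:
        return "Partially Satisfied"
    if evaluation == 0:
        return "Unknown"
    if evaluation > -100:
        return "Partially Denied"
    return "Denied"


assert to_qualitative(100) == "Satisfied"
assert to_qualitative(33) == "Partially Satisfied"
assert to_qualitative(-40) == "Partially Denied"
```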

13 Forwards, Qualitative, Interactive Evaluation of i* Models Evaluation procedure for i* models (MSc work) was developed, expanded and adapted from the evaluation procedure in the NFR Framework. Qualitative evaluation labels are propagated throughout the graph using a combination of propagation rules and human judgment. Human judgment is needed to resolve situations where conflicting evidence arrives at a softgoal. All other situations can be resolved through automatic rules.
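
To give a flavour of the procedure (this is not the OpenOME implementation), the sketch below propagates qualitative labels forward over contribution links and falls back to a human-judgment callback whenever an element receives conflicting evidence. The link types, label vocabulary and propagation rules are simplified assumptions based on the NFR-style rules described above.

```python
SAT, PSAT, UNKNOWN, CONFLICT, PDEN, DEN = (
    "Satisfied", "Partially Satisfied", "Unknown",
    "Conflict", "Partially Denied", "Denied")

def propagate_one(label, link_type):
    """Simplified single-link propagation rules (assumptions for illustration)."""
    if link_type == "make":
        return label
    if link_type == "help":
        return {SAT: PSAT, PSAT: PSAT, DEN: PDEN, PDEN: PDEN}.get(label, UNKNOWN)
    if link_type == "hurt":
        return {SAT: PDEN, PSAT: PDEN, DEN: PSAT, PDEN: PSAT}.get(label, UNKNOWN)
    if link_type == "break":
        return {SAT: DEN, DEN: SAT}.get(label, UNKNOWN)
    return UNKNOWN

def forward_evaluate(initial_labels, links, ask_human):
    """initial_labels: {element: label}; links: (source, type, target) triples
    listed bottom-up; ask_human(element, evidence) resolves conflicting evidence."""
    labels = dict(initial_labels)
    order = []
    for _, _, dst in links:              # visit targets in bottom-up order
        if dst not in order:
            order.append(dst)
    for element in order:
        evidence = [propagate_one(labels.get(src, UNKNOWN), lt)
                    for src, lt, dst in links if dst == element]
        if len(set(evidence)) == 1:
            labels[element] = evidence[0]            # automatic rule suffices
        else:
            labels[element] = ask_human(element, evidence)
    return labels

# Toy fragment echoing the slide's example; the link types are guesses.
links = [
    ("Allow Peer-to-Peer Technology", "hurt", "PC Users Abide by Licensing Regulations"),
    ("Allow Peer-to-Peer Technology", "help", "Desirable PC Products"),
    ("PC Users Abide by Licensing Regulations", "help", "Sell PC Products for Profit"),
    ("Desirable PC Products", "help", "Sell PC Products for Profit"),
]
initial = {"Allow Peer-to-Peer Technology": DEN}     # "do not allow"
result = forward_evaluate(initial, links, lambda e, ev: CONFLICT)
print(result["Sell PC Products for Profit"])         # -> Conflict
```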

14 Forward Evaluation of i* Models Demonstrate the procedure through an example: Simple Model depicting the Trusted Computing Domain Step 1: Formulate question –If the PC Product Provider decides not to Allow Peer-to-Peer Technology, what effect will this have on Sell PC Products for Profit? Step 2: Place Initial Labels reflecting the Question

15 Interactive Evaluation of i* Models Step 3: Propagate labels Step 4: Resolve labels Iterate on steps 3 and 4 until all labels have been propagated
–Human Intervention: Affordable PC Products receives the following labels: Partially Denied from Obtain PC Products from Data Pirate; Partially Denied from Purchase PC Products. Select Label… Select Denied.
–Human Intervention: Profit receives the following labels: Partially Denied from Desirable PC Products; Partially Satisficed from PC Users Abide by Licensing Regulations. Select Label… Select Conflict.

16 Interactive Evaluation of i* Models Step 5: Analyze result –Preventing the use of peer-to-peer technology will reduce piracy, but will also make products less desirable to users –The overall effect on Sell PC Products for Profit is both positive and negative Step 6: Repeat with a new analysis question

17 Backwards, Qualitative, Interactive Evaluation of i* Models Work in progress; an initial version was completed for an Automated Verification course project. Based on work that implements backwards analysis for goal models. Qualitative evaluation labels are propagated throughout the graph in a top-down manner, again using a combination of propagation rules and human judgment. An i* model and propagation rules are converted to axioms in CNF. The axioms are used with a SAT solver in an iterative procedure.
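
As a heavily simplified sketch of this idea (Boolean rather than multi-valued labels, and a brute-force check standing in for a real SAT solver), a model fragment can be encoded as CNF clauses: forward axioms say that satisfied contributors satisfy their target, "backward" completion axioms say an element holds only if some combination of its contributors does, and the target is asserted as a unit clause. The element names and clauses below are assumptions for illustration only.

```python
from itertools import product

# Propositions mean "this element is at least Partially Satisficed".
VARS = ["RestrictPassword", "SecretQuestion", "Security", "Usability", "AttractUsers"]

# Each clause is a disjunction of (variable, polarity) literals; the formula
# is their conjunction. Toy encoding of the slide's example:
CLAUSES = [
    # forward axioms: contributors imply their targets
    [("RestrictPassword", False), ("Security", True)],
    [("SecretQuestion", False), ("Security", True)],
    [("SecretQuestion", False), ("Usability", True)],
    [("Security", False), ("Usability", False), ("AttractUsers", True)],
    # backward completion axioms: an element holds only if contributors do
    [("AttractUsers", False), ("Security", True)],
    [("AttractUsers", False), ("Usability", True)],
    [("Security", False), ("RestrictPassword", True), ("SecretQuestion", True)],
    [("Usability", False), ("SecretQuestion", True)],
    # target: Attract Users must be (at least partially) satisficed
    [("AttractUsers", True)],
]

def satisfying_assignments(variables, clauses):
    """Brute-force enumeration; the actual procedure hands the CNF to a SAT solver."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(any(assignment[v] == pol for v, pol in clause) for clause in clauses):
            yield assignment

witness = next(satisfying_assignments(VARS, CLAUSES))
print({k: witness[k] for k in ("RestrictPassword", "SecretQuestion")})
# -> in this toy encoding, asking a secret question alone can achieve the target
```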

18 Backwards Evaluation of i* Models Demonstrate the procedure through an example: Simple Model depicting the Trusted Computing Domain Step 1: Formulate question –Is there an assignment of Target Labels such that Attract Users is Partially Satisficed? –Target: Attract Users Partially Satisficed –Input Elements: Restrict Structure of Password, Ask for Secret Question

19 Backwards Evaluation of i* Models Step 2: Iterative Procedure Begins –Procedure runs, determines that there may be an assignment in which Attract Users is Partially Satisficed (depending on human judgment) –Procedure prompts the user for human judgment on nodes with label conflicts, starting from the “top” down.
–Human Judgment: Attract Users must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value? Input: Security and Usability Partially Satisficed.
–Human Judgment: Security must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value? Input: Restrict Structure of Password and Ask for Secret Question Satisfied.
–Human Judgment: Usability must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value? Input: Restrict Structure of Password Denied and Ask for Secret Question Satisfied.
Procedure runs… More elements needing human judgment are found.
Procedure runs… Conflict! Satisfying assignment not found. Go back to last round of human judgment.

20 Backwards Evaluation of i* Models Step 2: Iterative Procedure Continues –Procedure runs, determines that there may be an assignment in which Attract Users is Partially Satisficed (depending on human judgment) –Procedure prompts the user for human judgment on nodes with label conflicts, starting from the “top” down.
–Human Judgment: Security must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value? Previous: Restrict Structure of Password and Ask for Secret Question Satisfied. No new input.
–Human Judgment: Usability must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value? Previous: Restrict Structure of Password Denied and Ask for Secret Question Satisfied. No new input.
–Human Judgment: Attract Users must be Partially Satisficed. What combinations of the elements contributing to this element would produce this value? Previous: Security and Usability Partially Satisficed. New input: Security Satisfied and Usability has a Conflict value.
–Human Judgment: Usability must have a Conflict value. What combinations of the elements contributing to this element would produce this value? Input: Restrict Structure of Password and Ask for Secret Question Satisfied.

21 Backwards Evaluation of i* Models Step 3: Satisfying Assignment Found/Not Found –When a satisfying assignment that no longer requires human judgment is found, the procedure ends, reporting the values needed to achieve the target. –If a satisfying assignment is not found, the procedure ends, reporting that the target is not possible.
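
Independently of the CNF details, the interaction loop described in Steps 2 and 3 might be skeletonised as follows; solve, ask_human and the simple backtracking policy are placeholders to illustrate the control flow, not the actual implementation.

```python
def backward_evaluate(model, target, solve, ask_human, max_rounds=20):
    """Skeleton of the iterative backward procedure (illustrative assumption).

    solve(model, constraints) -> (found, assignment, open_questions)
    ask_human(question)       -> new constraints, or None for "no new input"
    Returns an assignment achieving the target, or None if it is not possible.
    """
    constraints = [target]
    history = []                              # accepted rounds of human judgment
    for _ in range(max_rounds):
        found, assignment, open_questions = solve(model, constraints)
        if found and not open_questions:
            return assignment                 # satisfying assignment, nothing pending
        if found:
            answers = [ask_human(q) for q in open_questions]
            new = [a for a in answers if a is not None]
            if not new:                       # nothing new offered for this branch
                if not history:
                    return None
                constraints = history.pop()   # backtrack
                continue
            history.append(list(constraints))
            constraints.extend(new)
        else:                                 # conflict: revisit last judgment round
            if not history:
                return None                   # target not achievable
            constraints = history.pop()
    return None
```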

22 Analysis/Evaluation of i* Models Claims: –Evaluation increases the modeler’s knowledge of the domain. –Evaluation leads the modeler to make changes to the model, improving the quality of the model. In the thesis, these claims are backed up for Forward Analysis by several examples. The Forward Procedure was implemented in a deprecated version of the OpenOME tool and will be implemented again, along with the Backward Procedure, in the new version of OpenOME using EMF. Future work will include further applying both procedures to several case studies, evaluating their usefulness in real-life situations.

23 Using Satisfaction Arguments in the Analysis/Evaluation of i* Models Based on work by N. Maiden’s group in the U.K. –N. Maiden, J. Lockerbie, D. Randall, S. Jones, D. Bush, “Using Satisfaction Arguments to Enhance i* Modelling of an Air Traffic Management System”, 15th IEEE International Requirements Engineering Conference, 2007. M. Jackson’s notion of a satisfaction argument: –D, S |- R –Properties of the Domain, along with the Specification, can be used to show that one or more Requirements hold. This has been incorporated into i* modeling by adding structured, textual satisfaction arguments to justify satisfaction through means-ends and contribution links. Motivated by an inability to capture the justifications behind model structure elicited during stakeholder workshops. Future Work: Incorporating Satisfaction (or “Evaluation”) Arguments into i* Evaluation – capturing justifications for relationships and human judgment.
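
One way to picture such arguments is as structured records attached to model links; the classes and field names below are assumptions sketched for illustration, not the representation used by Maiden et al.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SatisfactionArgument:
    """A structured, textual justification in the spirit of D, S |- R:
    domain properties plus a specification argue that a requirement holds."""
    domain_properties: List[str]               # D: assumptions about the domain
    specification: List[str]                   # S: what the system will do
    requirement: str                           # R: the requirement argued to hold
    rationale: str = ""                        # free text captured in a workshop

@dataclass
class ContributionLink:
    source: str
    target: str
    contribution: str                          # e.g. "help", "hurt", "make"
    argument: Optional[SatisfactionArgument] = None

# Hypothetical example, reusing element names from the backward-analysis model.
link = ContributionLink(
    source="Ask for Secret Question",
    target="Security",
    contribution="help",
    argument=SatisfactionArgument(
        domain_properties=["Attackers rarely know users' personal answers"],
        specification=["The system asks a secret question before a password reset"],
        requirement="Password resets do not weaken Security",
        rationale="Elicited in a stakeholder workshop; to be revisited."))
```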

24 Analysis/Evaluation of i* Models: Other i* and Related Procedures There are several other procedures for i* analysis introduced by other research groups, for example: Work by X. Franch in Spain uses the structure of i* models as a means to measure desired properties such as security and predictability. –X. Franch, “On the Quantitative Analysis of Agent-Oriented Models”, CAiSE’06, 2006, pp. 495-509. Work in Italy facilitates either qualitative or quantitative propagation through goal models. –P. Giorgini, J. Mylopoulos, E. Nicchiarelli, R. Sebastiani, “Reasoning with Goal Models”, Proceedings of the 21st International Conference on Conceptual Modeling (ER 2002), pp. 167-181, 2002. Work in the U.K. by N. Maiden’s group analyzes compliance of elements based on existing requirements and uses overall evaluation values for each actor. –N. Maiden, J. Lockerbie, D. Randall, S. Jones, D. Bush, “Using Satisfaction Arguments to Enhance i* Modelling of an Air Traffic Management System”, 15th IEEE International Requirements Engineering Conference, 2007.

25 Analysis/Evaluation of i* Models: Incorporating into GRL Definition? How do we account for the existence of i*/GRL evaluation and analysis in the URN Standard? Should we? How do we leave our consideration of i*/GRL evaluation and analysis open enough to facilitate various approaches to analysis?

26 Model Scalability i* models can grow to unmanageable sizes. How do we deal with i* scalability? How does our concern for scalability affect the URN standard? Example i* Model from the Kids Help Phone Project, University of Toronto, 2005

27 Model Scalability What are some of the mechanisms we can employ to deal with scalability? –Model subsets/Modular views? –Tabular views? –Queries? –Slicing? –Layers? –Patterns? –Etc. Effective tool support is needed. How do we account for, or not exclude the possibility of, these techniques within the Standard?
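
As one concrete example of these mechanisms, slicing could extract just the sub-model that can influence a chosen element, so that only the relevant fragment is displayed or evaluated. The graph representation below is an illustrative assumption, not an existing jUCMNav or OpenOME feature.

```python
from collections import defaultdict

def backward_slice(links, element_of_interest):
    """links: (source, target) pairs; returns every element that can
    transitively influence the element of interest."""
    incoming = defaultdict(set)
    for src, dst in links:
        incoming[dst].add(src)
    slice_, frontier = {element_of_interest}, [element_of_interest]
    while frontier:
        node = frontier.pop()
        for src in incoming[node]:
            if src not in slice_:
                slice_.add(src)
                frontier.append(src)
    return slice_

# Toy usage with element names from the earlier forward-evaluation example.
links = [("Desirable PC Products", "Sell PC Products for Profit"),
         ("Allow Peer-to-Peer Technology", "Desirable PC Products"),
         ("Restrict Structure of Password", "Security")]
print(backward_slice(links, "Sell PC Products for Profit"))
# -> the Security fragment is excluded from the slice
```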

28 Reusable i* Technology “Patterns” Work with Markus Strohmaier, Jorge Aranda, Steve Easterbrook, Eric Yu Claim: Developing generalized models representing specific technology types (e.g. wikis, discussion forums, chat rooms) can aid the process of analyzing the appropriateness of technologies for specific contextual situations. What effects could pattern use have on scalability?

29 Reusable i* Technology “Patterns” General Methodology: –Develop contextualized model –Integrate pre-existing technology pattern into model, adapt pattern as necessary –Evaluate effectiveness of technology in contextual situation (using i* evaluation method) –Repeat steps 2 and 3 for each promising technology –Come up with a solution
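
The second step of this methodology, integrating a pattern into the contextualized model, is essentially a merge of two model fragments with some renaming. The sketch below shows one way such a merge could be automated; the data representation, the bindings mechanism and the example element names are all assumptions for illustration, not existing tool support.

```python
def integrate_pattern(context_model, pattern, bindings):
    """Merge a technology pattern into a contextual i* model.

    context_model / pattern: dicts with 'elements' (set of names) and
    'links' (set of (source, type, target) triples).
    bindings: maps generic pattern elements (e.g. 'Technology User') to
    concrete context actors -- one assumed way of expressing adaptation.
    """
    rename = lambda name: bindings.get(name, name)
    elements = set(context_model["elements"]) | {rename(e) for e in pattern["elements"]}
    links = set(context_model["links"]) | {
        (rename(s), t, rename(d)) for s, t, d in pattern["links"]}
    return {"elements": elements, "links": links}

# Hypothetical usage: adapt a generic wiki pattern to a counselling context.
context = {"elements": {"Counsellor", "Share Knowledge"}, "links": set()}
wiki_pattern = {"elements": {"Wiki", "Technology User", "Content Be Up To Date"},
                "links": {("Wiki", "help", "Content Be Up To Date"),
                          ("Technology User", "depends", "Wiki")}}
merged = integrate_pattern(context, wiki_pattern, {"Technology User": "Counsellor"})
print(sorted(merged["elements"]))
```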

30 Reusable i* Technology “Patterns” We have used this method in the Kids Help Phone study to test the viability of various technologies used for knowledge management. We have collected the data for an experiment testing the utility of technology patterns using students from a course. Additional Claims: –The resulting model is more detailed and of a higher quality than if the pattern were not used. –The process of integrating the pattern is easier than developing the model from scratch. Patterns do not help with scalability of the overall model, but may help by modularizing modeling steps (creation, understanding, etc.) Paper submitted to REFSQ’08: –“Can Patterns improve i* Modeling? Two Exploratory Studies”

31 Conclusions i*/GRL Analysis/Evaluation is an important capability of the modeling language. How do we account for this in the standard while still being open to different approaches? Scalability is an important issue in relation to i*/GRL usage. Should this issue be accounted for in GRL standardization? How?

32 Thank you! jenhork@cs.utoronto.ca eric@cs.utoronto.ca

33 References
J. Horkoff, Using i* Models for Evaluation, M.Sc. Thesis, Department of Computer Science, University of Toronto, 2006.
J. Horkoff, E. Yu, L. Liu, “Analyzing Trust in Technology Strategies”, International Conference on Privacy, Security and Trust (PST 2006), Markham, Ontario, Canada, 2006.
L. Chung, B. A. Nixon, E. Yu, J. Mylopoulos, Non-Functional Requirements in Software Engineering, Kluwer Academic Publishers, 2000.
“OpenOME, an open-source requirements engineering tool”, retrieved August 2007, from www.cs.toronto.edu/km/openome/
M. Jackson, Software Requirements and Specifications, Addison-Wesley, 1995.
N. Maiden, J. Lockerbie, D. Randall, S. Jones, D. Bush, “Using Satisfaction Arguments to Enhance i* Modelling of an Air Traffic Management System”, 15th IEEE International Requirements Engineering Conference, 2007.
A. Pourshahid, D. Amyot, P. Chen, M. Weiss, A. J. Forester, “Business Process Monitoring and Alignment: An Approach Based on the User Requirements Notation and Business Intelligence Tools”, IX Workshop on Requirements Engineering (WER’06), 2006.
X. Franch, “On the Quantitative Analysis of Agent-Oriented Models”, CAiSE’06, 2006, pp. 495-509.
P. Giorgini, J. Mylopoulos, E. Nicchiarelli, R. Sebastiani, “Reasoning with Goal Models”, Proceedings of the 21st International Conference on Conceptual Modeling (ER 2002), pp. 167-181, 2002.
J. Aranda, N. Ernst, J. Horkoff, S. Easterbrook, “A Framework for Empirical Evaluation of Model Comprehensibility”, Modeling in Software Engineering (MiSE) Workshop at ICSE 2007, Minneapolis, MN, May 2007.
M. Strohmaier, E. Yu, J. Horkoff, J. Aranda, S. Easterbrook, “Analyzing Knowledge Transfer Effectiveness - An Agent-Oriented Approach”, 40th Hawaii International Conference on System Sciences (HICSS-40 2007), HI, USA, 2007.

34 Other Topics Modeling and Analyzing Technology Strategies Strategy Analysis Using Goal Models i* Modeling for Knowledge Management Analysis (KTA Method) Framework for Assessing the Comprehensibility of Models

35 Modeling and Analyzing Technology Strategies As technology design becomes increasingly motivated by business strategy, technology users become wary of vendor intentions. Conversely, technology producers must discover strategies to gain the business of consumers. Both parties have a need to understand how business strategies shape technology design, and how such designs affect stakeholder goals. Claim: A goal-based methodology can be introduced which aids in the analysis of technology strategies, for both technology producers and consumers. Detailed Case Study: Trusted Computing J. Horkoff, E. Yu, L. Liu (2006) Analyzing Trust in Technology Strategies. International Conference on Privacy, Security and Trust (PST 2006), Markham, Ontario, Canada. 12 pages. Accepted June 27, 2006. Working on a journal submission.

36 Modeling and Analyzing Technology Strategies Example: Trusted Computing Case Study (example model)

37 i* Modeling for Knowledge Management Analysis (KTA Method) Work with Markus Strohmaier (first author), Jorge Aranda, Eric Yu, Steve Easterbrook. Claim: The analysis capabilities offered by i* modeling would complement the analysis of knowledge transfer studied within the field of knowledge management. Developed the Knowledge Transfer Agent (KTA) Method where the means of knowledge transfer are envisioned and modeled as intentional agents within an i* model. Can analyze the feasibility of different knowledge transfer mechanisms from a goal point of view before they are implemented: –Are stakeholder goals satisfied? –Is the knowledge transfer mechanism a success according to its ascribed goals?

38 i* Modeling for Knowledge Management Analysis (KTA Method) Have been working with the Kids Help Phone organization for 2+ years on a strategic requirements analysis project. Used this setting to develop and test the KTA method. M. Strohmaier, E. Yu, J. Horkoff, J. Aranda, S. Easterbrook (2007) Analyzing Knowledge Transfer Effectiveness - An Agent-Oriented Approach. 40th Hawaii International Conference on System Sciences (HICSS-40 2007), HI, USA. Currently working on a journal paper.

39 Framework for Assessing the Comprehensibility of Models Work with Jorge Aranda, Neil Ernst, Steve Easterbrook Original Intention: Design an experiment to “prove” that i* models aid in comprehensibility. This led to an overview of related experiments and the discovery of several issues which make such experiments difficult to design. Revised Purpose: Propose a framework for evaluating the comprehensibility of models in general. Framework includes suggestions and guidelines involving: –Measuring comprehensibility –Articulating modeling framework theories –Developing hypotheses –Using existing theories –Choosing domains, participants, etc. J. Aranda, N. Ernst, J. Horkoff, S. Easterbrook (2007) A Framework for Empirical Evaluation of Model Comprehensibility. Modeling in Software Engineering (MiSE) Workshop with ICSE 2007, Minneapolis, MN, May 2007. Future work: Apply and revise framework! (First candidate: i* models)

40 Other Interests Model “Presentability” to stakeholders. –How can stakeholders understand models? –How much training is needed? –Are they willing? Verification/Validation for conceptual models. –How can such (potentially large) models be verified? –By stakeholders?

