Trustworthy Service Selection and Composition. Chung-Wei Hang, Munindar P. Singh. Presented by A. Moini.

Content
- Service-oriented computing
- Preview (the paper's key idea)
- Probabilistic service selection & composition approaches
- Experimental results
- Summary

Service-Oriented Computing
- Every computing resource is packaged as a service
- Services are application building blocks: the unit of functionality, integration, and composition
- Individual services can be composed to create composite services
- Services have dependencies on other, constituent services
- A consumer service has no knowledge of the dependencies of the services it consumes

Service-Oriented Computing: Challenges
- Service composition (binding) is a design-time activity based on functional properties meeting consumer requirements, not on quality attributes
  - Functional properties: service types, published as a WSDL contract
  - Quality attributes: throughput, response time, availability, ...
- Service quality varies by service instance and over time; service quality may change unpredictably

Contributions
A probabilistic trust-aware service selection and composition model that:
- takes into account service consumers' requirements (e.g., qualities of service)
- takes into account service composition patterns
- considers service qualities as they apply to service instances; the quality of a component service may affect the whole composition (example: reliability)
- rewards and punishes constituent services dynamically

Service Composition Patterns (BPEL Primitives)
- SWITCH: chooses exactly one constituent based on some criteria
- MAX: composes quality by inheriting from the child with the highest quality value
- MIN: composes quality by inheriting from the child with the lowest quality value
- SUM: yields the composite quality value as the sum of the quality values obtained from all constituent services
- PRODUCT: yields the composite quality value as the product of the quality values obtained from all constituent services
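A minimal Python sketch (illustrative, not from the paper) of how a composite quality value can be derived from constituent qualities under each pattern; the SWITCH weights are an assumed multinomial:

```python
import random

def compose(op, qualities, switch_weights=None):
    """Derive a composite quality value from constituent qualities
    under one of the BPEL-style composition patterns."""
    if op == "SWITCH":
        # Choose exactly one constituent per a multinomial distribution
        # (uniform weights assumed if none are given).
        weights = switch_weights or [1.0 / len(qualities)] * len(qualities)
        return random.choices(qualities, weights=weights, k=1)[0]
    if op == "MAX":
        return max(qualities)   # e.g., latency of a flow: slowest branch dominates
    if op == "MIN":
        return min(qualities)   # e.g., throughput of a sequence: bottleneck dominates
    if op == "SUM":
        return sum(qualities)   # e.g., throughput of a flow: branches add up
    if op == "PRODUCT":
        p = 1.0
        for q in qualities:
            p *= q              # e.g., success probability of a flow
        return p
    raise ValueError(f"unknown operator: {op}")

# Example: three constituents with success probabilities 0.9, 0.8, 0.95
print(compose("PRODUCT", [0.9, 0.8, 0.95]))  # 0.684
```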

BPEL Services Diagram (figure)

Trust-Aware Service Selection Model

Trustworthiness of a service is estimated from direct experience: the QoS previously received from the service. The consumer:
- maintains its own local model to determine whether to reward or penalize services, based on direct experience
- selects services and composes them into a composite service
- evaluates the composite service with respect to its quality attributes
- applies a learning method to update its model of the services
Special case: when selecting an atomic service, the consumer has less information to learn from.
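The paper's learning methods are the Bayesian and Beta-mixture models described next; as a minimal warm-up sketch (my construction, not the paper's model), a Beta-reputation style record shows the shape of this reward/penalize loop over direct experience:

```python
class BetaTrust:
    """Minimal Beta-reputation style trust record: alpha counts
    satisfactory outcomes, beta counts unsatisfactory ones."""
    def __init__(self):
        self.alpha = 1.0  # uniform Beta(1, 1) prior
        self.beta = 1.0

    def update(self, satisfactory):
        # Reward on good quality, penalize on bad quality.
        if satisfactory:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def trust(self):
        # Expected probability of receiving satisfactory quality.
        return self.alpha / (self.alpha + self.beta)

# Consumer loop: invoke, evaluate, update its own local model, select.
services = {"s1": BetaTrust(), "s2": BetaTrust()}
for name, satisfactory in [("s1", True), ("s1", True), ("s2", False)]:
    services[name].update(satisfactory)
best = max(services, key=lambda s: services[s].trust())
print(best, services[best].trust())
```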

Trust-Aware Service Selection Models: Two Alternatives
Bayesian model
- models compositions via Bayesian networks in partially observable settings
- captures the dependency among composite and constituent services
- adaptively updates trust to reflect the most recent quality
- uses online learning to track service behavior, showing how a composite service's quality depends upon its constituents' quality
Beta-mixture model
- learns not only the distribution of composite quality but also the responsibility of each constituent service for it, without actually observing the constituents' performance
- learns the quality distribution of the services
- quantifies how much each constituent service contributes to the overall composition

Trust-Aware Service Selection Models: Two Alternatives (figure)

Service Composition: Bayesian Model
P(T) denotes the probability of obtaining satisfactory quality from service T, i.e., its trust. (Figure: Bayesian network relating a composite service's trust to that of its atomic constituents.)

Service Composition: Bayesian Model
- The conditional probability table associated with each node provides a basis for determining how much responsibility to assign to constituent services
- The conditional probabilities represent the level of trust the consumer places in the constituent services in a composition
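As a toy illustration of how such a table supports responsibility assignment (the numbers and the two-constituent structure are assumptions, not the paper's network), Bayes' rule over the CPT yields a posterior over which constituent misbehaved:

```python
# Binary quality: True = satisfactory. Illustrative CPT for a composite C
# whose quality tends to be satisfactory when both constituents A and B are.
p_a, p_b = 0.9, 0.6                        # prior trust in A and B (assumed)
cpt_c = {                                  # P(C=True | A, B), assumed numbers
    (True, True): 0.95,
    (True, False): 0.20,
    (False, True): 0.25,
    (False, False): 0.05,
}

# Observe C = False; compute posterior P(A, B | C=False) by Bayes' rule.
posterior = {}
for a in (True, False):
    for b in (True, False):
        prior = (p_a if a else 1 - p_a) * (p_b if b else 1 - p_b)
        posterior[(a, b)] = prior * (1 - cpt_c[(a, b)])
z = sum(posterior.values())
posterior = {k: v / z for k, v in posterior.items()}

# Marginal blame: probability each constituent was unsatisfactory.
blame_a = posterior[(False, True)] + posterior[(False, False)]
blame_b = posterior[(True, False)] + posterior[(False, False)]
print(f"P(A bad | C bad) = {blame_a:.2f}, P(B bad | C bad) = {blame_b:.2f}")
```

With these assumed numbers the less-trusted constituent B absorbs most of the blame (about 0.82 vs. 0.21), which is exactly the responsibility-assignment behavior the CPT is meant to support.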

Service Composition: Dealing with Incomplete Data
- Model variables may not be observable; data is often incomplete
- Variables without data are treated as latent variables
- Expectation-Maximization (EM) is used to estimate the distribution parameters by maximizing likelihood; these are then used to calculate the expected values of the latent variables
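A minimal sketch of that E/M loop, assuming a single latent constituent A with known conditionals P(C | A); the paper's networks are larger, but the mechanics are the same:

```python
# EM sketch (illustrative, not the paper's exact model): the constituent
# service A is never observed; only the composite outcome C is. Given a
# known conditional P(C=True | A), estimate the latent trust p = P(A=True).
p_c_given_a = {True: 0.9, False: 0.2}      # assumed, known conditionals
observations = [True, True, False, True, False, True, True, True]  # C outcomes

p = 0.5                                     # initial guess for P(A=True)
for _ in range(50):
    # E-step: expected value of the latent A for each observation of C.
    resp = []
    for c in observations:
        like_t = p_c_given_a[True] if c else 1 - p_c_given_a[True]
        like_f = p_c_given_a[False] if c else 1 - p_c_given_a[False]
        resp.append(p * like_t / (p * like_t + (1 - p) * like_f))
    # M-step: re-estimate the parameter from the expected latent values.
    p = sum(resp) / len(resp)
print(f"estimated P(A behaves well) = {p:.3f}")
```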

Service Composition: Dealing with Incomplete Data (figure)

Service Composition: Beta-Mixture Model

Service Composition: Beta-Mixture Model
The mixture distribution is estimated by maximizing the log-likelihood function using the EM algorithm.

Service Composition: Beta-Mixture Model Estimation (EM Algorithm Steps)
EM is used as a sequential online learning algorithm: it is repeated whenever the consumer makes new observations.
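A compact sketch of those steps for a two-component Beta mixture over quality samples in (0, 1). The M-step here uses method-of-moments updates for the Beta parameters instead of the exact maximum-likelihood solution the paper uses; in the sequential setting, the fit is simply rerun (or warm-started) as new observations arrive:

```python
import math
import random

def beta_pdf(x, a, b):
    coef = math.gamma(a + b) / (math.gamma(a) * math.gamma(b))
    return coef * x ** (a - 1) * (1 - x) ** (b - 1)

def em_beta_mixture(xs, k=2, iters=100):
    """Fit a k-component Beta mixture to quality samples in (0, 1)."""
    weights = [1.0 / k] * k
    params = [(1 + i, k - i) for i in range(k)]  # spread-out initial (a, b)
    for _ in range(iters):
        # E-step: responsibility of each component for each sample.
        resp = []
        for x in xs:
            probs = [w * beta_pdf(x, a, b) for w, (a, b) in zip(weights, params)]
            z = sum(probs)
            resp.append([p / z for p in probs])
        # M-step: weighted moments -> Beta parameters per component
        # (method of moments as a stand-in for the exact MLE).
        for j in range(k):
            r = [row[j] for row in resp]
            n_j = sum(r)
            weights[j] = n_j / len(xs)
            mean = sum(ri * xi for ri, xi in zip(r, xs)) / n_j
            var = sum(ri * (xi - mean) ** 2 for ri, xi in zip(r, xs)) / n_j
            common = mean * (1 - mean) / max(var, 1e-9) - 1
            params[j] = (max(mean * common, 1e-3), max((1 - mean) * common, 1e-3))
    return weights, params

# Two constituents: one good (quality near 0.8), one bad (near 0.2).
samples = [random.betavariate(9, 2) for _ in range(200)] + \
          [random.betavariate(2, 8) for _ in range(200)]
print(em_beta_mixture(samples))
```

The recovered mixing weights play the role of constituent responsibilities: they indicate how much each component service contributes to the composite quality distribution.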

Experimental Evaluations

Composition Operators: Service Quality Metrics and Interaction Types

Operator | Composition semantics | Related quality metric
SWITCH | chooses exactly one child based on a predefined multinomial distribution; composite quality follows that one child | n/a (follows the chosen child)
MAX | composes quality by inheriting from the child with the highest quality value | latency for a flow
MIN | composes quality by inheriting from the child with the lowest quality value | throughput for a sequence
SUM | yields the composite quality value as the sum of the quality values obtained from all children | throughput for a flow
PRODUCT | yields the composite quality value as the product of the quality values obtained from all children | failure for a flow

Experimental Results: Bayesian Approach

Composite Service C: Trust Estimation with the SWITCH Operator (figure)

Composite Service C: Conditional Trust with the SWITCH Operator (figure panels: good service vs. bad service)

Bayesian vs. Naïve Prediction Errors (80% missing data)

Conditional Trust in a Composite Service (figure panels: MAX and MIN operators, 40% data missing)

Dealing with Dynamic Behavior

Random-Walk Service and Cheating Constituent Service (figure panels)
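To clarify the two dynamic behaviors simulated here (the step sizes and defection round below are my assumptions, not the paper's parameters):

```python
import random

def random_walk_service(rounds, start=0.7, step=0.05):
    """Quality drifts each round by a small random step, clamped to [0, 1]."""
    q, trace = start, []
    for _ in range(rounds):
        q = min(1.0, max(0.0, q + random.uniform(-step, step)))
        trace.append(q)
    return trace

def cheating_service(rounds, honest_until=50, good=0.9, bad=0.2):
    """Behaves well to accumulate trust, then defects (assumed schedule)."""
    return [good if t < honest_until else bad for t in range(rounds)]

# A trust model that updates adaptively should track the random walk and
# quickly punish the cheater once it defects after round 50.
print(random_walk_service(5), cheating_service(100)[48:53])
```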

Actual and Estimated Parameters

Estimated Beta Mixture vs. Actual Distribution and Trust Samples (SWITCH composition) (figure). The Beta mixture learns accurate distributions for both component services: one provides good service (left peak); the other provides bad service (right peak).

Kolmogorov-Smirnov Test: FCM-MM vs. Beta Mixture (figure)
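For reference, the two-sample Kolmogorov-Smirnov statistic used for this comparison can be computed with SciPy; the data below is synthetic, just to show the mechanics rather than the paper's measurements:

```python
import random
from scipy.stats import ks_2samp

# Synthetic stand-ins: samples from the true quality distribution vs.
# samples drawn from a fitted model (here, just another Beta).
actual = [random.betavariate(9, 2) for _ in range(500)]
fitted = [random.betavariate(8.5, 2.2) for _ in range(500)]

stat, p_value = ks_2samp(actual, fitted)
print(f"KS statistic = {stat:.3f}, p-value = {p_value:.3f}")
# A smaller KS statistic means the fitted distribution is closer to the actual one.
```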

Prediction Error: Nepal et al. vs. Beta Mixture (figure)

Beta-Mixture Model
- A powerful means of estimating the quality distribution of a composite service without knowing the quality of its constituents
- Accurately estimates the responsibility of each constituent service
Limitations
- Difficult to learn the component distributions when the composite distribution is unimodal
- Accuracy may be improved if the constituent services' qualities are partially observable
- Constituent services that rarely contribute are difficult to learn due to lack of evidence, though the Beta mixture can correctly identify such services
- Cannot track dynamic behavior

Bayesian Model
Limitations
- lack of unconditional trust in the constituent services
- assumption of at least partial observability

Summary: Key Features
- Two probabilistic models for trust-aware service selection and composition that can handle a variety of service composition patterns
- Can capture the relationships between the qualities of service offered by a composite service and those offered by its constituents
- Trust is learned sequentially from direct observations, then combined with indirect evidence in terms of service qualities
- Can handle incomplete observations

Summary (continued)
- Each consumer must monitor the quality attributes of the services it interacts with and maintain its own local model (local knowledge)
- Model evaluation technique: simulation
Future research idea
- Apply structural EM, instead of parameter estimation alone, to learn not only trust information but also the service dependency graph structure; the learned structure can serve as a basis for suggesting new service compositions