1 Estimating Component Availability by Dempster-Shafer Belief Networks
Lan Guo
Lane Department of Computer Science & Electrical Engineering
West Virginia University, Morgantown, WV 26506

2 Background
• This work is based on the research of estimating component availability of a large, distributed network (Y. Yu and E. Stoker, ISSRE’01)
• The dataset was obtained from field observation over 18 months
• Bayesian Belief Network (BBN) and traditional MTTR probability computation were used in the previous work
• We would like to develop a novel, objective methodology to estimate component availability

3 Drawbacks of BBNs
• Bayesian Belief Networks (BBNs) are subject to human biases and logical inconsistency
• The structure of a BBN is based on the subjective opinions of domain experts
• The prior in the Bayes Theorem is subjective
• A uniform prior is logically inconsistent
• A BBN example: late, slept-in, traffic

4 Why D-S Belief Networks
• Dempster-Shafer (D-S) Belief Network is a complete formalism of evidential reasoning
• D-S inference scheme is a more general and robust theory than the Bayes Theorem
• The D-S Belief Network and the D-S theory are objective and free of human biases

5 How the D-S Network Works
• The Induction Algorithm builds the belief network automatically from the dataset
• Belief for certain node(s) is dynamically updated based on evidence by Dempster's rule of combination (see the sketch below)
• Updated belief is propagated through the whole network by the Belief Revision Algorithm
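
The presentation contains no code, so here is a minimal, illustrative sketch (in Python) of Dempster's rule of combination as used to update a node's belief from new evidence. The frame of discernment {up, down}, the mass values, and all names are assumptions chosen for illustration, not taken from the slides.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts: frozenset hypothesis -> mass) with
    Dempster's rule: multiply masses, accumulate them on set intersections,
    and renormalize by 1 - K, where K is the mass falling on the empty set."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                # conflicting evidence (K)
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# Illustrative frame {up, down} for one component-availability node.
UP, DOWN = frozenset({"up"}), frozenset({"down"})
EITHER = UP | DOWN                             # mass assigned to ignorance
prior    = {UP: 0.6, DOWN: 0.1, EITHER: 0.3}
evidence = {UP: 0.7, EITHER: 0.3}
print(dempster_combine(prior, evidence))       # updated masses for the node
```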

6 Improvement upon the Former Induction Algorithm
• Drawbacks of the former Induction Algorithm:
  - The Induction Algorithm by Liu et al. is strongly dependent on the sample size
  - It violates the assumption of the binomial distribution that the sample size must be constant
  - It gives erroneous results for the dataset
• Our Induction Algorithm is based on a sound scheme: prediction logic

7 Our Induction Algorithm
Begin
  Set a significance level ∇min and a minimal Umin
  For node p, p ∈ [0, nmax - 1], and node q, q ∈ [p + 1, nmax]   (Note: nmax is the total number of nodes)
    For all empirical case samples N
      Compute a contingency table
        Mpq = | N11  N12 |
              | N21  N22 |
      For each relation type k out of the six cases, find the solution to
        Max Up
        subject to  Max Up > Umin
                    ∇p ≥ ∇min
                    ωij = 1 or 0  (if Nij corresponds to an error cell, ωij = 1; otherwise, ωij = 0)
                    ∇(b) > ∇(b′) if ω(b) = 1 and ω(b′) = 0
      If the solution exists, then return a type-k relation
End
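
As a rough illustration of the loop above, the sketch below builds a 2x2 contingency table for each node pair and accepts a relation only when the prediction-logic scope U and precision ∇ clear the Umin and ∇min thresholds. The error-cell patterns, the threshold values, and the U/∇ formulas (standard prediction-analysis definitions) are assumptions; the slide's optimization over all six relation types is only hinted at here.

```python
def contingency_table(samples, p, q):
    """2x2 table of binary node values over all case samples:
    rows index node p's value (0/1), columns index node q's value (0/1)."""
    table = [[0, 0], [0, 0]]
    for case in samples:
        table[case[p]][case[q]] += 1
    return table

def prediction_measures(table, error_cells):
    """Scope U and precision (del) for a chosen set of error cells, using the
    standard prediction-analysis definitions (an assumption, since the slide's
    own formulas were images): U is the expected error rate under independence,
    del is 1 - observed/expected error rate."""
    n = sum(sum(row) for row in table)
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    u = sum(row_tot[i] * col_tot[j] for i, j in error_cells) / (n * n)
    observed = sum(table[i][j] for i, j in error_cells) / n
    return (u, 1.0 - observed / u) if u > 0 else (0.0, 0.0)

def induce_relations(samples, n_nodes, u_min=0.1, del_min=0.5):
    """Return node pairs whose best relation type clears both thresholds."""
    # Illustrative error-cell patterns; the slide's six relation types are
    # not spelled out in the transcript, so only three are sketched here.
    relation_types = {
        "p_implies_q": [(1, 0)],        # error: p = 1 but q = 0
        "q_implies_p": [(0, 1)],        # error: q = 1 but p = 0
        "equivalence": [(0, 1), (1, 0)],
    }
    relations = []
    for p in range(n_nodes - 1):
        for q in range(p + 1, n_nodes):
            table = contingency_table(samples, p, q)
            for kind, cells in relation_types.items():
                u, d = prediction_measures(table, cells)
                if u > u_min and d >= del_min:
                    relations.append((p, q, kind, u, d))
                    break               # keep the first qualifying type
    return relations

# e.g. cases as lists of binary node values, indexed by node number:
cases = [[1, 1, 0], [1, 1, 1], [0, 0, 1], [1, 1, 0], [0, 0, 0]]
print(induce_relations(cases, n_nodes=3))
```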

8 Our Induction Algorithm
• For a single error cell, if Nij is the number of error occurrences:
    Up = Uij = (Ni· / N)(N·j / N)
    ∇p = ∇ij = 1 - (Nij / N) / Uij
• For multiple error cells:
    Up = Σij ωij (Ni· / N)(N·j / N)   (ωij = 1 for error cells; otherwise, ωij = 0)
    ∇p = 1 - (Σij ωij Nij / N) / Up
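
To make the single-error-cell case concrete, here is a tiny numeric sketch under the same prediction-analysis reading of U and ∇ as above; the table counts are invented for illustration.

```python
# Hypothetical 2x2 contingency table for a node pair (p rows, q columns),
# counting co-occurrences of their binary states over N case samples.
N11, N12 = 40, 5          # p = 0
N21, N22 = 10, 45         # p = 1
N = N11 + N12 + N21 + N22

# Treat cell (p=1, q=0) as the single error cell, i.e. N_ij = N21.
row_total = N21 + N22     # N_i. : marginal total of the error cell's row
col_total = N11 + N21     # N_.j : marginal total of the error cell's column

U_ij = (row_total / N) * (col_total / N)      # expected error rate under independence
del_ij = 1 - (N21 / N) / U_ij                 # proportionate reduction in error

print(f"U = {U_ij:.3f}, del = {del_ij:.3f}")  # U = 0.275, del ~= 0.636
# The relation p -> q is induced only if U > U_min and del >= del_min.
```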

9 Experiment
• We started with the Bayesian network for estimating component availability in the large distributed network.
• Based on the node probability tables associated with the Bayesian network, we generated two sets of data samples:
  - one for constructing the D-S belief network, with 1000 data points,
  - the other for validating the evidential reasoning scheme, with 100 data points.
• We applied our induction algorithm to induce the implication relationship between each pair of nodes.
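
The slides do not show how the samples were drawn; a plausible reading is forward (ancestral) sampling from the node probability tables, sketched below for a made-up two-node network. The network structure, the table values, and the sample-set names are assumptions.

```python
import random

# Hypothetical node probability tables for a tiny network A -> B; the actual
# network and its tables are not reproduced in the transcript.
P_A = 0.8                                   # P(A = 1)
P_B_GIVEN_A = {1: 0.9, 0: 0.3}              # P(B = 1 | A)

def sample_case(rng):
    """Forward-sample one case: draw each node given its already-drawn parents."""
    a = 1 if rng.random() < P_A else 0
    b = 1 if rng.random() < P_B_GIVEN_A[a] else 0
    return {"A": a, "B": b}

rng = random.Random(0)
construction_set = [sample_case(rng) for _ in range(1000)]  # builds the D-S network
validation_set   = [sample_case(rng) for _ in range(100)]   # tests the reasoning scheme
```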

10 Experiment
• For the testing sample, we randomly selected an unobserved node, used its value as the new evidence, and propagated the updated belief values to other reachable nodes.
• For each of the remaining unobserved nodes, we compared the predicted belief value with the value in the testing sample, and output the evaluation metrics.
• We continued these two steps until all nodes were observed.
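
A hedged sketch of that evaluation loop follows; `propagate_belief` is a placeholder stub standing in for the D-S propagation step (Dempster's rule plus the Belief Revision Algorithm), and the case values are invented.

```python
import random

def propagate_belief(evidence, node):
    """Placeholder for D-S propagation: should return the belief that `node`
    takes value 1 given the evidence observed so far. Stubbed so the sketch runs."""
    return 0.5

def evaluate_case(case, rng):
    """Reveal the case's nodes one at a time in random order; after each reveal,
    compare the propagated belief for every still-unobserved node with its true value."""
    unobserved, evidence, deltas = list(case), {}, []
    while unobserved:
        node = unobserved.pop(rng.randrange(len(unobserved)))
        evidence[node] = case[node]              # observed value becomes new evidence
        for other in unobserved:                 # remaining unobserved nodes
            deltas.append(abs(case[other] - propagate_belief(evidence, other)))
    return deltas                                # per-node absolute estimate errors

rng = random.Random(0)
print(evaluate_case({"A": 1, "B": 0, "C": 1}, rng))
```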

11 Evaluation Metrics
• The absolute difference between the actual value in the testing sample and the computed belief value: ΔX = |Belemp(X) - Belest(X)|
• Mean estimate error
• Standard error of estimate
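
The mean and standard-error formulas on the slide were images and did not survive transcription; the sketch below computes them from the ΔX values using the usual sample definitions, which is an assumption.

```python
import math

def evaluation_metrics(deltas):
    """deltas: the absolute differences |Bel_emp(X) - Bel_est(X)| over all
    predictions. Returns (mean estimate error, standard error of estimate),
    using ordinary sample formulas (assumed; not quoted from the slide)."""
    n = len(deltas)
    mean = sum(deltas) / n
    std_err = math.sqrt(sum((d - mean) ** 2 for d in deltas) / (n - 1))
    return mean, std_err

print(evaluation_metrics([0.05, 0.10, 0.02, 0.08]))  # e.g. (0.0625, ~0.035)
```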

12 Results (1)

13 Results (2)

14 Conclusions
• Our Induction Algorithm is an efficient, sound, dynamic, and general means of automatically constructing D-S belief networks.
• The induced belief network is free from human biases.
• The implication method over the D-S network greatly reduced the prediction error.
• This study is the first attempt to apply the D-S belief network to software reliability engineering.
• Our future work includes employing the notion of entropy for optimal inference with greater prediction accuracy over the whole network.

