Measurements. Meir Kalech. Partially based on slides of Brian Williams and Peter Struss.


1 Measurements Meir Kalech Partially based on slides of Brian Williams and Peter Struss

2 Outline
Last lecture:
1. Justification-based TMS
2. Assumption-based TMS
3. Consistency-based diagnosis
Today's lecture:
1. Generation of tests/probes
2. Measurement Selection
3. Probabilities of Diagnoses

3 Generation of tests/probes
Test: a test vector that can be applied to the system. Assumption: the behavior of a component does not change between tests. Approaches select the test that can discriminate between faults of different components (e.g. [Williams]).
Probe: selection of the probe is based on:
- the predictions generated by each candidate for unknown measurable points
- the cost/risk/benefit of the different tests/probes
- the fault probabilities of the various components

4 Generation of tests/probes (II)
Approach based on entropy [de Kleer, 1987, 1992]. It requires a-priori probabilities of the faults (even a rough estimate). Given a set D1, D2, ..., Dn of candidates to be discriminated:
1. Generate predictions from each candidate.
2. For each probe/test T, compute the a-posteriori probability p(Di | T(x)) for each possible outcome x of T.
3. Select the test/probe for which the distribution p(Di | T(x)) has minimal expected entropy (this is the test that on average best discriminates between the candidates).

5 A Motivating Example
Minimal diagnoses: {M1}, {A1}, {M2, M3}, {M2, A2}
Where to measure next: X, Y, or Z? Which measurement promises the most information? Which values do we expect?
[Figure: the polybox circuit. Multipliers M1, M2, M3 compute X = A*C, Y = B*D, Z = C*E; adders A1, A2 compute F = X+Y, G = Y+Z. Inputs A=3, B=2, C=2, D=3, E=3; observed outputs F=10, G=12.]

6 Outline
Last lecture:
1. Justification-based TMS
2. Assumption-based TMS
3. Consistency-based diagnosis
Today's lecture:
1. Generation of tests/probes
2. Measurement Selection
3. Probabilities of Diagnoses

7 Measurement Selection - Discriminating Variables
Suppose single faults are more likely than multiple faults. Then probes that help discriminate between {M1} and {A1} are most valuable.
[Circuit figure as on slide 5.]

8 Discriminating Variables - Inspect ATMS Labels!
ATMS labels of the predicted values (each environment lists the components assumed to be working):
X = 6: {{M1}} (justified by inputs A, C)
X = 4: {{M2, A1}, {M3, A1, A2}}
Y = 6: {{M2}, {M3, A2}} (justified by inputs B, D)
Y = 4: {{M1, A1}}
Z = 6: {{M3}} (justified by inputs C, E)
Z = 8: {{M1, A1, A2}}
F = 10, G = 12: {{}} (observations)
Derived values: F=10 and X=6 give Y=4; F=10 and Y=6 give X=4; G=12 and Z=6 give Y=6; G=12 and Y=4 give Z=8.
Observations are facts, not based on any assumption: such a node has the empty environment as its only minimal environment, so it is always derivable. Note the difference: an empty label means the node is not derivable!

9 Fault Predictions
No fault models are used. Nevertheless, fault hypotheses make predictions! E.g. the diagnosis {A1} implies OK(M1), and OK(M1) implies X=6.
So if we measure X and find X=6, we can infer that {A1} is the diagnosis rather than {M1}.
[ATMS labels and circuit figure as on slide 8.]

10 Predictions of Minimal Fault Localizations
From the ATMS labels:
X ≠ 6: M1 is broken.
X = 6: {A1} is the only remaining single fault.
Y and Z have the same predicted values under {A1} and {M1}, so they cannot discriminate between them.
Therefore X is the best measurement. E.g. X=4 implies that {M1} is the diagnosis, since OK(M1) appears only in the label of X=6.

11 Outline
Last lecture:
1. Justification-based TMS
2. Assumption-based TMS
3. Consistency-based diagnosis
Today's lecture:
1. Generation of tests/probes
2. Measurement Selection
3. Probabilities of Diagnoses

12 Probabilities of Diagnoses
Fault probability of component (type)s: pf. For instance, pf(Ci) = 0.01 for all Ci in {A1, A2, M1, M2, M3}.
Normalization by α = Σ p(FaultLoc), summing over the fault localizations FaultLoc.
[Circuit figure as on slide 5.]

13 Probabilities of Diagnoses - Example
Assumption: independent faults. Heuristic: consider minimal fault localizations only.

Minimal fault localization   p(FaultLoc)/α   X   Y   Z
{M1}                         0.495           4   6   6
{A1}                         0.495           6   6   6
{M2, A2}                     0.005           6   4   6
{M2, M3}                     0.005           6   4   8
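As a sketch, the normalized probabilities in this table can be reproduced under the slide's assumptions (independent faults, pf = 0.01 per component); the variable and function names below are my own, not from the slides:

```python
P_FAULT = 0.01          # prior fault probability of every component
COMPONENTS = ["M1", "M2", "M3", "A1", "A2"]

# The four minimal fault localizations from the slide's example.
diagnoses = [{"M1"}, {"A1"}, {"M2", "A2"}, {"M2", "M3"}]

def prior(diag):
    """p(FaultLoc) under independent faults: components in the
    localization are faulty, all remaining components work."""
    p = 1.0
    for c in COMPONENTS:
        p *= P_FAULT if c in diag else (1 - P_FAULT)
    return p

raw = [prior(d) for d in diagnoses]
alpha = sum(raw)                      # normalization constant
normalized = [p / alpha for p in raw]

for d, p in zip(diagnoses, normalized):
    print(sorted(d), round(p, 3))
```

Running this yields 0.495 for each single-fault localization and 0.005 for each double, matching the table.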

14 Entropy-based Measurement Proposal
[Figure: entropy of a coin toss as a function of the probability of it coming up heads.]
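The curve the figure refers to is the standard binary entropy function; a minimal sketch (function name is mine, not from the slides):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0.
    Maximal (1 bit) for a fair coin, zero when the outcome is certain."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

print(binary_entropy(0.5))   # fair coin: maximal uncertainty
print(binary_entropy(0.99))  # nearly certain: low uncertainty
```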

15 The Intuition Behind the Entropy
The cost of locating a candidate with probability pi is log(1/pi): a binary search through 1/pi equally likely objects. That is, the number of cuts needed to find an object. Examples:
p(x) = 1/25: the number of cuts in binary search is log2(25) ≈ 4.6.
p(x) = 1/2: the number of cuts in binary search is log2(2) = 1.
Here pi is the probability that Ci is the actual candidate given a measurement outcome.
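To illustrate the binary-search intuition, a small sketch (my own helper, not from the slides) compares the actual number of halvings with log2(n); with ceiling, the integer count slightly exceeds log2(n):

```python
import math

def binary_search_cuts(n):
    """Number of halvings needed to isolate one object among n
    equally likely objects (worst case: keep the larger half)."""
    cuts = 0
    while n > 1:
        n = math.ceil(n / 2)   # keep the half containing the target
        cuts += 1
    return cuts

print(binary_search_cuts(25), math.log2(25))  # 5 cuts vs ~4.64
print(binary_search_cuts(2), math.log2(2))    # 1 cut vs 1.0
```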

16 The Intuition Behind the Entropy
The expected cost of identifying the actual candidate from a measurement is Σi pi log(1/pi), summing over the possible candidates, where pi is the probability that candidate Ci is faulty given an assignment to the measurement, and log(1/pi) is the cost of searching for it. The terms behave as follows:
1. pi → 0: occurs infrequently and is expensive to find, but pi log(1/pi) → 0.
2. pi → 1: occurs frequently and is easy to find, so pi log(1/pi) → 0.
3. pi in between: pi log(1/pi) is at its largest, so such candidates contribute most to the expected cost.

17 The Intuition Behind the Entropy
The expected entropy from measuring xi is:
H(xi) = Σ_{k=1..m} p(xi = vik) · H(xi = vik)
where the sum runs over the m possible outcomes vik of measurement xi, p(xi = vik) is the probability that xi takes the value vik, and H(xi = vik) is the entropy of the candidate distribution given xi = vik.
Intuition: the expected entropy of x is the sum, over outcomes, of the probability of the outcome times the entropy after that outcome.
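Using the numbers from slide 13's table, the expected-entropy rule can be sketched as follows (the diagnoses dictionary and function names are my own; log base 2 is assumed). It confirms that X is the most informative measurement:

```python
import math

# Posterior probabilities and predicted values for X, Y, Z,
# taken from slide 13's table.
diagnoses = {
    ("M1",): (0.495, {"X": 4, "Y": 6, "Z": 6}),
    ("A1",): (0.495, {"X": 6, "Y": 6, "Z": 6}),
    ("M2", "A2"): (0.005, {"X": 6, "Y": 4, "Z": 6}),
    ("M2", "M3"): (0.005, {"X": 6, "Y": 4, "Z": 8}),
}

def expected_entropy(var):
    """H(x) = sum over outcomes v of p(x=v) * H(candidates | x=v)."""
    outcomes = {}
    for diag, (p, pred) in diagnoses.items():
        outcomes.setdefault(pred[var], []).append(p)
    h = 0.0
    for v, probs in outcomes.items():
        p_v = sum(probs)                       # p(x = v)
        # entropy of the renormalized candidate distribution given x = v
        h_v = -sum((p / p_v) * math.log2(p / p_v) for p in probs)
        h += p_v * h_v
    return h

scores = {var: expected_entropy(var) for var in ("X", "Y", "Z")}
best = min(scores, key=scores.get)
print(scores, best)
```

Measuring X gives expected entropy of about 0.08 bits, against 1.0 for Y and about 1.04 for Z, so X is selected.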

18 The Intuition Behind the Entropy
The expected entropy can be approximated (up to a constant that does not depend on xi) by:
H(xi) ≈ Σ_{k=1..m} p(Sik) log p(Sik) + p(Ui) log(1/m)
where Sik is the set of candidates that predict the value vik for xi, Ui is the set of candidates which do not predict any value for xi, and m is the number of possible values of xi.
The goal is to find the measurement xi that minimizes the above function.
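A sketch of this approximation, again using slide 13's numbers (assuming m = 2 possible values and representing "no prediction" with None; all names are mine). With these candidates every diagnosis predicts a value, so Ui is empty, but the code handles the general case:

```python
import math

# Diagnosis probabilities and per-variable predictions (slide 13's table);
# a None prediction would place the candidate in Ui for that variable.
diagnoses = {
    ("M1",): (0.495, {"X": 4, "Y": 6, "Z": 6}),
    ("A1",): (0.495, {"X": 6, "Y": 6, "Z": 6}),
    ("M2", "A2"): (0.005, {"X": 6, "Y": 4, "Z": 6}),
    ("M2", "M3"): (0.005, {"X": 6, "Y": 4, "Z": 8}),
}

def phi(var, m=2):
    """phi(x) = sum_k p(S_k) log p(S_k) + p(U) log(1/m):
    the slide's approximation; the best measurement minimizes it."""
    groups, p_unknown = {}, 0.0
    for diag, (p, pred) in diagnoses.items():
        v = pred.get(var)
        if v is None:
            p_unknown += p      # candidate predicts nothing for var
        else:
            groups[v] = groups.get(v, 0.0) + p
    score = sum(p_s * math.log2(p_s) for p_s in groups.values())
    if p_unknown > 0:
        score += p_unknown * math.log2(1 / m)
    return score

best = min(("X", "Y", "Z"), key=phi)
print(best, {v: round(phi(v), 3) for v in ("X", "Y", "Z")})
```

The ranking agrees with the exact expected-entropy computation: X scores about -1.0, well below Y and Z.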

19 Entropy-based Measurement Proposal - Example
Proposal: measure the variable which minimizes the expected entropy: X.
E.g. x=6 occurs under the diagnoses {A1}, {M2, A2}, {M2, M3}, so p(x=6) = 0.495 + 0.005 + 0.005 = 0.505.

20 Computing Posterior Probability
How do we update the probability of a candidate? Given the measurement outcome xi = uik, the probability of a candidate is computed via Bayes' rule:
p(Cl | xi = uik) = p(xi = uik | Cl) · p(Cl) / p(xi = uik)
Meaning: the probability that Cl is the actual candidate given the measurement xi = uik. The prior p(Cl) is known in advance. The denominator p(xi = uik) is a normalization factor: the sum of the probabilities of the candidates consistent with this measurement.

21 Computing Posterior Probability
How do we compute p(xi = uik | Cl)? Three cases:
1. If candidate Cl predicts the output xi = uik, then p(xi = uik | Cl) = 1.
2. If candidate Cl predicts an output xi ≠ uik, then p(xi = uik | Cl) = 0.
3. If candidate Cl predicts no output for xi, then p(xi = uik | Cl) = 1/m (m is the number of possible values of xi).
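The three cases together with Bayes' rule from slide 20 can be sketched as one update step (hypothetical names; m = 2 assumed, with None standing for "no prediction"). Applied to slide 13's candidates with the observation X = 6, it rules out {M1} and concentrates probability on {A1}:

```python
def likelihood(prediction, observed, m=2):
    """p(x = observed | C): 1 if C predicts that value, 0 if C predicts
    a different value, 1/m if C makes no prediction (None)."""
    if prediction is None:
        return 1.0 / m
    return 1.0 if prediction == observed else 0.0

def bayes_update(priors, predictions, observed, m=2):
    """Posterior p(C | x = observed) over candidates via Bayes' rule."""
    unnorm = {c: likelihood(predictions[c], observed, m) * p
              for c, p in priors.items()}
    z = sum(unnorm.values())          # p(x = observed), the normalization
    return {c: p / z for c, p in unnorm.items()}

# Slide 13's candidates, their predictions for X, and the outcome X = 6.
priors = {"{M1}": 0.495, "{A1}": 0.495, "{M2,A2}": 0.005, "{M2,M3}": 0.005}
preds  = {"{M1}": 4, "{A1}": 6, "{M2,A2}": 6, "{M2,M3}": 6}
post = bayes_update(priors, preds, observed=6)
print({c: round(p, 3) for c, p in post.items()})
```

The posterior of {M1} drops to 0, while {A1} rises to about 0.98.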

22 Example
The initial probability of failure of an inverter is 0.01. Assume the input a=1: what is the best next measurement, b or e?
Assume the next measurement reveals a fault. Measuring closer to the input produces fewer conflicts: b=1 implies that A is faulty, whereas e=0 implies only that some component is faulty.

23 Example
On the other hand, measuring further away from the input is more likely to produce a discrepant value: the larger the number of components upstream, the more likely that one of them is faulty. This higher probability of finding a discrepant value outweighs the expected cost of isolating the candidate from a larger set, so the best next measurement is e.

24 Example
H(b) = - p(b = true | all diagnoses with observation a) · log p(b = true | all diagnoses with observation a)
       - p(b = false | all diagnoses with observation a) · log p(b = false | all diagnoses with observation a)

25 Example
Assume a=1 and e=0: then the next best measurement is c, which is equidistant from the previous measurements.
Assume a=1 and e=1 and p(A)=0.025: then the next best measurement is b.

