FT228/4 Knowledge Based Decision Support Systems


1 FT228/4 Knowledge Based Decision Support Systems
Uncertainty Management in Rule-Based Systems: Bayesian Reasoning, and more on inference chaining. Ref: Artificial Intelligence: A Guide to Intelligent Systems, Michael Negnevitsky – Aungier St. Call No

2 Uncertainty? Approximate Reasoning, Inexact Reasoning
Information available to a human expert is often incomplete, inconsistent, uncertain, or all of the above. Information of this nature is, on the face of it, unsuitable for solving problems. Yet we have already said that experts can cope with these defects and can usually make judgements and reach the right decisions. Expert systems therefore have to be able to handle uncertainty and do the same. An expert system needs to manage uncertainty because any real-world domain contains inexact knowledge and must cope with incomplete, inconsistent or even missing data.

3 Sources of uncertainty in expert systems
Weak implications: there are often only vague associations between the IF and THEN parts of rules, so we need to be able to attach certainty factors indicating the degree of correlation.
Imprecise language: natural language is inherently ambiguous and imprecise (e.g. often, sometimes, never), which makes it difficult to express as IF-THEN rules. Quantifying the meaning of such terms enables an expert system to establish an appropriate matching of antecedents to facts in the database.

4 Sources of uncertainty in expert systems
Unknown data: in the case of incomplete or missing data we must accept the value 'unknown' and proceed to approximate reasoning with this value.
Combining the views of different experts: experts seldom reach the same conclusions; they hold contradictory opinions and produce conflicting rules. A weight must be attached to each expert and factored into the conclusion, but there is no systematic method for obtaining these weights. Large systems usually combine the knowledge and expertise of a number of experts.
A number of numeric methods have been developed to deal with uncertainty in rule-based systems. We will look at the most popular: Bayesian reasoning and certainty factors. First, though, we need some basic probability theory.

5 Statistics
Using probability theory we can determine the chance of an event occurring. Probability is the likelihood of an event being realised: the proportion of cases in which the event occurs. Situations where probability is appropriate: a genuinely random world (e.g. cards); the normal world, where it is impossible to measure all causes and effects; exceptions to normal relationships; as a basis for learning.

6 Probability Theory
Probability can be expressed mathematically as a numerical index ranging from zero (absolute impossibility) to unity (absolute certainty). Most events have a probability index strictly between 0 and 1, and each event has at least two possible outcomes: success or failure. The concept of probability has a long history; words like probably, maybe, possibly and perhaps entered spoken language long before the mathematical theory of probability started in the 17th century, with games such as tossing a coin or throwing a die.

7 Probability Theory
p(success) = (the number of successes) / (the number of possible outcomes)
p(failure) = (the number of failures) / (the number of possible outcomes)
With s successes and f failures:
p(s) = p = s / (s + f)
p(f) = q = f / (s + f)
p + q = 1
Tossing a coin: the probability of getting a head equals the probability of getting a tail; s = f = 1, so p(s) = 0.5.
Throwing a die: what is the probability of getting a 6 from a single throw? Assume 6 is the only possible successful outcome, so s = 1 and f = 5. The probability of getting a 6 is 1 / (1 + 5) = 1/6 ≈ 0.1667, and the probability of not getting a 6 is 5/6 ≈ 0.8333.
Here we are concerned with events that are independent and mutually exclusive. However, there are events that are not independent, where one may affect the likelihood of the other occurring.
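These definitions are easy to check mechanically. A minimal Python sketch (the function name probability is my own, not from the slides):

```python
from fractions import Fraction

def probability(successes: int, failures: int) -> Fraction:
    """p = s / (s + f), as defined on this slide."""
    return Fraction(successes, successes + failures)

# Tossing a coin: one success (head), one failure (tail)
assert probability(1, 1) == Fraction(1, 2)

# Throwing a die, with 6 as the only successful outcome: s = 1, f = 5
p_six = probability(1, 5)      # 1/6, approx 0.1667
p_not_six = 1 - p_six          # 5/6, approx 0.8333
assert p_six + p_not_six == 1  # p + q = 1
```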

8 Probability Theory – dependent events
Let A and B be events that can occur conditionally on the occurrence of each other. The probability that A will occur if B occurs is called the conditional probability, written p(A|B), where | denotes GIVEN. It reads as the probability that A will occur given that B has occurred:
p(A|B) = (the number of times A and B can occur) / (the number of times B can occur)

9 Probability Theory – dependent events
The probability that both A and B occur is called the joint probability, p(A ∩ B). Expressing conditional probability in terms of it:
p(B|A) = p(B ∩ A) / p(A)
p(A|B) = p(A ∩ B) / p(B)
Hence p(A ∩ B) = p(B|A) × p(A), and substituting into the expression for p(A|B):
p(A|B) = p(B|A) × p(A) / p(B)

10 Probability Theory – dependent events
The last equation is the Bayesian rule:
p(A|B) = p(B|A) × p(A) / p(B)
Named after Thomas Bayes, the 18th-century mathematician who introduced the rule. The principle can be extended to an event A that is dependent on multiple events B1 … Bn. If the Bi form an exhaustive list of mutually exclusive events, then:
p(A ∩ Bi) = p(A|Bi) × p(Bi)
Σ (i = 1 … n) p(A ∩ Bi) = p(A)
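As an illustrative sketch (the helper names are mine, not from the lecture), the rule and the total-probability expansion translate directly into Python:

```python
def bayes(p_b_given_a: float, p_a: float, p_b: float) -> float:
    """Bayesian rule: p(A|B) = p(B|A) * p(A) / p(B)."""
    return p_b_given_a * p_a / p_b

def total_probability(p_a_given_b: list[float], p_b: list[float]) -> float:
    """p(A) = sum over i of p(A|Bi) * p(Bi), for an exhaustive list of
    mutually exclusive events B1..Bn."""
    return sum(pa * pb for pa, pb in zip(p_a_given_b, p_b))

# Example with roles swapped: p(B1|A) = p(A|B1) * p(B1) / p(A)
p_a = total_probability([0.7, 0.2], [0.4, 0.6])   # p(A) = 0.28 + 0.12 = 0.40
print(bayes(0.7, 0.4, p_a))                       # 0.7 * 0.4 / 0.4 = 0.7
```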

11 Bayesian Reasoning
Assuming a random sampling of events, Bayesian theory supports the calculation of more complex probabilities from previously known results. E.g. in a card game with four players where all cards are equally distributed, if I do not have the Queen of Hearts, each other player has a 1/3 probability of having it, and also a 1/9 probability of having both the Queen and the Ace of Hearts, assuming that holding each card is an independent event:
probability(A & B) = probability(A) × probability(B), given that A and B are independent
Bayesian reasoning is based on formal probability theory and is used extensively in pattern matching.

12 Bayesian Reasoning
The prior probability, or unconditioned probability, of an event is the probability assigned to the event in the absence of knowledge supporting its occurrence or absence, i.e. the probability of the event prior to any evidence: p(event).
The posterior probability, or after-the-fact probability (conditional probability), of an event is the probability of the event given some evidence: p(event|evidence).
E.g. the prior probability of a person having a disease is the number of people with the disease divided by the number of people in the domain.

13 Bayesian Reasoning
The posterior probability of a person having disease d given symptom s is:
p(d|s) = |d ∧ s| / |s|
where |·| indicates the number of elements in the set, i.e. the number of people having both disease d and symptom s divided by the total number of people with symptom s. Bayes calculates p(d|s) as:
p(d|s) = p(d) × p(s|d) / p(s)
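The counting definition and the Bayes form must agree. A small check with invented counts (the population figures below are made up purely for illustration):

```python
# Hypothetical counts: 1000 people, 40 with disease d, 100 with symptom s,
# and 30 with both. These numbers are invented for illustration only.
total, n_d, n_s, n_both = 1000, 40, 100, 30

# Counting definition: |d ^ s| / |s|
p_d_given_s_counts = n_both / n_s                  # 0.3

# Bayes form: p(d) * p(s|d) / p(s)
p_d, p_s = n_d / total, n_s / total
p_s_given_d = n_both / n_d
p_d_given_s_bayes = p_d * p_s_given_d / p_s        # also 0.3

assert abs(p_d_given_s_counts - p_d_given_s_bayes) < 1e-12
```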

14 Bayesian Reasoning
The numbers on the right-hand side of the equation are easy to come by: e.g. it is much easier to determine the number of meningitis patients who suffer headaches than to discover the number of headache sufferers who have meningitis. Also, not many numbers are needed. The trouble begins when you consider multiple diseases and multiple symptoms:
p(d|s1 & s2 & … & sn) = p(d) × p(s1 & s2 & … & sn | d) / p(s1 & s2 & … & sn)
A large number of probabilities is then required.

15 Bayesian Reasoning
In many diagnostic situations we must also deal with negative information, e.g. when a patient does not have a given symptom. We require:
p(not S) = 1 - p(S)
p(not d|s) = 1 - p(d|s)

16 Bayesian Reasoning
Suppose all rules in a knowledge base are expressed as follows:
IF E IS TRUE THEN H IS TRUE {WITH PROBABILITY p}
What if event E has occurred but we do not know whether H has occurred? Can we compute the probability that event H has occurred as well?

17 Bayesian Reasoning
Instead of using events A and B, use hypothesis H and evidence E:
p(H|E) = p(E|H) × p(H) / [p(E|H) × p(H) + p(E|H') × p(H')]
where:
p(H|E) is the probability that H is true given E
p(H) is the probability that H is true overall
p(E|H) is the probability of observing E when H is true
p(H') is the probability of H being false
p(E|H') is the probability of observing E even when H is false

18 Bayes Theorem
p(H|E) = p(E|H) × p(H) / [p(E|H) × p(H) + p(E|H') × p(H')]
p(H) is the prior probability of hypothesis H being true
p(E|H) is the probability that hypothesis H being true will result in evidence E
p(H') is the prior probability of hypothesis H being false
p(E|H') is the probability of finding evidence E even when hypothesis H is false
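A direct transcription of this form of the theorem into Python (the function name is illustrative, not from the lecture):

```python
def posterior(p_e_given_h: float, p_h: float, p_e_given_not_h: float) -> float:
    """p(H|E) = p(E|H)p(H) / [p(E|H)p(H) + p(E|H')p(H')], with p(H') = 1 - p(H)."""
    numerator = p_e_given_h * p_h
    return numerator / (numerator + p_e_given_not_h * (1.0 - p_h))

# If E is equally likely under H and H', the evidence changes nothing:
assert posterior(0.5, 0.3, 0.5) == 0.3
```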

19 Example I
Assume the following probabilities for product failure at each level of contamination in manufacturing:

p(failure)   Level of contamination
0.1          High
0.01         Medium
0.001        Low

In a particular run, 20% of the chips are subjected to high levels of contamination, 30% to medium and 50% to low levels of contamination. If a semiconductor chip in the product fails, what is the probability that the chip was exposed to high levels of contamination?

20 Example I
p(H|F) = p(F|H)p(H) / p(F) = (0.1)(0.2) / p(F)
p(F) = p(F|H)p(H) + p(F|M)p(M) + p(F|L)p(L)
     = (0.1)(0.2) + (0.01)(0.3) + (0.001)(0.5) = 0.0235
p(H|F) = (0.1)(0.2) / 0.0235 ≈ 0.85
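The arithmetic can be verified in a few lines of Python (a sketch; the dictionary layout is mine):

```python
p_f_given = {"high": 0.1, "medium": 0.01, "low": 0.001}  # p(failure | level)
p_level = {"high": 0.2, "medium": 0.3, "low": 0.5}       # exposure priors

# Total probability of failure: p(F) = sum of p(F|level) * p(level)
p_f = sum(p_f_given[lvl] * p_level[lvl] for lvl in p_level)   # 0.0235

# Posterior probability of high contamination given a failure
p_h_given_f = p_f_given["high"] * p_level["high"] / p_f       # ~0.851
print(round(p_f, 4), round(p_h_given_f, 3))
```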

21 Example II
A medical procedure has been shown to be highly effective in early detection of an illness. The probability that the test correctly identifies someone with the illness as positive is 0.99. The probability that the test correctly identifies someone without the illness as negative is 0.95. The incidence of the illness in the population is 0.0001. You take the test and the result is positive. What is the probability you have the illness?

22 Example II
Let D denote the event that you have the illness, and S the event that the test signals positive. We have to establish p(D|S). The probability that the test correctly signals someone without the illness as negative is 0.95; therefore, the probability of a positive test without the illness is 0.05: p(S|D') = 0.05. From Bayes:
p(D|S) = p(S|D)p(D) / [p(S|D)p(D) + p(S|D')p(D')]
       = (0.99)(0.0001) / [(0.99)(0.0001) + (0.05)(1 - 0.0001)]
       = 1/506 ≈ 0.002
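Again easy to check numerically (a minimal sketch):

```python
p_s_given_d = 0.99       # test positive given illness
p_s_given_not_d = 0.05   # test positive given no illness (false positive rate)
p_d = 0.0001             # incidence of the illness in the population

p_d_given_s = (p_s_given_d * p_d) / (
    p_s_given_d * p_d + p_s_given_not_d * (1 - p_d))
print(p_d_given_s)       # ~0.00198, i.e. roughly 1 in 506
```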

23 Multiple Hypotheses
What if the expert, based on single evidence E, cannot choose between multiple hypotheses H1 … Hn? Or, given multiple evidences E1 … En, the expert can also produce multiple hypotheses? The theorem can be extended to such series, but the computation puts an enormous burden on the expert and makes the task practically impossible. We must therefore suppress the subtleties of evidence and assume conditional independence among the different pieces of evidence.

24 Bayes Theorem
Bayes provides a way of computing the probability of a hypothesis Hi following from a particular piece of evidence, given only the probabilities with which the evidence follows from actual causes (hypotheses):
p(Hi|E) = p(E|Hi) × p(Hi) / Σ (k = 1 … n) p(E|Hk) × p(Hk)
where:
p(Hi|E) is the probability that Hi is true given E
p(Hi) is the probability that Hi is true overall
p(E|Hi) is the probability of observing E when Hi is true
n is the number of hypotheses
This covers single evidence E and multiple hypotheses H1 … Hn. Extending it naively to multiple evidences would require the conditional probabilities of all possible combinations of evidences for all hypotheses, which places an enormous burden on the expert; to avoid this unworkable situation we simplify.

25 Multiple Hypotheses
Suppressing the subtleties of evidence and assuming conditional independence among the different evidences gives:
p(Hi|E1 E2 … En) = p(E1|Hi) × p(E2|Hi) × … × p(En|Hi) × p(Hi) / Σ (k = 1 … n) p(E1|Hk) × p(E2|Hk) × … × p(En|Hk) × p(Hk)

26 Multiple Hypotheses
How does the expert system compute and rank all potentially true hypotheses?
1. Given the prior probabilities
2. Determine the conditional probabilities
3. Calculate the posterior probabilities
4. Rank the posteriors
A minimal sketch of this loop follows.
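Assuming the conditional-independence formula from the previous slide, the compute-and-rank procedure might look like this in Python (all names and numbers are illustrative, not from the lecture):

```python
import math

def rank_hypotheses(priors: dict[str, float],
                    likelihoods: dict[str, list[float]]) -> list[tuple[str, float]]:
    """Posterior of each hypothesis Hi given evidences E1..En, assuming the
    evidences are conditionally independent given each hypothesis:
    p(Hi|E1..En) is proportional to p(E1|Hi) * ... * p(En|Hi) * p(Hi)."""
    unnormalised = {h: math.prod(likelihoods[h]) * priors[h] for h in priors}
    total = sum(unnormalised.values())
    posteriors = {h: v / total for h, v in unnormalised.items()}
    return sorted(posteriors.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative numbers: three hypotheses, two pieces of evidence each
priors = {"H1": 0.40, "H2": 0.35, "H3": 0.25}
likelihoods = {"H1": [0.3, 0.9], "H2": [0.8, 0.0], "H3": [0.5, 0.7]}
print(rank_hypotheses(priors, likelihoods))   # H1 first, then H3, then H2
```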

27 Bayesian Reasoning
Two major requirements for Bayes' theorem:
- All the probabilities on the relationships of the evidence with the various hypotheses must be known, as well as the probabilistic relationships among the pieces of evidence.
- All relationships between hypotheses and evidence, p(E|Hk), must be independent. This assumption of independence must be justified, which is difficult.
Most expert systems therefore rely on heuristics to augment Bayes' theorem.

28 Bayesian method
Requires probability values as primary inputs. Eliciting these values usually involves human judgement, and humans cannot elicit probabilities consistent with the Bayesian rules, or are just very bad at it. Domain experts do not deal easily with conditional probabilities and often deny the existence of the hidden implicit probabilities.

29 Bayesian Reasoning
[Diagram: the non-expert view of symptoms and causes, with every symptom s1 … sn linked directly to every cause cause1 … causem.]

30 Bayesian Reasoning
[Diagram: the expert view, with related symptoms s1 … sn grouped into intermediate pathological states I1, I2, I3, which in turn link to cause1 … causem. This makes inference more manageable.]

