
1 Possibility Theory and its applications: a retrospective and prospective view
D. Dubois, H. Prade IRIT-CNRS, Université Paul Sabatier TOULOUSE FRANCE

2 Outline Basic definitions Pioneers Qualitative possibility theory
Quantitative possibility theory

3 Possibility theory is an uncertainty theory devoted to the handling of incomplete information.
It is similar to probability theory in that it is based on set functions, but it differs by using a pair of dual set functions (possibility and necessity measures) instead of only one. Moreover it is not additive, and it makes sense on ordinal structures. The name "Theory of Possibility" was coined by Zadeh in 1978.

4 The concept of possibility
Feasibility: it is possible to do something (physical). Plausibility: it is possible that something occurs (epistemic). Consistency: compatible with what is known (logical). Permission: it is allowed to do something (deontic).

5 POSSIBILITY DISTRIBUTIONS (uncertainty)
S: frame of discernment (set of "states of the world"). x: ill-known description of the current state of affairs, taking its value in S. L: plausibility scale, a totally ordered set of plausibility levels ([0, 1], a finite chain, the integers, ...). A possibility distribution πx attached to x is a mapping from S to L: ∀s, πx(s) ∈ L, such that ∃s, πx(s) = 1 (normalization). Conventions: πx(s) = 0 iff x = s is impossible, totally excluded; πx(s) = 1 iff x = s is normal, fully plausible, unsurprising.
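To make the convention concrete, here is a minimal Python sketch (not part of the original slides) of a finite possibility distribution stored as a map from states to levels, with the normalization condition checked explicitly; all names and values are illustrative:

```python
# Minimal sketch: a finite possibility distribution as a dict, with the
# normalization check max_s pi(s) = 1. Values are purely illustrative.

def is_normalized(pi, tol=1e-9):
    """Some state must be fully plausible: the max of pi equals 1."""
    return abs(max(pi.values()) - 1.0) <= tol

# partial ignorance about an age, with graded preferences
pi_age = {70: 0.6, 71: 0.9, 72: 1.0, 73: 0.9, 74: 0.6, 75: 0.4}
assert is_normalized(pi_age)
print({s for s, v in pi_age.items() if v == 1.0})  # fully plausible states
```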

6 EXAMPLE : x = AGE OF PRESIDENT
If I do not know the age of the president, I may have statistics on presidents' ages... but generally not, or they may be irrelevant. Partial ignorance: 70 ≤ x ≤ 80 (sets, intervals), i.e. a uniform possibility distribution π(x) = 1 if x ∈ [70, 80], = 0 otherwise. Partial ignorance with preferences: I may have reasons to believe that 72 > 71 ≈ 73 > 70 ≈ 74 > 75 > 76 > 77.

7 EXAMPLE : x = AGE OF PRESIDENT
Linguistic information described by fuzzy sets: "he is old": π = µOLD. If I bet on the president's age, I may come up with a subjective probability! But this result is enforced by the setting of exchangeable bets (Dutch book argument). The actual information is often poorer.

8 A possibility distribution represents a state of knowledge
A possibility distribution is the representation of a state of knowledge: a description of how we think the state of affairs is. π' is more specific than π in the wide sense if and only if π' ≤ π. In other words, any value possible for π' should be at least as possible for π; that is, π' is more informative than π. COMPLETE KNOWLEDGE (the most specific distributions): π(s0) = 1; π(s) = 0 otherwise. IGNORANCE: π(s) = 1, ∀s ∈ S.

9 POSSIBILITY AND NECESSITY OF AN EVENT
A possibility distribution π on S (the normal values of x), and an event A: how confident are we that x ∈ A ⊆ S? Π(A) = max_{s∈A} π(s): the degree of possibility that x ∈ A. N(A) = 1 – Π(Ac) = min_{s∉A} (1 – π(s)): the degree of certainty (necessity) that x ∈ A.
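A small Python sketch of the two dual set functions, assuming a finite frame S and the [0, 1] scale; the example values are made up for illustration:

```python
# Illustrative sketch: possibility and necessity of an event A,
# for a finite frame S and a possibility distribution pi (dict s -> level).

def possibility(A, pi):
    """Pi(A) = max_{s in A} pi(s): degree of possibility that x lies in A."""
    return max((pi[s] for s in A), default=0.0)

def necessity(A, pi, S):
    """N(A) = 1 - Pi(A^c): degree of certainty that x lies in A."""
    complement = set(S) - set(A)
    return 1.0 - possibility(complement, pi)

# age-of-president example: uniform possibility on [70, 80]
S = range(60, 91)
pi = {s: (1.0 if 70 <= s <= 80 else 0.0) for s in S}
print(possibility(range(75, 91), pi))   # 1.0: "age >= 75" is possible
print(necessity(range(65, 91), pi, S))  # 1.0: "age >= 65" is certain
```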

10 Proposition p = "x > θ" to be checked
Comparing the value of a quantity x to a threshold θ, when the value of x is only known to belong to an interval [a, b]. Here the available knowledge is modeled by π(x) = 1 if x ∈ [a, b], 0 otherwise. Proposition p = "x > θ" is to be checked: i) a > θ: then x > θ is certainly true: N(x > θ) = Π(x > θ) = 1. ii) b < θ: then x > θ is certainly false: N(x > θ) = Π(x > θ) = 0. iii) a ≤ θ ≤ b: then x > θ is possibly true or false: N(x > θ) = 0; Π(x > θ) = 1.

11 Basic properties
Π(A) = to what extent at least one element in A is consistent with π (= possible). N(A) = 1 – Π(Ac) = to what extent no element outside A is possible = to what extent π implies A. Π(A ∪ B) = max(Π(A), Π(B)); N(A ∩ B) = min(N(A), N(B)). Mind that most of the time Π(A ∩ B) < min(Π(A), Π(B)) and N(A ∪ B) > max(N(A), N(B)). Corollary: N(A) > 0 ⟹ Π(A) = 1.

12 Pioneers of possibility theory
In the 1950s, G.L.S. Shackle called the "degree of potential surprise" of an event its degree of impossibility. Potential surprise is valued on a disbelief scale, namely a positive interval of the form [0, y*], where y* denotes the absolute rejection of the event to which it is assigned. The degree of surprise of an event is the degree of surprise of its least surprising realization. He also introduced a notion of conditional possibility.

13 Pioneers of possibility theory
In his 1973 book, the philosopher David Lewis considered a relation between possible worlds he called "comparative possibility". He related this concept of possibility to a notion of similarity between possible worlds, used to define the truth conditions of counterfactual statements: for events A, B, C, A ≽ B implies C ∪ A ≽ C ∪ B. Such relations are exactly the ordinal counterparts of possibility measures.

14 Pioneers of possibility theory
The philosopher L. J. Cohen considered the problem of legal reasoning (1977): "Baconian probabilities", understood as degrees of provability. It is hard to prove someone guilty in a court of law by means of purely statistical arguments. A hypothesis and its negation cannot both have positive "provability". Such degrees of provability coincide with necessity measures.

15 Pioneers of possibility theory
Zadeh (1978) proposed an interpretation of membership functions of fuzzy sets as possibility distributions encoding flexible constraints induced by natural language statements. relationship between possibility and probability: what is probable must preliminarily be possible. refers to the idea of graded feasibility ("degrees of ease") rather than to the epistemic notion of plausibility. the key axiom of "maxitivity" for possibility measures is highlighted (also for fuzzy events).

16 Qualitative vs. quantitative possibility theories
Comparative: a complete pre-ordering ≥π on U; a well-ordered partition of U: E1 > E2 > … > En. Absolute: πx(s) ∈ L = finite chain, complete lattice, ... Quantitative: πx(s) ∈ [0, 1], integers, ...; one must indicate where the numbers come from. All theories agree on the fundamental maxitivity axiom Π(A ∪ B) = max(Π(A), Π(B)). Theories diverge on the conditioning operation.

17 Ordinal possibilistic conditioning
A Bayesian-like equation: Π(A ∩ B) = min(Π(B | A), Π(A)). Π(B | A) is the maximal solution to this equation: Π(B | A) = 1 if A ∩ B ≠ Ø and Π(A) = Π(A ∩ B); Π(B | A) = Π(A ∩ B) if Π(A) > Π(A ∩ B). N(B | A) = 1 – Π(Bc | A). Independence: Π(B | A) = Π(B) implies Π(A ∩ B) = min(Π(B), Π(A)). Not the converse!
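A sketch of min-based conditioning under the same finite-frame assumptions; cond_possibility picks the maximal solution of the equation above (names are illustrative):

```python
# Sketch of min-based (ordinal) conditioning: the maximal solution of
# Pi(A & B) = min(Pi(B|A), Pi(A)).

def possibility(A, pi):
    return max((pi[s] for s in A), default=0.0)

def cond_possibility(B, A, pi):
    """Pi(B|A): 1 where the conjunction reaches the plausibility of A,
    Pi(A & B) otherwise."""
    A, B = set(A), set(B)
    pAB = possibility(A & B, pi)
    pA = possibility(A, pi)
    return 1.0 if pAB == pA and pAB > 0 else pAB

def cond_necessity(B, A, pi, S):
    """N(B|A) = 1 - Pi(B^c | A)."""
    return 1.0 - cond_possibility(set(S) - set(B), A, pi)
```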

18 QUALITATIVE POSSIBILISTIC REASONING
The set of states of affairs is partitioned via π into a totally ordered set of clusters of equally plausible states: E1 (normal worlds) > E2 > ... > En+1 (impossible worlds). ASSUMPTION: the current situation is normal; by default the state of affairs is in E1. N(A) > 0 iff Π(A) > Π(Ac) iff A is true in all the normal situations. Then A is accepted as an expected truth. Accepted events are closed under deduction.

19 A CALCULUS OF PLAUSIBLE INFERENCE
(B) ≥(C) means « Comparing propositions on the basis of their most normal models » ASSUMPTION for computing (B): the current situation is the most normal where B is true. PLAUSIBLE REASONING = “ reasoning as if the current situation were normal” and jumping to accepted conclusions obtained from the normality assumption. DIFFERENT FROM PROBABILISTIC REASONING BASED ON AVERAGING

20 ACCEPTANCE IS DEFEASIBLE
If B is learned to be true, then the normal situations become the most plausible ones in B, and the accepted beliefs are revised accordingly. Accepting A in the context where B is true: Π(A ∩ B) > Π(Ac ∩ B) iff N(A | B) > 0 (conditioning). One may have both N(A) > 0 and N(Ac | B) > 0: non-monotony.

21 PLAUSIBLE INFERENCE WITH A POSSIBILITY DISTRIBUTION
Given a non-dogmatic possibility distribution π on S (π(s) > 0, ∀s) and propositions A, B: A |=π B iff Π(A ∩ B) > Π(A ∩ Bc). It means that B is true in the most plausible worlds where A is true. This is a form of inference first proposed by Shoham in nonmonotonic reasoning.

22 PLAUSIBLE INFERENCE WITH A POSSIBILITY DISTRIBUTION
(figure: the maximally plausible worlds in A select the plausible conclusions)

23 Example (continued)
Pieces of knowledge like ∆ = {b → f, p → b, p → ¬f} can be expressed by constraints: Π(b ∧ f) > Π(b ∧ ¬f); Π(p ∧ b) > Π(p ∧ ¬b); Π(p ∧ ¬f) > Π(p ∧ f). The minimally specific π* ranks normal situations first: ¬p ∧ b ∧ f, ¬p ∧ ¬b; then abnormal situations: ¬f ∧ b; last, totally absurd situations: f ∧ p, ¬b ∧ p.
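One way to obtain such a ranking is the rule-tolerance stratification that underlies the minimal specificity principle; the following Python sketch (an illustration, not the authors' algorithm verbatim) reproduces the three plausibility levels of the bird/penguin example:

```python
from itertools import product

# Illustrative sketch: stratify default rules by tolerance, then rank
# worlds by the highest stratum they violate (0 = fully normal).

VARS = ("b", "f", "p")  # bird, flies, penguin
# a rule (A, B) reads "if A then normally B"
RULES = [
    (lambda w: w["b"], lambda w: w["f"]),        # b ~> f
    (lambda w: w["p"], lambda w: w["b"]),        # p ~> b
    (lambda w: w["p"], lambda w: not w["f"]),    # p ~> not f
]

WORLDS = [dict(zip(VARS, bits)) for bits in product((True, False), repeat=3)]

def tolerated(rule, rules):
    """Rule A~>B is tolerated if some world verifies A and B while
    satisfying every rule of `rules` materially."""
    A, B = rule
    return any(A(w) and B(w) and
               all(not A2(w) or B2(w) for A2, B2 in rules)
               for w in WORLDS)

strata, remaining = [], list(RULES)
while remaining:
    level = [r for r in remaining if tolerated(r, remaining)]
    if not level:        # inconsistent base: stop
        break
    strata.append(level)
    remaining = [r for r in remaining if r not in level]

def rank(w):
    """i + 1 if a rule of stratum i is violated (take the max), else 0."""
    k = 0
    for i, level in enumerate(strata):
        if any(A(w) and not B(w) for A, B in level):
            k = i + 1
    return k

for w in sorted(WORLDS, key=rank):
    print({v: int(w[v]) for v in VARS}, "rank", rank(w))
# rank 0: normal worlds; rank 1: flightless birds; rank 2: absurd penguins
```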

24 Example (back to possibilistic logic)
→ = material implication. Ranking of rules: b → f has less priority than the others; according to π*: N*(p → ¬f) = N*(p → b) > N*(b → f). Possibilistic base: K = {(b → f, α), (p → b, β), (p → ¬f, β)}, with α < β.

25 Applications of qualitative possibility theory
Exception-tolerant Reasoning in rule bases Belief revision and inconsistency handling in deductive knowledge bases Handling priority in constraint-based reasoning Decision-making under uncertainty with qualitative criteria (scheduling) Abductive reasoning for diagnosis under poor causal knowledge (satellite faults, car engine test-benches)

26 ABSOLUTE APPROACH TO QUALITATIVE DECISION
A set of states S; a set of consequences X. A decision = a mapping f from S to X: f(s) is the consequence of decision f when the state is known to be s. Problem: rank-order the set of decisions in X^S when the state is ill-known and there is a utility function on X. This is Savage's framework.

27 ABSOLUTE APPROACH TO QUALITATIVE DECISION
Uncertainty on states is possibilistic: a function π: S → L, where L is a totally ordered plausibility scale. Preference on consequences: a qualitative utility function µ: X → U, with µ(x) = 0 for a totally rejected consequence, µ(y) > µ(x) when y is preferred to x, and µ(x) = 1 for a preferred consequence.

28 Possibilistic decision criteria
Qualitative pessimistic utility (Whalen): UPES(f) = min_{s∈S} max(n(π(s)), µ(f(s))), where n is the order-reversing map of V. Low utility: some plausible state has bad consequences. Qualitative optimistic utility (Yager): UOPT(f) = max_{s∈S} min(π(s), µ(f(s))). High utility: some plausible state has good consequences.
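A sketch of both criteria, assuming plausibility and utility are commensurate on [0, 1] so that the order-reversing map is n(a) = 1 – a; the states and acts are invented:

```python
# Illustrative sketch of the two qualitative decision criteria on [0, 1].

def u_pes(act, pi):
    """Pessimistic utility: min_s max(1 - pi(s), u(f(s)))."""
    return min(max(1.0 - p, act[s]) for s, p in pi.items())

def u_opt(act, pi):
    """Optimistic utility: max_s min(pi(s), u(f(s)))."""
    return max(min(p, act[s]) for s, p in pi.items())

# two states; an act maps each state to the utility of its consequence
pi = {"rain": 1.0, "sun": 0.4}           # rain fully plausible
umbrella = {"rain": 0.8, "sun": 0.6}
no_umbrella = {"rain": 0.1, "sun": 1.0}
print(u_pes(umbrella, pi), u_pes(no_umbrella, pi))  # 0.6 vs 0.1
print(u_opt(umbrella, pi), u_opt(no_umbrella, pi))  # 0.8 vs 0.4
```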

29 Fuzzy pattern-matching indices
The pessimistic and optimistic utilities are well-known fuzzy pattern-matching indices. In fuzzy expert systems: µ = membership function of the rule condition, π = imprecision of the input fact. In fuzzy databases: µ = membership function of the query, π = distribution of the stored imprecise data. In pattern recognition: µ = membership function of the attribute template, π = distribution of an ill-known object attribute.

30 Commensurability and the criterion of Wald
Assumption: the plausibility and preference scales L and U are commensurate: there exists a common scale V that contains both L and U, so that confidence and utility levels can be compared (certainty equivalent of a lottery). If only a subset E of plausible states is known (π = µE): UPES(f) = min_{s∈E} µ(f(s)), the utility of the worst consequence in E: the criterion of Wald under ignorance. UOPT(f) = max_{s∈E} µ(f(s)).

31 On a linear state space (figure)

32 Pessimistic qualitative utility of binary acts xAy, with µ(x) > µ(y):
xAy(s) = x if A occurs, = y if its complement Ac occurs. UPES(xAy) = median{µ(x), N(A), µ(y)}. Interpretation: if the agent is sure enough of A, it is as if the consequence were x: UPES(xAy) = µ(x). If he is not sure enough about A, it is as if the consequence were y: UPES(xAy) = µ(y). Otherwise, the utility reflects certainty: UPES(xAy) = N(A). For UOPT(xAy): replace N(A) by Π(A).

33 Representation theorem for pessimistic possibilistic criteria
Suppose the preference relation ≽ on acts obeys the following properties: (X^S, ≽) is a complete preorder; there are two acts such that f ≻ g; for all A, all f, and all constant acts x, y: x ≽ y implies xAf ≽ yAf; f ≻ h and g ≻ h imply f ∧ g ≻ h (∧ = pointwise minimum); if x is constant, h ≻ x and h ≻ g imply h ≻ x ∧ g. Then there exists a finite chain L, an L-valued necessity measure on S and an L-valued utility function µ such that ≽ is representable by the pessimistic possibilistic criterion UPES(f).

34 Merits and limitations of qualitative decision theory
Provides a foundation for possibility theory: possibility theory is justified by observing how a decision-maker ranks acts. Applies to one-shot decisions (no compensation/accumulation effects across repeated decision steps). Presupposes that consecutive qualitative value levels are distant from each other (negligibility effects).

35 Quantitative possibility theory
Membership functions of fuzzy sets Natural language descriptions pertaining to numerical universes (fuzzy numbers) Results of fuzzy clustering Semantics: metrics, proximity to prototypes Upper probability bound Random experiments with imprecise outcomes Consonant approximations of convex probability sets Semantics: frequentist, subjectivist (gambles)...

36 Quantitative possibility theory
Orders of magnitude of very small probabilities: degrees of impossibility κ(A) ranging over the integers, with κ(A) = n iff P(A) is of the order of ε^n for a small ε. Likelihood functions (P(A | x), where x varies) behave like possibility distributions: P(A | B) ≤ max_{x∈B} P(A | x).

37 POSSIBILITY AS UPPER PROBABILITY
Given a numerical possibility distribution π, define P(π) = {probabilities P | P(A) ≤ Π(A) for all A}. Then it holds that Π(A) = sup{P(A) | P ∈ P(π)} and N(A) = inf{P(A) | P ∈ P(π)}. So π is a faithful representation of a family of probability measures.

38 From confidence sets to possibility distributions
Consider a nested family of sets E1 ⊆ E2 ⊆ … ⊆ En, a set of positive numbers α1, …, αn in [0, 1], and the family of probability functions P = {P | P(Ei) ≥ αi for all i}. P is always representable by means of a possibility measure. Its possibility distribution is precisely π(s) = min_i max(µEi(s), 1 – αi).
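A direct transcription of this formula in Python, under the assumption of finitely many nested sets; the sets and levels below are illustrative:

```python
# Sketch: the possibility distribution representing nested confidence sets
# E1 ⊆ ... ⊆ En with P(Ei) >= alpha_i: pi(s) = min_i max(Ei(s), 1 - alpha_i).

def confidence_sets_to_pi(S, sets_with_levels):
    """sets_with_levels: list of (set Ei, level alpha_i), Ei nested."""
    return {
        s: min(max(1.0 if s in E else 0.0, 1.0 - a)
               for E, a in sets_with_levels)
        for s in S
    }

S = range(60, 91)
nested = [(set(range(72, 79)), 0.5),   # P(72..78) >= 0.5
          (set(range(70, 81)), 0.9)]   # P(70..80) >= 0.9
pi = confidence_sets_to_pi(S, nested)
print(pi[75], pi[70], pi[65])  # 1.0, 0.5, 0.1 (up to float rounding)
```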

39 Random set view
Let α1 = 1 > α2 > … > αn > αn+1 = 0 be the values taken by π, and Ai = {s : π(s) ≥ αi}. Let mi = αi – αi+1; then m1 + … + mn = 1: a basic probability assignment (Shafer), with π(s) = Σ_{i: s∈Ai} mi (the one-point coverage function). Only in the consonant case can m be recalculated from π.
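A sketch of the correspondence in the finite consonant case: the nested cuts of π carry the masses αi – αi+1, and the one-point coverage sum recovers π (values are illustrative):

```python
# Sketch: consonant random-set view of a finite possibility distribution.

def pi_to_masses(pi):
    """Masses m_i = alpha_i - alpha_{i+1} on the nested cuts A_i."""
    levels = sorted(set(pi.values()) - {0.0}, reverse=True)  # a1=1 > a2 > ...
    levels_next = levels[1:] + [0.0]
    return [({s for s, v in pi.items() if v >= a}, a - a_next)
            for a, a_next in zip(levels, levels_next)]

def coverage(masses, s):
    """One-point coverage: sum of the masses of the sets containing s."""
    return sum(m for A, m in masses if s in A)

pi = {"a": 1.0, "b": 0.7, "c": 0.2}
masses = pi_to_masses(pi)  # [({a}, 0.3), ({a,b}, 0.5), ({a,b,c}, 0.2)]
assert all(abs(coverage(masses, s) - v) < 1e-9 for s, v in pi.items())
```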

40 CONDITIONAL POSSIBILITY MEASURES
A Coxian axiom: Π(A ∩ C) = Π(A | C) * Π(C), with * = product. Then Π(A | C) = Π(A ∩ C) / Π(C), and N(A | C) = 1 – Π(Ac | C). This is Dempster's rule of conditioning (it preserves sup-maxitivity). For the revision of possibility distributions: minimal change of Π when N(C) = 1. It improves the state of information (reduction of the focal elements).

41 Bayesian possibilistic conditioning
Π(A |b C) = sup{P(A | C) : P ≤ Π, P(C) > 0}; N(A |b C) = inf{P(A | C) : P ≤ Π, P(C) > 0}. The result is still a possibility measure, with π(s |b C) = π(s) · max(1, 1/(π(s) + N(C))) for s in C. It can be shown that Π(A |b C) = Π(A ∩ C) / (Π(A ∩ C) + N(Ac ∩ C)) and N(A |b C) = N(A ∩ C) / (N(A ∩ C) + Π(Ac ∩ C)) = 1 – Π(Ac |b C). Useful for inference from generic knowledge based on observations.

42 Possibility-Probability transformations
Why? Fusion of heterogeneous data; decision-making (betting according to a possibility distribution leads to probability); extraction of a representative value; simplified non-parametric imprecise probabilistic models.

43 Elementary forms of probability-possibility transformations have existed for a long time
POSS → PROB: the Laplace indifference principle, "all that is equipossible is equiprobable": changing a uniform possibility distribution into a uniform probability distribution. PROB → POSS: confidence intervals: replacing a probability distribution by an interval A with confidence level c defines a possibility distribution π(x) = 1 if x ∈ A, = 1 – c if x ∉ A.

44 Possibility-Probability transformations : BASIC PRINCIPLES
Possibility-probability consistency: P ≤ Π. Preserving the ordering of events: P(A) ≥ P(B) ⟹ Π(A) ≥ Π(B); or of elementary events only: π(x) > π(x') if and only if p(x) > p(x') (order preservation). Informational criteria: from Π to P, preservation of symmetries (Shapley value rather than maximal entropy); from P to Π, optimization of information content (maximization or minimization of specificity).

45 From OBJECTIVE probability to possibility
Rationale: given a probability p, try to preserve as much information as possible. Select a most specific element of the set PI(P) = {Π : Π ≥ P} of possibility measures dominating P, such that π(x) > π(x') iff p(x) > p(x'); this may be weakened into: p(x) > p(x') implies π(x) > π(x'). With outcomes indexed so that p1 ≥ … ≥ pn, the result is πi = Σ_{j=i,…,n} pj (case of no ties).
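A sketch of this transform for a finite distribution without ties; sorting by decreasing probability makes each πi the tail sum Σ_{j≥i} pj:

```python
# Sketch of the optimal probability-to-possibility transform (no ties).

def prob_to_poss(p):
    """p: dict outcome -> probability. Returns the most specific
    possibility distribution dominating p (order-preserving)."""
    items = sorted(p.items(), key=lambda kv: kv[1], reverse=True)
    pi, tail = {}, 1.0
    for s, ps in items:
        pi[s] = tail       # pi_i = p_i + p_{i+1} + ... + p_n
        tail -= ps
    return pi

p = {"a": 0.5, "b": 0.3, "c": 0.2}
print(prob_to_poss(p))  # {'a': 1.0, 'b': 0.5, 'c': 0.2} (up to rounding)
```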

46 From probability to possibility : Continuous case
The possibility distribution π obtained by transforming p then encodes the family of confidence intervals around the mode of p: the α-cut of π is the (1 – α)-confidence interval of p. The optimal symmetric transform of the uniform probability distribution is the triangular fuzzy number. The symmetric triangular fuzzy number (STFN) is a covering approximation of any probability with unimodal symmetric density p with the same mode: in other words, the α-cut of an STFN contains the (1 – α)-confidence interval of any such p.

47 From probability to possibility : Continuous case
IL = {x : p(x) ≥ θ} = [aL, aL + L] is the interval of length L with maximal probability. The most specific possibility distribution dominating p is the π such that ∀L > 0, π(aL) = π(aL + L) = 1 – P(IL).

48 Possibilistic view of probabilistic inequalities
Chebyshev inequality defines a possibility distribution that dominates any density with given mean and variance. The symmetric triangular fuzzy number (STFN) defines a possibility distribution that optimally dominates any symmetric density with given mode and bounded support.

49 From possibility to probability
Idea (Kaufmann, Yager, Chanas): pick a number α in [0, 1] at random, then pick an element at random in the α-cut of π. This is a generalized Laplacean indifference principle: change α-cuts into uniform probability distributions. Rationale: minimize arbitrariness by preserving the symmetry properties of the representation. The resulting probability distribution is: the centre of gravity of the polyhedron P(π); the pignistic transformation of belief functions (Smets); the Shapley value of the unanimity game N in game theory.
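A finite-case sketch of this transform: each mass αi – αi+1 is spread uniformly over the corresponding cut, which yields the centre of gravity of P(π) (values are illustrative):

```python
# Sketch of the possibility-to-probability (pignistic) transform.

def poss_to_prob(pi):
    levels = sorted(set(pi.values()) - {0.0}, reverse=True)
    levels_next = levels[1:] + [0.0]
    p = {s: 0.0 for s in pi}
    for a, a_next in zip(levels, levels_next):
        cut = [s for s, v in pi.items() if v >= a]
        for s in cut:                  # uniform share of the mass
            p[s] += (a - a_next) / len(cut)
    return p

pi = {"a": 1.0, "b": 0.5, "c": 0.2}
print(poss_to_prob(pi))  # a ≈ 0.717, b ≈ 0.217, c ≈ 0.067; sums to 1
```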

50 SUBJECTIVE POSSIBILITY DISTRIBUTIONS
Starting point: exploit the betting approach to subjective probability. A critique: the agent is forced to be additive by the rules of exchangeable bets. For instance, the agent provides a uniform probability distribution on a finite set whether (s)he knows nothing about the phenomenon concerned, or knows it to be purely random. Idea: assume that a subjective probability supplied by an agent is only a trace of the agent's beliefs.

51 SUBJECTIVE POSSIBILITY DISTRIBUTIONS
Assumption 1: Beliefs can be modelled by belief functions (masses m(A) summing to 1 assigned to subsets A). Assumption 2: The agent uses a probability function induced by his or her beliefs, using the pignistic transformation (Smets, 1990) or Shapley value. Method : reconstruct the underlying belief function from the probability provided by the agent by choosing among the isopignistic ones.

52 SUBJECTIVE POSSIBILITY DISTRIBUTIONS
There are clearly several belief functions with a prescribed Shapley value. Consider the least informative of those, in the sense of a non-specificity index (the expected cardinality of the random set): I(m) = Σ_A m(A)·card(A). RESULT: the least informative belief function whose Shapley value is p is unique and consonant.

53 SUBJECTIVE POSSIBILITY DISTRIBUTIONS
The least specific belief function, in the sense of maximizing I(m), is characterized by πi = Σ_{j=1,…,n} min(pj, pi). It is a probability-possibility transformation, previously suggested in 1983. This is the unique possibility distribution whose Shapley value is p. It gives results that are less specific than the confidence-interval approach to objective probability.
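A sketch of this transform; on the same example as the objective transform above, it indeed yields a less specific distribution:

```python
# Sketch of the subjective probability-to-possibility transform:
# pi_i = sum_j min(p_j, p_i), the least specific consonant belief
# function with pignistic (Shapley) value p.

def subjective_poss(p):
    return {s: sum(min(pj, ps) for pj in p.values())
            for s, ps in p.items()}

p = {"a": 0.5, "b": 0.3, "c": 0.2}
print(subjective_poss(p))
# {'a': 1.0, 'b': 0.8, 'c': 0.6} (up to rounding) -- pointwise above the
# objective transform {'a': 1.0, 'b': 0.5, 'c': 0.2} obtained earlier.
```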

54 Applications of quantitative possibility
Representing incomplete probabilistic data for uncertainty propagation in computations (but fuzzy interval analysis based on the extension principle differs from conservative probabilistic risk analysis) Systematizing some statistical methods (confidence intervals, likelihood functions, probabilistic inequalities) Defuzzification based on Choquet integral (linear with fuzzy number addition)

55 Applications of quantitative possibility
Uncertain reasoning : Possibilistic nets are a counterpart to Bayesian nets that copes with incomplete data. Similar algorithmic properties under Dempster conditioning (Kruse team) Data fusion : well suited for merging heterogeneous information on numerical data (linguistic, statistics, confidence intervals) (Bloch) Risk analysis : uncertainty propagation using fuzzy arithmetics, and random interval arithmetics when statistical data is incomplete (Lodwick, Ferson) Non-parametric conservative modelling of imprecision in measurements (Mauris)

56 Perspectives Quantitative possibility theory is not as well understood as probability theory. Objective vs. subjective possibility (à la De Finetti). How to use possibilistic conditioning in inference tasks? Bridge the gap with statistics and the confidence-interval literature (Fisher, likelihood reasoning). Higher-order moments of fuzzy intervals (variance, …) and links with fuzzy random variables. Quantitative possibilistic expectations: a decision-theoretic characterisation?

57 Conclusion Possibility theory is a simple and versatile tool for modeling uncertainty A unifying framework for modeling and merging linguistic knowledge and statistical data Useful to account for missing information in reasoning tasks and risk analysis A bridge between logic-based AI and probabilistic reasoning

58 Properties of inference |=
A |=π A if A ≠ Ø (restricted reflexivity). If A ≠ Ø, then A |=π Ø never holds (consistency preservation). The set {B : A |=π B} is deductively closed. If A ⊆ B and C |=π A, then C |=π B (right weakening rule, RW). If A |=π B and A |=π C, then A |=π B ∩ C (right AND).

59 Properties of inference |=
If A |=π C and B |=π C, then A ∪ B |=π C (left OR). If A |=π B and A ∩ B |=π C, then A |=π C (cut, weak transitivity). (But if A normally implies B, which normally implies C, then A may fail to imply C.) If A |=π B and A |=π Cc does not hold, then A ∩ C |=π B (rational monotony, RM): if B is normally expected when A holds, then B is expected to hold when both A and C hold, unless A normally implies not C.

60 REPRESENTATION THEOREM FOR POSSIBILISTIC ENTAILMENT
Let |= be a consequence relation on 2^S × 2^S. Define an induced partial relation on subsets as: A > B iff A ∪ B |= Bc, for A ≠ Ø. Theorem: if |= satisfies restricted reflexivity, right weakening, rational monotony, right AND and left OR, then > is the strict part of a possibility relation on events. So a consequence relation satisfying the above properties is representable by possibilistic inference, and it induces a complete plausibility preordering on the states.

61 A POSSIBILISTIC APPROACH TO MODELING RULES
A generic rule « if A then B » is modelled by Π(A ∧ B) > Π(A ∧ ¬B). This is a constraint that delimits a set of possibility distributions on the set of interpretations of the language. Applying the minimal specificity principle: Π(A ∧ B) = Π(¬A ∧ B) = Π(¬A ∧ ¬B) = 1 > Π(A ∧ ¬B).

62 MODELLING A SET OF DEFAULT RULES as a POSSIBILITY DISTRIBUTION
∆ = {Ai → Bi, i = 1, …, n}. ∆ defines a set of constraints on possibility distributions: Π(Ai ∧ Bi) > Π(Ai ∧ ¬Bi), i = 1, …, n. Π(∆) = the set of feasible π's with respect to ∆. One may compute π*: the least specific possibility distribution in Π(∆).

63 Plausible inference from a set of default rules
What « ∆ implies A → B » means. Cautious inference: ∆ |= A → B iff for all Π ∈ Π(∆), Π(A ∧ B) > Π(A ∧ ¬B). Possibilistic inference: ∆ |=* A → B iff Π*(A ∧ B) > Π*(A ∧ ¬B), for the least specific possibility measure Π* in Π(∆). This leads to a stratification of ∆ according to N*(¬Ai ∨ Bi).

64 Possibilistic logic A possibilistic knowledge base is a weighted set of propositional or first-order formulas pi: K = {(pi, αi), i = 1, …, n}, where αi > 0 is the level of priority or validity of pi; αi = 1 means certainty, αi = 0 means ignorance. This captures the idea of uncertain knowledge in an ordinal setting.

65 Possibilistic logic Axiomatization:
All the axioms of classical logic, with weight 1. Weighted modus ponens: {(p, α), (¬p ∨ q, β)} ⊢ (q, min(α, β)). OLD! Goes back to Aristotle's school. Idea: the validity of a chain of uncertain deductions is the validity of its weakest link. Syntactic inference K ⊢ (p, α) is well-defined.

66 Possibilistic logic Inconsistency becomes a graded notion: inc(K) = sup{α : K ⊢ (⊥, α)}. Refutation and resolution methods extend: K ⊢ (p, α) iff K ∪ {(¬p, 1)} ⊢ (⊥, α). Inference with a partially inconsistent knowledge base becomes non-trivial and nonmonotonic: K ⊢nt p iff K ⊢ (p, α) with α > inc(K).

67 Semantics of possibilistic logic
A weighted formula has a fuzzy set of models. If A = [p] is the set of models of p (a subset of S), (p, α) means N(A) ≥ α. The least specific possibility distribution induced by (p, α) is: π(p,α)(s) = max(µA(s), 1 – α) = 1 if p is true in state s, = 1 – α if p is false in state s.

68 Semantics of possibilistic logic
The fuzzy set of models of K is the intersection of the fuzzy sets of models of the (pi, αi), i = 1, …, n: πK(s) = min_{i=1,…,n} {1 – αi : s ∉ [pi]} (= 1 if s violates no formula), determined by the highest-priority formula violated by s. The p.d. πK is the least informed state of partial knowledge compatible with K.
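A semantic sketch in Python (not from the slides): compute πK for the bird/penguin base of slide 24 and read off a necessity degree; the weights 0.6 and 0.9, standing for α < β, are hypothetical:

```python
from itertools import product

# Illustrative sketch of possibilistic-logic semantics: pi_K from a
# weighted base, then N([p]) to read off entailed weights.

VARS = ("b", "f", "p")
ALPHA, BETA = 0.6, 0.9   # hypothetical weights, alpha < beta

def implies(a, b):  # material implication
    return (not a) or b

# K = {(b -> f, alpha), (p -> b, beta), (p -> not f, beta)}
K = [
    (lambda w: implies(w["b"], w["f"]), ALPHA),
    (lambda w: implies(w["p"], w["b"]), BETA),
    (lambda w: implies(w["p"], not w["f"]), BETA),
]

WORLDS = [dict(zip(VARS, bits)) for bits in product((True, False), repeat=3)]

def pi_K(w):
    """1 - weight of the strongest violated formula (1 if none violated)."""
    return min([1.0 - a for f, a in K if not f(w)], default=1.0)

def necessity(models):
    """N(A) = 1 - max{pi_K(w) : w outside A}."""
    outside = [w for w in WORLDS if w not in models]
    return 1.0 - max((pi_K(w) for w in outside), default=0.0)

flying_birds = [w for w in WORLDS if implies(w["b"], w["f"])]
print(necessity(flying_birds))  # 0.6: b -> f is entailed at level alpha
```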

69 Soundness and completeness
Monotonic semantic entailment follows Zadeh's entailment principle: K |= (p, α) stands for πK ≤ π(p,α). Theorem: K ⊢ (p, α) iff K |= (p, α). For the non-trivial inference under inconsistency: K ∪ {(p, 1)} ⊢nt q iff Π(q ∧ p) > Π(¬q ∧ p).

70 Possibilistic vs. fuzzy logics
Possibilistic logic: formulas are Boolean; truth is 2-valued; weighted formulas have fuzzy sets of models; validity is many-valued; degrees of validity are not compositional (except for conjunctions); it represents uncertainty. Fuzzy logic (Pavelka): formulas are non-Boolean; truth is many-valued; weighted formulas have crisp sets of models (cuts); validity is Boolean; degrees of truth are compositional; it represents real functions by means of logical formulas.

71 Example: IF BIRD THEN FLY; IF PENGUIN THEN BIRD; IF PENGUIN THEN NOT-FLY
K = {b → f, p → b, p → ¬f}, → = material implication. Classically: K ∪ {b} ⊢ f, but K ∪ {p} ⊢ contradiction. Using possibilistic logic, with α < min(β, γ): K = {(b → f, α), (p → b, β), (p → ¬f, γ)}. Then K ∪ {(b, 1)} ⊢ (f, α) and K ∪ {(b, 1)} ⊢nt f. inc(K ∪ {(p, 1), (b, 1)}) = α, and K ∪ {(p, 1), (b, 1)} ⊢ (¬f, min(β, γ)); hence K ∪ {(p, 1), (b, 1)} ⊢nt ¬f.

