CSM6120 Introduction to Intelligent Systems


1 CSM6120 Introduction to Intelligent Systems
Propositional and Predicate Logic 2

2 Syntax and semantics
Syntax
- Rules for constructing legal sentences in the logic
- Which symbols we can use (English: letters, punctuation)
- How we are allowed to combine symbols
Semantics
- How we interpret (read) sentences in the logic
- Assigns a meaning to each sentence
Example: “All lecturers are seven foot tall”
- A valid sentence (syntax)
- And we can understand the meaning (semantics)
- This sentence happens to be false (there is a counterexample)

3 Propositional logic
Syntax
- Propositions, e.g. “it is wet”
- Connectives: and, or, not, implies, iff (equivalent)
- Brackets (), T (true) and F (false)
Semantics (Classical/Boolean)
- Define how connectives affect truth
- “P and Q” is true if and only if P is true and Q is true
- Use truth tables to work out the truth of statements

4 Important concepts
- Soundness, completeness, validity (tautologies)
- Logical equivalences
- Models/interpretations
- Entailment
- Inference
- Clausal forms (CNF, DNF)

5 Resolution algorithm
Given a formula in conjunctive normal form, repeat:
- Find two clauses with complementary literals
- Apply resolution
- Add the resulting clause (if not already there)
If the empty clause results, the formula is not satisfiable
- It must have been obtained from P and NOT(P)
Otherwise, if we get stuck (and we will, eventually), the formula is guaranteed to be satisfiable
If we get stuck, why is it satisfiable?
- Consider the final set of clauses C
- Construct a satisfying assignment as follows: assign truth values to the variables in order x1, x2, …, xn
- If xj is the last chance to satisfy a clause (i.e. all the other variables in the clause came earlier and were set the wrong way), set xj to satisfy it; otherwise it doesn’t matter how it is set
- Suppose this fails (for the first time) at some point, i.e. xj must be set to true for one last-chance clause and false for another
- These two clauses would have resolved to something involving only up to xj-1 (not to the empty clause, of course), which must be satisfied
- But then one of the two original clauses must also be satisfied - contradiction
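The loop above can be sketched directly in code. A minimal Python sketch, assuming clauses are represented as frozensets of string literals with '~' marking negation (a representation chosen purely for illustration); the demo uses the sprinkler knowledge base from the next slide:

```python
from itertools import combinations

def resolve(c1, c2):
    """Return every clause obtained by resolving c1 and c2 on one
    complementary pair of literals."""
    resolvents = []
    for lit in c1:
        comp = lit[1:] if lit.startswith('~') else '~' + lit
        if comp in c2:
            resolvents.append(frozenset((c1 - {lit}) | (c2 - {comp})))
    return resolvents

def satisfiable(clauses):
    """Saturate the clause set with resolution. Returns False as soon
    as the empty clause appears; True once no new clause can be added
    (i.e. we are 'stuck' at a fixed point)."""
    clauses = set(clauses)
    while True:
        new = set()
        for c1, c2 in combinations(clauses, 2):
            for r in resolve(c1, c2):
                if not r:          # empty clause: came from P and NOT(P)
                    return False
                new.add(r)
        if new <= clauses:         # stuck: formula is satisfiable
            return True
        clauses |= new

# KB plus the negated goal ~SprinklersOn (see the following example slide)
kb = [frozenset({'FriendWet'}),
      frozenset({'~FriendWet', 'SprinklersOn'}),
      frozenset({'~SprinklersOn'})]
print(satisfiable(kb))   # False, so SprinklersOn is entailed
```

Saturation is exponential in the worst case; this sketch is only meant to mirror the repeat-until-stuck loop described above.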

6 Example
Our knowledge base:
1) FriendWetBecauseOfSprinklers
2) NOT(FriendWetBecauseOfSprinklers) OR SprinklersOn
Can we infer SprinklersOn? We add the negated goal:
3) NOT(SprinklersOn)
From 2) and 3), get 4) NOT(FriendWetBecauseOfSprinklers)
From 4) and 1), get the empty clause, so SprinklersOn is entailed

7 Horn clauses
A Horn clause is a CNF clause with at most one positive literal
- The positive literal is called the head; the negative literals are the body
- Prolog: head :- body1, body2, body3 …
- English: “To prove the head, prove body1, …”
- Implication: if (body1 and body2 and …) then head
Horn clauses form the basis of forward and backward chaining
- The Prolog language is based on Horn clauses
- Modus ponens is complete for Horn KBs
- Deciding entailment with Horn clauses is linear in the size of the knowledge base
- A definite clause with no negative literals is also called a fact
To decide whether some xj is entailed:
- Simply follow the implications (modus ponens) as far as you can, and see if you can reach xj
- xj is entailed if and only if it can be reached (everything not reached can be set to false)
- This can be implemented more efficiently by maintaining, for each implication, a count of how many of its left-hand-side variables have been reached
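The count-based linear-time entailment check can be sketched as follows. This is a Python sketch, with rules assumed to be (body, head) pairs over symbol names; the demo rules are the R1-R3 example used later in these slides:

```python
def horn_entails(facts, rules, query):
    """Decide entailment for a Horn KB in time linear in its size.
    count[i] tracks how many body symbols of rule i have not yet
    been reached; when it hits zero, the head is derived."""
    count = [len(body) for body, _ in rules]
    inferred = set()
    agenda = list(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in inferred:
            continue             # process each symbol only once
        inferred.add(p)
        for i, (body, head) in enumerate(rules):
            if p in body:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(head)
    return False

rules = [({'A', 'B'}, 'D'),      # R1: if A and B then D
         ({'B'}, 'C'),           # R2: if B then C
         ({'C', 'D'}, 'E')]      # R3: if C and D then E
print(horn_entails({'A', 'B'}, rules, 'E'))   # True
```

Each symbol is processed once and each rule's counter is decremented at most once per body symbol, which is what makes the check linear.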

8 Reasoning with Horn clauses
Chaining: simple methods used by most inference engines to produce a line of reasoning
Forward chaining (FC)
- For each new piece of data, generate all new facts, until the desired fact is generated
- Data-directed reasoning: the engine begins with the initial content of the workspace and proceeds toward a final conclusion
Backward chaining (BC)
- To prove the goal, find a clause that contains the goal as its head, and prove the body recursively (backtracking when the wrong clause is chosen)
- Goal-directed reasoning: the engine starts with a goal and finds knowledge to support that goal

9 Forward chaining algorithm
Read the initial facts
Begin
  Filter_phase: find the fired rules (those that match the facts)
  While fired rules not empty AND not end DO
    Choice_phase: analyse the conflicts, choose the most appropriate rule
    Apply the chosen rule
  End do
End
In outline: given a database of true facts,
- Apply all rules that match facts in the database
- Add their conclusions to the database
- Repeat until a goal is reached, or until no new facts can be added

10 Forward chaining example (1)
Suppose we have three rules:
R1: If A and B then D (= A ∧ B → D)
R2: If B then C
R3: If C and D then E
If facts A and B are present, we infer D from R1 and infer C from R2
With D and C inferred, we can now infer E from R3

11 Forward chaining example (2)
Rules:
R1: IF hot AND smoky THEN fire
R2: IF alarm-beeps THEN smoky
R3: IF fire THEN switch-sprinkler
Facts: alarm-beeps, hot
First cycle: R2 fires, adding smoky
Second cycle: R1 fires, adding fire
Third cycle: R3 fires, giving the action switch-sprinkler
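A naive version of this fire-until-fixed-point loop can be sketched in Python, using the three rules above (the set-based rule representation is an assumption for illustration):

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all in the fact
    base, until a full pass adds nothing new."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({'hot', 'smoky'}, 'fire'),        # R1
         ({'alarm-beeps'}, 'smoky'),        # R2
         ({'fire'}, 'switch-sprinkler')]    # R3
print(forward_chain({'alarm-beeps', 'hot'}, rules))
```

Starting from alarm-beeps and hot, the loop derives smoky, then fire, then switch-sprinkler, matching the three cycles described above.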

12 Forward chaining example (3)
Fire any rule whose premises are satisfied in the KB Add its conclusion to the KB until the query is found

13 Forward chaining AND-OR Graph
Multiple links joined by an arc indicate conjunction – every link must be proved Multiple links without an arc indicate disjunction – any link can be proved If we want to apply forward chaining to this graph we first process the known leaves A and B and then allow inference to propagate up the graph as far as possible.

14 Forward chaining
Applying forward chaining to this graph, we first process the known leaves A and B, then allow inference to propagate up the graph as far as possible
The numbers denote how many of each rule’s premises remain to be satisfied

20 Forward chaining: proof of completeness
FC derives every atomic sentence that is entailed by the KB
- FC reaches a fixed point where no new atomic sentences are derived
- Consider the final state as a model m, assigning true to every derived symbol and false to the rest
- Every clause in the original KB is true in m
- Hence m is a model of KB
- If KB ⊨ q, then q is true in every model of KB, including m; since q is atomic and true in m, q must have been derived

21 Backward chaining
Idea: work backwards from the query q
To prove q by BC:
- Check if q is known already, or
- Prove by BC all premises of some rule concluding q (i.e. try to prove each of that rule’s conditions)
Avoid loops: check if the new subgoal is already on the goal stack
Avoid repeated work: check if the new subgoal
- Has already been proved true, or
- Has already failed
Goal-driven, top-down reasoning: search from the hypothesis to find supporting facts
To prove goal G: if G is in the initial facts, it is proven; otherwise, find a rule which can be used to conclude G, and try to prove each of that rule’s conditions
Filter_phase
  IF the set of selected rules is empty THEN ask the user
  ELSE WHILE not end AND we have rules DO
    Choice_phase (choose a rule)
    Add the conditions of the rule
    IF a condition is not solved THEN put the condition as a goal to solve
  END WHILE
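The recursive BC procedure with a goal stack for loop avoidance can be sketched as follows (Python; rules are assumed to be (body, head) pairs, and the demo uses the R1-R3 example from the surrounding slides):

```python
def backward_chain(goal, facts, rules, stack=frozenset()):
    """Prove goal by BC: succeed if it is a known fact; otherwise try
    each rule whose head is the goal and prove its body recursively.
    `stack` holds the goals already being pursued, to avoid loops."""
    if goal in facts:
        return True
    if goal in stack:            # subgoal already on the goal stack
        return False
    for body, head in rules:
        if head == goal and all(
                backward_chain(b, facts, rules, stack | {goal})
                for b in body):
            return True
    return False

rules = [({'A', 'B'}, 'D'),      # R1
         ({'B'}, 'C'),           # R2
         ({'C', 'D'}, 'E')]      # R3
print(backward_chain('E', {'A', 'B'}, rules))   # True
```

A production sketch would also memoise subgoals that have already succeeded or failed, as the slide suggests; that is omitted here for brevity.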

22 Backward chaining example
The same three rules:
R1: If A and B then D
R2: If B then C
R3: If C and D then E
To prove hypothesis E, R3 requires us to prove C and D
R2 reduces the goal C to B, and R1 reduces the goal D to A and B

23 Example
Rules:
R1: IF hot AND smoky THEN fire
R2: IF alarm-beeps THEN smoky
R3: IF fire THEN switch-sprinkler
Facts (evidence): alarm-beeps, hot
Hypothesis: should I switch the sprinklers on?
- Goal switch-sprinkler: use R3, so prove fire
- Goal fire: use R1, so prove hot (a fact) and smoky
- Goal smoky: use R2, so prove alarm-beeps (a fact)

34 Comparison
Forward chaining
- From facts to valid conclusions
- Good when: the hypothesis is less clear; there is a very large number of possible conclusions; the true facts are known at the start
Backward chaining
- From hypotheses to relevant facts
- Good when: there is a limited number of (clear) hypotheses; determining the truth of facts is expensive; there is a large number of possible facts, mostly irrelevant

35 Forward vs. backward chaining
FC is data-driven
- Automatic, unconscious processing, e.g. routine decisions
- May do lots of work that is irrelevant to the goal
BC is goal-driven
- Appropriate for problem solving, e.g. “Where are my keys?”, “How do I start the car?”
- The complexity of BC can be much less than linear in the size of the KB

36 Application
Wide use in expert systems
- Backward chaining: diagnosis systems - start with a set of hypotheses and try to prove each one, asking additional questions of the user when a fact is unknown
- Forward chaining: design/configuration systems - see what can be done with the available components

37 Checking models
Sometimes we just want to find any model that satisfies the KB
Propositional satisfiability: determine whether an input propositional logic sentence (in CNF) is satisfiable
- We use a backtracking search to find a model
- DPLL, WalkSAT, etc. - lots of algorithms out there!
This is similar to finding solutions in constraint satisfaction problems
- More about CSPs in a later module
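A stripped-down DPLL-style backtracking search can be sketched as below. This Python sketch omits the unit-propagation and pure-literal steps of full DPLL, so it only illustrates the backtracking core; clauses are lists of string literals with '~' marking negation:

```python
def dpll(clauses, assignment=None):
    """Backtracking search for a model of a CNF formula.
    Returns a satisfying assignment dict, or None if unsatisfiable."""
    if assignment is None:
        assignment = {}
    # Simplify under the current assignment: drop satisfied clauses,
    # remove literals that are already false.
    simplified = []
    for clause in clauses:
        kept, satisfied = [], False
        for lit in clause:
            var, want = lit.lstrip('~'), not lit.startswith('~')
            if var in assignment:
                if assignment[var] == want:
                    satisfied = True
                    break
            else:
                kept.append(lit)
        if satisfied:
            continue
        if not kept:
            return None            # empty clause: dead end, backtrack
        simplified.append(kept)
    if not simplified:
        return assignment          # every clause satisfied: a model
    # Branch on a variable from the first remaining clause
    var = simplified[0][0].lstrip('~')
    for value in (True, False):
        result = dpll(clauses, {**assignment, var: value})
        if result is not None:
            return result
    return None

print(dpll([['P', 'Q'], ['~P'], ['~Q', 'R']]))
```

On the demo formula the search backtracks off P=True and finds the model P=False, Q=True, R=True.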

38 Predicate logic
(This and several of the following slides adapt Discrete Math - Module #1 - Logic, Topic #3 - Predicate Logic, 2017/4/21, (c) Michael P. Frank)
Predicate logic is an extension of propositional logic that permits concise reasoning about whole classes of entities
- Also termed predicate calculus or first-order logic
- The language Prolog is built on a subset of this
Propositional logic treats simple propositions (sentences) as atomic entities; in contrast, predicate logic distinguishes the subject of a sentence from its predicate
It is the formal notation for writing perfectly clear, concise, and unambiguous mathematical definitions, axioms, and theorems for any branch of mathematics
Predicate logic with function symbols, the “=” operator, and a few proof-building rules is sufficient for defining any conceivable mathematical system, and for proving anything that can be proved within that system!

39 Subjects and predicates
In the sentence “The dog is sleeping”:
- The phrase “the dog” denotes the subject - the object or entity that the sentence is about
- The phrase “is sleeping” denotes the predicate - a property that is true of the subject
In predicate logic, a predicate is modelled as a function P(·) from objects to propositions, i.e. a function that returns TRUE or FALSE
- P(x) = “x is sleeping” (where x is any object), or Is_sleeping(dog)
- Tree(a) is true if a = oak, false if a = daffodil

40 More about predicates
Convention: lowercase variables x, y, z… denote objects/entities; uppercase variables P, Q, R… denote propositional functions (predicates)
Keep in mind that the result of applying a predicate P to an object x is the proposition P(x)
- But the predicate P itself (e.g. P = “is sleeping”) is not a proposition (not a complete sentence)
- E.g. if P(x) = “x is a prime number”, then P(3) is the proposition “3 is a prime number”

41 Propositional functions
Predicate logic generalizes the grammatical notion of a predicate to propositional functions of any number of arguments, each of which may take any grammatical role that a noun can take
E.g. let P(x,y,z) = “x gave y the grade z”; then if x = “Mike”, y = “Mary”, z = “A”,
P(x,y,z) = “Mike gave Mary the grade A”
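In code, a predicate is literally a function from objects to truth values. A toy Python sketch (the universes of sleepers and the grade data are invented purely for illustration):

```python
# A predicate maps objects to propositions, i.e. returns True/False
def is_sleeping(x):
    return x in {'dog', 'cat'}           # toy universe of sleepers

def gave_grade(x, y, z):
    """P(x, y, z) = 'x gave y the grade z' (toy data)."""
    grades = {('Mike', 'Mary'): 'A'}
    return grades.get((x, y)) == z

print(is_sleeping('dog'))                # True
print(gave_grade('Mike', 'Mary', 'A'))   # True
```

Note that `is_sleeping` itself is not a proposition; only its application to an object, such as `is_sleeping('dog')`, yields one.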

42 Reasoning
KB:
(1) student(S) ∧ studies(S,ai) → studies(S,prolog)
(2) student(T) ∧ studies(T,expsys) → studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)
(1) and (2) are rules; (3) and (4) are facts
With the information in (3) and (4), rule (2) can fire (but rule (1) can’t), by matching (unifying) joe with T
- This gives a new piece of knowledge, studies(joe,ai)
With this new knowledge, rule (1) can now fire, unifying joe with S

43 Reasoning
KB:
(1) student(S) ∧ studies(S,ai) → studies(S,prolog)
(2) student(T) ∧ studies(T,expsys) → studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)
We can apply modus ponens to this twice (FC) to get studies(joe,prolog)
This can then be added to our knowledge base as a new fact
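Firing these rules mechanically requires matching (unifying) the rule variables against known facts. A minimal Python sketch, treating capitalised strings as variables, Prolog-style (the tuple representation of atoms is an assumption for illustration):

```python
def is_var(t):
    # Convention: capitalised strings are variables (Prolog-style)
    return isinstance(t, str) and t[:1].isupper()

def match(pattern, fact, bindings):
    """Unify a pattern tuple with a ground fact, extending bindings;
    return the extended bindings, or None on mismatch."""
    if len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in b and b[p] != f:
                return None
            b[p] = f
        elif p != f:
            return None
    return b

def match_all(body, facts, bindings):
    """Yield every binding that satisfies all patterns in the body."""
    if not body:
        yield bindings
        return
    for fact in facts:
        b = match(body[0], fact, bindings)
        if b is not None:
            yield from match_all(body[1:], facts, b)

def fire(rules, facts):
    """Forward chaining: fire rules until no new facts are derived."""
    facts = set(facts)
    while True:
        new = set()
        for body, head in rules:
            for b in match_all(body, facts, {}):
                derived = tuple(b.get(t, t) for t in head)
                if derived not in facts:
                    new.add(derived)
        if not new:
            return facts
        facts |= new

rules = [([('student', 'S'), ('studies', 'S', 'ai')], ('studies', 'S', 'prolog')),
         ([('student', 'T'), ('studies', 'T', 'expsys')], ('studies', 'T', 'ai'))]
facts = {('student', 'joe'), ('studies', 'joe', 'expsys')}
print(fire(rules, facts))
```

As on the slide, rule (2) fires first (T = joe), which lets rule (1) fire on the next pass, deriving studies(joe,prolog).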

44 Clause form
We can express any predicate calculus statement in clause form
This enables us to work with just simple OR (disjunction: ∨) operators, rather than having to deal with implication (→) and AND (∧), thus allowing us to work towards a resolution proof

45 Example
Let's put our previous example in clause form:
(1) ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)
(2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)
Is there a solution to studies(S,prolog)? (= “is there someone who studies Prolog?”)
Negate it: ¬studies(S,prolog)

46 Example
(1) ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)
(2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)
Start from the negated goal ¬studies(S,prolog)
- Resolve with clause (1): ¬student(S) ∨ ¬studies(S,ai)
- Resolve with clause (2) (S = T): ¬student(S) ∨ ¬studies(S,expsys)
- Resolve with clause (4) (S = joe): ¬student(joe)
- Finally, resolve this with clause (3), and we have nothing left - the empty clause

47 Example
These are all the same logically...
(1) student(S) ∧ studies(S,ai) → studies(S,prolog)
(2) student(T) ∧ studies(T,expsys) → studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

(1) ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)
(2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

(1) studies(S,prolog) ← student(S) ∧ studies(S,ai)
(2) studies(T,ai) ← student(T) ∧ studies(T,expsys)
(3) student(joe) ←
(4) studies(jooe,expsys) ←

48 Note the last two...
(3) student(joe) ←
(4) studies(joe,expsys) ←
Joe is a student, and is studying expsys; these don't depend on anything - they are statements/facts
So there is nothing to the right of the ←

49 Universes of discourse
The power of distinguishing objects from predicates is that it lets you state things about many objects at once
E.g., let P(x) = “x+1 > x”. We can then say “for any number x, P(x) is true” instead of (0+1>0) ∧ (1+1>1) ∧ (2+1>2) ∧ …
The collection of values that a variable x can take is called x’s universe of discourse (or simply ‘universe’)

50 Quantifier expressions
Quantifiers provide a notation that allows us to quantify (count) how many objects in the universe of discourse satisfy a given predicate
“∀” is the FORALL or universal quantifier
- ∀x P(x) means for all x in the universe, P holds
“∃” is the EXISTS or existential quantifier
- ∃x P(x) means there exists an x in the universe (that is, 1 or more) such that P(x) is true

51 The universal quantifier ∀
Example:
- Let the universe of x be parking spaces at AU
- Let P(x) be the predicate “x is full”
Then the universal quantification of P(x), ∀x P(x), is the proposition:
- “All parking spaces at AU are full”
- i.e., “Every parking space at AU is full”
- i.e., “For each parking space at AU, that space is full”

52 The existential quantifier ∃
Example:
- Let the universe of x be parking spaces at AU
- Let P(x) be the predicate “x is full”
Then the existential quantification of P(x), ∃x P(x), is the proposition:
- “Some parking space at AU is full”
- “There is a parking space at AU that is full”
- “At least one parking space at AU is full”

53 Nesting of quantifiers
Example:
- Let the universe of x & y be people
- Let L(x,y) = “x likes y”
Then ∃y L(x,y) = “there is someone whom x likes”
And ∀x (∃y L(x,y)) = “everyone has someone whom they like”
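Over a finite universe, quantified sentences can be evaluated directly: ∀ becomes all() and ∃ becomes any(). A small Python sketch (the people and likes data are invented for illustration):

```python
people = ['alice', 'bob', 'carol']
likes = {('alice', 'bob'), ('bob', 'carol'), ('carol', 'carol')}

def L(x, y):
    """L(x,y) = 'x likes y' over our toy universe."""
    return (x, y) in likes

# ∀x ∃y L(x,y): "everyone has someone whom they like"
print(all(any(L(x, y) for y in people) for x in people))   # True

# ∃y ∀x L(x,y): "there is someone whom everyone likes"
print(any(all(L(x, y) for x in people) for y in people))   # False
```

The two prints differ, which makes the point that swapping nested quantifiers changes the meaning of the sentence.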

54 What does this mean?
∀C (owned-by(C,O) ∧ cat(C) → contented(C))
∀P (person(P) ∧ lives-in(P, Wales) → ∃H (harp(H) ∧ plays(P,H)))
So, if I know that there is a person called Delyth, and that Delyth lives in Wales, I can infer that Delyth plays the harp

55 Quantifier exercise
If R(x,y) = “x relies upon y” (x and y are people), express the following in unambiguous English:
- ∀x (∃y R(x,y)) = everyone has someone to rely on
- ∃y (∀x R(x,y)) = there’s a person whom everyone relies upon (including himself)!
- ∃x (∀y R(x,y)) = there’s some needy person who relies upon everybody (including himself)
- ∀y (∃x R(x,y)) = everyone has someone who relies upon them
- ∀x (∀y R(x,y)) = everyone relies upon everybody (including themselves)!

56 More fun with sentences
“Every dog chases its own tail”
- ∀d Chases(d, Tail-of(d))
- Alternative statement: ∀d ∀t Tail-of(t, d) → Chases(d, t)
“Every dog chases its own (unique) tail”
- ∀d ∃1t Tail-of(t, d) ∧ Chases(d, t)
- ≡ ∀d ∃t Tail-of(t, d) ∧ Chases(d, t) ∧ [∀t’ Chases(d, t’) → t’ = t]
“Only the wicked flee when no one pursues”
- ∀x Flees(x) ∧ [¬∃y Pursues(y, x)] → Wicked(x)
- Alternative: ∀x [∃y Flees(x, y)] ∧ [¬∃z Pursues(z, x)] → Wicked(x)

57 Propositional vs first-order inference
Inference rules for quantifiers
From ∀x (King(x) ∧ Greedy(x) → Evil(x)), universal instantiation lets us infer:
- King(John) ∧ Greedy(John) → Evil(John)
- King(Richard) ∧ Greedy(Richard) → Evil(Richard)
- King(Father(John)) ∧ Greedy(Father(John)) → Evil(Father(John))

58 Inference in First-Order Logic
We need to add new inference rules beyond those in propositional logic:
- Universal Elimination (e.g. substitute Liz for x)
- Existential Elimination (the introduced constant, e.g. Person1, must not exist elsewhere in the KB)
- Existential Introduction

59 Example of inference rules
Knowledge base:
- “It is illegal for students to copy music”
- “Joe is a student”
- “Every student copies music”
Is Joe a criminal?

60 Example cont...
- Universal Elimination
- Existential Elimination
- Modus Ponens

61 Human reasoning
Analogical
- We solved one problem one way - so maybe we can solve another one the same (or a similar) way
- I am hungry; chestnuts from a sweet chestnut tree tasted nice and were filling, so I will try eating the chestnuts from a horse chestnut tree
- Unfortunately, horse chestnuts are poisonous...
Nonmonotonic/default
- Handling contradictions, retracting previous conclusions
- Classic monotonic reasoning cannot contain contradictions, but we don't always reason like that: “all trees have green leaves”, “all beech trees have green leaves” - yet copper beech has red leaves. Still, the rule “all trees have green leaves” is useful
- A classic default example: bird(X) → fly(X) (all birds fly), but we need exceptions: bird(X) ∧ penguin(X) → ¬fly(X) (penguins don't fly)
- If we know bird(tweety), we conclude that tweety flies (even if we don't know what type of bird tweety is - it is a fairly safe assumption); but if we further learn that tweety is a penguin, then the conclusion is that tweety doesn't fly
Temporal
- Things may change over time: oak trees have green leaves, but by November they will not
- In machine terms, we can overcome this by introducing the concept of time, but of course this complicates things
Commonsense
- We know, for example, that if you throw a stone into the air, it will come back down, and that humans are generally under 100 years old and less than 2.2 m tall
- We have a huge amount of this knowledge - computers don't, and it has to be programmed explicitly
Inductive
- Induce new knowledge from observations; this one is difficult for machines
- Observe: pine trees have green leaves. Induce: all trees have green leaves. Unfortunately, that's not true - but it is nonetheless useful
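The tweety default rule can be sketched as a function whose conclusion is retracted as knowledge is added, which is exactly what makes the reasoning nonmonotonic (a toy Python sketch):

```python
def flies(bird, known_penguins):
    """Default rule: birds fly, unless known to be a penguin.
    Learning more (adding to known_penguins) can retract the
    earlier conclusion: the reasoning is nonmonotonic."""
    return bird not in known_penguins

print(flies('tweety', set()))          # True: default assumption holds
print(flies('tweety', {'tweety'}))     # False: conclusion retracted
```

In classical (monotonic) logic, adding a fact can never invalidate a previous conclusion; here it can, which is the point of the example.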

62 Beyond true and false
Multi-valued logics
- More than two truth values, e.g. true, false & unknown
- Fuzzy logic - truth value in [0,1]
Modal logics
- Modal operators define a mode for propositions, e.g. necessity, possibility
- Epistemic logics (belief)
- Temporal logics (time), e.g. always, eventually
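Fuzzy truth values in [0,1] can be combined with the common min/max/complement (Zadeh) operators; a small Python sketch, where the degree values are invented purely for illustration:

```python
# Fuzzy truth values live in [0, 1]; one common choice of operators
# (the Zadeh operators) is min for AND, max for OR, 1 - v for NOT.
def f_and(p, q): return min(p, q)
def f_or(p, q):  return max(p, q)
def f_not(p):    return 1 - p

tall = 0.7    # "x is tall" holds to degree 0.7 (illustrative value)
heavy = 0.4
print(f_and(tall, heavy))        # 0.4
print(f_or(tall, f_not(heavy)))  # 0.7
```

With only the values 0 and 1, these operators reduce to the classical Boolean connectives, so fuzzy logic strictly generalises the two-valued case.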

63 Types of logic
Language            | What exists                      | Belief of agent
Propositional logic | Facts                            | True/False/Unknown
First-order logic   | Facts, objects, relations        | True/False/Unknown
Temporal logic      | Facts, objects, relations, times | True/False/Unknown
Probability theory  | Facts                            | Degree of belief 0..1
Fuzzy logic         | Facts with degree of truth       | Degree of truth

