CSM6120 Introduction to Intelligent Systems


CSM6120 Introduction to Intelligent Systems Propositional and Predicate Logic 2

Syntax and semantics

Syntax
- Rules for constructing legal sentences in the logic
- Which symbols we can use (English: letters, punctuation)
- How we are allowed to combine symbols

Semantics
- How we interpret (read) sentences in the logic
- Assigns a meaning to each sentence

Example: "All lecturers are seven foot tall"
- A valid sentence (syntax)
- And we can understand the meaning (semantics)
- This sentence happens to be false (there is a counterexample)

Propositional logic

Syntax
- Propositions, e.g. "it is wet"
- Connectives: and, or, not, implies, iff (equivalent)
- Brackets (), T (true) and F (false)

Semantics (classical/Boolean)
- Defines how connectives affect truth
- "P and Q" is true if and only if P is true and Q is true
- Use truth tables to work out the truth of statements
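This truth-table semantics is easy to mechanise. The following is an illustrative Python sketch (the helper name `truth_table` is mine, not from the slides): it enumerates every assignment to the propositions and applies a connective to each.

```python
from itertools import product

def truth_table(connective, n=2):
    """Enumerate all 2^n truth assignments and apply the connective."""
    return {vals: connective(*vals) for vals in product([True, False], repeat=n)}

# "P and Q" is true if and only if P is true and Q is true
conj = truth_table(lambda p, q: p and q)

# "P implies Q" is classically defined as "not P, or Q"
implies = truth_table(lambda p, q: (not p) or q)

print(conj[(True, True)])     # True
print(conj[(True, False)])    # False
print(implies[(False, True)]) # True: a false antecedent makes the implication true
```

The same helper can check validity (a tautology is true in every row) or satisfiability (true in at least one row).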

Important concepts
- Soundness, completeness, validity (tautologies)
- Logical equivalences
- Models/interpretations
- Entailment
- Inference
- Clausal forms (CNF, DNF)

Resolution algorithm

Given a formula in conjunctive normal form, repeat:
- Find two clauses with complementary literals
- Apply resolution
- Add the resulting clause (if not already there)

If the empty clause results, the formula is not satisfiable
- It must have been obtained from P and NOT(P)
Otherwise, if we get stuck (and we will eventually), the formula is guaranteed to be satisfiable

If we get stuck, why is it satisfiable?
- Consider the final set of clauses C
- Construct a satisfying assignment as follows: assign truth values to the variables in order x1, x2, …, xn
- If xj is the last chance to satisfy a clause (i.e. all the other variables in the clause came earlier and were set the wrong way), then set xj to satisfy it
- Otherwise, it doesn't matter how it is set
- Suppose this fails (for the first time) at some point, i.e. xj must be set to true for one last-chance clause and false for another
- These two clauses would have resolved to something involving only up to xj-1 (not to the empty clause, of course), which must already be satisfied
- But then one of the two original clauses must also be satisfied: a contradiction

Example

Our knowledge base:
1) FriendWetBecauseOfSprinklers
2) NOT(FriendWetBecauseOfSprinklers) OR SprinklersOn

Can we infer SprinklersOn? We add the negation of the query:
3) NOT(SprinklersOn)
From 2) and 3), get 4) NOT(FriendWetBecauseOfSprinklers)
From 4) and 1), get the empty clause, so SprinklersOn is entailed
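The whole procedure can be sketched in a few lines of Python. This is an illustrative toy implementation (the clause representation and function names are mine, not from the slides): clauses are frozensets of string literals, with "~" marking negation, and we saturate under resolution until the empty clause appears or nothing new can be derived.

```python
def negate(lit):
    """~P <-> P"""
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of literals)."""
    out = []
    for lit in c1:
        if negate(lit) in c2:
            out.append(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def unsatisfiable(clauses):
    """Saturate under resolution; True iff the empty clause is derived."""
    clauses = set(clauses)
    while True:
        new = set()
        cl = list(clauses)
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                for r in resolve(cl[i], cl[j]):
                    if not r:
                        return True   # empty clause: contradiction
                    new.add(r)
        if new <= clauses:
            return False              # stuck: satisfiable
        clauses |= new

# KB from the example, plus the negated query ~SprinklersOn
kb = [frozenset({"FriendWetBecauseOfSprinklers"}),
      frozenset({"~FriendWetBecauseOfSprinklers", "SprinklersOn"})]
print(unsatisfiable(kb + [frozenset({"~SprinklersOn"})]))  # True: SprinklersOn entailed
print(unsatisfiable(kb))                                   # False: KB alone is satisfiable
```

Note the refutation pattern: we show KB entails SprinklersOn by showing KB plus NOT(SprinklersOn) is unsatisfiable.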

Horn clauses

A Horn clause is a CNF clause with at most one positive literal
- The positive literal is called the head; the negative literals form the body
- A definite clause with no negative literals is simply a fact
- Prolog: head :- body1, body2, body3 …
- English: "To prove the head, prove body1, …"
- Implication: if (body1 and body2 …) then head

Horn clauses form the basis of forward and backward chaining
- The Prolog language is based on Horn clauses
- Modus ponens is complete for Horn KBs
- Deciding entailment with Horn clauses is linear in the size of the knowledge base

To decide whether some symbol xj is entailed:
- Simply follow the implications (modus ponens) as far as you can, and see if you can reach xj
- xj is entailed if and only if it can be reached (everything not reached can be set to false)
- This can be implemented efficiently by maintaining, for each implication, a count of how many of its body symbols have been reached
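The count-based linear entailment check described above might be sketched as follows (an illustrative toy, with hypothetical names): each rule carries a count of unreached body symbols, each symbol is processed once, so the run is linear in the size of the KB.

```python
def horn_entails(facts, rules, query):
    """Decide entailment for a Horn KB by forward chaining.
    rules: list of (body, head) pairs. For each rule we keep a count of
    how many body symbols have not yet been reached; when a count hits
    zero, the rule fires and its head joins the agenda."""
    count = [len(body) for body, _ in rules]
    reached = set()
    agenda = list(facts)
    while agenda:
        p = agenda.pop()
        if p == query:
            return True
        if p in reached:
            continue
        reached.add(p)
        for i, (body, head) in enumerate(rules):
            if p in body:
                count[i] -= 1
                if count[i] == 0:
                    agenda.append(head)   # all premises reached: fire the rule
    return False

rules = [(["p", "q"], "r"),   # p and q -> r
         (["r"], "s")]        # r -> s
print(horn_entails(["p", "q"], rules, "s"))  # True
print(horn_entails(["p"], rules, "s"))       # False
```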

Reasoning with Horn clauses

Chaining: simple methods used by most inference engines to produce a line of reasoning

Forward chaining (FC)
- For each new piece of data, generate all new facts, until the desired fact is generated
- The engine begins with the initial content of the workspace and proceeds toward a final conclusion
- Data-directed reasoning

Backward chaining (BC)
- To prove the goal, find a clause that contains the goal as its head, and prove its body recursively
- (Backtrack when the wrong clause is chosen)
- The engine starts with a goal and finds knowledge to support that goal
- Goal-directed reasoning

Forward chaining algorithm

Read the initial facts
Begin
  Filter phase: find the fired rules (those whose conditions match the facts)
  While fired rules not empty AND not end DO
    Choice phase: analyse the conflicts, choose the most appropriate rule
    Apply the chosen rule
  End do
End

In other words:
- Given a database of true facts
- Apply all rules whose conditions match facts in the database
- Add their conclusions to the database
- Repeat until a goal is reached, or until no new facts can be added

Forward chaining example (1)

Suppose we have three rules:
R1: If A and B then D (= A ∧ B → D)
R2: If B then C
R3: If C and D then E

If facts A and B are present, we infer D from R1 and infer C from R2
With D and C inferred, we now infer E from R3
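A naive fixed-point version of this loop is easy to sketch in Python (an illustrative toy, not from the slides): keep firing any rule whose premises are all known until nothing new is added.

```python
def forward_chain(facts, rules):
    """Repeatedly fire any rule whose premises are all known,
    until no new facts are added (a naive fixed-point sketch)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)   # fire the rule
                changed = True
    return facts

rules = [(["A", "B"], "D"),   # R1: If A and B then D
         (["B"], "C"),        # R2: If B then C
         (["C", "D"], "E")]   # R3: If C and D then E
print(sorted(forward_chain(["A", "B"], rules)))  # ['A', 'B', 'C', 'D', 'E']
```

Starting from A and B, the first pass adds D (R1) and C (R2); the second pass adds E (R3), exactly as in the slide.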

Forward chaining example (2)

Rules:
R1: IF hot AND smoky THEN fire
R2: IF alarm-beeps THEN smoky
R3: IF fire THEN switch-sprinkler

Facts: alarm-beeps, hot

First cycle: R2 holds, adding smoky
Second cycle: R1 holds, adding fire
Third cycle: R3 holds, giving the action switch-sprinkler

Forward chaining example (3)

Fire any rule whose premises are satisfied in the KB, add its conclusion to the KB, and repeat until the query is found

Forward chaining AND-OR graph

- Multiple links joined by an arc indicate conjunction: every link must be proved
- Multiple links without an arc indicate disjunction: any link can be proved

If we want to apply forward chaining to this graph, we first process the known leaves A and B, and then allow inference to propagate up the graph as far as possible.

Forward chaining

We process the known leaves A and B first, then allow inference to propagate up the graph as far as possible
The numbers denote how many of each rule's premises are currently still unsatisfied

Forward chaining

Proof of completeness: FC derives every atomic sentence that is entailed by KB
- FC reaches a fixed point where no new atomic sentences are derived
- Consider the final state as a model m, assigning true to the symbols inferred and false to the rest
- Every clause in the original KB is true in m (if a clause's body were satisfied but its head false, FC could still fire it, contradicting the fixed point)
- Hence m is a model of KB
- If KB ╞ q, then q is true in every model of KB, including m, so q has been derived

Backward chaining

Idea: work backwards from the query q
To prove q by BC:
- Check if q is known already, or
- Prove by BC all premises of some rule concluding q (i.e. try to prove each of that rule's conditions)

Avoid loops: check if a new subgoal is already on the goal stack
Avoid repeated work: check if a new subgoal has already been proved true, or has already failed

Goal-driven reasoning (top down): search from the hypothesis and find supporting facts
To prove goal G: if G is in the initial facts, it is proven; otherwise, find a rule which can be used to conclude G, and try to prove each of that rule's conditions

Filter phase:
IF the set of selected rules is empty THEN
  Ask the user
ELSE
  WHILE not end AND we have rules DO
    Choice phase (choose a rule)
    Add the conditions of the rule
    IF a condition is not solved THEN put the condition as a goal to solve
  END WHILE

Backward chaining example

The same three rules:
R1: If A and B then D
R2: If B then C
R3: If C and D then E

If E is our hypothesis, then by R3 we must establish C and D
R2 reduces the goal C to B, and R1 reduces the goal D to A and B
So E is proven if the facts A and B hold
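The recursive procedure, including the goal-stack loop check described earlier, can be sketched in Python (an illustrative toy; names are mine, not from the slides):

```python
def backward_chain(goal, facts, rules, stack=frozenset()):
    """Prove goal by BC: it is a known fact, or some rule has it as head
    and every body condition can itself be proved. The stack of open
    goals avoids infinite loops, as described in the slide."""
    if goal in facts:
        return True
    if goal in stack:           # already trying to prove this goal: loop
        return False
    for body, head in rules:
        if head == goal and all(
                backward_chain(b, facts, rules, stack | {goal}) for b in body):
            return True         # found a rule whose whole body is provable
    return False

rules = [(["A", "B"], "D"),   # R1: If A and B then D
         (["B"], "C"),        # R2: If B then C
         (["C", "D"], "E")]   # R3: If C and D then E
print(backward_chain("E", {"A", "B"}, rules))  # True: via R3, R2, R1
print(backward_chain("E", {"A"}, rules))       # False: B cannot be proved
```

A fuller engine would also memoise proved and failed subgoals to avoid repeated work, as the slide suggests.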

Example

Rules:
R1: IF hot AND smoky THEN fire
R2: IF alarm-beeps THEN smoky
R3: IF fire THEN switch-sprinkler

Facts (evidence): alarm-beeps, hot

Hypothesis: should I switch the sprinklers on?
- IF fire: use R3
- IF hot ✓ and IF smoky: use R1
- IF alarm-beeps ✓: use R2

Comparison

Forward chaining: from facts to valid conclusions
Good when:
- The hypothesis is less clear
- There is a very large number of possible conclusions
- The true facts are known at the start

Backward chaining: from hypotheses to relevant facts
Good when:
- There is a limited number of (clear) hypotheses
- Determining the truth of facts is expensive
- There is a large number of possible facts, mostly irrelevant

Forward vs. backward chaining

FC is data-driven
- Automatic, unconscious processing, e.g. routine decisions
- May do lots of work that is irrelevant to the goal

BC is goal-driven
- Appropriate for problem solving, e.g. "Where are my keys?", "How do I start the car?"
- The complexity of BC can be much less than linear in the size of the KB

Application

Wide use in expert systems

Backward chaining: diagnosis systems
- Start with a set of hypotheses and try to prove each one, asking additional questions of the user when a fact is unknown

Forward chaining: design/configuration systems
- See what can be done with the available components

Checking models

Sometimes we just want to find any model that satisfies the KB
Propositional satisfiability: determine if an input propositional logic sentence (in CNF) is satisfiable
- We use a backtracking search to find a model
- DPLL, WalkSAT, etc: lots of algorithms out there!
This is similar to finding solutions in constraint satisfaction problems
- More about CSPs in a later module
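A minimal DPLL-style backtracking search might look like this (a toy sketch, not a real solver: integer literals with negation as sign, unit propagation plus naive branching, no pure-literal rule or heuristics):

```python
def dpll(clauses, assignment=None):
    """Return a satisfying assignment (a set of literals) or None.
    Clauses are sets of integer literals; -n means 'not n'."""
    if assignment is None:
        assignment = set()
    # simplify: drop satisfied clauses, remove falsified literals
    simplified = []
    for c in clauses:
        if c & assignment:
            continue                      # clause already satisfied
        c = {l for l in c if -l not in assignment}
        if not c:
            return None                   # empty clause: conflict
        simplified.append(c)
    if not simplified:
        return assignment                 # all clauses satisfied: model found
    # unit propagation: a unit clause forces its literal
    for c in simplified:
        if len(c) == 1:
            return dpll(simplified, assignment | {next(iter(c))})
    # branch on some literal, trying both truth values
    lit = next(iter(simplified[0]))
    return (dpll(simplified, assignment | {lit})
            or dpll(simplified, assignment | {-lit}))

# (x1 or x2) and (not x1 or x2) and (not x2 or x3)
print(dpll([{1, 2}, {-1, 2}, {-2, 3}]))   # a model containing 2 and 3
print(dpll([{1}, {-1}]))                  # None: unsatisfiable
```

Real DPLL implementations add pure-literal elimination, clever branching heuristics and clause learning, but the backtracking skeleton is the same.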

The following slides are adapted from Discrete Math - Module #1 - Logic, Topic #3 - Predicate Logic, (c) 2001-2002 Michael P. Frank

Predicate logic

Predicate logic is an extension of propositional logic that permits concisely reasoning about whole classes of entities
- Also termed predicate calculus or first-order logic
- The language Prolog is built on a subset of this

Propositional logic treats simple propositions (sentences) as atomic entities; in contrast, predicate logic distinguishes the subject of a sentence from its predicate
It is the formal notation for writing perfectly clear, concise, and unambiguous mathematical definitions, axioms, and theorems for any branch of mathematics
Predicate logic with function symbols, the "=" operator, and a few proof-building rules is sufficient for defining any conceivable mathematical system, and for proving anything that can be proved within that system!

Subjects and predicates

In the sentence "The dog is sleeping":
- The phrase "the dog" denotes the subject: the object or entity that the sentence is about
- The phrase "is sleeping" denotes the predicate: a property that is true of the subject

In predicate logic, a predicate is modelled as a function P(·) from objects to propositions
- i.e. a function that returns TRUE or FALSE
- P(x) = "x is sleeping" (where x is any object), or Is_sleeping(dog)
- Tree(a) is true if a = oak, false if a = daffodil

More about predicates

Convention: lowercase variables x, y, z... denote objects/entities; uppercase variables P, Q, R… denote propositional functions (predicates)
Keep in mind that the result of applying a predicate P to an object x is the proposition P(x)
But the predicate P itself (e.g. P = "is sleeping") is not a proposition (not a complete sentence)
E.g. if P(x) = "x is a prime number", then P(3) is the proposition "3 is a prime number"

Propositional functions

Predicate logic generalizes the grammatical notion of a predicate to also include propositional functions of any number of arguments, each of which may take any grammatical role that a noun can take
E.g. let P(x,y,z) = "x gave y the grade z"; then if x = "Mike", y = "Mary", z = "A",
P(x,y,z) = "Mike gave Mary the grade A"
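The "predicate as a function from objects to truth values" view maps directly onto code. A tiny illustrative sketch (the relations and names are invented for the example):

```python
# A predicate is modelled as a function from objects to truth values
def is_sleeping(x, sleeping=frozenset({"dog"})):
    """P(x) = 'x is sleeping'"""
    return x in sleeping

def gave_grade(x, y, z, records=frozenset({("Mike", "Mary", "A")})):
    """P(x, y, z) = 'x gave y the grade z' - a three-place predicate"""
    return (x, y, z) in records

print(is_sleeping("dog"))               # True
print(gave_grade("Mike", "Mary", "A"))  # True
print(gave_grade("Mike", "Mary", "B"))  # False
```

Applying the predicate to concrete objects yields a proposition (a definite True or False), just as P(3) does in the slide above.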

Reasoning

KB:
(1) student(S) ∧ studies(S,ai) → studies(S,prolog)
(2) student(T) ∧ studies(T,expsys) → studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

(1) and (2) are rules; (3) and (4) are facts
With the information in (3) and (4), rule (2) can fire (but rule (1) can't), by matching (unifying) joe with T
This gives a new piece of knowledge, studies(joe, ai)
With this new knowledge, rule (1) can now fire: joe is unified with S

Reasoning

KB:
(1) student(S) ∧ studies(S,ai) → studies(S,prolog)
(2) student(T) ∧ studies(T,expsys) → studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

We can apply modus ponens to this twice (FC), to get studies(joe, prolog)
- Rule (2) fires first, unifying joe with T, giving studies(joe, ai)
- With this new knowledge, rule (1) fires, unifying joe with S, giving studies(joe, prolog)
- This can then be added to our knowledge base as a new fact
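This variable-matching forward chaining can be sketched in Python (an illustrative toy, not a real Prolog engine: atoms are tuples, and, following the Prolog convention, uppercase argument names are variables; all helper names are mine):

```python
def match(pattern, fact, bindings):
    """Match one atom like ('studies', 'S', 'ai') against a ground fact,
    extending the variable bindings, or return None on failure."""
    if pattern[0] != fact[0] or len(pattern) != len(fact):
        return None
    b = dict(bindings)
    for p, f in zip(pattern[1:], fact[1:]):
        if p[0].isupper():                 # uppercase argument: a variable
            if b.get(p, f) != f:
                return None                # conflicting binding
            b[p] = f
        elif p != f:                       # constant mismatch
            return None
    return b

def all_matches(body, facts, bindings):
    """Yield every set of bindings that satisfies the whole rule body."""
    if not body:
        yield bindings
        return
    for fact in facts:
        b = match(body[0], fact, bindings)
        if b is not None:
            yield from all_matches(body[1:], facts, b)

def fol_forward_chain(facts, rules):
    """Fire rules, substituting bindings into heads, until a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            for b in list(all_matches(body, facts, {})):
                new = (head[0],) + tuple(b.get(a, a) for a in head[1:])
                if new not in facts:
                    facts.add(new)
                    changed = True
    return facts

rules = [([("student", "S"), ("studies", "S", "ai")], ("studies", "S", "prolog")),
         ([("student", "T"), ("studies", "T", "expsys")], ("studies", "T", "ai"))]
facts = {("student", "joe"), ("studies", "joe", "expsys")}
print(("studies", "joe", "prolog") in fol_forward_chain(facts, rules))  # True
```

The first pass fires rule (2), adding studies(joe, ai); the second pass fires rule (1), adding studies(joe, prolog), exactly mirroring the derivation above.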

Clause form

We can express any predicate calculus statement in clause form
This enables us to work with just simple OR (disjunction, ∨) operators, rather than having to deal with implication (→) and AND (∧), thus allowing us to work towards a resolution proof

Example

Let's put our previous example in clause form:
(1) ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)
(2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

Is there a solution to studies(S, prolog)? = "is there someone who studies Prolog?"
Negate it... ¬studies(S, prolog)

Example

(1) ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)
(2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

Start from the negated goal ¬studies(S, prolog):
- Resolve with clause (1): ¬student(S) ∨ ¬studies(S,ai)
- Resolve with clause (2) (S = T): ¬student(S) ∨ ¬studies(S,expsys)
- Resolve with clause (4) (S = joe): ¬student(joe)
- Finally, resolve this with clause (3), and we have nothing left: the empty clause

Example

These are all the same logically...

As implications:
(1) student(S) ∧ studies(S,ai) → studies(S,prolog)
(2) student(T) ∧ studies(T,expsys) → studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

In clause form:
(1) ¬student(S) ∨ ¬studies(S,ai) ∨ studies(S,prolog)
(2) ¬student(T) ∨ ¬studies(T,expsys) ∨ studies(T,ai)
(3) student(joe)
(4) studies(joe,expsys)

As Prolog-style rules:
(1) studies(S,prolog) ← student(S) ∧ studies(S,ai)
(2) studies(T,ai) ← student(T) ∧ studies(T,expsys)
(3) student(joe) ←
(4) studies(joe,expsys) ←

Note the last two...

(3) student(joe) ←
(4) studies(joe,expsys) ←

Joe is a student, and is studying expsys; these do not depend on anything, they are simply statements/facts
That is why there is nothing to the right of the ←

Universes of discourse

The power of distinguishing objects from predicates is that it lets you state things about many objects at once
E.g., let P(x) = "x+1 > x". We can then say "For any number x, P(x) is true" instead of
(0+1 > 0) ∧ (1+1 > 1) ∧ (2+1 > 2) ∧ ...
The collection of values that a variable x can take is called x's universe of discourse (or simply 'universe')

Quantifier expressions

Quantifiers provide a notation that allows us to quantify (count) how many objects in the universe of discourse satisfy a given predicate
- "∀" is the FOR ALL or universal quantifier: ∀x P(x) means for all x in the universe, P holds
- "∃" is the EXISTS or existential quantifier: ∃x P(x) means there exists an x in the universe (that is, 1 or more) such that P(x) is true

The universal quantifier ∀

Example:
- Let the universe of x be parking spaces at AU
- Let P(x) be the predicate "x is full"

Then the universal quantification of P(x), ∀x P(x), is the proposition:
- "All parking spaces at AU are full"
- i.e., "Every parking space at AU is full"
- i.e., "For each parking space at AU, that space is full"

The existential quantifier ∃

Example:
- Let the universe of x be parking spaces at AU
- Let P(x) be the predicate "x is full"

Then the existential quantification of P(x), ∃x P(x), is the proposition:
- "Some parking space at AU is full"
- "There is a parking space at AU that is full"
- "At least one parking space at AU is full"

Nesting of quantifiers

Example:
- Let the universe of x & y be people
- Let L(x,y) = "x likes y"

Then ∃y L(x,y) = "There is someone whom x likes"
And ∀x (∃y L(x,y)) = "Everyone has someone whom they like"
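Over a finite universe, ∀ and ∃ correspond directly to Python's all() and any(), which makes nesting order easy to experiment with (the people and the likes relation below are invented for illustration):

```python
# Universal and existential quantification over a finite universe
people = ["alice", "bob", "carol"]
likes = {("alice", "bob"), ("bob", "carol"), ("carol", "carol")}

def L(x, y):
    """L(x, y) = 'x likes y'"""
    return (x, y) in likes

# ∃y L(alice, y): "there is someone whom alice likes"
print(any(L("alice", y) for y in people))                  # True

# ∀x ∃y L(x, y): "everyone has someone whom they like"
print(all(any(L(x, y) for y in people) for x in people))   # True

# ∃y ∀x L(x, y): "there is someone whom everyone likes"
print(any(all(L(x, y) for x in people) for y in people))   # False
```

Note how swapping the quantifiers changes the meaning: the last two lines differ only in nesting order, yet give different answers on the same relation.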

What does this mean?

∀C (owned-by(C,O) ∧ cat(C) → contented(C))
∀P (person(P) ∧ lives-in(P, Wales) → ∃H (harp(H) ∧ plays(P,H)))

So, if I know that there is a person called Delyth, and that Delyth lives in Wales, I can infer that Delyth plays the harp

Quantifier exercise

If R(x,y) = "x relies upon y" (x and y are people), express the following in unambiguous English:
- ∀x(∃y R(x,y)) = Everyone has someone to rely on
- ∃y(∀x R(x,y)) = There's a person whom everyone relies upon (including himself)!
- ∃x(∀y R(x,y)) = There's some needy person who relies upon everybody (including himself)
- ∀y(∃x R(x,y)) = Everyone has someone who relies upon them
- ∀x(∀y R(x,y)) = Everyone relies upon everybody (including themselves)!

More fun with sentences

"Every dog chases its own tail"
∀d Chases(d, Tail-of(d))
Alternative statement: ∀d ∃t Tail-of(t, d) ∧ Chases(d, t)

"Every dog chases its own (unique) tail"
∀d ∃1t Tail-of(t, d) ∧ Chases(d, t)   (∃1 = "there exists exactly one")
≡ ∀d ∃t Tail-of(t, d) ∧ Chases(d, t) ∧ [∀t' Chases(d, t') → t' = t]

"Only the wicked flee when no one pursues"
∀x Flees(x) ∧ [¬∃y Pursues(y, x)] → Wicked(x)
Alternative: ∀x [∃y Flees(x, y)] ∧ [¬∃z Pursues(z, x)] → Wicked(x)

Propositional vs first-order inference

Inference rules for quantifiers: from a universally quantified rule such as
∀x King(x) ∧ Greedy(x) → Evil(x)
we can infer the following sentences:
King(John) ∧ Greedy(John) → Evil(John)
King(Richard) ∧ Greedy(Richard) → Evil(Richard)
King(Father(John)) ∧ Greedy(Father(John)) → Evil(Father(John))
…

Inference in first-order logic

We need to add new inference rules beyond those of propositional logic:
- Universal elimination: substitute a ground term (e.g. Liz) for the universally quantified variable x
- Existential elimination: substitute a fresh constant (e.g. Person1, which does not exist elsewhere in the KB) for the existentially quantified variable
- Existential introduction

Example of inference rules

- "It is illegal for students to copy music"
- "Joe is a student"
- "Every student copies music"
Is Joe a criminal?

Knowledge Base:

Example cont...

- Universal elimination
- Existential elimination
- Modus ponens

Human reasoning

Analogical
- We solved one problem one way, so maybe we can solve another one the same (or a similar) way
- I am hungry; chestnuts from a sweet chestnut tree tasted nice and were filling, so I will try eating the chestnuts from a horse chestnut tree
- Unfortunately, horse chestnuts are poisonous...

Nonmonotonic/default
- Handling contradictions, retracting previous conclusions
- Classic monotonic reasoning cannot contain contradictions, but we don't always reason like that
- "All trees have green leaves"; "all beech trees have green leaves"; yet a copper beech has red leaves
- Still, the rule "all trees have green leaves" is useful
- A classic default example: bird(X) → fly(X) (all birds fly), with exceptions such as bird(X) ∧ penguin(X) → ¬fly(X) (penguins don't fly)
- If we know bird(tweety), we conclude that tweety flies (even if we don't know what type of bird tweety is, it is a fairly safe assumption)
- But if we further know that tweety is a penguin, then the conclusion is that tweety doesn't fly

Temporal
- Things may change over time
- Oak trees have green leaves, but by November they will not
- In machine terms, we can overcome this by introducing the concept of time, but of course this complicates things

Commonsense
- E.g., we know that if you throw a stone into the air, it will come back down
- We know that humans are generally under 100 years old, and under 2.2 metres tall
- We have a huge amount of this knowledge; computers don't, unless it is programmed explicitly

Inductive
- Induce new knowledge from observations
- Observe: pine trees have green leaves; induce: all trees have green leaves
- Unfortunately, that's not true, but it is nonetheless useful
- This one is difficult for machines...

Beyond true and false

Multi-valued logics
- More than two truth values, e.g. true, false & unknown
- Fuzzy logic: truth value in [0,1]

Modal logics
- Modal operators define a mode for propositions, e.g. necessity, possibility
- Epistemic logics (belief)
- Temporal logics (time), e.g. always, eventually
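Fuzzy truth values in [0,1] replace the Boolean connectives with numeric ones. A tiny illustrative sketch using the common min/max/complement (Zadeh) definitions; the propositions "hot" and "humid" and their values are invented for the example:

```python
# Fuzzy logic: truth values in [0, 1]; common (Zadeh) connectives
def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

def f_not(a):
    return 1 - a

hot, humid = 0.75, 0.25    # degrees of truth, not probabilities
print(f_and(hot, humid))   # 0.25: a conjunction is only as true as its weakest part
print(f_or(hot, humid))    # 0.75
print(f_not(hot))          # 0.25
```

Note that when the inputs are restricted to {0, 1}, these definitions collapse back to the classical Boolean connectives.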

Types of logic

Language            | What exists                      | Belief of agent
--------------------|----------------------------------|----------------------
Propositional logic | Facts                            | True/False/Unknown
First-order logic   | Facts, objects, relations        | True/False/Unknown
Temporal logic      | Facts, objects, relations, times | True/False/Unknown
Probability theory  | Facts                            | Degree of belief 0..1
Fuzzy logic         | Facts with degree of truth       | Degree of truth