Russell and Norvig Chapter 7


Propositional Logic
Russell and Norvig Chapter 7

Knowledge-Based Agent
[Diagram: the agent's knowledge base sits between its sensors and actuators; percepts from the environment are told to the knowledge base, which is asked ("?") what action to perform.]

A simple knowledge-based agent
The agent must be able to:
- Represent states, actions, etc.
- Incorporate new percepts
- Update internal representations of the world
- Deduce hidden properties of the world
- Deduce appropriate actions
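As a rough illustration of this agent cycle (not part of the original slides), here is a minimal Python sketch. The KnowledgeBase class, its tell/ask interface, and the sentence strings are illustrative assumptions; a real agent would back ask with the inference procedures developed later.

```python
class KnowledgeBase:
    """Hypothetical KB interface: tell() stores a sentence, ask() answers a query.
    A real implementation would use the inference procedures developed below."""
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return None           # placeholder: no inference implemented yet

def kb_agent_step(kb, percept, t):
    """One cycle of a generic knowledge-based agent: incorporate the percept,
    deduce an appropriate action, and record the action that was taken."""
    kb.tell(f"percept {percept} at time {t}")
    action = kb.ask(f"best action at time {t}") or "no-op"
    kb.tell(f"performed {action} at time {t}")
    return action
```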

Types of Knowledge
- Procedural, e.g. functions: such knowledge can only be used in one way, by executing it
- Declarative, e.g. constraints: it can be used to perform many different sorts of inferences

Logic
Logic is a declarative language to:
- Assert sentences representing facts that hold in a world W (these sentences are given the value true)
- Deduce the true/false values of sentences representing other aspects of W

Wumpus World PEAS description
Performance measure: gold +1000, death -1000, -1 per step, -10 for using the arrow
Environment:
- Squares adjacent to the wumpus are smelly
- Squares adjacent to a pit are breezy
- Glitter iff gold is in the same square
- Shooting kills the wumpus if you are facing it
- Shooting uses up the only arrow
- Grabbing picks up the gold if in the same square
- Releasing drops the gold in the same square
Sensors: Stench, Breeze, Glitter, Bump, Scream
Actuators: Left turn, Right turn, Forward, Grab, Release, Shoot

Wumpus world characterization
- Fully observable? No, only local perception
- Deterministic? Yes, outcomes exactly specified
- Episodic? No, sequential at the level of actions
- Static? Yes, the wumpus and pits do not move
- Discrete? Yes
- Single-agent? Yes, the wumpus is essentially a natural feature

Exploring a wumpus world
[A sequence of eight figure-only slides stepping through the agent's exploration of the cave.]

Logic in general
- Logics are formal languages for representing information such that conclusions can be drawn
- Syntax defines the sentences in the language
- Semantics defines the "meaning" of sentences, i.e. the truth of a sentence in a world

Connection World-Representation
[Diagram: at the representation level, sentences entail other sentences; at the world level, facts about W hold given other facts about W; vertical "represent" arrows connect sentences to the facts of the world W through a conceptualization.]

Examples of Logics
- Propositional calculus: A ∧ B ⇒ C
- First-order predicate calculus: (∀x)(∃y) Mother(y, x)
- Logic of Belief: B(John, Father(Zeus, Cronus))

Model A model of a sentence is an assignment of a truth value – true or false – to every atomic sentence such that the sentence evaluates to true.
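To make this definition concrete, here is a small evaluator (my own illustration, not from the slides): a model is a dict mapping proposition symbols to truth values, sentences are nested tuples, and the name pl_true is an assumption.

```python
def pl_true(sentence, model):
    """Evaluate a propositional sentence in a model (dict: symbol -> bool).
    Sentences are nested tuples: ('~', s), ('&', ...), ('|', ...), ('=>', s1, s2);
    a bare string is an atomic proposition."""
    if isinstance(sentence, str):
        return model[sentence]
    op, *args = sentence
    if op == '~':  return not pl_true(args[0], model)
    if op == '&':  return all(pl_true(a, model) for a in args)
    if op == '|':  return any(pl_true(a, model) for a in args)
    if op == '=>': return (not pl_true(args[0], model)) or pl_true(args[1], model)
    raise ValueError(f"unknown connective {op}")

# {P: True, Q: False} is a model of P & (Q | ~Q) but not of P & Q
print(pl_true(('&', 'P', ('|', 'Q', ('~', 'Q'))), {'P': True, 'Q': False}))  # True
print(pl_true(('&', 'P', 'Q'), {'P': True, 'Q': False}))                     # False
```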

Model of a KB
Let KB be a set of sentences. A model m is a model of KB iff it is a model of all sentences in KB, that is, all sentences in KB are true in m.

Satisfiability of a KB
A KB is satisfiable iff it admits at least one model; otherwise it is unsatisfiable.
- KB1 = {P, Q ∨ R} is satisfiable
- KB2 = {P ∨ ¬P} is satisfiable (P ∨ ¬P is a valid sentence, or tautology)
- KB3 = {P, ¬P} is unsatisfiable

Logical Entailment
KB: set of sentences; α: arbitrary sentence.
KB entails α, written KB ⊨ α, iff every model of KB is also a model of α.
Alternatively, KB ⊨ α iff:
- {KB, ¬α} is unsatisfiable
- KB ⇒ α is valid
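The "every model of KB is also a model of α" definition can be checked directly by enumerating all models, as in the sketch below (my illustration; the name tt_entails and the encoding of sentences as Python functions over a model are assumptions).

```python
from itertools import product

def tt_entails(kb, alpha, symbols):
    """Check KB |= alpha by enumerating every model over `symbols`.
    kb and alpha are functions mapping a model (dict: symbol -> bool) to bool."""
    for values in product([True, False], repeat=len(symbols)):
        model = dict(zip(symbols, values))
        if kb(model) and not alpha(model):
            return False          # found a model of KB in which alpha is false
    return True

# KB = {P, P => Q}; does KB entail Q?  (it does, by modus ponens)
kb = lambda m: m["P"] and ((not m["P"]) or m["Q"])
alpha = lambda m: m["Q"]
print(tt_entails(kb, alpha, ["P", "Q"]))   # True
```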

Inference Rule
An inference rule {α, β} ⊢ γ consists of two sentence patterns α and β, called the conditions, and one sentence pattern γ, called the conclusion.
If α and β match two sentences of KB, then the corresponding γ can be inferred according to the rule.

Inference
I: set of inference rules; KB: set of sentences.
Inference is the process of applying successive inference rules from I to KB, each rule adding its conclusion to KB.

Example: Modus Ponens
{α ⇒ β, α} ⊢ β
From: Battery-OK ∧ Bulbs-OK ⇒ Headlights-Work and Battery-OK ∧ Bulbs-OK
Infer: Headlights-Work
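As a toy illustration of applying an inference rule by pattern matching (not from the slides), the sketch below looks for a matching pair {α ⇒ β, α} in a small KB. The tuple representation and treating the conjunctive premise as one opaque sentence are simplifying assumptions.

```python
def modus_ponens(kb):
    """One application of modus ponens: if the KB contains an implication
    ('=>', a, b) together with its premise a, return the conclusion b, else None.
    Sentences are strings (atoms) or ('=>', premise, conclusion) tuples."""
    for s in kb:
        if isinstance(s, tuple) and s[0] == '=>' and s[1] in kb:
            return s[2]
    return None

# the headlights example, with the conjunction kept as one opaque premise
kb = {('=>', 'Battery-OK & Bulbs-OK', 'Headlights-Work'), 'Battery-OK & Bulbs-OK'}
print(modus_ponens(kb))   # Headlights-Work
```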

Inference
⇒ : connective symbol (implication)
⊨ : logical entailment
KB ⊨ α iff KB ⇒ α is valid

Soundness
An inference rule is sound if it generates only entailed sentences.
All inference rules previously given are sound, e.g. modus ponens: {α ⇒ β, α} ⊢ β
The following rule: {α ⇒ β, β} ⊢ α is unsound, which does not mean it is useless (it is an inference rule for abduction, outside the scope of this course).

Is each of the following a sound inference rule?
[The three candidate rules shown on this slide were lost in the transcript.]

Completeness
A set of inference rules is complete if every entailed sentence can be obtained by applying some finite succession of these rules.
Modus ponens alone is not complete, e.g. from A ⇒ B and ¬B, we cannot get ¬A.

Proof
The proof of a sentence α from a set of sentences KB is the derivation of α by applying a series of sound inference rules.

Proof
The proof of a sentence α from a set of sentences KB is the derivation of α by applying a series of sound inference rules.
1. Battery-OK ∧ Bulbs-OK ⇒ Headlights-Work
2. Battery-OK ∧ Starter-OK ∧ ¬Empty-Gas-Tank ⇒ Engine-Starts
3. Engine-Starts ∧ ¬Flat-Tire ⇒ Car-OK
4. Headlights-Work
5. Battery-OK
6. Starter-OK
7. ¬Empty-Gas-Tank
8. ¬Car-OK
9. Battery-OK ∧ Starter-OK (by 5, 6)
10. Battery-OK ∧ Starter-OK ∧ ¬Empty-Gas-Tank (by 9, 7)
11. Engine-Starts (by 2, 10)
12. ¬Engine-Starts ∨ Flat-Tire (by 3, 8)
13. Flat-Tire (by 11, 12)

Inference Problem
Given: KB, a set of sentences, and α, a sentence
Answer: KB ⊨ α ?

Deduction vs. Satisfiability Test
KB ⊨ α iff {KB, ¬α} is unsatisfiable.
Hence:
- deciding whether a set of sentences entails another sentence, and
- testing whether a set of sentences is satisfiable
are closely related problems.

Complementary Literals
A literal is either an atomic sentence or the negation of an atomic sentence, e.g. P, ¬P.
Two literals are complementary if one is the negation of the other, e.g. P and ¬P.

Unit Resolution Rule
Given two sentences: L1 ∨ … ∨ Lp and M, where L1, …, Lp and M are all literals, and M and Li are complementary literals
Infer: L1 ∨ … ∨ Li-1 ∨ Li+1 ∨ … ∨ Lp

Examples
From: ¬Engine-Starts ∨ Car-OK (i.e., Engine-Starts ⇒ Car-OK) and Engine-Starts
Infer: Car-OK (modus ponens)
From: ¬Engine-Starts ∨ Car-OK (i.e., Engine-Starts ⇒ Car-OK) and ¬Car-OK
Infer: ¬Engine-Starts (modus tollens)

Shortcoming of Unit Resolution
From: ¬Engine-Starts ∨ Flat-Tire ∨ Car-OK and Engine-Starts ∨ Empty-Gas-Tank
we can infer nothing!

Full Resolution Rule
Given two clauses: L1 ∨ … ∨ Lp and M1 ∨ … ∨ Mq, where Li and Mj are complementary literals
Infer the clause: L1 ∨ … ∨ Li-1 ∨ Li+1 ∨ … ∨ Lp ∨ M1 ∨ … ∨ Mj-1 ∨ Mj+1 ∨ … ∨ Mq

Example From: Engine-Starts  Flat-Tire  Car-OK Engine-Starts  Empty-Gas-Tank Infer: Empty-Gas-Tank  Flat-Tire  Car-OK

Example From: P  Q ( P  Q) Q  R ( Q  R) Infer: P  R ( P  R)

Not All Inferences are Useful!
From: ¬Engine-Starts ∨ Flat-Tire ∨ Car-OK and Engine-Starts ∨ ¬Flat-Tire
Infer: Flat-Tire ∨ ¬Flat-Tire ∨ Car-OK ≡ True (a tautology)

Example Battery-OK  Bulbs-OK  Headlights-Work Battery-OK  Starter-OK  Empty-Gas-Tank  Engine-Starts Engine-Starts  Flat-Tire  Car-OK Headlights-Work Battery-OK Starter-OK Empty-Gas-Tank Car-OK Flat-Tire We want to show Flat-Tire, given clauses 1-8. Using resolution, we can show that clauses 1-8 along with clause 9 deduce an empty clause. Can you trace the resolution steps?

Sentence → Clause Form
Example: ¬(A ∧ B) ⇒ (C ∧ D)
1. Eliminate ⇒: ¬¬(A ∧ B) ∨ (C ∧ D)
2. Reduce scope of ¬: (A ∧ B) ∨ (C ∧ D)
3. Distribute ∨ over ∧: (A ∨ (C ∧ D)) ∧ (B ∨ (C ∧ D)) = (A ∨ C) ∧ (A ∨ D) ∧ (B ∨ C) ∧ (B ∨ D)
Set of clauses: {A ∨ C, A ∨ D, B ∨ C, B ∨ D}
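Here is a compact sketch of the conversion following the three steps above (my own code; the nested-tuple representation with connectives '&', '|', '=>' and '~' is an assumption of these notes, not the slides').

```python
def to_clauses(sentence):
    """Convert a propositional sentence to a set of clauses (frozensets of literals).
    Atoms are strings; literals are 'P' or '~P'."""
    def elim_implies(s):
        if isinstance(s, str): return s
        op, *args = s
        args = [elim_implies(a) for a in args]
        if op == '=>': return ('|', ('~', args[0]), args[1])
        return (op, *args)

    def push_not(s, negate=False):
        """Push negations inward, producing negation normal form."""
        if isinstance(s, str):
            return '~' + s if negate else s
        op, *args = s
        if op == '~':
            return push_not(args[0], not negate)
        if negate:                                    # De Morgan
            op = '&' if op == '|' else '|'
        return (op, push_not(args[0], negate), push_not(args[1], negate))

    def distribute(s):
        """Return a list of clauses (sets of literals) for an NNF sentence."""
        if isinstance(s, str):
            return [{s}]
        op, a, b = s
        ca, cb = distribute(a), distribute(b)
        if op == '&':
            return ca + cb
        return [x | y for x in ca for y in cb]        # '|' distributes over '&'

    return {frozenset(c) for c in distribute(push_not(elim_implies(sentence)))}

# the slide's example: ~(A & B) => (C & D)
print(to_clauses(('=>', ('~', ('&', 'A', 'B')), ('&', 'C', 'D'))))
# {frozenset({'A','C'}), frozenset({'A','D'}), frozenset({'B','C'}), frozenset({'B','D'})}
```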

Resolution Refutation Algorithm
RESOLUTION-REFUTATION(KB, α)
  clauses ← set of clauses obtained from KB and ¬α
  new ← {}
  Repeat:
    For each C, C' in clauses do
      res ← RESOLVE(C, C')
      If res contains the empty clause then return yes
      new ← new ∪ res
    If new ⊆ clauses then return no
    clauses ← clauses ∪ new
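A runnable version of this algorithm might look like the following sketch (mine, not the slides'), reusing the frozenset clause representation and repeating the resolve helper from the earlier example; it negates the query, resolves pairs of clauses, and stops when the empty clause appears or no new clauses can be generated.

```python
from itertools import combinations

def negate(lit):
    return lit[1:] if lit.startswith('~') else '~' + lit

def resolve(c1, c2):
    """All resolvents of two clauses (frozensets of string literals)."""
    return [frozenset((c1 - {l}) | (c2 - {negate(l)}))
            for l in c1 if negate(l) in c2]

def resolution_refutation(clauses, query):
    """Decide clauses |= query (query is a single literal) by refutation:
    add the negated query and search for the empty clause."""
    clauses = set(clauses) | {frozenset({negate(query)})}
    new = set()
    while True:
        for c1, c2 in combinations(clauses, 2):
            for res in resolve(c1, c2):
                if not res:                 # derived the empty clause
                    return True
                new.add(res)
        if new <= clauses:                  # no new clause was produced
            return False
        clauses |= new

# P => Q, Q => R, P  entail  R ?
kb = [frozenset({'~P', 'Q'}), frozenset({'~Q', 'R'}), frozenset({'P'})]
print(resolution_refutation(kb, 'R'))   # True
```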

Efficient Propositional Inference
Two families of efficient algorithms for propositional inference:
- Complete backtracking search algorithms: the DPLL algorithm (Davis, Putnam, Logemann, Loveland)
- Incomplete local search algorithms: the WalkSAT algorithm

The DPLL algorithm
Determine if an input propositional logic sentence (in CNF) is satisfiable.
Improvements over truth table enumeration:
- Early termination: a clause is true if any literal is true; a sentence is false if any clause is false.
- Pure symbol heuristic: a pure symbol always appears with the same "sign" in all clauses, e.g. in the three clauses (A ∨ ¬B), (¬B ∨ ¬C), (C ∨ A), A and ¬B are pure, C is impure. Make a pure symbol literal true.
- Unit clause heuristic: a unit clause has only one literal; the only literal in a unit clause must be true.
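The following is a bare-bones DPLL sketch (my own illustration; the clause representation and helper names are assumptions). It applies early termination, the unit clause heuristic, and the pure symbol heuristic before branching.

```python
def dpll(clauses, assignment=None):
    """Minimal DPLL satisfiability check.  Clauses are frozensets of string
    literals ('P' or '~P'); returns a dict of the literals assumed true, or None."""
    if assignment is None:
        assignment = {}

    def negate(lit):
        return lit[1:] if lit.startswith('~') else '~' + lit

    def simplify(clauses, lit):
        """Assume lit is true: drop satisfied clauses, remove the complementary literal."""
        return [c - {negate(lit)} for c in clauses if lit not in c]

    if not clauses:
        return assignment                       # every clause satisfied
    if any(len(c) == 0 for c in clauses):
        return None                             # some clause is falsified
    # unit clause heuristic: a one-literal clause forces that literal
    for c in clauses:
        if len(c) == 1:
            lit = next(iter(c))
            return dpll(simplify(clauses, lit), {**assignment, lit: True})
    # pure symbol heuristic: a literal whose complement never appears can be set true
    literals = {l for c in clauses for l in c}
    for lit in literals:
        if negate(lit) not in literals:
            return dpll(simplify(clauses, lit), {**assignment, lit: True})
    # branch on some literal from the first clause
    lit = next(iter(next(iter(clauses))))
    return (dpll(simplify(clauses, lit), {**assignment, lit: True})
            or dpll(simplify(clauses, negate(lit)), {**assignment, negate(lit): True}))

# (A | B) & (~A | B) & (~B | C) is satisfiable, e.g. with B and C true
print(dpll([frozenset({'A', 'B'}), frozenset({'~A', 'B'}), frozenset({'~B', 'C'})]))
```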

Horn Clauses
Horn clause: a clause with at most one positive literal.
- KB: a Horn clause with one positive literal can be written as α1 ∧ … ∧ αn ⇒ β
- Query: a Horn clause without a positive literal, ¬α1 ∨ … ∨ ¬αn, i.e. ¬(α1 ∧ … ∧ αn)
Horn clause logic is the basis for Logic Programming.

Forward chaining for Horn Clauses Idea: fire any rule whose premises are satisfied in the KB, add its conclusion to the KB, until query is found
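A minimal forward-chaining sketch for definite clauses (my illustration; the (premises, conclusion) rule representation and the example rules, a hypothetical variant of the car KB, are assumptions).

```python
def forward_chaining(rules, facts, query):
    """Forward chaining over definite (Horn) clauses.
    `rules` is a list of (premises, conclusion) pairs, premises a list of symbols;
    `facts` is a set of known symbols; returns True once `query` is inferred."""
    known = set(facts)
    changed = True
    while changed and query not in known:
        changed = False
        for premises, conclusion in rules:
            # fire a rule if every premise is known and its conclusion is new
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)
                changed = True
    return query in known

rules = [(['Battery-OK', 'Bulbs-OK'], 'Headlights-Work'),
         (['Battery-OK', 'Starter-OK'], 'Engine-Cranks'),
         (['Engine-Cranks', 'Gas-In-Tank'], 'Engine-Starts')]
print(forward_chaining(rules, {'Battery-OK', 'Starter-OK', 'Gas-In-Tank'},
                       'Engine-Starts'))   # True
```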

Backward chaining for Horn Clauses
Idea: work backwards from the query q: to prove q by BC, check if q is known already, or prove by BC all premises of some rule concluding q.
- Avoid loops: check if the new subgoal is already on the goal stack
- Avoid repeated work: check if the new subgoal has already been proved true, or has already failed
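And a matching backward-chaining sketch (again my own illustration, with the same assumed rule representation as the forward-chaining example); it checks the goal stack to avoid loops but, for brevity, omits the memoization of subgoals already proved or already failed.

```python
def backward_chaining(rules, facts, query, stack=None):
    """Backward chaining over definite (Horn) clauses: prove `query` by proving
    all premises of some rule that concludes it.  `stack` guards against loops."""
    stack = set() if stack is None else stack
    if query in facts:
        return True
    if query in stack:                  # already working on this subgoal: avoid looping
        return False
    for premises, conclusion in rules:
        if conclusion == query and all(
                backward_chaining(rules, facts, p, stack | {query}) for p in premises):
            return True
    return False

rules = [(['Battery-OK', 'Bulbs-OK'], 'Headlights-Work'),
         (['Battery-OK', 'Starter-OK'], 'Engine-Cranks'),
         (['Engine-Cranks', 'Gas-In-Tank'], 'Engine-Starts')]
facts = {'Battery-OK', 'Starter-OK', 'Gas-In-Tank'}
print(backward_chaining(rules, facts, 'Engine-Starts'))   # True
```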

Summary
- Propositional logic
- Model of a KB
- Logical entailment
- Inference rules
- Resolution rule
- Clause form of a set of sentences
- Resolution refutation algorithm
- DPLL algorithm
- Horn clauses