1 Knowledge Representation, Propositional Logic, Wumpus World

2 Logic
What is a logic? Two elements constitute what we call a logic: a formal language in which knowledge can be expressed, and a means of carrying out reasoning in that language. E.g., expressed in natural language: "if the signal is red, then stop the car"; expressed as a logical sentence: A1,1 ∧ EastA ∧ W2,1 ⇒ ¬Forward.
Knowledge base: a set of representations of facts about the world. Sentence: each individual representation. Knowledge representation language: the language used for expressing sentences.
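A rough sketch of these definitions in Python is below; the class and method names (KnowledgeBase, tell) are illustrative choices, not something prescribed by the slides. It simply treats a knowledge base as a growing collection of sentences; how sentences are used for reasoning comes in the later slides.

    # A minimal sketch of a knowledge base as "a set of sentences".
    # KnowledgeBase, tell and sentences are illustrative names, not from the slides.
    class KnowledgeBase:
        def __init__(self):
            self.sentences = []          # each element is one representation of a fact

        def tell(self, sentence):
            """Add a new sentence (a fact about the world) to the KB."""
            self.sentences.append(sentence)

    kb = KnowledgeBase()
    kb.tell("if the signal is red, then stop the car")            # natural-language sentence
    kb.tell("A1,1 and EastA and W2,1 implies not Forward")        # logical sentence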

3 Wumpus world
The wumpus world is a grid of squares surrounded by walls, where each square can contain agents and objects. The agent always starts in the lower left corner, a square that we will label [1,1]. The agent's task is to find the gold, return to [1,1], and climb out of the cave.
[Figure: a 4 x 4 cave grid marked with the START square, the agent (A), pits (p), breezes (b), smells (s), the gold (g), and the wumpus (w).]

4 PAGE of the Wumpus world
Percepts: Breeze, Glitter, Smell
Actions: Left turn, Right turn, Forward, Grab, Release, Shoot, Climb
Goals: Get the gold back to the start square without entering a pit or wumpus square
Environment:
- squares adjacent to the wumpus are smelly
- squares adjacent to a pit are breezy
- glitter if and only if the gold is in the same square
- shooting kills the wumpus if you are facing it
- shooting uses up the only arrow
- grabbing picks up the gold if it is in the same square
- releasing drops the gold in the same square
- climbing leaves the cave
A complete class of environments: the 4 x 4 Wumpus world
- the agent starts in square [1,1] facing right
- the locations of the gold and the wumpus are chosen randomly with a uniform distribution (probability 1/15, excluding the start square)
- each square can be a pit with probability 0.2 (excluding the start square)
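This environment description pins down a concrete distribution over 4 x 4 worlds, so it can be sampled directly. The sketch below is one possible reading of it in Python; the helper name random_wumpus_world is made up for illustration.

    import random

    # A sketch of sampling a random 4x4 Wumpus world as described above.
    # random_wumpus_world is an illustrative name, not part of any standard library.
    def random_wumpus_world(size=4, pit_prob=0.2, start=(1, 1)):
        squares = [(x, y) for x in range(1, size + 1) for y in range(1, size + 1)]
        non_start = [sq for sq in squares if sq != start]        # 15 candidate squares
        gold = random.choice(non_start)                          # uniform, 1/15 each
        wumpus = random.choice(non_start)                        # uniform, 1/15 each
        pits = {sq for sq in non_start if random.random() < pit_prob}
        return {"gold": gold, "wumpus": wumpus, "pits": pits, "start": start}

    print(random_wumpus_world())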

5 How an agent should act and reason (at the knowledge level)
From the fact that the agent detects neither stench nor breeze in [1,1], it can infer that [1,2] and [2,1] are free of dangers; they are marked OK to indicate this. A cautious agent will only move into a square that it knows is OK, so the agent can move forward to [2,1], or turn left 90° and move forward to [1,2].
Assuming the agent first moves forward to [2,1]: from the fact that it detects a breeze in [2,1], it can infer that there must be a pit in a neighboring square, either [2,2] or [3,1]. So the agent turns around (turn left 90°, turn left 90°) and moves back to [1,1].
The agent then moves to [1,2]. From the fact that it detects a stench in [1,2], it can infer that there must be a wumpus nearby, and that the wumpus cannot be in [2,2] (or the agent would have detected a stench when it was in [2,1]). So the agent can infer that the wumpus is in [1,3]; it is marked with W. The agent can also infer that there is no pit in [2,2] (or it would have detected a breeze in [1,2]), so the pit must be in [3,1].
After this sequence of deductions, the agent knows that [2,2] is an unvisited, safe square, so it moves to [2,2]. What is the next move? Move to [2,3] or to [3,2]?
Assuming the agent moves to [2,3]: from the fact that it detects glitter in [2,3], it can infer that the gold is in [2,3]. So the agent grabs the gold and goes back to the start square along squares that are marked OK.

6 How to represent beliefs so that inferences can be made
initial facts → initial beliefs (knowledge) → inferences → actions; new facts → new beliefs → further inferences → …
How to represent a belief (knowledge)? → Knowledge representation. The object of knowledge representation is to express knowledge in computer-tractable form, such that it can be used to help agents perform well. A knowledge representation language is defined by two aspects:
Syntax – describes the possible configurations that can constitute sentences; it defines the sentences in the language.
Semantics – determines the facts in the world to which the sentences refer; it defines the "meaning" of sentences.
Examples: x = y*z + 10; is a sentence of the Java language, but x = yz10 is not. x + 2 ≥ y is a sentence of the language of arithmetic, but x2+y> is not. "I am a student." is a sentence in English, but "I a student am." is not.

7 Logic and inference
Logic: a language is called a logic provided the syntax and semantics of the language are defined precisely. Inference: from the syntax and semantics, an inference mechanism for an agent that uses the language can be derived.
Facts are parts of the world, whereas their representations must be encoded in some way within an agent. All reasoning mechanisms must operate on representations of facts, rather than on the facts themselves. The connection between sentences and facts is provided by the semantics of the language. The property of one fact following from other facts is mirrored by the property of one sentence being entailed by other sentences. Logical inference generates new sentences that are entailed by existing sentences. (entail: to have as a necessary consequence)

8 Entailment
New sentences generated are necessarily true, given that the old sentences are true. This relation between sentences is called entailment.
KB ⊨ α: knowledge base KB entails sentence α if and only if α is true in all worlds where KB is true. Here, KB is a set of sentences in a formal language.
For example, x > 0 and y > 0 ⊨ x + y > 0.
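Because the arithmetic example ranges over infinitely many possible worlds, its entailment cannot be verified by listing all models, but a finite spot-check in Python illustrates the definition: in every sampled world where both premises (the KB) are true, the entailed sentence is also true. This is only an illustration of the definition, not a proof.

    # Spot-check of  x > 0 and y > 0  |=  x + y > 0  on a finite grid of "worlds".
    # This illustrates the definition of entailment; it is not a general proof.
    for x in range(-3, 4):
        for y in range(-3, 4):
            kb_true = (x > 0) and (y > 0)        # do the premises hold in this world?
            if kb_true:
                assert x + y > 0                 # then the conclusion must also hold
    print("conclusion held in every sampled world where the KB was true")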

9 Propositional logic: Syntax
Symbols represent whole propositions (facts). The symbols of propositional logic are the logic constants true and false. Logical connectives: ¬ (not), ∧ (and), ∨ (or), ⇒ (implies), and ⇔ (equivalent).
If S is a sentence, ¬S is a sentence.
If S1 and S2 are sentences, S1 ∧ S2 is a sentence.
If S1 and S2 are sentences, S1 ∨ S2 is a sentence.
If S1 and S2 are sentences, S1 ⇒ S2 is a sentence.
If S1 and S2 are sentences, S1 ⇔ S2 is a sentence.
The order of precedence in propositional logic is ¬, ∧, ∨, ⇒, ⇔ (from highest to lowest).
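To make the grammar concrete, one possible (non-prescribed) encoding represents a sentence as either a proposition-symbol string or a nested tuple whose first element names the connective; the sketch below checks well-formedness against the rules above. The representation and the helper name is_sentence are my own choices.

    # Sentences as nested tuples: a string is a proposition symbol (or 'true'/'false'),
    # ('not', S), ('and', S1, S2), ('or', S1, S2), ('implies', S1, S2), ('iff', S1, S2).
    CONNECTIVES = {"not": 1, "and": 2, "or": 2, "implies": 2, "iff": 2}

    def is_sentence(s):
        """Check that s is built according to the syntax rules above."""
        if isinstance(s, str):                      # proposition symbol or constant
            return True
        if isinstance(s, tuple) and s and s[0] in CONNECTIVES:
            return (len(s) == CONNECTIVES[s[0]] + 1
                    and all(is_sentence(part) for part in s[1:]))
        return False

    print(is_sentence(("implies", ("and", "A11", "EastA"), ("not", "Forward"))))  # True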

10 Propositional logic: Semantics
¬S is true iff S is false.
S1 ∧ S2 is true iff S1 is true and S2 is true.
S1 ∨ S2 is true iff S1 is true or S2 is true.
S1 ⇒ S2 is true iff S1 is false or S2 is true; i.e., it is false iff S1 is true and S2 is false. (If S1 is true, then S2 must be true; if S1 is false, then S2 may be either true or false.)
S1 ⇔ S2 is true iff S1 ⇒ S2 is true and S2 ⇒ S1 is true.
Truth table:
S1      S2      ¬S1     S1∧S2   S1∨S2   S1⇒S2   S1⇔S2
false   false   true    false   false   true    true
false   true    true    false   true    true    false
true    false   false   false   true    false   false
true    true    false   true    true    true    true
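These truth conditions turn directly into a recursive evaluator: given a model, i.e. an assignment of true/false to every proposition symbol, the truth value of any sentence is computed from the truth values of its parts. A sketch, assuming the tuple encoding from the previous example (pl_true is an assumed name):

    # Evaluate a sentence (nested-tuple form) in a model: dict symbol -> bool.
    def pl_true(s, model):
        if isinstance(s, str):
            if s == "true":  return True
            if s == "false": return False
            return model[s]
        op = s[0]
        if op == "not":     return not pl_true(s[1], model)
        if op == "and":     return pl_true(s[1], model) and pl_true(s[2], model)
        if op == "or":      return pl_true(s[1], model) or pl_true(s[2], model)
        if op == "implies": return (not pl_true(s[1], model)) or pl_true(s[2], model)
        if op == "iff":     return pl_true(s[1], model) == pl_true(s[2], model)
        raise ValueError("unknown connective: %r" % (op,))

    # S1 => S2 is false only when S1 is true and S2 is false:
    print(pl_true(("implies", "S1", "S2"), {"S1": True, "S2": False}))   # False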

11 Propositional inference: Enumeration method
Let α = A ∨ B and KB = (A ∨ C) ∧ (B ∨ ¬C). Is it the case that KB ⊨ α? Check all possible models: α must be true whenever KB is true.
A       B       C       A∨C     B∨¬C    KB      α
false   false   false   false   true    false   false
false   false   true    true    false   false   false
false   true    false   false   true    false   true
false   true    true    true    true    true    true
true    false   false   true    true    true    true
true    false   true    true    false   false   true
true    true    false   true    true    true    true
true    true    true    true    true    true    true
In every row where KB is true, α is also true, so KB ⊨ α.
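The enumeration check itself is easy to automate: run through all 2^n truth assignments and confirm that α holds in every model of the KB. The sketch below, with assumed names pl_true and tt_entails, reproduces the table above and prints True for this KB and α.

    from itertools import product

    # Truth-table entailment: KB |= alpha iff alpha is true in every model of KB.
    # Sentences are nested tuples as in the earlier sketches.
    def pl_true(s, model):
        if isinstance(s, str):
            return model[s]
        op, args = s[0], [pl_true(a, model) for a in s[1:]]
        return {"not": lambda: not args[0],
                "and": lambda: args[0] and args[1],
                "or":  lambda: args[0] or args[1],
                "implies": lambda: (not args[0]) or args[1],
                "iff": lambda: args[0] == args[1]}[op]()

    def tt_entails(kb, alpha, symbols):
        for values in product([False, True], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if pl_true(kb, model) and not pl_true(alpha, model):
                return False                      # found a model of KB where alpha fails
        return True

    kb    = ("and", ("or", "A", "C"), ("or", "B", ("not", "C")))
    alpha = ("or", "A", "B")
    print(tt_entails(kb, alpha, ["A", "B", "C"]))   # True: KB |= alpha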

12 The knowledge base
Percept sentences:
there is no smell in square [1,1]: ¬S1,1
there is no breeze in square [1,1]: ¬B1,1
there is no smell in square [2,1]: ¬S2,1
there is a breeze in square [2,1]: B2,1
there is a smell in square [1,2]: S1,2
there is no breeze in square [1,2]: ¬B1,2

13 The knowledge base (continued)
Knowledge sentences:
If a square has no smell, then neither the square nor any of its adjacent squares can house a wumpus.
R1: ¬S1,1 ⇒ ¬W1,1 ∧ ¬W1,2 ∧ ¬W2,1
R2: ¬S2,1 ⇒ ¬W1,1 ∧ ¬W2,1 ∧ ¬W2,2 ∧ ¬W3,1
If there is a smell in [1,2], then there must be a wumpus in [1,2] or in one or more of the neighboring squares.
R3: S1,2 ⇒ W1,3 ∨ W1,2 ∨ W2,2 ∨ W1,1
If a square has no breeze, then neither the square nor any of its adjacent squares can contain a pit.
R4: ¬B1,1 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,1
R5: ¬B1,2 ⇒ ¬P1,1 ∧ ¬P1,2 ∧ ¬P2,2 ∧ ¬P1,3
If there is a breeze in [2,1], then there must be a pit in [3,1] or in one or more of the neighboring squares.
R6: B2,1 ⇒ P3,1 ∨ P2,1 ∨ P2,2 ∨ P1,1
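Combining the percept sentences of slide 12 with R1-R6, the enumeration method from slide 11 can reproduce part of the reasoning of slide 5, namely that the wumpus must be in [1,3] and the pit in [3,1]. The sketch below uses the same tuple encoding as before; symbol names such as 'W13' and helper names such as entails are my own abbreviations, and the brute-force check runs over 2^18 models, so it may take a few seconds.

    from itertools import product

    # Wumpus-world KB from slides 12-13, checked by truth-table enumeration.
    # 'W13' means "the wumpus is in [1,3]", 'P31' means "there is a pit in [3,1]", etc.
    def pl_true(s, model):
        if isinstance(s, str):
            return model[s]
        op = s[0]
        if op == "not":
            return not pl_true(s[1], model)
        if op == "and":
            return all(pl_true(a, model) for a in s[1:])
        if op == "or":
            return any(pl_true(a, model) for a in s[1:])
        if op == "implies":
            return (not pl_true(s[1], model)) or pl_true(s[2], model)
        raise ValueError("unknown connective: %r" % (op,))

    def Not(x): return ("not", x)
    def And(*xs): return ("and",) + xs
    def Or(*xs): return ("or",) + xs
    def Implies(a, b): return ("implies", a, b)

    percepts = [Not("S11"), Not("B11"), Not("S21"), "B21", "S12", Not("B12")]
    rules = [
        Implies(Not("S11"), And(Not("W11"), Not("W12"), Not("W21"))),              # R1
        Implies(Not("S21"), And(Not("W11"), Not("W21"), Not("W22"), Not("W31"))),  # R2
        Implies("S12", Or("W13", "W12", "W22", "W11")),                            # R3
        Implies(Not("B11"), And(Not("P11"), Not("P12"), Not("P21"))),              # R4
        Implies(Not("B12"), And(Not("P11"), Not("P12"), Not("P22"), Not("P13"))),  # R5
        Implies("B21", Or("P31", "P21", "P22", "P11")),                            # R6
    ]
    kb = And(*(percepts + rules))

    symbols = ["S11", "B11", "S21", "B21", "S12", "B12",
               "W11", "W12", "W21", "W22", "W31", "W13",
               "P11", "P12", "P21", "P22", "P31", "P13"]

    def entails(kb, alpha):
        # KB |= alpha iff no model makes KB true and alpha false (2**18 models here).
        for values in product([False, True], repeat=len(symbols)):
            model = dict(zip(symbols, values))
            if pl_true(kb, model) and not pl_true(alpha, model):
                return False
        return True

    print(entails(kb, "W13"))   # True: the wumpus must be in [1,3]
    print(entails(kb, "P31"))   # True: the pit must be in [3,1]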

14 Quiz. Why is the α-β algorithm more efficient than the Minimax algorithm? Please explain.
A solution: The α-β algorithm is an improved version of the Minimax algorithm; it cuts off branches of the game tree that do not need to be examined because they cannot affect the final decision, which reduces the number of nodes the algorithm must evaluate. Therefore, the α-β algorithm is more efficient than the Minimax algorithm.
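For reference, the pruning idea in the answer can be shown in a compact sketch: alpha records the best value the maximizing player can already guarantee, beta the best the minimizing player can guarantee, and a branch is abandoned as soon as alpha >= beta, because its exact value can no longer influence the decision at the root. The toy game-tree encoding below (nested lists with numeric leaves) is made up for illustration.

    # A sketch of minimax with alpha-beta pruning on a toy game tree.
    # A tree is either a number (leaf value) or a list of child trees.
    def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
        if not isinstance(node, list):               # leaf: return its static value
            return node
        if maximizing:
            value = float("-inf")
            for child in node:
                value = max(value, alphabeta(child, False, alpha, beta))
                alpha = max(alpha, value)
                if alpha >= beta:                    # remaining siblings cannot matter
                    break                            # -> pruned (the efficiency gain)
            return value
        else:
            value = float("inf")
            for child in node:
                value = min(value, alphabeta(child, True, alpha, beta))
                beta = min(beta, value)
                if alpha >= beta:
                    break
            return value

    # Classic 3-ply example: the minimax value is 3, and some leaves are never examined.
    tree = [[3, 12, 8], [2, 4, 6], [14, 5, 2]]
    print(alphabeta(tree, maximizing=True))          # 3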

15 Vocabulary (English – Japanese): knowledge 知識; logic 論理学; propositional logic 命題論理; means 手段; representation 表現; infer 推論する; inference 推論, 推理; syntax 文法; semantics 語義論; assuming 仮定して; entail 〈…を〉必然的に伴う; breeze 微風; pit 落とし穴; smelly 強い[いやな]においのする; Wumpus 鬼; adjacent 隣接した; cave 洞穴; enumeration 一覧表; fact 事実; imply 〈…を〉(必然的に)含む, 伴う; equivalent 同等の