
1 Artificial Intelligence, Chapter 23: Multiple Agents
Biointelligence Lab, School of Computer Sci. & Eng., Seoul National University

2 Outline
- Interacting Agents
- Models of Other Agents
- A Modal Logic of Knowledge
- Additional Readings and Discussion

3 23.1 Interacting Agents
- Agents' objectives:
  - To predict what another agent will do: requires methods for modeling the other agent.
  - To affect what another agent will do: requires methods for communicating with the other agent.
- Focus: distributed artificial intelligence (DAI)

4 23.2 Models of Other Agents
- Varieties of models:
  - Why model at all? To predict the behavior of other agents and processes.
  - The models of interest are high-level ones (e.g., a T-R program).
  - A model, together with the apparatus for using that model to select actions, is called a cognitive structure; a cognitive structure often includes an agent's goals and intentions.
  - Our focus within the cognitive structure: an agent's model of its environment and of the cognitive structures of other agents.

5 23.2 Models of Other Agents (Cont'd)
- Modeling strategies:
  - Iconic: attempts to simulate relevant aspects of the environment.
  - Feature-based: attempts to describe the environment.

6 23.2 Models of Other Agents (Cont'd)
- Simulation strategies: often useful, but they suffer from the difficulty of representing ignorance or uncertainty.
- Simulation databases: build a hypothetical DB of formulas presumed to be the same formulas that our agent thinks actually populate the other agent's world model.
  - This has the same deficiency as iconic models: it cannot represent our agent's uncertainty about whether the other agent's model contains a given formula.
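To make that deficiency concrete, here is a minimal Python sketch of a simulation database; the encoding of formulas as strings and all names (SimulationDB, presume, believes) are illustrative assumptions, not from the slides.

# A minimal sketch of a simulation database: for each other agent, our
# agent stores the formulas it presumes populate that agent's world model.

class SimulationDB:
    def __init__(self):
        # agent name -> set of formula strings presumed to be in its DB
        self.models = {}

    def presume(self, agent, formula):
        self.models.setdefault(agent, set()).add(formula)

    def believes(self, agent, formula):
        # The deficiency noted above: a missing formula is ambiguous. It
        # may mean the other agent lacks it, or merely that *we* do not
        # know whether the other agent has it.
        return formula in self.models.get(agent, set())

db = SimulationDB()
db.presume("Sam", "On(A,B)")
print(db.believes("Sam", "On(A,B)"))   # True
print(db.believes("Sam", "On(A,C)"))   # False: lacks it, or unknown to us?

The last query illustrates the point: False conflates the other agent's ignorance with our own.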

7 23.2 Models of Other Agents (Cont'd)
- The intentional stance:
  - Taking the intentional stance means describing another agent's knowledge and beliefs about the world.
  - Three possibilities for constructing intentional-stance models:
    1. Reify the other agent's beliefs [McCarthy 1979]: Bel(On(A,B))
    2. Assume that the other agent actually represents its beliefs about the world by predicate-calculus formulas in its DB: Bel(Sam, On(A,B))
    3. Use modal operators.
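The three options can be contrasted concretely. A sketch in Python, using an illustrative tuple encoding of formulas (an assumption of this sketch, not the book's notation):

# Three ways our agent might encode "Sam believes that A is on B".

# 1. Reify the belief: the believed formula becomes a term (here, a string).
reified = ("Bel", "On(A,B)")

# 2. Name the believer explicitly as an argument of the belief predicate.
with_agent = ("Bel", "Sam", "On(A,B)")

# 3. A modal operator wrapping a genuine subformula (a nested structure),
#    which permits nesting such as K(Agent1, K(Agent2, On(A,B))).
modal = ("K", "Sam", ("On", "A", "B"))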

8 23.3 A Modal Logic of Knowledge
- Modal operators:
  - The modal operator K is used to construct a formula whose intended meaning is that a certain agent knows a certain proposition.
  - E.g., K(Sam, On(A,B)); written K(α, φ) or K_α(φ).
- Knowledge and belief:
  - Whereas an agent can believe a false proposition, it cannot know anything that is false.
  - Consequently, the logic of knowledge is simpler than the logic of belief.

9 23.3 A Modal Logic of Knowledge (Cont'd)
- Modal first-order language using the operator K; syntax:
  1. All of the wffs of ordinary first-order predicate calculus are also wffs of the modal language.
  2. If φ is a closed wff of the modal language, and if α is a ground term, then K(α, φ) is a wff of the modal language.
  3. As usual, if φ and ψ are wffs, then so are any expressions that can be constructed from φ and ψ by the usual propositional connectives.
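A toy well-formedness checker for these three rules; the tuple encoding, and the simplification of treating plain strings as closed first-order wffs (so rule 1 is assumed rather than parsed), are assumptions of this sketch:

# Formulas are nested tuples; a plain string stands for a closed wff of
# ordinary first-order predicate calculus (rule 1, taken on trust).

CONNECTIVES = {"not": 1, "and": 2, "or": 2, "implies": 2}

def is_wff(f):
    if isinstance(f, str):          # rule 1: closed first-order wff
        return True
    if not isinstance(f, tuple) or not f:
        return False
    if f[0] == "K":                 # rule 2: K(alpha, phi), alpha ground
        return len(f) == 3 and isinstance(f[1], str) and is_wff(f[2])
    # rule 3: the usual propositional connectives
    return f[0] in CONNECTIVES and len(f) == CONNECTIVES[f[0]] + 1 \
        and all(map(is_wff, f[1:]))

print(is_wff(("K", "Agent1", ("K", "Agent2", "On(A,B)"))))  # True
# Quantifying into K, as in (exists x) K(Agent1, On(x,B)), is excluded by
# rule 2, which demands a *closed* wff inside K; this sketch enforces that
# only by assumption, since strings are opaque here.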

10 23.3 A Modal Logic of Knowledge (Cont'd)
- Examples:
  - K(Agent1, K(Agent2, On(A,B))): Agent1 knows that Agent2 knows that A is on B.
  - K(Agent1, On(A,B)) ∨ K(Agent1, On(A,C)): either Agent1 knows that A is on B, or it knows that A is on C.
  - K(Agent1, On(A,B) ∨ On(A,C)): Agent1 knows that either A is on B or A is on C.
  - K(Agent1, On(A,B)) ∨ K(Agent1, ¬On(A,B)): Agent1 knows whether or not A is on B.
  - ¬K(Agent1, On(A,B)): Agent1 does not know that A is on B.
  - (∃x)K(Agent1, On(x,B)): not a legal wff, since On(x,B) is not closed (rule 2).

11 23.3 A Modal Logic of Knowledge (Cont'd)
- Knowledge axioms:
  - The ordinary connectives (∧, ∨, ¬, ⊃) have compositional semantics, but the semantics of K is not compositional:
    - The truth value of K_α(φ) does not depend compositionally on α and φ.
    - φ ≡ ψ does not license K_α(φ) ≡ K_α(ψ) for all α, since α might not know that φ is equivalent to ψ.
  - Axiom schemas:
    - Distribution axiom: [K_α(φ) ∧ K_α(φ ⊃ ψ)] ⊃ K_α(ψ) … (1)
      (equivalently, K_α(φ ⊃ ψ) ⊃ [K_α(φ) ⊃ K_α(ψ)] … (2))
    - Knowledge axiom: K_α(φ) ⊃ φ … (3); an agent cannot possibly know something that is false.
    - Positive-introspection axiom: K_α(φ) ⊃ K_α(K_α(φ)) … (4)

12 23.3 A Modal Logic of Knowledge (Cont'd)
    - Negative-introspection axiom: ¬K_α(φ) ⊃ K_α(¬K_α(φ)) … (5)
    - Epistemic necessitation: from ├ φ, infer K_α(φ) … (6)
    - Logical omniscience: from φ ├ ψ and K_α(φ), infer K_α(ψ) … (7)
      (equivalently, from ├ (φ ⊃ ψ), infer K_α(φ) ⊃ K_α(ψ) … (8))
    - From logical omniscience it follows that K(α, φ ∧ ψ) ≡ K(α, φ) ∧ K(α, ψ) … (9)
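A minimal mechanization of the distribution axiom in the form of schema (2), under the illustrative tuple encoding used above; the names distribute and kb are assumptions of the sketch:

# Close a set of formulas under schema (2): from K(a, phi -> psi) and
# K(a, phi), add K(a, psi). Formulas are nested tuples as sketched above.

def distribute(kb):
    kb = set(kb)
    changed = True
    while changed:
        changed = False
        for f in list(kb):
            if isinstance(f, tuple) and f[0] == "K" \
                    and isinstance(f[2], tuple) and f[2][0] == "implies":
                a, (_, phi, psi) = f[1], f[2]
                if ("K", a, phi) in kb and ("K", a, psi) not in kb:
                    kb.add(("K", a, psi))   # the agent "knows" the consequent
                    changed = True
    return kb

kb = {("K", "Sam", ("implies", "On(A,B)", "Clear(A)")),
      ("K", "Sam", "On(A,B)")}
assert ("K", "Sam", "Clear(A)") in distribute(kb)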

13 23.3 A Modal Logic of Knowledge (Cont'd)
- Reasoning about other agents' knowledge:
  - Our agent can carry out proofs of some statements about the knowledge of other agents using only the knowledge axioms, epistemic necessitation, and its own reasoning ability (modus ponens, resolution).
  - E.g., the wise-man puzzle:
    - Setup: among three wise men, at least one has a white spot on his forehead. Each wise man can see the others' foreheads but not his own. Two of them have said, "I don't know whether I have a white spot."
    - Proof of K_A(White(A)), where A is the third man:
      1. K_A[¬White(A) ⊃ K_B(¬White(A))] (given)
      2. K_A[K_B(¬White(A) ⊃ White(B))] (given)
      3. K_A(¬K_B(White(B))) (given)
      4. ¬White(A) ⊃ K_B(¬White(A)) (1, and axiom 3)
      5. K_B(¬White(A) ⊃ White(B)) (2, and axiom 3)

14 23.3 A Modal Logic of Knowledge (Cont'd)
      6. K_B(¬White(A)) ⊃ K_B(White(B)) (5, and axiom 2)
      7. ¬White(A) ⊃ K_B(White(B)) (resolution on the clause forms of 4 and 6)
      8. ¬K_B(White(B)) ⊃ White(A) (contrapositive of 7)
      9. K_A[¬K_B(White(B)) ⊃ White(A)] (1-5, 8, and rule 7)
      10. K_A(¬K_B(White(B))) ⊃ K_A(White(A)) (9, and axiom 2)
      11. K_A(White(A)) (modus ponens using 3 and 10)
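The propositional core of the final steps can be checked by brute force. A sketch treating White(A) and K_B(White(B)) as opaque propositions (the variable names are illustrative):

# Steps 7 and 3 (with the outer K_A stripped) jointly force White(A),
# which is what steps 8-11 derive inside A's knowledge:
#   (i)  not White(A) -> Kb_White_B      (step 7)
#   (ii) not Kb_White_B                  (step 3, inner formula)
# Enumerate all truth assignments and check.

from itertools import product

for white_a, kb_white_b in product([False, True], repeat=2):
    premise_i = white_a or kb_white_b     # not White(A) -> Kb(White(B))
    premise_ii = not kb_white_b
    if premise_i and premise_ii:
        assert white_a                    # White(A) holds in every such model
print("White(A) follows.")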

15 23.3 A Modal Logic of Knowledge (Cont'd)
- Predicting the actions of other agents:
  - In order to predict what another agent, A1, will do: if A1 is not too complex, our agent may assume that A1's actions are controlled by a T-R program. Suppose the conditions in that program are c_i, for i = 1, …, k. To predict A1's future actions, our agent needs to reason about how A1 will evaluate these conditions.
  - It is often appropriate for our agent to take an intentional stance toward A1 and attempt to establish whether or not K_A1(c_i), for i = 1, …, k, as sketched below.
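A minimal sketch of this prediction scheme; the encoding of a T-R program as an ordered list of (condition, action) pairs and all names (predict_action, known) are illustrative assumptions:

# Predict A1's next action: the first rule whose condition our agent has
# established A1 knows (K_A1(c_i)) is the one presumed to fire.

def predict_action(tr_program, known):
    for condition, action in tr_program:
        if condition in known:        # we established K_A1(condition)
            return action
    return None                       # no prediction possible

tr = [("BatteryLow", "recharge"), ("Clear(Path)", "move_forward")]
print(predict_action(tr, known={"Clear(Path)"}))   # move_forward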

16 Additional Readings and Discussion
- [Shoham 1993]: "agent-oriented programming"
- [Minsky 1986]: Society of Mind
- [Hintikka 1962]: modal logic of knowledge
- [Kripke 1963]: possible-worlds semantics for modal logics
- [Moore 1985b]: possible-worlds semantics within first-order logic
- [Levesque 1984b], [Fagin & Halpern 1985], [Konolige 1986]
- [Cohen & Levesque 1990]: modal logic for the relationship between intention and commitment
- [Bond & Gasser 1988]: a collection of DAI papers

