
Default and Cooperative Reasoning in Multi-Agent Systems Chiaki Sakama Wakayama University, Japan Programming Multi-Agent Systems based on Logic Dagstuhl Seminar, November 2002

Incomplete Knowledge in Multi-Agent Systems (MAS) An individual agent has incomplete knowledge in an MAS. In AI, a single agent performs default reasoning when its knowledge is incomplete. In a multi-agent environment, however, caution is required before performing default reasoning based on an agent's incomplete knowledge.

Default Reasoning by a Single Agent Let A be an agent and F a propositional sentence. When A ⊭ F, F is not proved by A, and ~F is assumed by default (negation as failure).

Default Reasoning in a Multi-Agent Environment Let A1,…,An be agents and F a propositional sentence. When A1 ⊭ F (†), F is not proved by A1, but F may be proved by another agent among A2,…,An. ⇒ It is unsafe to conclude ~F by default on the evidence (†) alone.

Default Reasoning vs. Cooperative Reasoning in MAS An agent can perform default reasoning when its incomplete belief concerns the agent's internal world. If, on the other hand, an agent has incomplete knowledge about its external world, it is more appropriate to perform cooperative reasoning.

Purpose of this Research It is necessary to distinguish different types of incomplete knowledge in an agent. We consider a multi-agent system based on logic and provide a framework of default/cooperative reasoning in an MAS.

Problem Setting An MAS consists of a finite number of agents. Every agent has the same underlying language and a shared ontology. Each agent has a knowledge base written as a logic program.

Multi-Agent Logic Program (MLP) Given an MAS { A1,…, An } with agents Ai (1 ≤ i ≤ n), a multi-agent logic program (MLP) is defined as a set { P1,…, Pn } where Pi is the program of agent Ai. Each Pi is an extended logic program consisting of rules of the form: L0 ← L1,…, Lm, not Lm+1,…, not Ln where each Li is a literal and not represents negation as failure.

Terms / Notations A predicate that appears in the head of no rule of a program is called external; otherwise it is called internal. A literal with an external (resp. internal) predicate is called an external (resp. internal) literal. ground(P): the ground instantiation of a program P. Lit(P): the set of all ground literals appearing in ground(P). Cn(P) = { L | L is a ground literal such that P |= L }.
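In a propositional reading, the internal/external split above can be computed directly from which predicates head some rule. A minimal Python sketch, using a hypothetical simplified rule encoding as (head_predicate, body_predicates) pairs (names are illustrative, not from the slides):

```python
def classify_predicates(rules):
    """Internal predicates head at least one rule; external predicates
    appear only in rule bodies (hypothetical simplified encoding)."""
    heads = {head for head, _ in rules}
    bodies = {p for _, body in rules for p in body}
    return heads, bodies - heads

# Predicate skeleton of program P1 from the travel example in this talk.
rules = [("travel", ["date", "scheduled", "reserve"]),
         ("reserve", ["flight", "state"]),
         ("date", []), ("scheduled", []), ("flight", [])]

internal, external = classify_predicates(rules)
# external == {"state"}: the only predicate A1 must ask other agents about.
```

This matches the example: state is never derived by P1 itself, so goals on it are suspended and delegated.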

Answer Set Semantics Let P be a program and S a set of ground literals satisfying the conditions: 1. P^S is the set of ground rules such that L0 ← L1,…, Lm is in P^S iff L0 ← L1,…, Lm, not Lm+1,…, not Ln is in ground(P) and { Lm+1,…, Ln } ∩ S = ∅. 2. S = Cn(P^S). Then S is called an answer set of P.
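For small ground programs, this reduct-based definition can be checked by brute force over candidate sets. A Python sketch (an illustrative mini-implementation, not from the slides), encoding each rule as (head, positive_body, negated_body):

```python
from itertools import chain, combinations

def reduct(rules, s):
    """Reduct P^S: drop rules whose negated body intersects S,
    and delete the 'not' literals from the remaining rules."""
    return [(h, pos) for (h, pos, neg) in rules if not (set(neg) & s)]

def cn(positive_rules):
    """Least model of a negation-free ground program (fixpoint iteration)."""
    derived, changed = set(), True
    while changed:
        changed = False
        for h, pos in positive_rules:
            if set(pos) <= derived and h not in derived:
                derived.add(h)
                changed = True
    return derived

def answer_sets(rules):
    """All S with S = Cn(P^S), enumerated over subsets of the atoms."""
    atoms = {h for h, _, _ in rules} | {a for _, p, n in rules for a in p + n}
    subsets = chain.from_iterable(combinations(atoms, r)
                                  for r in range(len(atoms) + 1))
    return [set(s) for s in map(set, subsets) if cn(reduct(rules, set(s))) == set(s)]

# p ← not q.   q ← not p.   -- two answer sets, {p} and {q}
rules = [("p", [], ["q"]), ("q", [], ["p"])]
print(answer_sets(rules))
```

The exponential enumeration is only a didactic device; real answer set solvers work very differently.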

Rational Agents A program P is consistent if P has a consistent answer set. An agent Ai is called rational if it has a consistent program Pi. We assume an MAS { A1,…, An } where each agent Ai is rational.

Semantics of MLP Given an MLP { P1,…, Pn }, the program Πi is defined by: (i) Pi ⊆ Πi; (ii) Πi is a maximal consistent subset of P1 ∪ … ∪ Pn. A set S of ground literals is called a belief set of an agent Ai if S = T ∩ Lit(Pi) where T is an answer set of Πi.

Belief Sets An agent has multiple belief sets in general. Belief sets are consistent and minimal under set inclusion. Given an MAS { A1,…, An }, an agent Ai (1 ≤ i ≤ n) concludes a propositional sentence F (written Ai |= F) if F is true in every belief set of Ai.

Example Suppose an MLP { P1, P2 } such that
P1:
travel(Date, Flight#) ← date(Date), not scheduled(Date), reserve(Date, Flight#).
reserve(Date, Flight#) ← flight(Date, Flight#), not state(Flight#, full).
date(d1)←.  date(d2)←.  scheduled(d1)←.
flight(d1, f123)←.  flight(d2, f456)←.  flight(d2, f789)←.
P2:
state(f456, full)←.

Example (cont.) The agent A1 has the single belief set
{ travel(d2, f789), reserve(d2, f789), date(d1), date(d2), scheduled(d1), flight(d1, f123), flight(d2, f456), flight(d2, f789), state(f456, full) }

Example Suppose an MLP { P1, P2, P3 } such that
P1:
go_cinema ← interesting, not crowded
¬go_cinema ← ¬interesting
P2:
interesting ←
P3:
¬interesting ←
The agent A1 has two belief sets: { go_cinema, interesting } and { ¬go_cinema, ¬interesting }
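The skeptical conclusion relation Ai |= F over multiple belief sets can be sketched in a few lines of Python. Encoding ¬L as a "-"-prefixed string is an assumption of this sketch, not notation from the slides:

```python
def concludes(belief_sets, literal):
    """A_i |= F iff F is true in every belief set of A_i (skeptical entailment)."""
    return all(literal in s for s in belief_sets)

# The two belief sets of agent A1 in the cinema example.
belief_sets = [{"go_cinema", "interesting"},
               {"-go_cinema", "-interesting"}]

# Neither go_cinema nor its negation holds in every belief set,
# so the agent concludes neither: it remains undecided.
assert not concludes(belief_sets, "go_cinema")
assert not concludes(belief_sets, "-go_cinema")
```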

Abductive Logic Programs An abductive logic program is a pair 〈P, A〉 where P is a program and A is a set of hypotheses (possibly containing rules). 〈P, A〉 has a belief set S_H if S_H is a consistent answer set of P ∪ H for some H ⊆ A. A belief set S_H is maximal (with respect to A) if there is no belief set T_K such that H ⊂ K.

Abductive Characterization of MLP Given an MLP { P1,…, Pn }, an agent Ai has a belief set S iff S = T_H ∩ Lit(Pi) where T_H is a maximal belief set of the abductive logic program 〈 Pi, P1 ∪ … ∪ Pi-1 ∪ Pi+1 ∪ … ∪ Pn 〉.

Problem Solving in MLP We consider an MLP { P1,…, Pn } where each Pi is a stratified normal logic program. Given a query ← G, an agent solves the goal in a top-down manner. Any internal literal in a subgoal is evaluated within the agent. Any external literal in a subgoal is suspended, and the agent asks the other agents whether it can be proved.

Simple Meta-Interpreter
solve(Agent, (Goal1, Goal2)) :-
    solve(Agent, Goal1),
    solve(Agent, Goal2).
solve(Agent, not(Goal)) :-
    not(solve(Agent, Goal)).
solve(Agent, int(Fact)) :-
    kb(Agent, Fact).
solve(Agent, int(Goal)) :-
    kb(Agent, (Goal :- Subgoal)),
    solve(Agent, Subgoal).
solve(Agent, ext(Fact)) :-
    kb(AnyAgent, Fact).
solve(Agent, ext(Goal)) :-
    kb(AnyAgent, (Goal :- Subgoal)),
    solve(AnyAgent, Subgoal).
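A hedged Python analogue of the interpreter above, restricted to ground (propositional) programs: each agent's KB maps an atom to a list of rule bodies, and a body literal is a (sign, scope, atom) triple. The encoding and the atom/agent names are illustrative assumptions, and, as in the slides, the procedure presumes stratified programs (it would not terminate on cyclic negation):

```python
def prove(kbs, agent, atom, scope="int"):
    """Top-down proof with negation as failure. Internal atoms are evaluated
    in the agent's own KB; external atoms are asked of every agent, and a
    subgoal found in another agent's rule is solved within that agent."""
    candidates = list(kbs) if scope == "ext" else [agent]
    for a in candidates:
        for body in kbs[a].get(atom, []):
            if all(prove(kbs, a, at, sc) if sg == "pos"
                   else not prove(kbs, a, at, sc)   # negation as failure
                   for sg, sc, at in body):
                return True
    return False

# Propositional cut of the travel example: A1 can book f789 because no agent
# proves state(f789, full), but not f456, which A2 reports as full.
kbs = {
    "A1": {
        "travel_f456": [[("pos", "int", "flight_f456"),
                         ("neg", "ext", "state_f456_full")]],
        "travel_f789": [[("pos", "int", "flight_f789"),
                         ("neg", "ext", "state_f789_full")]],
        "flight_f456": [[]],
        "flight_f789": [[]],
    },
    "A2": {"state_f456_full": [[]]},
}
```

The key design point mirrors the Prolog clauses: only the scope annotation decides whether a goal stays local or is broadcast, so default reasoning on internal literals coexists with cooperative reasoning on external ones.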

Example Recall the MLP { P1, P2 } with
P1:
travel(Date, Flight#) ← date(Date), not scheduled(Date), reserve(Date, Flight#).
reserve(Date, Flight#) ← flight(Date, Flight#), not state(Flight#, full).
date(d1)←.  date(d2)←.  scheduled(d1)←.
flight(d1, f123)←.  flight(d2, f456)←.  flight(d2, f789)←.
P2:
state(f456, full)←.

Example (cont.) Goal: ← travel(Date, Flight#).
P1: travel(Date, Flight#) ← date(Date), not scheduled(Date), reserve(Date, Flight#).
G: ← date(Date), not scheduled(Date), reserve(Date, Flight#).

Example (cont.) G: ← date(Date), not scheduled(Date), reserve(Date, Flight#).
P1: date(d1)←.  date(d2)←.
G: ← not scheduled(d1), reserve(d1, Flight#).
P1: scheduled(d1)←.
fail

Example (cont.) ! Backtrack
G: ← date(Date), not scheduled(Date), reserve(Date, Flight#).
P1: date(d1)←.  date(d2)←.
G: ← not scheduled(d2), reserve(d2, Flight#).
G: ← reserve(d2, Flight#).

Example (cont.) G: ← reserve(d2, Flight#).
P1: reserve(Date, Flight#) ← flight(Date, Flight#), not state(Flight#, full).
G: ← flight(d2, Flight#), not state(Flight#, full).
P1: flight(d2, f456)←.  flight(d2, f789)←.
G: ← not state(f456, full).
! Suspend the goal and ask P2 whether state(f456, full) holds or not.

Example (cont.) G: ← not state(f456, full).
P2: state(f456, full)←.
fail
! Backtrack
G: ← flight(d2, Flight#), not state(Flight#, full).
P1: flight(d2, f456)←.  flight(d2, f789)←.
G: ← not state(f789, full).

Example (cont.) G: ← not state(f789, full).
! Suspend the goal and ask P2 whether state(f789, full) holds or not.
The goal ← state(f789, full) fails in P2, so G succeeds in P1. As a result, the initial goal ← travel(Date, Flight#) has the unique solution Date = d2, Flight# = f789.

Correctness Let { P1,…, Pn } be an MLP where each Pi is a stratified normal logic program. If an agent Ai solves a goal G with an answer substitution θ, then Ai |= Gθ, i.e., Gθ is true in the belief set of Ai.

Further Issue The present system suspends a goal with an external predicate and waits for a response from other agents. When an expected response is known, speculative computation [Satoh et al., 2000] would be useful to proceed with computation without waiting for responses.

Further Issue The present system asks every other agent about the provability of external literals and has no strategy for selecting among responses. When the source of information is known, it is effective to designate the agents to be asked, or to discriminate among agents based on their reliability.

Summary A declarative semantics of default and cooperative reasoning in an MAS is provided. Belief sets characterize different types of incompleteness of an agent in an MAS. A proof procedure for query-answering in an MAS is provided; it is sound under the belief set semantics when the MLP is stratified.