MINERVA – A Dynamic Logic Programming Agent Architecture
João Alexandre Leite, José Júlio Alferes, Luís Moniz Pereira
Centro de Inteligência Artificial (CENTRIA), Universidade Nova de Lisboa, Portugal
Presented by Ian Strascina, 3/8/04

Agents
- Agents are commonly implemented in imperative languages, mainly for efficiency.
- Efficiency is not always critical, but clear specification and correctness are.
- Thus Logic Programming and Non-Monotonic Reasoning (LPNMR) are being (re)evaluated for agent implementation.

LPNMR
- Provides abstract, generalized solutions that accommodate different problem domains.
- Strong declarative and procedural semantics bridge the gap between theory and practice.
- Offers several powerful concepts: belief revision, inductive learning, preferences, etc.
- The combination of these allows for a mixture of agents with reactive and rational behaviours.

LPNMR
- Drawback: LP usually represents static environments.
- Conflict: agents are typically dynamic, acting in dynamic environments.
- To get around this:
  - Dynamic Logic Programming (DLP) – represents and integrates knowledge from different sources, which may evolve over time.
  - Multi-Dimensional Dynamic Logic Programming (MDLP) – more expressive.
  - The “Language for Dynamic Updates” (LUPS) – for specifying changes over time.

MINERVA
- An agent architecture based on a dynamic representation of several aspects of the system, and on evolving them via state transitions.
- Named after Minerva, the Roman goddess of wisdom (amongst other things).

Topics
- Background of (M)DLP and LUPS
- MINERVA overall architecture
- Labouring sub-agents

Dynamic Logic Programming
- A sequence of logic programs, each holding knowledge about a given state – different time periods, priorities, points of view, etc.
- Aim: define declarative and procedural semantics given the relationships between the different states.
- Declarative semantics: the stable models of the program consisting of all ‘valid’ rules at that state.
- Property of inertia: rules persist from state to state unless overridden.
[Slide diagram – three successive program states:
    State 1: r1. a :- b, c.  p(X) :- q(X).
    State 2: r1. a :- b, c.  p(X) :- q(X).  d1 :- d2, e1.
    State 3: r1. p(X) :- f(Y,X), g(X,Z).  d1 :- d2, e1.]
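To make the rule-inertia idea concrete, here is a minimal Python sketch – not the paper's semantics, since it omits the stable-model computation and the body-satisfaction test used in rule rejection, and all names are illustrative. Rules carry over from older programs unless a later program contains a rule with the complementary head.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        head: str          # e.g. "open" or "not open"
        body: tuple = ()

    def complement(lit):
        return lit[4:] if lit.startswith("not ") else "not " + lit

    def rules_in_force(programs, state):
        """Rules of P1..P_state that survive by inertia: here a rule is dropped
        as soon as a later program contains a rule with the complementary head."""
        in_force = []
        for i in range(state):
            for rule in programs[i]:
                overridden = any(newer.head == complement(rule.head)
                                 for later in programs[i + 1:state]
                                 for newer in later)
                if not overridden:
                    in_force.append(rule)
        return in_force

    P1 = [Rule("open", ("not holiday",))]
    P2 = [Rule("not open", ("holiday",))]
    print(rules_in_force([P1, P2], 1))   # P1's rule still in force
    print(rules_in_force([P1, P2], 2))   # P1's rule overridden by the later update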

Dynamic Logic Programming
- DLP vs. the Situation Calculus.
- MDLP – a generalization, more expressive: a “societal” view capturing inter- and intra-agent relationships.
- How do we transition between states???
[Slide diagram – the same three program states as on the previous slide.]

LUPS
- LUPS – “Language for dynamic updates”: a language to declaratively specify changes to logic programs.
- It sequentially updates a logic program’s knowledge base.
- The declarative meaning of a sequence of sets of update commands in LUPS is defined by the semantics of the DLP generated by those commands.

LUPS
- A sentence U in LUPS is a set of simultaneous update commands (actions) that, given an existing sequence of logic programs (an MDLP), produces a new MDLP with one more logic program.
- A LUPS program is a sequence of such sentences; its semantics is defined by the DLP generated by the sequence of commands.

LUPS
- “Interpretation update” (updating models literal by literal) is not good enough.
- Example program, whose only stable model is M = {free}:
    free ← not jail.
    jail ← abortion.
- Suppose the update U = { abortion ← }.
- The only interpretation update of M by U would be {free, abortion} – which doesn’t make sense given the update: once abortion holds, the rule jail ← abortion should fire and withdraw free.
- Conclusion: inertia should be applied to rules, not to individual literals.

LUPS – commands
- The simplest command adds a rule to the knowledge state:
    assert L ← L1,…,Lk when Lk+1,…,Lm
- If the preconditions Lk+1,…,Lm are true in the current state, the rule L ← L1,…,Lk is added to the successor knowledge state.
- The rule then remains, indefinitely, by inertia – unless it is retracted or defeated by a future update.

LUPS – commands
- Sometimes we don’t want inertia.
- Example: wake_up ← alarm_rings (“if the alarm rings, we will wake up”).
- We want to stay awake once not alarm_rings becomes true (the alarm stops ringing); the fact alarm_rings itself should not persist by inertia.
- One-time events:
    assert event L ← L1,…,Lk when Lk+1,…,Lm

LUPS – commands
- To delete rules, we use the retract command:
    retract [event] L ← L1,…,Lk when Lk+1,…,Lm
- This deletes the rule from the next state onward.
- If event is specified, the rule is only temporarily deleted, in the next state.

LUPS – commands
- Assertions represent newly incoming information: their effects remain by inertia (unless overridden), but the assert command itself does not persist.
- We may want certain update commands to remain in effect across successive updates.
- Persistent update commands – “laws”:
    always [event] L ← L1,…,Lk when Lk+1,…,Lm
- To cancel a persistent update:
    cancel L ← L1,…,Lk when Lk+1,…,Lm
- Note that cancel and retract are not the same: cancelling stops the law from producing further updates, but does not retract rules it has already asserted.
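As a rough mental model of the command forms above, the following Python sketch (assumed names, not the LUPS reference syntax) represents assert/retract/always/cancel commands, with an event flag for the non-inertial variants:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Rule:
        head: str
        body: Tuple[str, ...] = ()

    @dataclass(frozen=True)
    class Command:
        kind: str                   # "assert", "retract", "always" or "cancel"
        rule: Rule
        when: Tuple[str, ...] = ()  # preconditions Lk+1,…,Lm
        event: bool = False         # True = one-state effect, no inertia

    # assert event wake_up ← alarm_rings when asleep   ("asleep" is a hypothetical precondition)
    wake = Command("assert", Rule("wake_up", ("alarm_rings",)),
                   when=("asleep",), event=True)
    print(wake)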

Overall Architecture

Common KB
- Contains knowledge about the agent and about others.
- Components (each an MDLP or a LUPS program):
  - Capabilities
  - Intentions
  - Goals
  - Plans
  - Reactions
  - Object Knowledge Base
  - Internal Behaviour Rules
  - Internal Clock (???)
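A hypothetical structural sketch of the Common KB, with one field per component listed above; in the actual architecture each field would hold an MDLP or a LUPS program rather than a plain Python list:

    from dataclasses import dataclass, field

    @dataclass
    class CommonKB:
        object_kb: list = field(default_factory=list)        # MDLP about the world
        capabilities: list = field(default_factory=list)     # LUPS action laws
        intentions: list = field(default_factory=list)       # committed actions
        goals: list = field(default_factory=list)            # pending goals
        plans: list = field(default_factory=list)            # plan library
        reactions: list = field(default_factory=list)        # reactive rules
        behaviour_rules: list = field(default_factory=list)  # internal behaviour rules
        clock: int = 0                                        # internal clock

    kb = CommonKB()
    kb.clock += 1   # one reasoning cycle elapses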

Object KB (MDLP)
- The main component, containing knowledge about the world.
- Represented as a DAG.
- A sequence of nodes for each sub-agent of agent α – its evolution in time; each sub-agent manipulates its own nodes.
- A sequence of nodes for the other agents in the system – representing α’s view of their evolution in time.
- The Dialoguer sub-agent handles interactions with other agents.
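A speculative sketch of the Object KB bookkeeping: one sequence of program nodes per owner (a sub-agent of α, or another agent as seen by α); the DAG edges relating the sequences are left implicit here, and all names are illustrative:

    from collections import defaultdict

    class ObjectKB:
        """Per-owner sequences of program nodes; the full Object KB is a DAG
        relating the sub-agents of alpha and the other agents alpha knows about."""
        def __init__(self):
            self.sequences = defaultdict(list)   # owner -> list of program states

        def add_state(self, owner, program):
            """Append a new program node to this owner's evolution in time."""
            self.sequences[owner].append(program)

        def latest(self, owner):
            seq = self.sequences[owner]
            return seq[-1] if seq else None

    okb = ObjectKB()
    okb.add_state("planner", ["p(X) :- q(X)."])        # a sub-agent's own node
    okb.add_state("agent_beta", ["offers(beta, d)."])  # alpha's view of another agent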

Capabilities (LUPS)
- Describes the actions the agent can perform and their effects.
- Easy to describe, since LUPS describes states and transitions.
- Typically, for each action ω:
    always L ← L1,…,Lk when Lk+1,…,Lm, ω
  where L ← L1,…,Lk is the effect, Lk+1,…,Lm are the preconditions, and ω is the action.

Capabilities (LUPS)
- Three main types of actions:
  1. Adding a new fluent – “ω causes F if …”:
       always F when F1,…,Fk, ω        (F and the Fi are fluents)
  2. Rule update:
       always L ← L1,…,Lk when Lk+1,…,Lm, ω
  3. Actions that, when performed in parallel, have different results – these translate into three update commands:
       always L1 when Lk+1,…,Lm, ωa, not ωb
       always L1 when Lk+1,…,Lm, not ωa, ωb
       always L2 when Lk+1,…,Lm, ωa, ωb
     i.e., “ωa xor ωb cause L1 if the preconditions hold, and in parallel they cause L2”.
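For illustration only, the helpers below (hypothetical action and fluent names such as push_left and table_moves) compile the first and third action types into the "always" laws shown above, using plain dicts for the commands:

    def causes(action, fluent, preconditions=()):
        """'action causes fluent if preconditions' as the persistent law:
           always fluent when preconditions, action"""
        return {"kind": "always", "head": fluent,
                "when": tuple(preconditions) + (action,)}

    def parallel_causes(a, b, alone_effect, together_effect, preconditions=()):
        """'a xor b cause alone_effect; a and b in parallel cause together_effect',
        expanded into the three laws of the slide."""
        pre = tuple(preconditions)
        return [
            {"kind": "always", "head": alone_effect, "when": pre + (a, "not " + b)},
            {"kind": "always", "head": alone_effect, "when": pre + ("not " + a, b)},
            {"kind": "always", "head": together_effect, "when": pre + (a, b)},
        ]

    # hypothetical action and fluent names
    print(causes("toggle", "light_on", ("switch_ok",)))
    print(parallel_causes("push_left", "push_right", "table_moves", "table_breaks"))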

Internal Behaviour Rules (LUPS)
- Specify the agent’s reactive internal epistemic state transitions.
- Form:
    assert L ← L1,…,Lk when Lk+1,…,Lm
  (“assert this rule if these conditions are true now”).
- Example:
    assert jail(X) ← abortion(X) when gov(repub)
    assert not jail(X) ← abortion(X) when gov(dem)

Goals (DLP)
- Each state in the DLP contains goals that the agent needs to accomplish.
- Goals are of the form goal(Goal, Time, Agent, Priority):
  - Goal – a conjunction of literals
  - Time – the time state; related to the internal clock???
  - Agent – the agent where the goal originated
  - Priority – the priority of the goal
- Any sub-agent can assert a goal; the Goal Manager sub-agent can manipulate goals.
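A straightforward Python mirror of the goal(Goal, Time, Agent, Priority) term, with illustrative field values:

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Goal:
        goal: Tuple[str, ...]   # conjunction of literals
        time: int               # time state by which to achieve it
        agent: str              # agent where the goal originated
        priority: int

    # hypothetical goal: be at home, not in the rain, by state 12
    g = Goal(("at(home)", "not raining"), time=12, agent="alpha", priority=2)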

Plans (MDLP)
- An action update is a set of update commands of the form {assert event ω}, where ω is an action name.
- Asserting such an event is conceptually the same as executing the action; it must be an event, since actions do not persist.
- Example – to achieve Goal1 at time T, a plan might be:
    U(T-3) = {assert event ω1; assert event ω2}
    U(T-1) = {assert event ω4; assert event ω5}
- The strength of LUPS allows conditional events (when).
- Example – to achieve Goal2 at time T, a plan might be:
    U(T-3) = {assert event ω3 when L; assert event ω6 when not L}
    U(T-2) = {assert event ω1; assert event ω2}
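The timed plans from the example might be encoded as follows (a hedged sketch; the keys "T-3", "T-1", "T-2" stand for the update states U(T-3), U(T-1), U(T-2), and ω is abbreviated to w):

    def event(action, when=()):
        """An 'assert event action when conditions' command as a dict."""
        return {"assert_event": action, "when": tuple(when)}

    # plan to achieve Goal1 at time T
    plan_for_goal1 = {
        "T-3": [event("w1"), event("w2")],
        "T-1": [event("w4"), event("w5")],
    }

    # plan to achieve Goal2 at time T, with conditional events
    plan_for_goal2 = {
        "T-3": [event("w3", when=("L",)), event("w6", when=("not L",))],
        "T-2": [event("w1"), event("w2")],
    }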

Plans (MDLP)
- Each plan for a goal (Goal) has preconditions (Conditions).
- The Planner sub-agent generates plans for goals and asserts them into the Common KB as plan(Goal, Plan, Conditions).
- The Scheduler sub-agent uses plans and reactions to produce the agent’s intentions.

Reactions (MDLP)
- A simple MDLP – its rules are just facts denoting actions ω, or negations of actions not ω.
- Contains a sequence of nodes for every sub-agent capable of reacting, forming a hierarchy of reactions.
- Sub-agents have a set of LUPS commands of the form:
    assert ω when Lk+1,…,Lm
- LUPS allows a form of action ‘blockage’ – preventing an action from being executed:
    assert not ω when Lk+1,…,Lm
  This denies any such assert made by a lower-ranked sub-agent.

Intentions (MDLP)
- The actions the agent has committed to, compiled by the Scheduler from plans and reactions.
- Form: intention(Action, Conditions, Time) – perform action Action at time Time if the Conditions are true.
- The Actuator sub-agent executes intentions.
- Previous example (c is the current time state):
    intention(ω3, L, c+1);      intention(ω6, not L, c+1);
    intention(ω1, -, c+3);      intention(ω2, -, c+3);
    intention(ω4, -, c+5);      intention(ω5, -, c+5);
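The intention records from the example, mirrored as Python data (c is the current time state, "-" becomes an empty condition tuple, and ω is abbreviated to w):

    from dataclasses import dataclass
    from typing import Tuple

    @dataclass(frozen=True)
    class Intention:
        action: str
        conditions: Tuple[str, ...]
        time: int

    c = 0  # current time state
    intentions = [
        Intention("w3", ("L",), c + 1), Intention("w6", ("not L",), c + 1),
        Intention("w1", (), c + 3),     Intention("w2", (), c + 3),
        Intention("w4", (), c + 5),     Intention("w5", (), c + 5),
    ]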

Sub-Agents

- Evaluate and manipulate the Common KB; some can interface with the environment and with other agents.
- Their different specialities provide modularity.
- Each has a LUPS program describing its behaviour, executed by a meta-interpreter; execution produces states – nodes of the Object KB.
- Private procedure calls are allowed – LUPS is extended so they can be called in the when statement of a command:
    assert X ← L1,…,Lk when Lk+1,…,Lm, ProcCall(Lm+1,…,Ln, X)
- Can read the Common KB, but not change it.
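A minimal sketch, assuming illustrative names, of how a meta-interpreter might evaluate a "when" part that mixes ordinary literals with a private procedure call:

    def holds(literal, state):
        """True if an ordinary literal is true in the current state."""
        return literal in state

    def when_satisfied(when_literals, state, proc_calls=()):
        """True if every ordinary literal holds and every procedure call succeeds."""
        return (all(holds(l, state) for l in when_literals)
                and all(call() for call in proc_calls))

    def sensor_proc():
        """Hypothetical private procedure of a sub-agent (e.g. poll a sensor)."""
        return True

    if when_satisfied(("awake",), state={"awake"}, proc_calls=(sensor_proc,)):
        print("assert the rule into the successor state")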

Sub-Agents Present
- Sensor
- Dialoguer
- Actuator
- Effector
- Reactor
- Planner
- Goal Manager
- Scheduler
- Learner
- Contradiction Remover
- Others

Sensor
- Gets input from the environment via the procedure call SensorProc(Rule).
- Can assert into the Object KB by:
    assert Rule when SensorProc(Rule)
- Can act as a filter – deciding what input to accept.

Dialoguer
- Similar to the Sensor, but gets inputs from other agents.
- Updates other agents’ nodes in the Object KB.
- Generates new goals, reply messages, etc., based on the received message.
- Example:
    assert goal(Goal,Time,Agent,-) when MsgFrom(Agent,Goal,Time,Rule), cooperative(Agent).
    assert … when MsgFrom(Agent,Goal,Time,Rule).
    assert … when goal(Goal,Time,Agent,-), Agent ≠ α, …

Actuator
- Executes actions on the environment.
- Each cycle (of the Internal Clock?) it extracts intentions and performs them.
- If successful, the action name is asserted in the Object KB.
- Form of the LUPS command(s):
    assert event ω when Current(Time), Cond, ActionProc(ω).
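One actuator cycle might look roughly like this (a sketch under assumed names; action_proc stands in for ActionProc, and success is reported back so the action name can later be asserted into the Object KB):

    from collections import namedtuple

    Intention = namedtuple("Intention", "action conditions time")

    def actuator_cycle(intentions, now, state, action_proc):
        """Execute the intentions due now whose conditions hold; return the
        names of the successful actions (to be asserted into the Object KB)."""
        executed = []
        for it in intentions:
            if it.time == now and all(c in state for c in it.conditions):
                if action_proc(it.action):        # stands in for ActionProc(w)
                    executed.append(it.action)
        return executed

    done = actuator_cycle([Intention("w1", (), 3)], now=3,
                          state=set(), action_proc=lambda a: True)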

Effector
- At each cycle, evaluates LUPS commands: the Capabilities and the Behaviour rules.
- These don’t belong exclusively to the Effector – the Planner, for example, can also access the Capabilities.
- Capabilities apply upon prior successful action execution; Behaviour rules don’t require it.

Reactor
- Has reactive rules that, if executed, produce an action to perform:
    assert event ω when L1,…,Lk
- Example:
    assert event … when danger
- Can also reactively block actions with:
    assert event not ω when L1,…,Lk
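A toy sketch of the reactive loop (illustrative names such as run_away): rules whose conditions hold fire action events, unless a higher-ranked sub-agent has blocked the action:

    def react(rules, state, blocked=frozenset()):
        """rules: list of (action, conditions); returns the action events to assert."""
        return [action for action, conds in rules
                if action not in blocked and all(c in state for c in conds)]

    # hypothetical reactive rule: run_away when danger
    events = react([("run_away", ("danger",))], state={"danger"})
    print(events)   # ['run_away'] unless a higher-ranked sub-agent blocked it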

Planner
- Can find plans by abduction over LUPS-specified scenarios.
- Uses the Object KB, Intentions, Capabilities, and Common Behaviour Rules to find plans.
- LUPS command for the procedure call AbductivePlan(Goal, Plan, Cond):
    assert plan(Goal, Plan, Cond) when AbductivePlan(Goal, Plan, Cond)
- Other planners can be used, interfaced with via LUPS commands.

Goal Manager
- Deals with conflicting goals asserted by other sub-agents, possibly originating from other agents.
- Works on the Goals structure: can delete goals, change priorities, etc.
- Example of two incompatible goals being handled:
    retract goal(G1,T1,A1,P1) when goal(G1,T1,A1,P1), goal(G2,T2,A2,P2), incomp(G1,G2), P1 < P2.
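The conflict-handling rule above, approximated in Python (illustrative goal names; goals are (goal, time, agent, priority) tuples and incompatible is a user-supplied predicate):

    def resolve_conflicts(goals, incompatible):
        """Drop the lower-priority goal of every incompatible pair.
        goals: list of (goal, time, agent, priority) tuples."""
        kept = list(goals)
        for g1 in goals:
            for g2 in goals:
                if (g1 is not g2 and incompatible(g1[0], g2[0])
                        and g1[3] < g2[3] and g1 in kept):
                    kept.remove(g1)       # retract the lower-priority goal
        return kept

    goals = [("open_window", 5, "alpha", 1), ("keep_heat", 5, "alpha", 3)]
    print(resolve_conflicts(goals, lambda a, b: {a, b} == {"open_window", "keep_heat"}))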

Scheduler
- Determines intentions based on the current state.
- Acts if there are pending reactions, or goals and plans.
- There may be more than one specialized scheduling procedure.
- Example:
    assert … when not SchedulePlans(Π).
    assert … when not ScheduleReactions(Π).
    assert … when CombineSchedule(Π).

Conclusion
- Logic Programming provides clear specification and correctness.
- Dynamic Logic Programming (DLP) provides a way to represent knowledge (possibly from different sources) that evolves over time, as a sequence of logic programs over different states.
- MDLP further expresses knowledge about the environment and other agents.
- LUPS specifies the transitions between states in (M)DLP.
- Together with strong concepts such as intentions, planning, etc., these form a solid agent architecture.

Ian’s Diagnosis
- Looks a little complicated, but sounds cool enough to want to give it a shot!