
1 Introduction Agent Programming. Birna van Riemsdijk and Koen Hindriks, Delft University of Technology, The Netherlands. Multi-Agent Systems 2013.

2 Outline
Previous lecture (last lecture on Prolog):
– "Input & Output"
– Negation as failure
– Search
Coming lectures:
– Agents that use Prolog for knowledge representation
This lecture:
– Agent introduction
– "Hello World" example in GOAL

3 Agents: Act in environments
(Figure: the agent chooses an action; percepts flow from the environment to the agent, actions from the agent back to the environment.)

4 Agent Capabilities
– Reactive: responds in a timely manner to change
– Proactive: (persistently) pursues multiple, explicit goals over time
– Social: agents need to interact and perform e.g. as a team (=> last lecture)

5 Agents: Act to achieve goals
(Figure: the agent now contains events, goals, and actions; events provide reactivity, goals provide proactivity.)

6 Agents: Represent environment
(Figure: the agent additionally maintains beliefs and plans, next to events, goals, and actions.)

7 Agent Oriented Programming
Develop programming languages where events, beliefs, goals, actions, plans, ... are first-class citizens in the language.

8 Language Elements
Key language elements of APLs:
– beliefs and goals to represent the environment
– events received from the environment (& internal events)
– actions to update beliefs, adopt goals, send messages, act in the environment
– plans, capabilities & modules to structure action
– rules to select actions/plans/modules/capabilities
– support for multi-agent systems
Inspired by the Belief-Desire-Intention (BDI) agent metaphor.

9 A Brief History of AOP
1990: AGENT-0 (Shoham)
1993: PLACA (Thomas; AGENT-0 extension with plans)
1996: AgentSpeak(L) (Rao; inspired by PRS)
1996: Golog (Reiter, Levesque, Lesperance)
1997: 3APL (Hindriks et al.)
1998: ConGolog (Giacomo, Levesque, Lesperance)
2000: JACK (Busetta, Howden, Ronnquist, Hodgson)
2000: GOAL (Hindriks et al.)
2000: CLAIM (Amal El Fallah Seghrouchni)
2002: Jason (Bordini, Hubner; implementation of AgentSpeak)
2003: Jadex (Braubach, Pokahr, Lamersdorf)
2008: 2APL (successor of 3APL)
This overview is far from complete!

10 A Brief History of AOP
AGENT-0: speech acts
PLACA: plans
AgentSpeak(L): events/intentions
Golog: action theories, logical specification
3APL: practical reasoning rules
JACK: capabilities, Java-based
GOAL: declarative goals
CLAIM: mobile agents (within agent community)
Jason: AgentSpeak + communication
Jadex: JADE + BDI
2APL: modules, PG-rules, ...

11 Outline
Some of the more actively developed APLs:
– 2APL (Utrecht, Netherlands)
– Agent Factory (Dublin, Ireland)
– GOAL (Delft, Netherlands)
– Jason (Porto Alegre, Brazil)
– Jadex (Hamburg, Germany)
– JACK (Melbourne, Australia)
– JIAC (Berlin, Germany)
References

12 2APL – Features
2APL is a rule-based language for programming BDI agents:
– actions: belief updates, send, adopt, drop, external actions
– beliefs: represent the agent's beliefs
– goals: represent what the agent wants
– plans: sequence, while, if-then
– PG-rules: goal handling rules
– PC-rules: event handling rules
– PR-rules: plan repair rules

13 2APL – Code Snippet
Beliefs: worker(w1), worker(w2), worker(w3)
Goals: findGold() and haveGold()
Plans = { send( w3, play(explorer) ); }
Rules = {
  (goal handling rule)
  G( findGold() ) <- B( -gold(_) && worker(A) && -assigned(_, A) ) |
    send( A, play(explorer) ); ModOwnBel( assigned(_, A) );,
  (event handling rules; E is an explicit operator for events)
  E( receive( A, gold(POS) ) ) | B( worker(A) ) -> { ModOwnBel( gold(POS) ); },
  E( receive( A, done(POS) ) ) | B( worker(A) ) -> { ModOwnBel( -assigned(POS, A), -gold(POS) ); },
  ...
}
Modules can be used to combine and structure rules.

14 JACK – Features
The JACK Agent Language is built on top of Java, extends it, and provides the following constructs:
– agents: define the overall behavior of the multi-agent system
– beliefset: represents an agent's beliefs
– view: allows queries to be performed on belief sets
– capability: reusable functional component made up of plans, events, belief sets and other capabilities
– plan: instructions the agent follows to try to achieve its goals and handle events
– event: occurrence to which the agent should respond

15 JACK – Agent Template
agent AgentType extends Agent {
    // Knowledge bases used by the agent are declared here.
    #private data BeliefType belief_name(arg_list);
    // Events handled, posted and sent by the agent are declared here.
    #handles event EventType;
    #posts event EventType reference;   // used to create internal events
    #sends event EventType reference;   // used to send messages to other agents
    // Plans used by the agent are declared here. Order is important.
    #uses plan PlanType;
    // Capabilities that the agent has are declared here.
    #has capability CapabilityType reference;
    // other Data Member and Method definitions
}

16 Jason – Features
– beliefs: weak and strong negation, to support both the closed-world assumption and open-world reasoning
– belief annotations: label the information source, e.g. self, percept
– events: internal, messages, percepts
– a library of "internal actions", e.g. send
– user-defined internal actions, programmed in Java
– automatic handling of plan failures
– annotations on plan labels, used to select a plan
– speech-act based inter-agent communication
– Java-based customization: (plan) selection functions, trust functions, perception, belief revision, agent communication

17 Jason – Plans
(Figure: a Jason plan consists of a triggering event, a test on beliefs (the context), and a plan body.)

18 How are these APLs related?
This is a comparison from a high-level, conceptual point of view, not taking into account practical aspects (IDE, available docs, speed, applications, etc.).
– Family of Prolog-based languages sharing the same basic concepts (beliefs, actions, plans, goals-to-do): AGENT-0 (1), PLACA (1), AgentSpeak(L) (1), Jason (2), Golog, 3APL (3).
– Main addition of GOAL: declarative goals; 2APL ≈ 3APL + GOAL.
– Java-based BDI languages: Agent Factory, JACK (commercial), Jadex, JIAC.
– Mobile agents: CLAIM, AgentScape.
– Multi-agent systems: all of these languages (except AGENT-0, PLACA, JACK) have versions implemented "on top of" JADE.
Notes: (1) mainly interesting from a historical point of view; (2) from a conceptual point of view, we identify AgentSpeak(L) and Jason; (3) without practical reasoning rules.

19 References
Websites
– 2APL: http://www.cs.uu.nl/2apl/
– Agent Factory: http://www.agentfactory.com
– GOAL: http://mmi.tudelft.nl/trac/goal
– JACK: http://www.agent-software.com.au/products/jack/
– Jadex: http://jadex.informatik.uni-hamburg.de/
– Jason: http://jason.sourceforge.net/
– JIAC: http://www.jiac.de/
Books
– Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2005. Multi-Agent Programming: Languages, Platforms and Applications. Presents 3APL, CLAIM, Jadex, Jason.
– Bordini, R.H.; Dastani, M.; Dix, J.; El Fallah Seghrouchni, A. (Eds.), 2009. Multi-Agent Programming: Languages, Tools and Applications. Presents a.o. Brahms, CArtAgO, GOAL, JIAC Agent Platform.

20 The GOAL Agent Programming Language

21 THE BLOCKS WORLD – The "Hello World" example of Agent Programming

22 The Blocks World
A classic AI planning problem.
– Objective: move the blocks of the initial state such that the result is the goal state.
– A block can be moved only if there is no other block on top of it.
– The positioning of blocks on the table is not relevant.

23 The Blocks World (Cont'd)
Key concepts:
– A block is in position if "it is in the right place"; otherwise it is misplaced.
– A constructive move puts a block in position.
– A self-deadlock is a misplaced block that is above a block it should be above.

24 MENTAL STATES

25 Representing the Blocks World
Prolog is the knowledge representation language used in GOAL.
Basic predicate: on(X,Y).
Defined predicates (EXERCISE):
block(X) :- on(X, _).
clear(X) :- block(X), not(on(Y,X)).
clear(table).
tower([X]) :- on(X,table).
tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
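As an illustration (not on the original slide), these clauses can be loaded directly into a Prolog system such as SWI-Prolog; the two-block state below is made up only to show how the recursive tower/1 definition and clear/1 unfold:

% Tiny example state: a on b, b on the table.
on(a, b).
on(b, table).

block(X) :- on(X, _).
clear(X) :- block(X), \+ on(_, X).   % \+ is negation as failure, the slide's not(on(Y,X))
clear(table).
tower([X]) :- on(X, table).
tower([X, Y | T]) :- on(X, Y), tower([Y | T]).

% ?- tower([a, b]).   % true: second clause gives on(a,b), then tower([b]) via on(b,table)
% ?- clear(a).        % true: nothing is on a
% ?- clear(b).        % false: a is on b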

26 Representing the Initial State
Using the on(X,Y) predicate we can represent the initial state as the initial belief base of the agent:
beliefs{
  on(a,b). on(b,c). on(c,table).
  on(d,e). on(e,table).
  on(f,g). on(g,table).
}

27 Representing the Blocks World
What about the rules we defined before? Clauses that do not change are added to the knowledge base, the static knowledge base of the agent:
knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}

28 Representing the Goal State
Using the on(X,Y) predicate we can represent the goal state as the initial goal base of the agent:
goals{
  on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
}

29 One or Many Goals
In the goal base, using the comma or the period as separator makes a difference!
Three goals:   goals{ on(a,table). on(b,a). on(c,b). }
A single goal: goals{ on(a,table), on(b,a), on(c,b). }
The first goal base has three goals, the second has a single goal. Moving c on top of b (3rd goal), then c back to the table, then a to the table (1st goal), and finally b on top of a (2nd goal) achieves all three goals of the first goal base, but never the single goal of the second. The reason is that the three separate goals do not require block c to be on b, b to be on a, and a to be on the table at the same time.

30 Mental State of GOAL Agent
The knowledge, beliefs, and goals sections together constitute the specification of the mental state of a GOAL agent. The initial mental state of our agent:
knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
beliefs{
  on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table).
}
goals{
  on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
}

31 Why a Separate Knowledge Base?
Concepts defined in the knowledge base can be used in combination with both the belief base and the goal base.
Example:
– Since the agent believes on(e,table), on(d,e), we can infer that the agent believes tower([d,e]).
– If the agent wants on(a,table), on(b,a), we can infer that the agent wants tower([b,a]).
The knowledge base is introduced to avoid duplicating clauses in the belief and goal base.

32 Using the Belief & Goal Base
Selecting actions using beliefs and goals. Basic idea:
– If I believe B, then do action A (reactivity).
– If I believe B and have goal G, then do action A (proactivity).

33 Inspecting the Belief & Goal Base
– Operator bel(φ) to inspect the belief base.
– Operator goal(φ) to inspect the goal base.
– Here φ is a Prolog conjunction of literals.
Examples:
– bel(clear(a), not(on(a,c))).
– goal(tower([a,b])).

34 Inspecting the Belief Base
bel(φ) succeeds if φ follows from the belief base in combination with the knowledge base. The condition φ is evaluated as a Prolog query.
Example: bel(clear(a), not(on(a,c))) succeeds for
knowledge{
  block(X) :- on(X, _).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
beliefs{
  on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table).
}

35 Inspecting the Belief Base
EXERCISE: Which of the following succeed (knowledge and belief base as on the previous slide)?
1. bel(on(b,c), not(on(a,c))).
2. bel(on(X,table), on(Y,X), not(clear(Y))).   [X=c; Y=b]
3. bel(tower([X,b,d])).

36 Inspecting the Goal Base
Use the goal(...) operator to inspect the goal base: goal(φ) succeeds if φ follows from one of the goals in the goal base, in combination with the knowledge base.
Example (knowledge base as before):
goals{
  on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
  on(c,table).
}
– goal(clear(a)) succeeds,
– but goal(clear(a), clear(c)) does not: no single goal entails both.
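To make the "one of the goals" reading concrete, here is a small Prolog sketch of this semantics (an illustration, not GOAL's implementation); the goal base is represented as a list of goals, each goal a list of on/2 atoms, and the names goalbase/1, holds/2 and follows/2 are invented for this sketch:

:- use_module(library(lists)).

% The goal base of this slide: two goals.
goalbase([[on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table)],
          [on(c,table)]]).

% Derive an atom within a single goal, using the block/clear knowledge.
holds(on(X, Y), G)  :- member(on(X, Y), G).
holds(clear(table), _).
holds(clear(X), G)  :- member(on(X, _), G), \+ member(on(_, X), G).

% A conjunction follows from a goal if every conjunct holds in that same goal.
follows([], _).
follows([F | Fs], G) :- holds(F, G), follows(Fs, G).

% goal(Phi): Phi follows from SOME single goal in the goal base.
goal(Phi) :- goalbase(GB), member(G, GB), follows(Phi, G).

% ?- goal([clear(a)]).            % succeeds: nothing is on a in the first goal
% ?- goal([clear(a), clear(c)]).  % fails: no single goal entails both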

37 Inspecting the Goal Base
EXERCISE: Which of the following succeed (knowledge base as before)?
1. goal(on(b,table), not(on(d,c))).
2. goal(on(X,table), on(Y,X), clear(Y)).
3. goal(tower([d,X])).
goals{
  on(a,e), on(b,table), on(c,table), on(d,c), on(e,b), on(f,d), on(g,table).
}

38 Negation and Beliefs
Is not(bel(on(a,c))) = bel(not(on(a,c)))?
Answer: Yes.
– Prolog implements negation as failure: if φ cannot be derived, then not(φ) can be derived.
– We always have: not(bel(φ)) = bel(not(φ)).
(Knowledge and belief base as before.)
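To see this concretely (an illustration, not slide material), load the belief base of this slide into Prolog; the knowledge clauses are not needed here:

% Belief base facts from the slide.
on(a, b).  on(b, c).  on(c, table).
on(d, e).  on(e, table).
on(f, g).  on(g, table).

% on(a,c) is not derivable, so negation as failure makes its negation succeed:
% ?- on(a, c).      % false  -> not(bel(on(a,c))) holds
% ?- \+ on(a, c).   % true   -> bel(not(on(a,c))) holds as well
%                   % (\+ is Prolog's negation as failure, written not(...) on the slides)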

39 Negation and Goals
EXERCISE: Is not(goal(φ)) = goal(not(φ))?
Answer: No. With the goal base below we have, for example, both goal(on(a,b)) and goal(not(on(a,b))):
goals{
  on(a,b), on(b,table).
  on(a,c), on(c,table).
}
(Knowledge base as before.)

40 Combining Beliefs and Goals
It is useful to combine the bel(...) and goal(...) operators. Consider the following beliefs and goals:
beliefs{ on(a,b). on(b,c). on(c,table). }
goals{ on(a,b), on(b,table). }
We have both bel(on(a,b)) and goal(on(a,b)). Why have something as a goal that has already been achieved?

41 Combining Beliefs and Goals
Achievement goals:
– a-goal(φ) = goal(φ), not(bel(φ))
– The agent only has an achievement goal if it does not believe the goal has been reached already.
Goal achieved:
– goal-a(φ) = goal(φ), bel(φ)
– A (sub)goal φ has been achieved if the agent believes φ.
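The two derived operators can be mimicked in a few lines of Prolog (a sketch of the definitions above, not GOAL's actual implementation), using the beliefs and the single goal of the previous slide; bel/1, goalbase/1, a_goal/1 and goal_a/1 are names invented for this sketch, with underscores replacing the hyphens:

:- use_module(library(lists)).

% Mental state of the previous slide.
bel(on(a, b)).  bel(on(b, c)).  bel(on(c, table)).
goalbase([on(a, b), on(b, table)]).

goal(F) :- goalbase(G), member(F, G).

a_goal(F) :- goal(F), \+ bel(F).   % achievement goal: wanted but not yet believed
goal_a(F) :- goal(F), bel(F).      % achieved (sub)goal: wanted and already believed

% ?- a_goal(on(b, table)).   % true: still to be achieved
% ?- a_goal(on(a, b)).       % false; instead ?- goal_a(on(a, b)). succeeds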

42 Expressing BW Concepts
It is possible to express the key Blocks World concepts by means of the basic mental state operators.
EXERCISE: Define "block X is misplaced".
Solution: goal(tower([X|T])), not(bel(tower([X|T]))).
In other words, saying that a block is misplaced is saying that you have an achievement goal: a-goal(tower([X|T])).

43 ACTION SPECIFICATIONS – Changing Blocks World Configurations

44 Actions Change the Environment...
(Figure: the action move(a,d) transforms one blocks configuration into another.)

45 ... and Require Updating Mental States
To ensure adequate beliefs after performing an action, the belief base needs to be updated (and possibly the goal base):
– Add effects to the belief base: insert on(a,d) after move(a,d).
– Delete old beliefs: delete on(a,b) after move(a,d).

46 ... and Require Updating Mental States
If a goal is believed to be completely achieved, it is removed from the goal base: it is not rational to have a goal you believe to be achieved. This default update implements a blind commitment strategy.
Before move(a,b):
beliefs{ on(a,table), on(b,table). }
goals{ on(a,b), on(b,table). }
After move(a,b):
beliefs{ on(a,b), on(b,table). }
goals{ }
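One way to picture this default update (a sketch under the simplifying assumption that a goal is just a list of atoms; achieved/2 and update_goals/3 are invented names, not GOAL's implementation):

:- use_module(library(lists)).
:- use_module(library(apply)).

% A goal (a list of atoms) is achieved when every atom is believed.
achieved(Beliefs, Goal) :- subtract(Goal, Beliefs, []).

% Blind commitment: keep exactly the goals that are not yet believed achieved.
update_goals(Beliefs, Goals, Remaining) :- exclude(achieved(Beliefs), Goals, Remaining).

% ?- update_goals([on(a,b), on(b,table)], [[on(a,b), on(b,table)]], R).
% R = [].   % the goal base above becomes empty after move(a,b)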

47 Action Specifications
Actions in GOAL have preconditions and postconditions. Executing an action in GOAL means:
– Preconditions are conditions that need to be true: check the preconditions on the belief base.
– Postconditions (effects) are add/delete lists (STRIPS): add the positive literals of the postcondition, delete the negative literals.
A STRIPS-style specification:
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}

48 Action Specifications
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z), not( on(X,Y) ) }
  post { not(on(X,Z)), on(X,Y) }
}
Example: move(a,b)
– Check: clear(a), clear(b), on(a,Z), not( on(a,b) )
– Remove: on(a,Z)
– Add: on(a,b)
Note: first remove, then add.
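The check/remove/add reading can be prototyped in Prolog over an explicit belief list (an illustration of the STRIPS semantics, not how GOAL executes actions; clear/2 and move/4 are names invented for this sketch):

:- use_module(library(lists)).

% clear/2 relative to a belief list of on/2 facts; the table is always clear.
clear(_, table).
clear(Beliefs, X) :- member(on(X, _), Beliefs), \+ member(on(_, X), Beliefs).

% move/4: check the precondition, then delete on(X,Z) and add on(X,Y).
move(Beliefs, X, Y, NewBeliefs) :-
    clear(Beliefs, X), clear(Beliefs, Y),        % precondition
    \+ member(on(X, Y), Beliefs),                % not( on(X,Y) )
    select(on(X, Z), Beliefs, Rest),             % remove: on(X,Z)
    NewBeliefs = [on(X, Y) | Rest].              % add: on(X,Y)

% ?- move([on(a, table), on(b, table)], a, b, New).
% New = [on(a, b), on(b, table)].   % matching the update shown on the next slide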

49 Action Specifications
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z) }
  post { not(on(X,Z)), on(X,Y) }
}
Example: move(a,b)
Before: beliefs{ on(a,table), on(b,table). }
After:  beliefs{ on(b,table). on(a,b). }

50 Action Specifications
EXERCISE:
move(X,Y){
  pre { clear(X), clear(Y), on(X,Z) }
  post { not(on(X,Z)), on(X,Y) }
}
knowledge{
  block(a). block(b). block(c). block(d). block(e). block(f). block(g). block(h). block(i).
  clear(X) :- block(X), not(on(Y,X)).
  clear(table).
  tower([X]) :- on(X,table).
  tower([X,Y|T]) :- on(X,Y), tower([Y|T]).
}
beliefs{
  on(a,b). on(b,c). on(c,table). on(d,e). on(e,table). on(f,g). on(g,table).
}
1. Is it possible to perform move(a,b)? No: a is already on b, so clear(b) fails (and with the full precondition that includes not(on(X,Y)), not(on(a,b)) fails as well).
2. Is it possible to perform move(a,d)? Yes.

51 ACTION RULES – Selecting actions to perform

52 Agent-Oriented Programming
How do humans choose and/or explain actions? Examples:
– I believe it rains; so, I will take an umbrella with me.
– I go to the video store because I want to rent I, Robot.
– I don't believe buses run today, so I take the train.
Agent-oriented programming uses these intuitive common-sense concepts: beliefs + goals => action.
See Chapter 1 of the Programming Guide.

53 Selecting Actions: Action Rules
Action rules are used to define a strategy for action selection. A strategy for the Blocks World:
– If a constructive move can be made, make it.
– If a block is misplaced, move it to the table.
program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
What happens:
– Check the condition, e.g. can a-goal(tower([X|T])) be derived given the current mental state of the agent?
– If yes, then (potentially) select move(X,table).

54 Order of Action Rules
By default, action rules are evaluated in linear order: the first rule that fires is executed.
program{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}
The default order can be changed to random: an arbitrary rule that is able to fire may be selected.
program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

55 Example Program: Action Rules
An agent program may allow for multiple action choices; with order=random an arbitrary enabled action is chosen.
(Figure: in the example state, block d can be moved to the table, among other enabled moves.)
program[order=random]{
  if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
  if a-goal(tower([X|T])) then move(X,table).
}

56 The Sussman Anomaly (1/5)
Non-interleaved planners typically separate the main goal on(A,B), on(B,C) into 2 sub-goals: on(A,B) and on(B,C). Planning for these two sub-goals separately and combining the plans found does not work in this case, however.
(Figure: initial state: c on a, with a and b on the table; goal state: a on b on c.)

57 The Sussman Anomaly (2/5)
(Initial state: c on a, with a and b on the table; goal state: a on b on c.)
Initially, all blocks are misplaced. One constructive move can be made: c to the table. Note: move(b,c) is not enabled. The only action enabled is c to the table (2x).
Check the conditions of the action rules:
if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
if a-goal(tower([X|T])) then move(X,table).
We have bel(tower([c,a])) and a-goal(tower([c])).

58 The Sussman Anomaly (3/5)
(Current state: a, b, and c on the table; goal state: a on b on c.)
The only constructive move enabled is: move b onto c.
Check the conditions of the action rules:
if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
if a-goal(tower([X|T])) then move(X,table).
Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([c])).

59 The Sussman Anomaly (4/5)
(Current state: b on c on the table, a on the table; goal state: a on b on c.)
Again, only one constructive move is enabled: move a onto b.
Check the conditions of the action rules:
if bel(tower([Y|T])), a-goal(tower([X,Y|T])) then move(X,Y).
if a-goal(tower([X|T])) then move(X,table).
Note that we have a-goal(on(a,b), on(b,c), on(c,table)), but not a-goal(tower([b,c])).

60 The Sussman Anomaly (5/5)
(Current state = goal state: a on b on c.)
Upon completely achieving a goal, that goal is automatically removed; the idea is that no resources should be wasted on a goal that has already been achieved. In our case, goal(on(a,b), on(b,c), on(c,table)) has been achieved and is dropped. The agent has no other goals and is done.

61 Organisation
Read Programming Guide Ch. 1-3 (+ User Manual).
Tutorial:
– Download GOAL: see http://ii.tudelft.nl/trac/goal (v4537)
– Practice exercises from the Programming Guide
– BW4T assignments 3 and 4 available
Next lecture:
– Sensing, perception, environments
– Other types of rules & macros
– Agent architectures

