
1 Software Agent - BDI architecture -

2 Outline: BDI Agent, AgentSpeak(L), Summary

3 Definition of BDI Architecture
The Belief-Desire-Intention (BDI) architectures are examples of practical reasoning: the process of deciding, moment by moment, which action to perform in the furtherance of our goals.
This reasoning works on two fronts: agents not only strive to achieve their goals but must also take time to reflect on whether their goals are still valid, and possibly revise them. The BDI architecture is thus an example of balancing reactive behavior with goal-directed behavior.
BDI agents use two important processes:
– Deliberation: deciding what goals we want to achieve
– Means-ends reasoning: deciding how we are going to achieve them
To differentiate between the three concepts:
– I believe that if I study hard I will pass this course
– I desire to pass this course
– I intend to study hard

4 [Diagram: BELIEFS and DESIRES feed into a SELECTION FUNCTION, which produces an INTENTION]

5 Examples of BDI Agents
Hotel manager
– Duties: supervising rooms (checking occupation, making reservations), giving cleaning orders, giving fixing orders, submitting needs
– Model:
Beliefs – room status and schedule
Desires – provide reservations to clients
Intentions – order to clean / order to fix
Hotel maid
– Duties: cyclic room cleaning (changing bedclothes, tablecloths and towels, vacuuming, bathroom cleaning, cabinet keeping), room cleaning on the manager's demand, submitting defects, taking new supplies (bedclothes, towels, etc.) from the warehouse manager and returning used ones
– Model:
Beliefs – room supplies and device status
Desires – make room clean / detect defects
Intentions – clean room / submit defects / exchange used supplies with warehouse

6 Role of Intentions
Intentions drive means-ends reasoning:
– Having formed an intention, I will attempt to achieve it
– This involves deciding how to achieve it
– If one action fails, I will try another approach
Intentions constrain future deliberation:
– I will not entertain options that are inconsistent with my intentions
Intentions persist:
– I will not give up my intentions without good reason; typically they persist until I believe they have been achieved, I believe they can never be achieved, or the reason I had the intention is no longer present
Intentions influence the beliefs on which future practical reasoning is based:
– If I adopt an intention, I will plan for the future on the assumption that I will achieve it

7 Balancing Reactivity and Goal-directed Behavior
How often should an agent reconsider its intentions?
– Bold agents: an agent that rarely stops to reconsider will continue attempting to achieve intentions even after they are no longer possible or it has no reason for achieving them.
– Cautious agents: an agent that constantly reconsiders will spend insufficient time actually working to achieve its intentions, and hence may never achieve them.
The main factor that affects how these agents perform in different environments is the rate of world change, r:
– If r is low, bold agents do well compared with cautious ones: while cautious agents waste time reconsidering their intentions, bold agents are busy working towards and achieving their goals.
– If r is high, cautious agents outperform bold agents: cautious agents can recognize when their intentions are doomed, and can also take advantage of serendipitous situations and new opportunities.
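The trade-off above can be illustrated with a toy simulation (a hypothetical setup, not the original bold/cautious experiments): achieving an intention takes several steps of work, each reconsideration costs a step, and work invested in an intention that the world has since invalidated is wasted. The run_agent function, its parameters and its cost model are all illustrative assumptions.

```python
import random

def run_agent(reconsider_every, world_change_rate, steps=1000,
              plan_len=5, seed=0):
    # Toy model: carrying out an intention takes plan_len work steps,
    # deliberation costs one step, and progress on an intention the
    # world has invalidated is thrown away at the next reconsideration.
    rng = random.Random(seed)
    achieved = 0      # intentions successfully carried out
    progress = 0      # work invested in the current intention
    stale = False     # has the world changed under this intention?
    t = 0
    while t < steps:
        if t % reconsider_every == 0:
            t += 1                          # deliberation costs a step
            if stale:
                progress, stale = 0, False  # drop the doomed intention
            if t >= steps:
                break
        if rng.random() < world_change_rate:
            stale = True                    # the world moved on
        if not stale:
            progress += 1
            if progress == plan_len:
                achieved += 1
                progress = 0
        t += 1
    return achieved

# Low r: the bold agent (rarely reconsiders) gets more done;
# high r: the cautious agent (reconsiders constantly) wins.
bold_slow = run_agent(reconsider_every=50, world_change_rate=0.001)
cautious_slow = run_agent(reconsider_every=2, world_change_rate=0.001)
bold_fast = run_agent(reconsider_every=50, world_change_rate=0.2)
cautious_fast = run_agent(reconsider_every=2, world_change_rate=0.2)
```

With these (arbitrary) parameters the comparison comes out as the slide predicts: bold beats cautious when r is low, and the other way around when r is high.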

8 Schematic of BDI Architecture
– A belief revision function (brf)
– A set of current beliefs
– An option generation function
– A set of current desires (options)
– A filter function
– A set of current intentions
– An action selection function

9 Components of a BDI Agent (1)
A set of current beliefs
– These represent the information the agent has about its current environment
A belief revision function (brf)
– Takes the perceptual input and the agent's current beliefs and, using these, determines a new set of beliefs
An option generation function
– Determines the options available to the agent, using its current beliefs about the environment and its current intentions
A set of current desires (options)
– These represent possible courses of action available to the agent

10 Components of a BDI Agent (2)
A filter function
– Represents the agent's deliberation process and determines the agent's intentions on the basis of its current beliefs, desires and intentions
A set of current intentions
– These represent the agent's current focus: those states of affairs that it has committed to trying to bring about
An action selection function
– Determines an action to perform on the basis of the current intentions

11 Formalization of a BDI Agent
Let Bel, Des, Int be the sets of all possible beliefs, desires and intentions.
– We will not consider the contents of these sets – that depends on the purpose of the agent – but often they are logical formulas
There must be a notion of consistency defined on these sets, e.g.:
– Is an intention to achieve x consistent with the belief y?
The internal state of a BDI agent at a given instant is a triple (B, D, I), where B ⊆ Bel, D ⊆ Des, I ⊆ Int.
The belief revision function is a mapping brf : ℘(Bel) × Per → ℘(Bel) (with Per the set of percepts):
– On the basis of the current percept and the current beliefs, it determines a new set of beliefs
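A minimal concrete rendering of the state triple and of brf, with beliefs as plain strings. The revision rule used here (a percept "not p" retracts p, anything else is added) is an illustrative assumption, not part of the slides:

```python
from dataclasses import dataclass

@dataclass
class BDIState:
    B: set  # current beliefs    (B is a subset of Bel)
    D: set  # current desires    (D is a subset of Des)
    I: set  # current intentions (I is a subset of Int)

def brf(beliefs, percept):
    # brf maps (current beliefs, percept) to a new belief set.
    # Illustrative rule only: "not p" retracts p, anything else is
    # added; a real agent would also do consistency maintenance.
    new = set(beliefs)
    for p in percept:
        if p.startswith("not "):
            new.discard(p[4:])
        else:
            new.add(p)
    return new

state = BDIState(B={"sunny"}, D=set(), I=set())
state.B = brf(state.B, {"not sunny", "raining"})   # beliefs revised
```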

12 Functions
The option generation function is a mapping options : ℘(Bel) × ℘(Int) → ℘(Des):
– It is responsible for the agent's means-ends reasoning: how to achieve its intentions
– Some options feed back into it, recursively elaborating a hierarchical plan structure until executable actions are reached
The agent's deliberation process (filter) is a mapping filter : ℘(Bel) × ℘(Des) × ℘(Int) → ℘(Int):
– Updates the agent's intentions
– Drops impossible or non-beneficial intentions
– Retains those not yet achieved and still expected to be of benefit
– Finally, it adopts new intentions, either to achieve existing intentions or to exploit new opportunities
NB: intentions must come from somewhere: filter(B, D, I) ⊆ I ∪ D.

13 Action Selection Function
The execute function simply returns any executable intention: execute : ℘(Int) → A.
The whole action function of a BDI agent is then simply the composition of the four functions above: revise beliefs, generate options, filter, and execute.
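Put together, the four functions give the agent's top-level control loop. The structure below follows the last few slides; the concrete bodies of options, filter_ and execute, and the plans table, are placeholder assumptions for illustration:

```python
def brf(B, percept):
    return B | {percept}          # belief revision (naive: just add)

def options(B, I):
    # means-ends reasoning: derive candidate desires from beliefs
    return {d for b in B for d in plans.get(b, ())}

def filter_(B, D, I):
    # deliberation: here, simply commit to every generated option
    return I | D

def execute(I):
    # return any executable intention (None if there is nothing to do)
    return next(iter(I), None)

plans = {"hungry": ("eat",)}      # hypothetical belief -> options table

def action(B, I, percept):
    # one deliberation cycle: revise, generate options, filter, act
    B = brf(B, percept)
    D = options(B, I)
    I = filter_(B, D, I)
    return execute(I), B, I

act, B, I = action(set(), set(), "hungry")
```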

14 Passing the Course (Example)
A student agent perceives the following beliefs:
The agent has an initial intention to pass the course:
The agent's desires are freshly generated each cycle (they do not persist). The option generation function leads to desires to pass the course and its consequence:

15 Generating Intentions
The filter function leads to some new intentions being added:
One or more of these will then be executed before the agent's deliberation cycle recommences.

16 Obtaining New Beliefs
Suppose the agent perceives new information which leads to his beliefs being revised:

17 Revising Desires and Intentions
The agent recomputes his current desires:
and intentions:
The agent drops his original intention to work hard (and its consequences) and adopts a new one: to cheat.

18 Adding More Beliefs
Subsequently, the agent perceives that if caught cheating, he will no longer pass the course. What's more, he is certain to be caught.
Because the new beliefs lead to an inconsistency, the agent has had to drop his belief in

19 Revising Desires and Intentions
The agent recomputes his desires and intentions.
Because it is no longer consistent to cheat (even though it may be preferable to working hard), the agent drops that intention and re-adopts workHard (and its consequences).

20 AgentSpeak(L)
A model that exhibits a one-to-one correspondence between its model theory, proof theory and abstract interpreter:
– An attempt to bridge the gap between theory and practice
– A natural extension of logic programming to the BDI agent architecture
– Provides an elegant abstract framework for programming BDI agents
– Based on a restricted first-order language with events and actions
– The behavior of the agent (i.e., its interaction with the environment) is dictated by the programs written in AgentSpeak(L)

21 Basic Notions (1)
A set of base beliefs: facts in the logic programming sense.
A set of plans: context-sensitive, event-invoked recipes that allow hierarchical decomposition of goals as well as the execution of actions with the purpose of accomplishing a goal.
Belief atom
– A first-order predicate in the usual notation
– Belief atoms or their negations are termed belief literals
Goal: a state of the system which the agent wants to achieve
– Achievement goals: predicates prefixed with the operator '!'; they state that the agent wants to achieve a state of the world where the associated predicate is true; in practice, these initiate the execution of subplans
– Test goals: predicates prefixed with the operator '?'; they return a unification for the associated predicate with one of the agent's beliefs, and fail if no unification is found

22 Basic Notions (2)
Triggering event
– Defines which events may initiate the execution of a plan
– An event can be: internal, when a subgoal needs to be achieved; or external, when generated from belief updates as a result of perceiving the environment
– There are two types of triggering events: those related to the addition ('+') and those related to the deletion ('-') of attitudes (beliefs or goals)
Plans refer to the basic actions that an agent is able to perform on its environment:
p ::= te : ct <- h
where:
te – the triggering event (denoting the purpose of the plan)
ct – a conjunction of belief literals representing a context; the context must be a logical consequence of the agent's current beliefs for the plan to be applicable
h – a sequence of basic actions or (sub)goals that the agent has to achieve (or test) when the plan, if applicable, is chosen for execution
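The p ::= te : ct <- h structure can be encoded directly. In this sketch the Plan class, equality-based event matching and membership-based context checking are simplifying assumptions; real AgentSpeak(L) uses first-order unification and logical consequence:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    te: str    # triggering event
    ct: list   # context: conjunction of belief literals
    h: list    # body: sequence of basic actions / (sub)goals

# The concert plan from the following slide, as a Plan value
# (terms kept as uninterpreted strings):
p = Plan(te="+concert(A,V)",
         ct=["likes(A)"],
         h=["!book_tickets(A,V)"])

def relevant(plan, event):
    # relevant: the triggering event matches
    # (string equality stands in for unification here)
    return plan.te == event

def applicable(plan, beliefs):
    # applicable: the context follows from the current beliefs
    # (set membership stands in for logical consequence)
    return all(lit in beliefs for lit in plan.ct)
```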

23
+concert(A,V) : likes(A) <- !book_tickets(A,V).
+!book_tickets(A,V) : ¬busy(phone) <- call(V); …; !choose_seats(A,V).
Here +concert(A,V) is the triggering event, likes(A) is the context, !book_tickets(A,V) is an achievement goal being added, and call(V) is a basic action.

24 Basic Notions (3)
Intentions
– Plans the agent has chosen for execution
– Intentions are executed one step at a time
– A step can: query or change the beliefs; perform actions on the external world; suspend the execution until a certain condition is met; or submit new goals
– The operations performed by a step may generate new events, which, in turn, may start new intentions
– An intention succeeds when all its steps have been completed; it fails when certain conditions are not met or when the actions being performed report errors

25 Syntax
ag ::= bs ps
bs ::= at1. … atn.           (n ≥ 0)
at ::= P(t1, …, tn)          (n ≥ 0)
ps ::= p1 … pn               (n ≥ 1)
p  ::= te : ct <- h.
te ::= +at | -at | +g | -g
ct ::= true | l1 & … & ln    (n ≥ 1)
h  ::= true | f1 ; … ; fn    (n ≥ 1)
l  ::= at | not (at)
f  ::= A(t1, …, tn) | g | u  (n ≥ 0)
g  ::= !at | ?at
u  ::= +at | -at

26 Informal Semantics (1)
The interpreter for AgentSpeak(L) manages:
– A set of events
– A set of intentions
– Three selection functions
Events, which may start off the execution of plans that have relevant triggering events, can be:
– External, when originating from perception of the agent's environment (i.e., addition and deletion of beliefs based on perception are external events); external events create new intentions
– Internal, when generated from the agent's own execution of a plan (e.g., a subgoal in a plan generates an event of type "addition of achievement goal")
Intentions are particular courses of action to which an agent has committed in order to handle certain events. Each intention is a stack of partially instantiated plans.

27 Informal Semantics (2)
SE (the event selection function)
– Selects a single event from the set of events
SO (the option selection function)
– Selects an "option" (i.e., an applicable plan) from a set of applicable plans
SI (the intention selection function)
– Selects one particular intention from the set of intentions
The selection functions are agent-specific, in the sense that they should make selections based on the agent's characteristics.
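The event-handling half of an interpreter cycle, using the three selection functions, might be sketched as follows. Plans are (te, ct, h) triples; taking the first candidate in SE/SO/SI and using string equality instead of unification are simplifying assumptions:

```python
def SE(events):      return events[0]      # event selection
def SO(applicable):  return applicable[0]  # option (plan) selection
def SI(intentions):  return intentions[0]  # intention selection

def handle_event(events, plan_library, beliefs, intentions):
    event, is_external = SE(events)
    events.remove((event, is_external))
    # relevant: triggering event matches; applicable: context holds
    relevant = [p for p in plan_library if p[0] == event]
    applicable = [p for p in relevant
                  if all(lit in beliefs for lit in p[1])]
    if not applicable:
        return None
    plan = SO(applicable)
    if is_external:
        intentions.append([plan])       # external events create intentions
    else:
        SI(intentions).append(plan)     # internal events extend one
    return plan

# Hypothetical one-plan library, in the spirit of the slides' example:
library = [("+invite(bob, alice)", ["colleague(bob)"],
            ["!call_forward_busy(alice, bob, denise)"])]
events = [("+invite(bob, alice)", True)]
intentions = []
chosen = handle_event(events, library, {"colleague(bob)"}, intentions)
```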

28 Informal Semantics: Overview (1) [diagram]

29 Informal Semantics: Overview (2) [diagram]

30 Example
Alice's call-handling requirements:
– During lunch time, forward all calls to Carla.
– When I am busy, incoming calls from colleagues should be forwarded to Denise.

31 Beliefs
user(alice). user(bob). user(carla). user(denise).
~status(alice, idle). status(bob, idle).
colleague(bob). lunch_time("11:30").

32 Plans
"During lunch time, forward all calls to Carla":
+invite(X, alice) : lunch_time(t) <- !call_forward(alice, X, carla). (p1)
"When I am busy, incoming calls from colleagues should be forwarded to Denise":
+invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise). (p2)
+invite(X, Y) : true <- connect(X, Y). (p3)

33 Plan Example
+invite(X, alice) : lunch_time(t) <- !call_forward(alice, X, carla). (p1)
+invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise). (p2)
+invite(X, Y) : true <- connect(X, Y). (p3)
+!call_forward(X, From, To) : invite(From, X) <- +invite(From, To); -invite(From, X). (p4)
+!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y). (p5)


35 Execution 1
A new event is sensed from the environment: +invite(bob, alice) (there is a call for Alice from Bob). There are three relevant plans for this event (p1, p2 and p3): the event matches the triggering event of those three plans.
Relevant plans | Unifier
p1: +invite(X, alice) : lunch_time(t) <- !call_forward(alice, X, carla). | {X=bob}
p2: +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise). | {X=bob}
p3: +invite(X, Y) : true <- connect(X, Y). | {X=bob, Y=alice}

36 Execution 2
Only the context of plan p2 is satisfied (colleague(bob) holds), so p2 is the only applicable plan.
A new intention based on this plan is created in the set of intentions, because the event was external, generated from the perception of the environment.
The plan starts to be executed. It adds a new event, this time an internal one: +!call_forward_busy(alice, bob, denise).
Intention ID | Intention stack | Unifier
1 | +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise) | {X=bob}

37 Execution 3
A plan relevant to this new event is found (p5). Its context condition is satisfied by the current beliefs, so it becomes an applicable plan and is pushed on top of intention 1 (since it was generated by an internal event).
Relevant plan | Unifier
p5: +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y). | {From=bob, Y=alice, To=denise}
Intention ID | Intention stack | Unifier
1 | +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y) | {From=bob, Y=alice, To=denise}
  | +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise) | {X=bob}

38 Execution 4
A new internal event is created: +invite(bob, denise). Only plan p3 is applicable in this case (p1 and p2 require a call to alice, and p3's context is true). The plan is pushed on top of the existing intention.
Intention ID | Intention stack | Unifier
1 | +invite(X, Y) : true <- connect(X, Y) | {Y=denise, X=bob}
  | +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- +invite(From, To); -invite(From, Y) | {From=bob, Y=alice, To=denise}
  | +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise) | {X=bob}

39 Execution 5
On top of the intention is a plan whose body contains an action. The action connect(bob, denise) is executed and removed from the intention.
When all formulas in the body of a plan have been removed (i.e., executed), the whole plan is removed from the intention, and so is the achievement goal that generated it.
The only thing that remains to be done is -invite(bob, alice) (this belief is removed from the belief base).
This ends a cycle of execution, and the process starts all over again, checking the state of the environment and reacting to events.
Intention ID | Intention stack | Unifier
1 | +!call_forward_busy(Y, From, To) : invite(From, Y) & not(status(Y, idle)) <- -invite(From, Y) | {From=bob, Y=alice, To=denise}
  | +invite(X, alice) : colleague(X) <- !call_forward_busy(alice, X, denise) | {X=bob}
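The trace above can be replayed with a hand-specialised transliteration of plans p2, p3 and p5 into Python (p1 and p4, the lunch-time branch, never fire in this trace and are omitted). The event queue, the string-encoded beliefs and the helper names are all illustrative; this is not a general AgentSpeak(L) engine:

```python
actions = []   # basic actions actually executed on the environment

def busy(user, beliefs):
    # a user is busy when status(user, idle) is not believed
    return "status(%s, idle)" % user not in beliefs

def on_invite(frm, to, beliefs):
    # p2: calls to alice from colleagues are forwarded to denise
    if to == "alice" and ("colleague(%s)" % frm) in beliefs:
        return [("call_forward_busy", ("alice", frm, "denise"))]
    # p3: otherwise just connect the call
    actions.append("connect(%s, %s)" % (frm, to))
    return []

def on_call_forward_busy(user, frm, to, beliefs):
    # p5: redirect the pending invite and post the new invite event
    if ("invite(%s, %s)" % (frm, user)) in beliefs and busy(user, beliefs):
        beliefs.add("invite(%s, %s)" % (frm, to))
        beliefs.discard("invite(%s, %s)" % (frm, user))
        return [("invite", (frm, to))]
    return []

def run(beliefs, events):
    # drain the event queue, letting handlers post follow-up events
    while events:
        kind, args = events.pop(0)
        if kind == "invite":
            events += on_invite(*args, beliefs)
        elif kind == "call_forward_busy":
            events += on_call_forward_busy(*args, beliefs)
    return actions

beliefs = {"colleague(bob)", "status(bob, idle)", "invite(bob, alice)"}
result = run(beliefs, [("invite", ("bob", "alice"))])
```

The single executed action matches the end of the trace: connect(bob, denise), with invite(bob, alice) replaced by invite(bob, denise) in the belief base along the way.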

40 Summary
BDI Agents
– An example of practical reasoning: the process of deciding, moment by moment, which action to perform in the furtherance of our goals
– An example of balancing reactive behavior with goal-directed behavior
AgentSpeak(L) has many similarities with traditional logic programming, which would favor its becoming a popular language:
– It proves quite intuitive for those familiar with logic programming
– It has a neat notation, thus providing quite elegant specifications of BDI agents

