
1 Reasoning About Beliefs, Observability, and Information Exchange in Teamwork
Thomas R. Ioerger, Department of Computer Science, Texas A&M University

2 The Need for Reasoning about Beliefs of Others in MAS
Decision-making depends on beliefs:
–Does the other driver see me?
–Does the other driver seem to be in a hurry?
–Did the other driver see who arrived at the intersection first?
–Does the other driver see my turn signal?
–Is the other driver leaving a gap open so I can change lanes?
The traditional interpretation of BDI covers the Beliefs, Desires, and Intentions of self. What about the beliefs of others? These are important for agent interactions.

3 The Need for Reasoning about Beliefs of Others in Teams
Proactive Information Exchange:
–automatically share info with others
–makes teamwork more efficient
–infer relevance from pre-conditions of others’ goals in the team plan
–should try to avoid redundancy
Ideal conditions for A to send B a message that I holds:
Bel(A,I) ∧ Bel(A,¬Bel(B,I)) ∧ Bel(A,Goal(B,G)) ∧ Precond(I,G) ∧ ¬Done(B,G)
Example team-plan: catch-thief
(do B (turn-on light-switch))
(do A (enter-room))
(do A (jump-on thief))
Should B tell A the light is now on?
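The "ideal conditions" conjunction above can be evaluated directly. Below is a minimal sketch in Python, from the sender's perspective, using plain sets as stand-ins for the sender's (nested) belief store; all function and parameter names here are illustrative, not the paper's notation.

```python
def should_tell(info, goal, my_beliefs, their_beliefs, their_goals,
                preconditions, done_goals):
    """Return True when the sender should proactively tell a teammate `info`.

    my_beliefs:    facts the sender believes                  -- Bel(A, I)
    their_beliefs: facts the sender believes the teammate has -- Bel(B, I)
    their_goals:   goals the sender believes the teammate has -- Goal(B, G)
    preconditions: goal -> set of precondition facts          -- Precond(I, G)
    done_goals:    goals already achieved                     -- Done(B, G)
    """
    return (info in my_beliefs                          # Bel(A, I)
            and info not in their_beliefs               # Bel(A, ~Bel(B, I))
            and goal in their_goals                     # Bel(A, Goal(B, G))
            and info in preconditions.get(goal, set())  # Precond(I, G)
            and goal not in done_goals)                 # ~Done(B, G)

# catch-thief example: B turned on the light; A's next step needs it
tell = should_tell("lightOn", "enter-room",
                   my_beliefs={"lightOn"},
                   their_beliefs=set(),
                   their_goals={"enter-room"},
                   preconditions={"enter-room": {"lightOn"}},
                   done_goals=set())
print(tell)  # True: B should tell A the light is now on
```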

4 Observability
Obs(α,φ,ψ) – agent α will observe φ under conditions ψ (i.e. the “context”)
example: ∀x Obs(A1, broken(x), holding(A1,x))
Similarity to VSK logic (Wooldridge):
–V(φ)=accessible, S(φ)=perceives, K(φ)=knows
–Obs(α,φ,ψ) ∧ ψ → Sα(φ)
–Assumption: agents believe what they see: Sα(φ) → Kα(φ)
Small differences:
–we use Belief instead of Knowledge: Sα(φ) → Bα(φ)
–B is a weak-S5 modal logic (KD45, without the T axiom, so B(φ) ⊭ φ)
–only believe whether φ is true (or false):
Obs(α,φ,ψ) ∧ ψ → [(φ → Sα(φ)) ∧ (¬φ → Sα(¬φ))]
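Conditional observability can be prototyped as a rule table: Obs(α,φ,ψ) says agent α perceives whether φ holds whenever condition ψ holds. A small sketch, assuming a ground world state as a dict of facts to booleans (the rule and state encodings are illustrative, not the paper's syntax):

```python
# observability rules: (agent, observed_fact, condition_fact)
obs_rules = [
    ("A1", "broken(x1)", "holding(A1,x1)"),   # ground instance of the example
]

def perceive(agent, world):
    """Return the percepts `agent` gets in `world` (a dict fact -> bool).

    If the rule's condition holds, the agent learns *whether* the fact is
    true or false -- matching the last formula on this slide.
    """
    percepts = {}
    for a, fact, cond in obs_rules:
        if a == agent and world.get(cond, False):
            percepts[fact] = world.get(fact, False)
    return percepts

world = {"holding(A1,x1)": True, "broken(x1)": True}
print(perceive("A1", world))  # {'broken(x1)': True}
print(perceive("A2", world))  # {} -- no observability rule for A2
```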

5 Belief Database
tuples ⟨A, F, V⟩: A ∈ agents, F ∈ facts (propositions), V ∈ valuations
valuations = {true, false, unknown, whether}
–unknown ≡ ¬true ∧ ¬false
–whether ≡ true ∨ false
Update Algorithm: D_i+1 = Update(D_i, P, J), given perceptions P and justification rules J
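A minimal sketch of the ⟨agent, fact, valuation⟩ belief database with the four valuations above. The update semantics here are deliberately simplified (incoming perceptions simply override old valuations; the justification rules J of the full Update algorithm are omitted), and the class and method names are assumptions of this sketch:

```python
TRUE, FALSE, UNKNOWN, WHETHER = "true", "false", "unknown", "whether"

class BeliefDB:
    """Belief database of <agent, fact, valuation> tuples."""

    def __init__(self):
        self.db = {}                        # (agent, fact) -> valuation

    def set(self, agent, fact, val):
        self.db[(agent, fact)] = val

    def get(self, agent, fact):
        # absent entries default to unknown (neither true nor false believed)
        return self.db.get((agent, fact), UNKNOWN)

    def update(self, perceptions):
        """D_i+1 = Update(D_i, P): new perceptions override old valuations."""
        for agent, fact, val in perceptions:
            self.set(agent, fact, val)

db = BeliefDB()
db.set("A2", "lightOn(room5)", WHETHER)    # A2 knows whether; we don't know which
db.update([("A2", "lightOn(room5)", TRUE)])
print(db.get("A2", "lightOn(room5)"))      # true
print(db.get("A1", "lightOn(room5)"))      # unknown
```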

6 Justifications for Belief Updates

Justification type    Representation   Priority  Notes
direct observation    (sense φ)        6         self only
observability         (obs α φ ψ)      5         obs of others
effects of actions    (effect x φ)     4         if aware of x
inferences            (infer φ ψ)      3         ψ → φ
memory                (persist φ)      2         true OR false
assumptions           (default φ)      1
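When several justifications support conflicting values for the same fact, the priorities in the table decide. A small sketch of that resolution step (the candidate encoding is illustrative):

```python
# priority of each justification type, mirroring the table above
PRIORITY = {"sense": 6, "obs": 5, "effect": 4,
            "infer": 3, "persist": 2, "default": 1}

def resolve(candidates):
    """candidates: list of (justification_type, value) pairs for one fact.

    Keep the value supported by the strongest justification.
    """
    return max(candidates, key=lambda c: PRIORITY[c[0]])[1]

# a remembered value loses to a fresh observation, but beats a default
print(resolve([("persist", False), ("sense", True)]))    # True
print(resolve([("default", True), ("persist", False)]))  # False
```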

7 Belief Update Algorithm
Updating beliefs is not so simple...
Prioritized logic programs:
–Horn clauses annotated with strengths
–semantics based on models in which facts are supported by the strongest rule
Implementation (assuming rules are not cyclic):
–create a DAG of propositions
–topologically sort: P1..Pn
–determine truth values in order
–Pi depends at most on the truth values of {P1..Pi-1}
Example rules (strengths in parentheses):
A ← B ∧ C (1)
G ← C (2)
C ← D ∧ E (1)
A ← F ∧ E (3)
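The acyclic evaluation strategy above can be sketched directly: build the dependency DAG from the rules, sort it topologically, then decide each proposition in order. The base-fact assignments and the rule encoding (head, body, strength) are assumptions of this sketch; in the full system, rules with conflicting conclusions would be arbitrated by strength, while here all applicable rules happen to agree.

```python
from graphlib import TopologicalSorter

# prioritized Horn clauses from the slide: (head, body, strength)
rules = [
    ("A", ["B", "C"], 1),
    ("G", ["C"], 2),
    ("C", ["D", "E"], 1),
    ("A", ["F", "E"], 3),
]
base = {"B": True, "D": True, "E": True, "F": False}   # assumed base facts

# dependency DAG: each head depends on its body atoms
deps = {}
for head, body, _ in rules:
    deps.setdefault(head, set()).update(body)

truth = dict(base)
for p in TopologicalSorter(deps).static_order():
    if p in truth:
        continue                      # base fact, already known
    # rules for p whose bodies are already decided true; the strongest
    # applicable rule determines p (here, any firing rule makes p true)
    fired = [s for head, body, s in rules
             if head == p and all(truth.get(b, False) for b in body)]
    truth[p] = bool(fired)

print(truth["C"], truth["G"], truth["A"])   # True True True
```

Note that A ← F ∧ E (3) never fires (F is false), so A is supported only by the weaker rule A ← B ∧ C (1).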

8 PIEX Algorithm
PIEX = Proactive Information EXchange
given: belief database D, perceptions P, and justification rules J

D’ ← BeliefUpdateAlg(D, P, J)
for each agent Ai ∈ Agents and G ∈ Goals(Ai)
  for each C ∈ PreConditions(G)
    if C is a positive literal, let v ← true
    if C is a negative literal, let v ← false
    if ⟨Ai, C, unknown⟩ ∈ D’ or ⟨Ai, C, ¬v⟩ ∈ D’
      Tell(Ai, C, v)
      Update(D’, ⟨Ai, C, v⟩)
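An illustrative Python rendering of the PIEX loop, with the belief-update step and justification rules omitted for brevity. The belief database is a flat dict keyed by (agent, fact); the goal and precondition encodings, and the "self" key for the agent's own beliefs, are assumptions of this sketch:

```python
def piex(D, agents, goals, preconditions, tell):
    """Scan teammates' goal preconditions and tell them relevant facts.

    D:             dict (agent, fact) -> "true"/"false"
    goals:         dict agent -> list of goals
    preconditions: dict goal -> list of literals; a negative literal is
                   encoded as ("not", fact)
    tell:          callback invoked as tell(agent, fact, value)
    """
    for agent in agents:
        for goal in goals.get(agent, []):
            for cond in preconditions.get(goal, []):
                if isinstance(cond, tuple) and cond[0] == "not":
                    fact, v = cond[1], "false"     # negative literal
                else:
                    fact, v = cond, "true"         # positive literal
                # we believe the fact has value v, but not that they do
                if D.get(("self", fact)) == v and D.get((agent, fact)) != v:
                    tell(agent, fact, v)
                    D[(agent, fact)] = v           # record their new belief

messages = []
D = {("self", "lightOn"): "true"}
piex(D, ["A"], {"A": ["enter-room"]}, {"enter-room": ["lightOn"]},
     lambda a, f, v: messages.append((a, f, v)))
print(messages)   # [('A', 'lightOn', 'true')]
```

Recording the told fact in D is what keeps the exchange non-redundant: a second pass over the same goal sends no further message.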

9 Experiment: Wumpus Roundup!

10 Issues
Current formalism does not allow for nested beliefs:
–Bel(A1, Bel(A2, lightOn(room5)))
–Bel(A1, Bel(A2, Bel(A1, lightOn(room5))))
–see Isozaki and Katsuno (1996)
We are working on a representation of modal logic in Prolog:
–allows nested beliefs and rules
–backward-chaining rather than forward (e.g. UpdateAlg)
–of course, not complete
Better reasoning about knowledge of actions:
–assert pre-conds before effects? uncertainty of do-er/time?

11 Conclusions
1. Modeling beliefs of others is important for multi-agent interactions.
2. Observability is a key to modeling others’ beliefs.
3. It must be integrated properly with other justifications, such as inference and persistence.
4. Different strengths can be managed using prioritized inference (Prioritized Logic Programs).
5. Proactive information exchange can improve the performance of teams.
6. Message traffic can be intelligently reduced by reasoning about beliefs.

