
1 Computational Models for Argumentation in MAS. Leila Amgoud, IRIT – CNRS, France

2 Outline: Introduction to MAS; Fundamentals of argumentation; Argumentation in MAS; Conclusions

3 The notion of agent (Wooldridge 2000). An agent is a computer system that is capable of autonomous (i.e. independent) action on behalf of its user or owner, figuring out what needs to be done to satisfy design objectives rather than constantly being told. Rationality: an agent will act in order to achieve its goals, and will not act in such a way as to prevent its goals being achieved, at least insofar as its beliefs permit.

4 The notion of agent. An agent needs the ability to reason internally: reasoning about beliefs, desires, …; handling inconsistencies; making decisions; generating, revising, and selecting goals...

5 Multi-agent systems (Wooldridge 2000). A multi-agent system is one that consists of a number of agents, which interact with one another. Generally, agents will be acting on behalf of users with different goals and motivations. To successfully interact, they will require the ability to cooperate, coordinate, and negotiate with each other.

6 Multi-agent systems. Agents need to: exchange information and explanations; resolve conflicts of opinions; resolve conflicts of interests; make joint decisions. Hence they need to engage in dialogues.

7 Dialogue types (Walton & Krabbe 1995)

8 The role of argumentation. Argumentation plays a key role in achieving the goals of the above dialogue types. Argument = a reason for some conclusion (belief, action, goal, etc.). Argumentation = reasoning about arguments in order to decide on conclusions. Dialectical argumentation = multi-party argumentation through dialogue.

9 The role of argumentation. Argumentation plays a key role in reaching agreements: additional information can be exchanged; an agent's opinion is explicitly explained (e.g. arguments in favor of opinions or offers, arguments in favor of a rejection or an acceptance); agents can modify/revise their beliefs/preferences/goals; arguments can influence the behavior of another agent (threats, rewards).

10 A persuasion dialogue. P: The newspapers have no right to publish information I. C: Why? P: Because it is about X's private life and X does not agree (P1). C: The information I is not private because X is a minister and all information concerning ministers is public (C1). P: But X is not a minister since he resigned last month (P2). P2 attacks C1, which attacks P1.

11 A negotiation dialogue. Buyer: Can’t you give me this 806 a bit cheaper? Seller: Sorry, that’s the best I can do. Why don’t you go for a Polo instead? Buyer: I have a big family and I need a big car (B1). Seller: The modern Polo is becoming very spacious and would easily fit a big family (S1). Buyer: I didn’t know that; let’s also look at the Polo then.

12 Why study argumentation in agent technology? For internal reasoning of single agents: reasoning about beliefs, goals, ...; making decisions; generating, revising, and selecting goals. For interaction between multiple agents: exchanging information and explanations; resolving conflicts of opinions; resolving conflicts of interests; making joint decisions.

13 Outline: Introduction to MAS; Fundamentals of argumentation; Argumentation in MAS; Conclusions

14 Defeasible reasoning. Reasoning is generally defeasible: assumptions, exceptions, uncertainty, ... AI formalises such reasoning with non-monotonic logics (default logic, etc.). New premises can invalidate old conclusions. Argumentation logics formalise defeasible reasoning as the construction and comparison of arguments.

15 Argumentation process: constructing arguments; defining the interactions between arguments; evaluating the strengths of arguments; defining the status of arguments; then drawing conclusions using a consequence relation (inference problem) or comparing decisions using a given principle (decision-making problem).

16 Main challenges. Q1: What are the different types of arguments? How do we construct arguments? Q2: How can an argument interact with another argument? Q3: How do we compute the strength of an argument? Q4: How do we determine the status of arguments? Q5: How do we conclude? How are decisions compared on the basis of their arguments? Q6: What properties should an argumentation system satisfy?

17 Q1: Building arguments. Types of arguments (Kraus et al. 98, Amgoud & Prade 05): explanations (involve only beliefs), e.g. Tweety flies because it is a bird; threats (involve beliefs + goals), e.g. you should do α, otherwise I will do β / you should not do α, otherwise I will do β; rewards (involve beliefs + goals), e.g. if you do α, I will do β / if you don’t do α, I will do β; …

18 Q1: Building arguments. Forms of arguments: an inference tree grounded in premises; a deduction sequence; a pair (Premises, Conclusion), leaving unspecified the particular proof that leads from the Premises to the Conclusion.

19 Q1: Building arguments. Example 1. (Inference problem) Let Σ be a propositional knowledge base. An argument A is a pair A = (H, h) such that: 1. H ⊆ Σ; 2. H is consistent; 3. H ⊢ h; 4. H is minimal (for set inclusion) satisfying 1, 2 and 3.

20 Q1: Building arguments. A: ({p, p → b, b → f}, f); B: ({p, p → ¬f}, ¬f), where p: penguin, b: bird, f: fly.
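The (H, h) definition above can be prototyped directly on this example. The sketch below is not code from the talk; it assumes a toy representation in which a formula is either a literal (a string, with "~" for negation) or a rule (a set of premise literals plus a conclusion), entailment is forward chaining, and arguments for h are the subset-minimal consistent H ⊆ Σ that derive h:

```python
from itertools import combinations

def neg(l):
    """Negation of a literal: 'p' <-> '~p'."""
    return l[1:] if l.startswith("~") else "~" + l

def closure(hyps):
    """Literals derivable by forward chaining. A formula is either a
    literal (str) or a rule (frozenset_of_premise_literals, conclusion)."""
    lits = {f for f in hyps if isinstance(f, str)}
    rules = [f for f in hyps if not isinstance(f, str)]
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if prem <= lits and concl not in lits:
                lits.add(concl)
                changed = True
    return lits

def arguments_for(sigma, h):
    """All arguments (H, h): H subset of sigma, H consistent, H derives h,
    H minimal for set inclusion (brute force, fine for tiny bases)."""
    candidates = []
    for r in range(1, len(sigma) + 1):
        for H in combinations(sigma, r):
            cl = closure(H)
            if h in cl and all(neg(l) not in cl for l in cl):
                candidates.append(set(H))
    return [H for H in candidates if not any(H2 < H for H2 in candidates)]

# Slide 20: p = penguin, b = bird, f = fly
sigma = ["p", (frozenset({"p"}), "b"), (frozenset({"b"}), "f"),
         (frozenset({"p"}), "~f")]
A = arguments_for(sigma, "f")     # one argument, support {p, p->b, b->f}
B = arguments_for(sigma, "~f")    # one argument, support {p, p->~f}
```

The consistency check is what rules out the whole base as a support: adding p → ¬f to A's support would derive both f and ¬f.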

21 Q1: Building arguments. Example 2. (Decision problem) Σ = a propositional knowledge base; G = a goals base; D = a set of decision options. An argument in favor of a decision d is a triple A = (S, d, g) s.t.: 1. d ∈ D; 2. g ∈ G; 3. S ⊆ Σ; 4. S ∪ {d} is consistent; 5. S ∪ {d} ⊢ g; 6. S is minimal (for set inclusion) satisfying the above conditions.

22 Q1: Building arguments. r: rain, w: wet, c: cloud, u: umbrella, l: overloaded. Σ = {c, c → r, u → ¬w, r ∧ ¬u → w, ¬r → ¬w, u → l, ¬u → ¬l}; G = {¬w, ¬l}; D = {u, ¬u}. A = ({u → ¬w}, u, ¬w) is an argument in favor of u; B = ({¬u → ¬l}, ¬u, ¬l) is an argument in favor of ¬u.

23 Q2: Interactions between arguments. Three conflict relations: rebutting attacks, two arguments with contradictory conclusions; assumption attacks, an argument attacks an assumption of another argument; undercutting attacks, an argument undermines some intermediate step (inference rule) of another argument.

24 Rebutting attacks. "Tweety flies because it is a bird" versus "Tweety does not fly because it is a penguin": the conclusions Tweety flies and ¬(Tweety flies) contradict each other.

25 Assumption attacks. "Tweety flies because it is a bird, and it is not provable that Tweety is a penguin" versus "Tweety is a penguin": the second argument attacks the assumption Not(Penguin Tweety).

26 Undercutting attack. An argument challenges the connection between the premises and the conclusion. "Tweety flies because all the birds I’ve seen fly" (premises a, b, c supporting conclusion d) versus "I’ve seen Opus, it is a bird and it does not fly", which denies the inference itself: ¬[a, b, c / d].

27 Q3: Strengths of arguments. Why do we need to compute the strengths of arguments? To compare arguments; to refine the status of arguments by removing some attacks; to define decision principles.

28 Q3: Strengths of arguments. The strength of an argument depends on the quality of the information used to build it. Examples: weakest link principle (Benferhat et al. 95, Amgoud 96); last link principle (Prakken & Sartor 97); specificity principle (Simari & Loui 92)... A preference relation between data induces the strength of an argument, which in turn induces a preference relation between arguments.

29 Q3: Strengths of arguments. Example 1. (Weakest link principle) A: ({p, p → b, b → f}, f); B: ({p, p → ¬f}, ¬f). With certainty levels 1 (p), 2 (p → b, p → ¬f) and 3 (b → f): Strength(A) = 3, Strength(B) = 2. Then B is preferred to (stronger than) A.
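A minimal illustration of the weakest link principle. The level assignment below is read off the slide's figure and is an assumption (1 = most certain); an argument's strength is the level of its least certain formula, and a lower strength value wins:

```python
# Hypothetical certainty levels, 1 = most certain (assumed from the slide).
levels = {"p": 1, "p->b": 2, "b->f": 3, "p->~f": 2}

def strength(support):
    """Weakest link: the level of the least certain formula used."""
    return max(levels[f] for f in support)

A = ["p", "p->b", "b->f"]   # argument for f
B = ["p", "p->~f"]          # argument for ~f

def preferred(x, y):
    """x is stronger than y iff its weakest link is more certain."""
    return strength(x) < strength(y)

print(strength(A), strength(B))   # 3 2
print(preferred(B, A))            # True: B is preferred to A
```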

30 Q3: Strengths of arguments. Example 2. In the umbrella example, let A be the argument in favor of u and B the argument in favor of ¬u. Strength(A) = (1, 1) and Strength(B) = (1, λ): the first component reflects the certainty of the knowledge used, the second the priority of the goal reached. Different preference relations between such arguments are defined (Amgoud & Prade 05).

31 Q4: Status of arguments. Some attacks can be removed: Defeat = Attack + preference relation between arguments. A defeats B iff A attacks B and A is not weaker than B; A strictly defeats B iff A attacks B and A is stronger than B. If A attacks B but B is preferred to A, then A does not defeat B.

32 Q4: Status of arguments. Given an argumentation system, what is the status of a given argument A ∈ Args? Three classes of arguments: arguments with which a dispute can be won (justified); arguments with which a dispute can be lost (rejected); arguments that leave the dispute undecided.

33 Q4: Status of arguments. Two ways of computing the status of arguments: the declarative form usually requires fixed-point definitions, and establishes certain sets of arguments as acceptable (acceptability semantics); the procedural form amounts to defining a procedure for testing whether a given argument is a member of "a set of acceptable arguments" (proof theory).

34 Acceptability semantics. A semantics specifies conditions for labelling the argument graph. The labelling should: accept undefeated arguments; capture the notion of reinstatement (if A defeats B and B defeats C, then A reinstates C).

35 Acceptability semantics. Example of labelling: L: Args → {in, out, und}. An argument is in if all its defeaters are out; out if it has a defeater that is in; und otherwise.
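These three labelling rules can be checked by brute-force enumeration. The sketch below is not from the talk; it tries every assignment of {in, out, und} over the arguments and keeps those satisfying the conditions above:

```python
from itertools import product

def labellings(args, defeats):
    """All labellings L: args -> {in, out, und} such that: 'in' iff all
    defeaters are out; 'out' iff some defeater is in; 'und' otherwise."""
    defeaters = {a: [x for (x, y) in defeats if y == a] for a in args}
    found = []
    for combo in product(["in", "out", "und"], repeat=len(args)):
        L = dict(zip(args, combo))
        def ok(a):
            ds = defeaters[a]
            if L[a] == "in":
                return all(L[d] == "out" for d in ds)
            if L[a] == "out":
                return any(L[d] == "in" for d in ds)
            # 'und': not all defeaters out, and no defeater in
            return (not all(L[d] == "out" for d in ds)
                    and not any(L[d] == "in" for d in ds))
        if all(ok(a) for a in args):
            found.append(L)
    return found

# Slide 36: A defeats B, B defeats C -> a single labelling
print(labellings(["A", "B", "C"], [("A", "B"), ("B", "C")]))
```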

36 Acceptability semantics. Example 1: A defeats B, B defeats C. Only one possible labelling: A is in, B is out, C is in.

37 Acceptability semantics. Example 2: A and B defeat each other. Two possible labellings: (A in, B out) and (A out, B in).

38 Acceptability semantics. Two approaches. A unique status approach: an argument is justified iff it is in; rejected iff it is out; undecided iff it is und. A multiple status approach: an argument is justified iff it is in in every labelling; rejected iff it is out in every labelling; undecided iff it is in in some labellings and out in others.

39 Acceptability semantics. Unique status: grounded semantics (Dung 95). E1 = all undefeated arguments; E2 = E1 + all arguments reinstated by E1; … The grounded extension is non-empty only if there are undefeated arguments.
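The iteration E1, E2, … above can be written as a fixpoint computation. A sketch (names and representation are my own): an argument is added once all its defeaters are themselves defeated by the current set.

```python
def grounded_extension(args, defeats):
    """Iterate: start from the undefeated arguments, then repeatedly add
    every argument all of whose defeaters are defeated by the current set
    (reinstated), until nothing changes."""
    defeaters = {a: {x for (x, y) in defeats if y == a} for a in args}
    def defended(a, E):
        # every defeater of a is itself defeated by some member of E
        return all(any((e, d) in defeats for e in E) for d in defeaters[a])
    E = set()
    while True:
        nxt = {a for a in args if defended(a, E)}
        if nxt == E:
            return E
        E = nxt

# Chain A defeats B, B defeats C: grounded extension {A, C}
print(grounded_extension(["A", "B", "C"], {("A", "B"), ("B", "C")}))
```

On slide 40's floating example (A and B defeat each other, both defeat C, C defeats D) this returns the empty set, which is exactly the problem the slide points out.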

40 Acceptability semantics. Problem with grounded semantics: floating arguments. E.g. A and B defeat each other, both defeat C, and C defeats D: the grounded extension is empty, yet we want D to be justified.

41 Acceptability semantics. With multiple labellings of this graph: C is out and D is in in every labelling, so D is justified and C is rejected.

42 Proof theories. Let (Args, Defeat) be an AS and S1, …, Sn its extensions under a given semantics. Problem: let a ∈ Args. Is a in one extension? Is a in every extension?

43 Proof theories. Let a ∈ Args. Problem: is a in the grounded extension? Example: an argument graph over A0, A1, …, A6.

44 Proof theories (Amgoud & Cayrol 00). A dialogue is a non-empty sequence of moves s.t.: Move_i = (Player_i, Arg_i) (i ≥ 0), where: Player_i = P iff i is even, Player_i = C iff i is odd; Player_0 = P and Arg_0 = a; if Player_i = Player_j = P and i ≠ j, then Arg_i ≠ Arg_j; if Player_i = P (i > 1), then Arg_i strictly defeats Arg_{i-1}; if Player_i = C, then Arg_i defeats Arg_{i-1}.

45 Proof theories (Amgoud & Cayrol 00). A dialogue tree is a finite tree where each branch is a dialogue.

46 Proof theories. A player wins a dialogue iff it makes the last move, i.e. ends the dialogue.

47 Proof theories. A candidate sub-tree is a sub-tree of the dialogue tree containing all the edges of an even move (P) and exactly one edge of an odd move (C). A solution sub-tree is a candidate sub-tree whose branches are all won by P. P wins a dialogue tree iff the dialogue tree has a solution sub-tree. Complete construction: a is in the grounded extension iff there exists a dialogue tree whose root is a and which is won by P.
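This game can be sketched compactly: P wins from argument a iff, for every defeat move C can play against a, P has a fresh strict defeater from which it wins again. This is not the authors' code; the defeat and strict-defeat relations are supplied explicitly as sets of pairs.

```python
def wins(a, defeats, strict, used_p=None):
    """Does P win the dialogue tree rooted at argument a? C replies with
    defeats, P replies with strict defeats and never repeats one of its
    own arguments along a branch. Membership of a in the grounded
    extension corresponds to P winning (Amgoud & Cayrol 00)."""
    used_p = (used_p or set()) | {a}
    for b in [x for (x, y) in defeats if y == a]:           # every C reply
        replies = [c for (c, y) in strict
                   if y == b and c not in used_p]           # P's options
        if not any(wins(c, defeats, strict, used_p) for c in replies):
            return False
    return True

# Chain A defeats B, B defeats C; with no preferences, all defeats are strict
defeats = {("A", "B"), ("B", "C")}
print(wins("C", defeats, defeats))   # True: P answers C's move B with A
print(wins("B", defeats, defeats))   # False: P cannot answer A
```

Each recursive call adds a new P argument to `used_p`, so the search always terminates on a finite graph.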

48 Proof theories. In the example, the dialogue tree rooted at A0 has two candidate sub-trees, S1 and S2. Each branch of S2 is won by P, so S2 is a solution sub-tree, and therefore A0 is in the grounded extension.

49 Q5: Consequence relations. Σ: a knowledge base built from a logical language L; x: a formula of L; (Args, Defeat): an argumentation system; S1, …, Sn: the extensions under a given semantics. Four possible definitions: Σ |~ x iff there exists an argument A for x such that A ∈ Si for every Si, i = 1, …, n; Σ |~ x iff for every Si there exists an argument A for x with A ∈ Si; Σ |~ x iff there exists an Si containing an argument A for x, and no Sj contains an argument for ¬x; Σ |~ x iff there exists an Si containing an argument A for x.
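The four relations map directly onto set operations. A sketch with hypothetical extensions and an argument-to-conclusion map (the mode names are my own labels for the four definitions above, in order):

```python
def holds(x, extensions, concl, mode):
    """Four consequence relations over extensions S1..Sn.
    concl maps argument names to their conclusions; '~' negates a formula."""
    def supports(S, f):                       # some argument in S concludes f
        return any(concl[a] == f for a in S)
    nx = x[1:] if x.startswith("~") else "~" + x
    if mode == "skeptical_same_arg":          # one argument for x in every S_i
        return any(all(a in S for S in extensions)
                   for a, c in concl.items() if c == x)
    if mode == "skeptical":                   # every S_i has an argument for x
        return all(supports(S, x) for S in extensions)
    if mode == "credulous_no_counter":        # some S_i supports x, none supports ~x
        return (any(supports(S, x) for S in extensions)
                and not any(supports(S, nx) for S in extensions))
    if mode == "credulous":                   # some S_i has an argument for x
        return any(supports(S, x) for S in extensions)

extensions = [{"A", "C"}, {"B", "C"}]             # hypothetical extensions
concl = {"A": "p", "B": "~p", "C": "q"}
print(holds("q", extensions, concl, "skeptical"))            # True
print(holds("p", extensions, concl, "credulous"))            # True
print(holds("p", extensions, concl, "credulous_no_counter")) # False
```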

50 Q5: Making decisions. D = a set of decision options. Problem: define a preordering on D from an argumentation system. For each d ∈ D, consider the arguments PRO d and the arguments CON d.

51 Q5: Making decisions. Let E ⊆ Args be the set of acceptable arguments. Arg_P(d) = the arguments in E which are PRO d; Arg_C(d) = the arguments in E which are CON d.

52 Q5: Making decisions. Decision principles: three categories (Amgoud & Prade): unipolar principles, where only one kind of argument (PRO or CON) is involved; bipolar principles, where both PRO and CON arguments are involved; non-polar principles.

53 Q5: Making decisions. Unipolar principles. Let d, d’ ∈ D. Counting arguments PRO: d ≻ d’ iff |Arg_P(d)| > |Arg_P(d’)|. Counting arguments CON: d ≻ d’ iff |Arg_C(d)| < |Arg_C(d’)|. Promotion focus: d ≻ d’ iff ∃ P ∈ Arg_P(d) s.t. ∀ P’ ∈ Arg_P(d’), P is stronger than P’. Prevention focus: d ≻ d’ iff ∃ C’ ∈ Arg_C(d’) s.t. ∀ C ∈ Arg_C(d), C’ is stronger than C.
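Two of these principles as executable checks, over hypothetical PRO arguments and strengths (the decision names, arguments, and strength values are illustrative assumptions, not from the slides):

```python
def count_pro(d1, d2, pro):
    """Counting arguments PRO: d1 preferred iff it has more PRO arguments."""
    return len(pro[d1]) > len(pro[d2])

def promotion(d1, d2, pro, stronger):
    """Promotion focus: some PRO argument of d1 is stronger than every
    PRO argument of d2."""
    return any(all(stronger(p, q) for q in pro[d2]) for p in pro[d1])

# Hypothetical PRO arguments and strengths (illustration only)
pro = {"u": ["A"], "not_u": ["B1", "B2"]}
strength = {"A": 3, "B1": 2, "B2": 1}
stronger = lambda x, y: strength[x] > strength[y]

print(count_pro("not_u", "u", pro))            # True: 2 PRO arguments vs 1
print(promotion("u", "not_u", pro, stronger))  # True: A beats B1 and B2
```

Note that the two principles can disagree, as here: counting favors not_u while promotion favors u, which is why the choice of principle matters.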

54 Q5: Making decisions. Bipolar principles. Let d, d’ ∈ D. d ≻ d’ iff ∃ P ∈ Arg_P(d) s.t. ∀ P’ ∈ Arg_P(d’), P is stronger than P’, and ∃ C’ ∈ Arg_C(d’) s.t. ∀ C ∈ Arg_C(d), C’ is stronger than C.

55 Q5: Making decisions. Non-polar principles. Let d, d’ ∈ D: aggregate the arguments of each decision (PRO and CON together); d ≻ d’ iff the aggregate for d is stronger than the aggregate for d’.

56 Q5: Rationality postulates. Idea: what properties/rationality postulates should any AS satisfy? (Amgoud & Caminada 05) Consistency (an AS should ensure safe conclusions): the set {x | Σ |~ x} should be consistent; the set of conclusions of each extension should be consistent. Closedness (an AS should not forget safe conclusions): the set {x | Σ |~ x} should be closed; the set of conclusions of each extension should be closed.

57 Outline: Introduction to MAS; Fundamentals of argumentation; Argumentation in MAS; Conclusions

58 Dialogue systems. Each agent i maintains a commitment store CS_i; agents exchange moves such as Claim p and Argue(S, p). An argumentation system is used for evaluating the outcome of the dialogue.

59 Components of a dialogue system. Communication language + domain language. Protocol = the set of rules for generating coherent dialogues. Agent strategies = the set of tactics used by the agents to choose a move to play. Outcome: one of a set of possible deals, or conflict. Protocol + strategies determine the outcome.

60 Communication language. A syntax = a set of locutions, utterances or speech acts (Propose, Argue, Accept, Reject, etc.). A semantics = a unique meaning for each utterance: mentalistic approaches, social approaches, protocol-based approaches.

61 Dialogue protocol. The protocol is public and independent from the mental states of the agents. Main parameters: the set of allowed moves (e.g. Claim, Argue, ...); the possible replies for each move; the number of moves per turn; the turn-taking rule; whether backtracking is allowed; the computation of the outcome. These parameters determine how rich the generated dialogues are.

62 Dialogue protocol. Computing the outcome: two approaches. Either the protocol is equipped with an argumentation system that evaluates the content of CS_1 ∪ … ∪ CS_n, or the rules of the proof theory are encoded in the protocol.

63 Dialogue protocol. For persuasion dialogues, the two approaches return the same result if: the other parameters are fixed in the same way; the acceptability semantics used is the same.

64 Dialogue strategies. A BDI agent receives from the protocol the set of allowed replies (given the commitment stores CS_1, …, CS_n); an argumentation-based decision model then chooses the next move to play. Move = locution + content, where the content may be an argument, an argument type, or an offer.

65 Dialogue strategies. Different arguments are exchanged: about beliefs; about goals, e.g. "I have a big family and I need a big car"; referring to plans (instrumental arguments), e.g. "The modern Polo is becoming very spacious and would easily fit a big family". Which argument to present, and when? This requires a formal model of practical reasoning.

66 Dialogue strategies. Work on dialogue strategies is still at an early stage; it is thus not yet possible to characterize the outcome of a dialogue, e.g. to say when the outcome is optimal.

67 Open issues. How are goals generated? How and when are they revised? Do we always privilege new goals? The answer is no: under a threat, the goal can be adopted; under a reward, the goal can be ignored. Are there AGM-style postulates for revising goals?

68 Thank you

