Argumentation
Henry Prakken
SIKS Basic Course Learning and Reasoning, May 26, 2009

Why do agents need argumentation?
- For their internal reasoning: reasoning about beliefs, goals, intentions etc. is often defeasible
- For their interaction with other agents: information exchange, negotiation, collaboration, ...

Overview
- Inference (logic): abstract argumentation, rule-based argumentation
- Dialogue

Part 1: Inference

An example, built up argument by argument over several slides:
1. We should lower taxes, since lower taxes increase productivity, and increased productivity is good.
2. Counterargument: we should not lower taxes, since lower taxes increase inequality, and increased inequality is bad.
3. Attack on a premise: lower taxes do not increase productivity, since the USA lowered taxes but productivity decreased.
4. Support for that attack: Prof. P says that ...
5. Attack on Prof. P's objectivity: Prof. P is not objective, since Prof. P has political ambitions, and people with political ambitions are not objective.
6. Counterargument: increased inequality is good, since increased inequality stimulates competition, and competition is good.

Sources of conflict
- Default generalisations
- Conflicting information sources
- Alternative explanations
- Conflicting goals, interests
- Conflicting normative, moral opinions
- ...

Application areas
- Medical diagnosis and treatment
- Legal reasoning: interpretation, evidence / crime investigation
- Intelligence
- Decision making
- Policy design
- ...

[Figure: the completed tax example abstracted to an argument graph whose nodes, labelled A-E, are the arguments above, with arrows for the defeat relation]

Status of arguments: abstract semantics (Dung 1995)
INPUT: a pair ⟨Args, Defeat⟩
OUTPUT: an assignment of the status 'in' or 'out' to all members of Args
So: a semantics specifies conditions for labelling the 'argument graph'. It should capture reinstatement, as in a chain where C defeats B and B defeats A: C reinstates A.

Possible labeling conditions
Every argument is either 'in' or 'out':
1. An argument is 'in' if all arguments defeating it are 'out'.
2. An argument is 'out' if it is defeated by an argument that is 'in'.
This works fine with the reinstatement chain above, but not with an even cycle in which A and B defeat each other.

Two solutions
1. Change the conditions so that a unique status assignment always results.
2. Use multiple status assignments: for the cycle where A and B defeat each other, one assignment with A 'in' and B 'out', and one with B 'in' and A 'out'.

Unique status assignments
Grounded semantics (Dung 1995):
S0: the empty set
Si+1: Si plus all arguments defended by Si
...
(S defends A if all defeaters of A are defeated by a member of S)
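
A minimal Python sketch of this fixpoint construction (my own illustration, not from the slides; it assumes defeat is encoded as a dict mapping each argument to the set of arguments it defeats):

def grounded_extension(args, defeat):
    """S0 = the empty set; S(i+1) = Si plus all arguments defended by Si."""
    def defeaters(a):
        return {b for b in args if a in defeat.get(b, set())}
    def defended(s, a):
        # every defeater of a is itself defeated by some member of s
        return all(any(d in defeat.get(c, set()) for c in s)
                   for d in defeaters(a))
    s = set()
    while True:
        nxt = s | {a for a in args if defended(s, a)}
        if nxt == s:
            return s
        s = nxt

# Example: C defeats B, B defeats A; A is reinstated, so the answer is {A, C}.
print(grounded_extension({"A", "B", "C"}, {"B": {"A"}, "C": {"B"}}))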

[Exercise on an example graph with arguments A-E]
Is B, D or E defended by S1? Is B or E defended by S2?

A problem(?) with grounded semantics
[Figure: a graph in which A and B defeat each other, both defeat C, and C and D defeat each other. 'We have': grounded semantics leaves all four arguments undecided. 'We want(?)': D justified, since C is defeated on either way of resolving the A-B conflict.]

A problem(?) with grounded semantics
A = Frederic Michaud is French since he has a French name
B = Frederic Michaud is Dutch since he is a marathon skater
C = F.M. likes the EU since he is European (assuming he is not Dutch or French)
D = F.M. does not like the EU since he looks like a person who does not like the EU
[Same graph as above, with the arguments instantiated]

A problem(?) with grounded semantics
A = Frederic Michaud is French since Alice says so
B = Frederic Michaud is Dutch since Bob says so
C = F.M. likes the EU since he is European (assuming he is not Dutch or French)
D = F.M. does not like the EU since he looks like a person who does not like the EU
E = Alice and Bob are unreliable since they contradict each other
[The same graph, extended with E defeating both A and B]

Multiple labellings
[Figure: the two status assignments of the example graph: one with A in and B out, the other with B in and A out; in both, C is out and D is in]

Status assignments (1)
Given ⟨Args, Defeat⟩: a status assignment is a partition of Args into sets In and Out such that:
1. An argument is in In if all arguments defeating it are in Out.
2. An argument is in Out if it is defeated by an argument that is in In.
[Figure, shown in several steps: candidate assignments for a small example graph with arguments A, B and C]

Status assignments (2)
Given ⟨Args, Defeat⟩: a status assignment is a partition of Args into sets In, Out and Undecided such that:
1. An argument is in In if all arguments defeating it are in Out.
2. An argument is in Out if it is defeated by an argument that is in In.
A status assignment is stable if Undecided = ∅. In is a stable extension.
A status assignment is preferred if Undecided is ⊆-minimal. In is a preferred extension.
A status assignment is grounded if Undecided is ⊆-maximal. In is the grounded extension.
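
The three semantics can be illustrated by brute-force enumeration. The sketch below is my own (same assumed dict encoding of defeat as above); note that it reads the two conditions as 'if and only if', which is how complete status assignments are standardly understood:

from itertools import product

def status_assignments(args, defeat):
    args = sorted(args)
    def defeaters(a):
        return [b for b in args if a in defeat.get(b, set())]
    found = []
    for labels in product(("in", "out", "undecided"), repeat=len(args)):
        lab = dict(zip(args, labels))
        if all((lab[a] == "in") == all(lab[d] == "out" for d in defeaters(a))
               and (lab[a] == "out") == any(lab[d] == "in" for d in defeaters(a))
               for a in args):
            found.append(lab)
    return found

def extensions(args, defeat):
    assigns = status_assignments(args, defeat)
    und = lambda lab: {a for a, s in lab.items() if s == "undecided"}
    stable = [l for l in assigns if not und(l)]
    preferred = [l for l in assigns if not any(und(m) < und(l) for m in assigns)]
    grounded = [l for l in assigns if not any(und(m) > und(l) for m in assigns)]
    return stable, preferred, grounded

# Even cycle (A and B defeat each other): two stable/preferred assignments,
# while the grounded assignment leaves both arguments undecided.
print(extensions({"A", "B"}, {"A": {"B"}, "B": {"A"}}))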

Dung’s original definitions
Given ⟨Args, Defeat⟩, S ⊆ Args, A ∈ Args:
- S is conflict-free if no member of S defeats a member of S
- S defends A if all defeaters of A are defeated by a member of S
- S is admissible if it is conflict-free and defends all its members
- S is a preferred extension if it is ⊆-maximally admissible
- S is a stable extension if it is conflict-free and defeats all arguments outside it
- S is the grounded extension if S is the ⊆-smallest set such that A ∈ S iff S defends A
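
These set-based definitions translate directly into code; a small sketch (my own encoding again) that can be used to check candidate sets in the exercises below:

def conflict_free(S, defeat):
    return not any(b in defeat.get(a, set()) for a in S for b in S)

def defends(S, a, args, defeat):
    defeaters = {b for b in args if a in defeat.get(b, set())}
    return all(any(d in defeat.get(c, set()) for c in S) for d in defeaters)

def admissible(S, args, defeat):
    return conflict_free(S, defeat) and all(defends(S, a, args, defeat) for a in S)

# C defeats B, B defeats A: {A, C} is admissible, {A} is not (A is undefended).
args, defeat = {"A", "B", "C"}, {"B": {"A"}, "C": {"B"}}
print(admissible({"A", "C"}, args, defeat), admissible({"A"}, args, defeat))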

Admissible?
[Exercise, shown in several steps, on the example graph with arguments A-E]
S defends A if all defeaters of A are defeated by a member of S.
S is admissible if it is conflict-free and defends all its members.

Preferred?
[Same example graph]
S is preferred if it is maximally admissible.

Grounded?
[Same example graph]
S is grounded if it is the ⊆-smallest set s.t. A ∈ S iff S defends A.

Properties
- The grounded extension is unique
- Every stable extension is preferred (but not vice versa)
- There exists at least one preferred extension
- The grounded extension is a subset of all preferred and stable extensions
- ...

The ‘ultimate’ status of arguments (and conclusions)
With grounded semantics:
- A is justified if A ∈ g.e.
- A is overruled if A ∉ g.e. and A is defeated by the g.e.
- A is defensible otherwise
With preferred semantics:
- A is justified if A ∈ p.e. for all p.e.
- A is defensible if A ∈ p.e. for some but not all p.e.
- A is overruled otherwise (?)
In all semantics:
- φ is justified if φ is the conclusion of some justified argument
- φ is defensible if φ is not justified and φ is the conclusion of some defensible argument
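
Given the extensions of some semantics, the conclusion statuses follow mechanically; a sketch under assumed representations (extensions as sets of argument names, conc as a dict from argument names to conclusions):

def conclusion_statuses(extensions, conc):
    justified_args = set.intersection(*extensions)   # in every extension
    in_some = set.union(*extensions)                 # in at least one extension
    justified = {conc[a] for a in justified_args}
    defensible = {conc[a] for a in in_some - justified_args} - justified
    return justified, defensible

# Two preferred extensions that disagree on A vs B but share D:
print(conclusion_statuses([{"A", "D"}, {"B", "D"}],
                          {"A": "p", "B": "not-p", "D": "q"}))
# q is justified; p and not-p are merely defensible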

The status of arguments: proof theory
Argument games between a proponent and an opponent:
- Proponent starts with an argument
- Then each party replies with a suitable counterargument
- Possibly backtracking
- A winning criterion, e.g. the other player cannot move
An argument is (dialectically) provable iff proponent has a winning strategy in a game for it.

The G-game for grounded semantics
A sound and complete game:
- Each move replies to the previous move
- Proponent does not repeat moves
- Proponent moves strict defeaters, opponent moves defeaters
- A player wins iff the other player cannot move
Result: A is in the grounded extension iff proponent has a winning strategy in a game about A.
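
A sketch of the winning-strategy test for this game (my own recursive formulation over an abstract defeat graph; 'strict defeat' is read here as asymmetric defeat, which is an assumption of the sketch):

def pro_wins(a, args, defeat, used=None):
    """True iff proponent has a winning strategy in the G-game starting with a."""
    used = {a} if used is None else used      # proponent may not repeat moves
    def defeaters(x):
        return {y for y in args if x in defeat.get(y, set())}
    def strict_defeaters(x):
        return {y for y in defeaters(x) if y not in defeat.get(x, set())}
    # opponent may play any defeater b; proponent must answer each b with a
    # fresh strict defeater c from which proponent goes on to win
    return all(any(pro_wins(c, args, defeat, used | {c})
                   for c in strict_defeaters(b) - used)
               for b in defeaters(a))

# C defeats B, B defeats A: proponent wins for A and C but not for B,
# matching the grounded extension {A, C}.
args, defeat = {"A", "B", "C"}, {"B": {"A"}, "C": {"B"}}
print([x for x in sorted(args) if pro_wins(x, args, defeat)])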

A game tree
[Figure, built up move by move, of a game tree for an example graph with arguments A-F: proponent begins with P: A; opponent can reply with O: B or O: F; against O: F proponent moves P: E; against O: B proponent moves P: C, to which opponent replies O: D.]

The structure of arguments: current accounts
Assumption-based approaches (Dung-Kowalski-Toni, Besnard & Hunter, ...):
- K = theory
- A = assumptions, with a conflict relation on A
- R = inference rules (strict)
- An argument for p is a set A′ ⊆ A such that A′ ∪ K |-R p
- Arguments attack each other on their assumptions
Rule-based approaches (Pollock, Vreeswijk, DeLP, Prakken & Sartor, Defeasible Logic, ...):
- K = theory
- R = inference rules (strict and defeasible)
- K yields an argument for p if K |-R p
- Arguments attack each other on applications of defeasible inference rules

ASPIC system: overview
Argument structure based on Vreeswijk (1997): roughly trees, where
- nodes are wffs of a logical language L closed under negation
- links are applications of inference rules: strict (φ1, ..., φn → φ) or defeasible (φ1, ..., φn ⇒ φ)
- reasoning starts from a knowledge base K ⊆ L
Defeat based on Pollock; argument acceptability based on Dung (1995)

ASPIC system: structure of arguments
An argument A is:
- φ if φ ∈ K, with Conc(A) = {φ} and Sub(A) = {A}
- A1, ..., An → ψ if there is a strict inference rule Conc(A1), ..., Conc(An) → ψ, with Conc(A) = {ψ} and Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}
- A1, ..., An ⇒ ψ if there is a defeasible inference rule Conc(A1), ..., Conc(An) ⇒ ψ, with Conc(A) = {ψ} and Sub(A) = Sub(A1) ∪ ... ∪ Sub(An) ∪ {A}
A is strict if all members of Sub(A) apply strict rules; else A is defeasible.
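
A sketch of this construction by forward chaining (my own encoding, not the official ASPIC implementation: an argument is a nested tuple (conclusion, rule kind, subarguments) and a rule is a (premises, conclusion, kind) triple; it assumes the rule set builds no cyclic arguments, otherwise a depth cap would be needed):

from itertools import product

def build_arguments(K, rules):
    args = {(phi, "premise", ()) for phi in K}
    changed = True
    while changed:
        changed = False
        for prems, psi, kind in rules:
            # all ways of supporting each rule premise with an existing argument
            support = [[a for a in args if a[0] == p] for p in prems]
            if all(support):
                for combo in product(*support):
                    arg = (psi, kind, combo)
                    if arg not in args:
                        args.add(arg)
                        changed = True
    return args

# The example pictured below: K = {Q1, R1, R2}, R1, R2 => Q2 and Q1, Q2 => P.
K = {"Q1", "R1", "R2"}
rules = [(("R1", "R2"), "Q2", "defeasible"),
         (("Q1", "Q2"), "P", "defeasible")]
print(sorted({a[0] for a in build_arguments(K, rules)}))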

[Figure: an argument tree for P, with Q1, R1, R2 ∈ K and defeasible rules R1, R2 ⇒ Q2 and Q1, Q2 ⇒ P]

Domain-specific vs. general inference rules
With domain-specific inference rules:
R1: Bird ⇒ Flies
R2: Penguin ⇒ Bird
Penguin ∈ K
With general inference rules (R1: φ, φ → ψ ⇒ ψ; strict rules: all deductively valid inference rules):
Bird → Flies ∈ K
Penguin → Bird ∈ K
Penguin ∈ K
[Figure: the two resulting argument trees for Flies]

ASPIC system: attack and defeat
≥ is a preference ordering between arguments such that if A is strict and B is defeasible then A > B.
- A rebuts B if Conc(A) = ¬Conc(B′) for some B′ ∈ Sub(B); and B′ applies a defeasible rule; and not B′ > A
- A undercuts B if Conc(A) = ¬B′ for some B′ ∈ Sub(B); and B′ applies a defeasible rule
- A defeats B if A rebuts or undercuts B
(Naming convention implicit)
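
Continuing the sketch above, rebuttal (and hence defeat, if preferences are ignored) over those tuple-encoded arguments, under the assumed convention that prefixing '-' negates an atom:

def neg(phi):
    return phi[1:] if phi.startswith("-") else "-" + phi

def subarguments(arg):
    yield arg
    for sub in arg[2]:
        yield from subarguments(sub)

def rebuts(a, b):
    # a rebuts b on any subconclusion of b derived by a defeasible rule
    return any(a[0] == neg(s[0]) and s[1] == "defeasible"
               for s in subarguments(b))

# An argument for -Q2 rebuts the argument for P on its Q2 subargument:
p_arg = ("P", "defeasible",
         (("Q1", "premise", ()),
          ("Q2", "defeasible", (("R1", "premise", ()), ("R2", "premise", ())))))
print(rebuts(("-Q2", "premise", ()), p_arg))  # True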

[Figure: the argument for P from the earlier example, attacked on its subargument for Q2 by an argument for ¬Q2 built from the formulas V1-V3, S2, T1 and T2]

Argument acceptability Dung-style semantics and proof theory directly apply!

Additional properties (cf. Caminada & Amgoud 2007)
Let E be any stable, preferred or grounded extension:
1. If B ∈ Sub(A) and A ∈ E, then B ∈ E.
2. If the strict rules RS are closed under contraposition, then {φ | φ = Conc(A) for some A ∈ E} is closed under RS, and consistent if K is consistent.

Argument schemes
Many arguments (and attacks) follow patterns. Much work in argumentation theory (Perelman, Toulmin, Walton, ...):
- argument schemes
- critical questions
Recent applications in AI (& Law)

Argument schemes: general form
Premise 1, ..., Premise n
Therefore (presumably), conclusion
But also critical questions: negative answers are counterarguments.

Expert testimony (Walton 1996)
E is expert on D
E says that P
P is within D
Therefore (presumably), P is the case
Critical questions:
- Is E biased?
- Is P consistent with what other experts say?
- Is P consistent with known evidence?
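
Schemes of this kind are naturally represented as data; a sketch with an assumed structure, where critical questions are stored alongside the premises so that negative answers can be generated as attack points:

from dataclasses import dataclass

@dataclass
class Scheme:
    name: str
    premises: list
    conclusion: str
    critical_questions: list

expert_testimony = Scheme(
    name="expert testimony",
    premises=["{E} is expert on {D}", "{E} says that {P}", "{P} is within {D}"],
    conclusion="{P} is the case",
    critical_questions=["Is {E} biased?",
                        "Is {P} consistent with what other experts say?",
                        "Is {P} consistent with known evidence?"])

def instantiate(scheme, **bindings):
    fill = lambda template: template.format(**bindings)
    return ([fill(p) for p in scheme.premises],
            fill(scheme.conclusion),
            [fill(q) for q in scheme.critical_questions])

print(instantiate(expert_testimony, E="Prof. P", D="economics",
                  P="lower taxes increase productivity"))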

Witness testimony
Witness W says P
Therefore (presumably), P
Critical questions:
- Is W sincere? (veracity)
- Was P evidenced by W’s senses? (objectivity)
- Did P occur? (observational sensitivity)

Perception
P is observed
Therefore (presumably), P
Critical questions:
- Are the circumstances such that reliable observation of P is impossible?
- ...

Memory
P is recalled
Therefore (presumably), P
Critical questions:
- Was P originally based on beliefs of which one is false?
- ...

‘Unpacking’ the witness testimony scheme
Witness W says “I remember I saw P”
Therefore (presumably), W remembers he saw P (witness testimony)
Therefore (presumably), W saw P (memory)
Therefore (presumably), P (perception)
The critical questions attach to the individual steps:
- Is W sincere? (veracity)
- Was P evidenced by W’s senses? (objectivity)
- Did P occur? (observational sensitivity)

Applying commonsense generalisations
P
If P then usually Q
Therefore (presumably), Q
Example: Flees; if Flees then usually Consciousness-of-guilt (people who flee from a crime scene usually have consciousness of guilt); therefore (presumably), Consciousness-of-guilt.
Critical questions:
- Are there exceptions to the generalisation? Exceptional classes of people may have other reasons to flee: illegal immigrants, customers of prostitutes, ...

Arguments from consequences
Action A brings about G; G is good (bad)
Therefore (presumably), A should (not) be done
Critical questions:
- Does A also have bad (good) consequences?
- Are there other ways to bring about G?
- ...

Other work on argument-based inference
- Reasoning about priorities and defeat
- Abstract support relations between arguments
- Gradual defeat
- Other semantics
- Dialectical proof theories
- Combining modes of reasoning
- ...

Part 2: Dialogue

‘Argument’ is ambiguous:
As inferential structure: single agents, (nonmonotonic) logic, a fixed information state.
As a form of dialogue: multiple agents, dialogue theory, a changing information state.

Example
P: Tell me all you know about recent trading in explosive materials (request)
O: No I won’t (reject)
P: Why don’t you want to tell me?
O: Since I am not allowed to tell you
P: Why aren’t you allowed to tell me?
O: Since sharing such information could endanger an investigation
P: You may be right in general (concede), but in this case there is an exception, since this is a matter of national importance
O: Why is this a matter of national importance?
P: Since we have heard about a possible terrorist attack
O: I concede that there is an exception, so I retract that I am not allowed to tell you. I will tell you on the condition that you don’t exchange the information with other police officers (offer)
P: OK, I agree (offer accepted)

Types of dialogues (Walton & Krabbe)

Dialogue type       | Dialogue goal           | Initial situation
Persuasion          | resolution of conflict  | conflict of opinion
Negotiation         | making a deal           | conflict of interest
Deliberation        | reaching a decision     | need for action
Information seeking | exchange of information | personal ignorance
Inquiry             | growth of knowledge     | general ignorance

Dialogue systems (according to Carlson 1983)
Dialogue systems define the conditions under which an utterance is appropriate:
- An utterance is appropriate if it promotes the goal of the dialogue in which it is made.
- Appropriateness is defined not at the speech-act level but at the dialogue level (the dialogue-game approach).
- The protocol should promote the goal of the dialogue.

Formal dialogue systems
- Topic language, with a logic (possibly nonmonotonic)
- Communication language: locution + content (from the topic language), with a protocol: rules for when utterances may be made (the protocol should promote the goal of the dialogue)
- Effect rules (e.g. on agents’ commitments)
- Termination and outcome rules

Negotiation
- Dialogue goal: making a deal
- Participants’ goals: maximise individual gain
- Typical communication language: request p, offer p, accept p, reject p, ...

Persuasion
- Participants: proponent (P) and opponent (O) of a dialogue topic T
- Dialogue goal: resolve the conflict of opinion on T
- Participants’ goals: P wants O to accept T; O wants P to give up T
- Typical speech acts: claim p, concede p, why p, p since S, retract p, deny p, ...
Compare argument games, whose goal is to verify the logical status of an argument or proposition relative to a given theory.

Standards for dialogue systems
- Argument games: soundness and completeness wrt some logical semantics
- Dialogue systems: effectiveness wrt the dialogue goal (efficiency, relevance, termination, ...) and fairness wrt the participants’ goals (can everything relevant be said?, ...)

Some standards for persuasion systems
Correspondence with the participants’ beliefs:
- If the union of the beliefs implies p, can/will agreement on p result?
- If the parties agree that p, does the union of their beliefs imply p?
- ...
Correspondence with the ‘dialogue theory’:
- If the union of the commitments implies p, can/will agreement on p result?
- ...

A communication language (Dijkstra et al. 2007)

Speech act    | Attacking replies                         | Surrendering replies
request(φ)    | offer(φ′), reject(φ)                      | -
offer(φ)      | offer(φ′) (φ ≠ φ′), reject(φ)             | accept(φ)
reject(φ)     | offer(φ′) (φ ≠ φ′), why-reject(φ)         | -
accept(φ)     | -                                         | -
why-reject(φ) | claim(φ′)                                 | -
claim(φ)      | why(φ)                                    | concede(φ)
why(φ)        | φ since S (an argument)                   | retract(φ)
φ since S     | why(φ′) (φ′ ∈ S), φ′ since S′ (a defeater)| concede(φ), concede(φ′) (φ′ ∈ S)
concede(φ)    | -                                         | -
retract(φ)    | -                                         | -
deny(φ)       | -                                         | -

A protocol (Dijkstra et al. 2007)
- Start with a request
- Reply to a previous move of the other agent
- Pick your replies from the table
- Finish persuasion before resuming negotiation
Turn-taking:
- in negotiation: after each move
- in persuasion: various rules possible
Termination:
- in negotiation: when an offer is accepted or someone withdraws
- in persuasion: when the main claim is retracted or conceded
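
A sketch of how the reply table can drive a protocol check (the dict abbreviates the table above to locution names only; the encoding is my own):

REPLIES = {
    "request":    {"attack": ["offer", "reject"],     "surrender": []},
    "offer":      {"attack": ["offer", "reject"],     "surrender": ["accept"]},
    "reject":     {"attack": ["offer", "why-reject"], "surrender": []},
    "why-reject": {"attack": ["claim"],               "surrender": []},
    "claim":      {"attack": ["why"],                 "surrender": ["concede"]},
    "why":        {"attack": ["since"],               "surrender": ["retract"]},
    "since":      {"attack": ["why", "since"],        "surrender": ["concede"]},
}

def legal_reply(previous, move):
    options = REPLIES.get(previous, {"attack": [], "surrender": []})
    return move in options["attack"] or move in options["surrender"]

print(legal_reply("claim", "why"), legal_reply("request", "accept"))  # True False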

Example dialogue formalised
P: request to tell
O: reject to tell
P: why-reject to tell?
[Embedded persuasion ...]
O: offer to tell if no further exchange
P: accept to tell if no further exchange

Persuasion part formalised
O: Claim Not allowed to tell
P: Why not allowed to tell?
O: Not allowed to tell since Telling endangers investigation & What endangers an investigation is not allowed (R1)
P: Concede What endangers an investigation is not allowed
P: Exception to R1 since National importance & National importance → Exception to R1
O: Why National importance?
P: National importance since Terrorist threat & Terrorist threat → National importance
O: Concede Exception to R1
O: Retract Not allowed to tell

Theory building in dialogue
In my 2005 approach to (persuasion) dialogue:
- Agents build a joint theory during the dialogue: a dialectical graph
- Moves are operations on the joint theory

[Figure, built up move by move: the dialectical graph for the example persuasion. The claim ‘Not allowed to tell’ is challenged (why) and then supported (since) by ‘Telling endangers investigation’ plus R1: ‘What endangers an investigation is not allowed’; R1 is conceded; ‘Exception to R1’ is then supported by ‘National importance’ plus R2: ‘national importance ⇒ not R1’; ‘National importance’ is challenged (why) and supported (since) by ‘Terrorist threat’ plus ‘Terrorist threat ⇒ national importance’; the exception is conceded and the main claim retracted.]

Research issues
- Investigation of protocol properties: mathematical proof or experimentation
- Combinations of dialogue types (deliberation!)
- Multi-party dialogues
- Dialogical agent behaviour (strategies)
- ...

Further information