
1 Paul Bello 1,2, Yingrui Yang 2, Selmer Bringsjord 2,3 & Kostas Arkoudas 2,3 Air Force Research Laboratory – Information Directorate 1 Department of Cognitive Science 2 Department of Computer Science 3 Rensselaer Polytechnic Institute Paul.Bello@rl.af.mil yangyri@rpi.edu brings@rpi.edu arkouk@rpi.edu Towards a Psychology of Rational Agency

2 Presentation Summary Some Questions… What Can AI/Computer Science Tell Us? What Can Philosophy/Economics Tell Us? What Can Psychology Tell Us? Is There a Synthesis? Humble (Yet Promising) Beginnings… Implications and Applications…

3 Some Questions… What is an agent, and what stance should we take on mental attitudes and constructions? If we admit these mental representations, what should they look like, and how do they guide behavior of the individual possessing them? What happens when agents interact, either cooperatively, or competitively? How do the mental representations of each individual interact holistically? CAN WE MODEL ANY OF THIS????

4 AI and Computer Science Taxonomy of agents in Russell & Norvig –Sensors, effectors, a “processing unit.” From reflex agents to BOID. Variety of models –Bayesian models for belief; decision-theoretic models for intentions and desires. –Logical models, ranging from the exceedingly simple (propositional calculus) to the exceedingly complicated (multi-modal logics). Interaction –Mostly goal-driven planning, heuristic search, probabilistic inference. Models? –Of course. That’s AI’s bread and butter. Issues –Not well informed by the psychological dimension. Lip service paid to philosophy and economic theory. Usually heuristics and short-cuts.

5 Philosophy and Economics Vastly more complex definitions –Philosophy: Intentionality, Utilitarianism, etc. Argument between PSSH and connectionists. –Economics: Rationality and Four Requirements Usually modeled using foundational mathematics –Philosophical logic, set theory, and representations of uncertainty. Emphasis on remaining true to philosophical roots. Interactions –Utilitarian semantics, Adam Smith, Game theory Models? –Yup. Issues –Overly formal. Not psychologically plausible. Optimality emphasized. Philosophers not concerned with intractability.

6 Psychology and Cognitive Science Usually left to the more “philosophically oriented.” Closest thing is the debate between psychological paradigms. Cognitivism: mental representations. MM (mental models), ML (mental logic), concept hierarchies, etc. Interactions: behavioral game theory. Models? –A ton. Some with representation, some without. Overall, captures some general phenomena in human behavior/cognition/mental processing. Issues –Affect-by-affect approach. No systematicity. Not informed by normative theory (most of the time). Hard to capture computationally.

7 Synthesis Yes! Resoundingly so. Need patience to bridge disciplinary gaps, and find the non-null intersection. Focus of effort: Deontic preference logic –How do we reason about context-dependent situations, including social settings? –Bridge the gap between individual cognition and group cognition. Approach: Narrow the divide between the normative and the psychological as much as possible, then implement the narrowed system in a computational architecture.

8 General Algorithm Leverage RAIR lab expertise in developing machine reasoning systems. Produce a natural deduction system for a deontic preference logic. Perform experiments in the as-yet-empty domain of the psychology of philosophical logic. Focus on the deontic “distinctions” present in the literature on philosophical logic. Rework the natural deduction system so that it is informed by the experimental results.

9 Hold On a Sec… What’s this natural deduction stuff? What’s deontic logic about, and how is it useful?

10 Natural Deduction An intuitive framework for suppositional reasoning. –Assumptions, discharging, and conditional introduction. –Amenable to the goal-decomposition paradigm pioneered by Herb Simon & co. (means-ends analysis). –Some more complex systems, like Hyperproof, are suitable for the all-important “heterogeneous” style of reasoning that we claim is what most folks actually do. Consistent with psychological models of reasoning. –Braine’s mental logic –Rips’s production-style mental logic
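The suppositional pattern described above (assume, derive, discharge) can be illustrated with a minimal sketch. This encoding is ours, not the RAIR lab's system: formulas are nested tuples, and two helper functions stand in for the modus ponens and conditional-introduction rules.

```python
# Minimal sketch (not the authors' system) of suppositional reasoning:
# assume a premise, derive inside the subproof, then discharge the
# assumption via conditional introduction. Formulas are nested tuples.

def modus_ponens(conditional, antecedent):
    """From ('->', p, q) and p, conclude q."""
    op, p, q = conditional
    assert op == '->' and p == antecedent
    return q

def conditional_intro(assumption, conclusion):
    """Discharge 'assumption' after deriving 'conclusion' under it."""
    return ('->', assumption, conclusion)

# Goal: from p -> q and q -> r, derive p -> r (hypothetical syllogism).
p, q, r = 'p', 'q', 'r'
premise1 = ('->', p, q)
premise2 = ('->', q, r)

# Subproof: assume p ...
step1 = modus_ponens(premise1, p)      # q
step2 = modus_ponens(premise2, step1)  # r
# ... discharge the assumption:
theorem = conditional_intro(p, step2)
print(theorem)  # ('->', 'p', 'r')
```

The goal-decomposition flavor is visible in the structure: the conditional conclusion is reached by decomposing it into a subgoal (derive r under assumption p).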

11 Deontic Logic (SDL) Logic of norms (obligations). Introduces a new modal operator O(p), read “it ought to be the case that p.” Separates possible states of affairs into “deontically ideal” and non-ideal situations. Closed under the simple rules –Modus ponens: from p → q and p, infer q –Necessitation of obligation: from any theorem p, infer O(p) –Distribution of obligation: O(p → q) gives O(p) → O(q) –Non-contradictory obligation (the D axiom): it can’t be the case that both O(p) and O(¬p) –Normal propositional rules
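The distribution rule above can be exercised mechanically. The following is a rough sketch of our own (a naive forward closure over tuple-encoded formulas, not a real theorem prover): from O(p → q) and O(p) it derives O(q).

```python
# Hedged sketch: naive forward closure of two of SDL's rules over
# formulas encoded as tuples -- ('O', p) for obligation,
# ('->', p, q) for implication, strings for atoms.

def sdl_closure(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        new = set()
        for f in facts:
            # Distribution: O(p -> q) gives O(p) -> O(q)
            if f[0] == 'O' and isinstance(f[1], tuple) and f[1][0] == '->':
                new.add(('->', ('O', f[1][1]), ('O', f[1][2])))
        for f in facts:
            # Modus ponens at the propositional level
            if f[0] == '->' and f[1] in facts:
                new.add(f[2])
        if not new <= facts:
            facts |= new
            changed = True
    return facts

derived = sdl_closure({('O', ('->', 'p', 'q')), ('O', 'p')})
print(('O', 'q') in derived)  # True
```

This two-step pattern (distribute, then detach) is exactly what the deontic paradoxes on the next slides exploit.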

12 Deontic Paradox 1 You should not insult someone. If you insult someone, you should do it in private. Insulting someone in private logically implies that you insult them. You insult someone. Formally: O(¬i); i → O(p); p → i; i
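The trouble can be made explicit as a short SDL derivation (a sketch of the standard argument, using the premise symbols i for "insult" and p for "insult in private"):

```latex
\begin{align*}
1.\ & O(\neg i)             && \text{premise}\\
2.\ & i \rightarrow O(p)    && \text{premise}\\
3.\ & p \rightarrow i       && \text{premise}\\
4.\ & i                     && \text{premise}\\
5.\ & O(p)                  && \text{2, 4, modus ponens}\\
6.\ & O(i)                  && \text{3, 5, necessitation + distribution applied to } p \rightarrow i\\
7.\ & O(i) \wedge O(\neg i) && \text{1, 6 --- violates the D axiom}
\end{align*}
```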

13 Dyadic Deontic Logic Uh oh. Lots of deontic material talks about “sub-ideal” situations. SDL fails miserably on this kind of material. What to do? Dyadic deontic logic: O(a|b): if b (is done), then a ought to be (done). Basically: in the best non-ideal situations where b holds, a should hold as well. Regular SDL is subsumed: O(a|⊤) = O(a)
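The preference-based reading of O(a|b) can be sketched concretely. The encoding below is ours, not the talk's formalism: worlds are dictionaries with an explicit ideality rank (lower = more ideal), and O(a|b) holds iff a is true at every best-ranked world satisfying b.

```python
# Sketch (our encoding, not the talk's formalism): worlds are dicts of
# truth values; 'rank' orders ideality (lower = more ideal). O(a|b)
# holds iff a is true at every best world satisfying b.

def obligatory(a, b, worlds):
    b_worlds = [w for w in worlds if b(w)]
    best_rank = min(w['rank'] for w in b_worlds)
    return all(a(w) for w in b_worlds if w['rank'] == best_rank)

worlds = [
    {'murder': False, 'punished': False, 'rank': 0},  # ideal
    {'murder': True,  'punished': True,  'rank': 1},  # best sub-ideal
    {'murder': True,  'punished': False, 'rank': 2},
]

# O(punished | murder): among the murder-worlds, the best one punishes.
print(obligatory(lambda w: w['punished'], lambda w: w['murder'], worlds))  # True
# SDL as the special case O(a | T):
print(obligatory(lambda w: not w['murder'], lambda w: True, worlds))  # True
```

The second call shows the subsumption claimed on the slide: conditioning on the tautology ranges over all worlds, so O(a|⊤) collapses to monadic O(a).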

14 Deontic Paradox 2 A person should not commit murder. It should be that if someone doesn’t commit murder, he should not be punished for it. If the person commits murder, he should be punished for it. A suspect commits murder. Formally: O(¬m); O(¬m → ¬p); m → O(p); m
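Again the conflict can be spelled out as a short SDL derivation (a sketch of the standard Chisholm-style argument, with m for "murder" and p for "punished"):

```latex
\begin{align*}
1.\ & O(\neg m)                       && \text{premise}\\
2.\ & O(\neg m \rightarrow \neg p)    && \text{premise}\\
3.\ & m \rightarrow O(p)              && \text{premise}\\
4.\ & m                               && \text{premise}\\
5.\ & O(\neg m) \rightarrow O(\neg p) && \text{2, distribution}\\
6.\ & O(\neg p)                       && \text{1, 5, modus ponens}\\
7.\ & O(p)                            && \text{3, 4, modus ponens}\\
8.\ & O(p) \wedge O(\neg p)           && \text{6, 7 --- violates the D axiom}
\end{align*}
```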

15 Deontic Paradox 3 Usually, you should not insult someone. When someone harms the public interest, you should insult him. How to model this???

16 1 & 3 Combined Usually you should not insult someone. If you insult someone, you should do it in private. Insulting someone in private implies that you insult him. If someone harms the public interest, then you should insult him.

17 Van der Torre’s DIODE Logic Syntax –Standard propositional language. –A finite set of violation propositions, one per deontic statement. –A finite set of exception propositions, one per deontic statement. –A set of background facts. –A set of conditional obligations. Semantics –Basically ensure that situations are ordered by increasing numbers of violations; preferred situations have the fewest violations. –Semantics for exceptions: normal worlds are separated from “exceptional circumstances.” –Conditions to ensure the proper mixing of these two notions.
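The violation-counting order at the heart of the semantics can be sketched for the insult example. The violation names (V1, V2) follow the slides; the encoding is a rough illustration of ours, not van der Torre's formal semantics.

```python
# Rough sketch of the ordering idea: situations with fewer violated
# conditional obligations are preferred. V1/V2 follow the slides;
# the encoding itself is ours, not van der Torre's formalism.

# Conditional obligations as (condition, consequent) pairs over
# boolean-valued situation dicts.
obligations = [
    (lambda s: True,        lambda s: not s['insult']),  # V1: don't insult
    (lambda s: s['insult'], lambda s: s['private']),     # V2: if you insult, do it privately
]

def violations(situation):
    """Count obligations whose condition holds but whose consequent fails."""
    return sum(1 for cond, ought in obligations
               if cond(situation) and not ought(situation))

situations = [
    {'insult': False, 'private': False},  # ideal: 0 violations
    {'insult': True,  'private': True},   # sub-ideal: violates V1 only
    {'insult': True,  'private': False},  # worst: violates V1 and V2
]
for s in sorted(situations, key=violations):
    print(s, violations(s))
```

Sorting by violation count yields exactly the preference order the slide describes: the no-insult world first, then private insult, then public insult.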

18 Solutions to Deontic Paradoxes [Figure: possible-worlds diagrams for the two paradoxes, ordering situations by their violations (V1, V2, V3) and separating normal worlds from exceptional ones (Ex1). One panel treats the insult paradox with background facts F = {i}; the other treats the murder paradox with F = {m}.]

19 Partial Taxonomy Obligations –Unconditional vs. Conditional (CTD problem) –Normal vs. Exceptional (defeasibility vs. CTD) –Prima Facie vs. conditional (overriding conditions) –….

20 A Sample ND Schema [Figure: a natural-deduction proof schema for the insult example, tracking the violation propositions V1, V2 against the background facts F = {i} to derive O(i).]

21 Decisions, Decisions… Well, we’ve talked about reasoning, and even about preference… What about decisions? –Where’s the probability? Let’s have a look at some new interesting work…

22 Kahneman/Tversky 1 Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: Program A: 200 people will be saved. Program B: 600 people will be saved with probability 1/3 and 0 people will be saved with probability 2/3. Which of the two programs would you favor?

23 Kahneman/Tversky 2 Imagine that the U.S. is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows: Program C: 400 people will die. Program D: 0 people die with 1/3 probability and 600 people die with 2/3 probability. Which of the two programs would you favor?
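The two framings above describe numerically identical prospects; only the wording ("saved" vs. "die") changes. A quick check of the arithmetic:

```python
# The "saved" framing (Programs A/B) and the "die" framing (C/D)
# describe the same expected outcomes out of 600 people at risk.
total = 600

ev_saved_A = 200                       # 200 saved for sure
ev_saved_B = (1 * 600 + 2 * 0) / 3     # 600 saved w.p. 1/3, 0 w.p. 2/3

ev_dead_C = 400                        # 400 die for sure
ev_dead_D = (1 * 0 + 2 * 600) / 3      # 0 die w.p. 1/3, 600 w.p. 2/3

print(ev_saved_A, ev_saved_B)                 # 200 200.0
print(total - ev_dead_C, total - ev_dead_D)   # 200 200.0
```

Classical expected-utility theory therefore treats A/C and B/D as the same choices, which is why the framing-driven preference reversal is a puzzle for it.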

24 Bello/Yang 1

25 Bello/Yang 2

26 Kahneman/Tversky Results

27 Bello/Yang Results ? However, it would not be unreasonable to expect results analogous to the Kahneman/Tversky results, yet explainable without resorting to traditional decision-theoretic devices.

28 Implications and Applications Implications: Next-Generation Logic-Based AI –A synthesis of philosophical, psychological, and computational dimensions for higher-order cognitive function. Reasoning and Decision-Making! –MARMML as an embodiment and a test-bed. Application: RASCALS for… –Third-Generation Wargaming –Intelligence Analysis –Educational Technologies

29 Wargaming: A Cognitive Approach [Diagram: a game environment (terrain model, physics, urban models, infrastructure, environment, entity interdependence) exchanging input and output with a cognitive system comprising strategic decisions, long-term planning, human and machine reasoning, computational cognitive modeling, perception & action, resource management, opportunism, decision problem formation, high-level and low-level tactics, knowledge, and ethical norms.]

30 SLATE

31 Educational Technology

