Pat Langley School of Computing and Informatics Arizona State University Tempe, Arizona USA Modeling Social Cognition in a Unified Cognitive Architecture.


Pat Langley School of Computing and Informatics Arizona State University Tempe, Arizona USA Modeling Social Cognition in a Unified Cognitive Architecture Thanks to D. Choi, T. Konik, N. Li, D. Shapiro, and D. Stracuzzi for their contributions. This talk reports research partly funded by grants from DARPA IPTO, which is not responsible for its contents.

Cognitive Architectures A cognitive architecture (Newell, 1990) is the infrastructure for an intelligent system that remains constant across domains: the memories that store domain-specific content; the system's representation and organization of knowledge; the mechanisms that use this knowledge in performance; and the processes that learn this knowledge from experience. An architecture typically comes with a programming language that eases construction of knowledge-based systems. Research in this area incorporates many ideas from psychology about the nature of human thinking.

The ICARUS Architecture ICARUS (Langley, 2006) is a computational theory of the human cognitive architecture that posits:
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance
It shares these assumptions with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993).

Cascaded Integration in ICARUS Like other unified cognitive architectures, ICARUS incorporates a number of distinct modules: conceptual inference, skill execution, problem solving, and learning. ICARUS adopts a cascaded approach to integration in which lower-level modules produce results for higher-level ones.

An ICARUS Agent for Urban Driving Consider driving a vehicle in a city, which requires: selecting routes, obeying traffic lights, avoiding collisions, being polite to others, finding addresses, staying in the lane, parking safely, stopping for pedestrians, following other vehicles, and delivering packages. These tasks range from low-level execution to high-level reasoning.

Structure and Use of Conceptual Memory ICARUS organizes conceptual memory in a hierarchical manner. Conceptual inference occurs from the bottom up, starting from percepts to produce high-level beliefs about the current state.
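The bottom-up inference cycle described above can be sketched in a few lines of Python. This is an illustrative approximation, not the ICARUS implementation: the dictionary representation, the `infer_beliefs` helper, and all concept and percept names are invented for the example.

```python
# Hypothetical sketch of ICARUS-style bottom-up conceptual inference:
# each concept is defined over percepts or lower-level concepts, and
# inference matches definitions from percepts upward to produce beliefs.

def infer_beliefs(percepts, concepts):
    """Repeatedly match concept definitions until no new beliefs appear."""
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for concept, parts in concepts.items():
            if concept not in beliefs and all(p in beliefs for p in parts):
                beliefs.add(concept)   # a higher-level belief is inferred
                changed = True
    return beliefs

# Invented concept hierarchy for a driving agent:
concepts = {
    "in-lane":        ["car-position", "lane-boundaries"],
    "safe-distance":  ["car-ahead", "gap-large"],
    "driving-safely": ["in-lane", "safe-distance"],
}
percepts = {"car-position", "lane-boundaries", "car-ahead", "gap-large"}
beliefs = infer_beliefs(percepts, concepts)
```

The fixpoint loop mirrors the slide's point that inference starts from percepts and climbs the hierarchy: `driving-safely` becomes believable only after the intermediate beliefs `in-lane` and `safe-distance` have been inferred.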

ICARUS Skills Build on Concepts Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts. ICARUS stores skills in a hierarchical manner that links to concepts.

Skill Execution in ICARUS Skill execution occurs from the top down, starting from goals to find applicable paths through the skill hierarchy. This process repeats on each cycle to produce goal-directed but reactive behavior, biased toward continuing initiated skills.
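Top-down skill selection of this kind might be sketched as follows. The skill names, the precondition-set encoding, and the `select_action` helper are all assumptions made for illustration, not the architecture's actual representation.

```python
# Illustrative sketch of top-down skill execution: starting from a goal,
# descend through a skill hierarchy to a primitive action whose
# preconditions hold in the current belief state.

skills = {
    # skill: (preconditions, subskills-list or primitive action string)
    "drive-to-goal": (set(), ["stay-in-lane", "follow-car"]),
    "stay-in-lane":  ({"in-lane"}, "steer-straight"),
    "follow-car":    ({"car-ahead"}, "match-speed"),
}

def select_action(goal, beliefs):
    """Descend the hierarchy; return the first applicable primitive action."""
    pre, body = skills[goal]
    if not pre <= beliefs:
        return None                  # skill not applicable in this state
    if isinstance(body, str):
        return body                  # reached a primitive action
    for sub in body:
        action = select_action(sub, beliefs)
        if action:
            return action
    return None

action = select_action("drive-to-goal", {"in-lane", "car-ahead"})
```

Calling this once per cycle against the current beliefs yields the goal-directed but reactive behavior the slide describes: if `in-lane` ceases to hold, the search falls through to the next applicable subskill on the following cycle.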

Execution and Problem Solving in ICARUS [Figure: reactive execution applies the skill hierarchy to the problem; on an impasse, control passes to problem solving, which feeds an executed plan back.] Problem solving involves means-ends analysis that chains backward over skills and concept definitions, executing skills whenever they become applicable.
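A minimal means-ends sketch of the backward chaining just described (assumed for illustration, not the ICARUS code): when a skill's preconditions hold it is executed, and otherwise each unmet precondition becomes a subgoal. The operator table and goal names are invented.

```python
# Backward chaining from a goal over skill preconditions, executing
# each skill as soon as its preconditions are satisfied.

operators = {
    # goal achieved by a skill: preconditions that must hold first
    "engine-running": {"have-key"},
    "at-destination": {"engine-running"},
}

def means_ends(goal, state, trace=None):
    """Achieve unmet preconditions recursively, then the goal itself."""
    trace = [] if trace is None else trace
    if goal in state:
        return trace
    for pre in operators.get(goal, set()):
        if pre not in state:
            means_ends(pre, state, trace)   # subgoal on the unmet condition
    state.add(goal)                          # "execute" the skill for goal
    trace.append(goal)
    return trace

plan = means_ends("at-destination", {"have-key"})
```

The returned trace records the order in which skills became applicable and were executed, which is the behavior the slide attributes to the problem solver.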

ICARUS Learns Skills from Problem Solving [Figure: the same execution/problem-solving loop, extended with a skill-learning module that stores the results of successful problem solving in the skill hierarchy.]

An ICARUS Agent for Urban Combat

Challenge: Thinking about Others We designed ICARUS to model intelligent behavior in embodied agents, but our work to date has treated them in isolation. The framework can deal with other independent agents, but only by viewing them as objects in the environment. People, however, can reason more deeply about the goals and actions of others, then use their inferences to make decisions. Adding this ability to ICARUS will require knowledge, but it may also demand extensions to the architecture.

An Urban Driving Example You are driving in a city behind another vehicle when a dog suddenly runs across the road ahead of it. You do not want to hit the dog and are in no danger of doing so, but you assume the other driver shares this goal. You reason that, if you were in his situation, you would swerve or brake to avoid hitting the dog. This leads you to predict that the other car may soon slow down very rapidly. Since you have another goal – to avoid collisions – you slow down in case that event happens.

Social Cognition in ICARUS For ICARUS to handle social cognition of this sort, it should: imagine itself in another agent's physical/social situation; infer the other agent's goals, either by default reasoning or based on its behavior; carry out mental simulation of the other agent's plausible actions and their effects on the world; and take high-probability trajectories into account in selecting which actions to execute itself. Each of these abilities requires changes to the architecture of ICARUS, not just its knowledge base.

Architectural Extensions In response, we are planning a number of changes to ICARUS: replace deductive inference with generation of plausible beliefs (e.g., others' goals) via abductive inference; support use of the agent's own concepts/skills to reason about others by encoding alternate worlds with inheritance; extend the problem solver to support forward-chaining search via mental simulation using repeated lookahead; and revise skill execution to consider the probability of future events, using the desirability of likely trajectories. These extensions will let ICARUS exhibit social cognition, but they should also support many other abilities.
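The forward-chaining lookahead extension can be sketched as a rollout: simulate each candidate action through a transition model and pick the action whose simulated trajectory is most desirable. The `model`, the reward values, and the driving states here are all hypothetical stand-ins, not part of ICARUS.

```python
# Hedged sketch of mental simulation via repeated lookahead: roll each
# candidate action forward through a toy transition model and choose
# the action with the most desirable simulated trajectory.

def simulate(state, action, model, depth=3):
    """Score a trajectory by summing rewards along a short rollout."""
    total = 0.0
    for _ in range(depth):
        state, reward = model(state, action)
        total += reward
        action = "coast"             # assumed default follow-up action
    return total

def choose_action(state, actions, model):
    return max(actions, key=lambda a: simulate(state, a, model))

# Toy model for the dog-in-the-road example: braking avoids a collision.
def model(state, action):
    if state == "dog-ahead" and action == "brake":
        return "safe", 1.0
    if state == "dog-ahead":
        return "collision", -10.0
    return state, 0.0

best = choose_action("dog-ahead", ["brake", "coast"], model)
```

Applying the same rollout to a model of the *other* driver is what lets the agent predict that the car ahead may brake, which is the point of the urban driving example above.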

Automating Social Cognition But most social cognition seems more routine, in that it does not require complex reasoning or mental simulation. In response, we are extending ICARUS's learning mechanisms to: detect when the agent's interactions with others lead to an undesirable situation (e.g., a collision); analyze how this occurred and use counterfactual reasoning about how it might have been avoided (e.g., by slowing down); and store new skills that avoid the undesired situation in an automated, reactive manner. Over time, the agent will come to behave in socially relevant ways without explicit reasoning or simulation.
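The caching step at the end of that learning loop might look like the following. This is an illustrative sketch, not the ICARUS mechanism: the rule table, the situation strings, and both helper functions are assumptions.

```python
# After counterfactual analysis of an undesirable episode, cache a
# reactive rule mapping the triggering situation to the action that
# would have avoided the bad outcome; later cycles use the rule
# directly, with no reasoning or simulation.

learned_rules = {}

def learn_from_episode(situation, avoiding_action):
    """Store a reactive skill produced by counterfactual analysis."""
    learned_rules[situation] = avoiding_action

def react(situation, default_action):
    """Routine social behavior: apply a cached rule if one exists."""
    return learned_rules.get(situation, default_action)

# Episode: coasting behind a braking car caused a collision; analysis
# concludes that slowing down would have avoided it.
learn_from_episode("car-ahead-braking", "slow-down")
```

The lookup in `react` captures the slide's claim that, over time, the agent behaves in socially appropriate ways without invoking the expensive simulation machinery.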

Concluding Remarks ICARUS is a unified theory of the cognitive architecture that: includes hierarchical memories for concepts and skills; interleaves conceptual inference with reactive execution; resorts to problem solving when it lacks routine skills; and learns such skills from the successful resolution of impasses. We have developed agents for a variety of simulated physical environments, including urban driving. We are extending ICARUS to reason about others' situations and goals, predict their behavior, and select appropriate responses.

End of Presentation