A Cognitive Architecture for Physical Agents

Pat Langley
Computational Learning Laboratory
Center for the Study of Language and Information
Stanford University, Stanford, California USA

Thanks to D. Choi, K. Cummings, N. Nejati, S. Rogers, S. Sage, and D. Shapiro for their contributions. This talk reports research funded by grants from DARPA IPTO and the National Science Foundation, which are not responsible for its contents.

Cognitive Systems

The original goal of artificial intelligence was to design and implement computational artifacts that:
- handled difficult tasks that require cognitive processing;
- combined many capabilities into integrated systems;
- provided insights into the nature of mind and intelligence.

Instead, modern AI has divided into many subfields that care little about cognition, systems, or intelligence. But the challenge remains, and we need far more research on cognitive systems.

The Fragmentation of AI Research

[Diagram: AI research fragmented into separate subfields: action, perception, reasoning, learning, planning, language]

The Domain of In-City Driving

Consider driving a vehicle in a city, which requires:
- selecting routes
- obeying traffic lights
- avoiding collisions
- being polite to others
- finding addresses
- staying in the lane
- parking safely
- stopping for pedestrians
- following other vehicles
- delivering packages

These tasks range from low-level execution to high-level reasoning.

Newell's Critique

In 1973, Allen Newell argued, "You can't play twenty questions with nature and win." Instead, he proposed that we:
- move beyond isolated phenomena and capabilities to develop complete models of intelligent behavior;
- demonstrate our systems' intelligence on the same range of domains and tasks as humans can handle;
- view artificial intelligence and cognitive psychology as close allies with distinct but related goals;
- evaluate these systems in terms of generality and flexibility rather than success on a single class of tasks.

However, there are different paths toward achieving such systems.

A System with Communicating Modules

[Diagram: action, perception, reasoning, learning, planning, and language modules exchanging messages directly, as in software engineering / multi-agent systems]

A System with Shared Short-Term Memory

[Diagram: the same modules reading from and writing to a shared store of short-term beliefs and goals, as in blackboard architectures]

Integration vs. Unification

Newell's vision for research on theories of intelligence was that:
- cognitive systems should make strong theoretical assumptions about the nature of the mind;
- theories of intelligence should change only gradually, as new structures or processes are determined necessary;
- later design choices should be constrained heavily by earlier ones, not made independently.

A successful framework is all about mutual constraints, and it should provide a unified theory of intelligent behavior. He associated these aims with the idea of a cognitive architecture.

A System with Shared Long-Term Memory

[Diagram: the modules now also share long-term memory structures in addition to short-term beliefs and goals, as in cognitive architectures]

A Constrained Cognitive Architecture

[Diagram: the same organization, with the modules more heavily constrained by the shared short-term beliefs and goals and long-term memory structures]

The ICARUS Architecture

In this talk I will use one such framework, ICARUS, to illustrate the advantages of cognitive architectures. ICARUS incorporates a variety of assumptions from psychological theories; the most basic are that:
1. Short-term memories are distinct from long-term stores.
2. Memories contain modular elements cast as list structures.
3. Long-term structures are accessed through pattern matching.
4. Cognition occurs in retrieval/selection/action cycles.
5. Performance and learning compose elements in memory.

These claims give ICARUS much in common with other cognitive architectures like ACT-R, Soar, and Prodigy.

Architectural Commitment to Memories

A cognitive architecture makes a specific commitment to:
- long-term memories that store knowledge and procedures;
- short-term memories that store beliefs and goals;
- sensori-motor memories that hold percepts and actions.

Each memory holds different content the agent uses in activities. For each memory, a cognitive architecture also commits to:
- the encoding of contents in that memory;
- the organization of structures within the memory;
- the connections among structures across memories.
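
To make this commitment concrete, here is a minimal Python sketch of the three kinds of memories just described. The class and field names are hypothetical illustrations for this transcript, not ICARUS's actual data structures.

    from dataclasses import dataclass, field

    @dataclass
    class LongTermMemory:
        """Stable knowledge: concept definitions and skill clauses."""
        concepts: dict = field(default_factory=dict)  # concept name -> clauses
        skills: dict = field(default_factory=dict)    # goal head -> clauses

    @dataclass
    class ShortTermMemory:
        """Dynamic state: the agent's current beliefs and goals."""
        beliefs: set = field(default_factory=set)     # ground literals
        goals: list = field(default_factory=list)     # prioritized goal literals

    @dataclass
    class SensoriMotorMemory:
        """Interface to the world: perceived objects and pending actions."""
        percepts: list = field(default_factory=list)
        actions: list = field(default_factory=list)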

Ideas about Representation

Cognitive psychology makes important representational claims:
- concepts and skills encode different aspects of knowledge that are stored as distinct cognitive structures;
- cognition occurs in a physical context, with concepts and skills being grounded in perception and action;
- many mental structures are relational in nature, in that they describe connections or interactions among objects;
- long-term memories have hierarchical organizations that define complex structures in terms of simpler ones;
- each element in a short-term memory is an active version of some structure in long-term memory.

ICARUS adopts these assumptions about the contents of memory.

ICARUS Memories

[Diagram: long-term conceptual memory and long-term skill memory above short-term belief memory and short-term goal memory, which connect to the environment through a perceptual buffer and a motor buffer]

Representing Long-Term Structures

ICARUS encodes two forms of general long-term knowledge:
- Conceptual clauses: a set of relational inference rules with perceived objects or defined concepts in their antecedents;
- Skill clauses: a set of executable skills that specify:
  - a head that indicates a goal the skill achieves;
  - a single (typically defined) precondition;
  - a set of ordered subgoals or actions for achieving the goal.

These define a specialized class of hierarchical task networks in which each task corresponds to a goal concept. ICARUS syntax is very similar to Nau et al.'s SHOP2 formalism for hierarchical task networks.

ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts  ((self ?self) (segment ?seg)
             (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts  ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg) (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg) (line ?lane segment ?seg dist ?dist))
 :tests    ((> ?dist -10) (<= ?dist 0)))

((in-segment ?self ?seg)
 :percepts ((self ?self segment ?seg) (segment ?seg)))

ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start    ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start    ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed) (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start    ((in-intersection-for-right-turn ?self ?int))
 :actions  ((*steer 1)))

Representing Short-Term Beliefs/Goals

(current-street me A)               (current-segment me g550)
(lane-to-right g599 g601)           (first-lane g599)
(last-lane g599)                    (last-lane g601)
(at-speed-for-u-turn me)            (slow-for-right-turn me)
(steering-wheel-not-straight me)    (centered-in-lane me g550 g599)
(in-lane me g599)                   (in-segment me g550)
(on-right-side-in-segment me)       (intersection-behind g550 g522)
(building-on-left g288)             (building-on-left g425)
(building-on-left g427)             (building-on-left g429)
(building-on-left g431)             (building-on-left g433)
(building-on-right g287)            (building-on-right g279)
(increasing-direction me)           (buildings-on-right g287 g279)

Encoding Perceived Objects

(self me speed 5 angle-of-road -0.5 steering-wheel-angle -0.1)
(segment g562 street 1 dist -5.0 latdist 15.0)
(line g564 length width 0.5 dist 35.0 angle 1.1 color white segment g562)
(line g565 length width 0.5 dist 15.0 angle 1.1 color white segment g562)
(line g563 length width 0.5 dist 25.0 angle 1.1 color yellow segment g562)
(segment g550 street A dist oor latdist nil)
(line g600 length width 0.5 dist angle -0.5 color white segment g550)
(line g601 length width 0.5 dist 5.0 angle -0.5 color white segment g550)
(line g599 length width 0.5 dist -5.0 angle -0.5 color yellow segment g550)
(intersection g522 street A cross 1 dist -5.0 latdist nil)
(building g431 address 99 street A c1dist 38.2 c1angle -1.4 c2dist 57.4 c2angle -1.0)
(building g425 address 25 street A c1dist 37.8 c1angle -2.8 c2dist 56.9 c2angle -3.1)
(building g389 address 49 street 1 c1dist 49.2 c1angle 2.7 c2dist 53.0 c2angle 2.2)
(sidewalk g471 dist 15.0 angle -0.5)
(sidewalk g474 dist 5.0 angle 1.07)
(sidewalk g469 dist angle -0.5)
(sidewalk g470 dist 45.0 angle 1.07)
(stoplight g538 vcolor green hcolor red)

Hierarchical Structure of Long-Term Memory

[Diagram: interleaved lattices of concepts and skills]

ICARUS organizes both concepts and skills in a hierarchical manner. Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts.

Hierarchical Structure of Long-Term Memory

[Diagram: the same lattices with one skill and the concepts it references highlighted]

ICARUS interleaves its long-term memories for concepts and skills. For example, the skill highlighted here refers directly to the highlighted concepts.

Architectural Commitment to Processes

In addition, a cognitive architecture makes commitments about:
- performance processes for:
  - retrieval, matching, and selection;
  - inference and problem solving;
  - perception and motor control;
- learning processes that:
  - generate new long-term knowledge structures;
  - refine and modulate existing structures.

In most cognitive architectures, performance and learning are tightly intertwined.

Ideas about Performance

Cognitive psychology makes clear claims about performance:
- humans can handle multiple goals with different priorities, which can interrupt tasks to which attention returns later;
- conceptual inference, which typically occurs rapidly and unconsciously, is more basic than problem solving;
- humans often resort to means-ends analysis to solve novel, unfamiliar problems;
- mental problem solving requires greater cognitive resources than execution of automatized skills;
- problem solving often occurs in a physical context, with mental processing being interleaved with execution.

ICARUS embodies these ideas in its performance mechanisms.

ICARUS Functional Processes

[Diagram: the ICARUS memories (long-term conceptual memory, long-term skill memory, short-term belief memory, short-term goal memory, perceptual buffer, motor buffer) linked by the processes of perception, conceptual inference, skill retrieval and selection, skill execution, problem solving, and skill learning, all interacting with the environment]

ICARUS Inference-Execution Cycle

On each successive execution cycle, the ICARUS architecture:
1. places descriptions of sensed objects in the perceptual buffer;
2. infers instances of concepts implied by the current situation;
3. finds paths through the skill hierarchy from top-level goals;
4. selects one or more applicable skill paths for execution;
5. invokes the actions associated with each selected path.

ICARUS agents are teleoreactive (Nilsson, 1994) in that they execute reactively but in a goal-directed manner.
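
As an illustration only, the following self-contained Python sketch mimics one such cycle over ground literals. The function names, the simplified rule and skill formats, and the toy driving facts are hypothetical stand-ins, not ICARUS code.

    def infer(concept_rules, percepts):
        """Steps 1-2: forward-chain (head, body) concept rules to a fixpoint."""
        beliefs = set(percepts)
        changed = True
        while changed:
            changed = False
            for head, body in concept_rules:
                if body <= beliefs and head not in beliefs:
                    beliefs.add(head)
                    changed = True
        return beliefs

    def cycle(percepts, concept_rules, skills, goals):
        """One retrieval/selection/action cycle (steps 1-5)."""
        beliefs = infer(concept_rules, percepts)
        actions = []
        for goal in goals:                            # step 3: from top-level goals
            if goal in beliefs:
                continue                              # goal already satisfied
            for precondition, acts in skills.get(goal, []):
                if precondition <= beliefs:           # step 4: applicability test
                    actions.extend(acts)              # step 5: invoke its actions
                    break
        return beliefs, actions

    # Toy usage: one concept rule and one skill in a driving-like domain.
    rules = [("clear-ahead", {"no-car-ahead", "in-lane"})]
    skills = {"at-speed": [({"clear-ahead"}, ["accelerate"])]}
    print(cycle({"no-car-ahead", "in-lane"}, rules, skills, ["at-speed"]))
    # -> ({'no-car-ahead', 'in-lane', 'clear-ahead'}, ['accelerate'])  (set order may vary)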

Basic ICARUS Processes

[Diagram: the concept and skill lattices with the direction of matching indicated]

ICARUS matches patterns to recognize concepts and select skills. Concepts are matched bottom up, starting from percepts. Skill paths are matched top down, starting from intentions.
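
A hypothetical sketch of the top-down half, complementing the bottom-up infer function above: starting from an intention, descend through applicable skill clauses until an unsatisfied primitive subgoal (an action) is reached. The clause format and all names are illustrative assumptions.

    def find_path(goal, skills, beliefs, path=()):
        """Return one applicable path from a goal down to a primitive action."""
        for clause in skills.get(goal, []):
            if not clause["start"] <= beliefs:
                continue                                  # precondition unmet
            for sub in clause["subgoals"]:                # ordered subgoals
                if sub in beliefs:
                    continue                              # already satisfied
                if sub in skills:                         # nonprimitive: recurse
                    deeper = find_path(sub, skills, beliefs, path + (goal,))
                    if deeper:
                        return deeper
                    break                                 # this clause cannot proceed
                return path + (goal, sub)                 # primitive action reached
        return None

    # e.g. parked -> in-rightmost-lane -> driving-well-in-segment -> ... -> steer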

ICARUS Interleaves Execution and Problem Solving

[Flowchart: reactive execution runs skills from the skill hierarchy down to primitive skills; when an impasse arises on a problem, control passes to problem solving, which produces an executed plan]

Interleaving Reactive Control and Problem Solving

Solve(G)
  Push the goal literal G onto the empty goal stack GS.
  On each cycle,
    If the top goal G of the goal stack GS is satisfied,
    Then pop GS.
    Else if the goal stack GS does not exceed the depth limit,
      Let S be the skill instances whose heads unify with G.
      If any applicable skill paths start from an instance in S,
      Then select one of these paths and execute it.
      Else let M be the set of primitive skill instances that have not
           already failed in which G is an effect.
        If the set M is nonempty,
        Then select a skill instance Q from M.
             Push the start condition C of Q onto goal stack GS.
        Else if G is a complex concept with the unsatisfied subconcepts H
             and with satisfied subconcepts F,
        Then if there is a subconcept I in H that has not yet failed,
             Then push I onto the goal stack GS.
             Else pop G from the goal stack GS and store information about
                  failure with G's parent.
        Else pop G from the goal stack GS.
             Store information about failure with G's parent.

This is traditional means-ends analysis, with three exceptions: (1) conjunctive goals must be defined concepts; (2) chaining occurs over both skills/operators and concepts/axioms; and (3) selected skills are executed whenever applicable.
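
For intuition, here is a much-simplified, runnable Python rendering of the goal-stack loop above. It assumes purely mental search (no interleaved execution), one operator per effect literal, and no failure bookkeeping; the operator table format is hypothetical.

    def solve(goal, state, operators, depth_limit=20):
        """Means-ends analysis with an explicit goal stack (simplified)."""
        stack, plan = [goal], []
        while stack:
            if len(stack) > depth_limit:
                return None                       # abandon overly deep chains
            g = stack[-1]
            if g in state:
                stack.pop()                       # top goal already satisfied
                continue
            pre, adds, dels = operators[g]        # chain off an operator achieving g
            unmet = [p for p in pre if p not in state]
            if unmet:
                stack.append(unmet[0])            # subgoal on an unmet precondition
            else:
                state = (state - dels) | adds     # apply ("execute") the operator
                plan.append(g)                    # record it by the goal it achieves
                stack.pop()
        return plan

    # Toy blocks-world fragment: achieve holding-C from the initial state.
    ops = {"holding-C": ({"clear-C", "hand-empty"},
                         {"holding-C"}, {"hand-empty", "clear-C"})}
    print(solve("holding-C", {"on-C-B", "clear-C", "hand-empty"}, ops))
    # -> ['holding-C']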

A Successful Problem-Solving Trace

[Diagram: a means-ends trace in the blocks world, from the initial state (ontable A T), (on B A), (on C B), (hand-empty) to the goal (clear A). The trace chains through the subgoals (clear C), (unstackable C B), (holding C), (hand-empty), (clear B), (unstackable B A), and (holding B), applying the operators (unstack C B), (putdown C T), and (unstack B A)]

Claims about Learning

Cognitive psychology has also developed ideas about learning:
- efforts to overcome impasses during problem solving can lead to the acquisition of new skills;
- learning can transform backward-chaining heuristic search into more informed forward-chaining behavior;
- learning is incremental and interleaved with performance;
- structural learning involves monotonic addition of symbolic elements to long-term memory;
- transfer to new tasks depends on the amount of structure shared with previously mastered tasks.

ICARUS incorporates these assumptions into its basic operation.

ICARUS Learns Skills from Problem Solving

[Flowchart: the same execution/problem-solving loop as before, with skill learning converting successful problem solving into new entries in the skill hierarchy]

ICARUS Constraints on Skill Learning

What determines the hierarchical structure of skill memory?
- The structure emerges from the subproblems that arise during problem solving, which, because operator conditions and goals are single literals, form a semilattice.

What determines the heads of the learned clauses/methods?
- The head of a learned clause is the goal literal that the planner achieved for the subproblem that produced it.

What are the conditions on the learned clauses/methods?
- If the subproblem involved skill chaining, they are the conditions of the first subskill clause.
- If the subproblem involved concept chaining, they are the subconcepts that held at the subproblem's outset.
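
These three constraints can be phrased as a small constructor. The following Python sketch and its field names are hypothetical; the usage example is patterned on the (unstackable ?C ?B) clause from the later "Learned Skills in the Blocks World" slide.

    def build_skill_clause(goal, kind, ordered_subgoals,
                           first_subskill_start=(), satisfied_subconcepts=()):
        """Assemble a skill clause from one solved subproblem."""
        if kind == "skill-chaining":
            start = list(first_subskill_start)    # conditions of the first subskill
        else:                                     # "concept-chaining"
            start = list(satisfied_subconcepts)   # subconcepts true at the outset
        return {"head": goal, "start": start,
                "subgoals": list(ordered_subgoals)}

    print(build_skill_clause("(unstackable ?C ?B)", "concept-chaining",
                             ["(clear ?C)", "(hand-empty)"],
                             satisfied_subconcepts=["(on ?C ?B)", "(hand-empty)"]))
    # -> {'head': '(unstackable ?C ?B)',
    #     'start': ['(on ?C ?B)', '(hand-empty)'],
    #     'subgoals': ['(clear ?C)', '(hand-empty)']}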

Constructing Skills from a Trace

[Diagram, shown in four animation steps over the earlier blocks-world trace: skill clauses are constructed from the successful trace, first by skill chaining (steps 1 and 2), then by concept chaining, then by skill chaining over the larger structure]

Learned Skills in the Blocks World

(clear (?C)
 :percepts ((block ?D) (block ?C))
 :start    ((unstackable ?D ?C))
 :skills   ((unstack ?D ?C)))

(clear (?B)
 :percepts ((block ?C) (block ?B))
 :start    ((on ?C ?B) (hand-empty))
 :skills   ((unstackable ?C ?B) (unstack ?C ?B)))

(unstackable (?C ?B)
 :percepts ((block ?B) (block ?C))
 :start    ((on ?C ?B) (hand-empty))
 :skills   ((clear ?C) (hand-empty)))

(hand-empty ( )
 :percepts ((block ?D) (table ?T1))
 :start    ((putdownable ?D ?T1))
 :skills   ((putdown ?D ?T1)))

Hierarchical skills are generalized traces of successful means-ends problem solving.

Cumulative Curves for Blocks World

Cumulative Curves for FreeCell

Learning Skills for In-City Driving

We have also trained ICARUS to drive in our in-city environment. We provide the system with tasks of increasing complexity. Learning transforms the problem-solving traces into hierarchical skills. The agent uses these skills to change lanes, turn, and park using only reactive control.

Skill Clauses Learned for In-City Driving

((parked ?me ?g1152)
 :percepts ((lane-line ?g1152) (self ?me))
 :start    ( )
 :subgoals ((in-rightmost-lane ?me ?g1152) (stopped ?me)))

((in-rightmost-lane ?me ?g1152)
 :percepts ((self ?me) (lane-line ?g1152))
 :start    ((last-lane ?g1152))
 :subgoals ((driving-well-in-segment ?me ?g1101 ?g1152)))

((driving-well-in-segment ?me ?g1101 ?g1152)
 :percepts ((lane-line ?g1152) (segment ?g1101) (self ?me))
 :start    ((steering-wheel-straight ?me))
 :subgoals ((in-lane ?me ?g1152)
            (centered-in-lane ?me ?g1101 ?g1152)
            (aligned-with-lane-in-segment ?me ?g1101 ?g1152)
            (steering-wheel-straight ?me)))

Learning Curves for In-City Driving

Transfer of Skills in ICARUS

The architecture also supports the transfer of knowledge in that:
- skills acquired later can build on those learned earlier;
- skill clauses are indexed by the goals they achieve;
- conceptual inference supports mapping across domains.

We are exploring such effects in ICARUS as part of a DARPA program on the transfer of learned knowledge. Testbeds include first-person shooter games, board games, and physics problem solving.

Transfer Effects in FreeCell On 16-card FreeCell tasks, prior training aids solution probability.

Transfer Effects in FreeCell However, it also lets the system solve problems with less effort.

Architectures as Programming Languages

Cognitive architectures come with a programming language that:
- includes a syntax linked to its representational assumptions;
- inputs long-term knowledge and initial short-term elements;
- provides an interpreter that runs the specified program;
- incorporates tracing facilities to inspect system behavior.

Such programming languages ease construction and debugging of knowledge-based systems. For this reason, cognitive architectures support far more efficient development of software for intelligent systems.

Programming in ICARUS

The programming language associated with ICARUS comes with:
- a syntax for concepts, skills, beliefs, and percepts;
- the ability to load and parse such programs;
- an interpreter for inference, execution, planning, and learning;
- a trace package that displays system behavior over time.

We have used this language to develop adaptive intelligent agents in a variety of domains.

An ICARUS Agent for Urban Combat

Intellectual Precursors

The ICARUS design has been influenced by many previous efforts:
- earlier research on integrated cognitive architectures, especially ACT, Soar, and Prodigy;
- earlier frameworks for reactive control of agents;
- research on belief-desire-intention (BDI) architectures;
- planning and execution with hierarchical task networks;
- work on learning macro-operators and search-control rules;
- previous work on cumulative structure learning.

However, the framework combines and extends ideas from its various predecessors in novel ways.

Directions for Future Research

Future work on ICARUS should introduce additional methods for:
- forward chaining and mental simulation of skills;
- learning expected utilities from skill execution histories;
- learning new conceptual structures in addition to skills;
- probabilistic encoding and matching of Boolean concepts;
- flexible recognition of skills executed by other agents;
- extension of short-term memory to store episodic traces.

Taken together, these features should make ICARUS a more general and powerful cognitive architecture.

Contributions of ICARUS

ICARUS is a cognitive architecture for physical agents that:
- includes separate memories for concepts and skills;
- organizes both memories in a hierarchical fashion;
- modulates reactive execution with goal seeking;
- augments routine behavior with problem solving; and
- learns hierarchical skills in a cumulative manner.

These ideas have their roots in cognitive psychology, but they are also effective in building flexible intelligent agents.

Concluding Remarks

We need more research on integrated intelligent systems that:
- are embedded within a unified cognitive architecture;
- incorporate modules that provide mutual constraints;
- demonstrate a wide range of intelligent behavior;
- are evaluated on multiple tasks in challenging testbeds.

For more information about the ICARUS architecture, see:

End of Presentation