1 Extending the ICARUS Cognitive Architecture
Pat Langley, School of Computing and Informatics, Arizona State University, Tempe, Arizona, USA
Thanks to D. Choi, T. Konik, U. Kutur, D. Nau, S. Ohlsson, S. Rogers, and D. Shapiro for their many contributions. This talk reports research partly funded by grants from DARPA IPTO, which is not responsible for its contents.

2 The ICARUS Architecture
ICARUS is a theory of the human cognitive architecture that posits:
1. Short-term memories are distinct from long-term stores
2. Memories contain modular elements cast as symbolic structures
3. Long-term structures are accessed through pattern matching
4. Cognition occurs in retrieval/selection/action cycles
5. Learning involves monotonic addition of elements to memory
6. Learning is incremental and interleaved with performance
It shares these assumptions with other cognitive architectures like Soar (Laird et al., 1987) and ACT-R (Anderson, 1993).

3 Distinctive Features of ICARUS
However, ICARUS also makes assumptions that distinguish it from these architectures:
1. Cognition is grounded in perception and action
2. Categories and skills are separate cognitive entities
3. Short-term elements are instances of long-term structures
4. Inference and execution are more basic than problem solving
5. Skill hierarchies are learned in a cumulative manner
Some of these tenets also appear in Bonasso et al.'s (2003) 3T, Freed's APEX, and Sun et al.'s (2001) CLARION.

4 Cascaded Integration in ICARUS
Like other unified cognitive architectures, ICARUS incorporates a number of distinct modules.
ICARUS adopts a cascaded approach to integration in which lower-level modules produce results for higher-level ones:
conceptual inference → skill execution → problem solving → learning

5 ICARUS Memories and Processes
[Architecture diagram: Long-Term Conceptual Memory, Long-Term Skill Memory, Short-Term Belief Memory, Short-Term Goal Memory, Perceptual Buffer, and Motor Buffer, connected by the processes of Perception, Conceptual Inference, Skill Retrieval and Selection, Skill Execution, Problem Solving, and Skill Learning, all interacting with the Environment]

6 Hierarchical Structure of Memory
ICARUS interleaves its long-term memories for concepts and skills.
Each concept is defined in terms of other concepts and/or percepts.
Each skill is defined in terms of other skills, concepts, and percepts.

8 Basic ICARUS Processes
ICARUS matches patterns to recognize concepts and select skills.
Concepts are matched bottom up, starting from percepts.
Skill paths are matched top down, starting from goals.
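The two matching directions just described can be sketched in code. This is a minimal propositional illustration (no variables or unification, and all names are invented for the example, not taken from the ICARUS implementation): beliefs are inferred bottom up from percepts until a fixed point, while skill selection walks top down from a goal to an executable path.

```python
# Hypothetical sketch of ICARUS's two matching directions.
# Concepts: {head: set of lower-level elements it is defined from}.
# Skills: {head: {"start": set of conditions, "subgoals": list}}.

def infer_beliefs(percepts, concept_defs):
    """Bottom-up inference: derive concept instances from percepts,
    repeating until no new concept fires (a fixed point)."""
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for head, parts in concept_defs.items():
            if head not in beliefs and all(p in beliefs for p in parts):
                beliefs.add(head)
                changed = True
    return beliefs

def select_skill_path(goal, skill_defs, beliefs):
    """Top-down selection: walk from a goal through the skill
    hierarchy to a primitive skill whose start condition holds."""
    skill = skill_defs.get(goal)
    if skill is None:
        return None
    if skill["start"] <= beliefs:      # start conditions satisfied
        if not skill["subgoals"]:      # primitive skill: end of path
            return [goal]
        for sub in skill["subgoals"]:
            path = select_skill_path(sub, skill_defs, beliefs)
            if path is not None:
                return [goal] + path
    return None
```

The key asymmetry the slide notes survives even in this toy version: inference is data-driven and exhaustive, while skill matching is goal-driven and stops at the first applicable path.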

9 ICARUS Interleaves Execution and Problem Solving
[Flowchart: the skill hierarchy drives reactive execution; on an impasse, problem solving over primitive skills produces an executed plan]
This organization reflects the psychological distinction between automatized and controlled behavior.

10 Means-Ends Problem Solving in ICARUS
Previous versions of ICARUS have used means-ends analysis, which has been observed repeatedly in humans, but it differs from standard variants in that it interleaves backward chaining with execution.

Solve(G):
Push the goal literal G onto the empty goal stack GS.
On each cycle,
  If the top goal G of the goal stack GS is satisfied,
  Then pop GS.
  Else if the goal stack GS does not exceed the depth limit,
    Let S be the skill instances whose heads unify with G.
    If any applicable skill paths start from an instance in S,
    Then select one of these paths and execute it.
    Else let M be the set of primitive skill instances that have not already failed in which G is an effect.
      If the set M is nonempty,
      Then select a skill instance Q from M and push the start condition C of Q onto the goal stack GS.
      Else if G is a complex concept with unsatisfied subconcepts H and satisfied subconcepts F,
        Then if there is a subconcept I in H that has not yet failed,
          Then push I onto the goal stack GS.
          Else pop G from the goal stack GS and store information about failure with G's parent.
        Else pop G from the goal stack GS and store information about failure with G's parent.
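The goal-stack loop above can be illustrated with a stripped-down propositional version. This sketch (all names invented; it omits unification, concept chaining, and failure bookkeeping) shows the essential interleaving: when a skill achieving the top goal is applicable it is executed immediately, otherwise its unmet precondition is pushed as a subgoal.

```python
# Minimal propositional sketch of the means-ends loop, assuming
# skills of the form {name: (preconditions, add_effects)}.

def means_ends(goal, state, skills, depth_limit=10):
    """Return the sequence of skills executed to achieve `goal`,
    or None on failure. `state` is a set of satisfied literals."""
    stack = [goal]
    plan = []
    while stack:
        if len(stack) > depth_limit:
            return None                  # give up past the depth limit
        g = stack[-1]
        if g in state:                   # top goal satisfied: pop it
            stack.pop()
            continue
        # skills whose effects include the current goal
        candidates = [(n, pre) for n, (pre, add) in skills.items() if g in add]
        if not candidates:
            return None
        name, pre = candidates[0]
        unmet = [p for p in pre if p not in state]
        if not unmet:                    # applicable now: execute it
            state |= skills[name][1]
            plan.append(name)
        else:                            # chain backward on a precondition
            stack.append(unmet[0])
    return plan
```

Note how execution happens inside the loop rather than after planning finishes, matching the slide's claim that ICARUS interleaves backward chaining with execution.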

11 Learning from Problem Solutions
ICARUS incorporates a mechanism for learning new skills that:
- operates whenever problem solving overcomes an impasse;
- incorporates only information available from the goal stack;
- generalizes beyond the specific objects concerned;
- depends on whether chaining involved skills or concepts;
- supports cumulative learning and within-problem transfer.
This skill creation process is fully interleaved with means-ends analysis and execution. Learned skills carry out forward execution in the environment rather than backward chaining in the mind.

12 Forward Search and Mental Simulation
However, in some domains, humans carry out forward-chaining search with methods like progressive deepening (de Groot, 1978). In response, we are adding a new module to ICARUS that:
- performs mental simulation of a single trajectory consistent with its stored hierarchical skills;
- repeats this process to find a number of alternative paths from the current environmental state;
- selects the path that produces the best outcome to determine the next primitive skill to execute.
We refer to this memory-limited search method as hierarchical iterative sampling (Langley, 1992).
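The three steps above can be sketched as a small rollout procedure. This is an illustrative simplification (flat action choice rather than choice among hierarchical skills, and all names are invented): sample a handful of trajectories to a fixed depth, score their final states, and commit only to the first step of the best one.

```python
import random

# Illustrative sketch of hierarchical iterative sampling.
# step(state, action) -> next state; score(state) -> float.

def iterative_sampling(state, step, actions, score,
                       depth=5, samples=10, rng=random):
    """Mentally simulate `samples` random trajectories of length
    `depth` and return the first action of the best-scoring one."""
    best_value, best_first = float("-inf"), None
    for _ in range(samples):
        s, first = state, None
        for _ in range(depth):           # simulate one trajectory
            a = rng.choice(actions)
            if first is None:
                first = a
            s = step(s, a)
        value = score(s)
        if value > best_value:           # keep the best rollout so far
            best_value, best_first = value, first
    return best_first                    # next primitive skill to execute
```

The memory limit the slide mentions shows up here as the fact that only the current best first action and its value are retained, not the full search tree.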

13 A Trace of Iterative Sampling
[Slides 13-18 step through an iterative-sampling trace graphically; no transcript text is available]

19 Challenges in Lookahead Search
To support such mental simulation in ICARUS, we must first:
- extend its representation to associate beliefs with states;
- use expected values to guide selection of desirable paths.
This should be easy for some domains, but it must also:
- reason about environments that change on their own;
- operate in settings that involve other goal-directed agents.
We want a single mechanism that will let ICARUS handle all of these situations.

20 More on Mental Simulation
We must address other issues to make this idea operational:
- determine the depth of lookahead and number of samples;
- ensure reasonable diversity among the sampled paths;
- explain when problem solvers chain backward and forward;
- use the results of forward search to drive skill acquisition.
Answering these questions will let ICARUS provide a more complete theory of human problem solving.

21 Learning from Undesirable Outcomes
Despite their best efforts, humans sometimes take actions that produce undesired results. We plan to model learning from such outcomes in ICARUS by:
- identifying conditions on the path that, if violated, would avoid the result;
- carrying out search to find another path that would violate them;
- analyzing the alternative path to learn skills that produce it;
- storing the new skills so as to mask older, problematic ones.
Learning from such counterfactual reasoning is an important human ability.
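The search step in this plan can be sketched concretely. This is a speculative illustration of only the second step (finding an alternative path that avoids the undesired outcome); the condition-identification and skill-masking steps are not modeled, and every name here is invented rather than taken from ICARUS.

```python
from itertools import product

# Illustrative sketch: enumerate action sequences up to a small
# depth and return the first one whose final state avoids the
# undesired-outcome predicate `bad`.

def find_alternative_path(state, step, actions, bad, depth=3):
    """step(state, action) -> next state; bad(state) -> bool.
    Returns a list of actions, or None if no avoiding path exists."""
    for n in range(1, depth + 1):
        for seq in product(actions, repeat=n):
            s = state
            for a in seq:                # mentally simulate the sequence
                s = step(s, a)
            if not bad(s):               # this path avoids the outcome
                return list(seq)
    return None
```

In a full model, the returned path would then feed the existing skill-learning mechanism, with the new skills masking the ones that produced the undesired result.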

22 Plans for Evaluation
We propose to evaluate these extensions to ICARUS on two different testbeds:
- a simulated urban driving environment that contains other vehicles and pedestrians;
- a mobile robot that carries out joint activities with humans to achieve shared goals.
Both dynamic environments should illustrate the benefits of mental simulation and counterfactual learning.

23 Concluding Remarks
ICARUS is a unified theory of the cognitive architecture that:
- includes hierarchical memories for concepts and skills;
- interleaves conceptual inference with reactive execution;
- resorts to problem solving when it lacks routine skills;
- learns such skills from successful resolution of impasses.
However, we plan to extend the framework so it can support:
- forward-chaining search via repeated mental simulation;
- learning new skills through counterfactual reasoning.
These will let ICARUS more fully model human cognition.

24 End of Presentation

25 ICARUS Concepts for In-City Driving

((in-rightmost-lane ?self ?clane)
 :percepts ((self ?self) (segment ?seg)
            (line ?clane segment ?seg))
 :relations ((driving-well-in-segment ?self ?seg ?clane)
             (last-lane ?clane)
             (not (lane-to-right ?clane ?anylane))))

((driving-well-in-segment ?self ?seg ?lane)
 :percepts ((self ?self) (segment ?seg) (line ?lane segment ?seg))
 :relations ((in-segment ?self ?seg) (in-lane ?self ?lane)
             (aligned-with-lane-in-segment ?self ?seg ?lane)
             (centered-in-lane ?self ?seg ?lane)
             (steering-wheel-straight ?self)))

((in-lane ?self ?lane)
 :percepts ((self ?self segment ?seg)
            (line ?lane segment ?seg dist ?dist))
 :tests ((> ?dist -10) (<= ?dist 0)))

26 Representing Short-Term Beliefs/Goals
(current-street me A) (current-segment me g550)
(lane-to-right g599 g601) (first-lane g599)
(last-lane g599) (last-lane g601)
(at-speed-for-u-turn me) (slow-for-right-turn me)
(steering-wheel-not-straight me) (centered-in-lane me g550 g599)
(in-lane me g599) (in-segment me g550)
(on-right-side-in-segment me) (intersection-behind g550 g522)
(building-on-left g288) (building-on-left g425)
(building-on-left g427) (building-on-left g429)
(building-on-left g431) (building-on-left g433)
(building-on-right g287) (building-on-right g279)
(increasing-direction me) (buildings-on-right g287 g279)

27 ICARUS Skills for In-City Driving

((in-rightmost-lane ?self ?line)
 :percepts ((self ?self) (line ?line))
 :start ((last-lane ?line))
 :subgoals ((driving-well-in-segment ?self ?seg ?line)))

((driving-well-in-segment ?self ?seg ?line)
 :percepts ((segment ?seg) (line ?line) (self ?self))
 :start ((steering-wheel-straight ?self))
 :subgoals ((in-segment ?self ?seg)
            (centered-in-lane ?self ?seg ?line)
            (aligned-with-lane-in-segment ?self ?seg ?line)
            (steering-wheel-straight ?self)))

((in-segment ?self ?endsg)
 :percepts ((self ?self speed ?speed) (intersection ?int cross ?cross)
            (segment ?endsg street ?cross angle ?angle))
 :start ((in-intersection-for-right-turn ?self ?int))
 :actions ((steer 1)))

28 A Successful Means-Ends Trace
[Goal-stack trace in the blocks world: starting from the initial state (ontable A T) (on B A) (on C B) (hand-empty), the solver chains backward through (clear C), (unstack C B), (holding C), (putdown C T), (clear B), (unstack B A), and (holding B) to reach (clear A)]

