
1 An Extended Theory of Human Problem Solving
Pat Langley and Seth Rogers
Computational Learning Laboratory, Center for the Study of Language and Information, Stanford University, Stanford, California USA
http://cll.stanford.edu/
Thanks to D. Choi, K. Cummings, N. Nejati, S. Sage, D. Shapiro, and J. Xuan for their contributions. This talk reports research funded by grants from DARPA IPTO and the US National Science Foundation, which are not responsible for its contents.

2 The Standard Theory of Problem Solving
Traditional theories claim that human problem solving occurs in response to unfamiliar tasks and involves:
- the mental inspection and manipulation of list structures;
- search through a space of states generated by operators;
- backward chaining from goals through means-ends analysis;
- a shift from backward to forward chaining with experience.
These claims characterize problem solving accurately, but this does not mean they are complete.
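To make the second claim concrete, here is a minimal Python sketch (ours, not from the talk) of search through a space of states generated by operators; the operator names and the toy domain are invented for illustration.

from collections import deque

def search(initial, goal_test, operators):
    # Breadth-first search over the states that the operators generate.
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal_test(state):
            return plan                      # sequence of operator names
        for name, apply_op in operators:
            successor = apply_op(state)
            if successor is not None and successor not in visited:
                visited.add(successor)
                frontier.append((successor, plan + [name]))
    return None

# Toy domain: reach 3 from 0 with +1 and +2 operators.
ops = [("inc1", lambda s: s + 1 if s < 10 else None),
       ("inc2", lambda s: s + 2 if s < 10 else None)]
print(search(0, lambda s: s == 3, ops))      # ['inc1', 'inc2']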

3 Further Claims about Problem Solving
We maintain that the standard theory is incomplete and that:
- Human problem solving occurs in a physical context.
- Problem solving abstracts away from physical details, yet must return to them for implementing solutions.
- Problem solving interleaves reasoning with execution.
- Eager execution can lead to situations that require restarts.
- Learning from solutions transforms backward chaining into informed forward execution.
These claims are not entirely new, but they have received little attention in previous computational models.

4 The ICARUS Architecture
We have embedded these extensions in ICARUS, a cognitive architecture that builds on five principles:
1. Cognition grounded in perception and action
2. Cognitive separation of categories and skills
3. Hierarchical organization of long-term memory
4. Cumulative learning of skill/concept hierarchies
5. Correspondence of long-term/short-term structures
These ideas distinguish ICARUS from other architectures like ACT-R, Soar, and EPIC.

5 Hierarchical Structure of Long-Term Memory
[Figure: linked hierarchies of concepts and skills, grounded in percepts.]
ICARUS organizes both concepts and skills in a hierarchical manner. Each concept is defined in terms of other concepts and/or percepts. Each skill is defined in terms of other skills, concepts, and percepts.
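One way to picture this organization is the following Python sketch; it is our illustration of the two kinds of hierarchical definitions the slide describes, not the architecture's actual data structures, and the example names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Concept:
    # A concept refers to other concepts and/or perceptual patterns.
    name: str
    subconcepts: list = field(default_factory=list)
    percepts: list = field(default_factory=list)

@dataclass
class Skill:
    # A skill refers to subskills, concepts (as start conditions), and percepts.
    name: str
    percepts: list = field(default_factory=list)
    start: list = field(default_factory=list)
    subskills: list = field(default_factory=list)

# Hypothetical blocks-world fragment in the spirit of later slides.
unstackable = Concept("unstackable", subconcepts=["clear", "hand-empty"],
                      percepts=["(block ?x)", "(block ?y)"])
unstack = Skill("unstack", percepts=["(block ?x)", "(block ?y)"],
                start=["unstackable"])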

6 ICARUS Memories and Processes
[Architecture diagram: perception fills a perceptual buffer from the environment; categorization and inference link long-term conceptual memory with short-term conceptual memory; skill retrieval, problem solving, and skill learning link long-term skill memory with short-term goal/skill memory; skill execution sends commands through a motor buffer back to the environment.]

7 The Physical Context of Problem Solving
ICARUS is a cognitive architecture for physical, embodied agents. On each successive perception-execution cycle, the architecture:
1. places descriptions of sensed objects in the perceptual buffer;
2. infers instances of concepts implied by the current situation;
3. finds paths through the skill hierarchy from top-level goals;
4. selects one or more applicable skill paths for execution;
5. invokes the actions associated with each selected path.
Problem solving in ICARUS builds upon this basic ability to recognize physical situations and execute skills therein.
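A minimal sketch of such a cycle appears below. It is our Python illustration of the five steps, with invented names and a toy environment rather than the architecture's real interface.

def run_cycles(env, concepts, skills, goals, steps=10):
    # Minimal sketch of the five-step cycle. concepts maps a name to a
    # predicate over percepts; skills maps a goal to (start set, action).
    for _ in range(steps):
        percepts = env.sense()                          # 1. perceptual buffer
        beliefs = {c for c, holds in concepts.items()
                   if holds(percepts)}                  # 2. conceptual inference
        for goal in goals:                              # 3. paths from top-level goals
            if goal in beliefs:
                continue
            start, action = skills.get(goal, (None, None))
            if start is not None and start <= beliefs:  # 4. applicable path selected
                env.act(action)                         # 5. invoke its action
                break

class ToyEnv:
    # Trivial environment: one light that a 'switch-on' action turns on.
    def __init__(self):
        self.lit = False
    def sense(self):
        return {"lit": self.lit}
    def act(self, action):
        if action == "switch-on":
            self.lit = True

env = ToyEnv()
run_cycles(env, concepts={"light-on": lambda p: p["lit"]},
           skills={"light-on": (set(), "switch-on")}, goals=["light-on"])
print(env.lit)                                          # True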

8 Basic ICARUS Processes
[Figure: the same concept and skill hierarchies, annotated with the direction of matching.]
ICARUS matches patterns to recognize concepts and select skills. Concepts are matched bottom up, starting from percepts. Skill paths are matched top down, starting from intentions.
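The bottom-up half of this picture can be rendered as a small fixpoint computation. The Python fragment below is our simplification, with ground literals standing in for matched patterns (real concept matching unifies variables):

def infer_bottom_up(percepts, definitions):
    # Repeatedly add any concept whose defining parts all hold,
    # until no new concept can be inferred (a fixpoint).
    beliefs = set(percepts)
    changed = True
    while changed:
        changed = False
        for concept, parts in definitions.items():
            if concept not in beliefs and parts <= beliefs:
                beliefs.add(concept)
                changed = True
    return beliefs

defs = {"clear-C": {"block-C", "nothing-on-C"},
        "unstackable-C-B": {"clear-C", "on-C-B", "hand-empty"}}
print(infer_bottom_up({"block-C", "nothing-on-C", "on-C-B", "hand-empty"}, defs))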

9 Abstraction from Physical Details
ICARUS typically pursues problem solving at an abstract level:
- conceptual inference augments perceptions using high-level concepts that provide abstract state descriptions;
- execution operates over high-level durative skills that serve as abstract problem-space operators;
- both inference and execution occur in an automated manner that demands few attentional resources.
However, concepts are always grounded in primitive percepts and skills always terminate in executable actions. ICARUS holds that cognition relies on a physical symbol system that utilizes mental models of the environment.

10 Interleaved Problem Solving and Execution
ICARUS includes a module for means-ends problem solving that:
- chains backward off skills that would produce the goal;
- chains backward off concepts if no skills are available;
- creates subgoals based on skill or concept conditions;
- pushes these subgoals onto a goal stack and recurses;
- executes any selected skill as soon as it is applicable.
Embedding execution within problem solving reduces memory load and uses the environment as an external store.

11 Restarting on Problems
Even when combined with backtracking, eager execution can lead problem solving to unrecoverable states. The ICARUS problem solver handles such untenable situations by:
- detecting when action has made backtracking impossible;
- storing the goal context to avoid repeating the error;
- physically restarting the problem in the initial situation;
- repeating this process until succeeding or giving up.
This strategy produces quite different behavior from the purely mental systematic search assumed by most models.
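As a sketch, the restart strategy amounts to a loop around the solver. The Python below is our illustration; attempt is a hypothetical solver interface, not part of ICARUS:

def solve_with_restarts(make_initial_state, attempt, max_restarts=5):
    # Try to solve, record goal contexts that proved fatal, and begin
    # again from the initial situation while avoiding them. attempt
    # returns ('solved', plan) or ('dead-end', context).
    avoided = set()                       # goal contexts known to fail
    for _ in range(max_restarts):
        state = make_initial_state()      # physically restart the problem
        status, info = attempt(state, avoided)
        if status == "solved":
            return info
        avoided.add(info)                 # remember the fatal context
    return None                           # give up after too many restarts

# Toy usage: the solver fails until both bad contexts are known.
def attempt(state, avoided):
    for bad in ("ctx-1", "ctx-2"):
        if bad not in avoided:
            return "dead-end", bad
    return "solved", ["plan-step"]

print(solve_with_restarts(lambda: {}, attempt))  # ['plan-step'] after two restarts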

12 Interleaved Problem Solving and Execution

Solve(G)
  Push the goal literal G onto the empty goal stack GS.
  On each cycle,
    If the top goal G of the goal stack GS is satisfied,
      Then pop GS.
    Else if the goal stack GS does not exceed the depth limit,
      Let S be the skill instances whose heads unify with G.
      If any applicable skill paths start from an instance in S,
        Then select one of these paths and execute it.
      Else let M be the set of primitive skill instances that have not
          already failed in which G is an effect.
        If the set M is nonempty,
          Then select a skill instance Q from M and push the start
            condition C of Q onto the goal stack GS.
        Else if G is a complex concept with the unsatisfied subconcepts H
            and the satisfied subconcepts F,
          Then if there is a subconcept I in H that has not yet failed,
            Then push I onto the goal stack GS.
            Else pop G from the goal stack GS and store information
              about failure with G's parent.
        Else pop G from the goal stack GS and store information about
          failure with G's parent.
    Else pop G from the goal stack GS and store information about
      failure with G's parent.

This is traditional means-ends analysis, with three exceptions: (1) conjunctive goals must be defined concepts; (2) backward chaining occurs over both skills and concepts; and (3) selected skills are executed whenever applicable. An executable simplification follows.
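The Python rendering below is a sketch under strong assumptions (ground atomic goals, no skill hierarchy or unification, a toy action model in which executing a skill's action achieves its effect); every name in it is ours rather than the architecture's.

def means_ends(goal, state, skills, concepts, execute, depth_limit=10):
    # Much-simplified Solve(G): skills maps an effect literal to
    # (start conditions, action); concepts maps a complex concept to
    # its subconcepts. Failure is tracked per literal for brevity.
    stack, failed = [goal], set()
    while stack:
        g = stack[-1]
        if g in state:                            # top goal satisfied: pop it
            stack.pop()
        elif len(stack) <= depth_limit:
            if g in skills and g not in failed:   # chain backward off a skill
                start, action = skills[g]
                pending = [c for c in start if c not in state]
                if not pending:
                    execute(action, state)        # applicable: execute at once
                elif any(c in failed for c in pending):
                    failed.add(stack.pop())       # a needed condition already failed
                else:
                    stack.append(pending[0])      # push a start condition as subgoal
            elif g in concepts:                   # chain backward off a concept
                unmet = [c for c in concepts[g]
                         if c not in state and c not in failed]
                if unmet:
                    stack.append(unmet[0])
                else:
                    failed.add(stack.pop())
            else:
                failed.add(stack.pop())
        else:
            failed.add(stack.pop())               # depth limit exceeded
    return goal in state

def execute(action, state):
    # Toy action model: executing unstack-C-B achieves clear-B.
    effects = {"unstack-C-B": ({"clear-B", "holding-C"}, {"on-C-B", "hand-empty"})}
    adds, dels = effects[action]
    state -= dels
    state |= adds

state = {"on-C-B", "hand-empty", "clear-C"}
skills = {"clear-B": ({"clear-C", "on-C-B", "hand-empty"}, "unstack-C-B")}
print(means_ends("clear-B", state, skills, {}, execute))   # True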

13 Learning from Problem Solutions
ICARUS incorporates a mechanism for learning new skills that:
- operates whenever problem solving overcomes an impasse;
- incorporates only information available from the goal stack;
- generalizes beyond the specific objects concerned;
- depends on whether chaining involved skills or concepts;
- supports cumulative learning and within-problem transfer.
This skill creation process is fully interleaved with means-ends analysis and execution. Learned skills carry out forward execution in the environment rather than backward chaining in the mind.

14 Execution, Problem Solving, and Learning
[Flow diagram: a problem is first attempted by skill execution over the skill hierarchy; if no impasse arises, this yields an executed plan; if an impasse does arise, problem solving over primitive skills produces the executed plan, and skill learning folds the solution back into the skill hierarchy.]

15 Constructing Skills from a Trace
[Trace diagram for a blocks-world problem: starting from the state (ontable A T), (on B A), (on C B), (hand-empty), the goal (clear A) is reached through subgoals such as (clear C), (unstackable C B), (clear B), (holding C), (hand-empty), and (holding B), using the operators (unstack C B), (putdown C T), and (unstack B A); the numbered nodes 1-4 mark the subproblems whose solutions become learned skills.]

16 Learned Skills in the Blocks World

(clear (?C)
  :percepts ((block ?D) (block ?C))
  :start ((unstackable ?D ?C))
  :skills ((unstack ?D ?C)))

(clear (?B)
  :percepts ((block ?C) (block ?B))
  :start ((on ?C ?B) (hand-empty))
  :skills ((unstackable ?C ?B) (unstack ?C ?B)))

(unstackable (?C ?B)
  :percepts ((block ?B) (block ?C))
  :start ((on ?C ?B) (hand-empty))
  :skills ((clear ?C) (hand-empty)))

(hand-empty ()
  :percepts ((block ?D) (table ?T1))
  :start ((putdownable ?D ?T1))
  :skills ((putdown ?D ?T1)))

17 Three Questions about Skill Learning
What is the hierarchical structure of the skill network?
The structure is determined by the subproblems that arise in problem solving, which, because operator conditions and goals are single literals, form a semilattice.
What are the heads of the learned clauses/methods?
The head of a learned clause is the goal literal that the planner achieved for the subproblem that produced it.
What are the conditions on the learned clauses/methods?
If the subproblem involved skill chaining, they are the conditions of the first subskill clause. If the subproblem involved concept chaining, they are the subconcepts that held at the outset of the subproblem.
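The rule for heads and the two condition rules can be stated compactly in code. The following Python sketch is our paraphrase; the function and field names are illustrative, not ICARUS syntax:

def build_skill(goal, chain_type, subskills, start_of_first, satisfied_subconcepts):
    # goal: the literal the subproblem achieved (becomes the head);
    # chain_type: 'skill' or 'concept' chaining;
    # start_of_first: start conditions of the first subskill clause;
    # satisfied_subconcepts: subconcepts true when the subproblem began.
    if chain_type == "skill":
        start = start_of_first             # conditions of the first subskill
    else:
        start = satisfied_subconcepts      # subconcepts that held at the outset
    return {"head": goal, ":start": start, ":skills": subskills}

# Mirrors the (unstackable (?C ?B)) clause from the previous slide.
print(build_skill("(unstackable ?C ?B)", "concept",
                  ["(clear ?C)", "(hand-empty)"],
                  start_of_first=None,
                  satisfied_subconcepts=["(on ?C ?B)", "(hand-empty)"]))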

18 Related Theoretical Extensions
We are not the first to propose revisions to the standard theory:
- Zhang and Norman have noted the role of external memory;
- Gunzelmann has modeled interleaved planning and execution;
- Jones and Langley modeled restarts on unsolved problems;
- Soar and Prodigy learn production rules from impasses.
However, ICARUS is the first cognitive architecture that includes these extensions in a unified way.

19 Directions for Future Research
Many questions about human problem solving still remain open:
- How much do humans abstract away from physical details?
- How often do they return to this setting during their search?
- How tightly do they interleave cognition with execution?
- Under what conditions do they start over on a problem?
- How rapidly do they acquire automatized strategies?
We should address these and related issues in future extensions to the standard theory of problem solving.

