
1 An Instructable Connectionist/Control Architecture: Using Rule-Based Instructions to Accomplish Connectionist Learning in a Human Time Scale
Presented by: Jim Ries for CECS 477, WS 2000
Paper by: Walter Schneider and William L. Oliver, University of Pittsburgh, Learning Research and Development Center

2 Introduction
- Overview
- Task Decomposition
  - Gate Learning Example
- CAP2 Architecture
- CAP2 Rule Learning
- Authors' Conclusions
- My Own Thoughts

3 Walter Schneider, Ph.D., Indiana University
Professor of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
Phone: (412) 624-7061  Fax: (412) 624-9149
Email: waltsch@vms.cis.pitt.edu
http://www.lrdc.pitt.edu/

4 Overview
- Hybrid approach blending rules with a connectionist model.
  - Rules are "learned" (with instruction) and represented in a connectionist manner.
  - Learned rules are less "brittle".
- Attempts to decompose problems in order to hasten learning.
  - Supposedly general decomposition mechanism.
- Claims to model human cognition.

5 Task Decomposition
- As task complexity increases, learning times in both symbolic and connectionist systems can increase dramatically (perhaps exponentially).
- Cognitive psychology indicates that basic cognitive processes can be decomposed into stages.

6 Task Decomposition (cont.)
- A good decomposition reduces the number of problem states that must be considered.
  - e.g., humans can do arbitrary addition by memorizing 100 addition facts and an algorithm for adding one column at a time. Without this decomposition, one would need to learn 10^10 addend combinations to solve 5-column addition problems!
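The slide's decomposition can be sketched in code: a memorized table of the 100 single-digit "addition facts" plus a column-at-a-time carry algorithm covers arbitrary addition. This is my own illustrative sketch of the idea, not anything from the paper (the carry is folded in arithmetically for brevity).

```python
# Table of the 100 single-digit "addition facts" the slide describes.
ADDITION_FACTS = {(a, b): a + b for a in range(10) for b in range(10)}

def add_by_columns(x: str, y: str) -> str:
    """Add two digit strings right-to-left, one column at a time."""
    x, y = x.zfill(len(y)), y.zfill(len(x))
    carry, digits = 0, []
    for dx, dy in zip(reversed(x), reversed(y)):
        total = ADDITION_FACTS[(int(dx), int(dy))] + carry
        digits.append(str(total % 10))
        carry = total // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

print(add_by_columns("12345", "67890"))  # 80235
```

With this algorithm, 5-column problems never require memorizing 10^10 whole-number pairs; the same 100 facts are reused per column.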

7 Gate Learning
- Example of task decomposition.
- In the human version, subjects are instructed on the rules for each gate, and then do many trials to learn.
- Without task decomposition, the number of states is:
  - 2^i × g × n (where i is the # of gate inputs, g is the # of gate types, n is the # of negation states)
- With task decomposition, the number of states is:
  - 2^i + (g × r) + (n × o) (where r is the # of recording states, o is the # of output states of the gate-mapping stage)

8 Gate Learning (cont.)
- For the six-input gates:
  - without decomposition: 384 states
  - with decomposition: 77 states
- Decomposition reduces state growth from a multiplicative function to an additive one.
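The two state counts can be checked directly from the formulas on the previous slide. The parameter values below are my own assumption, chosen because they reproduce the slide's totals (i = 6 gate inputs, g = 3 gate types, n = 2 negation states, r = 3 recording states, o = 2 output states); the paper may use different values.

```python
# Assumed parameters that reproduce the slide's 384 / 77 totals.
i, g, n, r, o = 6, 3, 2, 3, 2

without_decomposition = 2**i * g * n       # multiplicative growth
with_decomposition = 2**i + g*r + n*o      # additive growth

print(without_decomposition)  # 384
print(with_decomposition)     # 77
```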

9 Gate Learning (cont.) [figure slide]

10 [figure slide]

11
- Networks trained to 100% accuracy (since there is no "noise").
- Results for the 6-gate trial:
  - without decomposition: 10,835 trials
  - with decomposition: 948 trials
  - human: 300 trials

12 Gate Learning (cont.) [figure slide]

13
- Subjects begin by executing rules sequentially, and gradually switch to associative responses.
- The stage taking the longest to converge (Recording) was the limiting factor.
- The authors did not mention whether the real time for a "trial" differed between a net using decomposition and one without.

14 Gate Learning (cont.) [figure slide]

15 CAP2 Architecture
- Controlled Automatic Processing model 2.
- Macro level: a system of modules that pass vector and scalar messages.
  - Scalar messages are used for control.
  - Vector messages encode perceptual and conceptual information.

16 CAP2 Architecture (cont.)
- Components
  - Data Modules: transform and transmit vector messages (consistent with neurophysiology).
  - Control Signals: control the activity of modules.
    - Activity report: codes whether a data module is active and has a vector to transmit.
    - Gain control: controls how strongly the output of a module activates the other modules to which it is connected.
    - Feedback: controls the strength of the autoassociative feedback within the module.
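The components above can be sketched as a toy data module. The class name, interfaces, and the blending rule in `receive` are my own illustrative assumptions, not the authors' implementation; the sketch only shows how a vector store, a scalar activity report, gain, and autoassociative feedback could fit together.

```python
import numpy as np

class DataModule:
    """Toy sketch of a CAP2-style data module (assumed interface)."""

    def __init__(self, size: int, feedback: float = 0.5):
        self.vector = np.zeros(size)
        self.feedback = feedback  # strength of autoassociative feedback
        self.gain = 1.0           # how strongly output drives downstream modules

    def receive(self, message: np.ndarray) -> None:
        # Blend the incoming vector with the fed-back internal state.
        self.vector = self.feedback * self.vector + message

    def activity_report(self) -> float:
        # Scalar control signal: is there a vector to transmit?
        return float(np.linalg.norm(self.vector))

    def transmit(self) -> np.ndarray:
        # Output scaled by the gain control signal.
        return self.gain * self.vector
```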

17 CAP2 Architecture (cont.) [figure slide]

18 [figure slide]

19
- Controller Module: a sequential rule net.
  - Task input vector
  - Compare Result input vector
  - Context input vector
  - Outputs control operations:
    - Attend
    - Compare (compare vectors from different modules)
    - Receive (enable a module to receive a vector)
    - Done
  - Currently implemented in C, rather than as a connectionist network!
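The controller's sequential rule cycle can be sketched as a tiny dispatch table over the control operations the slide names (the real controller is implemented in C; the rule table and state names here are my own illustrative assumptions).

```python
# Each rule maps the current step to a control operation and a successor
# step; None ends the cycle. This is an assumed, simplified rule table.
RULES = {
    "start":    ("ATTEND",  "attended"),
    "attended": ("COMPARE", "compared"),
    "compared": ("RECEIVE", "received"),
    "received": ("DONE",    None),
}

def run(state: str = "start") -> list[str]:
    """Fire rules sequentially until a rule ends the cycle."""
    trace = []
    while state is not None:
        op, state = RULES[state]
        trace.append(op)
    return trace

print(run())  # ['ATTEND', 'COMPARE', 'RECEIVE', 'DONE']
```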

20 CAP2 Architecture (cont.) [figure slide]

21 [figure slide]

22
- The authors are committed to the structural assumptions of the architecture (as related to human cognition):
  - The processing substrate in humans is akin to a data network.
  - Modular network structures serve as the functional units of processing.
  - Information passes between modules as vectors.
  - Memory associations among vectors develop through learning similar to connectionist learning.

23 CAP2 Architecture (cont.)
- Mechanisms for task decomposition:
  - Configuring the data network (# of stages, # of modules per stage, etc., through control signals).
  - Specifying the number of states in each stage.
- CAP2 captures knowledge specified in rules.
  - However, the data network does not simply learn the rules stored in the controller; it will also learn patterns in the input data (the driving-instructor example).
  - Achieving the same level of tuning in a production system would require a huge set of rules.

24 CAP2 Architecture (cont.)
- Chunking: "C" "A" "T" → "CAT"
- Degree of matching (Euclidean distance):
  - activity report = Σ_i (x_i − y_i)²
  - Does this mean that CAP2 would be unable to represent concepts that were "close" or "distant" in a different sense (e.g., taxicab distance, or other distance measures)?

25 CAP2 Architecture (cont.) [figure slide]

26 CAP2 Rule Learning
- Rule learning should be achieved in a small number of trials (or why bother; just use connectionist learning).
- Gate Learning example

27 CAP2 Rule Learning (cont.) [figure slide]

28 [figure slide]

29
- The sequential network learned the rules even faster than humans did:
  - sequential network: 120 trials
  - humans: 216 trials
  - decomposed model: 932 trials
  - single-stage model: 10,835 trials
- Rule knowledge is brittle, and performs much as novices perform during the early stage of rule learning.

30 Authors' Conclusions
- The hybrid connectionist/control architecture illustrates the complementary nature of symbolic and connectionist processing.
  - Better than connectionist learning, because it benefits from instruction.
  - Better than symbolic processing, because it captures rules in a connectionist network, which can scale and is less brittle.

31 Authors' Conclusions (cont.)
- Closely models human cognition.
- Not merely a connectionist implementation of a symbolic architecture.

32 My Own Thoughts
- It is unclear that this models human cognition, but I have no cognitive-science background to truly evaluate the claim.
- Is this really general? For example, the authors seem to gloss over the fact that part of their rule system was implemented directly in C rather than in a connectionist manner.
- The examples were generally run iteratively. How does parallelism change things (if at all)?

33 Full paper reference
- Schneider, W., & Oliver, W. L. (1991). An instructable connectionist/control architecture: Using rule-based instructions to accomplish connectionist learning in a human time scale. In K. VanLehn (Ed.), Architectures for intelligence: The 22nd Carnegie Mellon Symposium on Cognition (pp. 113-145). Hillsdale, NJ: Erlbaum.

