
1. Michael Arbib: CS564 - Brain Theory and Artificial Intelligence, University of Southern California, Fall 2001.
Lecture 16. [NSLJ] Backprop: a. How to run the model; b. How to write the model.
Reading Assignments: NSL Book, 2.6 Backpropagation and 3.5 Backpropagation.

2. Training and Running
The backpropagation algorithm works in two phases, as in Hopfield. However, these two phases are not to be confused with the feedforward and backpropagation modes:
- The training phase adjusts the network weights. It is made up of a large number of learning cycles, each comprising a forward pass (feedforward mode) and a backward pass (backpropagation mode).
- The running phase matches patterns against those already learned by the network. It consists of a single forward pass, taking a single cycle, and uses the same forward-pass equations (feedforward mode) as the training phase.

3. Feedforward Mode
Hidden Layer: mp_p = Σ_s w_sp · in_s + h_p,  mf_p = f(mp_p) = 1 / (1 + e^(-mp_p))
Output Layer: mp_q = Σ_p w_pq · mf_p + h_q,  mf_q = f(mp_q)
*Warning: the NSL book has typos in these equations.*
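As a concrete illustration (not part of the original slides), here is a minimal sketch of these forward-pass equations in plain Python with NumPy, not NSLJ; the sizes match the 2-2-1 XOR network used later, but all input and weight values are arbitrary placeholders. Note that the Network module shown below uses a linear output layer (yo = yh*wo + ho), whereas this sketch applies the sigmoid to both layers as in the equations above.

import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

x  = np.array([0.0, 1.0])                    # in_s: input vector
wh = np.array([[0.3, -0.2], [0.5, 0.4]])     # w_sp: input-to-hidden weights
hh = np.array([0.1, -0.1])                   # h_p: hidden thresholds
wo = np.array([[0.7], [-0.6]])               # w_pq: hidden-to-output weights
ho = np.array([0.05])                        # h_q: output thresholds

mf_p = sigmoid(x @ wh + hh)                  # hidden layer: mp_p = sum_s w_sp*in_s + h_p
mf_q = sigmoid(mf_p @ wo + ho)               # output layer: mp_q = sum_p w_pq*mf_p + h_q
print(mf_p, mf_q)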

4. Backpropagation Mode - Output Layer
error_q = desiredOutput_q - actualOutput_q
The accumulated error tss is the sum of (error_q)^2 over all output units and training patterns.
The error signal for output unit q is delta_q = f'(mp_q) · error_q, where for this f the derivative is f'(mp_q) = mf_q · (1 - mf_q).
The resulting delta_q is then used to modify the thresholds and weights in the output layer as follows:
Δh_q = η · delta_q            -- η is the learning rate parameter
h_q(t+1) = h_q(t) + Δh_q      -- h_q(t) is the threshold for output neuron q
Δw_pq = η · delta_q · mf_p    -- w_pq is the weight from hidden unit p to output unit q
w_pq(t+1) = w_pq(t) + Δw_pq
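A sketch of this output-layer update in plain Python with NumPy (not NSLJ); the numerical values are arbitrary placeholders, and eta corresponds to the lrate parameter set in the training script later.

import numpy as np

eta = 0.8                                     # learning rate (lrate in the training script)

mf_p    = np.array([0.6, 0.3])                # hidden-layer outputs
mf_q    = np.array([0.4])                     # output-layer outputs
desired = np.array([1.0])                     # desiredOutput_q
h_q     = np.array([0.05])                    # output thresholds
w_pq    = np.array([[0.7], [-0.6]])           # hidden-to-output weights

error_q = desired - mf_q                      # error_q = desiredOutput_q - actualOutput_q
delta_q = mf_q * (1.0 - mf_q) * error_q       # delta_q = f'(mp_q) * error_q

h_q  = h_q + eta * delta_q                    # h_q(t+1) = h_q(t) + eta * delta_q
w_pq = w_pq + eta * np.outer(mf_p, delta_q)   # w_pq(t+1) = w_pq(t) + eta * delta_q * mf_p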

5. Backpropagation Mode - Hidden Layer
In this case the error signal for hidden unit p is back-propagated from the output layer:
delta_p = f'(mp_p) · Σ_q w_pq · delta_q = mf_p · (1 - mf_p) · Σ_q w_pq · delta_q
And then:
Δh_p = η · delta_p
h_p(t+1) = h_p(t) + Δh_p
Δw_sp = η · delta_p · in_s    -- w_sp is the weight from input s to hidden unit p
w_sp(t+1) = w_sp(t) + Δw_sp
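And the corresponding hidden-layer update, again as a plain Python/NumPy sketch with placeholder values; delta_q is assumed to have been computed as on the previous slide.

import numpy as np

eta = 0.8

in_s    = np.array([0.0, 1.0])                    # network inputs
mf_p    = np.array([0.6, 0.3])                    # hidden-layer outputs
w_pq    = np.array([[0.7], [-0.6]])               # hidden-to-output weights
delta_q = np.array([0.058])                       # output-layer error signal (slide 4)
h_p     = np.array([0.1, -0.1])                   # hidden thresholds
w_sp    = np.array([[0.3, -0.2], [0.5, 0.4]])     # input-to-hidden weights

# delta_p = f'(mp_p) * sum_q w_pq * delta_q, with f'(mp_p) = mf_p * (1 - mf_p)
delta_p = mf_p * (1.0 - mf_p) * (w_pq @ delta_q)

h_p  = h_p + eta * delta_p                        # h_p(t+1) = h_p(t) + eta * delta_p
w_sp = w_sp + eta * np.outer(in_s, delta_p)       # w_sp(t+1) = w_sp(t) + eta * delta_p * in_s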

6. Model Architecture
[Block diagram: BackPropModel contains TrainManager, Network (with Error and Update submodules), and DisplayResults; TrainManager sends the pattern x and target t to Network, whose outputs y and weights w are displayed.]

7. Scheduling and Buffering
Scheduling specifies the order in which modules and their corresponding methods are executed. Buffering specifies how often ports read and write data in and out of the module.
- In immediate mode (sequential simulation), output ports immediately send their data to the connecting modules.
- In buffered mode (pseudo-concurrent simulation of, e.g., neural networks), output ports do not send their data to the connecting modules until the following clock cycle. Output ports are double buffered: one buffer contains the data that can be seen by the connecting modules during the current clock cycle, while the other buffer contains the data currently being generated.
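The double buffering described above can be pictured with a small sketch in plain Python; the class and method names here are hypothetical, not the actual NSL port API.

class BufferedOutPort:
    def __init__(self, initial=0.0):
        self._visible = initial    # the buffer connected modules read during this clock cycle
        self._pending = initial    # the buffer the owning module writes during this clock cycle

    def write(self, value):
        self._pending = value      # new data goes into the hidden buffer

    def read(self):
        return self._visible       # consumers only see last cycle's data

    def end_cycle(self):
        # At the clock tick the buffers swap, so data written this cycle
        # becomes visible to connected modules only in the following cycle.
        self._visible, self._pending = self._pending, self._visible

port = BufferedOutPort()
port.write(1.0)
assert port.read() == 0.0          # still the old value within the same cycle
port.end_cycle()
assert port.read() == 1.0          # the new value appears next cycle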

8. BackPropModel
nslImport nslAllImports;
nslImport TrainManager;
nslImport Network;
nslImport DisplayResult;

nslModel BackPropModel () {
    // 2-2-1 network (2 inputs, 2 hidden units, 1 output) trained on 4 patterns (XOR)
    int inSize = 2;
    int hidSize = 2;
    int outSize = 1;
    int nPats = 4;

    public TrainManager train(nPats, inSize, outSize);
    public Network network(inSize, hidSize, outSize);
    public DisplayResult result(inSize, outSize);

9. BackPropModel 2
    public void initSys() {
        system.setNumTrainEpochs(1000);
        system.setTrainEndTime(nPats);     // one training step per pattern in each epoch
        system.setRunEndTime(1);
    }
    public void initModule() {
        system.nslSetTrainDelta(1);
        system.nslSetRunDelta(1);
    }
    public void makeConn() {
        nslConnect(train.x, network.x);    // training patterns and targets feed the network
        nslConnect(train.t, network.t);
    }
}

10. TrainManager
nslImport nslAllImports;

nslModule TrainManager (int nPats, int inSize, int outSize) {
    public NslDoutFloat1 x(inSize);                 // current input pattern
    public NslDoutFloat1 t(outSize);                // current target output
    private NslFloat2 patterns(nPats, inSize);
    private NslFloat2 outputs(nPats, outSize);

    public void simTrain() {
        int pat = system.getFinishedCycles();       // index of the pattern for this cycle
        x = nslGetRow(patterns, pat);
        t = nslGetRow(outputs, pat);
    }
}

11. Network
nslImport nslAllImports;
nslImport Error;
nslImport Update;

nslModule Network (int inSize, int hidSize, int outSize) {
    public NslDinFloat1 x(inSize);                // input pattern
    public NslDinFloat1 t(outSize);               // target output
    public NslDoutFloat2 wh(inSize, hidSize);     // input-to-hidden weights
    public NslDoutFloat1 hh(hidSize);             // hidden thresholds
    public NslDoutFloat2 wo(hidSize, outSize);    // hidden-to-output weights
    public NslDoutFloat1 ho(outSize);             // output thresholds
    public NslDoutFloat1 yh(hidSize);             // hidden-layer output
    public NslDoutFloat1 yo(outSize);             // output-layer output

12. Network 2
    public Error error(outSize);
    public Update update(inSize, hidSize, outSize);

    private void forward() {
        yh = nslSigmoid(x*wh + hh);    // hidden layer: sigmoid of weighted input plus threshold
        yo = yh*wo + ho;               // output layer: linear in this implementation
    }

    public void initTrainEpoch() {
        // initialize weights and thresholds to random values in [-0.5, 0.5]
        wh = nslRandom(wh, (float)-0.5, (float)0.5);
        hh = nslRandom(hh, (float)-0.5, (float)0.5);
        wo = nslRandom(wo, (float)-0.5, (float)0.5);
        ho = nslRandom(ho, (float)-0.5, (float)0.5);
    }

13. Network 3
    public void makeConn() {
        nslRelabel(this.yo, error.yo);
        nslRelabel(this.t, error.t);
        nslRelabel(this.x, update.x);
        nslRelabel(this.yh, update.yh);
        nslRelabel(this.wh, update.wh);
        nslRelabel(this.hh, update.hh);
        nslRelabel(this.wo, update.wo);
        nslRelabel(this.ho, update.ho);
        nslConnect(error.err, update.deltaOut);
    }
    public void simRun() { forward(); }
    public void simTrain() { forward(); }
}

14. Error
nslImport nslAllImports;

nslModule Error (int outSize) {
    public NslDinFloat1 yo(outSize);
    public NslDinFloat1 t(outSize);
    public NslDoutFloat1 err(outSize);
    private NslFloat0 epsilon();
    public NslFloat0 tss();

    public void initTrain() {
        tss = 0.0;
    }

15. Error 2
    public void simTrain() {
        err = yo - t;
        tss = tss + nslSum(err ^ err);    // accumulate the total sum of squared errors
    }
    public void endTrain() {
        if (tss < epsilon) {
            nslPrintln("Convergence");
            verbatim_NSLJ;
            system.getScheduler().breakCycles();
            system.continueCmd();
            verbatim_off;
        }
    }
}

16. Update
nslImport nslAllImports;

nslModule Update (int inSize, int hidSize, int outSize) {
    public NslDinFloat1 deltaOut(outSize);      // output-layer error signal from the Error module
    public NslDinFloat1 x(inSize);
    public NslDinFloat1 yh(hidSize);
    public NslDoutFloat2 wh(inSize, hidSize);
    public NslDoutFloat1 hh(hidSize);
    public NslDoutFloat2 wo(hidSize, outSize);
    public NslDoutFloat1 ho(outSize);
    private NslFloat1 deltaHid(hidSize);        // hidden-layer error signal
    private NslFloat0 lrate();                  // learning rate (eta)

17. Update 2
    public void backward() {
        // hidden-layer error signal (slide 5): deltaHid = yh*(1-yh) elementwise (deltaOut * wo^T)
        deltaHid = nslElemMult(yh ^ ((float)1.0 - yh), nslProd(deltaOut, nslTrans(wo)));
        // output layer (slide 4); here err = yo - t, the opposite sign of slide 4,
        // so the updates subtract lrate*delta
        ho = ho - (lrate ^ deltaOut);
        wo = wo - nslProd(nslTrans(yh), (lrate ^ deltaOut));
        // hidden layer (slide 5)
        hh = hh - (lrate ^ deltaHid);
        wh = wh - nslProd(nslTrans(x), nslElemMult(lrate, deltaHid));
    }
    public void simTrain() { backward(); }
}

18. DisplayResult
nslImport nslAllImports;

nslOutModule DisplayResult (int inSize, int outSize) {
    private NslFloat1 in(inSize);
    private NslFloat1 outTemp(outSize);
    private NslFloat0 out();

    public void initModule() {
        nslAddAreaCanvas(in, 0, 1);
        nslAddAreaCanvas(out, 0, 1);
    }
    private void updateValues() {
        in = (NslFloat1)nslGetValue("backPropModel.network.x");
        outTemp = (NslFloat1)nslGetValue("backPropModel.network.yo");
        out = outTemp[0];
    }
    public void simTrain() { updateValues(); }
    public void simRun() { updateValues(); }
}

19. The Exclusive-Or (XOR)
Input    Output
0 0      0
0 1      1
1 0      1
1 1      0
XOR is not linearly separable.
Opening the BackProp Model script file for training.
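As a side illustration of "not linearly separable" (not part of the original slides), the plain-Python sketch below runs the classic perceptron learning rule, a single linear threshold unit, on the four XOR patterns; it never classifies all four correctly, which is why the hidden layer and backpropagation are needed.

patterns = [(0, 0), (0, 1), (1, 0), (1, 1)]
targets  = [0, 1, 1, 0]                     # XOR truth table from the slide above

w1, w2, b = 0.0, 0.0, 0.0                   # single linear threshold unit
lrate = 0.1
for epoch in range(1000):
    errors = 0
    for (x1, x2), t in zip(patterns, targets):
        y = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        if y != t:
            errors += 1
            w1 += lrate * (t - y) * x1      # perceptron learning rule
            w2 += lrate * (t - y) * x2
            b  += lrate * (t - y)
    if errors == 0:
        break

print("misclassified XOR patterns after training:", errors)   # never reaches 0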

20. Simulation Control
Before doing any training the simulator executes the system initialization initSys. We then execute the backprop model training script and, when the learning process finishes, we execute the running script to test the network against new input vectors.
The training phase: We set trainEndTime to the number of training patterns specified by nPats and trainDelta to 1.0, so that each epoch has as many training steps as there are training patterns. An epoch corresponds to a single pass over all patterns. We set numTrainEpochs to 1000, telling the system to keep training until some suitable ending condition stops it, in this case that the error is small enough; epsilon is the parameter that decides when learning should stop.
The running phase: We set runEndTime to 1.0 and runDelta to 1.0.

21. Visualization
The input to the network is set to "0 0". After the network has been run, the output becomes 0, as expected.

22. Training script
nsl set backPropModel.train.patterns { { 0 0 } { 0 1 } { 1 0 } { 1 1 } }
nsl set backPropModel.train.outputs { { 0 } { 1 } { 1 } { 0 } }
nsl set system.numTrainEpochs 1000
nsl set backPropModel.network.error.epsilon 0.0001
nsl set backPropModel.network.update.lrate 0.8
nsl doTrainEpochTimes

23. Parameter Assignment
We set epsilon to a value specifying when the error is small enough for the network to have reached an acceptable solution, for example 0.1 (10% of the output value):
    nsl set backPropModel.network.error.epsilon 0.1
The learning parameter η is represented by the lrate parameter, which determines how big a step the network takes when correcting errors. The learning rate for this problem was set to 0.8 for both the hidden and output layers:
    nsl set backPropModel.network.update.lrate 0.8
The learning rate is typically set between 0.01 and 1.0. If it is too large (close to 1), the network tends to oscillate and is likely to jump over the minimum. If it is too small (close to 0), training requires many cycles, although the network should eventually learn.

24. Running Script
for { set i 0 } { $i < 2 } { incr i } {
    for { set j 0 } { $j < 2 } { incr j } {
        set x [list $i $j]
        nsl set backPropModel.network.x $x
        nsl initRun
        nsl simRun
        nsl endRun
        puts "For { $i $j } the output is { [nsl get backPropModel.network.yo] }"
    }
}

25. Homework #2: Self-Organization of the FARS Visual System
- X: the visual input [shape, color, width, height]
- Z: the identity of the object
- Y1: AIP's grasp preference [side, power, precision, with aperture coding]
- Y2: RX's grasp preference [side, power, precision, with aperture coding]
- G: the final grasp command [side, power, precision, with aperture coding]
- WTA: a winner-take-all network (can be implemented non-neurally as well)
- RX: an unspecified region to be implemented as a backprop network, similar to IT and AIP.
[Block diagram: the input x feeds IT (producing z), AIP (producing y1), and RX (producing y2); a WTA combines y1 and y2 into the grasp command g.]

26. Adding Protocols
public void initModule() {
    …
    nslDeclareProtocol("basic", "Basic on/off");
    nslDeclareProtocol("transfer", "Overarm/Underarm transfer");
    nslDeclareProtocol("calibration", "Two gaze-throw calibrations");
    system.addProtocolToAll("basic");
    system.addProtocolToAll("transfer");
    system.addProtocolToAll("calibration");
}

public void basicProtocol() {
    nslPrintln("Basic on/off protocol selected");
    protocol = 0;
}

27. Adding Protocols 3
public void addProtocol(String name, NslModule module)
public String nslGetProtocol()
public void nslSetProtocol(String name)

28. Adding Protocols 2
public void initTrainEpoch() {
    nslRemoveFromLocalProtocols("manual");
    nslRemoveFromLocalProtocols("basic");
    nslRemoveFromLocalProtocols("transfer");
    nslRemoveFromLocalProtocols("calibration");
}

public void initRunEpoch() {
    nslAddProtocolRecursiveUp("basic");
    nslAddProtocolRecursiveUp("transfer");
    nslAddProtocolRecursiveUp("calibration");
}

