
1 GENERATING TEXT WITH RECURRENT NEURAL NETWORKS. Ilya Sutskever, James Martens, and Geoffrey Hinton, ICML 2011. 2013-4-1, Institute of Electronics, NCTU. Advisor: S. J. Wang. Student: K. T. Chen.

2 Outline: Introduction (Motivation; What is an RNN?; Why do we choose RNNs for this problem?; How do we train an RNN?), Contribution (Character-level language modeling; The multiplicative RNN), The Experiments, Discussion

3 Outline: Introduction (Motivation; What is an RNN?; Why do we choose RNNs for this problem?; How do we train an RNN?), Contribution (Character-level language modeling; The multiplicative RNN), The Experiments, Discussion

4 Motivation: Read some sentences and then try to predict the next character. Prompt: "Easter is a Christian festival and holiday celebrating the resurrection of Jesus Christ ...?" Completion: "Easter is a Christian festival and holiday celebrating the resurrection of Jesus Christ on the third day after his crucifixion at Calvary, as described in the New Testament."

5 Recurrent neural networks. A recurrent neural network (RNN) is a class of neural network in which connections between units form a directed cycle. [Diagrams: a feed-forward network (input, hidden, output layers) versus a recurrent network whose hidden layer feeds back into itself.]
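A minimal NumPy sketch of one RNN time step (illustrative names and shapes, not the authors' code): the new hidden state depends on both the current input and the previous hidden state, which is what creates the directed cycle.

```python
# One step of a plain (Elman-style) RNN.
import numpy as np

def rnn_step(x_t, h_prev, W_hx, W_hh, W_oh, b_h, b_o):
    h_t = np.tanh(W_hx @ x_t + W_hh @ h_prev + b_h)   # hidden state ("memory")
    o_t = W_oh @ h_t + b_o                            # output computed from the hidden state
    return h_t, o_t
```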

6 Why do we choose RNNs? RNNs are well suited to sequential data because the hidden state acts as a memory; unrolled, an RNN is a neural network in time. [Diagram: inputs, hidden states, and predictions at time steps t-1, t, t+1.]

7 How to train an RNN? Backpropagation through time (BPTT): unroll the network over time, so the gradient is easy to compute with ordinary backpropagation. The RNN learns by minimizing the training error. [Diagram: the unrolled network with inputs, hidden states, and predictions at time steps t-1, t, t+1.] A sketch of BPTT follows.
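A minimal BPTT sketch in NumPy (illustrative, biases omitted; not the authors' implementation): run the forward pass over a character sequence, then push gradients backwards through the unrolled graph.

```python
import numpy as np

def bptt(xs, ys, h0, W_hx, W_hh, W_oh):
    """xs, ys: lists of one-hot vectors (inputs and next-character targets)."""
    hs, ps = {-1: h0}, {}
    loss = 0.0
    for t, x in enumerate(xs):                         # forward pass, unrolled in time
        hs[t] = np.tanh(W_hx @ x + W_hh @ hs[t - 1])
        logits = W_oh @ hs[t]
        e = np.exp(logits - logits.max())
        ps[t] = e / e.sum()                            # softmax over characters
        loss -= np.log(ps[t][ys[t].argmax()])          # cross-entropy training error
    dW_hx, dW_hh, dW_oh = (np.zeros_like(W) for W in (W_hx, W_hh, W_oh))
    dh_next = np.zeros_like(h0)
    for t in reversed(range(len(xs))):                 # backward pass through time
        do = ps[t] - ys[t]                             # softmax + cross-entropy gradient
        dW_oh += np.outer(do, hs[t])
        dh = W_oh.T @ do + dh_next                     # gradient flowing into h_t
        dz = (1.0 - hs[t] ** 2) * dh                   # back through tanh
        dW_hx += np.outer(dz, xs[t])
        dW_hh += np.outer(dz, hs[t - 1])
        dh_next = W_hh.T @ dz                          # pass gradient to the previous step
    return loss, (dW_hx, dW_hh, dW_oh)
```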

8 Backpropagation through time

9 RNNs are hard to train. They can be volatile and exhibit long-range sensitivity to small parameter perturbations (the "butterfly effect"). The vanishing gradient problem makes gradient descent ineffective. [Diagram: outputs, hidden states, and inputs unrolled over time.]

10 How to overcome the vanishing gradient? Long Short-Term Memory (LSTM): modify the architecture of the neural network (memory cells with write, keep, and read operations on the stored data). Hessian-Free optimizer (James Martens et al., 2011): based on Newton's method combined with the conjugate gradient algorithm. Echo State Networks: only learn the hidden-to-output weights.

11 Outline: Introduction (Motivation; What is an RNN?; Why do we choose RNNs for this problem?; How do we train an RNN?), Contribution (Character-level language modeling; The multiplicative RNN), The Experiments, Discussion

12 Character-level language modeling. The RNN observes a sequence of characters; the target output at each time step is defined as the input character at the next time step. The hidden state stores the relevant information. [Example with "Hello": inputs H, e, l, l, o; targets e, l, l, o; after each step the hidden state encodes the prefix "H", "He", "Hel", "Hell", "Hello".]
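A tiny sketch of how input/target pairs are formed (hypothetical helper, just to make the one-step shift explicit):

```python
# Next-character prediction: the target at step t is the input character at step t+1.
text = "Hello"
inputs, targets = list(text[:-1]), list(text[1:])
for x, y in zip(inputs, targets):
    print(f"input {x!r} -> target {y!r}")   # H->e, e->l, l->l, l->o
```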

13 The standard RNN. [Diagram: the input character is encoded as a 1-of-86 vector, feeds the hidden layer, and a softmax output predicts the distribution for the next character.]
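A minimal sketch of the 1-of-86 input encoding and the softmax output (the weights and hidden state below are random placeholders, not trained values):

```python
import numpy as np

VOCAB, HIDDEN = 86, 1500                      # character vocabulary and hidden size from the slides

def one_hot(index, size=VOCAB):
    v = np.zeros(size)
    v[index] = 1.0                            # 1-of-86 character encoding
    return v

def softmax(logits):
    z = np.exp(logits - logits.max())         # shift for numerical stability
    return z / z.sum()

W_oh = 0.01 * np.random.randn(VOCAB, HIDDEN)  # output weights (placeholder)
h_t = np.random.randn(HIDDEN)                 # some hidden state (placeholder)
p_next = softmax(W_oh @ h_t)                  # predicted distribution over the next character
```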

14 Some motivation from modeling a tree. Each node is a hidden state vector, and the next character must transform it into a new node, so the next hidden state needs to depend on the conjunction of the current character and the current hidden representation. [Example tree: the node "..fix" branches on "i" to "..fixi" (and then on "n" to "..fixin") and on "e" to "..fixe".]

15 The Multiplicative RNN. They tried several neural network architectures and found the multiplicative RNN (MRNN) to be more effective than the regular RNN. [Diagram: the hidden-to-hidden weight matrix is chosen by the current input character.]

16 The Multiplicative RNN. Naive implementation: assign a separate transition matrix to each character. This requires a lot of parameters (86 x 1500 x 1500, roughly 194 million), could make the net overfit, and is difficult to parallelize on a GPU. Instead, factorize the per-character matrices: fewer parameters and easier to parallelize (see the sketch after the next slide).

17 The Multiplicative RNN. We can get groups a and b to interact multiplicatively by using "factors". [Diagram: groups a and b feed a factor f, which drives group c; each factor computes a scalar coefficient and contributes a rank-1 (outer product) transition matrix.]
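A NumPy sketch of the factorization from these two slides: instead of one 1500x1500 matrix per character, the character-dependent transition is built from three shared matrices, with one scalar coefficient per factor chosen by the current character (weight names are illustrative):

```python
import numpy as np

H, F, V = 1500, 1500, 86                 # hidden units, multiplicative factors, characters

W_fh = 0.01 * np.random.randn(F, H)      # hidden state -> factor inputs
W_fx = 0.01 * np.random.randn(F, V)      # character    -> per-factor scalar coefficients
W_hf = 0.01 * np.random.randn(H, F)      # factors      -> next hidden state

def mrnn_hidden_step(x_t, h_prev):
    """x_t: 1-of-86 character vector, h_prev: previous hidden state."""
    gains = W_fx @ x_t                   # one scalar coefficient per factor, chosen by the character
    f = gains * (W_fh @ h_prev)          # factor activities: multiplicative interaction
    return np.tanh(W_hf @ f)             # next hidden state

# Equivalently, the character selects the transition matrix
#   W(x_t) = W_hf @ diag(W_fx @ x_t) @ W_fh,
# a sum over factors of rank-1 (outer product) terms.
```

With 1500 factors, the hidden transition needs roughly 2 x 1500 x 1500 + 1500 x 86 parameters instead of 86 x 1500 x 1500 for the naive per-character matrices.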

18 The Multiplicative RNN. [Diagram: the 1-of-86 input character and the 1500 hidden units both feed the factor layer f, which produces the next 1500-unit hidden state; the output predicts the distribution for the next character.]

19 The Multiplicative RNN. [Diagram: the MRNN unrolled over time steps t-1, t, t+1, t+2, with input characters at the bottom and outputs at the top.]

20 The Multiplicative RNN: key advantages. The MRNN combines the conjunction of context and character more easily, and it has two nonlinearities per time step, which make its dynamics even richer and more powerful. [Example: after "fix" the model predicts "i", "e", or "_"; after also reading "i" (context "fixi") it predicts "n".]

21 Outline: Introduction (Motivation; What is an RNN?; Why do we choose RNNs for this problem?; How do we train an RNN?), Contribution (Character-level language modeling; The multiplicative RNN), The Experiments, Discussion

22 The Experiments. Training on three large datasets: ~1GB of the English Wikipedia, ~1GB of articles from the New York Times, and ~100MB of JMLR and AISTATS papers. Compared with the Sequence Memoizer (Wood et al.) and PAQ (Mahoney et al.).

23 Training on subsequences. The training text is an extremely long string (millions of characters). It is cut into overlapping subsequences of length 250, each starting one character after the previous one ("This is an extre...", "his is an extrem...", "is is an extreme...", and so on). Compute the gradient and the curvature on a subset of the subsequences, and use a different subset at each iteration.
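A sketch of the subsequence construction (hypothetical helpers, not the authors' pipeline):

```python
import random

def make_subsequences(text, length=250):
    """All length-250 windows, each starting one character after the previous one."""
    return [text[i:i + length] for i in range(len(text) - length + 1)]

def sample_minibatch(subsequences, batch_size=32):
    """Pick a different random subset at each iteration for gradient/curvature estimates."""
    return random.sample(subsequences, k=min(batch_size, len(subsequences)))

subs = make_subsequences("This is an extremely long string of text. " * 100)
batch = sample_minibatch(subs)
```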

24 Parallelization. Use the HF optimizer to evaluate the gradient and curvature on large minibatches of data. [Diagram: the data is split across GPUs, and the per-GPU gradient and curvature contributions are summed.]

25 The architecture of the model. 1500 hidden units and 1500 multiplicative factors, trained on 250-character subsequences. Unrolled in time, it was arguably the largest and deepest neural network ever trained at that point. [Diagram: inputs, hidden states, and predictions of the unrolled network.]

26 Demo. The MRNN extracts "higher-level information", stores it for many time steps, and uses it to make a prediction. Sample generated text illustrating parentheses sensitivity: (Wlching et al. 2005) the latter has received numerical testimony without much deeply grow (Wlching, Wulungching, Alching, Blching, Clching et al." 2076) and Jill Abbas, The Scriptures reported that Achsia and employed a the sequence memoizer (Wood et al McWhitt), now called "The Fair Savings.'"" interpreted a critic. In t Wlching ethics, like virtual locations. The signature tolerator is necessary to en Wlching et al., or Australia Christi and an undergraduate work in over knowledge, inc They often talk about examples as of January 19,. The "Hall Your Way" (NRF film) and OSCIP Her image was fixed through an archipelago's go after Carol^^'s first century, but simply to

27 Outline: Introduction (Motivation; What is an RNN?; Why do we choose RNNs for this problem?; How do we train an RNN?), Contribution (Character-level language modeling; The multiplicative RNN), The Experiments, Discussion

28 Discussion. The text generated by the MRNN contains very few non-words (e.g., "cryptoliation" and "homosomalist" are among the rare exceptions). Because it works at the character level, the MRNN can also deal with real words that it did not see in the training set. With more computational power, much bigger MRNNs could be trained, with millions of units and billions of connections.

29 References: Generating Text with Recurrent Neural Networks, Ilya Sutskever, James Martens, and Geoffrey Hinton, ICML 2011. Factored Conditional Restricted Boltzmann Machines for Modeling Motion Style, Graham W. Taylor and Geoffrey E. Hinton. Coursera: Neural Networks for Machine Learning, Geoffrey Hinton. http://www.cs.toronto.edu/~ilya/rnn.html

