
The Updated experiment based on LSTM




1 The Updated experiment based on LSTM
Raymond ZHAO Wenlong

2 Content
Introduction
The updated experiments based on LSTM (Long Short-Term Memory)
TODO

3 Introduction
Develop a new product configuration approach for the e-commerce industry to elicit customer needs.
Collect online user reviews (laptops) as inputs.
Query-to-attributes mapping: map user inputs (the functional requirements in an unstructured query) to product parameters or features (structured attributes).
Text classification => similar to Sentiment Classification (SentiC) on the Stanford Sentiment Treebank of movie reviews.
Example query: "A large screen size laptop"
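To make the mapping concrete, below is a minimal sketch of the kind of LSTM text classifier this setup implies, written with Keras (which the later slides reference). The vocabulary size, embedding width, and number of attribute classes are illustrative assumptions, not the experiment's actual settings.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size = 20000   # assumed vocabulary size
num_classes = 5      # assumed number of structured attribute labels

# Reviews arrive as padded sequences of word IDs; the LSTM encodes each
# sequence and a softmax layer predicts an attribute class.
model = Sequential([
    Embedding(vocab_size, 128),               # word IDs -> dense vectors
    LSTM(64),                                 # sequence -> fixed-size encoding
    Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```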

4 The Updated experiments
epoch = 4 (in this experiment). An epoch is generally defined as "one pass over the entire training dataset" (reference: Keras). But why do we use more than one epoch? The data (in ML) is too big to feed to the computer at once, so we divide it into a number of batches; at each step the weights are updated based on the loss function.
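In Keras terms, the epoch/batch split looks like the sketch below, continuing the model above; x_train, y_train, and the batch size are placeholders, not the experiment's actual data or settings.

```python
# One epoch = one pass over the full training set. The set is split into
# batches, and the weights are updated after every batch (one step).
history = model.fit(x_train, y_train,
                    epochs=4,       # matches the epoch = 4 setting here
                    batch_size=32)  # steps per epoch = ceil(len(x_train) / 32)
```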

5 Gradient Descent Alg (Reference from quora)
When the data (in ML) is too big to pass to the computer in one epoch (at once), we divide it into a number of batches, feed it to the computer batch by batch, and update the weights of the neural network at the end of every step to fit it to the given data. A limited dataset (batches) and an iterative optimization algorithm (such as SGD or AdaGrad) are used in ML to find the best result (the minimum of a curve). The loss decreases during learning as the learning-rate parameter in gradient descent algorithms becomes smaller, taking shorter steps. (Reference from quora)
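As a worked illustration of the update rule w <- w - lr * grad(L), here is a minimal sketch of mini-batch gradient descent with a decaying learning rate; grad_loss and the 1/(1 + decay * step) schedule are assumptions for illustration, not the experiment's actual optimiser.

```python
# Minimal sketch of mini-batch SGD: one weight update per batch, with the
# step size shrinking as training proceeds.
def sgd(w, batches, grad_loss, epochs=4, lr0=0.1, decay=0.01):
    step = 0
    for _ in range(epochs):                      # several passes over the data
        for x_b, y_b in batches:                 # one update per batch (step)
            lr = lr0 / (1.0 + decay * step)      # learning rate decays over time
            w = w - lr * grad_loss(w, x_b, y_b)  # move against the gradient
            step += 1
    return w
```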

6 Gradient Descent Alg: the experiment on our server
Why do we use more than one epoch?

7 Epoch (Reference from quora)
Updating the weights with one epoch (a single pass) is not enough. We use a limited dataset (batches), and to optimise the learning we use gradient descent algorithms, which are an iterative process. One epoch leads to an underfitted curve in the graph, so we need to pass the full dataset through the same neural network multiple times. As the number of epochs increases, the weights are changed more times in the neural network, and the curve goes from underfitting to optimal to overfitting. => What is the right number of epochs? <- It is found from experiments based on your data. (Reference from quora)
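One practical way to answer "what is the right number of epochs" is early stopping on a validation split; this sketch assumes Keras's EarlyStopping callback and placeholder x_val / y_val data, not the experiment's actual protocol.

```python
from tensorflow.keras.callbacks import EarlyStopping

# Train with a generous epoch budget and stop once validation loss stops
# improving, i.e. just before the curve drifts from optimal to overfitting.
early_stop = EarlyStopping(monitor="val_loss", patience=2,
                           restore_best_weights=True)
history = model.fit(x_train, y_train,
                    validation_data=(x_val, y_val),  # held-out split (placeholder)
                    epochs=50,                       # upper bound; usually stops earlier
                    batch_size=32,
                    callbacks=[early_stop])
```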

8 TODO
All experiments:
RNN-LSTM
LSTM with attention

9 Thanks

