Gradient Checks for ANN


Gradient Checks for ANN
Yujia Bao, Mar 7, 2017

Finite Difference
Let f(x) be any differentiable function. We can approximate its derivative by
$$\frac{df(x)}{dx} = \frac{f(x+\epsilon) - f(x-\epsilon)}{2\epsilon} + O(\epsilon^2)$$
for some very small number ε.
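A minimal sketch of this central-difference approximation in Python (the function name numerical_derivative and the default ε are illustrative choices, not from the slides):

```python
import numpy as np

def numerical_derivative(f, x, eps=1e-5):
    """Central-difference estimate of df/dx, accurate to O(eps^2)."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Example: the derivative of sin at 0 is cos(0) = 1.
print(numerical_derivative(np.sin, 0.0))  # ~1.0
```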

How to compare the numerical gradient with the analytic gradient?

Relative Error
Let f'_n be the numerical gradient calculated using finite differences, and f'_a be the analytic gradient calculated using backprop. Define the relative error
$$\mathrm{error} = \frac{|f'_a - f'_n|}{\max\left(|f'_a|,\ |f'_n|,\ \text{eps}\right)}$$
where eps is a small constant that guards against division by zero.

Relative Error
error > 10^-4 usually means the analytic gradient is wrong.
error < 10^-4 is fine for sigmoid-like activations (logistic, tanh, softmax). But if you are using (leaky) ReLU, then 10^-4 might be too large, since the kink at zero makes the finite-difference estimate less reliable.
error < 10^-7 means your analytic gradient is correct.
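A small sketch combining the relative-error formula with these thresholds; relative_error and its eps default are illustrative choices, not from the slides:

```python
import numpy as np

def relative_error(grad_analytic, grad_numerical, eps=1e-8):
    """Relative error between analytic and numerical gradients.

    eps only guards against division by zero when both gradients are near zero.
    """
    num = np.abs(grad_analytic - grad_numerical)
    den = np.maximum(np.abs(grad_analytic), np.abs(grad_numerical))
    return num / np.maximum(den, eps)

err = relative_error(np.array([1.0001e-3]), np.array([1.0e-3]))
# Rough interpretation from the slide:
#   err > 1e-4  -> analytic gradient is probably wrong
#   err < 1e-4  -> acceptable for sigmoid-like activations
#   err < 1e-7  -> analytic gradient is correct
print(err)  # ~1e-4
```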

Debugging Procedure
Goal: check that the gradient for a single weight w is computed correctly.
Given: one example (input features with a label).
1. Forward prop and backward prop to get the analytic gradient for w.
2. Let w ← w + ε (I usually choose ε = 10^-5). Forward prop to get the output, and then compute the loss.
3. Let w ← w − 2ε (now w is w_origin − ε). Forward prop again and compute the loss. The numerical gradient is the difference between the two losses divided by 2ε.
4. Check the relative error between the analytic and numerical gradients.
5. Recover the original weight by w ← w + ε.
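A rough Python sketch of these five steps for one weight; loss_fn, backprop_fn, and the params dictionary are assumed helpers, not part of the original slides:

```python
def check_single_weight(loss_fn, backprop_fn, params, layer, idx, eps=1e-5):
    """Gradient check for a single weight params[layer][idx].

    loss_fn(params)     -> scalar loss on one example       (assumed helper)
    backprop_fn(params) -> dict of analytic gradient arrays (assumed helper)
    """
    analytic = backprop_fn(params)[layer][idx]   # step 1: analytic gradient

    params[layer][idx] += eps                    # step 2: w <- w + eps
    loss_plus = loss_fn(params)

    params[layer][idx] -= 2 * eps                # step 3: w <- w_origin - eps
    loss_minus = loss_fn(params)

    params[layer][idx] += eps                    # step 5: restore the weight

    numerical = (loss_plus - loss_minus) / (2 * eps)
    return abs(analytic - numerical) / max(abs(analytic), abs(numerical), 1e-8)  # step 4
```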

Debugging Procedure
Suppose our network has the following structure: Input -> Conv1 -> Pool1 -> Conv2 -> Pool2 -> ReLU -> Output.
If the gradients of the weights from Input to Conv1 are correct, then we are done, because those gradients depend on the backward pass through every later layer.
Otherwise, we check the gradients of the weights from Pool1 to Conv2 (there are no weights between Conv1 and Pool1). If those are correct, then the bug is in our backprop code from Pool1 back to the Input. And so on…
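One possible sketch of this layer-by-layer search, assuming a helper gradients_correct(name) that runs the single-weight check above over a layer's weights (both names are illustrative):

```python
def locate_backprop_bug(weighted_layers, gradients_correct):
    """weighted_layers: layer names ordered from input to output, e.g. ['Conv1', 'Conv2'].

    gradients_correct(name) -> True if that layer's weight gradients pass the check.
    """
    for i, name in enumerate(weighted_layers):
        if gradients_correct(name):
            if i == 0:
                return "All gradients are correct."
            # Backprop is correct from the loss down to this layer, so the bug
            # must be in the backward pass between this layer and the input.
            return f"Bug in backprop between {name} and the input (check {weighted_layers[:i]})."
    return "Every weighted layer fails: check the backward pass closest to the output/loss."
```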

Thanks.