1 A Neural Network Implementation on the GPU
Sean M. O’Connell, CSC 7333, Spring 2008

2 Introduction
- Neural network processing: CPUs vs. GPUs
- Modern GPU parallelization
- Applying the GPU architecture to neural networks
- Exploiting parallel NN node computations
- Mappings to the GPU

3 NN Implementation Details
- Each layer fully connected to the next one
- Step activation function
- Back-propagation
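The layer structure listed above (fully connected layers, step activation, each node reading the previous layer's outputs) can be sketched on the CPU. This is a minimal illustration, not the presentation's actual code; the names `Layer`, `step`, and `feed_forward` are invented for the example:

```python
# Minimal fully connected layer with a step activation function.
# Illustrative sketch only; names are not from the original implementation.

def step(x):
    """Step activation: the node fires (1.0) when its net input is positive."""
    return 1.0 if x > 0.0 else 0.0

class Layer:
    def __init__(self, weights):
        # weights[i][j] = weight from input j to this layer's node i
        self.weights = weights

    def feed_forward(self, inputs):
        # Each node's input is the weighted sum of the previous layer's outputs
        return [step(sum(w * x for w, x in zip(node_weights, inputs)))
                for node_weights in self.weights]

# One output node computing logical AND; the third input is a constant 1.0
# acting as a bias, so the node fires only when both real inputs are 1.
and_gate = Layer([[1.0, 1.0, -1.5]])
print(and_gate.feed_forward([1.0, 1.0, 1.0]))  # prints [1.0]
print(and_gate.feed_forward([1.0, 0.0, 1.0]))  # prints [0.0]
```

Because every node in a layer evaluates the same weighted-sum-plus-activation independently, the whole list comprehension above is exactly the kind of work a GPU can run in parallel, one node per pixel.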

4 GPU Architecture
- Very different from the CPU
- Memory layout: textures, vertex arrays, matrices
- Devise a new GPU framework / architecture

5 Node Weights

6 Node Output
- Each node's input uses the previous layer's outputs

7 Neural Network Layers
- Back-propagation error data is stored in an ‘error’ texture

8 Implementation Details
- OpenGL 2.0
- Pixels plotted to the screen
- GLSL pixel shaders
- Frame Buffer Objects
- Vertex Buffer Objects
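The GPGPU pattern these pieces implement can be modeled abstractly: a texture is a 2D array, and one render pass applies the same small function (the pixel shader) independently to every output pixel. A toy CPU model of that idea, with invented names (`render_pass`, `feed_forward_shader`), assuming one node per output pixel:

```python
# Toy model of a GPGPU render pass: run a "shader" once per output pixel.
# Illustrative only; the real implementation uses GLSL shaders and FBOs.

def render_pass(shader, width, height, **textures):
    # Each (x, y) invocation is independent, exactly like fragment shader
    # executions on the GPU, so all of them could run in parallel.
    return [[shader(x, y, textures) for x in range(width)]
            for y in range(height)]

def feed_forward_shader(x, y, tex):
    # Pixel (x, y) computes node y: dot row y of the weights "texture"
    # with the previous layer's output "texture", then apply the step.
    weights, inputs = tex["weights"], tex["inputs"]
    total = sum(weights[y][j] * inputs[0][j] for j in range(len(inputs[0])))
    return 1.0 if total > 0.0 else 0.0

# Two nodes, two inputs: node 0 gets net input 0, node 1 gets net input 1
out = render_pass(feed_forward_shader, 1, 2,
                  weights=[[1.0, -1.0], [0.5, 0.5]],
                  inputs=[[1.0, 1.0]])
print(out)  # prints [[0.0], [1.0]]
```

On the real hardware, `render_pass` corresponds to drawing the node points with a Frame Buffer Object as the render target, and the returned grid corresponds to the layer's output texture.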

9 Pseudo Code

TrainGPUNeuralNetwork(input)
1. Copy training input to the input layer's output texture
2. Run input through the network
   a. Bind FeedForward pixel shader and associated parameters
   b. For each layer in network except the input layer:
      i.   Set layer.outputTexture as rendering target
      ii.  Bind layer.weightsTexture
      iii. Bind previousLayer.outputTexture
      iv.  Render node (x, y) points to the screen for pixel shader processing
      v.   Copy output to layer.outputTexture
3. Calculate errors for the output layer
   a. Bind CalcErrors pixel shader and associated parameters
   b. Bind outputLayer.errorTexture as rendering target
   c. Bind outputLayer.outputTexture
   d. Bind expectedOutputTexture
   e. Render node (x, y) points to the screen for pixel shader processing
   f. Copy output to outputLayer.errorTexture
4. Backpropagate results to the hidden layers
   a. Bind Backpropagate pixel shader and associated parameters
   b. For each hidden layer in network:
      i.   Set layer.errorTexture as rendering target
      ii.  Bind nextLayer.weightsTexture
      iii. Bind nextLayer.errorTexture
      iv.  Bind layer.outputTexture
      v.   Render node (x, y) points to the screen for pixel shader processing
      vi.  Copy output to layer.errorTexture
5. Update weights
   a. Bind UpdateWeights pixel shader and associated parameters
   b. For each layer in network except the input layer:
      i.   Set layer.weightsTexture as rendering target
      ii.  Bind layer.weightsTexture
      iii. Bind layer.errorTexture
      iv.  Bind layer.outputTexture
      v.   Render node (x, y) points to the screen for each weight value in layer.weightsTexture for pixel shader processing
      vi.  Copy output to layer.weightsTexture
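The four passes above (feed forward, output-layer errors, backpropagation, weight update) are the standard back-propagation steps, just expressed as render passes. A compact CPU equivalent is sketched below; it uses a sigmoid activation so the error terms are differentiable (an assumption for the sketch, since the slides list a step activation for the forward pass), and the function name `train_step` is invented:

```python
# CPU sketch of one training step mirroring the four GPU passes.
# Assumes sigmoid units; illustrative only, not the original code.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train_step(layers, inputs, expected, lr=0.5):
    # layers[l][i][j] = weight from node j of the previous layer to node i
    # Pass 1: feed forward, each layer consuming the previous layer's outputs
    outputs = [inputs]
    for W in layers:
        prev = outputs[-1]
        outputs.append([sigmoid(sum(w * x for w, x in zip(row, prev)))
                        for row in W])
    # Pass 2: output-layer error terms, delta = (t - o) * o * (1 - o)
    deltas = [[(t - o) * o * (1 - o) for t, o in zip(expected, outputs[-1])]]
    # Pass 3: backpropagate errors using the NEXT layer's weights and errors
    for l in range(len(layers) - 1, 0, -1):
        next_W, next_d, out = layers[l], deltas[0], outputs[l]
        deltas.insert(0, [out[i] * (1 - out[i]) *
                          sum(next_d[k] * next_W[k][i]
                              for k in range(len(next_d)))
                          for i in range(len(out))])
    # Pass 4: update each weight from its node's delta and its input value
    for l, W in enumerate(layers):
        for i, row in enumerate(W):
            for j in range(len(row)):
                row[j] += lr * deltas[l][i] * outputs[l][j]
    return outputs[-1]
```

Each texture in the pseudocode maps onto one of these arrays: `outputs` to the output textures, `deltas` to the error textures, and `layers` to the weights textures; every inner loop body is what one pixel-shader invocation computes.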

10 Test Hardware
- Intel Core Duo 2.2 GHz
- 2 GB DDR600 RAM
- Nvidia GeForce 7900 GTX 512 MB

11 Results

CPU Neural Network Training
# Nodes / HL | Trial 1 (s) | Trial 2 (s) | Trial 3 (s) | Average Time (s)
250          | 0.013368    | 0.009753    | 0.009765    | 0.010962
500          | 0.038946    | 0.038718    | 0.039813    | 0.039159
1000         | 0.158222    | 0.162031    | 0.166722    | 0.162325
2000         | 0.649959    | 0.627794    | 0.612034    | 0.629929
4000         | 2.352296    | 2.331196    | 2.341666    | 2.341719
8000         | 18.3456     | 18.0687     | 18.55736    | 18.20869

GPU Neural Network Training
# Nodes / HL | Trial 1 (s) | Trial 2 (s) | Trial 3 (s) | Average Time (s)
250          | 0.008848    | 0.014108    | 0.010849    | 0.009996
500          | 0.012363    | 0.008219    | 0.010619    | 0.009714
1000         | 0.010938    | 0.008703    | 0.00893     | 0.009451
2000         | 0.009136    | 0.009057    | 0.00873     | 0.009332
4000         | 0.008744    | 0.010662    | 0.009173    | 0.014823

12 Results

13 Conclusion
- GPU roughly 157x faster for 4000 nodes per hidden layer
- Many improvements can still be made
- The GPU is well suited for A.I. workloads

14 Questions?

References
[1] Tom M. Mitchell. Machine Learning. The McGraw-Hill Companies, 1997.
[2] OpenGL – The Industry Standard for High Performance Graphics. http://www.opengl.org
