Paul D. Reynolds, Russell W. Duren, Matthew L. Trumbo, Robert J. Marks II

Presentation transcript:

High Speed Implementation of Particle Swarm Optimization for Neural Networks
Paul D. Reynolds, Russell W. Duren, Matthew L. Trumbo, Robert J. Marks II

Original Problem
Determine the optimal sonar setup to maximize the ensonification of a grid of water.
Influences on ensonification:
- Environmental conditions: temperature, wind speed
- Bathymetry: bottom type, shape of bottom
- Sonar system
A total of 27 different factors are accounted for.

Ensonification Example
15 by 80 pixel grid
Red: high signal-to-interference ratio
Blue: low signal-to-interference ratio
Bottom: no signal

Original Solution
1. Take the current conditions
2. Match them to previous optimum sonar setups with similar conditions
3. Run the acoustic model using the current conditions and the previous optimum setups
4. Use the sonar setup with the highest signal-to-interference ratio

New Problem
Problem: one acoustic model run took tens of seconds.
Solution: train a neural network on the acoustic model (APL & University of Washington).

Neural Network Overview
Inspired by the human ability to recognize patterns: a mathematical structure able to mimic a pattern.
Trained using known data:
- Show the network several examples and identify each example
- The network learns the pattern
- Show the network a new case and let the network identify it

Neural Network Structure
Each neuron is the squashed sum of the inputs to that neuron.
A squash is a non-linear function that restricts outputs to between 0 and 1.
Each arrow is a weight times a neuron output.
[Diagram: inputs → weight layer → neurons → outputs]

Ensonification Neural Network
Taught using examples from the acoustic model.
Recognizes a pattern between the 27 given inputs and the 15 by 80 grid output.
27-40-50-70-1200 architecture
Squash = sigmoid: 1 / (1 + e^-x)
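As a plain software sketch (not the FPGA design), the slide's 27-40-50-70-1200 feedforward pass with a sigmoid squash might look like this; the weight values here are random placeholders, since the real network was trained on acoustic-model data:

```python
import math
import random

def squash(x):
    # sigmoid squash: restricts each neuron output to (0, 1)
    return 1.0 / (1.0 + math.exp(-x))

# Layer sizes from the slide: 27 inputs -> 40 -> 50 -> 70 -> 1200 outputs
SIZES = [27, 40, 50, 70, 1200]

random.seed(0)
# Placeholder weights; the trained network stores its own learned values
weights = [[[random.gauss(0, 0.1) for _ in range(n_in)] for _ in range(n_out)]
           for n_in, n_out in zip(SIZES[:-1], SIZES[1:])]

def forward(inputs):
    a = inputs
    for layer in weights:
        # each neuron: squashed weighted sum of the previous layer's outputs
        a = [squash(sum(w * x for w, x in zip(neuron, a))) for neuron in layer]
    return a

grid = forward([0.5] * 27)  # 1200 outputs = the 15 x 80 ensonification grid
```

As a sanity check on the architecture, 27·40 + 40·50 + 50·70 + 70·1200 = 90,580 weights, close to the 92,000 cited later in the talk (bias terms would account for most of the difference).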

Did the neural network solve the problem?
Yes: the neural network approximates the acoustic model in 5 ms.
However, the same method of locating the best setup remains:
- Run many possible setups through the neural network
- Choose the best
Problem: better, but still not real time.

How to find a good setup: Particle Swarm Optimization
Idea: several particles wandering over a fitness surface.
Math:
  x_{k+1} = x_k + v_k
  v_{k+1} = v_k + rand*w1*(Gb - x_k) + rand*w2*(Pb - x_k)
Theory:
- Momentum pushes particles around the surface
- Each particle is pulled toward its personal best (Pb)
- Each particle is pulled toward the global best (Gb)
- Eventually the particles oscillate around the global best
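A minimal software sketch of these update equations on a toy fitness function; the sphere function stands in for the neural-network fitness, and w1 = w2 = 2 plus the velocity clamp are common PSO choices, not values from the slides:

```python
import random

def pso(fitness, dim, n_particles=10, iters=200, w1=2.0, w2=2.0, vmax=1.0):
    """Particle swarm following the slide's equations:
       x_{k+1} = x_k + v_k
       v_{k+1} = v_k + rand*w1*(Gb - x_k) + rand*w2*(Pb - x_k)
    """
    random.seed(1)
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]               # personal bests (Pb)
    gbest = min(pbest, key=fitness)[:]       # global best (Gb)
    for _ in range(iters):
        for x, v, p in zip(xs, vs, pbest):
            for d in range(dim):
                v[d] += (random.random() * w1 * (gbest[d] - x[d])
                         + random.random() * w2 * (p[d] - x[d]))
                v[d] = max(-vmax, min(vmax, v[d]))  # keep velocities bounded
                x[d] += v[d]
            if fitness(x) < fitness(p):
                p[:] = x
                if fitness(p) < fitness(gbest):
                    gbest = p[:]
    return gbest

best = pso(lambda x: sum(xi * xi for xi in x), dim=3)  # toy sphere fitness
```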

Particle Swarm in Operation

Particle Swarm Optimization
27 inputs to the neural network: the sonar system setup.
Fitness surface calculated from the neural network output, with two options:
1. Match a desired output: sum the differences from the desired output; minimize the difference
2. Maximize the signal-to-interference ratio in an area: ignore output in undesired locations

Particle Swarm in Operation

New Problem Enters
Time for a 100k-step particle swarm using a 2.2 GHz Pentium: nearly 6 minutes.
A real-time version is desired.
Solution: implement the neural network and particle swarm optimization in parallel on reconfigurable hardware.

Implementation Hardware
SRC-6e Reconfigurable Computing Environment:
- 2 Intel microprocessors
- 2 Xilinx Virtex II Pro 6000 FPGAs: 100 MHz, 76,032 logic gates, 144 18x18 multipliers

Three Design Stages
1. Activation function design: the sigmoid is not efficient to calculate directly
2. Neural network design: parallel design
3. Particle swarm optimization: hardware implementation

Activation Function Design
- Fixed point design
- Sigmoid accuracy level
- Weight accuracy level

Fixed Point Design
Integer portion: data range of -50 to 85, 2's complement, 1 sign bit, 7 integer bits.
Fractional portion: sigmoid outputs are less than 1, so some number of fractional bits is needed.

Sigmoid Accuracy Level

Weight Accuracy Level

Total Accuracy

Fixed Point Results
16-bit number: 1 sign bit, 7 integer bits, 8 fractional bits.
Advantages: fits the 18 x 18 multipliers; packs into the 64-bit input banks.
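A software sketch of this 16-bit format (1 sign, 7 integer, 8 fractional bits); the rounding and saturation choices here are assumptions, not details from the slides:

```python
FRAC_BITS = 8                       # 8 fractional bits
WORD_BITS = 16                      # 1 sign bit + 7 integer bits + 8 fractional bits
SCALE = 1 << FRAC_BITS
Q_MIN = -(1 << (WORD_BITS - 1))     # -32768 -> -128.0, covers the -50 data minimum
Q_MAX = (1 << (WORD_BITS - 1)) - 1  #  32767 -> ~127.996, covers the 85 data maximum

def to_fixed(x):
    # quantize to the nearest multiple of 2^-8, saturating at the word limits
    return max(Q_MIN, min(Q_MAX, round(x * SCALE)))

def to_float(q):
    return q / SCALE

def fixed_mul(a, b):
    # a 16 x 16 product fits the FPGA's 18 x 18 multipliers;
    # shifting right by FRAC_BITS restores the scaling (truncating)
    return (a * b) >> FRAC_BITS
```

For example, to_fixed(1.5) is 384, and fixed_mul(to_fixed(1.5), to_fixed(2.0)) gives 768, i.e. exactly 3.0; quantization error is at most 2^-9 per value.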

Activation Function Approximation
Compared 4 designs:
1. Look-up table
2. Shift and add
3. CORDIC
4. Taylor series

Look-up Table
Advantages: unlimited accuracy; short latency of 3.
Disadvantages: an entirely on-chip design is desired, and the LUT will not fit on chip alongside the 92,000 weights.

Look-up Table

Shift and Add
y(x) = 2^-n * x + b
Advantages: small design; short latency of 5.
Disadvantages: piecewise outputs; limited accuracy.
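The slides do not give the design's actual segment table. As an illustration of the y = 2^-n * x + b form, here is the published PLAN sigmoid approximation (Amin, Curtis, Hayes-Gill, 1997), whose slopes are all powers of two, so each segment costs only a shift and an add in hardware:

```python
def sigmoid_shift_add(x):
    """Piecewise-linear sigmoid, every segment of the form y = 2^-n * x + b.

    Segment table from the PLAN approximation -- illustrative only; the
    talk's actual segments are not given. Maximum error is about 0.019.
    """
    sign_neg = x < 0
    a = abs(x)
    if a >= 5.0:
        y = 1.0
    elif a >= 2.375:
        y = a / 32 + 0.84375   # 2^-5 * x + 27/32
    elif a >= 1.0:
        y = a / 8 + 0.625      # 2^-3 * x + 5/8
    else:
        y = a / 4 + 0.5        # 2^-2 * x + 1/2
    return 1.0 - y if sign_neg else y   # sigmoid symmetry: s(-x) = 1 - s(x)
```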

Shift and Add

CORDIC Computation
1. Divide the argument by 2
2. Series of hyperbolic rotations → sinh(x), cosh(x)
3. Division for tanh(x)
4. Shift and add for the result
(Since sigmoid(x) = tanh(x/2)/2 + 1/2, halving the argument and a final shift-and-add recover the sigmoid.)
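A floating-point sketch of this pipeline, assuming the standard hyperbolic CORDIC rotation schedule (iterations 4 and 13 repeated); the CORDIC gain cancels in the sinh/cosh ratio, and range extension beyond the native convergence interval (|z| ≲ 1.12) is omitted:

```python
import math

def cordic_tanh(z, n_iter=16):
    """Hyperbolic CORDIC in rotation mode: drive z to 0, accumulate cosh in x
    and sinh in y. The gain cancels in y/x, so we can start from x = 1."""
    # iteration schedule with the classic repeats at i = 4 and 13
    schedule, i, next_repeat = [], 1, 4
    while len(schedule) < n_iter:
        schedule.append(i)
        if i == next_repeat:
            schedule.append(i)                 # repeat to guarantee convergence
            next_repeat = 3 * next_repeat + 1  # 4, 13, 40, ...
        i += 1
    schedule = schedule[:n_iter]

    x, y = 1.0, 0.0
    for i in schedule:
        d = 1.0 if z >= 0 else -1.0
        x, y = x + d * y * 2**-i, y + d * x * 2**-i   # shift-and-add rotations
        z -= d * math.atanh(2**-i)
    return y / x                                      # sinh/cosh = tanh

def sigmoid_cordic(x):
    # divide argument by 2, then shift and add for the result
    return cordic_tanh(x / 2) / 2 + 0.5
```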

CORDIC
Advantages: unlimited accuracy; a real calculation rather than an approximation.
Disadvantages: long latency of 50; large design.

CORDIC

Taylor Series
y(x) = a + b(x - x0) + c(x - x0)^2
Advantages: unlimited accuracy; average latency of 10; medium size design.
Disadvantages: 3 multipliers.
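A sketch of this quadratic scheme: expand the sigmoid around the nearest of a table of points x0. The unit spacing of the table is an assumption; the slide gives only the quadratic form. Note the evaluation costs exactly three multiplies, matching the disadvantage cited above:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One expansion point per unit interval (spacing is an assumption).
TABLE = []
for x0 in range(-8, 9):
    s = sigmoid(float(x0))
    a = s                              # sigmoid(x0)
    b = s * (1 - s)                    # sigmoid'(x0)
    c = s * (1 - s) * (1 - 2 * s) / 2  # sigmoid''(x0) / 2
    TABLE.append((float(x0), a, b, c))

def sigmoid_taylor(x):
    x0, a, b, c = min(TABLE, key=lambda t: abs(x - t[0]))  # nearest point
    d = x - x0
    return a + b * d + c * (d * d)     # three multiplies: b*d, d*d, c*(d*d)
```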

Taylor Series

Neural Network Design
Desired: 27-40-50-70-1200 architecture; maximum parallel design; entirely on-chip design.
Limitations: 92,000 16-bit weights in 144 RAMB16s; layers are serial; 144 18x18 multipliers.

Neural Network Design: Initial Test Design
Serial pipeline, one multiply per clock: 92,000 clocks ≈ 1 ms, equivalent to the PC.

Test Output: FPGA output vs. real output

Test Output: FPGA output vs. real output

Test Output: FPGA output vs. real output

Neural Network Design: Maximum Parallel Version
- 71 multiplies in parallel
- Zero-weight padding: treat all layers as the same length, 71
- 25-clock wait for the pipeline
- Total: 1475 clocks per network evaluation = 15 microseconds
- 60,000 network evaluations per second

Neural Network Design

Particle Swarm Optimization: 2 Chips in the SRC
Particle swarm chip: controls the inputs; sends them to the fitness chip; receives a fitness back.
Fitness chip: calculates the network; compares to the desired output.

Particle Swarm Implementation
Problem: randomness.
  v_{k+1} = v_k + rand*w1*(Gb - x_k) + rand*w2*(Pb - x_k)
Solution: remove the randomness?
  v_{k+1} = v_k + w1*(Gb - x_k) + w2*(Pb - x_k)
Does it work? Yes, but not as well: optimization takes more fitness evaluations.

Random vs. Deterministic
Deterministic: blue. Random: green/red.

Particle Swarm Chip
10 agents with preset starting points and velocities:
- 8 from previous data, random velocities
- 1 at maximum range, aimed down
- 1 at minimum range, aimed up
Restrictions: maximum velocity; range.

Update Equation Implementation
x_{k+1} = x_k + v_k
v_{k+1} = v_k + w1*(Gb - x_k) + w2*(Pb - x_k), implemented as v + (1/8)(Pb - x) + (1/16)(Gb - x)
[Dataflow diagram: the new position X+V is compared against Xmax and Xmin, and the new velocity against Vmax, before the per-dimension registers XnDim and VnDim are updated]
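The dataflow above can be sketched as one deterministic update step; treating the bounds as per-dimension lists is an implementation assumption:

```python
def update_particle(x, v, pbest, gbest, xmin, xmax, vmax):
    """One deterministic update per the dataflow slide:
       x_{k+1} = clamp(x_k + v_k) to [Xmin, Xmax]
       v_{k+1} = clamp(v_k + (1/8)(Pb - x_k) + (1/16)(Gb - x_k)) to [-Vmax, Vmax]
    The 1/8 and 1/16 weights are right-shifts in hardware, so the whole
    update needs only adders, subtractors, and comparators -- no multipliers.
    """
    new_x, new_v = [], []
    for xd, vd, pd, gd, lo, hi, vm in zip(x, v, pbest, gbest, xmin, xmax, vmax):
        nx = max(lo, min(hi, xd + vd))             # position, range-restricted
        nv = vd + (pd - xd) / 8 + (gd - xd) / 16   # shift-friendly weights
        nv = max(-vm, min(vm, nv))                 # velocity restriction
        new_x.append(nx)
        new_v.append(nv)
    return new_x, new_v

x1, v1 = update_particle([0.0], [1.0], [8.0], [16.0], [-10.0], [10.0], [2.0])
```

Here the unclamped new velocity would be 1 + 8/8 + 16/16 = 3, so the Vmax comparator holds it at 2.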

Results – Output Matching: 100k iteration PSO -> 1.76 s (swarm output vs. real output)

Results – Output Matching: 100k iteration PSO -> 1.76 s (swarm output vs. real output)

Results – Output Matching: 100k iteration PSO -> 1.76 s (swarm output vs. real output)

Particle Swarm – Area Specific: 100k iteration PSO -> 1.76 s

Particle Swarm – Area Specific: 100k iteration PSO -> 1.76 s

Particle Swarm – Area Specific: 100k iteration PSO -> 1.76 s

ANY QUESTIONS?