Ghent University
Compact hardware for real-time speech recognition using a Liquid State Machine
Benjamin Schrauwen – Michiel D'Haene – David Verstraeten – Jan Van Campenhout
Electronics and Information Systems Department, Ghent University – Belgium
IJCNN 2007

2/18 Compact hardware for real-time speech recognition using a LSM – IJCNN, August 13, 2007
Intro
Goal: isolated-digit speech recognition in digital FPGA hardware
- Real-time processing
- As small as possible
Outline:
- First introduce LSM-based speech recognition
- Investigate two existing hardware architectures (too fast, too large)
- Introduce a new hardware architecture

3/18 LSM based speech recognition

4/18 Ear model

5/18 Liquid State Machine
Recurrent structures without the training: Reservoir Computing
- Jaeger (2001): Echo State Networks (engineering)
- Maass (2002): Liquid State Machines (neuroscience)
- Steil (2003): weight dynamics of Atiya-Parlos equivalent
Fixed, random topology operated in the correct dynamic regime
Different node types possible: THG, linear, tanh, spiking, …
A linear "readout" function which is trained (no local minima, no problems with the recurrent structure, one-shot learning)
On-line computing: a prediction at every time step
Any time-invariant filter with fading memory can be learned (with output feedback: universal computation)
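The fixed-random-reservoir-plus-trained-linear-readout scheme can be sketched in a few lines of Python. This is a minimal tanh echo state network rather than a spiking liquid; the network size, the spectral radius of 0.9, the ridge parameter, and the toy recall task are all illustrative assumptions, not values from the slides:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_in = 100, 1                      # illustrative reservoir/input sizes

# Fixed, random reservoir, rescaled into a suitable dynamic regime
W = rng.standard_normal((N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9
W_in = rng.uniform(-1.0, 1.0, (N, n_in))

def run_reservoir(u):
    """Collect reservoir states for an input sequence u of shape (T, n_in)."""
    x = np.zeros(N)
    states = []
    for t in range(len(u)):
        x = np.tanh(W @ x + W_in @ u[t])
        states.append(x.copy())
    return np.array(states)

def train_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained: one-shot ridge regression."""
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

u = rng.uniform(-1.0, 1.0, (200, n_in))
y = np.roll(u, 1, axis=0)             # toy task: recall the previous input
S = run_reservoir(u)
W_out = train_readout(S, y)
pred = S @ W_out                      # prediction at every time step
```

Because the reservoir weights stay fixed, training reduces to a single linear solve, which is the "no local minima, one-shot learning" point made on the slide.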

6/18 Reservoir computing

7/18 Influence of parameters
Not very important: connection fraction, exact topology, weight distribution
[Plots: error vs. timescale; error vs. dynamic regime (chaos); error vs. reservoir size]

8/18 Spiking Neural Networks
Incoming spikes influence the membrane potential
When a certain threshold θ is reached: reset and fire
[Figure: input spikes, membrane potential over time t, threshold θ, output spikes]

9/18 Readout and post-processing

10/18 Digital spiking neurons
SNN: mathematically a more complex model than an ANN
But: better implementable in hardware
- No weight multiplications: table look-up
- Filtering can be implemented using shifts and adds
- Interconnection is only a single bit, with sparse communication
- Asynchronous communication is easily implementable
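The shift-and-add filtering point can be sketched as follows: an exponential decay with leak factor 1 - 2^-4 = 0.9375 needs only a shift and a subtraction per step, and an incoming 1-bit spike just adds a weight fetched from a look-up table. The shift amount of 4 and the weight value are illustrative assumptions, not values from the slides:

```python
def decay_step(v):
    """One filter step in integer hardware:
    v <- v - (v >> 4), i.e. a leak factor of 1 - 1/16 = 0.9375,
    using only a shift and a subtraction, no multiplier."""
    return v - (v >> 4)

def add_spike(v, weight):
    """An incoming single-bit spike adds a weight obtained by
    table look-up, so no weight multiplication is needed."""
    return v + weight

v = 0
v = add_spike(v, 1024)      # a spike arrives; weight from the look-up table
for _ in range(3):
    v = decay_step(v)
# v decays 1024 -> 960 -> 900 -> 844 in pure integer arithmetic
```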

11/18 Digital spiking neurons
Hardware can take advantage of parallelism
But there is an area-speed trade-off: we don't have to make the implementation faster than the application needs
This trade-off calls for different implementations with different area-speed characteristics
Possible parallelisms:
- Network parallelism
- Neuron/synapse parallelism
- Arithmetic parallelism
We implemented:
- SPPA: network parallel, neuron serial, arithmetic parallel
- PPSA: network parallel, neuron parallel, arithmetic serial
- SPSA: network serial or parallel, neuron serial, arithmetic serial

12/18 SPPA
Much like a classical CPU [Roggen 2003][Upegui 2005]

13/18 PPSA
Uses FPGA features [Girau 2006][Schrauwen 2006]: SRL16 shift registers, serial adders
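The serial adder used in the arithmetic-serial style processes one bit of each operand per clock cycle, LSB first, keeping only a carry bit between cycles. A behavioural sketch of that idea (not the actual FPGA netlist):

```python
def bit_serial_add(a_bits, b_bits):
    """LSB-first bit-serial addition: one full adder plus a carry
    flip-flop, consuming one bit of each operand per clock cycle."""
    carry = 0
    s_bits = []
    for a, b in zip(a_bits, b_bits):
        s_bits.append(a ^ b ^ carry)                      # sum bit
        carry = (a & b) | (a & carry) | (b & carry)       # carry out
    return s_bits, carry

# 5 + 3, both LSB first: 101 -> [1, 0, 1], 011 -> [1, 1, 0]
s_bits, carry = bit_serial_add([1, 0, 1], [1, 1, 0])
# s_bits == [0, 0, 0] with carry == 1, i.e. binary 1000 = 8
```

In hardware this costs a single full adder per processing element regardless of word width, which is what makes the very small PEs of the serial architectures possible.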

14/18 SPSA
- Everything serial
- PEs are very small (4 LUTs)
- Many parallel PEs possible
- SIMD controller architecture
- Memory-based interconnect

15/18 Area-speed trade-off for the speech task
Speech task in hardware:
- LSM with 200 neurons
- 16 kHz processing speed
- Real-time requirement

              LUTs    memory    Real-time
SPPA           ?       ? kbit     347
PPSA           ?       ? kbit     205
SPSA 10 PE     ?       ? kbit     2.2
SPSA 5 PE      ?       ? kbit     1.1
SPSA 1 PE      ?       ? kbit     0.23
(LUT counts and memory sizes did not survive in this transcript)

16/18 RC Toolbox
- Freely available RC Matlab toolbox
- Simulation models for hardware quantization
HW design methodology:
- Generic network and node settings
- Readout pipeline
- Generate network with hardware constraints
- Evaluate node quantization effects
- Automatically export to HW description
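The "evaluate node quantization effects" step can be mimicked in software by rounding node outputs to a fixed-point grid before they are used. This is a generic sketch, not the toolbox's actual API; the choice of 4 fractional bits and the tanh test values are illustrative assumptions:

```python
import numpy as np

def quantize(x, frac_bits):
    """Round values to a fixed-point grid with frac_bits fractional
    bits, mimicking the finite precision of a hardware node."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

# Example node outputs and their worst-case quantization error
x = np.tanh(np.linspace(-2.0, 2.0, 101))
err = np.max(np.abs(x - quantize(x, 4)))
# err is bounded by half an LSB: 0.5 / 2**4 = 0.03125
```

Sweeping `frac_bits` and re-measuring the task error is one way to pick the smallest node precision that still meets the application's accuracy requirement.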

17/18 System
[System diagram on a Xilinx ML401 board: clock domains of 12 MHz, 100 MHz and 150 MHz; interconnect at 50 MHz]

18/18 Conclusions
- Real-time speech recognition is possible in HW with very limited hardware
- Presented a novel architecture for SNN implementation in HW
- Enlarges the area/speed design space drastically
- Uses the RC Toolbox simulation environment for easy porting
- Future work: experiment with different applications; add further SNN features such as STDP, intrinsic plasticity (IP), and dynamic synapses; add the possibility to change the weights