
Parallel FPGA Particle Filtering for Real-Time Neural Signal Processing
John Mountney
Co-advisors: Iyad Obeid and Dennis Silage

Outline
- Introduction to Brain Machine Interfaces
- Decoding Algorithms
- Evaluation of the Bayesian Auxiliary Particle Filter
- Algorithm Implementation in Hardware
- Proposed Future Work

Brain Machine Interface (BMI)
A BMI is a device that directly interacts with ensembles of neurons in the central nervous system.

Applications of the BMI
- Gain knowledge of the operation and functionality of the brain
- Decode neural activity to estimate intended biological signals (neuroprosthetics)
- Encode signals that can be interpreted by the brain (cochlear, retinal implants)

Interpreting Neural Activity
- The neural tuning model is the key component in encoding and decoding biological signals
- Given the current state x(t) of a neuron, the model describes its firing behavior in response to a stimulus

Tuning Function Example
- Place cells fire when an animal is in a specific location and are responsible for spatial mapping
- Assumed firing model (Gaussian): λ(x) = λ_max · exp(−(x − μ)² / (2σ²))
- Maximum firing rate: λ_max
- Center of the receptive field: μ
- Width of the receptive field: σ
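A minimal sketch of this Gaussian tuning model; the parameter values are illustrative assumptions, not those used in the experiments:

```python
import numpy as np

def place_cell_rate(x, lam_max=20.0, mu=150.0, sigma=12.0):
    """Gaussian place-cell tuning: firing rate (spikes/s) as a
    function of position x (cm) on the track."""
    return lam_max * np.exp(-(x - mu) ** 2 / (2.0 * sigma ** 2))

# Example: firing rate at several positions along a 300 cm track
positions = np.array([100.0, 140.0, 150.0, 160.0, 200.0])
print(place_cell_rate(positions))
```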

Neural Plasticity
- Neural plasticity can be the result of environmental changes, learning, acting or brain injury
- Based on how active a neuron is during an experience, its synapses grow stronger or weaker
- Plasticity results in a dynamic state vector for the neural tuning model

Time-varying Tuning Function
- Dynamic firing model: λ(x, t) = λ_max(t) · exp(−(x − μ(t))² / (2σ²(t)))
- Dynamic state vector: x(t) = [λ_max(t), μ(t), σ(t)]ᵀ
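One way to picture this in simulation (a sketch; the drift magnitudes are assumptions): the tuning parameters follow a slow random walk that the decoder must track online.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_tuning_state(state, q=(0.05, 0.1, 0.02)):
    """Random-walk drift of the tuning state [lam_max, mu, sigma]."""
    return state + rng.normal(0.0, q, size=3)

state = np.array([20.0, 150.0, 12.0])  # initial [lam_max, mu, sigma]
for _ in range(5):
    state = evolve_tuning_state(state)
    print(state)
```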

Decoding Algorithms

Wiener Filter
- Linear transversal filter
- Coefficients minimize the mean-square error between the filter output and a desired response
- Applied in recreating center-out reaching tasks and 2D cursor movements (Gao, 2002)
- Assumes the input signal is stationary and has an invertible autocorrelation matrix
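A minimal sketch of the Wiener-Hopf solution w = R⁻¹p on synthetic data (all names and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stationary input X (n_samples x n_taps) and desired response d
X = rng.normal(size=(1000, 8))
true_w = rng.normal(size=8)
d = X @ true_w + 0.1 * rng.normal(size=1000)

# Wiener-Hopf solution: w = R^{-1} p, with R the input autocorrelation
# matrix and p the cross-correlation between input and desired response
R = X.T @ X / len(X)
p = X.T @ d / len(X)
w = np.linalg.solve(R, p)   # fails if R is singular (non-invertible)
```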

Least Mean Square (LMS)
- Iterative algorithm that converges to the Wiener solution
- Avoids inverting the input autocorrelation matrix, providing computational savings
- If the autocorrelation matrix is ill-conditioned, a large number of iterations may be required for convergence
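A sketch of the LMS update w ← w + μ·e(n)·x(n) on the same kind of synthetic data (the step size μ is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 8))
d = X @ rng.normal(size=8) + 0.1 * rng.normal(size=1000)

mu = 0.01                 # step size (must be small enough for stability)
w = np.zeros(8)
for x_n, d_n in zip(X, d):
    e = d_n - w @ x_n     # instantaneous error
    w += mu * e * x_n     # stochastic gradient step toward the Wiener solution
```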

Kalman Filter
- Solves the same problem as the Wiener filter without the constraint of stationarity
- Recursively updates the state estimate using current observations
- Applied in arm-movement reconstruction experiments (Wu, 2002)
- Assumes all noise processes have a known Gaussian distribution
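A minimal sketch of one Kalman predict/update cycle for a generic linear-Gaussian model (the matrices are placeholders, not the tuning model):

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One predict/update cycle for the linear-Gaussian model
    x_k = A x_{k-1} + w,  y_k = C x_k + v,  w~N(0,Q), v~N(0,R)."""
    # Predict
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    # Update
    S = C @ P_pred @ C.T + R                # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = (np.eye(len(x)) - K @ C) @ P_pred
    return x_new, P_new
```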

Extended Kalman Filter
- Attempts to linearize the model around the current state through a first-order Taylor expansion
- Successfully implemented in the control and tracking of spatiotemporal cortical activity (Schiff, 2008)
- State-transition and measurement functions must be differentiable
- Requires evaluation of Jacobians at each iteration
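A sketch of one EKF cycle, using numerical Jacobians purely for illustration (analytic Jacobians would normally be derived; f and h are placeholder nonlinear functions):

```python
import numpy as np

def jacobian(fn, x, eps=1e-6):
    """Numerical Jacobian of fn at x (finite differences)."""
    fx = fn(x)
    J = np.zeros((len(fx), len(x)))
    for i in range(len(x)):
        dx = np.zeros_like(x); dx[i] = eps
        J[:, i] = (fn(x + dx) - fx) / eps
    return J

def ekf_step(x, P, y, f, h, Q, R):
    """One EKF cycle: linearize f and h around the current estimate."""
    F = jacobian(f, x)                      # state-transition Jacobian
    x_pred = f(x)
    P_pred = F @ P @ F.T + Q
    H = jacobian(h, x_pred)                 # measurement Jacobian
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```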

Unscented Kalman Filter
- The probability density is approximated by transforming a set of sigma points through the nonlinear prediction and update functions
- It is easier to approximate a probability distribution than to approximate an arbitrary nonlinear transformation
- Recently applied in real-time closed-loop BMI experiments (Li, 2009)

Unscented Kalman Filter (cont.)
- Statistical properties of the transformed sigma points become distorted through the linearization process
- If the initial state estimates are incorrect, filter divergence can quickly become an issue
- A Gaussian environment is still assumed
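A sketch of the unscented transform that underlies the UKF, using the standard 2n+1 sigma points (the scaling parameters are the usual defaults, an assumption here):

```python
import numpy as np

def unscented_transform(fn, x, P, alpha=1e-3, beta=2.0, kappa=0.0):
    """Propagate mean x and covariance P through the nonlinear fn
    using the standard 2n+1 sigma points."""
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * P)
    sigma = np.vstack([x, x + S.T, x - S.T])      # 2n+1 sigma points
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))      # mean weights
    Wc = Wm.copy()                                # covariance weights
    Wm[0] = lam / (n + lam)
    Wc[0] = Wm[0] + (1 - alpha**2 + beta)
    Y = np.array([fn(s) for s in sigma])          # transformed points
    y_mean = Wm @ Y
    y_cov = (Y - y_mean).T @ np.diag(Wc) @ (Y - y_mean)
    return y_mean, y_cov
```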

Particle Filtering
- Numerical solution to nonlinear, non-Gaussian state-space estimation
- Uses Monte Carlo integration to approximate analytically intractable integrals
- Represents the posterior density by a set of randomly chosen weighted samples, or particles
- Based on the current observations, each particle is weighted by how well it represents the posterior

Resampling
- Replicate particles with high weights, discard particles with small weights
- Higher-weighted particles are more likely to approximate the posterior with better accuracy
- Known as the sampling importance resampling (SIR) particle filter (Gordon, 1993)
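A minimal sketch of multinomial resampling, the scheme used in the original SIR filter (the generator seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)

def resample(particles, weights):
    """Multinomial resampling: draw N indices with probability equal to
    the normalized weights, duplicating strong particles and dropping
    weak ones. Weights are reset to 1/N afterwards."""
    N = len(particles)
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N)
```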

SIR Particle Filtering Algorithm
- Sample each particle from a proposal density π that approximates the current posterior: x_k^(i) ~ π(x_k | x_{k-1}^(i), y_k)
- Assign each particle a weight based on how probable its sample would be under the target posterior: w_k^(i) ∝ w_{k-1}^(i) · p(y_k | x_k^(i)) · p(x_k^(i) | x_{k-1}^(i)) / π(x_k^(i) | x_{k-1}^(i), y_k)

SIR Particle Filtering Algorithm
- Normalize the particle weights: w̃_k^(i) = w_k^(i) / Σ_j w_k^(j)
- Perform resampling
- Re-initialize the weights: w_k^(i) = 1/N

SIR Particle Filtering Algorithm
- Form an estimate of the state as a weighted sum: x̂_k = Σ_i w̃_k^(i) · x_k^(i)
- Repeat for the next observation
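Putting these steps together, a minimal SIR sketch for a scalar random-walk state observed in Gaussian noise (the model, constants, and proposal choice are illustrative assumptions, not the place-cell model from the talk):

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100                                   # number of particles
particles = rng.normal(0.0, 1.0, N)       # initial samples
weights = np.full(N, 1.0 / N)

def sir_step(particles, weights, y, q=0.5, r=1.0):
    # 1. Sample from the proposal (here: the transition prior)
    particles = particles + rng.normal(0.0, q, N)
    # 2. Weight by the likelihood of the new observation y
    weights = weights * np.exp(-0.5 * (y - particles) ** 2 / r**2)
    # 3. Normalize
    weights /= weights.sum()
    # 4. Estimate, then resample and reset the weights
    x_hat = weights @ particles
    idx = rng.choice(N, size=N, p=weights)
    return particles[idx], np.full(N, 1.0 / N), x_hat
```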

SIR Particle Filtering
- Applied to reconstruct hand-movement trajectories (Eden, 2004)
- SIR particle filters suffer from degeneracy: particles with high weights are duplicated many times
- The particle set may collapse to a single point (loss of diversity)
- Computationally expensive

Bayesian Auxiliary Particle Filter (BAPF)
- Addresses two limitations of the SIR particle filter: poor outlier performance and degeneracy
- Introduced by Pitt & Shephard (1999), later extended by Liu & West (2002) to include a smoothing factor

BAPF
- Favor particles that are likely to survive at the next iteration of the algorithm
- Perform resampling at time t_{k-1} using the measurements available at time t_k
- Use a two-stage weighting process to compensate for the mismatch between the predicted point and the actual sample

BAPF Algorithm
- Sample each particle from a proposal density π that approximates the current posterior
- Assign first-stage weights g(t) based on how probable a sample drawn from the target posterior would be, evaluated at a predicted point μ_k^(i) (e.g., the mean of the transition density): g_k^(i) ∝ w_{k-1}^(i) · p(y_k | μ_k^(i))

BAPF Algorithm
- Normalize the importance weights
- Resample according to g(t)
- Sample each particle from a second proposal density q

BAPF Algorithm
- Assign the second-stage weights: w_k^(i) ∝ p(y_k | x_k^(i)) / p(y_k | μ_k^(j(i))), where j(i) is the ancestor chosen during resampling
- Compute an estimate as a weighted sum
- Repeat for the next observation
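A minimal sketch of the two-stage BAPF update under the same toy Gaussian model as above (the choice of predicted point μ and all constants are assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 100
particles = rng.normal(0.0, 1.0, N)
weights = np.full(N, 1.0 / N)

def likelihood(y, x, r=1.0):
    return np.exp(-0.5 * (y - x) ** 2 / r**2)

def bapf_step(particles, weights, y, q=0.5):
    # Predicted points (here: the transition mean, i.e., the particle itself)
    mu = particles
    # 1st-stage weights: favor particles likely to explain the new measurement
    g = weights * likelihood(y, mu)
    g /= g.sum()
    # Resample ancestors according to g (the "auxiliary" step)
    idx = rng.choice(N, size=N, p=g)
    # Propagate the survivors through the transition density
    new_particles = particles[idx] + rng.normal(0.0, q, N)
    # 2nd-stage weights correct for using mu instead of the actual sample
    w = likelihood(y, new_particles) / likelihood(y, mu[idx])
    w /= w.sum()
    x_hat = w @ new_particles
    return new_particles, w, x_hat
```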

Evaluation of the Bayesian Auxiliary Particle Filter

Gaussian Shaped Tuning Function

Simulation Results: Preliminary Data
- Observe an ensemble of hippocampal place cells whose firing times have an inhomogeneous Poisson arrival rate
- Estimate the animal's position on a one-dimensional 300 cm track, generated as a random walk
- Evaluated under noisy conditions
- Performance is compared to the Wiener filter and the sampling importance resampling particle filter
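A sketch of how such preliminary data can be generated, discretizing the inhomogeneous Poisson process into 1 ms bins (all constants here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(5)
dt = 0.001                                   # 1 ms bins
T = 10_000                                   # 10 s of data
centers = np.linspace(0, 300, 25)            # 25 place-cell centers (cm)

# Random-walk position, clipped to the 300 cm track
pos = np.clip(np.cumsum(rng.normal(0, 0.5, T)) + 150.0, 0, 300)

# Inhomogeneous Poisson spiking: P(spike in bin) ~ lambda(x) * dt
lam = 20.0 * np.exp(-(pos[:, None] - centers) ** 2 / (2 * 12.0**2))
spikes = rng.random((T, len(centers))) < lam * dt    # (time, neuron) binary
```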

Mean Square Error vs. Number of Neurons

Signal Estimation (100 particles, 100 neurons)

95% Confidence Intervals (100 particles, 50 neurons; 100 simulations of a single data set. Black: true position; red: BAPF interval; green: PF interval)

Mean Square Error vs. Missed Firings (100 particles, 50 neurons)

Mean Square Error vs. Rate of False Detections (100 particles, 50 neurons)

Mean Square Error vs. Spike Sorting Error (100 particles, 50 neurons)

Algorithm Implementation in Hardware

Algorithm Implementation
- The target hardware is a field-programmable gate array (FPGA)
- Dedicated hardware avoids the fetching and decoding of instructions
- FPGAs are capable of executing multiple computations simultaneously

FPGA Resources
- Configurable logic blocks (CLBs): look-up tables (LUTs), multiplexers, flip-flops, logic gates (AND, OR, NOT)
- Programmable interconnects: a routing matrix controls signal routing
- Input/output cells: latch data at the I/O pins

FPGA Resources (cont.)
- Embedded fixed-point multipliers (DSP48E): 25-bit × 18-bit inputs
- On-chip memory: up to 32 MB
- Digital clock managers: multirate signal processing, phase-locked loops

ML506 (Virtex-5 SX50T) Resources Available
- Slices: 8160
- Embedded multipliers: 288
- RAM: n/a
- 3.8 Gb/s transceivers: 12
- I/O pins: 480
- Maximum clock rate: 550 MHz

Design Flow (four-step flow diagram)

Hardware Co-Simulation

Top-Level Block Diagram

Box-Muller Transformation
- Generates two independent standard normal sequences from two uniform sequences
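A minimal software sketch of the transform (the hardware version computes these functions with fixed-point arithmetic):

```python
import numpy as np

rng = np.random.default_rng(6)

def box_muller(u1, u2):
    """Map two independent uniform(0,1] samples to two independent
    standard normal samples."""
    r = np.sqrt(-2.0 * np.log(u1))
    return r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)

z0, z1 = box_muller(rng.random(1000) + 1e-12, rng.random(1000))
print(z0.mean(), z0.std())   # approximately 0 and 1
```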

Box-Muller Transformation (hardware implementation diagrams)

Linear Feedback Shift Register (LFSR)
- Shift register made of m flip-flops
- Mod-2 adders configured according to a generator polynomial
- The register contents represent a value between 0 and 1: u = Σ_{i=1}^{m} b_i · 2^{−i}
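A behavioral sketch of a 16-bit maximal-length Fibonacci LFSR (taps 16, 14, 13, 11; the actual generator polynomial used in the design is not given, so this choice is an assumption):

```python
def lfsr_step(state):
    """One step of a 16-bit maximal-length Fibonacci LFSR
    (taps 16, 14, 13, 11). Returns (new_state, uniform sample)."""
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    state = (state >> 1) | (bit << 15)
    return state, state / 65536.0        # register read as a fraction in (0,1)

state = 0xACE1                           # any nonzero seed
for _ in range(5):
    state, u = lfsr_step(state)
    print(f"{u:.6f}")
```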

LFSR (cont.)
- The LFSR output has correlation: bits are only shifted one position per clock
- This has a lowpass effect on the output sequence

Linear Feedback Shift Register with Skip-ahead Logic
- Advances the LFSR multiple states per clock
- Bits are shifted multiple positions
- Removes correlation in the uniform distribution
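In software, the skip-ahead behavior can be sketched by stepping the LFSR k times per output sample; in hardware the k steps collapse into a single combinational update:

```python
def lfsr_skip(state, k=16):
    """Advance the LFSR k states per output sample, so successive
    samples share no adjacent-shift correlation."""
    for _ in range(k):
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
    return state, state / 65536.0
```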

Box-Muller Transformation

Top-Level Block Diagram

Particle Block Diagram

Steps 1 and 2 of the BAPF Algorithm

Particle Block Diagram

Compute the 1st Stage Weights (datapath diagrams; fixed-point values split into integer and fraction parts, with a separate case for x < 0)

Resample the 1st Stage Weights

Particle Block Diagram

Estimated Output Signal as a Weighted Sum

Synthesis Results

Module                   Slices  DSP48Es  Clock cycles          Latency
Random Number Generator  3506    n/a      1 (after pipelining)  3.7 ns
Exponential              55      1        5                     1.4 ns
Exponential Quantity     12      2        3                     3.0 ns
Raise to Integer Power   51      n/a      4 per sample          1.6 ns

Proposed Future Work

Parallel Resampling
- Particles with high weights are retained; particles with low weights are discarded
- All particles can be resampled in two clock cycles (see the sketch below)
- On the first cycle, all particles are copied to temporary registers
- On the second cycle, all particles are compared and assigned new values
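A loose software emulation of the proposed two-cycle scheme; the selection rule here (replacing each below-threshold particle with the current highest-weight particle) is an illustrative assumption, since the actual comparator network is part of the proposed work:

```python
import numpy as np

def two_phase_resample(particles, weights, threshold):
    """Emulate the two-cycle hardware scheme: phase 1 snapshots all
    registers; phase 2 overwrites low-weight particles in parallel."""
    temp = particles.copy()               # cycle 1: copy to temp registers
    best = temp[np.argmax(weights)]       # highest-weight particle
    keep = weights >= threshold           # cycle 2: parallel compare
    return np.where(keep, temp, best)     # low-weight slots take a new value
```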

Automated Controller
- Designed as a finite state machine (FSM)
- Sampling period, block size, number of neurons and number of particles determine the control signals
- Signals include enable lines for data registers, multipliers and counters; select lines for multiplexers; and reset signals
- The FSM is built from counters, comparators and multiplexers

Verification
- Filter output will be compared to the MATLAB simulations
- Quantization error is expected; determine the number of bits needed for acceptable precision of the estimated signal
- Further evaluation of the filter with an increased number of particles and neurons

Throughput Comparison
- The parallel processing architecture will be compared to a sequential implementation
- The current benchmark is MATLAB running on the Java Virtual Machine (not a true comparison)
- Comparisons will be made for throughput as a function of particles as well as neurons

Timeline (May through December)
Synthesize Modules → Simulate Controller → Synthesize Controller → Evaluation of the number of particles/neurons → Verification → Throughput Comparison

Acknowledgements
Thank you, advisors and committee members:
- Dr. Iyad Obeid
- Dr. Dennis Silage
- Dr. Joseph Picone
- Dr. Marc Sobel

Questions?