
Reach and Grasp by People with Tetraplegia Using a Neurally Controlled Robotic Arm

Authors: Leigh R. Hochberg, Daniel Bacher, Beata Jarosiewicz, Nicolas Y. Masse, John D. Simeral, Joern Vogel, Sami Haddadin, Jie Liu, Sydney S. Cash, Patrick van der Smagt, John P. Donoghue

Published in Nature, May 2012

Background

Previous studies, including:

- Hochberg, L. R. et al. Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171 (2006)
- Simeral, J. D., Kim, S. P., Black, M. J., Donoghue, J. P. & Hochberg, L. R. Neural control of cursor trajectory and click by a human with tetraplegia 1000 days after implant of an intracortical microelectrode array. J. Neural Eng. 8, (2011)
- Kim, S. P. et al. Point-and-click cursor control with an intracortical neural interface system by humans with tetraplegia. IEEE Trans. Neural Syst. Rehabil. Eng. 19, 193–203 (2011)

have shown that people with long-standing tetraplegia can use neural interface systems to move and click a computer cursor and to control physical devices (e.g., opening and closing a prosthetic hand).

Velliste, M., Perel, S., Spalding, M. C., Whitford, A. S. & Schwartz, A. B. Cortical control of a prosthetic arm for self-feeding. Nature 453, 1098–1101 (2008) further showed that monkeys can use a neural interface system to control a robotic arm. However, it was previously unknown whether people with severe upper-extremity paralysis could use cortical neuronal ensemble signals to direct useful arm actions. The authors set out to show that people with long-standing tetraplegia can learn to use neural interface system-based control of a robotic arm to perform useful three-dimensional reach and grasp movements.

Participants

The study participants, referred to as S3 and T2 (a 58-year-old woman and a 66-year-old man, respectively), were each tetraplegic and anarthric as a result of a brainstem stroke. Both were enrolled in the BrainGate2 pilot clinical trial.

Neural signals were recorded using a 4 mm × 4 mm, 96-channel microelectrode array implanted in the hand area of the dominant primary motor cortex (MI) (for S3 in November 2005, 5 years before the beginning of this study; for T2 in June 2011, 5 months before this study).

Participants performed sessions on a near-weekly basis, using decoded MI ensemble spiking signals to point and click a computer cursor, in order to gain familiarity with the system.

The DLR Light-Weight Robot III (German Aerospace Center) is designed to be an assistive device that can reproduce complex arm and hand actions. The DEKA Arm System (DEKA Research and Development) is a prototype advanced upper-limb replacement for people with arm amputation.

Across four sessions, S3 used her neural signals to perform reach and grasp movements with each of these two differently purposed right-handed robot arms. In another session, S3 performed the coffee cup experiment. T2 controlled the DEKA prosthetic limb on one session day.

Methods

Before each session, a technician connected the 96-channel recording cable to the percutaneous pedestal and then viewed neural signal waveforms using commercial software. The waveforms were used to identify channels that were not recording signals or were contaminated with noise. For S3, those channels were manually excluded and remained off for the remainder of the recording session.

For T2, noise in the raw signal was reduced using common-average referencing: of the 50 channels with the lowest impedance, the 20 with the lowest firing rates were selected, and the mean signal from these 20 channels was subtracted from all 96 channels (see the sketch below).

Targets were one of seven 6 cm foam balls, each attached to a spring-loaded dowel rod. Because of target placement error, scoring was done by visual inspection of the videos by three investigators and then again by an independent fourth investigator.

Both robots were operated under continuous user-driven neuronal ensemble control of arm endpoint (hand) velocity in three-dimensional space, while a simultaneously decoded neural state executed a hand action.
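A minimal sketch of the common-average referencing step described above, assuming NumPy; the selection of the 20 low-impedance, low-firing-rate reference channels is taken as given and passed in as an index list:

```python
import numpy as np

def common_average_reference(raw, reference_channels):
    """Subtract the mean of a set of quiet reference channels from all channels.

    raw                : (n_samples, 96) array of raw voltages
    reference_channels : indices of the 20 low-impedance, low-firing-rate
                         channels chosen as the reference set
    """
    # Mean signal across the reference channels, one value per sample
    reference = raw[:, reference_channels].mean(axis=1, keepdims=True)
    # Subtracting this common reference from every channel suppresses
    # noise that is shared across the array
    return raw - reference
```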

The raw neural signals for each channel were sampled at 30 kHz and fed through custom Simulink software in 100 ms bins (S3) or 20 ms bins (T2, whose signals were noisier). To find the threshold crossing rates, signals in each bin were filtered with a fourth-order Butterworth filter with corners at 250 and 5,000 Hz, temporally reversed, and filtered again; neural signals were buffered for 4 ms before filtering to avoid edge effects.

Threshold crossings were counted by dividing the signals into 2.5 ms (S3) or 0.33 ms (T2) sub-bins. In each sub-bin, the minimum value was computed and compared with a threshold: for S3, the threshold was set at −4.5 times the filtered signal's root mean square value in the previous block; for T2, it was set at −5.5 times the root mean square of the distribution of minimum values collected from each sub-bin. The number of minima that exceeded the channel's threshold was then counted in each bin, and these threshold crossing rates were used as the neural features for real-time decoding and filter calibration.
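A minimal sketch of this feature-extraction step for one channel, assuming NumPy and SciPy; threshold_crossing_rate is a hypothetical helper (the original ran as custom Simulink software, and the 4 ms buffering against edge effects is omitted here):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

FS = 30_000  # sampling rate, Hz
# Fourth-order Butterworth band-pass with corners at 250 and 5,000 Hz
SOS = butter(4, [250, 5000], btype="bandpass", fs=FS, output="sos")

def threshold_crossing_rate(raw_bin, threshold, sub_bin_ms=2.5):
    """Count threshold crossings in one bin of raw data from a single channel.

    raw_bin    : 1-D array holding one 100 ms (S3) or 20 ms (T2) bin at 30 kHz
    threshold  : negative voltage threshold, e.g. -4.5 x the RMS of the
                 filtered signal from the previous block (S3)
    sub_bin_ms : 2.5 for S3, 0.33 for T2
    """
    # sosfiltfilt filters forward and backward, matching the
    # "filter, time-reverse, filter again" zero-phase scheme in the text
    filtered = sosfiltfilt(SOS, raw_bin)

    # Split the bin into sub-bins and take the minimum sample in each
    samples_per_sub = int(round(sub_bin_ms * 1e-3 * FS))
    n_sub = len(filtered) // samples_per_sub
    minima = filtered[: n_sub * samples_per_sub].reshape(
        n_sub, samples_per_sub
    ).min(axis=1)

    # A sub-bin counts as a crossing if its minimum is more negative
    # than the (negative) threshold
    return int(np.sum(minima < threshold))
```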

Filter calibration was performed at the beginning of each session using data acquired over several blocks of trials (each block comprising a set of trials lasting 3–6 min). The process began with one open-loop filter initialization block, in which the participants were instructed to imagine that they were controlling the movements of the robot arm as it performed pre-programmed movements along the cardinal axes, following a centre–out–back trial sequence. To calibrate the Kalman filter, a tuning function was estimated for each unit by regressing its threshold crossing rates against instantaneous target directions. Open-loop filter initialization was followed by several blocks of closed-loop filter calibration, in which the participant actively controlled the robot to acquire targets in a similar home–out–back pattern.

During each closed-loop filter calibration block, the participant's intended movement direction at each moment was inferred to point from the current endpoint of the robot hand towards the centre of the target. Time bins from 0.2 to 3.2 s after trial start were used to calculate tuning functions and baseline rates, by regressing the threshold crossing rates from each bin against the corresponding unit vector pointing in the intended movement direction. In each closed-loop filter calibration block, the error in the participant's decoded trajectories was attenuated by scaling down decoded movement commands orthogonal to the instantaneous target direction by a fixed percentage (a sketch of this step appears below). The amount of error attenuation was then decreased across filter calibration blocks until it reached zero, giving the participant full three-dimensional control of the robot.
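A minimal sketch of the error-attenuation step, assuming NumPy; attenuate_error is a hypothetical helper that decomposes the decoded velocity into components parallel and orthogonal to the target direction:

```python
import numpy as np

def attenuate_error(v_decoded, target_dir, attenuation):
    """Scale down the component of the decoded velocity orthogonal to the
    instantaneous target direction.

    v_decoded   : decoded 3-D velocity command
    target_dir  : vector from the current hand endpoint to the target centre
    attenuation : fraction of the orthogonal component to remove
                  (decreased to 0 across calibration blocks)
    """
    d = target_dir / np.linalg.norm(target_dir)
    parallel = np.dot(v_decoded, d) * d   # component toward the target
    orthogonal = v_decoded - parallel     # off-target error component
    return parallel + (1.0 - attenuation) * orthogonal
```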

The state decoder used to control the grasping action of the robot hand was calibrated during the same open- and closed-loop blocks. During open-loop blocks, after each trial ending at the home target, the robot hand would close for 2 s; during this time, the participant was instructed to imagine that they were closing their own hand. State decoder calibration proceeded very similarly during closed-loop calibration blocks.

Endpoint velocity and grasp state were decoded from the deviation of each unit's neural activity from its baseline rate, so errors in estimating the baseline rate itself could bias the decoded velocity or grasp state. To reduce such biases, including potential drifts in baseline rates over time, the baseline rates were re-estimated after every block using the previous block's data. During filter calibration, in which the participant was instructed to move the endpoint of the hand directly towards the target, the baseline rate of a channel was determined by modelling neural activity as a linear function of the intended movement direction plus the baseline rate, fitting the equation

z = baseline + Hd,

where z is the threshold crossing rate, H is the channel's preferred direction and d is the intended movement direction.
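A minimal sketch of this per-unit regression, assuming NumPy; fit_tuning is a hypothetical helper that recovers the baseline rate as the intercept of an ordinary least-squares fit:

```python
import numpy as np

def fit_tuning(rates, directions):
    """Fit z = baseline + H d for one unit by ordinary least squares.

    rates      : (n_bins,) threshold crossing rates for the unit
    directions : (n_bins, 3) intended movement unit vectors per bin
    Returns (baseline, H), where H is the unit's 3-D preferred direction.
    """
    # Augment the directions with a constant column so the intercept of
    # the regression is the baseline rate
    X = np.hstack([np.ones((len(rates), 1)), directions])
    coef, *_ = np.linalg.lstsq(X, rates, rcond=None)
    return coef[0], coef[1:]
```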

The Kalman filter requires four sets of parameters, two of which were calculated from the mean-subtracted threshold crossing rates (z) and the intended directions (d); the other two were hard-coded. The first fitted parameter was the directional tuning H, obtained by least-squares regression of the observation model z(t) = Hd(t): with Z and D denoting the matrices whose columns are the binned rates and intended directions, H = ZDᵀ(DDᵀ)⁻¹. The second fitted parameter, Q, was the error covariance matrix in linearly reconstructing the neural activity, Q = (1/T)(Z − HD)(Z − HD)ᵀ. The two hard-coded parameters were the state transition matrix A, which predicts the intended direction from the previous estimate, d(t) = Ad(t − 1), and the covariance W of the error in this model.
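A minimal sketch of one decoding step under these definitions, assuming NumPy; kalman_step and its argument names are illustrative, with d the intended-direction state, z the mean-subtracted rates for one bin, and (A, W, H, Q) the four parameter sets described above:

```python
import numpy as np

def kalman_step(d_prev, P_prev, z, A, W, H, Q):
    """One Kalman-filter update of the intended-direction estimate.

    d_prev, P_prev : previous state estimate and its covariance
    z              : mean-subtracted threshold crossing rates for this bin
    A, W           : hard-coded state model d(t) = A d(t-1) and its noise covariance
    H, Q           : tuning matrix and observation-noise covariance fitted
                     during calibration
    """
    # Predict the intended direction from the state model
    d_pred = A @ d_prev
    P_pred = A @ P_prev @ A.T + W
    # Correct the prediction using the observed neural activity
    S = H @ P_pred @ H.T + Q
    K = P_pred @ H.T @ np.linalg.inv(S)
    d_new = d_pred + K @ (z - H @ d_pred)
    P_new = (np.eye(len(d_prev)) - K @ H) @ P_pred
    return d_new, P_new
```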

Videos
xtref/nature11076-s2.mov
xtref/nature11076-s5.mov

Results

Results / Conclusions

This study is the first to demonstrate that people can use neural signals, via a brain–computer interface, to operate a robotic arm and perform complex manual skills. While not perfect, this suggests that the experimental system could be used to restore lost function in people with arm paralysis, including the ability to perform everyday activities again. Furthermore, the robotic control was better than that previously observed in able-bodied non-human primates. Patient S3 was able to operate the system nearly 15 years after her stroke and paralysis, using a small array of electrodes, suggesting that this technology may be applicable to a large proportion of the patient population.

Discussion

- With more experience, will patients learn to control the robotic arm better?
- Can patients perform movements not possible with a normal arm?
- How would a dual robotic arm system work? Would it be able to completely separate left- and right-arm movements?
- Is it possible to route the outputs of the neural control to electrically stimulate muscles for movement, instead of a robotic arm?
- Will the technology ever become so refined that patients are able to write with pen on paper?

Future Directions / Applications

- Whole-body neurally controlled exoskeletons?
- Wireless operation?
- Military use
- Hazardous materials handling
- Wheelchair-mounted systems
- Use at home