More MR Fingerprinting

Key Concepts

- Traditional parameter mapping has revolved around fitting signal equations with tractable analytical forms to the data. mcDESPOT is perhaps among the more complicated examples, but it still uses a multi-component matrix exponential model of the SPGR and SSFP signals.
- MRF instead generates a unique signal time course for each set of T1/T2/M0/B0 parameters: the free variables of a bSSFP sequence (TR and flip angle) are varied, with inversions every 200 TRs.
- The resulting signal can be found numerically via Bloch simulation.
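A minimal sketch of how such a fingerprint could be generated by Bloch simulation. This is a toy single-isochromat, on-resonance model with instantaneous RF about x and alternating pulse phase; off-resonance precession, the inversion pulses, and slice-profile effects are omitted, and all numeric values below are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def bloch_fingerprint(T1, T2, flips, TRs, TE_frac=0.5):
    """Simulate one isochromat through a bSSFP-style sequence with
    time-varying flip angles (rad) and TRs (s). Returns the complex
    transverse signal sampled at TE = TE_frac * TR each repetition."""
    M = np.array([0.0, 0.0, 1.0])              # thermal equilibrium
    signal = np.zeros(len(flips), dtype=complex)
    for n, (alpha, TR) in enumerate(zip(flips, TRs)):
        a = alpha if n % 2 == 0 else -alpha    # alternating RF phase (bSSFP)
        ca, sa = np.cos(a), np.sin(a)
        Rx = np.array([[1, 0, 0], [0, ca, sa], [0, -sa, ca]])
        M = Rx @ M                             # instantaneous excitation
        # sample the transverse magnetization, decayed to the echo time
        signal[n] = (M[0] + 1j * M[1]) * np.exp(-TE_frac * TR / T2)
        # free relaxation over the full TR
        E1, E2 = np.exp(-TR / T1), np.exp(-TR / T2)
        M = np.array([M[0] * E2, M[1] * E2, 1 + (M[2] - 1) * E1])
    return signal

rng = np.random.default_rng(0)
flips = np.deg2rad(rng.uniform(5.0, 60.0, 500))   # illustrative schedule
TRs = rng.uniform(0.010, 0.014, 500)              # 10-14 ms, in seconds
wm = bloch_fingerprint(1.0, 0.10, flips, TRs)     # white-matter-like T1/T2
gm = bloch_fingerprint(1.5, 0.25, flips, TRs)     # gray-matter-like T1/T2
```

Two tissues driven through the same random schedule trace out distinct time courses, which is the property the dictionary matching relies on.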

Key Concepts

- Use a dictionary and a good lookup scheme to fit the acquired data.
- The cost of using complex signal models is that fitting becomes very expensive: in mcDESPOT, the matrix exponential is evaluated thousands of times for every voxel, and in MRF the Bloch simulation would similarly have to be run many times to find a good fit.
- Instead, if the parameter space is explored beforehand, we can store a dictionary of signal evolutions. This front-loads all of the computation time.
- The catch: any change in the model or pulse sequence requires recomputing the entire dictionary.
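A sketch of the precompute-then-look-up idea. The grid spacing, the unit-norm inner-product matching rule, and the toy closed-form "sequence" used in the demo are my own illustrative assumptions, not the paper's implementation; `simulate(T1, T2)` stands in for any signal model, e.g. a Bloch simulation.

```python
import itertools
import numpy as np

def build_dictionary(T1s, T2s, simulate):
    """Precompute signal evolutions over a grid of (T1, T2) pairs.
    `simulate(T1, T2)` returns the signal time course for those values."""
    params, atoms = [], []
    for T1, T2 in itertools.product(T1s, T2s):
        if T2 >= T1:                     # skip physically implausible combos
            continue
        params.append((T1, T2))
        atoms.append(simulate(T1, T2))
    D = np.asarray(atoms, dtype=complex)
    D /= np.linalg.norm(D, axis=1, keepdims=True)   # unit-norm entries
    return D, params

def lookup(signal, D, params):
    """Dictionary matching: the entry with the largest |inner product|
    against the measured time course wins; M0 falls out as the scale."""
    corr = np.abs(D.conj() @ signal)
    best = int(np.argmax(corr))
    return params[best], corr[best]

# Toy demo with a simple closed-form signal model (illustrative only)
t = np.arange(64) * 0.01
toy = lambda T1, T2: (1 - np.exp(-t / T1)) * np.exp(-t / T2)
D, params = build_dictionary([0.5, 1.0, 1.5], [0.05, 0.10, 0.20], toy)
est, _ = lookup(3.7 * toy(1.0, 0.10), D, params)    # arbitrary M0 scale
```

Note the front-loading: `build_dictionary` does all the expensive simulation once, and per-voxel `lookup` is a single matrix-vector product.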

Interpretation

- To me, MRF is the generalization of parameter mapping from analytical equations to numerical simulations.
- This poses two new problems:
  - Excitation problem: what is the best choice of sequence parameters to optimize parameter estimation? "Optimize" is obviously a loaded word here, but we want a sequence that is both fast and robust to system imperfections.
  - Reconstruction problem: how do we efficiently estimate the parameters from acquired data?

Excitation Problem

- This is not well addressed in the MRF abstract; the authors default to a random choice of sequence variables.
- That choice is most likely sub-optimal and results in a long acquisition: 500 frames, about 10 min per slice.
- Success depends on whether the TR and flip angle choices of the bSSFP sequence produce enough incoherence between different tissues (i.e. different T1/T2/M0/B0 sets).
- It may be necessary to generalize even further, with complete freedom in RF excitation and gradients, to achieve reasonable times with a random approach, especially if the model is expanded to include diffusion or multi-component behavior.
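For concreteness, one hypothetical way to draw such a random schedule. The envelope shape, jitter ranges, and nominal TR below are my own illustrative choices; only the 500 frames and the inversion every 200 TRs come from the description above.

```python
import numpy as np

def random_schedule(n_frames=500, inv_every=200, seed=0):
    """Draw a pseudo-random bSSFP schedule: flip angles from a smooth
    envelope plus uniform jitter, TRs jittered around a nominal value,
    and an inversion marker every `inv_every` TRs."""
    rng = np.random.default_rng(seed)
    n = np.arange(n_frames)
    # smoothly varying flip-angle envelope (degrees) with random jitter
    flips_deg = 10 + 40 * np.abs(np.sin(2 * np.pi * n / 250)) \
                + rng.uniform(0, 5, n_frames)
    TRs = 0.012 + rng.uniform(-0.002, 0.002, n_frames)  # seconds
    invert = (n % inv_every) == 0        # inversion pulse every 200 TRs
    return np.deg2rad(flips_deg), TRs, invert

flips, TRs, invert = random_schedule()
```

Any such schedule can be scored by simulating the resulting signal evolutions for a few representative tissues and checking how decorrelated they are.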

Excitation Problem

- Sequence design could be seen as an optimization problem of the form:

  minimize over ρ[t], g_x[t], g_y[t], g_z[t]:  ‖Σ‖_F + λ·T

  where T is the total scan time and Σ is the correlation matrix of the signal evolutions for various tissues, e.g. WM, GM, CSF, fat, lesion.
- The Frobenius norm ‖·‖_F is the root sum of squares of all the elements of a matrix.
- In other words: find the sequence that best reduces both total scan time and the correlation between the signal evolutions of different tissues.
- This is the general form; it could of course be constrained to choose variables only within a bSSFP framework.
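A sketch of evaluating this objective for a candidate sequence, given simulated evolutions for a handful of representative tissues. Zeroing the diagonal of Σ, so that only cross-tissue correlation is penalized, is my assumption about what a sensible implementation would do.

```python
import numpy as np

def design_cost(signals, TRs, lam=1.0):
    """Objective ||Sigma||_F + lam * T for a candidate sequence.
    signals: (n_tissues, n_frames) simulated evolutions, one row per
    tissue (e.g. WM, GM, CSF, fat, lesion); TRs: per-frame TRs in seconds.
    Sigma is the absolute normalized correlation matrix of the rows."""
    S = signals / np.linalg.norm(signals, axis=1, keepdims=True)
    Sigma = np.abs(S @ S.conj().T)
    np.fill_diagonal(Sigma, 0.0)         # every row correlates 1 with itself
    return np.linalg.norm(Sigma, 'fro') + lam * float(np.sum(TRs))
```

Perfectly decorrelated evolutions leave only the λ·T scan-time term, so the optimizer trades incoherence against speed exactly as the formulation intends.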

Reconstruction Problem

- Orthogonal Matching Pursuit (OMP) is their dictionary lookup method of choice.
- It was previously used by Doneva et al. for T1/T2 mapping from T1 Look-Locker and T2 spin echo data.
- OMP solves the following problem:

  minimize ‖x − Ds‖₂  subject to  ‖s‖₀ ≤ K

  where D is the dictionary, s is a set of weights for the over-complete basis functions in D, and K is the sparsity.

Orthogonal Matching Pursuit

- This is essentially a CS reconstruction: the signal evolution is sparse in the dictionary space (ideally it is exactly one entry).
- The goal is to find the K-sparse representation that best matches the data.
- It may be strange to think about, but T1/T2/M0/B0 maps are a way of compressing the acquired data set using knowledge of the signal behavior.
- Matching Pursuit works by successively adding the dictionary entry most correlated with the residual at each iteration:
  - Given a fixed dictionary, first find the one entry with the largest inner product with the signal.
  - Then subtract the contribution due to that entry, and repeat the process until the signal is satisfactorily decomposed.
- OMP is a refinement of this process that gives it additional useful properties.
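The greedy loop above can be sketched directly. This is a generic textbook OMP, not the paper's implementation; the "orthogonal" refinement is that the coefficients of all selected atoms are re-fit by least squares at every step, rather than only subtracting each atom's projection once.

```python
import numpy as np

def omp(x, D, K):
    """Orthogonal Matching Pursuit.
    D: (n_atoms, n_samples) dictionary with unit-norm rows; x: (n_samples,).
    Returns the K-sparse weight vector s and the selected atom indices."""
    residual = x.astype(complex)
    support = []
    s = np.zeros(D.shape[0], dtype=complex)
    for _ in range(K):
        # greedy step: atom with the largest inner product with the residual
        corr = np.abs(D.conj() @ residual)
        corr[support] = 0                 # never re-select an atom
        support.append(int(np.argmax(corr)))
        # orthogonal step: least-squares re-fit over all selected atoms
        A = D[support].T                  # (n_samples, |support|)
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)
        residual = x - A @ coef
    s[support] = coef
    return s, support
```

For an MRF-style lookup K would typically be 1 (one dictionary entry per voxel), in which case OMP reduces to the inner-product matching described earlier.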

Orthogonal Matching Pursuit

Properties:
- For random linear measurements, it requires O(K ln N) samples (it is not clear how this applies to MRF).
- For any finite dictionary of size N, it converges to the projection onto the span of D within N iterations.
- After any n iterations, it gives the optimal approximation over that subset of the dictionary.
- Fast convergence: within K iterations.
- It is applicable to any dictionary scheme; mcDESPOT could benefit from this.

Spatial Acceleration

- Random spatial encoding, as in CS, can also be utilized in this framework.
- Doneva et al. achieve such acceleration by taking advantage of the robustness of OMP: sampling incoherently in the spatial domain produces noise-like interference in the images, and OMP can still fit well through this noise.
- Of course, the sampling pattern must change between frames.
- This can be thought of as the inherent denoising in CS reconstruction.
- Results could be improved by including a spatial sparsity constraint in a transform domain. Is the wavelet domain sparse for these sorts of images? Probably.
- Parallel imaging also provides additional information.
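A toy 1-D illustration of why incoherence matters (purely illustrative; real acquisitions undersample 2-D/3-D k-space, usually with variable density): a random mask spreads the aliasing into diffuse, noise-like interference, while a uniform mask folds it coherently.

```python
import numpy as np

def undersample_1d(x, accel=4, random_mask=True, seed=0):
    """Zero-fill all but 1/accel of the 1-D 'k-space' of x and return the
    aliased zero-filled reconstruction (scaled to preserve signal level)."""
    k = np.fft.fft(x)
    n = len(x)
    mask = np.zeros(n, dtype=bool)
    if random_mask:
        rng = np.random.default_rng(seed)
        mask[rng.choice(n, n // accel, replace=False)] = True
    else:
        mask[::accel] = True             # uniform: coherent fold-over aliasing
    return np.fft.ifft(np.where(mask, k, 0)) * accel

x = np.zeros(64)
x[10:20] = 1.0                           # simple boxcar "image"
full = undersample_1d(x, accel=1, random_mask=False)   # fully sampled
sub = undersample_1d(x, accel=4, random_mask=True)     # 4x random
```

At `accel=1` the signal is recovered exactly; at `accel=4` with a random mask the result looks like the original plus noise, which is exactly the interference that OMP is claimed to fit through.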

Temporal Acceleration

- Typical temporal acceleration is achieved by exploiting smoothness in the temporal direction; this is the case for both dynamic imaging and parameter mapping.
- This seems tricky with a random pulse sequence: if the signal evolution appears random, it is inherently very hard to compress.
- This may be an advantage of a better solution to the excitation problem.
- Alternatively, perhaps a random time course can achieve good results in a shorter time than a solution that enforces smoothness, so the net speed ends up being similar (this seems likely).