Presentation transcript:

Unified Framework for Automatic Segmentation, Probabilistic Atlas Construction, Registration and Clustering of Brain MR Images. Annemie Ribbens, Jeroen Hermans, Frederik Maes, Dirk Vandermeulen, Paul Suetens

Introduction: Computer-aided diagnosis

Introduction: Segmentation

Introduction: Atlas & atlas-to-image registration (deformation Φ)

Introduction: Population-specific atlases

Introduction: Atlas construction (diagram). The images I are registered to the atlas from the previous iteration; the deformed images are then averaged to form the new atlas (see the sketch below).
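The loop in this diagram can be written down compactly. A minimal sketch, assuming hypothetical register and warp callables standing in for whatever nonrigid registration routine and resampling step are actually used:

```python
import numpy as np

def build_atlas(images, atlas, register, warp, n_iter=5):
    """Iterative atlas construction: register, deform, average, repeat.

    `register` and `warp` are caller-supplied placeholders for a nonrigid
    registration method and its resampling step (not the paper's code).
    """
    for _ in range(n_iter):
        deformed = []
        for img in images:
            phi = register(moving=img, fixed=atlas)  # deformation towards the current atlas
            deformed.append(warp(img, phi))          # bring the image into atlas space
        atlas = np.mean(deformed, axis=0)            # average the deformed images -> new atlas
    return atlas
```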

Introduction: Computer-aided diagnosis (diagram linking registration, segmentation, and probabilistic atlases).

Framework
Aspects:
- Segmentation
- Clustering (i.e. computer-aided diagnosis), with localization of cluster-specific morphological differences
- Groupwise registration (nonrigid probabilistic atlases per cluster)
- Atlas-to-image registration
Advantages:
- Less prior information necessary
- Cooperation between the components
- Statistical framework → convergence

Framework overview (diagram): segmentation, atlas-to-image registration, and atlas formation & clustering.

Framework: model. K = tissue classes → number of Gaussians; Y = intensities of image i.
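Written out in the usual mixture form (the notation below is a generic reconstruction, not copied from the slides), the intensity y_ij of voxel j in image i is modelled as a K-component Gaussian mixture whose mixing weights come from the deformed atlas:

```latex
p(y_{ij} \mid \theta) \;=\; \sum_{k=1}^{K} \pi_{ijk}\,
  \mathcal{N}\!\left(y_{ij} \mid \mu_{k}, \sigma_{k}^{2}\right)
```

with \pi_{ijk} the spatially varying tissue prior and (\mu_k, \sigma_k^2) the Gaussian parameters of tissue class k.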

Framework: model. Atlas t and image i (illustrated with gray matter maps).

Framework: model. Uniform prior for all voxels in an image.

Framework: model (diagram of the deformations, labelled G1 and G2).

Framework: MAP estimation. Jensen's inequality → expectation-maximization framework.
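For reference, the standard route from the MAP objective to EM, in generic textbook form rather than the paper's exact derivation: for any distribution q over the hidden tissue/cluster labels Z, Jensen's inequality bounds the objective from below,

```latex
\log p(Y \mid \theta) + \log p(\theta)
  \;=\; \log \Big( \sum_{Z} p(Y, Z \mid \theta) \Big) + \log p(\theta)
  \;\ge\; \sum_{Z} q(Z)\, \log \frac{p(Y, Z \mid \theta)}{q(Z)} + \log p(\theta)
```

with equality when q(Z) = p(Z | Y, θ). The E-step sets q to this posterior at the current parameter estimate; the M-step maximizes the resulting bound (the Q-function plus the log-prior) over θ.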

Framework: EM algorithm, E-step. Indices: i = images, j = voxels, k = tissue classes, t = clusters.
- Per cluster: the atlas is deformed towards the image
- Gaussian prior on the deformations of each cluster
- Uniform prior on the cluster memberships
- Gaussian mixture model for the intensities
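To make the E-step concrete, here is a minimal single-image numpy sketch; the array names (y, atlas_priors, cluster_prior, mu, sigma) are illustrative assumptions, not the paper's symbols, and spatial regularization of the deformations is left out:

```python
import numpy as np

def e_step(y, atlas_priors, cluster_prior, mu, sigma):
    """Joint posterior over tissue class k and cluster t for every voxel j.

    y             : (J,)       voxel intensities of one image
    atlas_priors  : (T, J, K)  deformed atlas prior per cluster, voxel, tissue class
    cluster_prior : (T,)       prior cluster memberships (uniform in the model)
    mu, sigma     : (K,)       Gaussian mean / std per tissue class
    Returns       : (J, K, T)  normalised responsibilities.
    """
    # Gaussian likelihood of each intensity under each tissue class: shape (J, K)
    lik = np.exp(-0.5 * ((y[:, None] - mu[None, :]) / sigma[None, :]) ** 2) \
          / (np.sqrt(2.0 * np.pi) * sigma[None, :])
    # Unnormalised joint: cluster prior * deformed atlas * intensity likelihood
    joint = cluster_prior[None, None, :] * atlas_priors.transpose(1, 2, 0) * lik[:, :, None]
    # Normalise over tissue classes and clusters, per voxel
    return joint / joint.sum(axis=(1, 2), keepdims=True)
```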

Framework: EM posterior.
Posterior = (clustering) × (segmentation using the atlas of the same cluster).
Clustering = probability that voxel j of image i belongs to cluster t = sum of the posterior over all tissue classes = (prior on the cluster membership) × (the atlas is sharp and close to the intensity model) × (the subject-specific registration is close to the groupwise one).
Segmentation = probability that a voxel belongs to a certain tissue class = sum of the posterior over all clusters = weighted sum of the segmentations obtained with each cluster-specific atlas.
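Continuing the e_step sketch above, the quantities this slide describes are marginals of the joint responsibility array; aggregating the cluster evidence over voxels into a per-image membership is my assumption for illustration:

```python
r = e_step(y, atlas_priors, cluster_prior, mu, sigma)   # (J, K, T)

segmentation = r.sum(axis=2)            # (J, K): tissue posterior, summed over clusters
clustering   = r.sum(axis=1)            # (J, T): cluster evidence, summed over tissue classes
membership   = clustering.sum(axis=0)   # (T,):  aggregated over voxels...
membership  /= membership.sum()         # ...and normalised into a per-image cluster membership
```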

Framework: EM algorithm, M-step. Maximizing the Q-function → parameter updates. All updates have a closed-form solution except the registration; the solutions (e.g. for the atlas) are similar to those in the literature.

Framework: EM algorithm, M-step. Updated parameters:
- Gaussian mixture parameters
- Atlas
- Prior cluster memberships
- Groupwise registration
- Atlas-to-image registration: no closed-form solution; spatial regularization → viscous fluid model on the derivative → weighting terms per voxel
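As an illustration of the closed-form part, a numpy sketch of the Gaussian mixture update under the same single-image simplification as above; the atlas, membership, and registration updates are omitted (the registration has no closed form, as the slide notes):

```python
import numpy as np

def m_step_gaussians(y, r):
    """Weighted maximum-likelihood update of the per-tissue-class Gaussians.

    y : (J,) intensities; r : (J, K, T) E-step responsibilities.
    Each voxel contributes to class k with weight r summed over clusters.
    """
    w = r.sum(axis=2)                                         # (J, K)
    mu = (w * y[:, None]).sum(axis=0) / w.sum(axis=0)         # weighted means
    var = (w * (y[:, None] - mu[None, :]) ** 2).sum(axis=0) / w.sum(axis=0)
    return mu, np.sqrt(var)                                   # means and standard deviations
```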

(Figure: per-voxel weighting maps, w1 and w8.)

Experiments. BrainWeb data: 20 simulated normal images; one cluster → Segmentation & atlas: Dice =
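The Dice overlap used for this evaluation is the standard one; for two binary masks it reduces to a couple of lines:

```python
import numpy as np

def dice(a, b):
    """Dice coefficient between two binary segmentations of the same shape."""
    a, b = np.asarray(a, dtype=bool), np.asarray(b, dtype=bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())
```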

Experiments. 8 brain MR images of healthy persons (normals) and 8 brain MR images of Huntington disease patients (HD) → cluster memberships: all correctly classified.

Conclusion. A statistical framework combining:
- Segmentation
- Clustering
- Atlas construction per cluster (weighted)
- Registration
→ Convergence, cooperation, and less prior information needed.
Validation → promising: cluster-specific morphological differences are found. Easily extendable to incorporate clinical/spatial prior knowledge.