Matching a 3D Active Shape Model to sparse cardiac image data: a comparison of two methods
Marleen Engels
Supervised by: dr. ir. H.C. van Assen
Committee: prof. dr. ir. B.M. ter Haar Romeny, dr. A. Vilanova Bartroli, dr. ir. H.C. van Assen, dr. ir. H.M.M. ten Eikelder
June 2007

Outline
 Introduction
 Active Shape Model
 Optimization methods
 Method of Least Squares
 Cross Out method
 Experiments with phantoms
 Experiments with real data
 Results
 Conclusions and discussion
 Future work

Introduction: Anatomy of the heart
The heart supplies the entire body with blood.

Introduction

Motivation
 Increasing number of image acquisitions
 Automate segmentation and diagnosis
 Reduce scanning time by reducing the number of image slices per acquisition → sparse data

Introduction: Goal of the project
To segment sparse cardiac image data using a 3D Active Shape Model, implementing and testing two different approaches:
1) Optimization methods, as used by Lötjönen et al.
2) The Cross Out method, newly developed in this project

Active Shape Model  A Statistical Shape Model (SSM) contains information about the mean shape and shape variations based on a representative training set. x = x mean + Φb b = Φ T (x - x mean )  When a SSM is used to segment unseen data then it is called an Active Shape Model (ASM).

Active Shape Model
(Figure: shape variations of the first, second, and third mode)

Active Shape Model

 An ASM requires complete data sets
 Modified ASMs for sparse data:
- SPASM by van Assen et al.
- Optimization methods by Lötjönen et al.
- Cross Out method (new)

Optimization methods  A different b vector generates a different shape x  Finding a vector b which generates a shape that fits the sparse data best → using optimization methods  Optimization methods: finding an optimum (global minimum or maximum) of a (cost)function

Optimization methods  Steepest Descent method  Conjugate Gradients method  Space method  … It is application dependent which method works best

Optimization methods: Steepest Descent method
A new point, closer to the minimum, is found by searching for a minimum in the direction opposite to the gradient at the current point.
Convergence is poor if the starting point x_0 is badly chosen.
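A minimal fixed-step sketch of this idea; the slides do not specify the line search, so a constant step size is assumed here (step = 0.1 matches the step size used in the experiments later on):

```python
import numpy as np

def steepest_descent(grad, x0, step=0.1, tol=1e-6, max_iter=1000):
    # Repeatedly step opposite to the gradient at the current point;
    # a badly chosen x0 can make convergence slow.
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # (near-)stationary point reached
            break
        x = x - step * g              # move against the gradient
    return x

# Toy usage: minimize f(x) = x1^2 + 2 x2^2 with gradient (2 x1, 4 x2)
x_min = steepest_descent(lambda x: np.array([2 * x[0], 4 * x[1]]), [3.0, 1.0])
```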

Optimization methods  Uses non-interfering search directions, conjugate directions  A minimum can be found in a t-dimensional space in t iterations Conjugate Gradients method

Optimization methods: Conjugate Gradients method
(Figure: convergence paths in the (x_1, x_2) plane for steepest descent versus conjugate gradients)

Optimization methods  Repetitive search to find the optimal vector b opt  Each element of b, b i for i = 1,…,t, is separately optimized  The initial b is b opt = 0, b i,opt = 0 Space method

Optimization methods: Space method
(Figure: cost f(b) as a function of b_i; the optimum b_i,opt is searched within the interval [−3√λ_i, 3√λ_i])
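A hypothetical sketch of this coordinate-wise search; the slides do not state how each b_i is optimized within its interval, so sampling it on a grid is an assumption, and eigvals stands for the eigenvalues λ_i of the shape model:

```python
import numpy as np

def space_method(cost, eigvals, n_samples=50):
    # Coordinate-wise search: optimize each b_i separately over the
    # interval [-3*sqrt(lambda_i), 3*sqrt(lambda_i)], starting from b = 0.
    b_opt = np.zeros(len(eigvals))
    for i, lam in enumerate(eigvals):
        lim = 3.0 * np.sqrt(lam)
        candidates = np.linspace(-lim, lim, n_samples)
        b_try = np.tile(b_opt, (n_samples, 1))
        b_try[:, i] = candidates
        costs = [cost(b) for b in b_try]
        b_opt[i] = candidates[int(np.argmin(costs))]  # keep the best b_i
    return b_opt
```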

Method of Least Squares

 Can be applied to solve a linear system Ax = b
 x* = (A^T A)^{-1} A^T b is the least squares solution of the linear system Ax = b: the distance between Ax* and b is minimized
 A is the coefficient matrix, x the unknown variables, and b the known variables
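A small NumPy check of this normal-equation solution against the library least squares solver (the matrix sizes are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((12, 4))   # overdetermined system: more equations than unknowns
b = rng.standard_normal(12)

x_star = np.linalg.inv(A.T @ A) @ A.T @ b          # normal-equation form
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)    # library solver
assert np.allclose(x_star, x_lstsq)                # same minimizer of |Ax - b|
```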

Method of Least Squares: Application to ASMs
 A shape can be generated with: x = x_mean + Φb
 Linear system: Φb = (x − x_mean), with Φ the coefficient matrix, b the unknown variables, and (x − x_mean) the known variables
 Least squares solution: b* = (Φ^T Φ)^{-1} Φ^T (x − x_mean)
 In the literature: b* = Φ^T (x − x_mean)

Method of Least Squares: Application to ASMs
A shape x_0 is generated with b_0, and b is recovered in two ways:
b*_calc,1 = Φ^T (x_0 − x_mean)
b*_calc,2 = (Φ^T Φ)^{-1} Φ^T (x_0 − x_mean)
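A toy reproduction of this check; since PCA yields Φ with orthonormal columns, Φ^T Φ = I and both estimates recover b_0 exactly on complete data (the sizes 3N = 30 and t = 5 are illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)
Phi, _ = np.linalg.qr(rng.standard_normal((30, 5)))  # orthonormal columns, as from PCA
x_mean = rng.standard_normal(30)

b0 = rng.standard_normal(5)
x0 = x_mean + Phi @ b0                               # shape generated with b0

b_calc1 = Phi.T @ (x0 - x_mean)                                # literature formula
b_calc2 = np.linalg.inv(Phi.T @ Phi) @ Phi.T @ (x0 - x_mean)   # least squares

# Phi^T Phi = I for orthonormal columns, so both recover b0 on complete data.
assert np.allclose(b_calc1, b0) and np.allclose(b_calc2, b0)
```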

Cross Out method
When x is not complete (sparse data), the equation Φb = (x − x_mean) = dx still holds when the corresponding rows of dx and Φ are crossed out.
Regarding the dimensions: [3N × t][t × 1] = [3N × 1] → [(3N − 3R) × t][t × 1] = [(3N − 3R) × 1]

Cross Out Method  Now a sparse linear system is created Φ sparse b = dx sparse = x sparse – x mean,sparse  Using the method of least squares to calculate b * sparse b * sparse =( Φ sparse T Φ sparse ) -1 Φ sparse T (x sparse – x mean,sparse )

Experiments  Error : average point to point distance between the point of calculated shape and the original shape  ptosError : average point to surface distance between the points of the calculated shape and the surface of the original shape The performance of the cross out method and the optimization methods can be determined by:

Experiments with phantoms  Per experiment a set of 15 shapes is used  15 different b vectors  Each element of b is randomly chosen with the restriction that the generated shape resembles the shapes of the training set.

Experiments with phantoms: Testing the Cross Out method
1) Deleting 500 points:
 with the most variation
 with the least variation
 randomly
2) Deleting points in slices, varying the number of deleted slices
3) Using 60 and 89 modes

Experiments with phantoms: Testing the Cross Out method (1), deleting 500 points
(Figure: complete shape; shape without the points with least variation; shape without the points with most variation; shape without 500 random points)

Experiments with phantoms Testing the Cross Out method (1), deleting 500 points

Experiments with phantoms: Testing the Cross Out method (2), varying the number of deleted slices
(Figure: slice configurations; X = deleted slice)

Experiments with phantoms: Testing the Cross Out method (2), varying the number of deleted slices

Experiments with phantoms  The complete model has 89 modes of variations, 100 % of all the variation present in the training set  60 modes contains about 97 % of the variation present in the training set  15 shapes in 5 configurations Testing the Cross Out method (3), using 60 and 89 modes

Experiments with phantoms: Testing the Cross Out method (3), using 60 and 89 modes
(Figure: slice configurations; X = deleted slice)

Experiments with phantoms Testing the Cross Out method (3), using 60 and 89 modes

Experiments with phantoms  It does matter which points are deleted, deleting points with least variation gives the best result  Up till 8 slices can be deleted and still a good shape is found  Using 89 modes gives a better result than 60 modes Testing the Cross Out method, conclusions

Experiments with phantoms  Implemented in C by dr. J. Lötjönen using 60 modes  Optimization method  Step size of the gradient  Range of the parameter space  15 shapes in 4 different configurations  Conjugate gradients method with step size 0.1 for Error  Steepest descent method with step size 0.1 for ptosError Optimization methods

Experiments with phantoms  15 shapes in 4 configurations  Cross Out method with 60 modes  Cross Out method with 89 modes  Conjugate gradients with step size 0.1  Steepest Descent with step size 0.1 Optimization versus Cross Out

Experiments with phantoms: Optimization versus Cross Out
(Figure: slice configurations with 11, 9, 7, and 5 slices)

Results Optimization versus Cross Out, using phantoms

Results Optimization versus Cross Out, using phantoms

Experiments with real data  15 shapes in 4 configurations  Cross Out 60 modes, Cross Out 89 modes, Conjugate gradients step size slices6 slices4 slices8 slices

Results Real data

Conclusions and Discussion
 When using an ASM it is better to use the least squares method
 The Cross Out method gives better results than the optimization methods
 The performance of an ASM depends on how well the training set represents the entire population

Future work
 Test the robustness of the Cross Out method
 The Cross Out method should be implemented as an iterative procedure
 Design a smart scanning protocol

Questions?
Special thanks to Hans van Assen and Bart ter Haar Romeny