CSci 6971: Image Registration
Lecture 16: View-Based Registration
March 16, 2004
Prof. Chuck Stewart, RPI; Dr. Luis Ibanez, Kitware


Overview
- Retinal image registration
- The Dual-Bootstrap ICP algorithm
- Covariance matrix
- Covariance propagation
- Model selection
- View-based registration
- Software design
- Warning: mathematically, this lecture is a little "rich." You will not be responsible for knowing the details.

Retinal Image Registration: Applications
- Mosaics
- Multimodal integration
- Blood flow animation
- Change detection

Mosaics

Multimodal Integration

Fluorescein Angiogram Animation

Change Visualization

Retinal Image Registration: Preliminaries
- Features
- Transformation models
- Initialization

Features
- Vascular centerline points
  - Discrete locations along the vessel contours
  - Described in terms of pixel locations, orientations, and widths
- Vascular landmarks
  - Pixel locations, orientations, and widths of the vessels that meet to form landmarks
(Figure: retinal image with vascular centerlines and landmarks labeled)

Transformation Models

Model               DoF   Accuracy (pixels)
Similarity           4    5.05
Affine               6    4.58
Reduced quadratic
Full quadratic      12    0.64

Initializing Registration
- Form a list of landmarks in each image
- Form matches of one landmark from each image
  - The selection of these matches will be discussed in Lectures 18 and 19
- Choose matches, one at a time; for each match:
  - Compute an initial similarity transformation in the small image region surrounding the landmarks
  - Apply the Dual-Bootstrap ICP procedure to see if the initial alignment can be successfully grown into an accurate, image-wide alignment
- End when one match leads to success, or all matches are exhausted
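The initialization loop above can be sketched as follows. This is only an illustrative skeleton, not rgrl code: the helper callables `estimate_similarity` and `dual_bootstrap_icp` are hypothetical stand-ins for the similarity estimator and the DB-ICP growth procedure.

```python
def initialize(matches, estimate_similarity, dual_bootstrap_icp):
    """Try landmark matches one at a time until one grows image-wide."""
    for match in matches:
        # Initial similarity transform from the small region around the landmarks.
        t0 = estimate_similarity(match)
        # Try to grow it into an accurate, image-wide alignment.
        ok, t = dual_bootstrap_icp(t0)
        if ok:
            return t
    return None  # all candidate matches exhausted
```

The control flow is the point here: candidates are tried in order, and the first one that survives the growth test wins.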

Dual-Bootstrap: Overview
- Match and refine the estimate in each region
- Bootstrap the model:
  - Low-order for small regions; high-order for large ones
  - Automatic selection
- Bootstrap the region:
  - Covariance propagation gives the uncertainty
- Iterate until convergence

Matching and Estimation in Each Region
- Matching (standard stuff):
  - Vascular centerline points from within the current region of the moving image
  - Mapped using the current transform estimate
  - Find the closest point using a Borgefors digital distance map
- Estimation:
  - Fix the scale estimate
  - Run IRLS (iteratively reweighted least squares)
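A minimal IRLS sketch with a fixed scale, for a simple line model. The weight function (Cauchy, w = 1/(1 + (r/s)^2)) and the line model are assumptions for illustration; the lecture's estimator uses the rgrl robust machinery instead.

```python
def irls_line(xs, ys, scale, iters=20):
    """Robust line fit y = a0 + a1*x by iteratively reweighted least squares."""
    a0, a1 = 0.0, 0.0
    w = [1.0] * len(xs)                       # start from ordinary least squares
    for _ in range(iters):
        # Weighted least squares for the current weights.
        sw = sum(w)
        swx = sum(wi * x for wi, x in zip(w, xs))
        swy = sum(wi * y for wi, y in zip(w, ys))
        swxx = sum(wi * x * x for wi, x in zip(w, xs))
        swxy = sum(wi * x * y for wi, x, y in zip(w, xs, ys))
        det = sw * swxx - swx * swx
        a1 = (sw * swxy - swx * swy) / det
        a0 = (swy - a1 * swx) / sw
        # Re-weight from the residuals; the scale stays fixed, as on the slide.
        w = [1.0 / (1.0 + ((y - a0 - a1 * x) / scale) ** 2)
             for x, y in zip(xs, ys)]
    return a0, a1
```

With a gross outlier in the data, the re-weighting step drives its weight toward zero, and the fit converges to the inliers' line.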

Covariance Matrix of the Estimate
- Measures uncertainty in the estimate of the transformation parameters
- Basis for region growth and model selection
- The next few slides give an overview of computing an approximate covariance matrix
- We'll start with linear regression

Problem Formulation in Linear Regression
- Independent (non-random) variable values: x_i = (x_{i1}, ..., x_{ik}), for i = 1, ..., N
- Dependent (random) variable values: y_i
- Linear relationship based on a (k+1)-dimensional parameter vector a:
  y_i = a_0 + a_1 x_{i1} + ... + a_k x_{ik} + e_i

Least-Squares Formulation
- Least-squares error term: E(a) = sum_i (y_i - x_i^T a)^2 = ||y - X a||^2
- Here X is the N x (k+1) design matrix whose i-th row is (1, x_{i1}, ..., x_{ik}), and y = (y_1, ..., y_N)^T

Estimate and Covariance Matrix
- Estimate: a_hat = (X^T X)^{-1} X^T y
- Residual error variance (square of the "scale"): sigma_hat^2 = (1/(N-k-1)) sum_i (y_i - x_i^T a_hat)^2
- Parameter estimate covariance: Cov(a_hat) = sigma_hat^2 (X^T X)^{-1}
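The three formulas above, worked out concretely for the simplest case (a line, k = 1). This is a sketch with a hand-rolled 2x2 solve, not code from the lecture's toolkit.

```python
def ols_line(xs, ys):
    """OLS for y = a0 + a1*x: estimate, residual variance, parameter covariance."""
    n = len(xs)
    # Normal-equation sums for the 2-column design X = [1, x].
    s1, sx, sxx = n, sum(xs), sum(x * x for x in xs)
    sy, sxy = sum(ys), sum(x * y for x, y in zip(xs, ys))
    det = s1 * sxx - sx * sx
    # a_hat = (X^T X)^{-1} X^T y, written out for the 2x2 case.
    a1 = (s1 * sxy - sx * sy) / det
    a0 = (sy - a1 * sx) / s1
    # Residual variance ("scale" squared), with k = 1 regressor.
    rss = sum((y - a0 - a1 * x) ** 2 for x, y in zip(xs, ys))
    sigma2 = rss / (n - 2)
    # Covariance = sigma2 * (X^T X)^{-1}.
    inv = [[sxx / det, -sx / det], [-sx / det, s1 / det]]
    cov = [[sigma2 * inv[r][c] for c in range(2)] for r in range(2)]
    return (a0, a1), sigma2, cov
```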

Aside: Line Fitting in 2D
- Form of the equation: y = a_0 + a_1 x
- If the x_i values are centered (sum_i x_i = 0), then the parameter estimates are independent, with variances sigma^2 / sum_i x_i^2 and sigma^2 / N for the linear and constant terms, respectively
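A small sketch of the centering trick from this slide: after subtracting the mean of the x values, the normal matrix X^T X = diag(N, sum x^2) is diagonal, so its inverse is diagonal and the two parameter variances decouple exactly as stated.

```python
def line_param_variances(xs, sigma2):
    """Variances of the intercept and slope estimates after centering x."""
    n = len(xs)
    xbar = sum(xs) / n
    xc = [x - xbar for x in xs]          # center the x values: sum(xc) == 0
    sxx = sum(x * x for x in xc)
    # With centered x, X^T X = diag(N, sum x^2), so the covariance is diagonal:
    var_intercept = sigma2 / n           # constant term
    var_slope = sigma2 / sxx             # linear term
    return var_intercept, var_slope
```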

Hessians and Covariances
- Back to k dimensions; reconsider the objective function E(a) = ||y - X a||^2
- Compute the Hessian matrix: H = d^2 E / da da^T = 2 X^T X
- Observe the relationship: Cov(a_hat) = sigma_hat^2 (X^T X)^{-1} = 2 sigma_hat^2 H^{-1}

Hessians and Covariances (continued)
- This is exact for linear regression, but serves as a good approximation for non-linear least squares
- In general, the Hessian depends on the estimate (in regression it doesn't, because the problem is quadratic), so the approximate relationship is Cov(a_hat) ≈ 2 sigma_hat^2 H(a_hat)^{-1}
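A numerical check of the Hessian-covariance relationship on the two slides above (a sketch with assumed toy data, not lecture code): for E(a) = sum_i (y_i - a0 - a1*x_i)^2, central finite differences should recover H = 2 X^T X, so sigma^2 (X^T X)^{-1} equals 2 sigma^2 H^{-1}.

```python
def hessian_fd(E, a, h=1e-4):
    """Central finite-difference Hessian of a scalar function E at point a."""
    n = len(a)
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            def at(di, dj):
                p = list(a)
                p[i] += di
                p[j] += dj
                return E(p)
            H[i][j] = (at(h, h) - at(h, -h) - at(-h, h) + at(-h, -h)) / (4 * h * h)
    return H

# Toy data: N = 4, sum x = 6, sum x^2 = 14, so H should be 2*[[4,6],[6,14]].
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.1, 4.9, 7.0]
E = lambda a: sum((y - a[0] - a[1] * x) ** 2 for x, y in zip(xs, ys))
H = hessian_fd(E, [0.5, 1.5])
```

Because E is quadratic in a, the finite-difference Hessian is exact up to rounding, mirroring the slide's remark that the relationship is exact for regression and only approximate for non-linear problems.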

Hessian in Registration
- Recall the weighted least-squares objective function: E(theta) = sum_k w_k D_k^2(theta), where D_k gives the error of the k-th correspondence
- Keeping the correspondences and the weights fixed, compute the Hessian H = d^2 E / dtheta dtheta^T
- Inverting this gives the covariance approximation
- This approximation is only good when the estimate is fairly accurate

Back to Dual-Bootstrap ICP
- Covariance is used in two ways in each DB-ICP iteration:
  - Determining when the region incorporates enough constraints to switch to a more complex model:
    Similarity => Affine => Reduced Quadratic => Quadratic
  - Determining the growth of the dual-bootstrap region:
    More stable transformation estimates lead to faster growth

Model Selection
- What model should be used to describe a given set of data?
- This is a classic problem in statistics, and many methods have been proposed
- Most trade off the fitting accuracy of higher-order models against the stability (or lower complexity) of lower-order models

Model Selection in DB-ICP
- Use the correspondence set
- Estimate the IRLS parameters and covariance matrices for each model in the current set
- For each model (with d_m parameters) this generates a set of weights and errors and a covariance matrix
- Choose the model maximizing the model selection criterion (derived from Bayesian modeling)
- The first two terms of the criterion increase with increasingly complex models; the last term decreases
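The slide's exact Bayesian criterion (built from the IRLS weights, errors, and covariance determinant) did not survive in the transcript; as an illustration of the same fit-versus-complexity trade-off, here is a sketch using the closely related BIC, comparing a constant model against a line. The model set and criterion are stand-ins, not the DB-ICP formula.

```python
import math

def bic(rss, n, d_m):
    # Lower is better: n*log(RSS/n) rewards fit, d_m*log(n) penalizes complexity.
    return n * math.log(rss / n) + d_m * math.log(n)

def fit_constant(xs, ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)          # RSS of the 1-parameter model

def fit_line(xs, ys):
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    a1 = sxy / sxx
    a0 = ybar - a1 * xbar
    return sum((y - a0 - a1 * x) ** 2 for x, y in zip(xs, ys))  # 2-parameter RSS

def select_model(xs, ys):
    n = len(xs)
    scores = {"constant": bic(fit_constant(xs, ys), n, 1),
              "line": bic(fit_line(xs, ys), n, 2)}
    return min(scores, key=scores.get)
```

On data that is genuinely linear the extra parameter pays for itself; on flat data the complexity penalty makes the constant model win even though the line fits slightly better.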

Region Growth in DB-ICP
- Grow each side of the region independently
- Growth is inversely proportional to the uncertainty in the mapping of the boundary point at the center of each side
- The new rectangular region is found from the new positions of the boundary points

Aside: Covariance Propagation and Transfer Error
- Given the mapping function g_k' = T(g_k; theta)
- We treat theta as a random variable, but not g_k
- Uncertainty in theta makes g_k' a random variable; what then is the covariance of g_k'?
- We solve this using standard covariance propagation techniques:
  - Compute the Jacobian of the transformation with respect to theta, evaluated at g_k: J_k
  - Pre- and post-multiply to obtain the covariance of g_k': Cov(g_k') = J_k Cov(theta) J_k^T
- In computer vision, this is called the "transfer error"
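A concrete sketch of this propagation for a 2-D affine map g' = A g + t with parameters theta = (a11, a12, a21, a22, tx, ty); the affine choice is an assumption for illustration. The Jacobian of g' with respect to theta at a point (x, y) is [[x, y, 0, 0, 1, 0], [0, 0, x, y, 0, 1]], and the transfer error is J * Cov(theta) * J^T.

```python
def transfer_covariance(x, y, sigma_theta):
    """Propagate a 6x6 affine-parameter covariance to a 2x2 point covariance."""
    J = [[x, y, 0.0, 0.0, 1.0, 0.0],
         [0.0, 0.0, x, y, 0.0, 1.0]]
    # JS = J * Sigma_theta  (2x6 times 6x6)
    JS = [[sum(J[r][k] * sigma_theta[k][c] for k in range(6)) for c in range(6)]
          for r in range(2)]
    # Result = JS * J^T  (2x6 times 6x2)
    return [[sum(JS[r][k] * J[c][k] for k in range(6)) for c in range(2)]
            for r in range(2)]
```

With Cov(theta) = s * I, the result is s * (x^2 + y^2 + 1) * I: points farther from the origin are mapped with more uncertainty, which is exactly why boundary points far from the bootstrap region's center grow more slowly.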

Outward Growth of a Side
- Let eta_k be the outward normal of the side, and let r_k be the distance of the side from the center of the region
- Project the transfer-error covariance onto eta_k to obtain a scalar variance sigma_k^2
- The outward growth (along the normal eta_k) is proportional to r_k and inversely proportional to sigma_k, where beta controls the maximum growth rate, which occurs when sigma_k < 1

Putting It All Together: The Example, Revisited

Turning to the Software
- A "view" is a definition, or snapshot, of the registration problem
- A "view" contains:
  - An image region (the current region, plus the goal region)
  - A current transformation estimate and estimator
  - A current stage (resolution) of registration
- Views work in conjunction with multistage registration

View-Based Registration: Procedural Outline
- The following is repeated for each initial estimate
- For each stage:
  - Do:
    - Match
    - Compute weights
    - Estimate scale
    - For each model:
      - Run IRLS to estimate parameters and covariances
      - Re-estimate scale
    - Generate the next view (for the DB-ICP view generator, this chooses the best model and grows the region)
  - Until the region has converged and the highest-order model is in use
  - Prepare for the next stage
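The nesting of this outline (stages, then a convergence loop, then the per-model fits) can be sketched as below. The callables `match`, `irls_fit`, and `next_view` are hypothetical stand-ins for the rgrl components, and the weight/scale steps are folded into them for brevity; this is a control-flow sketch, not the toolkit's interface.

```python
def view_based_register(stages, models, match, irls_fit, next_view, max_iters=50):
    """Skeleton of view-based registration: stages > convergence loop > models."""
    view = None
    for stage in stages:
        for _ in range(max_iters):
            corres = match(stage, view)                    # match in current view
            fits = [irls_fit(m, corres) for m in models]   # params + covariances
            view, converged = next_view(fits)              # pick model, grow region
            if converged:                                  # region stable, highest-
                break                                      # order model in use
    return view
```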

Implementation
- rgrl_view: stores the information about the view
- rgrl_view_generator: generates the next view
- rgrl_view_based_registration: mirrors rgrl_feature_based_registration, with modifications based on the outline on the previous slide
- Example: rgrl/example/registration_retina.cxx

Summary
- Retinal registration:
  - Models, features, and initialization
- DB-ICP:
  - Matching, estimation, and covariances
  - Model selection
  - Region growing
- Generalization to view-based registration and its implementation in the toolkit

Looking Ahead to Lecture 17
- Discussion of toolkits:
  - What's easy, what's hard
  - What's missing
- Project discussion:
  - Requirements
  - Topics and initial steps