Using Neumann Series to Solve Inverse Problems in Imaging
Christopher Kumar Anand

Anand-Neumann Series-AdvOl-Feb
Inverse Problem
given measurements ( )
model ( )
solve for ( )
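
The setting above can be sketched with a toy linear inverse problem: given noisy measurements y = A x + n from a known forward model A, recover x. The matrix sizes, noise level, and regularization weight below are hypothetical, chosen only for illustration; the regularized normal equations stand in for whatever solver the talk develops later.

```python
import numpy as np

# Toy linear inverse problem: recover x from noisy measurements y = A @ x + n.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))        # hypothetical forward model
x_true = rng.standard_normal(10)
y = A @ x_true + 0.01 * rng.standard_normal(50)

# Regularized least-squares estimate: minimize ||A x - y||^2 + lam ||x||^2
lam = 1e-3
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y)
```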

Seismic Imaging
1. Bang
2. Listen
3. Solve acoustic equations

Magnetic Resonance Imaging
0. Tissue density
1. Phase modulation
2. Sample Fourier transform
3. Invert linear system

Challenging when...
model is big ( variables)
model is nonlinear
data is inexact (we usually know the error only probabilistically)

Imaging
discretize the continuous model
regular volume/area elements
sparse structure

Solutions: Noise
filter the noisy solution
1. convolution filter
2. bilateral filter
3. anisotropic diffusion (uses a PDE)
regularize via a penalty
4. energy
5. Total Variation
6. something new
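
Option 1 above, the convolution filter, can be sketched in one dimension: smooth the noisy solution with a small normalized Gaussian kernel. The kernel radius, width, and test signal are hypothetical choices for illustration.

```python
import numpy as np

# Sketch of a convolution filter: denoise by convolving with a Gaussian kernel.
def gaussian_kernel(radius=2, sigma=1.0):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()                       # normalize so weights sum to 1

def smooth(signal, kernel):
    return np.convolve(signal, kernel, mode="same")

clean = np.sin(np.linspace(0, np.pi, 100))   # hypothetical smooth signal
noisy = clean + 0.1 * np.random.default_rng(1).standard_normal(100)
denoised = smooth(noisy, gaussian_kernel())
```

Note the trade-off the next slides address: this kernel smooths noise and edges alike, which motivates the bilateral filter.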

Solutions: Problem Size
I. use a fast method (e.g. based on the FFT)
II. use a (parallelizable) iterative method
a. conjugate gradient
b. Neumann series
III. use sparsity
c. choose penalties with sparse Hessians
IV. use fast hardware
d. ( )-way parallelizable
e. single precision
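
Option IIb, the Neumann series, is the core idea of the talk and can be sketched as follows: split A = D + E with D easily invertible, and expand A⁻¹ = (I + D⁻¹E)⁻¹ D⁻¹ as the series Σₖ (−D⁻¹E)ᵏ D⁻¹, which converges when the spectral radius of D⁻¹E is below 1. Each term costs only matrix-vector products, so the iteration parallelizes well. The diagonal split and the matrix sizes below are hypothetical stand-ins for the operator decomposition described later.

```python
import numpy as np

# Neumann-series solve of A x = b with A = D + E, D easily invertible:
#   x = sum_k (-D^{-1} E)^k  D^{-1} b
rng = np.random.default_rng(0)
n = 20
D = np.diag(2.0 + rng.random(n))            # easily invertible diagonal part
E = 0.1 * rng.standard_normal((n, n))       # small remainder, rho(D^-1 E) < 1
A = D + E
b = rng.standard_normal(n)

Dinv = np.diag(1.0 / np.diag(D))
x = Dinv @ b
term = Dinv @ b
for _ in range(50):                          # truncated Neumann series
    term = -Dinv @ (E @ term)
    x = x + term
```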

Cell BE
25 GFlops double precision, 200 GFlops single precision
need 384-way parallelism:
4-way SIMD
8 cores
6-times unrolling
double buffering

Solutions: Nonlinearity
use an iterative method
A. sequential projection onto convex sets
B. trust region
C. sequential quadratic approximations

Plan of Talk
A. example/benchmark
B. optimization
1. fit to data
2. regularization
i. new penalty (with optimized gradient)
ii. nonlinear penalties
C. solution
3. operator decomposition
4. Neumann series
D. proof of convergence
E. numerical example
5. noise reduction
6. linear convergence

Example/Benchmark
complex image
complex data
model ( )

Example/Benchmark
diagonal blocks easily invertible

Example/Benchmark
direct inverse

Optimization
fit-to-data
quadratic penalties
nonlinear penalties

Fit to Data
typically a dense transformation
use fast matrix-vector products, e.g. FFT-based forward/adjoint
linear forward problem gives a quadratic objective
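
The FFT-based forward/adjoint point above can be sketched for a hypothetical Fourier-sampling model (as in the MRI example): with forward operator F = mask ∘ FFT, the gradient of the quadratic fit-to-data term ½‖Fx − y‖² is Fᴴ(Fx − y), costing one forward and one adjoint transform per evaluation. The mask and signal below are invented for illustration; note that the adjoint of NumPy's unnormalized `fft` is `n * ifft`.

```python
import numpy as np

# Quadratic fit-to-data term with an FFT-based forward model (hypothetical
# random undersampling mask).
rng = np.random.default_rng(0)
n = 64
mask = rng.random(n) < 0.5                  # which Fourier coefficients are sampled
x = rng.standard_normal(n)                  # ground-truth real image (1-D toy)
y = mask * np.fft.fft(x)                    # measured data

def grad(x_est):
    # gradient of 0.5 * || mask * FFT(x_est) - y ||^2,
    # computed with one forward FFT and one adjoint (n * ifft)
    r = mask * np.fft.fft(x_est) - y
    return np.real(n * np.fft.ifft(mask * r))
```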

Bilateral Filter
spatial kernel
range kernel
Bilateral Regularizer
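
The two kernels named above can be sketched in a minimal 1-D bilateral filter: each output sample is a weighted average whose weights are a spatial kernel (distance between samples) times a range kernel (difference in intensity), so flat regions are smoothed while edges survive. The radius and kernel widths below are hypothetical.

```python
import numpy as np

# Minimal 1-D bilateral filter: spatial kernel x range kernel.
def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.2):
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        d = np.arange(lo, hi) - i
        w = np.exp(-d**2 / (2 * sigma_s**2)) \
          * np.exp(-(window - signal[i])**2 / (2 * sigma_r**2))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

step = np.concatenate([np.zeros(20), np.ones(20)])      # ideal edge
noisy = step + 0.05 * np.random.default_rng(2).standard_normal(40)
filtered = bilateral_1d(noisy)
```

Because the range kernel gives near-zero weight across the jump, the edge stays sharp while the noise on either side is averaged down.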

Quadratic Case
optimize the direction of the gradient
LP problem, like FIR filter design
heuristic choice of "stop band"

Optimal Spatial Kernel
discrete kernel, not rotationally symmetric
exact in the quadratic case
use in all cases

TV-like
similar properties to Total Variation
won't smooth edges
not differentiable

Sequential Quadratic TV-like
sequential quadratic approximation
tangent to TV-like

Mask
penalize pixel values outside the object

Segmentation
probability of observing pixels
assumes discrete pixel values
equal likelihood
equal normal error

Solve: Operator Decomposition
"small" block diagonal with sparse banded blocks
linear-time Cholesky decomposition
leads to a formal expression ( )
image rows correspond to blocks
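
The linear-time Cholesky step above can be sketched for one banded block, here taken to be tridiagonal for simplicity: the factor L keeps the same bandwidth, so factorization and the subsequent forward/back substitutions all run in O(n) per block. The matrix entries below are hypothetical.

```python
import numpy as np

# O(n) Cholesky factorization of an SPD tridiagonal block A = L L^T,
# where A has diagonal d and symmetric off-diagonal e.
def cholesky_tridiag(d, e):
    n = len(d)
    ld = np.empty(n)          # diagonal of L
    le = np.empty(n - 1)      # sub-diagonal of L
    ld[0] = np.sqrt(d[0])
    for i in range(1, n):
        le[i - 1] = e[i - 1] / ld[i - 1]
        ld[i] = np.sqrt(d[i] - le[i - 1]**2)
    return ld, le

d = np.full(6, 4.0)           # diagonally dominant -> SPD (toy values)
e = np.full(5, 1.0)
ld, le = cholesky_tridiag(d, e)
```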

Calculate Using Fast Matrix-Vector Ops
cost is (polynomial order) x (cost of the fast algorithm)
plus terms linear in problem size

Row by Row
each block corresponds to a row
each block can be calculated in parallel
(number of rows)-way parallelism

Even Better: Use Minimax Polynomial
calculate ( )
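
The polynomial idea can be sketched as follows: instead of the truncated Neumann sum, fit a low-degree polynomial p(t) ≈ 1/t on an interval containing the spectrum of A, then apply p(A)b with Horner's rule using only matrix-vector products. For simplicity a least-squares fit stands in here for the true minimax (Remez) polynomial, and the diagonal test matrix with spectrum in [1, 2] is a hypothetical choice.

```python
import numpy as np

# Approximate A^{-1} b by p(A) b, where p(t) ~ 1/t on [1, 2] covers A's spectrum.
rng = np.random.default_rng(0)
n = 30
A = np.diag(np.linspace(1.0, 2.0, n))       # toy operator, spectrum in [1, 2]
b = rng.standard_normal(n)

t = np.linspace(1.0, 2.0, 200)
coeffs = np.polyfit(t, 1.0 / t, deg=8)      # highest power first (LS stand-in)

x = coeffs[0] * b
for c in coeffs[1:]:                        # Horner's rule: x = p(A) b,
    x = A @ x + c * b                       # one mat-vec per coefficient
```

The polynomial degree directly sets the number of matrix-vector products, which is the "(polynomial order) x (fast algorithm)" cost on the previous slide.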

Convergence
minimax polynomial error
step shortening
next error vs. current error

Safe in Single Precision
recalculate the gradient at each outer iteration
numerical error builds up only during polynomial evaluation
coefficients are well-behaved (and in our control)

Numerical Tests
start with quadratic penalties
add nonlinear penalties and change weights
with and without a fixed time budget for computation

Iterations
I. 10 iterations with bi2 regularization
II. introduce other penalties
1. masking
2. magnet
3. segmentation

Iterations
optimized c
simple c

Absolute Error (plot: error vs. iteration)

Relative (to limit) Error (plot: error vs. iteration)

Conclusion
highly parallel
safe in single precision
robust with respect to noise
accommodates nonlinear penalties

Thanks to: students and colleagues in the ( )