Orthogonal and Least-Squares Based Coordinate Transforms for Optical Alignment Verification in Radiosurgery Ernesto Gomez PhD, Yasha Karant PhD, Veysi Malkoc, Mahesh R. Neupane, Keith E. Schubert PhD, Reinhard W. Schulte, MD

ACKNOWLEDGEMENT
Henry L. Guenther Foundation
Instructionally Related Programs (IRP), CSUSB
Associated Students, Inc. (ASI), CSUSB
Department of Radiation Medicine, Loma Linda University Medical Center (LLUMC)
Michael Moyers, Ph.D. (LLUMC)

OVERVIEW
Introduction
System Components
1. Camera System
2. Marker System
Experimental Procedure
1. Phantombase Alignment
2. Alignment Verification (Image Processing)
3. Marker Image Capture
Coordinate Transformations
1. Orthogonal Transformation
2. Least-Squares Transformation
Results and Analysis
Conclusions and Future Directions
Q & A

INTRODUCTION
Radiosurgery is a non-invasive stereotactic treatment technique that applies focused radiation beams.
It can be delivered in several ways:
1. Gamma Knife
2. LINAC radiosurgery
3. Proton radiosurgery
It requires sub-millimeter positioning and beam-delivery accuracy.

Functional Proton Radiosurgery
Generation of small functional lesions with multiple overlapping proton beams (250 MeV)
Used to treat functional disorders:
- Parkinson's disease (pallidotomy)
- Tremor (thalamotomy)
- Trigeminal neuralgia
Target definition with MRI
[Figure: proton dose distribution for trigeminal neuralgia]

System Components: Camera System
Three Vicon cameras
[Figure: camera geometry]

System Components: Marker Systems and Immobilization
- Marker cross
- Marker caddy
- Stereotactic halo
[Figures: marker systems; marker caddy & halo]

Experimental Procedure: Overview
Goal of the stereotactic procedure: align an anatomical target with known stereotactic coordinates to the proton beam axis with submillimeter accuracy.
Experimental procedure:
- Align a simulated marker with known stereotactic coordinates to the laser beam axis.
- Let the system determine the distance between the (invisible) predefined marker and the beam axis, based on the (visible) caddy and cone markers.
- Determine the system alignment error repeatedly (3 independent experiments) for each of 5 different marker positions.

Experimental Procedure, Step I: Phantombase Alignment
- Platform attached to the stereotactic halo
- Three ceramic markers attached to pins of three different lengths
- Five hole locations distributed in stereotactic space
- Provides 15 marker positions with known stereotactic coordinates

Experimental Procedure, Step II: Marker Alignment (Image Processing)
- A 1-cm laser beam from the stereotactic cone is aligned to a phantombase marker.
- A digital image shows the laser beam spot and the marker shadow.
- The image is processed in MATLAB 7.0, applying a customized circular-fit algorithm to the beam and marker images.
- The distance offset between beam center and marker center is calculated (typically <0.2 mm).
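The slides do not specify the authors' MATLAB circular-fit algorithm; as an illustration only, the following Python/NumPy sketch shows one common choice, the algebraic (Kåsa) least-squares circle fit, which recovers a center and radius from edge points such as those of a laser spot or marker shadow. The function name `fit_circle` and the sample data are assumptions, not the authors' code.

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit.

    Writes the circle as x^2 + y^2 + a*x + b*y + c = 0, solves the
    linear system for (a, b, c), then recovers center and radius.
    """
    xs = np.asarray(xs, dtype=float)
    ys = np.asarray(ys, dtype=float)
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0              # center from linear coefficients
    r = np.sqrt(cx**2 + cy**2 - c)           # radius from constant term
    return cx, cy, r

# Hypothetical edge points sampled from a circle of radius 2 centered at (1, -3):
theta = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
cx, cy, r = fit_circle(1.0 + 2.0 * np.cos(theta), -3.0 + 2.0 * np.sin(theta))
```

Given two such fits (beam spot and marker), the slide's distance offset would simply be the Euclidean distance between the two fitted centers.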

Experimental Procedure, Step III: Capture of Cone and Caddy Markers
- Capture of all visible markers with 3 Vicon cameras
- Selection of 6 markers in each system, forming two large, independent triangles
[Figures: caddy marker triangles; cross marker triangles]

Coordinate Transformation: Orthogonal Transformation
Involves 2 coordinate systems:
- Local (L) coordinate system (patient reference system)
- Global (G) coordinate system (camera reference system)
Two-step transformation of the 2 triangles:
- Rotation: make the L-plane parallel to the G-plane, then the L-triangle collinear with the G-triangle
- Translation
Transformation equation used:
p_n(g) = M_B · M_A · p_n(l) + t    (n = 1, 2, 3)
where M_A is the rotation for co-planarity, M_B is the rotation for co-linearity, and t is the translation vector.
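The slide builds the rotation in two explicit steps (M_A for co-planarity, M_B for co-linearity). An equivalent way to obtain the same kind of rigid map p(g) = R·p(l) + t from three matched marker points is the SVD-based absolute-orientation (Kabsch) solution, sketched below as an illustration; it is not the authors' exact two-step construction, and the sample points are assumptions.

```python
import numpy as np

def rigid_transform(local_pts, global_pts):
    """Best-fit rotation R and translation t with p_g ~ R @ p_l + t,
    computed by the SVD-based (Kabsch) method. Rows of the inputs are
    corresponding 3-D points (e.g., the three triangle markers)."""
    L = np.asarray(local_pts, dtype=float)
    G = np.asarray(global_pts, dtype=float)
    cl, cg = L.mean(axis=0), G.mean(axis=0)        # centroids
    H = (L - cl).T @ (G - cg)                      # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    # Sign correction guarantees a proper rotation (det R = +1):
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cg - R @ cl
    return R, t

# Three local points and their images under a known 90-degree z-rotation + shift:
L = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
G = L @ Rz.T + np.array([2.0, -1.0, 3.0])
R, t = rigid_transform(L, G)
```

Because R is constrained to be orthogonal with determinant +1, the recovered map is always a rigid motion, which is the key property distinguishing this approach from the unconstrained least-squares fit discussed next.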

Coordinate Transformation: Least-Squares Transformation
- Also involves the global (G) and local (L) coordinate systems.
- The transformation is represented by a single homogeneous-coordinate (4-D vector and matrix) representation.
- General least-squares transformation system: A X = B.
- The regression solution is X = A⁺ B, where A⁺ is the pseudo-inverse of A (i.e., (AᵀA)⁻¹Aᵀ, computed via QR) and X is the homogeneous 4 × 4 transformation matrix.
- The transformation matrix or its inverse can be applied to a local or global vector to determine the corresponding vector in the other system.
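The A X = B formulation above can be sketched as follows: matched points are stacked as homogeneous rows of A (local) and B (global), and X is obtained as the least-squares solution A⁺B. This is an illustrative Python/NumPy sketch under a row-vector convention ([p_l, 1] X = [p_g, 1]); the function names and sample data are assumptions, and NumPy's `lstsq` stands in for the slide's QR-based pseudo-inverse.

```python
import numpy as np

def lstsq_homogeneous_transform(local_pts, global_pts):
    """Solve A X = B for the 4x4 homogeneous transform X, where rows of
    A and B are matched points in homogeneous coordinates. The
    least-squares solution equals the pseudo-inverse form (A^T A)^-1 A^T B."""
    L = np.asarray(local_pts, dtype=float)
    G = np.asarray(global_pts, dtype=float)
    A = np.hstack([L, np.ones((L.shape[0], 1))])   # n x 4, rows [x y z 1]
    B = np.hstack([G, np.ones((G.shape[0], 1))])   # n x 4
    X, *_ = np.linalg.lstsq(A, B, rcond=None)
    return X                                       # 4 x 4

def apply_transform(X, pt):
    """Map one local point to global coordinates: [p_l, 1] @ X = [p_g, w]."""
    p = np.append(np.asarray(pt, dtype=float), 1.0) @ X
    return p[:3] / p[3]

# Six marker points related by a pure translation (+1, +2, +3):
L = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0],
              [0, 0, 1], [1, 1, 0], [1, 0, 1]], float)
G = L + np.array([1.0, 2.0, 3.0])
X = lstsq_homogeneous_transform(L, G)
```

Note that X is unconstrained: with noisy marker data it can drift toward a general affine map (shear and scale), not a rigid motion, which is consistent with the much larger system errors reported for this transform later in the presentation.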

Results: Accuracy of Camera System
Method: compare camera-measured distances between marker pairs with DIL-measured values.
Results (15 independent runs), mean distance error ± SD:
- Caddy: -0.23 ± 0.33 mm
- Cross: 0.00 ± 0.09 mm

Results: System Error, Initial Results
(a) First 12 data runs, mean system error ± SD:
- Orthogonal transform: 2.8 ± 2.2 mm (range 0.5-5.5)
- LS transform: 61 ± 33 mm (range 8.9-130)
(b) 8 data runs, after improving calibration:
- Orthogonal transform: 2.4 ± 0.6 mm (range 1.5-3.0)
- LS transform: 46 ± 23 mm (range 18-78)

Results: System Error, Current Results
(c) Last 15 data runs, 5 target positions, 3 runs per position, mean system error ± SD:
- Orthogonal transform: 0.6 ± 0.3 mm (range 0.2-1.3)
- LS transform: 25 ± 8 mm (range 14-36)
[Figures: least-squares and orthogonal results]

Conclusions and Future Directions
- Currently, the orthogonal transformation outperforms the standard least-squares transformation by more than one order of magnitude.
- A comparative analysis between the orthogonal transformation and a more accurate variant of the least-squares transformation (e.g., constrained least squares) remains to be done.
- Various optimization options, e.g., different marker arrangements, will be applied to attain an accuracy better than 0.5 mm.