
1 Algebraic and Statistical Reconstruction Algorithms Liran Levy Advanced Topics in Sampling (049029), Winter 2008/9

2 Presentation Outline
Chapter 7: Algebraic Reconstruction Algorithms. Principles of Computerized Tomographic Imaging, A. C. Kak and Malcolm Slaney, IEEE Press, 1988.
An Evaluation of Maximum Likelihood Reconstruction for SPECT. E. S. Chornoboy, C. J. Chen, M. I. Miller, T. R. Miller, and D. L. Snyder, IEEE Transactions on Medical Imaging, vol. 9, no. 1, March 1990.
Algebraic Reconstruction Techniques Can Be Made Computationally Efficient. Gabor T. Herman, Fellow, IEEE, and Lorraine B. Meyer, IEEE Transactions on Medical Imaging, vol. 12, no. 3, September 1993.

3 Algebraic Reconstruction Algorithms Motivation: transform-based techniques require projections uniformly distributed over 180/360 degrees, and a large number of projections. When this is not possible, we can instead assume that the cross section consists of an array of unknowns and use algebraic algorithms.

4 Image and Projection Representation One-index representation: $M$ rays, each of width $\tau$, intersect the grid of $N$ cells. $w_{ij}$ is the fractional area of the $j$-th cell intercepted by the $i$-th ray. A line integral will be called a ray-sum. The measured data are the ray-sums $p_i = \sum_{j=1}^{N} w_{ij} f_j$, $i = 1, \dots, M$. Direct matrix inversion is impossible since $N, M \sim 65{,}000$, $M < N$, and the data are noisy.
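To make the setup concrete, here is a minimal Python sketch of the discretized system $p = Wf$. The 0/1 intersection stencil is a toy stand-in for the true fractional areas $w_{ij}$, and all sizes are illustrative, not the ~65,000 of a real scan:

```python
import numpy as np

# Toy discretization: a 4x4 image flattened into N = 16 unknowns,
# probed by M rays. W[i, j] stands in for the fractional area of
# cell j covered by ray i; here it is a crude 0/1 stencil.
rng = np.random.default_rng(0)
N, M = 16, 12                                  # M < N: underdetermined, as in real CT
f_true = rng.random(N)                         # unknown cross-section values f_j
W = (rng.random((M, N)) < 0.3).astype(float)   # toy intersection pattern
p = W @ f_true                                 # ray-sums p_i = sum_j w_ij f_j
```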

5 Iterative Solutions The image $\vec{f}$ is a single point in an $N$-dimensional space, and each ray-sum defines a hyperplane. The intersection of all these hyperplanes is a single point. The computational procedure is (illustrated for $N = 2$): 1. Make an initial guess. 2. Project the initial guess onto the first hyperplane. 3. Project the result of (2) onto the second hyperplane. 4. Project the result back onto the first hyperplane, and so forth. (The slide shows the 2-D realization.)

6 Iterative Solutions, Notations For an $N$-dimensional space: initial guess $\vec{f}^{(0)}$. The projection of $\vec{f}^{(i-1)}$ onto the $i$-th hyperplane is $\vec{f}^{(i)} = \vec{f}^{(i-1)} - \frac{\vec{f}^{(i-1)} \cdot \vec{w}_i - p_i}{\vec{w}_i \cdot \vec{w}_i}\,\vec{w}_i$. The final result after $M$ projections constitutes one iteration. If there is a unique solution $\vec{f}$, then $\lim_{k \to \infty} \vec{f}^{(kM)} = \vec{f}$.
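A minimal sketch of this projection scheme (the Kaczmarz method) in Python; `kaczmarz` is an illustrative name, and the zero-norm guard is an added safety check for rays that miss every cell:

```python
import numpy as np

def kaczmarz(W, p, n_iter=10, f0=None):
    """One 'iteration' = one full sweep of M hyperplane projections."""
    M, N = W.shape
    f = np.zeros(N) if f0 is None else f0.astype(float)
    for _ in range(n_iter):
        for i in range(M):
            wi = W[i]
            norm = wi @ wi
            if norm == 0.0:                 # skip rays that miss every cell
                continue
            # orthogonal projection of f onto the hyperplane {x : wi . x = p[i]}
            f -= (wi @ f - p[i]) / norm * wi
    return f
```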

7 Iterative Solutions, Comments If $M < N$ there is no unique solution, and the final result depends on the initial guess. If the hyperplanes are orthogonal, convergence is fast (one iteration). If the angle between the hyperplanes is small, convergence is slow. If a side condition is known, it can be applied after each step of the iteration, such as non-negativity ($f_j \ge 0$) or a known upper bound on $f_j$. In the presence of noise there is no unique solution, and the algorithm's solution oscillates.
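A side condition of this kind reduces to a one-line clipping step after each projection; this `apply_constraints` helper is a hypothetical illustration, not part of the original algorithm statement:

```python
import numpy as np

def apply_constraints(f, lo=0.0, hi=None):
    """Example side condition: clip the estimate into a known range
    (e.g. non-negativity) after each projection step."""
    return np.clip(f, lo, hi)
```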

8 Iterative Solutions, Noise

9 ART: Algebraic Reconstruction Techniques Storing and retrieving all the weights $w_{ij}$ can pose a problem, so the correction is written cell-wise: $f_j^{(i)} = f_j^{(i-1)} + (p_i - q_i)/N_i$, where $q_i$ is the current ray-sum and $N_i$ the number of cells the $i$-th ray crosses. We assume $w_{ij} \in \{0, 1\}$, depending on whether the $i$-th ray intersects the $j$-th pixel. Another approach replaces the correction term $(p_i - q_i)/N_i$ with a ray-length-normalized version. To cancel salt-and-pepper noise, we update with a relaxation factor: $f_j^{(i)} = f_j^{(i-1)} + \alpha\,(p_i - q_i)/N_i$, where $\alpha$ decreases along the iterations.
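A sketch of this binary-weight ART variant with a decaying relaxation factor; `art`, the starting value `alpha=1.0`, and the `decay=0.8` schedule are illustrative choices, since the slide only says that $\alpha$ decreases along the iterations:

```python
import numpy as np

def art(W, p, n_iter=5, alpha=1.0, decay=0.8):
    """ART with 0/1 weights: spread each ray residual evenly over the
    N_i cells the ray crosses; alpha decays to damp salt-and-pepper noise."""
    M, N = W.shape
    f = np.zeros(N)
    for _ in range(n_iter):
        for i in range(M):
            hits = W[i] > 0
            Ni = hits.sum()
            if Ni == 0:
                continue
            q = f[hits].sum()                   # current ray-sum q_i
            f[hits] += alpha * (p[i] - q) / Ni  # correction to every hit cell
        alpha *= decay                          # damp later corrections
    return f
```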

10 SIRT: Simultaneous Iterative Reconstruction Techniques First we calculate all the corrections $\Delta f_j$ for every ray, then average them and update the image once per iteration. The result is better-looking images, at the expense of convergence rate.
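A sketch of the SIRT idea under the same toy setup: every ray's correction is accumulated first, and only the per-pixel average is applied at the end of the sweep. Averaging by the number of contributing rays is one reasonable reading of the slide, not the only one:

```python
import numpy as np

def sirt(W, p, n_iter=20):
    """SIRT sketch: collect every ray's correction to every pixel first,
    then apply the averaged correction once per full sweep."""
    M, N = W.shape
    f = np.zeros(N)
    for _ in range(n_iter):
        corr = np.zeros(N)
        count = np.zeros(N)
        for i in range(M):
            wi = W[i]
            norm = wi @ wi
            if norm == 0.0:
                continue
            corr += (p[i] - wi @ f) / norm * wi
            count += (wi > 0)
        f += corr / np.maximum(count, 1.0)   # average over contributing rays
    return f
```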

11 SART: Simultaneous Algebraic Reconstruction Techniques We use $N$ basis functions $b_j(x, y)$ to approximate the continuous image: $\hat{f}(x, y) = \sum_{j=1}^{N} f_j\, b_j(x, y)$.

12 SART (2): Simultaneous Algebraic Reconstruction Techniques If we choose the $b_j$ to be $N$ squares, we get back the pixel basis functions. Another choice is bilinear functions, which leads to a continuous form of $\hat{f}(x, y)$. The ray-sums are then evaluated over a set of equidistant points along each ray; the step size is a parameter that can be changed.

13 Reconstruction: SART We examine the following test image (shown in the original slide):

14 Reconstruction: SART (2) Using N = 16,384 (128x128) and I = 12,700 (underdetermined by 25%), we get the reconstruction shown (line plot at y = -0.605). Salt-and-pepper noise is dominant even for the big tumors.

15 Reconstruction: SART (3) Another solution is to compute, for each pixel, the average of the corrections from all the rays before updating (see the sketch below). This technique is fast-converging like ART, and noise-suppressing like SIRT.
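A sketch of this averaged-correction (SART-style) update; the `views` argument, which groups ray indices by projection angle, and the relaxation value `lam` are assumptions for illustration:

```python
import numpy as np

def sart(W, p, views, n_iter=2, lam=1.0):
    """SART sketch: within one view (a group of rays), average the
    ray-normalized residual backprojections before touching the image."""
    M, N = W.shape
    f = np.zeros(N)
    row_sum = W.sum(axis=1)                   # sum_k w_ik per ray
    for _ in range(n_iter):
        for view in views:                    # e.g. views = [range(0, 99), ...]
            num = np.zeros(N)
            den = np.zeros(N)
            for i in view:
                if row_sum[i] == 0.0:
                    continue
                num += W[i] * (p[i] - W[i] @ f) / row_sum[i]
                den += W[i]
            f += lam * num / np.maximum(den, 1e-12)
    return f
```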

16 Reconstruction: SART (4) Another solution: adding a longitudinal Hamming window, replacing the uniform correction coefficients along each ray with Hamming-weighted correction terms. 1 iteration SART+Hamming window:
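A sketch of the window itself; replacing the uniform $1/N_i$ coefficients along a ray with these normalized weights is one plausible reading of the slide's "weighted correction term":

```python
import numpy as np

def hamming_weights(n):
    """Hamming window over the n cells a ray crosses, so corrections near
    the ends of the ray are de-emphasized (normalized to sum to 1)."""
    k = np.arange(n)
    w = 0.54 - 0.46 * np.cos(2.0 * np.pi * k / max(n - 1, 1))
    return w / w.sum()
```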

17 Reconstruction: SART (5) 1 iteration sequential ART+Hamming window:

18 Reconstruction: SART (6) 2 iterations SART+Hamming window: 3 iterations SART+Hamming window:

19 Conclusions For the 1-iteration reconstruction: all the structures are fairly well distinguished, and the noise is practically gone. As the number of iterations increases, the salt-and-pepper noise increases as well. The use of a Hamming window and the averaging of corrections help the algorithms.

20 Maximum Likelihood Reconstruction An Evaluation of Maximum Likelihood Reconstruction for SPECT

21 Background SPECT (Single Photon Emission Computed Tomography): the body is injected with a radioactive tracer, which emits gamma radiation. A gamma camera measures these rays over multiple angles. Motivation: use photon emission statistics to obtain an iterative algorithm that maximizes the log-likelihood functional. Fundamental phenomena for SPECT imaging: 1. Random radioactive process ("low count" data). 2. Depth-dependent response function. 3. Anisotropic attenuation response.

22 Assumptions Mapping of the emission space into the measurement space. A discrete set of angular camera positions; each position consists of multiple measurements. (Figure: measurement geometry with coordinates $x_1$, $x_2$, detector $D$, and axis $u$.)

23 A statistical model, Basics Radioactive emission is modeled as a Poisson process with intensity $\lambda$. Ideal case: perfect line-integral collimation, zero scattering, zero attenuation; the measurement is then a Poisson process whose intensity is the line integral of $\lambda$. Many problems: the collected data is a poor estimate of this intensity; the inversion is not well defined; FBP (filtered backprojection) needs a smoothing function; not all three phenomena are included.

24 A statistical model for SPECT Translated random error: each emitted photon is relocated according to a transition probability density. The translated measurement process is a Poisson process whose intensity is the source intensity integrated against this density. Random deletion of the translated photons: the final measurement is a Poisson process whose mean intensity also incorporates the detector efficiency and the attenuation density.
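A toy forward simulation of this measurement model; the scalar efficiency `eta`, the per-ray attenuation factors, and all sizes are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 12, 16                                         # toy sizes
A = rng.random((M, N)) * (rng.random((M, N)) < 0.3)   # sparse transition model (toy)
lam_src = rng.random(N) * 100.0                       # source intensity per voxel
eta = 0.7                                             # detector efficiency (assumed scalar)
atten = np.exp(-rng.random(M))                        # per-ray attenuation factors (toy)
mean_counts = eta * atten * (A @ lam_src)             # Poisson mean per detector bin
y = rng.poisson(mean_counts)                          # final measured counts
```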

25 Application of the EM procedure Assuming that the transition density, detector efficiency, and attenuation are known quantities, an iterative algorithm for the intensity $\lambda$ maximizes the log-likelihood function. The estimate is defined by a multiplicative recursion (see the sketch after the next slide).

26 Application of the EM procedure (2) For reference: the recursion works with the error between the measurement and the projection of the $i$-th iteration estimate, taken as a ratio, and the correction term multiplies the current estimate by the backprojection of this ratio.
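A sketch of the standard ML-EM recursion for Poisson data, which has exactly this structure (project, take the ratio against the measurement, backproject, update multiplicatively). The matrix `A` stands in for the transition/attenuation/efficiency model, which the paper treats in more detail:

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    """Standard ML-EM recursion for Poisson counts y:
    lam <- lam / sens * A^T ( y / (A @ lam) ),
    where sens_j = sum_i a_ij is the per-voxel detection sensitivity."""
    M, N = A.shape
    lam = np.ones(N)                          # strictly positive starting point
    sens = np.maximum(A.sum(axis=0), 1e-12)
    for _ in range(n_iter):
        proj = A @ lam                        # projected estimate per detector bin
        ratio = np.divide(y, proj, out=np.zeros_like(proj), where=proj > 0)
        lam *= (A.T @ ratio) / sens           # multiplicative correction
    return lam
```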

27 Experimental Results: Comparison Criteria Let $\lambda$ be the source intensity and $g$ a two-dimensional circular Gaussian smoothing kernel. $\rho$ is the correlation factor between the reconstruction and the smoothed source, and the SNR is derived from it. The image resolution is the FWHM of the kernel that maximizes the correlation.
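A sketch of the resolution criterion: sweep candidate FWHMs, smooth the source, and keep the best-correlating one. The FWHM-to-sigma conversion is the standard Gaussian identity; the paper's exact SNR formula is not reproduced here:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def resolution_by_correlation(source, recon, fwhms):
    """Return the FWHM whose Gaussian-smoothed source best correlates with
    the reconstruction, together with that maximal correlation."""
    best_rho, best_fwhm = -1.0, None
    for fwhm in fwhms:
        sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # FWHM -> sigma
        smoothed = gaussian_filter(source, sigma)
        rho = np.corrcoef(smoothed.ravel(), recon.ravel())[0, 1]
        if rho > best_rho:
            best_rho, best_fwhm = rho, fwhm
    return best_fwhm, best_rho
```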

28 1st Image Comparison: Bar Phantom This is low-count ("noisy") data, in the absence of photon attenuation. FBP: ~0.5M measurements, 90 viewing angles, Hanning window (K=1). ML: ~0.5M measurements, 90 viewing angles, 500 iterations.

29 2nd Image Comparison: Circular Cold Spot Phantom A uniform/nonuniform attenuation medium: a 6.35 cm diameter rod is placed in a 21.6 cm diameter cylinder; 3M measurements of 1.1 cm.

30 3rd Image Comparison: Chest Phantom Source imaged over 360 degrees (2-degree steps), 0.4M counts. The attenuation map contains 3 different values: air, soft tissue, bone.

31 Final Conclusions ML reconstructions exhibit improved SNR (compared to a standard FBP), improved resolution, better image quantifiability, and better ability to define object boundaries. The results hold under low-count data. ROI estimation is better than with a higher data count or more iterations. Future research: the PSRF and the attenuation function are assumed to be known; in real life they must be measured, which introduces further errors that should be characterized.

32 Comparison: ART vs. ML-EM Algebraic Reconstruction Techniques Can Be Made Computationally Efficient

33 Comparison Layout Improving ART efficiency: 1. Building the mapping. 2. The iterative step. 3. Choosing the relaxation parameter. ML-EM: an iterative algorithm, best suited to PET (PET: Positron Emission Tomography). Building a testing set and a training set (for the ART efficiency improvement). Comparing results obtained using ART to those obtained using ML-EM. Conclusions.

34 PET: Positron Emission Tomography

35 Improving ART efficiency: Introduction Two choices are left open in ART: 1. The order in which the collected data is accessed during the reconstruction procedure. 2. The choice of the relaxation parameter. The image is approximated by a finite set of basis functions; $r_{ij}$ denotes the value of the $i$-th measurement of the $j$-th basis function. The initial guess is the average activity.

36 ART: (1) The Mapping A permutation of the measurements. The mapping calculation (skipping the mathematics): 1. Divide P (the number of rays/views) into its prime factors. 2. Assign each index a vector (skipping the details of how the vector is built). 3. Calculate the permuted index. Result: this ensures that when the sequence is divided into sets of size L, the values within each subgroup are as far apart as possible, so the subgroups are highly independent. A sketch follows below.
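As a hedged stand-in for the omitted construction, a mixed-radix digit reversal over the prime factors of P spreads consecutive indices far apart, which is the flavor of ordering described; `access_order` and `prime_factors` are illustrative names, and the actual scheme in the paper differs in detail (for a prime P this sketch degenerates to the identity):

```python
def prime_factors(n):
    """Prime factorization of n, smallest factor first."""
    fs, d = [], 2
    while d * d <= n:
        while n % d == 0:
            fs.append(d)
            n //= d
        d += 1
    if n > 1:
        fs.append(n)
    return fs

def access_order(P):
    """Mixed-radix digit reversal over P's prime factors (a generalized
    bit reversal): consecutive outputs land far apart in [0, P)."""
    radices = prime_factors(P)
    order = []
    for k in range(P):
        digits, kk = [], k
        for r in radices:                 # least-significant digit first
            digits.append(kk % r)
            kk //= r
        idx = digits[0] if digits else 0
        for d, r in zip(digits[1:], radices[1:]):
            idx = idx * r + d             # reassemble with significance reversed
        order.append(idx)
    return order

# e.g. access_order(8) == [0, 4, 2, 6, 1, 5, 3, 7] (classic bit reversal)
```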

37 ART: (2) Iterative Step Assuming we have the access ordering, the iteration step is the relaxed projection onto the hyperplane of the next measurement: $x^{(k+1)} = x^{(k)} + \lambda_k \frac{p_i - \vec{r}_i \cdot x^{(k)}}{\vec{r}_i \cdot \vec{r}_i}\,\vec{r}_i$, where $\lambda_k$ is the relaxation parameter.

38 ART: (3) Relaxation Parameter The algorithm suggested for choosing $\lambda$: 1. Calculate the FOM for three values, and decide on a direction (up/down). 2. Build an ascending/descending series, and find a point where the FOM direction reverses. 3. Continue until the change is 5% or less. 4. Find $\lambda_k$ for all k. A sketch of this search follows below.
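A rough sketch of this search; the callable `fom` (lower is better, e.g. MSE over the training phantoms), the starting value, and the multiplicative step are all assumptions:

```python
def tune_relaxation(fom, lam0=0.05, factor=2.0, tol=0.05):
    """Probe the FOM on both sides of lam0, walk in the improving
    direction, and stop once the FOM reverses or the relative
    improvement drops below tol (the slide's 5% criterion)."""
    best = fom(lam0)
    step = factor if fom(lam0 * factor) < best else 1.0 / factor
    lam = lam0
    for _ in range(64):                   # safety cap on probes
        cand = lam * step
        val = fom(cand)
        if val >= best:                   # FOM direction reversed
            return lam
        if (best - val) / max(best, 1e-12) < tol:
            return cand                   # improvement below 5%: stop
        lam, best = cand, val
    return lam
```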

39 EM Iteration A very basic EM iterative algorithm, which is superior to other variants, is used. ART and EM have the same computational cost per iterative step. The FOM used is the MSE (FOM: figure of merit).

40 Comparing: ART vs. ML-EM We assign 300 views, with 99 rays each. Shown in the slide: phantom originals (3 copies), ART after 1, 2, 3 iterations, and EM after 15, 30, 45 iterations.

41 Comparing: ART vs. ML-EM (2) Divide 26 images simulating brain slices into a training set of 6 images and a testing set of 20 images. 5 phantoms are generated for each image (totals of 30 and 100). The above is repeated 10 times (choosing 6 out of 26). Comparing the relaxation parameters: the iteration number is much more significant than the random selection of the training set.

42 Comparing: ART vs. ML-EM (3)

43 Comparing: ART vs. ML-EM (4) Results after 1 (2) iterations of ART are better than those after 45 (60) iterations of EM. After 6 (90) iterations the results are equal and stable; ART (EM) doesn't improve beyond 6 (90) iterations. PET measurements have inaccuracies beyond the Poisson random variable model; otherwise, EM would be better than ART.

44 Comparing: ART vs. ML-EM (5) EM can be "accelerated" by a factor of 3-4 at most (to achieve a specific likelihood, not a specific MSE; targeting the MSE would reduce the speedup). Matching a relaxation parameter and a training set to optimize EM would speed it up by a factor of 3 at most. ART deals with the data items one by one, while EM deals with all of them simultaneously. EM can also deal with a block of items at a time, but this changes the concept of EM. The results cannot be extended to other FOMs (figures of merit).

45 Comparing ART to Other Versions of ART Ensuring non-negativity does not improve the results, due to the nature of the phantom. A variant of ART guaranteed to converge to a regularized least-squares solution is not superior. Using a whole block of data at a time in ART (instead of one item at a time) is not superior. Conclusion: choosing the data access order and the relaxation parameter greatly improves the performance of ART (compared to a basic version), and influences the results of any comparison to other methods.

46 The End

