1 Where does volume and point data come from?
Marc Levoy, Computer Science Department, Stanford University
34:15 total + 30% ≈ 45 minutes

2 Three theses
(with homage to Fred Brooks: “Interactive Graphics Can Double America’s Supercomputers: Theses for Debate Offered by Fred Brooks”, I3D ’86, Chapel Hill)
Thesis #1: Many sciences lack good visualization tools.
Corollary: These are a good source for volume and point data.
Thesis #2: Computer scientists need to learn these sciences.
Corollary: Learning the science may lead to new visualizations.
Thesis #3: We also need to learn their data capture technologies.
Corollary: Visualizing the data capture process helps debug it.

3 Success story #1: volume rendering of medical data
Karl-Heinz Hoehne: the Visible Man dataset appeared in Scientific American, 2001; careful segmentation, effective juxtaposition of different segmentations and cut planes; the shadow line appears faked, probably by a Sci Am artist
Resolution Sciences (now Microscience Group): a mouse tail, also sliced and photographed like the Visible Man; effective control over surface appearance, multiple lights, shadows; not interactive, but could be (on today’s GPUs)

4 Success story #1: volume rendering of medical data
Arie Kaufman et al.: taken from research to practice; obviously worked with practitioners; took 12 years

5 Success story #2: point rendering of dense polygon meshes
Levoy and Whitted (1985); Szymon Rusinkiewicz’s QSplat (2000)
if you’re interested in my analysis of why the 1985 paper was a failure, read my chapter in Markus Gross’s upcoming book, Point-Based Computer Graphics
QSplat enabled us to interactively manipulate an unstructured mesh containing half a million polygons, something that hadn’t been done up to that point

6 Failure: volume rendering in the biological sciences
a leading software package: after 15 years of research in volume rendering, a single threshold is still the most common control; limited control over the opacity transfer function; no control over surface appearance or lighting; no quantitative 3D probes; priced like medical imaging packages, which makes it too expensive for researchers in the basic sciences
Photoshop (from a discussion at the Advanced Quantitative Light Microscopy (AQLM) course): converting 16-bit to 8-bit dithers the low-order bit; PhotoMerge (image mosaicing, e.g. for combining images from motorized X-Y stages, programmable array microscopes, etc.) performs poorly – almost any other package performs better (PanaVue, Microsoft’s Digital Image Studio, Matt Brown’s Autostitch), but there aren’t good replacements tailored for the biological sciences; no support for image stacks, volumes, or n-D images

7 What’s going on in the basic sciences?
new instruments → scientific discoveries; the most important new instrument in the last 50 years: the digital computer
computers + digital sensors = computational imaging
Def: imaging methods in which computation is inherent in image formation. – B.K. Horn
the revolution in medical imaging (CT, MR, PET, etc.) is now happening all across the basic sciences
it’s a great source for volume and point data, and more importantly, they need our help – the lack of good visualization tools is holding them back; all the arguments made for ViSC (Visualization in Scientific Computing) in the 1980s now apply to computational imaging

8 Examples of computational imaging in the sciences
medical imaging: rebinning (inspiration for light field rendering), transmission tomography, reflection tomography (for ultrasound)
geophysics: borehole tomography, seismic reflection surveying
applied physics: diffuse optical tomography, diffraction tomography, scattering and inverse scattering (in this talk)

9 Examples of computational imaging in the sciences, continued
biology: confocal microscopy (applicable at macro scale too), deconvolution microscopy
astronomy: coded-aperture imaging, interferometric imaging
airborne sensing: multi-perspective panoramas, synthetic aperture radar

10 Examples of computational imaging in the sciences, continued
optics: holography, wavefront coding

11 Computational imaging technologies used in neuroscience
Magnetic Resonance Imaging (MRI)
Positron Emission Tomography (PET)
Magnetoencephalography (MEG)
Electroencephalography (EEG)
Intrinsic Optical Signal (IOS)
In Vivo Two-Photon (IVTP) Microscopy
Microendoscopy
Luminescence Tomography
New Neuroanatomical Methods (3DEM, 3DLM)

12 The Fourier projection-slice theorem (a.k.a. the central section theorem) [Bracewell 1956]
P(t) is the integral of g(x,y) in the direction θ
G(u,v) is the 2D Fourier transform of g(x,y)
G(ω) is a 1D slice of this transform, taken at angle θ
therefore F^-1{ G(ω) } = P(t) !
the projections can be integrals of attenuation (e.g. of X-rays or visible light), of emission (e.g. fluorescence), etc.
(figure from Kak: P(t), G(ω))
Refs: Bracewell, R.N., Strip Integration in Radio Astronomy, Australian Journal of Physics, Vol. 9, No. 2, 1956. Kak, A., Slaney, M., Principles of Computerized Tomographic Imaging, IEEE Press, 1988.
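A quick numerical check of the theorem, using numpy’s FFT and scipy’s image rotation. This is a sketch: the object, angle, and grid size are arbitrary choices, and agreement is limited by interpolation error.

```python
import numpy as np
from scipy.ndimage import rotate

# Synthetic semi-transparent object g(x, y): an anisotropic Gaussian
# blob centered at the origin, so its spectrum is smooth and the
# interpolation error below stays small.
n = 128
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]
g = np.exp(-(x**2 / 0.08 + y**2 / 0.02))

theta = 30.0  # projection direction, in degrees

# P(t): rotate the image by theta, then integrate along the y axis.
P = rotate(g, theta, reshape=False, order=1).sum(axis=0)

# Left side of the theorem: the 1D transform of the projection.
P_hat = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(P)))

# Right side: the central slice (row v = 0) of the rotated 2D
# transform G(u, v).  Rotating g rotates G by the same angle, so the
# same rotate-then-extract works in frequency space.
G = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(g)))
G_rot = rotate(G.real, theta, reshape=False, order=1) \
      + 1j * rotate(G.imag, theta, reshape=False, order=1)
slice_hat = G_rot[n // 2]

# The two spectra agree up to interpolation error.
err = np.abs(P_hat - slice_hat).max() / np.abs(P_hat).max()
print(f"relative error: {err:.3g}")
```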

13 Reconstruction of g(x,y) from its projections
P(t) P(t, s) G(ω) (from Kak) add slices G(ω) into u,v at all angles  and inverse transform to yield g(x,y), or add 2D backprojections P(t, s) into x,y at all angles  Time =

14 The need for filtering before (or after) backprojection
a plain sum of slices would create a 1/ω hot spot at the origin (here ω is distance from the origin of frequency space)
correct by multiplying each slice by |ω|, or equivalently by convolving P(t) with F^-1{ |ω| } before backprojecting; this is called filtered backprojection
convolving with F^-1{ |ω| } is similar to taking a 2nd derivative
(figure: the 1/ω hot spot in the (u,v) plane and the |ω| correction)
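A minimal filtered-backprojection sketch in the same vein, assuming parallel-beam projections generated with the rotate-and-sum convention of the previous snippet; the function name and normalization are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate

def filtered_backprojection(sinogram, thetas):
    """Reconstruct g(x, y) from parallel-beam projections.

    sinogram: (n_angles, n) array, one projection P(t) per row.
    thetas:   the projection angles, in degrees.
    Assumes projections were taken by rotating by +theta and summing
    rows; only the sign of the rotation changes if yours differ.
    """
    n_angles, n = sinogram.shape
    ramp = np.abs(np.fft.fftfreq(n))   # the |omega| filter

    recon = np.zeros((n, n))
    for P, theta in zip(sinogram, thetas):
        # Filter the projection: multiply its spectrum by |omega|.
        P_filt = np.real(np.fft.ifft(np.fft.fft(P) * ramp))
        # Backproject: smear the filtered projection across the image
        # (constant along the integration direction) ...
        smear = np.tile(P_filt, (n, 1))
        # ... and rotate the smear back to this projection's frame.
        recon += rotate(smear, -theta, reshape=False, order=1)
    # Absolute scale depends on sampling conventions; this is the
    # usual pi / (2 * n_angles) normalization.
    return recon * np.pi / (2 * n_angles)
```

Called with, e.g., thetas = np.linspace(0, 180, 180, endpoint=False) and a sinogram built by rotating and summing, this recovers the object up to interpolation error.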

15 Summing filtered backprojections
(from Kak)

16 Example of reconstruction by filtered backprojection
X-ray sinogram → filtered sinogram → reconstruction (from Herman)
Ref: Herman, G.T., Image Reconstruction from Projections, Academic Press, 1980.

17 More examples
CT scan of a head: volume renderings, and the effect of occlusions
occlusions violate the assumption that pixels are line integrals, so reconstruction fails; thus, you cannot reconstruct volumetric models of opaque scenes using digital photography followed by tomography (although many have tried)

18 Limited-angle projections
here’s another violation of the assumptions of tomography, but one that is more easily addressed
artifacts: elongation in the direction of the available projections; object-dependent sinusoidal variations [Olson 1990]
Ref: Olson, T., Jaffe, J.S., An explanation of the effects of squashing in limited angle tomography, IEEE Trans. Medical Imaging, Vol. 9, No. 3, September 1990.

19 Reconstruction using the Algebraic Reconstruction Technique (ART)
M projection rays crossing N image cells
p_i = projection along ray i
f_j = value of image cell j (n² cells)
w_ij = contribution by cell j to ray i (a.k.a. the resampling filter)
applicable when projection angles are limited or non-uniformly distributed around the object
can be under- or over-constrained, depending on N and M
(figure from Kak)

20 Procedure
make an initial guess, e.g. assign zeros to all cells
project onto p_1 by increasing the cells along ray 1 until Σ = p_1
project onto p_2 by modifying the cells along ray 2 until Σ = p_2, etc.
to reduce noise, damp each correction by a relaxation factor α, for α < 1
(the update formula is derived in Kak, chapter 7, p. 278; a sketch in code follows below)
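This procedure is Kaczmarz’s method: cycle through the rays, projecting the current estimate onto each ray’s hyperplane. A dense-matrix sketch (real systems store W sparsely; names and defaults are illustrative):

```python
import numpy as np

def art(W, p, n_iter=50, relax=0.5):
    """Kaczmarz-style ART.

    W: (M, N) system matrix, W[i, j] = w_ij, the contribution of image
       cell j to ray i.  p: (M,) measured projections.  relax is the
       relaxation factor alpha < 1 that damps each correction to
       reduce noise.
    """
    M, N = W.shape
    f = np.zeros(N)                     # initial guess: all zeros
    row_norms = (W * W).sum(axis=1)
    for _ in range(n_iter):
        for i in range(M):
            if row_norms[i] == 0:
                continue
            # Project the estimate onto the hyperplane w_i . f = p_i.
            f += relax * (p[i] - W[i] @ f) / row_norms[i] * W[i]
        f = np.maximum(f, 0)            # optional positivity constraint
    return f
```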

21 Linear system, but big, sparse, and noisy
ART is solution by the method of projections [Kaczmarz 1937]
to increase the angle between successive hyperplanes, jump by 90°
SART (Simultaneous ART; cf. SIRT, the Simultaneous Iterative Reconstruction Technique) modifies all cells using f^(k-1), then increments k
overdetermined if M > N; underdetermined if rays are missing
optional additional constraints: f > 0 everywhere (positivity); f = 0 outside a certain area

22 Linear system, but big, sparse, and noisy (continued)
example reconstruction with 35 degrees of projections missing [Olson]
Ref: Olson, T., A stabilized inversion for limited angle tomography. Manuscript.

23 Nonlinear constraints
f = 0 outside of the circle (oval?) [Olson]

24 Borehole tomography
receivers measure end-to-end travel time
reconstruct to find the velocities in the intervening cells
must use limited-angle reconstruction methods (like ART)
(figure from Reynolds)
Ref: Reynolds, J.M., An Introduction to Applied and Environmental Geophysics, Wiley, 1997.

25 Applications
obvious applications: looking for oil
mapping ancient Rome using explosions in the subways and microphones along the streets?
mapping a seismosaurus in sandstone using microphones in 4 boreholes and explosions along radial lines
(left picture from Reynolds; right picture from Stanford’s Forma Urbis Romae project)

26 Optical diffraction tomography (ODT)
for weakly refractive media and coherent plane illumination; like any tomographic technique, it is applicable only to semi-transparent objects
useful when the object refracts or diffracts, which makes the Fourier projection-slice theorem (and backprojection) invalid
if you record the amplitude and phase of the forward scattered field (the shape taken by the initially planar waves after refraction or diffraction), then the Fourier Diffraction Theorem says F{scattered field} = an arc in F{object}, where the radius of the arc depends on the wavelength λ
repeat for multiple wavelengths, then take F^-1 to create a volume dataset (index of refraction as a function of x and y); this is equivalent to saying that a broadband hologram of a semi-transparent object records its 3D structure
alternatively, you could hold the wavelength constant and vary the incident direction, thus filling frequency space with arcs of the same radius
the limit as λ → 0 (relative to object size) is the Fourier projection-slice theorem
(figure from Kak)
Ref: Wolf, E., Three-dimensional structure determination of semi-transparent objects from holographic data, Opt. Commun., 1969.

27 Optical diffraction tomography, continued
example: the PSF of a single point scatterer (real part, as a function of x and y) (from Devaney)
measuring phase typically requires a reference beam and interference between it and the main beam, i.e. a holographic procedure
the arc is sometimes called the Ewald circle
[Devaney 2005]
Ref: Devaney, A., Inverse scattering and optical diffraction tomography, PowerPoint presentation.

28 Optical diffraction tomography, continued
to recap: record the amplitude and phase of the forward scattered field; the Fourier Diffraction Theorem maps F{scattered field} onto an arc in F{object} whose radius depends on λ; repeat for multiple wavelengths and take F^-1 to obtain the volume; the limit as λ → 0 is the Fourier projection-slice theorem [Devaney 2005]
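A small sketch of where these measurements land in the object’s frequency space, under the stated assumptions (illumination traveling along +y; the wavelength values are arbitrary; requires matplotlib):

```python
import numpy as np
import matplotlib.pyplot as plt

# kx is the transverse frequency along the detector line.  Each
# wavelength contributes the arc through the origin of radius
# k0 = 2*pi/lambda, centered at (0, -k0).
for lam in (1.0, 0.5, 0.25):
    k0 = 2 * np.pi / lam
    kx = np.linspace(-k0, k0, 400)
    ky = np.sqrt(k0**2 - kx**2) - k0
    plt.plot(kx, ky, label=f"lambda = {lam}")
plt.gca().set_aspect("equal")
plt.xlabel("u"); plt.ylabel("v")
plt.legend(); plt.title("arcs filled by a broadband hologram")
plt.show()
# As lambda -> 0, k0 -> infinity and the arc flattens into a straight
# line through the origin: the Fourier projection-slice theorem.
```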

29 Inversion by filtered backpropagation
backprojection vs. backpropagation [Jebali 2002]
backpropagation uses a depth-variant filter, so it is more expensive than tomographic backprojection, and also more expensive than the Fourier method
applications in medical imaging, geophysics, optics
Ref: Jebali, A., Numerical Reconstruction of Semi-transparent Objects in Optical Diffraction Tomography, Diploma Project, Ecole Polytechnique, Lausanne, 2002.

30 Diffuse optical tomography (DOT)
so what happens if the object is strongly scattering? then it becomes a problem in inverse scattering
assume light propagation by multiple scattering and model it as a diffusion process [Arridge 2003]
Refs: Arridge, S.R., Methods for the Inverse Problem in Optical Tomography, Proc. Waves and Imaging Through Complex Media, Kluwer, 2001. Schweiger, M., Gibson, A., Arridge, S.R., Computational Aspects of Diffuse Optical Tomography, IEEE Computing in Science & Engineering, Vol. 5, No. 6, Nov./Dec. 2003. (image) Jensen, H.W., Marschner, S., Levoy, M., Hanrahan, P., A Practical Model for Subsurface Light Transport, Proc. SIGGRAPH 2001.

31 Diffuse optical tomography
acquisition: 81 source positions, 81 detector positions; for each source position, measure light at all detector positions
use a time-of-flight measurement to estimate an initial guess for absorption, to reduce cross-talk between absorption and scattering
(figures: female breast with sources (red) and detectors (blue); absorption (yellow is high); scattering (yellow is high)) [Arridge 2003]
assumes light propagation by multiple scattering, modeled as a diffusion process
inversion is non-linear and ill-posed; solve using optimization with regularization (smoothing), as sketched below
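A sketch of the regularized core of such an inversion: since the problem is non-linear, solvers typically linearize around the current estimate and take damped, regularized steps. The function and parameter names here are illustrative, not from Arridge’s solver.

```python
import numpy as np

def regularized_step(J, residual, alpha=1e-2):
    """One linearized update for an ill-posed inverse problem.

    J:        Jacobian of the nonlinear forward (diffusion) model
              around the current estimate of absorption/scattering.
    residual: measured minus predicted detector readings.
    Solves  min ||J dx - residual||^2 + alpha ||dx||^2 ; the alpha
    term is the regularization (smoothing) that tames the
    ill-posedness.  A full DOT solver iterates this, re-running the
    forward model each time.
    """
    N = J.shape[1]
    return np.linalg.solve(J.T @ J + alpha * np.eye(N), J.T @ residual)
```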

32 Computing vector light fields
we computer graphics researchers know a lot about scattering simulations that produce images; here’s a variant you might not be familiar with
this isn’t computational imaging in the sense I’ve defined it, i.e. there is no digital sensor, but it is a computational method for producing volume data
light as a vector field: proposed by Michael Faraday, formalized by James Clerk Maxwell (field theory, 1873), and studied in the context of illumination engineering by Arun Gershun
(figures: adding two light vectors (Gershun 1936); the vector light field produced by a luminous strip)

33 Computing vector light fields
flatland scene: lines and arcs of varying opacity, with illumination (yellow haze) impinging uniformly from all directions (red circle)
the magnitude takes (partial) occlusion into account; the direction is visualized using Brian Cabral’s line integral convolution (LIC) (Proc. SIGGRAPH 1993)
the direction gives the orientation of a flat surface for maximum brightness; in the saddle between the axis-aligned squares, the surface could be oriented either way
under the particular illumination condition defined here, this is equivalent to the ambient occlusion map: the magnitude is the fraction of the circle seen from each point, and the direction is the average direction to these unoccluded points (a sketch of this construction follows below)
a robot placed in the scene should follow the field lines to escape with the least chance of collision
(figures: flatland scene with partially opaque blockers under uniform illumination; light field magnitude (a.k.a. irradiance); light field vector direction)
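A minimal sketch of this flatland construction, assuming circular blockers (the scene description and API are illustrative, not from the original implementation):

```python
import numpy as np

def vector_light_field(p, blockers, n_dirs=360):
    """Scalar and vector light field at point p in a flatland scene
    under uniform illumination from all directions.

    blockers: list of (center, radius, opacity) discs standing in for
    the partially opaque blockers on this slide.
    Returns (irradiance, light_vector): irradiance is the
    transmittance-weighted fraction of the circle visible from p;
    the vector's direction is the average direction to unoccluded sky.
    """
    p = np.asarray(p, dtype=float)
    irr, vec = 0.0, np.zeros(2)
    for a in np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False):
        d = np.array([np.cos(a), np.sin(a)])
        trans = 1.0                    # transmittance along this ray
        for c, r, opacity in blockers:
            to_c = np.asarray(c) - p
            t = to_c @ d               # distance along ray to closest approach
            if t > 0 and np.hypot(*(to_c - t * d)) < r:
                trans *= 1.0 - opacity # ray passes through this disc
        irr += trans
        vec += trans * d
    return irr / n_dirs, vec / n_dirs
```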

34 From microscope light fields to volumes
4D light field → digital refocusing → 3D focal stack → deconvolution microscopy → 3D volume data (DeltaVision) Time =

35 3D deconvolution
assumes the object is semi-transparent, hence scattering is minimal; each pixel is then a line integral of attenuation (if backlit) or of emission (if frontlit and the object is fluorescent)
the focus stack of a point in 3-space is the 3D PSF of that imaging system, so: object * PSF → focus stack
F{object} × F{PSF} → F{focus stack}
F{focus stack} ÷ F{PSF} → F{object}
the spectrum F{PSF} contains zeros, due to missing rays, and imaging noise is amplified by division by these near-zeros; reduce this by regularization (smoothing) or completion of the spectrum, and improve convergence using constraints, e.g. object > 0
the missing rays are not as bad as for a photographic lens, because microscope lenses have a very wide aperture: 0.95 NA is about 70 degrees to each side of the optical axis, as shown in the F{PSF} above
[McNally 1999]
Ref: McNally, J.G., Karpova, T., Cooper, J., Conchello, J.A., Three-Dimensional Imaging by Deconvolution Microscopy, Methods, Vol. 19, 1999.
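A one-step spectral version of this division, with Wiener-style regularization standing in for the smoothing mentioned above (a sketch; real deconvolution packages iterate, and the function name and eps default are illustrative):

```python
import numpy as np

def deconvolve(stack, psf, eps=1e-3):
    """One-step regularized 3D deconvolution.

    stack: the measured focus stack; psf: the 3D PSF, same shape as
    the stack with its center at the array center.  Plain spectral
    division blows up where F{PSF} ~ 0 (the missing rays), so a small
    eps regularizes it, Wiener-style; iterative methods with an
    object > 0 constraint do better in practice.
    """
    S = np.fft.fftn(stack)
    H = np.fft.fftn(np.fft.ifftshift(psf))   # move PSF center to origin
    est = np.fft.ifftn(S * np.conj(H) / (np.abs(H)**2 + eps)).real
    return np.maximum(est, 0)                # positivity constraint
```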

36 Silkworm mouth (40x / 1.3NA oil immersion)
slice of focal stack; slice of volume; volume rendering

37 From microscope light fields to volumes
4D light field → digital refocusing → 3D focal stack → deconvolution microscopy → 3D volume data (DeltaVision)
4D light field → tomographic reconstruction → 3D volume data (from Kak)
backprojection doesn’t work well, because we don’t have all the rays, but we could use ART
it turns out that tomographic reconstruction using SART with a positivity constraint is the same as digital refocusing followed by deconvolution microscopy with the same positivity constraint; we prove this in our SIGGRAPH 2006 paper, Light Field Microscopy

38 Optical Projection Tomography (OPT)
reconstruction of semi-transparent objects from digital photographs; the object must be immersed in a refractive index-matched fluid
[Sharpe 2002] (James Sharpe, Science, 2002): the object was fluorescent, so each pixel is a line integral of emission; microscopes are orthographic, hence tomography applies
[Trifonov 2006] (EGSR 2006): the object was fully transparent, immersed in a colored liquid and backlit, so each pixel is a line integral of attenuation; digital cameras are perspective, hence he had to use ART
if this works, then what good is the extra dimension in the light field? see my SIGGRAPH 2006 paper on light field microscopy
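For the backlit case, the per-pixel line integrals come from the Beer-Lambert law. A small sketch; the flood-field normalization and the names are my assumptions, not from the paper:

```python
import numpy as np

def attenuation_integrals(backlit, flood):
    """Convert a backlit photograph into per-pixel line integrals of
    attenuation via Beer-Lambert: I = I0 * exp(-integral of mu), so
    -ln(I / I0) is the integral fed to the reconstruction (e.g. ART).

    backlit: image of the immersed object against the light source.
    flood:   image of the light source alone (I0).
    """
    ratio = np.clip(backlit / np.maximum(flood, 1e-9), 1e-9, 1.0)
    return -np.log(ratio)
```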

39 Confocal scanning microscopy
an alternative to 3D deconvolution for producing cross-sectional images, hence volume data
shown here for microscopy, but it can also be applied at the large scale, as I showed in my SIGGRAPH 2004 paper on synthetic aperture confocal imaging
if you introduce a pinhole, only one point on the focal plane will be illuminated (diagram: light source, pinhole)
Ref: Corle, T.R., Kino, G.S., Confocal Scanning Optical Microscopy and Related Imaging Systems, Academic Press, 1996.

40 Confocal scanning microscopy
...and a matching optical system on the detection side, hence the word confocal
the green dot will be both strongly illuminated and sharply imaged
this red dot, at distance r from the focal plane, will have less light falling on it, by the square of r, because the illumination is spread over a disk; and it will also be more weakly imaged, by the square of r, because its image is blurred out over a disk on the pinhole mask and only a little of it is permitted through
so the extent to which the red dot contributes to the final image falls off as the fourth power of r, its distance from the focal plane
(diagram: light source, pinhole, r, photocell, pinhole)
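The two quadratic falloffs simply multiply; a toy numeric check of the 1/r⁴ claim (the defocus distances are arbitrary):

```python
import numpy as np

# Illumination through the source pinhole spreads over a disc
# (~1/r^2), and the point's blurred image mostly misses the detector
# pinhole (another ~1/r^2).
r = np.array([1.0, 2.0, 4.0, 8.0])
contribution = (1.0 / r**2) * (1.0 / r**2)
print(contribution)   # falls off as 1/r^4: [1, 1/16, 1/256, 1/4096]
```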

41 Confocal scanning microscopy
of course, you’ve only imaged one point, so you need to move the pinholes and scan across the focal plane
(diagram: light source, pinhole, pinhole, photocell)

42 Confocal scanning microscopy
(diagram: light source, pinhole, pinhole, photocell)

43 [UMIC SUNY/Stonybrook]
the object in the lower-right image is actually spherical, but portions of it that are off the focal plane are both blurry and dark, effectively disappearing

44 Synthetic aperture confocal imaging [Levoy et al., SIGGRAPH 2004]
light source → 5 beams → 0 or 1 beams

45 Seeing through turbid water

46 Seeing through turbid water
if there were objects at multiple depths, I could repeat this procedure for each depth, producing a volume dataset
(figures: floodlit vs. scanned tile)

47 Coded aperture imaging
optics cannot bend X-rays, so they cannot be focused
pinhole imaging needs no optics, but collects too little light
so use multiple pinholes and a single sensor; this produces superimposed shifted copies of the source
(figure from Zand)
Ref: Zand, J., Coded aperture imaging in high energy astronomy.

48 Reconstruction by backprojection
backproject each detected pixel through each hole in the mask; the superimposition of these projections reconstructs the source, plus a bias
this is essentially a cross-correlation of the detected image with the mask
it also works for non-infinite sources; use a voxel grid
assumes a non-occluding source; otherwise it’s the voxel coloring problem
(figure from Zand)

49 Example using 2D images (Paul Carlisle)
cross-correlation is just convolution (of the detected image by the mask) without first reversing the detected image in x and y
converting the blacks to -1’s in the “decoding matrix” just serves to avoid having to normalize the resulting reconstruction (to remove the bias)
performing this on an image of gumballs, rather than on a 3D gumball scene, is equivalent to assuming the gumballs cover the sky at infinity, i.e. they are an angular function
can this be reproduced in the lab? using a mask with holes, a screen behind it, and a camera to photograph the image falling on the screen
(figure: detected image * decoding matrix = reconstruction)
Ref: Paul Carlisle, Coded Aperture Imaging.
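A sketch of this decoding with scipy; the function name is illustrative, and the mode/boundary choices depend on the experimental geometry:

```python
import numpy as np
from scipy.signal import correlate2d

def decode(detected, mask):
    """Reconstruct a source at infinity from a coded-aperture image.

    detected: the detector image (superimposed shifted copies of the
              source); mask: binary pinhole pattern (1 = hole).
    correlate2d performs the x/y reversal that distinguishes
    cross-correlation from convolution; mapping closed mask cells to
    -1 cancels the flat bias instead of normalizing it away later.
    """
    decoding = np.where(mask > 0, 1.0, -1.0)
    return correlate2d(detected, decoding, mode="same", boundary="wrap")
```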

50 New sources for point data
50-nm fluorescent microspheres (Molecular Probes): smaller than the wavelength of light; in fact their size controls their color
too small to image? no... Gustafsson uses structured illumination and non-linear (saturation) imaging to beat the diffraction limit by 10× (in X and Y)! [Gustafsson 2005]
Ref: Gustafsson, M.G.L., Nonlinear structured-illumination microscopy: Wide-field fluorescence imaging with theoretically unlimited resolution, Proc. National Academy of Sciences (PNAS), Vol. 102, No. 37, Sept. 13, 2005.

51 Three theses
books I was required to read for my PhD: Gabor Herman, Image Reconstruction from Projections; Glusker and Trueblood, Crystal Structure Analysis; Thorpe, Elementary Topics in Differential Geometry
Thesis #1: Many sciences lack good visualization tools.
Corollary: These are a good source for volume and point data.
Thesis #2: Computer scientists need to learn these sciences.
Corollary: Learning the science may lead to new visualizations.
Thesis #3: We also need to learn their data capture technologies.
Corollary: Visualizing the data capture process helps debug it.

52 The best visualizations are often created by domain scientists
Andreas Vesalius (1543), On the Fabric of the Human Body
being an anatomist, Vesalius knew that muscle striations were important, but they looked a lot like hatching for shading, so he and his illustrator (Stephen van Calcar) developed a style of drawing in which the two could be distinguished, and he pointed out this distinction in his written narrative
this revolutionized anatomy and scientific illustration
Refs: H. Robin, The Scientific Image, p. 41; B. Baigrie, Picturing Knowledge, p. 57.

53 Three theses
Thesis #1: Many sciences lack good visualization tools.
Corollary: These are a good source for volume and point data.
Thesis #2: Computer scientists need to learn these sciences.
Corollary: Learning the science may lead to new visualizations.
Thesis #3: We also need to learn their data capture technologies.
Corollary: Visualizing the data capture process helps debug it.

54 Visualizing raw data helps debug the capture process
deconvolution to remove out-of-focus blur wasn’t working well; making a vertical cross-section of my raw data explained the problem: these are 1-micron shifts, each triggered by a person walking across the raised floor!
(figures: hollow fluorescent 15-micron sphere, manually captured Z-stack, 1-micron increments, 40×/1.3 NA oil objective; X-Z cross-sectional slice of the same stack)

55 ...or may force improvements in the capture technology
Shinya Inoué, one of the microscopists I worked with at MBL
prior to Inoué, nobody understood how chromosomes moved apart in an orderly fashion during mitosis; people had seen spindle fibers in stained (dead) cells, but didn’t understand their purpose, and nobody could see these spindle fibers in a living cell
his stress-free (“rectified”) microscope objectives improved the polarization microscope, permitting spindle fibers to be seen in an (unstained) living cell
video microscopy then permitted capture of live cells during mitosis (and during meiosis of gametes, which leaves half the genetic material in each offspring cell)
(figures: crane fly spermatocyte undergoing meiosis, image and video by Rudolf Oldenbourg; Shinya Inoué at his polarization microscope)
Ref: Dell, K.R., Vale, R.D., A tribute to Shinya Inoué and innovation in light microscopy, Journal of Cell Biology, Vol. 165, No. 1, April 12, 2004.

56 Final thought: the importance of building useful tools
we need to build tools real scientists use; we need to understand their science; we need to understand their capture technology
“A toolmaker succeeds as, and only as, the users of his tool succeed with his aid. However shining the blade, however jeweled the hilt, however perfect the heft, a sword is tested only by cutting. That swordsmith is successful whose clients die of old age.” – Fred Brooks, The Computer Scientist as Toolsmith II, CACM, 1996
(Fred Brooks, from his SIGGRAPH 1994 keynote address)

57 Acknowledgements
Fred Brooks (“Computer Scientist as Toolsmith”)
Pat Hanrahan (“Self-Illustrating Phenomena”)
Bill Lorensen (“The Death of Visualization”)
Shinya Inoué (“History of Polarization Microscopy”)

