Integral University EC-024 Digital Image Processing

Three-Dimensional Image Processing (Unit 5)

Three-Dimensional Image Processing
– Spatially Three-Dimensional Images
– Computerized Axial Tomography (CAT)
– Stereometry
– Stereoscopic Display
– Shaded Surface Display

Basic Concept behind 3-D Imaging
– If we can somehow obtain sliced images of an object, then by combining them it becomes possible to reconstruct the object's image in 3-D.

Optical sectioning
◦ Problem with a conventional optical microscope: only structure near the focal plane is visible.
◦ Serial sectioning (slicing the specimen into a series of thin sections and acquiring an image of each section) can solve the problem, but it has two major disadvantages:
   loss of registration when the sections become separated
   geometric distortions
◦ Optical sectioning instead digitizes the specimen with the focal plane positioned at various levels along the optical axis.

Definition of Registration
– Image registration is the process of overlaying two or more digital images of the same scene taken at different times, or at the same time by different image sensors.
– The final information (the 3-D image) is obtained by combining the individual data sets (the 2-D images).

Thick specimen imaging

Thick specimen imaging (cont’d)
– The focal length of the objective lens determines the distance d_f from the lens to the focal plane, via the lens equation 1/d_i + 1/d_f = 1/f.
– The magnification of the objective is M = d_i / d_f.

Thick specimen imaging (cont’d)
– We can place the focal plane at any desired level z'.
– The focal length of the objective is related to the other microscope parameters by
  f = d_i/(M+1) = M·d_f/(M+1) = d_i·d_f/(d_i + d_f)
– and the distance from the center of the lens to the focal plane is
  d_f = d_i/M = (M+1)·f/M = f·d_i/(d_i - f).
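A quick numerical check of these relations (a minimal Python sketch with made-up values; the variable names simply mirror the slide's notation):

f = 4.0    # focal length of the objective (mm); illustrative value
M = 40.0   # lateral magnification; illustrative value

d_i = (M + 1.0) * f   # lens-to-image distance, from f = d_i/(M+1)
d_f = d_i / M         # lens-to-focal-plane distance, from M = d_i/d_f

# Both distances must satisfy the lens equation 1/d_i + 1/d_f = 1/f.
assert abs(1.0/d_i + 1.0/d_f - 1.0/f) < 1e-12
print(f"d_i = {d_i:.1f} mm, d_f = {d_f:.3f} mm")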

Computerized Axial Tomography (CAT)
Conventional radiography:
– Uses X-rays.
– Some structures in the human body absorb X-rays more heavily than others.
– No lenses are used.
– A projection (two-dimensional) of the object is recorded.
– Multiple views are frequently used to resolve ambiguities.

Conventional radiography

Images obtained by Conventional Radiography

Tomography  Tomography –Useful where image detail is required in deeply imbedded structures such as those of the middle ear –One disadvantage: high dosage of X-ray 13

CAT Scan Animation

Tomography

CAT  Computerized axial tomography (CAT) –CAT is a technique that incorporates digital image processing to obtain 3-D images –The CAT scanner rotates about the object to acquire a series of exposures –The resulting set of 1-D intensity functions is used to compute a 2-D cross-sectional image of the object at the level of the beam –The beam is moved down the object in small steps producing a 3-D image 16

CAT

Stereometry  Stereometry is a technique by which one can deduce the 3-D shape of an object from a stereoscopic image pair  Image of the object can be recorded by measuring the brightness of each pixel on the image plane  The distance from the center of the lens to the point p defines the range of this pixel  A range image can be generated by assigning each pixel a gray level proportional, not to its brightness, but to the length of its pixel cone 18

Stereometry

Stereoscopic Imaging

Range equations
– Suppose that the point P, with coordinates (X_0, Y_0, Z_0), is located in front of the cameras.
– We can show that a line from P through the center of the left camera lens intersects the Z = -f image plane at the point given below.
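The equation on this slide did not come through in the transcript; the following is a sketch under an assumed geometry (left lens center at the origin, right lens center at (d, 0, 0), both optical axes along +Z, image planes at Z = -f). By similar triangles, the line from P through the left lens center meets the plane Z = -f at

\[ \left( -\frac{f X_0}{Z_0},\; -\frac{f Y_0}{Z_0},\; -f \right). \]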

Range equations
– Similarly for the right camera.
– We now set up a 2-D coordinate system in each image plane, rotated 180° so that the recorded image is not inverted; the image coordinates of P then follow as shown below.
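Continuing the assumed geometry: the line from P through the right lens center at (d, 0, 0) meets the plane Z = -f at

\[ \left( d - \frac{f (X_0 - d)}{Z_0},\; -\frac{f Y_0}{Z_0},\; -f \right), \]

and measuring each image point from its own camera's optical axis in the 180°-rotated 2-D coordinates gives the image coordinates used on the next slide.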

Range equations
– Now the coordinates of the two image points are as given below.
– Rearranging both equations gives the forms shown below.
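Under the same assumed geometry, the image coordinates of P are

\[ x_l = \frac{f X_0}{Z_0}, \qquad x_r = \frac{f (X_0 - d)}{Z_0}, \qquad y_l = y_r = \frac{f Y_0}{Z_0}, \]

which rearrange to

\[ Z_0\, x_l = f X_0, \qquad Z_0\, x_r = f (X_0 - d). \]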

Range equations
– Solving for Z_0 gives the normal-range equation.
– We can also write X_0 and Y_0 in terms of the image coordinates, as below.
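Still under the assumed geometry, subtracting the two rearranged equations gives Z_0 (x_l - x_r) = f d, so the normal-range equation is

\[ Z_0 = \frac{f\, d}{x_l - x_r}, \]

and the remaining coordinates follow from the left-image relations:

\[ X_0 = \frac{x_l Z_0}{f} = \frac{x_l\, d}{x_l - x_r}, \qquad Y_0 = \frac{y_l Z_0}{f} = \frac{y_l\, d}{x_l - x_r}. \]

(The sign of the disparity, written x_r - x_l on the range-calculation slide, depends on how the image x-axes are oriented in the original figures, which are not reproduced here; only the offset between corresponding pixels matters.)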

Range equations
– Substituting Z_0 gives the true-range equation, i.e., the distance R from the lens center to the point, shown below.
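With R the distance from the left lens center to P, the same assumptions give

\[ R = \sqrt{X_0^2 + Y_0^2 + Z_0^2} = \frac{d}{x_l - x_r}\, \sqrt{x_l^2 + y_l^2 + f^2}. \]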

Range calculations  For each pixel in the left image, determine what pixel position in the right image corresponds to the same point on the object. This can be accomplished on a line-by-line basis.  Calculate the difference x r - x l to produce a displacement image, in which gray level represents pixel shift.  Using the displacement image, calculate Z o at each pixel to produce a normal range image. 26

Range calculations  Calculate the X and Y coordinates of each point by:  Now we can calculate the X,Y,Z-coordinates of every point on the object that maps to a pixel in the camera.  Finally, compute R as a function of X and Y to produce a true-range image. 27

Range calculations  Notes (for boresighted cameras): –If Z 0 >> d, the cameras must be converged to ensure that their fields of view overlap to include the objects in the near field. The range equations are slightly more complex. –If the cameras are not in the sample plane, the equations are even more complex. –Cameras geometry can be computed from a pair of stereo images. 28

Stereo Matching  The following figure illustrates a technique that locates the right image pixel position that corresponds to a particular left image pixel. 29

Stereo Matching
– Suppose the given pixel in the left image has coordinates (x_l, y_l).
– Fit a window around that pixel, and another around the pixel having the same coordinates in the right image.
– Compute a measure of agreement between the two windows (e.g., cross-correlation or a sum of squared differences).
– Move the window in the right image to the right to find the position of maximum agreement; the resulting offset gives x_r - x_l.
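A minimal block-matching sketch in Python under the same assumptions (sum-of-squared-differences score, search restricted to the same scan line; the window half-size and search range are illustrative, and the pixel is assumed to lie far enough from the image borders):

import numpy as np

def match_pixel(left, right, x_l, y_l, half=3, max_disp=64):
    # Reference window around (x_l, y_l) in the left image.
    ref = left[y_l - half:y_l + half + 1, x_l - half:x_l + half + 1].astype(float)
    best_x, best_score = x_l, np.inf
    lo = max(half, x_l - max_disp)
    hi = min(right.shape[1] - half, x_l + max_disp + 1)
    for x_r in range(lo, hi):
        cand = right[y_l - half:y_l + half + 1, x_r - half:x_r + half + 1].astype(float)
        score = np.sum((ref - cand) ** 2)   # lower SSD = better agreement
        if score < best_score:
            best_x, best_score = x_r, score
    return best_x   # x_r of the best match; x_r - x_l is the displacement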

Stereo Matching
Notes:
– Noise tends to corrupt the agreement measure.
– Increasing the window size suppresses noise, but it reduces the resolution of the resulting range image.
– It is difficult to determine the range of a smooth (textureless) surface; projecting a random texture onto such a surface can help.

Stereometry with Wide-Angle Cameras  Used in Viking Mars Lander spacecraft  Two digitizing cameras (angle scanning cameras) are spaced 1 meter apart  The coordinates of a pixel are given by the azimuth and elevation angles of the centerline of its pixel cone  The azimuth is the angle between the yz- plane  The elevation angle is the angle between the xz-plane  Normal-range equation components in terms of the two camera azimuth coordinates and are written as follow: – is the elevation coordinate and is the same for both cameras 32

Stereometry with Wide-Angle Cameras

Stereometry with Wide-Angle Cameras  The azimuth is the angle between the yz- plane  The elevation angle is the angle between the xz-plane 34

Stereometry with Wide-Angle Cameras  Normal-range equation components in terms of the two camera azimuth coordinates and are written as follow:  - is the elevation coordinate and is the same for both cameras 35

Stereometry with Wide-Angle Cameras (cont’d)

Stereoscopic Image Display  Range relation 37

Stereoscopic Image Display  If the relationship DS = fd is satisfied, the scene will appear as if the observer had viewed it firsthand  Two conditions must be met to obtain accurate reproduction of a 3-D scene: –There should be converging lenses in front of each of the viewer’s eyes so that the viewer can focus his/her eyes at infinity and still see the two transparency in focus. Positive lenses of focal equal to D are commonly used –The viewing geometry is exact only when the viewer’s line of sight falls along the z-axis. 38