
1 Geometric transformations Affine transformations Forward mapping Interpolation schemes

2 A) Geometric transformations permit elimination of the geometric distortion that occurs when an image is captured. Geometric distortion may arise because of the lens or because of the irregular movement of the sensor during image capture. B) Geometric transformation processing is also essential in situations where there are distortions inherent in the imaging process, such as remote sensing from aircraft or spacecraft. One example is an attempt to match remotely sensed images of the same area taken one year apart, when the more recent image was probably not taken from precisely the same position. To inspect changes over the year, it is necessary first to execute a geometric transformation and then subtract one image from the other. We might also need to register two or more images of the same scene, obtained from different viewpoints or acquired with different instruments. Image registration matches up the features that are common to two or more images. Registration also finds applications in medical imaging.

3 A geometric transformation is a vector function T that maps the pixel (x,y) to a new position (x',y'):

x' = Tx(x,y),  y' = Ty(x,y)    (1)

The transformation equations Tx and Ty are either known in advance (as in the case of rotation, translation, or scaling) or can be determined from known original and transformed images. Suppose that the true position of a pixel is (x,y) and the distorted position is (x',y').

4 How can a geometrically distorted image be restored? A geometric transform consists of two basic steps: 1. Pixel co-ordinate transformation, which maps the co-ordinates of the input image pixel to a point in the output image. The output point co-ordinates should be computed as continuous values (real numbers), since the position does not necessarily fall on the digital grid after the transform. 2. The second step is to find the point in the digital raster which matches the transformed point and determine its brightness value. Its brightness is usually computed as an interpolation of the brightnesses of several points in the neighborhood.

5 Affine transformations An affine transformation is an important class of linear 2-D geometric transformations which maps variables (e.g. pixel intensity values located at position (x,y) in an input image) into new variables (e.g. at position (x',y') in an output image) by applying a linear combination of translation, rotation, scaling and/or shearing (i.e. non-uniform scaling in some directions) operations. In equation (1), Tx and Ty are expressed as polynomials in x and y. If they are linear mapping functions in x and y, we have an affine transformation.
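As a sketch of the linear-mapping case, Tx and Ty can each be written with three coefficients; the names a0..a2 and b0..b2 below are our own notation, not from the slides:

```python
def affine(x, y, a, b):
    """Affine mapping: Tx and Ty are first-degree polynomials in x and y.
    a = (a0, a1, a2) gives x' = a0 + a1*x + a2*y;
    b = (b0, b1, b2) gives y' = b0 + b1*x + b2*y."""
    return (a[0] + a[1] * x + a[2] * y,
            b[0] + b[1] * x + b[2] * y)

# A pure translation by (5, 3) is the special case a = (5, 1, 0), b = (3, 0, 1):
print(affine(2, 4, (5, 1, 0), (3, 0, 1)))  # -> (7, 7)
```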

6 The figure shows a hierarchy of planar transformations which are important in computer vision.

7 Translation: x' = x + 5; y' = y + 3. The treatment of elements near image edges varies with implementation. Translation is used to improve visualization of an image, but also has a role as a preprocessor in applications where registration of two or more images is required. Since the dimensions of the input image are well defined, the output image is also a discrete space of finite dimension. If the new coordinates x',y' are outside the image, the translate operator will normally ignore them, although, in some implementations, it may link the higher coordinate points with the lower ones so as to wrap the result around back onto the visible space of the image. Most implementations fill the image areas out of which an image has been shifted with black pixels.
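A minimal sketch of the common behaviour described above (out-of-range targets ignored, vacated areas filled with black), assuming images are row-major lists of grey values:

```python
def translate(image, bx, by):
    """Shift an image by (bx, by) pixels.  Targets outside the image
    are ignored; vacated areas are left black (value 0)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]  # start from an all-black canvas
    for y in range(h):
        for x in range(w):
            nx, ny = x + bx, y + by
            if 0 <= nx < w and 0 <= ny < h:  # discard out-of-range targets
                out[ny][nx] = image[y][x]
    return out

print(translate([[1, 2], [3, 4]], 1, 0))  # -> [[0, 1], [0, 3]]
```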

8 Translation Guidelines for Use The translate operator takes two arguments, Bx,By, which specify the desired horizontal and vertical pixel displacements, respectively. For example, consider the artificial image in which the subject's center lies in the center of the 300×300 pixel image. We can naively translate the subject into the lower, right corner of the image by defining a mapping (i.e. a set of values) for (Bx,By) which will take the subject's center from its present position at x=150,y=150 to an output position of x=300,y=300, as shown in the second image

9 Translation Guidelines for Use Translation has many applications of the cosmetic sort illustrated above. However, it is also very commonly used as a preprocessor in application domains where registration of two or more images is required. For example, feature detection and spatial filtering algorithms may calculate gradients in such a way as to introduce an offset in the positions of the pixels in the output image with respect to the corresponding pixels from the input image. In the case of the Laplacian of Gaussian spatial sharpening filter, some implementations require that the filtered image be translated by half the width of the Gaussian kernel with which it was convolved in order to bring it into alignment with the original.

10 Translation Guidelines for Use We can again view the effects of mis-alignment if we translate by one pixel in the x and y directions and then subtract the result from the original. The resulting image contains a description of all the places (along the direction of translation) where the intensity gradients are different; i.e. it highlights edges (and noise).
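A sketch of that translate-and-subtract procedure (the function name and list-of-lists image representation are our own; signed differences are kept rather than clipped):

```python
def shift_difference(image, bx=1, by=1):
    """Subtract a copy translated by (bx, by) from the original.
    Constant regions cancel to zero; intensity changes (edges, noise)
    along the translation direction remain non-zero."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sx, sy = x - bx, y - by  # the input pixel translated onto (x, y)
            shifted = image[sy][sx] if 0 <= sx < w and 0 <= sy < h else 0
            out[y][x] = image[y][x] - shifted
    return out

# A flat image cancels everywhere except the black border the shift leaves:
print(shift_difference([[5, 5], [5, 5]]))  # -> [[5, 5], [5, 0]]
```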

11 Translation - Applications Note that if we increase the translation parameter too much, e.g. to 6 pixels in each direction, as in the next image, edges become severely mis-aligned and blurred.

12 Rotation The rotation operator performs a geometric transform which maps the position (x,y) of a picture element in an input image onto a position (x',y') in an output image by rotating it through a user-specified angle about an origin. In most implementations, output locations which are outside the boundary of the image are ignored. Rotation is most commonly used to improve the visual appearance of an image, although it can be useful as a preprocessor in applications where directional operators are involved.

13 Rotation - Applications Rotating this image through an angle which is not an integer multiple of 90 degrees (45 degrees in this case) rotates part of the image off the visible output and leaves many empty pixel values, as seen in the next image.

14 Rotation- Applications Like translation, rotation may be employed in the early stages of more sophisticated image processing operations. For example, there are numerous directional operators in image processing (e.g. many edge detection and morphological operators) and, in many implementations, these operations are only defined along a limited set of directions: 0, 45, 90, etc. A user may construct a hybrid operator which operates along any desired image orientation direction by first rotating an image through the desired direction, performing the edge detection, and then rotating the image back to the original orientation. (See Figure.)

15 Rotation - Applications We can perform edge detection along the opposite direction to that shown in the image by employing a 180 degree rotation in the edge detection algorithm. The result is shown in the next image.

16 Geometric Scaling The scale operator performs a geometric transformation which can be used to shrink or zoom the size of an image (or part of an image). Image reduction, commonly known as subsampling, is performed by replacement (of a group of pixel values by one arbitrarily chosen pixel value from within this group) or by interpolating between pixel values in a local neighborhood. Image zooming is achieved by pixel replication or by interpolation. Scaling is used to change the visual appearance of an image, to alter the quantity of information stored in a scene representation, or as a low-level preprocessor in a multi-stage image processing chain which operates on features of a particular scale. Scaling compresses or expands an image along the coordinate directions. Different techniques can be used to subsample and to zoom, as described on the following slides.

17 Geometric Scaling The figure illustrates the two methods of sub-sampling. In the first, one pixel value within a local neighborhood is chosen (perhaps randomly) to be representative of its surroundings. (This method is computationally simple, but can lead to poor results if the sampling neighborhoods are too large.) The second method interpolates between pixel values within a neighborhood by taking a statistical sample (such as the mean) of the local intensity values.
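The second (interpolating) method can be sketched as follows, using the mean of each k-by-k neighbourhood; integer grey values and a list-of-lists image are assumed:

```python
def subsample_mean(image, k):
    """Reduce an image by a factor k: each output pixel is the mean of
    a k-by-k neighbourhood of the input (integer division keeps the
    result an integer grey value)."""
    h, w = len(image) // k, len(image[0]) // k
    return [[sum(image[y * k + j][x * k + i]
                 for j in range(k) for i in range(k)) // (k * k)
             for x in range(w)]
            for y in range(h)]

print(subsample_mean([[1, 3], [5, 7]], 2))  # -> [[4]]
```

Choosing `image[y * k][x * k]` instead of the mean would give the first, simpler replacement method.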

18 Geometric Scaling An image (or regions of an image) can be zoomed either through pixel replication or interpolation. Figure 2 shows how pixel replication simply replaces each original image pixel by a group of pixels with the same value (where the group size is determined by the scaling factor). Alternatively, interpolation of the values of neighboring pixels in the original image can be performed in order to replace each pixel with an expanded group of pixels.
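Pixel replication as described above can be sketched in a few lines; each input pixel becomes a k-by-k block of identical values:

```python
def zoom_replicate(image, k):
    """Zoom by pixel replication: every input pixel is replaced by a
    k-by-k group of pixels with the same value."""
    return [[image[y // k][x // k]
             for x in range(len(image[0]) * k)]
            for y in range(len(image) * k)]

print(zoom_replicate([[1, 2]], 2))  # -> [[1, 1, 2, 2], [1, 1, 2, 2]]
```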

19 Composite Transformation [x' y' 1] = Mcomp [x y 1], where Mcomp is the product of the individual transformation matrices in homogeneous coordinates.
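A sketch of building Mcomp in homogeneous coordinates (column-vector convention; the example matrices for scaling by 2 and translating by (5, 3) are our own):

```python
def matmul3(a, b):
    """Product of two 3x3 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, x, y):
    """Apply a 3x3 homogeneous transform to the point (x, y)."""
    v = (x, y, 1)
    return (sum(m[0][k] * v[k] for k in range(3)),
            sum(m[1][k] * v[k] for k in range(3)))

scale = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]   # scale by 2
trans = [[1, 0, 5], [0, 1, 3], [0, 0, 1]]   # translate by (5, 3)
m_comp = matmul3(trans, scale)              # scale first, then translate

print(apply(m_comp, 1, 1))  # -> (7, 5)
```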

20 Defining coefficients for the Affine Transformation X' = X A, hence A = X^-1 X', with det(X) = x0(y1-y2) - y0(x1-x2) + (x1y2-x2y1)
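A sketch of the A = X^-1 X' step: fit the six affine coefficients to three control-point pairs by solving two 3x3 linear systems (the tiny Gauss-Jordan solver and the coefficient names a0..b2 are our own):

```python
def solve3(m, v):
    """Solve a 3x3 linear system m * u = v by Gauss-Jordan elimination
    with partial pivoting."""
    a = [row[:] + [v[i]] for i, row in enumerate(m)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(a[r][c]))  # pivot row
        a[c], a[p] = a[p], a[c]
        for r in range(3):
            if r != c:
                f = a[r][c] / a[c][c]
                a[r] = [a[r][j] - f * a[c][j] for j in range(4)]
    return [a[i][3] / a[i][i] for i in range(3)]

def affine_coefficients(src, dst):
    """Fit x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y to three
    control-point pairs (src[i] maps to dst[i])."""
    m = [[1, x, y] for (x, y) in src]
    a = solve3(m, [x for (x, _) in dst])
    b = solve3(m, [y for (_, y) in dst])
    return a, b

# Three points displaced by (5, 3) recover a pure translation:
a, b = affine_coefficients([(0, 0), (1, 0), (0, 1)],
                           [(5, 3), (6, 3), (5, 4)])
```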

21 Defining coefficients for the Affine Transformation [Figure: three control points (x0,y0), (x1,y1), (x2,y2) in the input image and their corresponding positions (x'0,y'0), (x'1,y'1), (x'2,y'2) in the output image]

22 Geometric transformation algorithms Forward mapping Suppose you want to apply rotation to two different pixels: 1) the pixel at (0,100) after a 90 degree rotation; 2) the pixel at (50,0) after a 35 degree rotation. 1) cos 90 = 0, sin 90 = 1, so (0,100) maps to (-100, 0). 2) x' = x cos θ - y sin θ = 50 cos 35 = 40.96; y' = x sin θ + y cos θ = 50 sin 35 = 28.68
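Those two worked examples can be checked with a small forward-mapping function (the function name is ours):

```python
import math

def rotate_point(x, y, degrees):
    """Forward-map a position through a rotation about the origin:
    x' = x*cos(t) - y*sin(t),  y' = x*sin(t) + y*cos(t)."""
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

print(rotate_point(0, 100, 90))  # ~ (-100.0, 0.0)
print(rotate_point(50, 0, 35))   # ~ (40.96, 28.68) -- a non-integer position
```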

23 Forward mapping Problems: 1. An input pixel may map to a position outside the screen. This problem can be solved by testing coordinates to check that they lie within the bounds of the output image before attempting to copy the pixel value. 2. An input pixel may map to a non-integer position. A simple solution is to find the nearest integers to x' and y' and use these integers as the coordinates of the transformed pixel.
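A sketch combining both fixes, with the bounds test for problem 1 and rounding for problem 2 (list-of-lists images assumed):

```python
def forward_map(image, transform):
    """Push every input pixel through `transform`.  Off-image targets
    are discarded (problem 1); non-integer targets are rounded to the
    nearest grid point (problem 2)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            xp, yp = transform(x, y)
            ix, iy = round(xp), round(yp)    # nearest raster point
            if 0 <= ix < w and 0 <= iy < h:  # bounds test
                out[iy][ix] = image[y][x]
    return out

print(forward_map([[1, 2], [3, 4]], lambda x, y: (x + 1, y)))
# -> [[0, 1], [0, 3]]
```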

24 Interpolation schemes Interpolation is the process of determining the values of a function at positions lying between its samples. It achieves this by fitting a continuous function through the discrete input samples. Interpolation reconstructs the signal lost in the sampling process by smoothing the data samples with an interpolation function; it works as a low-pass filter. [Figure: samples f(xk) convolved with an interpolation function h(x) to produce a resampled point.] For equally spaced data, interpolation can be expressed as a convolution of the samples with the interpolation kernel: f(x) = Σk f(xk) h(x - xk).

25 Zero-order interpolation The rounding of calculated coordinates (x',y') to their nearest integers is a strategy known as zero-order (or nearest-neighbour) interpolation. Each interpolated output pixel is assigned the value of the nearest sample point in the input image. This technique, also known as the point shift algorithm, is given by the interpolation polynomial f(x) = f(xk) for xk - 1/2 < x <= xk + 1/2. It can be achieved by convolving the image with a one-pixel-wide rectangle in the spatial domain. The interpolation kernel for the nearest-neighbour algorithm is h(x) = 1 for 0 <= |x| < 1/2, and 0 otherwise.

26 Linear Interpolation Given an interval (x0,x1) and function values f0 and f1 at its endpoints, the interpolating polynomial is f(x) = a1 x + a0, where a0 and a1 are determined by solving f(x0) = f0 and f(x1) = f1. This gives rise to the interpolating polynomial f(x) = f0 + (x - x0)(f1 - f0)/(x1 - x0). In the spatial domain, linear interpolation is equivalent to convolving the sampled input with the triangle filter kernel h(x) = 1 - |x| for 0 <= |x| < 1, and 0 otherwise.
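Both pieces, sketched directly from the formulas above (function names are ours):

```python
def linear_interp(x0, f0, x1, f1, x):
    """First-order interpolation on (x0, x1):
    f(x) = f0 + (x - x0) * (f1 - f0) / (x1 - x0)."""
    return f0 + (x - x0) * (f1 - f0) / (x1 - x0)

def triangle_kernel(x):
    """The equivalent spatial-domain (tent) filter: 1 - |x| inside
    (-1, 1), zero outside."""
    return max(0.0, 1.0 - abs(x))

print(linear_interp(0, 2, 1, 4, 0.5))  # -> 3.0
print(triangle_kernel(0.5))            # -> 0.5
```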

27 Bilinear interpolation (first-order interpolation) [Figure: the four neighbouring samples f(0,0), f(1,0), f(0,1), f(1,1) at the corners of the unit square, with the interpolated point f(x,y) at (x,y).]

f(x,y) = [f(1,0) - f(0,0)]x + [f(0,1) - f(0,0)]y + [f(1,1) + f(0,0) - f(0,1) - f(1,0)]xy + f(0,0)
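The expansion above translated line for line into code:

```python
def bilinear(f00, f10, f01, f11, x, y):
    """Bilinear interpolation on the unit square; fAB is the sample
    f(A, B) and 0 <= x, y <= 1."""
    return ((f10 - f00) * x + (f01 - f00) * y
            + (f11 + f00 - f01 - f10) * x * y + f00)

# Halfway between a column of 0s (at x=0) and a column of 1s (at x=1):
print(bilinear(0, 1, 0, 1, 0.5, 0.5))  # -> 0.5
```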

28 Coordinate Systems Locations in an image can be expressed in various coordinate systems, depending on context. This section discusses the two main coordinate systems used in the Image Processing Toolbox, and the relationship between them: 1. pixel co-ordinates 2. spatial co-ordinates

29 Pixel Coordinates Generally, the most convenient method for expressing locations in an image is to use pixel coordinates. In this coordinate system, the image is treated as a grid of discrete elements, ordered from top to bottom and left to right, as illustrated in the figure.

30 Spatial Coordinates In the pixel coordinate system, a pixel is treated as a discrete unit, uniquely identified by a single coordinate pair, such as (5,2). From this perspective, a location such as (5.3,2.2) is not meaningful. At times, however, it is useful to think of a pixel as a square patch. From this perspective, a location such as (5.3,2.2) is meaningful, and is distinct from (5,2). In this spatial coordinate system, locations in an image are positions on a plane, and they are described in terms of x and y (not r and c as in the pixel coordinate system).

31 These commands display an image using nondefault XData and YData:

A = magic(5);
x = [19.5 23.5];
y = [8.0 12.0];
image(A,'XData',x,'YData',y), axis image, colormap(jet(25))

