Geometric Correction of Remote Sensor Data


1 Geometric Correction of Remote Sensor Data
Geography, KHU, Jinmu Choi
Outline: Geometric Error, Image Registration/Rectification, Root Mean Square Error, Resampling, Mosaicking

2 Geometric Correction It is usually necessary to preprocess remotely sensed data and remove geometric distortion so that individual picture elements (pixels) are in their proper planimetric (x, y) map locations. Remotely sensed imagery typically exhibits internal and external geometric error. It is important to recognize the source of the internal and external error and whether it is systematic (predictable) or nonsystematic (random). Systematic geometric error is generally easier to identify and correct than random geometric error.

3 Internal Geometric Error
Internal geometric errors are introduced by the remote sensing system itself, or by its interaction with Earth rotation or curvature characteristics. These are systematic (predictable) distortions. Geometric distortions in imagery that can sometimes be corrected through analysis of sensor characteristics and ephemeris data include: skew caused by Earth rotation effects, scanning system–induced variation in ground resolution cell size, scanning system one-dimensional relief displacement, and scanning system tangential scale distortion.

4 Image Offset (skew) The interaction between the fixed orbital path of the remote sensing system (N to S) and the Earth’s rotation (W to E) on its axis skews the geometry of the imagery collected. a) A Sun-synchronous orbit with an angle of inclination of 98.2°. b) Pixels in three hypothetical scans (consisting of 16 lines each) of Landsat TM data. c) The result of adjusting (deskewing) the original Landsat TM data to the west to compensate for Earth rotation effects. (source: Jensen, 2011)

5 Variation in Ground Resolution Cell Size
The ground resolution cell size is a function of: a) the altitude of the aircraft above ground level (AGL), H at nadir and H sec φ off-nadir; b) the instantaneous field of view of the sensor, β, measured in radians; and c) the scan angle off-nadir, φ. Pixels off-nadir have semi-major and semi-minor axes (diameters) that define the resolution cell size. The total field of view of one scan line is θ. One-dimensional relief displacement and tangential scale distortion occur in the direction perpendicular to the line of flight and parallel with a line scan. (source: Jensen, 2011)
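As a sketch of these relationships (the altitude, IFOV, and scan-angle values below are made-up, and the off-nadir axis formulas H sec φ β and H sec²φ β follow the usual across-track scanner geometry):

```python
import math

# Hypothetical values: H = 6,000 m AGL, IFOV beta = 2.5 milliradians.
H = 6000.0     # altitude above ground level, metres
beta = 2.5e-3  # instantaneous field of view, radians

# Nadir ground resolution cell diameter: D = H * beta
d_nadir = H * beta  # 15.0 m

# Off-nadir (scan angle phi), the cell grows into an ellipse:
phi = math.radians(45)           # scan angle off-nadir
sec_phi = 1.0 / math.cos(phi)
d_across = H * sec_phi * beta    # axis perpendicular to the scan direction
d_along = H * sec_phi**2 * beta  # axis in the scan direction (grows fastest)
```

At 45° off-nadir the along-scan axis doubles (sec²45° = 2), which is why edge pixels of across-track scanners cover much more ground than nadir pixels.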

6 Ground Swath Width The ground swath width (gsw) is the length of the terrain strip remotely sensed by the system during one complete across-track sweep of the scanning mirror. It depends on the total angular field of view of the sensor system, θ (e.g., 90°), and the altitude of the sensor system above ground level, H (e.g., 6,000 m). (source: Jensen, 2011)
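The swath geometry implies gsw = 2 · H · tan(θ/2). A quick check with the slide's numbers (θ = 90°, H = 6,000 m), assuming flat terrain and a nadir-centered scan:

```python
import math

theta = math.radians(90.0)  # total angular field of view of the scanner
H = 6000.0                  # altitude above ground level, metres

# Half the swath lies on each side of nadir, subtending theta/2.
gsw = 2.0 * H * math.tan(theta / 2.0)  # ground swath width, metres
```

With θ = 90°, tan(θ/2) = 1, so the swath width is simply twice the altitude: 12,000 m.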

7 One-Dimensional Relief Displacement
The displacement takes place in the direction of each scan line. At nadir, the scanning system looks directly down on a water tank, so no relief displacement occurs there; one-dimensional relief displacement increases in both directions away from nadir for each sweep of the across-track mirror. (source: Jensen, 2011)

8 Tangential Scale Distortion
Tangential scale distortion (compression): objects near the edge of the flight line are compressed and their shapes distorted. If a linear feature is parallel with or perpendicular to the line of flight, it does not experience sigmoid distortion. (source: Jensen, 2011)

9 External Geometric Error
External geometric errors are introduced by phenomena that vary through space and time, such as random movements of the aircraft (or spacecraft): altitude changes and roll, pitch, and yaw. The diameter of the spot size on the ground (D) is a function of the IFOV (β) and the altitude (H) of the sensor. (source: Jensen, 2011)

10 Attitude Changes Roll occurs when the wings move up or down.
Pitch occurs when the nose or tail moves up or down. Yaw occurs when the aircraft body is forced by wind to be oriented at some angle to the flight line. These motions can be reduced with gyro-stabilization equipment and corrected using ground control points. (source: Jensen, 2011)

11 Ground Control Points A ground control point (GCP) is a location on the surface of the Earth (e.g., a road intersection) that can be identified on the imagery and located accurately on a map. Each GCP has image coordinates specified in i rows and j columns, and map coordinates (e.g., x, y measured in degrees of latitude and longitude). The paired coordinates (i, j and x, y) from many GCPs (e.g., 20) can be modeled to derive geometric transformation coefficients. These coefficients are used to geometrically rectify the remote sensor data to a standard datum and map projection.

12 Geometric Correction Image-to-map rectification is the process of selecting GCP image pixel coordinates (row and column) together with their map coordinate counterparts. Image-to-image registration is the translation and rotation alignment process by which two images of like geometry are brought into correspondence, e.g., for examination of two images obtained on different dates. (source: Jensen, 2011)

13 Image to Map Rectification Logic
Two basic operations are needed to rectify an image to a map coordinate system: Spatial interpolation: a number of GCP pairs are used to establish the nature of the geometric coordinate transformation that must be applied to rectify or fill every pixel in the output image (x, y) with a value from a pixel in the unrectified input image (x′, y′). Intensity interpolation: the process of determining the brightness value (BV) to be assigned to each output rectified pixel. Polynomial equations are fit to the GCP data using least-squares criteria to model the corrections directly in the image domain.

14 Polynomial Equation Concept of how different-order transformations fit a hypothetical surface illustrated in cross-section: a) Original observations. b) First-order linear transformation fits a plane to the data. c) Second-order quadratic fit. d) Third-order cubic fit. (source: Jensen, 2011)

15 Coordinate Transformations
Generally, for moderate distortions in a relatively small area of an image (e.g., a quarter of a Landsat TM scene), a first-order, six-parameter, affine (linear) transformation is sufficient to rectify the imagery to a geographic frame of reference. This type of transformation can model six kinds of distortion in the remote sensor data: translation in x and y, scale changes in x and y, skew, and rotation.
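A minimal sketch of deriving the six affine coefficients from GCP pairs by least squares. The GCP values and coefficient names below are hypothetical; the image coordinates are synthesized from a known model purely so the fit can be checked, and map coordinates are offsets from a local origin to keep the fit well conditioned:

```python
import numpy as np

# "True" affine model used only to generate synthetic image coordinates:
#   col = a0 + a1*x + a2*y ,  row = b0 + b1*x + b2*y
true_a = np.array([10.0, 0.04, 0.001])
true_b = np.array([500.0, 0.002, -0.04])

# Six hypothetical GCP map coordinates (metres from a local origin).
map_xy = np.array([[120, 50], [680, 800], [1400, 1300],
                   [2100, 2000], [2900, 3100], [4580, 5530]], dtype=float)

# Design matrix [1, x, y] for a first-order (affine) transformation.
A = np.column_stack([np.ones(len(map_xy)), map_xy])
col = A @ true_a  # synthetic image columns
row = A @ true_b  # synthetic image rows

# Least-squares fit recovers the six coefficients from the GCP pairs.
a_fit, *_ = np.linalg.lstsq(A, col, rcond=None)
b_fit, *_ = np.linalg.lstsq(A, row, rcond=None)
```

With 6 GCPs and 3 unknowns per equation, the system is overdetermined; the least-squares residuals at each GCP are exactly what the RMS error discussed later measures.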

16 Input-to-Output (Forward) Mapping
When all six operations are combined into a single expression it becomes:

x′ = a0 + a1x + a2y
y′ = b0 + b1x + b2y

where x and y are positions in the original input image, and x′ and y′ are the corresponding positions in the output-rectified image or map. Forward mapping works well if we are rectifying the location of discrete coordinates found along a linear feature such as a road in a vector map.

17 The logic of filling a rectified output matrix with values from an unrectified input image matrix using input-to-output (forward) mapping logic. (source: Jensen, 2011)

18 Output-to-Input (Inverse) Mapping
Output-to-input, or inverse mapping logic, is based on the following two equations:

x′ = a0 + a1x + a2y
y′ = b0 + b1x + b2y

where x and y are positions in the output-rectified image or map, and x′ and y′ represent the corresponding positions in the original input image. The rectified output matrix consisting of x (column) and y (row) coordinates is filled in a systematic manner.

19 b) The logic of filling a rectified output matrix with values from an unrectified input image matrix using output-to-input (inverse) mapping logic and nearest-neighbor resampling. Output-to-input inverse mapping logic is the preferred methodology because it results in a rectified output matrix with values at every pixel location. (source: Jensen, 2011)
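The inverse-mapping logic with nearest-neighbor resampling can be sketched as follows (the function name, coefficient values, and array shapes are illustrative, not from the slides):

```python
import numpy as np

def rectify_nearest(src, coef_x, coef_y, out_shape, fill=0):
    """Output-to-input (inverse) mapping with nearest-neighbor resampling.
    For every output pixel (x, y), the six affine coefficients predict the
    source location (x', y') in the unrectified image; the nearest input
    pixel's brightness value fills the output cell, so every output pixel
    receives a value."""
    out = np.full(out_shape, fill, dtype=src.dtype)
    rows_in, cols_in = src.shape
    a0, a1, a2 = coef_x
    b0, b1, b2 = coef_y
    for y in range(out_shape[0]):
        for x in range(out_shape[1]):
            xs = a0 + a1 * x + a2 * y   # predicted source column x'
            ys = b0 + b1 * x + b2 * y   # predicted source row y'
            c, r = int(round(xs)), int(round(ys))
            if 0 <= r < rows_in and 0 <= c < cols_in:
                out[y, x] = src[r, c]
    return out
```

With identity coefficients the output simply copies the input; in practice the coefficients come from the least-squares fit to the GCPs.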

20 Spatial Interpolation Logic
The goal is to fill a matrix that is in a standard map projection with the appropriate values from a non-planimetric image. (source: Jensen, 2011)

21 Root Mean Squared Error
To measure the accuracy of a geometric rectification algorithm (actually, its coefficients), compute the root mean squared error (RMS error) for each ground control point:

RMS_error = sqrt((x′ − x_orig)² + (y′ − y_orig)²)

where x_orig and y_orig are the original row and column coordinates of the GCP in the image, and x′ and y′ are the computed or estimated coordinates in the original image when we utilize the six coefficients. The closer these paired values are to one another, the more accurate the algorithm (and its coefficients). The square root of the summed squared deviations represents a measure of the accuracy of each GCP.
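The per-GCP RMS error is just the Euclidean distance between the original and predicted image locations; a minimal helper (function name is ours):

```python
import math

def gcp_rms_error(x_orig, y_orig, x_prime, y_prime):
    """RMS error for a single GCP: the distance between its original image
    location (x_orig, y_orig) and the location (x', y') predicted by the
    six transformation coefficients."""
    return math.hypot(x_prime - x_orig, y_prime - y_orig)
```

GCPs with the largest individual RMS errors are the usual candidates for deletion, as the table on the next slide illustrates.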

22 Characteristics of Ground Control Points

Point  Order    Easting on  Northing on  X′     Y′     Total RMS error after
       Deleted  Map X1      Map Y1       pixel  pixel  this point deleted
1      12       597,120     3,627,050    150    185    0.501
2      9        597,680     3,627,800    166    165    0.663
…
20     …        601,700     3,632,580    283    …      8.542

Total RMS error with all 20 GCPs used: 11.016. If we delete GCP #20, the total RMS error drops to 8.542. (source: Jensen, 2011)

23 Resampling
Intensity interpolation involves the extraction of a brightness value from an x′, y′ location in the original (distorted) input image and its relocation to the appropriate x, y coordinate location in the rectified output image. There are several methods of brightness value (BV) intensity interpolation that can be applied, including nearest neighbor, bilinear interpolation, and cubic convolution. The practice is commonly referred to as resampling.

24 Nearest-neighbor Resampling
The brightness value closest to the predicted x’, y’ coordinate is assigned to the output x, y coordinate. (source: Jensen, 2011)

25 Bilinear Interpolation
Bilinear interpolation uses the four pixel values nearest to the desired position (x′, y′):

BV_wt = Σ(Z_k / D_k²) / Σ(1 / D_k²)

where Z_k are the surrounding four data point values, and D_k² are the distances squared from the point in question (x′, y′) to these data points. For example, the distances from the requested position (2.4, 2.7) in the input image to the closest four input pixel coordinates (2,2; 3,2; 2,3; 3,3) are computed to obtain the weighted average. (source: Jensen, 2011)
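A sketch of the slide's four-neighbor, inverse-distance-squared weighting (function name is ours; `src` is any 2-D array of brightness values):

```python
def bilinear_idw(src, xs, ys):
    """Brightness value at (xs, ys) from the four nearest pixels, using
    inverse-distance-squared weighting:
        BV = sum(Z_k / D_k^2) / sum(1 / D_k^2)"""
    x0, y0 = int(xs), int(ys)
    num = den = 0.0
    for r in (y0, y0 + 1):
        for c in (x0, x0 + 1):
            d2 = (xs - c) ** 2 + (ys - r) ** 2
            if d2 == 0.0:
                return float(src[r][c])  # requested point is exactly a pixel
            num += src[r][c] / d2
            den += 1.0 / d2
    return num / den
```

Because the weights sum to one, a constant image stays constant, and the interpolated value always lies between the minimum and maximum of the four neighbors.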

26 Cubic Convolution Cubic convolution assigns a value to each output pixel from the weighted values of the 16 pixels surrounding the location of the desired x′, y′ position:

BV_wt = Σ(Z_k / D_k²) / Σ(1 / D_k²)

where Z_k are the surrounding sixteen data point values, and D_k² are the distances squared from the point in question (x′, y′) to these data points. (source: Jensen, 2011)
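The same weighting extended to the 4 × 4 neighborhood the slide describes. Note that classic cubic convolution instead weights the 16 pixels with a cubic kernel; this sketch follows the slide's inverse-distance-squared formula, and the function name is ours:

```python
import math

def cubic_convolution_idw(src, xs, ys):
    """Brightness value at (xs, ys) from the 16 surrounding pixels (a 4x4
    window around the point), using the inverse-distance-squared weighting
    BV = sum(Z_k / D_k^2) / sum(1 / D_k^2)."""
    x0, y0 = math.floor(xs), math.floor(ys)
    num = den = 0.0
    for r in range(y0 - 1, y0 + 3):
        for c in range(x0 - 1, x0 + 3):
            d2 = (xs - c) ** 2 + (ys - r) ** 2
            if d2 == 0.0:
                return float(src[r][c])  # requested point is exactly a pixel
            num += src[r][c] / d2
            den += 1.0 / d2
    return num / den
```

The larger neighborhood makes the result smoother than bilinear at the cost of more computation, which is why cubic resampling is usually the slowest of the three methods.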

27 Image Mosaicking Mosaicking n rectified images requires four steps:
1. The individual images are rectified to the same map projection and datum, using the same resampling logic.
2. One of the images is designated the base image; it overlaps image 2 by some amount (e.g., 20%).
3. A representative geographic area in the overlap region is identified. This area in the base image is contrast stretched, and the histogram of this geographic area in the base image is extracted.
4. The histogram from the base image is applied to image 2 using a histogram-matching algorithm. This causes the two images to have approximately the same grayscale or color characteristics.

28 Image Mosaicking It is possible to have the pixel brightness values in one scene simply dominate the pixel values in the overlapping scene. Unfortunately, this can result in noticeable seams in the final mosaic. Therefore, it is common to blend the seams between mosaicked images using feathering. Some digital image processing systems allow the user to specify a feathering buffer distance (e.g., 200 pixels) wherein 0% of the base image is used in the blending at the edge and 100% of image 2 is used to make the output image. At the specified distance (e.g., 200 pixels) in from the edge, 100% of the base image is used to make the output image and 0% of image 2 is used. At 100 pixels in from the edge, 50% of each image is used to make the output file.
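The feathering percentages above reduce to a linear ramp across the buffer; a minimal sketch (the function name and default buffer width are illustrative):

```python
def feather_weights(dist_from_edge, buffer_px=200):
    """Linear edge-feathering weights over a buffer (e.g., 200 pixels):
    at the seam edge, 0% of the base image and 100% of image 2 are used;
    buffer_px pixels in, 100% base and 0% image 2; halfway, 50/50.
    Returns (base_weight, image2_weight)."""
    w_base = min(max(dist_from_edge / buffer_px, 0.0), 1.0)
    return w_base, 1.0 - w_base
```

Blending each overlap pixel as `w_base * base + w_img2 * image2` with these weights removes the hard seam while leaving both images untouched outside the buffer.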

29 Feathering The seam between adjacent images being mosaicked may be minimized using a) cut-line feathering logic, or b) edge feathering. Sometimes analysts prefer to use a linear feature such as a river or road to subdue the edge between adjacent mosaicked images. In this case, the analyst identifies a polyline in the image and then specifies a buffer distance away from the line as before where the feathering will take place. (source: Jensen, 2011)

30 Next Lab: Exercise: Geometric Correction Lecture: Image Enhancement
Source: Jensen and Jensen, 2011, Introductory Digital Image Processing, 4th ed, Prentice Hall.

