Geometric Correction of Remote Sensor Data

Geometric Correction of Remote Sensor Data
Geography, KHU
Jinmu Choi

Outline: Geometric Error, Image Registration/Rectification, Root Mean Square Error, Resampling, Mosaicking

Geometric Correction It is usually necessary to preprocess remotely sensed data and remove geometric distortion so that individual picture elements (pixels) are in their proper planimetric (x, y) map locations. Remotely sensed imagery typically exhibits internal and external geometric error. It is important to recognize the source of the internal and external error and whether it is systematic (predictable) or nonsystematic (random). Systematic geometric error is generally easier to identify and correct than random geometric error.

Internal Geometric Error Internal geometric errors are introduced by the remote sensing system itself, or by its interaction with Earth rotation and curvature characteristics. These errors are typically systematic (predictable) distortions. Geometric distortions in imagery that can sometimes be corrected through analysis of sensor characteristics and ephemeris data include: skew caused by Earth rotation effects, scanning system-induced variation in ground resolution cell size, scanning system one-dimensional relief displacement, and scanning system tangential scale distortion.

Image Offset (skew) The interaction between the fixed orbital path of the remote sensing system (N to S) and the Earth's rotation (W to E) on its axis skews the geometry of the imagery collected. a) A Sun-synchronous orbit with an angle of inclination of 98.2°. b) Pixels in three hypothetical scans (consisting of 16 lines each) of Landsat TM data. c) The result of adjusting (deskewing) the original Landsat TM data to the west to compensate for Earth rotation effects. (source: Jensen, 2011)

Variation in Ground Resolution Cell Size The ground resolution cell size is a function of: a) H, the altitude of the aircraft above ground level (AGL) at nadir, and H sec φ off-nadir; b) the instantaneous field of view of the sensor, β, measured in radians; and c) the scan angle off-nadir, φ. Pixels off-nadir have semi-major and semi-minor axes (diameters) that define the resolution cell size. The total field of view of one scan line is θ. One-dimensional relief displacement and tangential scale distortion occur in the direction perpendicular to the line of flight and parallel with a line scan. (source: Jensen, 2011)
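To make these relationships concrete, here is a small numeric sketch. It assumes the usual across-track scanner geometry (nadir cell diameter D = H × β, with the off-nadir axes growing as sec φ and sec² φ); the altitude, IFOV, and scan angle below are illustrative values, not taken from the slide.

```python
import math

H = 6000.0               # altitude above ground level at nadir, meters (illustrative)
beta = 2.5e-3            # instantaneous field of view (IFOV), radians (illustrative)
phi = math.radians(30)   # scan angle off-nadir (illustrative)

# At nadir the ground resolution cell is a circle of diameter H * beta.
d_nadir = H * beta

# Off-nadir the cell becomes an ellipse: the along-track (semi-minor)
# axis grows as sec(phi), the across-track (semi-major) axis as sec^2(phi).
sec_phi = 1.0 / math.cos(phi)
d_along_track = H * beta * sec_phi
d_across_track = H * beta * sec_phi ** 2

print(f"nadir cell diameter:      {d_nadir:.1f} m")    # 15.0 m
print(f"off-nadir, along-track:   {d_along_track:.1f} m")
print(f"off-nadir, across-track:  {d_across_track:.1f} m")
```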

Ground Swath Width The ground swath width (gsw) is the length of the terrain strip remotely sensed by the system during one complete across-track sweep of the scanning mirror. It is a function of the total angular field of view of the sensor system, θ (e.g., 90°), and the altitude of the sensor system above ground level, H (e.g., 6,000 m). (source: Jensen, 2011)
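Using this geometry, gsw = 2 × H × tan(θ/2). With θ = 90° and H = 6,000 m, the swath is 2 × 6,000 m × tan(45°) = 12,000 m of terrain per sweep.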

One-Dimensional Relief Displacement The displacement takes place in the direction of each scan line. At nadir, the scanning system looks directly down on a vertical feature such as a water tank. One-dimensional relief displacement occurs in both directions away from nadir for each sweep of the across-track mirror. (source: Jensen, 2011)

Tangential Scale Distortion Tangential scale distortion (compression): objects near the edge of the flight line are compressed and their shapes distorted. If a linear feature is parallel with or perpendicular to the line of flight, it does not experience sigmoid (S-shaped) distortion. (source: Jensen, 2011)

External Geometric Error External geometric errors are introduced by phenomena that vary through space and time, such as random movements by the aircraft (or spacecraft): altitude changes and attitude changes (roll, pitch, and yaw). The diameter of the spot size on the ground (D) is a function of the IFOV (β) and the altitude (H) of the sensor: D = H × β. (source: Jensen, 2011)

Attitude Changes Roll occurs when the wings move up or down. Pitch occurs when the nose or tail moves up or down. Yaw occurs when the aircraft is forced by wind so that it is oriented at some angle to the intended flight line. Attitude changes can be partially compensated by gyro-stabilization equipment and corrected using ground control points. (source: Jensen, 2011)

Ground Control Points A ground control point (GCP) is a location on the surface of the Earth (e.g., a road intersection) that can be identified on the imagery and located accurately on a map. Each GCP has image coordinates, specified in row i and column j, and map coordinates (e.g., x, y measured in degrees of latitude and longitude). The paired coordinates (i, j and x, y) from many GCPs (e.g., 20) can be modeled to derive geometric transformation coefficients. These coefficients are then used to geometrically rectify the remote sensor data to a standard datum and map projection.

Geometric Correction Image-to-map rectification is the process of selecting GCP image pixel coordinates (row and column) together with their map coordinate counterparts and using them to rectify the image to a map coordinate system. Image-to-image registration is the translation and rotation alignment process by which two images of like geometry are positioned coincident with respect to one another, for example to examine two images obtained on different dates. (source: Jensen, 2011)

Image-to-Map Rectification Logic Two basic operations are used to rectify an image to a map coordinate system: Spatial interpolation: a number of GCP pairs are used to establish the nature of the geometric coordinate transformation that must be applied to rectify or fill every pixel in the output image (x, y) with a value from a pixel in the unrectified input image (x′, y′). Intensity interpolation: the process of determining the brightness value (BV) to be assigned to each output rectified pixel. Polynomial equations are fit to the GCP data using least-squares criteria to model the corrections directly in the image domain.
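As a sketch of how the transformation coefficients can be derived, the following fits a first-order (affine) polynomial to GCP pairs by least squares with NumPy; the GCP coordinate arrays here are hypothetical placeholders, not the values from the table later in the lecture.

```python
import numpy as np

# Hypothetical GCPs: map coordinates (x, y) and the matching
# input-image coordinates (x', y') measured in columns and rows.
map_xy = np.array([[597000.0, 3627000.0],
                   [598000.0, 3627500.0],
                   [597500.0, 3628500.0],
                   [598500.0, 3629000.0]])
img_xy = np.array([[150.0, 185.0],
                   [190.0, 170.0],
                   [165.0, 130.0],
                   [205.0, 115.0]])

# Design matrix for a first-order (affine) polynomial: [1, x, y].
A = np.column_stack([np.ones(len(map_xy)), map_xy])

# Solve x' = a0 + a1*x + a2*y and y' = b0 + b1*x + b2*y by least squares.
coeffs, *_ = np.linalg.lstsq(A, img_xy, rcond=None)
a, b = coeffs[:, 0], coeffs[:, 1]
print("a coefficients (for x'):", a)
print("b coefficients (for y'):", b)
```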

Polynomial Equation Concept of how different-order transformations fit a hypothetical surface illustrated in cross-section: a) Original observations. b) First-order linear transformation fits a plane to the data. c) Second-order quadratic fit. d) Third-order cubic fit. (source: Jensen, 2011)

Coordinate Transformations Generally, for moderate distortions in a relatively small area of an image (e.g., a quarter of a Landsat TM scene), a first-order, six-parameter, affine (linear) transformation is sufficient to rectify the imagery to a geographic frame of reference. This type of transformation can model six kinds of distortion in the remote sensor data: translation in x and y, scale changes in x and y, skew, and rotation.

Input-to-Output (Forward) Mapping When all six operations are combined into a single expression, it becomes:

x = a0 + a1x′ + a2y′
y = b0 + b1x′ + b2y′

where x and y are positions in the output-rectified image or map, and x′ and y′ represent the corresponding positions in the original input image. Forward mapping works well if we are rectifying the locations of discrete coordinates found along a linear feature such as a road in a vector map.

The logic of filling a rectified output matrix with values from an unrectified input image matrix using input-to-output (forward) mapping logic. (source: Jensen, 2011)

Output-to-Input (Inverse) Mapping Output-to-input, or inverse, mapping logic is based on the following two equations:

x′ = a0 + a1x + a2y
y′ = b0 + b1x + b2y

where x and y are positions in the output-rectified image or map, and x′ and y′ represent the corresponding positions in the original input image. The rectified output matrix, consisting of x (column) and y (row) coordinates, is filled in a systematic manner.

b) The logic of filling a rectified output matrix with values from an unrectified input image matrix using output-to-input (inverse) mapping logic and nearest-neighbor resampling. Output-to-input inverse mapping logic is the preferred methodology because it results in a rectified output matrix with values at every pixel location. (source: Jensen, 2011)
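A minimal sketch of inverse mapping with nearest-neighbor resampling, assuming six coefficients a and b (e.g., from a least-squares fit like the one above) that map output map coordinates to input image coordinates; map_of_pixel is a hypothetical helper that converts an output pixel location to its map coordinates.

```python
import numpy as np

def rectify_inverse_nn(input_img, a, b, out_shape, map_of_pixel):
    """Fill an output matrix by inverse mapping each output pixel into
    the unrectified input image and taking the nearest neighbor.

    map_of_pixel(row, col) -> (x, y) converts an output pixel location
    to its map coordinates (a hypothetical helper for this sketch).
    """
    rows_in, cols_in = input_img.shape
    out = np.zeros(out_shape, dtype=input_img.dtype)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            x, y = map_of_pixel(r, c)
            # Inverse mapping: x' = a0 + a1*x + a2*y, y' = b0 + b1*x + b2*y
            xp = a[0] + a[1] * x + a[2] * y
            yp = b[0] + b[1] * x + b[2] * y
            # Nearest neighbor: round to the closest input pixel.
            i, j = int(round(yp)), int(round(xp))
            if 0 <= i < rows_in and 0 <= j < cols_in:
                out[r, c] = input_img[i, j]
    return out
```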

Spatial Interpolation Logic The goal is to fill a matrix that is in a standard map projection with the appropriate values from a non-planimetric image. (source: Jensen, 2011)

Root Mean Squared Error One way to measure the accuracy of a geometric rectification algorithm (actually, its coefficients) is to compute the Root Mean Squared Error (RMS error) for each ground control point:

RMS_error = sqrt( (x′ - x_orig)² + (y′ - y_orig)² )

where x_orig and y_orig are the original row and column coordinates of the GCP in the image, and x′ and y′ are the computed or estimated coordinates in the original image obtained using the six coefficients. The closer these paired values are to one another, the more accurate the algorithm (and its coefficients). The square root of the summed squared deviations is a measure of the accuracy of each GCP.
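Continuing the sketch, the per-GCP error follows directly from the equation above; the total-RMSE summary shown is one common convention (the root of the mean squared per-GCP error), not necessarily the exact formula any particular package reports.

```python
import numpy as np

def gcp_rms(pred_xy, meas_xy):
    """Per-GCP RMS error: Euclidean distance between the (x', y')
    coordinates predicted by the transformation coefficients and the
    measured image coordinates; also returns one common total-RMSE."""
    d2 = np.sum((np.asarray(pred_xy) - np.asarray(meas_xy)) ** 2, axis=1)
    per_gcp = np.sqrt(d2)        # RMS error for each GCP
    total = np.sqrt(d2.mean())   # root of the mean squared per-GCP error
    return per_gcp, total
```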

Characteristics of Ground Control Points

Point   Order    Easting on   Northing on   X′      Y′      Total RMS error after
Number  Deleted  Map (X1)     Map (Y1)      pixel   pixel   this point is deleted
1       12       597,120      3,627,050     150     185     0.501
2       9        597,680      3,627,800     166     165     0.663
…
20      …        601,700      3,632,580     283     …       8.542

Total RMS error with all 20 GCPs used: 11.016. If GCP #20 is deleted, the total RMS error drops to 8.542. (source: Jensen, 2011)

Resampling Intensity interpolation involves the extraction of a brightness value from an x′, y′ location in the original (distorted) input image and its relocation to the appropriate x, y coordinate location in the rectified output image. Several methods of brightness value (BV) intensity interpolation can be applied, including nearest neighbor, bilinear interpolation, and cubic convolution. The practice is commonly referred to as resampling.

Nearest-neighbor Resampling The brightness value closest to the predicted x’, y’ coordinate is assigned to the output x, y coordinate. (source: Jensen, 2011)

Bilinear Interpolation Uses the four pixel values nearest to the desired position (x′, y′). For example, the distances from the requested (x′, y′) position at (2.4, 2.7) in the input image to the closest four input pixel coordinates (2,2; 3,2; 2,3; 3,3) are computed to obtain a distance-weighted average:

BV_wt = Σ(Z_k / D_k²) / Σ(1 / D_k²), k = 1 to 4

where Z_k are the surrounding four data point values, and D_k² are the distances squared from the point in question (x′, y′) to these data points. (source: Jensen, 2011)
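A small sketch of this distance-squared-weighted form as the slide states it; extending the loop to the surrounding 16 pixels gives the analogous cubic-convolution weighting described on the next slide.

```python
import numpy as np

def bilinear_weighted(img, xp, yp):
    """Distance-squared-weighted average of the four pixels nearest
    the requested input location (xp, yp), as in the formula above.
    Assumes (xp, yp) lies at least one pixel inside the image."""
    x0, y0 = int(np.floor(xp)), int(np.floor(yp))
    num, den = 0.0, 0.0
    for j in (y0, y0 + 1):
        for i in (x0, x0 + 1):
            d2 = (xp - i) ** 2 + (yp - j) ** 2
            if d2 == 0.0:                 # exactly on a pixel center
                return float(img[j, i])
            num += img[j, i] / d2
            den += 1.0 / d2
    return num / den
```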

Cubic Convolution Assigns a value to the output pixel from the weighted values of the 16 pixels surrounding the location of the desired (x′, y′) pixel. The same weighting form applies, where Z_k are the surrounding sixteen data point values, and D_k² are the distances squared from the point in question (x′, y′) to these data points. (source: Jensen, 2011)

Image Mosaicking Mosaicking n rectified images requires four steps: 1) The individual images are rectified to the same map projection and datum using the same resampling logic. 2) One of the images is designated the base image; the base image and image 2 should overlap by some amount (e.g., 20%). 3) A representative geographic area in the overlap region is identified; this area in the base image is contrast stretched, and the histogram of this geographic area in the base image is extracted. 4) The histogram from the base image is applied to image 2 using a histogram-matching algorithm (see the sketch below). This causes the two images to have approximately the same grayscale or color characteristics.
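A compact sketch of one common histogram-matching approach (matching cumulative distribution functions); actual image processing packages may implement this step differently.

```python
import numpy as np

def histogram_match(source, reference):
    """Remap the grayscale values of `source` (image 2 overlap area) so
    its histogram approximates that of `reference` (base image area)."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    # Normalized cumulative distribution functions of both images.
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source value, find the reference value at the same CDF level.
    matched = np.interp(src_cdf, ref_cdf, ref_values)
    idx = np.searchsorted(src_values, source.ravel())
    return matched[idx].reshape(source.shape)
```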

Image Mosaicking It is possible to simply let the pixel brightness values in one scene dominate the pixel values in the overlapping scene. Unfortunately, this can result in noticeable seams in the final mosaic. Therefore, it is common to blend the seams between mosaicked images using feathering. Some digital image processing systems allow the user to specify a feathering buffer distance (e.g., 200 pixels) wherein 0% of the base image is used in the blending at the edge and 100% of image 2 is used to make the output image. At the specified distance (e.g., 200 pixels) in from the edge, 100% of the base image is used to make the output image and 0% of image 2 is used. At 100 pixels in from the edge, 50% of each image is used to make the output file.
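A sketch of that feathering logic as a linear weight ramp; dist_from_edge is assumed to be a per-pixel array of distances (in pixels) measured in from the seam edge into the overlap.

```python
import numpy as np

def feather_blend(base, img2, dist_from_edge, buffer_px=200):
    """Blend overlapping images with a linear feathering ramp.

    At the edge (distance 0) the output is 100% image 2; at `buffer_px`
    pixels in from the edge it is 100% base image; halfway in, it is
    50% of each, matching the buffer-distance logic described above.
    """
    w_base = np.clip(dist_from_edge / float(buffer_px), 0.0, 1.0)
    return w_base * base + (1.0 - w_base) * img2
```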

Feathering The seam between adjacent images being mosaicked may be minimized using a) cut-line feathering logic, or b) edge feathering. Sometimes analysts prefer to use a linear feature such as a river or road to subdue the edge between adjacent mosaicked images. In this case, the analyst identifies a polyline in the image and then specifies a buffer distance away from the line as before where the feathering will take place. (source: Jensen, 2011)

Next
Lab Exercise: Geometric Correction
Lecture: Image Enhancement

Source: Jensen and Jensen, 2011, Introductory Digital Image Processing, 4th ed., Prentice Hall.