Maa-57.2040 Kaukokartoituksen yleiskurssi / General Remote Sensing. Image enhancement I. Autumn 2007, Markus Törmä

Image restoration Errors caused by the imaging process are removed. Geometric errors –the position of an image pixel does not match its true position on the ground. Radiometric errors –the measured radiation does not correspond to the radiation actually leaving the ground. The aim is to form a faultless image of the scene.

Image enhancement The image is made more suitable for interpretation. Different objects are made easier to see → manipulation of image contrast and colors. Different features (e.g. linear features) are made easier to see → e.g. filtering methods. Multispectral images: combinations of image channels to compress and enhance information –ratio images –image transformations. Necessary information is emphasized, unnecessary information removed.

Image enhancement The image is processed to be more suitable for interpretation. Pixel operations: the DN of a pixel is changed independently of other pixels –add, multiply, subtract or divide by a constant. Local operations: the DN is changed using the DNs of spatially close pixels –filtering. Global operations: all DNs of the image affect each output DN –histogram manipulation –transformation to zero mean and unit standard deviation.
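As an illustrative sketch only (not from the course material), the three operation types could look like this in NumPy; the test array img and the constants are assumptions:

import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)  # assumed test image

# Pixel operation: each DN changed independently of its neighbours
pixel_op = 1.5 * img + 10

# Local operation: new DN depends on spatially close pixels (3x3 mean filter)
local_op = uniform_filter(img, size=3)

# Global operation: every DN of the image influences the result
# (transformation to zero mean and unit standard deviation)
global_op = (img - img.mean()) / img.std()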

HISTOGRAM A graphical representation of the frequency of occurrence of image DNs. Horizontal axis: DN from 0 to 255. Vertical axis: number of pixels with each DN, or the probability of occurrence of that DN in the image.
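A minimal sketch of computing a histogram with NumPy; the 8-bit test image is an assumption:

import numpy as np

img = np.random.randint(0, 256, (512, 512))            # assumed 8-bit test image
counts, bin_edges = np.histogram(img, bins=256, range=(0, 256))
probabilities = counts / counts.sum()                  # probability of occurrence of each DN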

HISTOGRAM The DNs of an image usually occupy a narrower range than the monitor can display –usually at the darker end of the scale. The DNs are scaled to a wider range → more display levels are used, contrast increases and image interpretation is enhanced.
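A possible linear contrast stretch, sketched with NumPy; the percentile cut points (2 % and 98 %) and the function name linear_stretch are assumptions, not from the slides:

import numpy as np

def linear_stretch(img, low_pct=2, high_pct=98):
    # Stretch the DNs between two percentiles to the full 0..255 display range.
    lo, hi = np.percentile(img, (low_pct, high_pct))
    scaled = (img.astype(np.float64) - lo) / (hi - lo)
    return np.clip(scaled * 255, 0, 255).astype(np.uint8)

img = np.random.randint(0, 120, (512, 512))    # assumed image crowded at the dark end
stretched = linear_stretch(img)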

HISTOGRAM Histogram equalization: the scaling is weighted according to the probability of occurrence of the DNs. More output levels are used to present commonly occurring DNs.
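A minimal histogram equalization sketch, assuming an 8-bit integer image; the function name is hypothetical:

import numpy as np

def histogram_equalize(img):
    # Map DNs through the cumulative histogram so that commonly
    # occurring DNs get more of the output grey levels.
    counts, _ = np.histogram(img, bins=256, range=(0, 256))
    cdf = counts.cumsum() / counts.sum()
    lut = np.round(255 * cdf).astype(np.uint8)   # lookup table for DNs 0..255
    return lut[img]                              # apply the lookup table to every pixel

img = np.random.randint(0, 256, (512, 512))      # assumed 8-bit test image
equalized = histogram_equalize(img)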

HISTOGRAM Nonlinear equalization: other kinds of mathematical functions, or combinations of functions, can also be used. E.g. the equalized histogram can be made to resemble a normal distribution.

HISTOGRAM Thresholding: the DNs are divided into two groups. DNs less than the threshold → 0, DNs greater than the threshold → 1. E.g. separating water areas from land areas.
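Thresholding as a one-line NumPy operation; the test image and the threshold value 40 are purely hypothetical:

import numpy as np

img = np.random.randint(0, 256, (512, 512))    # assumed single channel, e.g. NIR
threshold = 40                                 # hypothetical DN threshold
binary = (img > threshold).astype(np.uint8)    # 0 below threshold (e.g. water), 1 above (land)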

HISTOGRAM "Level Slicing” Histogram is divided to levels considerably less than original DNs DNs within one level are presented using one grey level or color Usually used to visualize –thermal images –vegetation index images

HISTOGRAM Level slicing: black-and-white and color versions of a vegetation index image

Image filtering The image f is convolved with a filtering mask h: g = f * h. Image smoothing / low-pass filtering: –noise removal. Image sharpening / high-pass filtering: –rapid changes in the image function are enhanced.

Image filtering Image smoothing: random errors due to instrument noise and data transmission are removed. Average / mean filtering. Median filtering.

Image filtering Based on the use of a filtering mask h. Simple averaging mask, size 5x5 pixels: every weight is 1/25. Averaging mask, size 3x3 pixels: every weight is 1/9.

Image filtering Principle of convolution. Filtered pixel value: g(x, y) = Σ_i Σ_j h(i, j) f(x − i, y − j), i.e. a weighted sum of the DNs under the mask.
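A sketch of the convolution g = f * h with a 3x3 averaging mask, using SciPy; the test image is an assumption:

import numpy as np
from scipy.ndimage import convolve

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)  # assumed test image

h = np.ones((3, 3)) / 9.0                  # 3x3 averaging mask, every weight 1/9
smoothed = convolve(img, h, mode='reflect')

# Same idea for a single output pixel: weighted sum of the DNs under the mask
row, col = 100, 100
window = img[row - 1:row + 2, col - 1:col + 2]
value = np.sum(window * h)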

Image filtering Original PAN and average filtered image with 3x3 filtering window

Image filtering Original PAN and average filtered image with 7x7 filtering window

Image filtering Median filtering: the DN of a pixel is replaced by the median DN of the pixels defined by the filtering mask. Take the pixels under the filtering mask → sort them from smallest to largest → choose the median (the middle one). Useful when the noise consists of single intense spikes (it removes them) and the edges of areas should be preserved (it does not blur them).
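A median filtering sketch using SciPy, plus the same idea spelled out for a single pixel; the test image is an assumption:

import numpy as np
from scipy.ndimage import median_filter

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)   # assumed noisy image
med3 = median_filter(img, size=3)    # each pixel replaced by the median of its 3x3 neighbourhood

# For one pixel: take the 3x3 window, sort, pick the middle value
row, col = 100, 100
window = img[row - 1:row + 2, col - 1:col + 2].ravel()
median_value = np.sort(window)[len(window) // 2]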

Image filtering Median filtering

Image filtering Original PAN and median filtered image with 3x3 filtering window

Image filtering Original PAN and median filtered image with 7x7 filtering window

Image filtering Image averaging corresponds to integration of the image function. If changes in the image function are of interest → differentiate the image function. The image is a 2-dimensional function: partial derivatives in the x- and y-directions. The partial derivatives are used to determine the amount and direction of change at each pixel. In practice the derivatives are approximated by differences of neighboring pixels. These can also be implemented using filtering masks.

Image filtering The derivative of the image function in the horizontal direction can be computed using a mask, for example the difference mask [-1 0 +1].

Image filtering The derivative of the image function in the vertical direction can be computed using the corresponding mask written as a column, for example [-1 0 +1] transposed.

Image filtering Absolute values of partial derivative images...

Image filtering …here the magnitude of the derivative is approximated by the mean of the absolute values of the partial derivatives
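A sketch of the partial derivatives and their combination as described above; the simple difference masks are assumptions (the slides' exact masks are not reproduced here):

import numpy as np
from scipy.ndimage import convolve

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)   # assumed test image

dx_mask = np.array([[-1.0, 0.0, 1.0]])    # assumed horizontal difference mask
dy_mask = dx_mask.T                       # vertical difference mask (transpose)

dx = convolve(img, dx_mask, mode='reflect')
dy = convolve(img, dy_mask, mode='reflect')

# Magnitude approximated by the mean of the absolute partial derivatives
magnitude = (np.abs(dx) + np.abs(dy)) / 2.0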

Image texture The spatial variation of image grey levels or colors. Determines the smoothness or coarseness of the image. Different targets have different textures → texture can help in interpretation. E.g. a SPOT panchromatic image: –residential area: a lot of variation –water: very little variation –coniferous forest: some variation.

Image texture Compute features which describe the properties of the texture –new images. In the simplest case compute the mean and standard deviation within some neighborhood –statistical properties of texture. Some methods can also take direction etc. into account –Haralick’s grey level co-occurrence matrix.
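A sketch of a simple statistical texture feature (local variance in a window), computed with SciPy's uniform filter; the 7x7 window size and the test image are assumptions:

import numpy as np
from scipy.ndimage import uniform_filter

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)   # assumed test image

# Local variance in a 7x7 window: E[x^2] - (E[x])^2
mean7 = uniform_filter(img, size=7)
mean_sq7 = uniform_filter(img ** 2, size=7)
variance7 = np.maximum(mean_sq7 - mean7 ** 2, 0.0)   # clip tiny negative rounding errors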

Image texture Variance and skewness of distribution, 7x7 window

Multispectral images Extract the essential information from the image channels. Not all channels are necessarily useful –do not use them if they are not needed. Some alternatives –ratio images –difference images –index images –image transformations.

Visual interpretation Channel by channel as a black-and-white image, OR three channels at a time as a color image.

Landsat-7 ETM: visible channels blue, green, red; infrared channels

Color image Humans can distinguish only about 20–30 grey levels. Images usually have 256 grey levels –small details cannot be distinguished in greyscale. Humans can distinguish millions of colors –this should be exploited in interpretation. Computers use an additive color system –the primary colors are Red, Green and Blue –the RGB system –the image is presented as a combination of 3 channels –if the reflectance of a target is larger in one channel than in the others, the target is colored with that primary color.

True color image The channels are presented using their natural colors: –blue channel using the blue primary color –green channel using the green primary color –red channel using the red primary color. Possible with instruments that have these three channels, like Landsat ETM.

False color image Channels with wavelengths that humans do not see, or visible channels in the "wrong" order. E.g.: –green channel using the blue primary color –red channel using the green primary color –NIR channel using the red primary color.
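A sketch of building such a false color composite with NumPy; the band arrays are assumed test data:

import numpy as np

# Assumed single-band arrays already scaled to 0..255
green = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
red   = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
nir   = np.random.randint(0, 256, (512, 512), dtype=np.uint8)

# False color composite: NIR -> red gun, red -> green gun, green -> blue gun
false_color = np.dstack([nir, red, green])   # shape (rows, cols, 3) for display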

ETM, R: Ch7, G: Ch4, B: Ch5

IHS-color coordinates The RGB color coordinate system is not the only one. IHS: –Intensity: brightness of the color –Hue: dominant wavelength of the color –Saturation: purity or greyness of the color. Sometimes, in order to enhance some feature, make the transformation RGB → IHS, edit / process the image, and then transform IHS → RGB. E.g. coloring a DEM –3-channel image: RGB → IHS –replace the intensity component with the DEM –transform IHS → RGB.
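A sketch of this idea using Matplotlib's HSV transform as a stand-in for IHS (an assumption; a true IHS transform differs in detail); the RGB composite and DEM arrays are assumed test data:

import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

rgb = np.random.rand(512, 512, 3)     # assumed 3-channel composite scaled to 0..1
dem = np.random.rand(512, 512)        # assumed DEM scaled to 0..1

hsv = rgb_to_hsv(rgb)                 # forward transform
hsv[..., 2] = dem                     # replace the value (intensity-like) component with the DEM
colored_dem = hsv_to_rgb(hsv)         # back to RGB for display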

IHS-color coordinates Porvoo: ETM 321 and Intensity

IHS-color coordinates Porvoo: ETM 321 and Hue

IHS-color coordinates Porvoo: ETM 321 and Saturation

Ratio images The pixel value of channel A is divided by the pixel value of channel B –e.g. NIR / RED. Emphasizes the differences between channels –increases the difference between vegetated and non-vegetated areas –images taken at different times → changes.

Ratio images If the reflectances of different targets differ, a channel ratio can emphasize this difference. E.g. water has low reflectance in the near-infrared and somewhat higher in the red, while vegetation has low reflectance at red wavelengths and considerably higher in the near-infrared. NIR/RED: –very small for water, << 1 –large for vegetation, >> 1.
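A minimal NIR/RED ratio sketch with NumPy; the reflectance arrays are assumed test data and the small constant only guards against division by zero:

import numpy as np

red = np.random.rand(512, 512) * 0.3 + 0.01   # assumed red reflectance
nir = np.random.rand(512, 512) * 0.6 + 0.01   # assumed near-infrared reflectance

ratio = nir / np.maximum(red, 1e-6)           # NIR/RED: << 1 for water, >> 1 for vegetation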

Ratio images Multiplicative factors that affect all channels are removed –the effect of topography, sun angle and shadows. The idea is to decrease the variation of the DNs of pixels belonging to the same land cover class. Example table (CH1, CH2 and CH1/CH2 for deciduous and coniferous forest, in sun and in shadow): the sunlit and shadowed DNs differ, but the ratio CH1/CH2 stays nearly the same within each class.

Ratio images Is the vegetation in good or bad condition? –NIR/RED is higher for vegetation in good condition. As a plant becomes ill or autumn comes –less chlorophyll –higher reflectance at RED wavelengths due to weaker chlorophyll absorption –NIR/RED becomes smaller.

Ratio images Ratio images can be more complicated, e.g. (CH_A - CH_B) / (CH_C - CH_B), used when some noise or atmospheric effect visible in channel B should be removed from the channel ratio. Problem: in some cases different targets may look the same in the ratio even though their actual reflectances differ. This can be avoided by interpreting ratio images together with some of the original image channels.

OIF: optimum index factor It is easy to compute many ratio images –which are the best? –a multispectral image with n channels gives n(n-1) ratio images –visual comparison of all combinations takes time. OIF: selects the best combination of three ratio images. Compute the image variances and the correlations between images –large variance: good information content –large correlation between images: the images are very much alike. Choose the three images which –maximize the variance –minimize the correlation.
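A sketch of an OIF search over three-image combinations, using the commonly cited definition (sum of standard deviations divided by the sum of absolute pairwise correlations); the band/ratio-image array and the function name are assumptions:

import numpy as np
from itertools import combinations

bands = np.random.rand(6, 512, 512)    # assumed stack of 6 candidate images (band, row, col)

def oif(b1, b2, b3):
    # Optimum Index Factor: sum of standard deviations /
    # sum of the absolute pairwise correlation coefficients
    stack = np.vstack([b1.ravel(), b2.ravel(), b3.ravel()])
    std_sum = stack.std(axis=1).sum()
    corr = np.corrcoef(stack)
    corr_sum = abs(corr[0, 1]) + abs(corr[0, 2]) + abs(corr[1, 2])
    return std_sum / corr_sum

# Evaluate OIF for every 3-image combination and keep the highest-scoring one
best_triplet = max(combinations(range(bands.shape[0]), 3),
                   key=lambda idx: oif(*bands[list(idx)]))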

Ratio image: example TM7 (2.2 µm) / TM1 (0.48 µm): sandy areas appear white. TM

Ratio image: example ETM

Ratio image: example Changes. Green: more sand in 1990. Red: more sand in 1999. NOTE: the images were taken in different seasons, so the changes might be due to seasonal effects such as changes in vegetation or soil moisture.

Difference image The pixel value of channel A is subtracted from the value of channel B. An image taken at time A is subtracted from an image taken at time B –changes between the images. An average-filtered image is subtracted from the original image –enhances edges.

Difference image Images taken at different times –a simple way to find changes. Areas without changes –difference close to 0. Areas with changes –large positive / negative values. Differences not caused by real changes on the ground must be removed before change detection –changes in illumination –radiometric calibration and atmospheric correction –noise.
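A minimal change-detection sketch by image differencing; the images and the change threshold are assumptions, and the images are presumed co-registered and radiometrically corrected as stated above:

import numpy as np

img_t1 = np.random.rand(512, 512)    # assumed calibrated image, date 1
img_t2 = np.random.rand(512, 512)    # assumed calibrated image, date 2 (co-registered)

diff = img_t2 - img_t1               # close to 0 where nothing changed
change_threshold = 0.2               # hypothetical threshold
changed = np.abs(diff) > change_threshold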

Example: TM 191/ vs. ETM 193/, channel 3 (red). Yellow: Ch3 reflectance has increased. Red: Ch3 reflectance has decreased.

Difference image Difference between channels of the same image. Atmospheric or other noise is decreased –the other channel characterizes the noise. NIR - RED: a vegetation index.

Difference image Left: Spot5, ch2, Kolari Right: Average filtered image, 5x5 window Left: ch2 - average Right: Absolute value of difference

Image addition CH_A + CH_B. Spatial resolution enhancement (data fusion). Combination of image channels: spectral averaging. Gradient image + original image –sharpens borders → can make interpretation easier.

Image multiplication CH_A * CH_B. Multiplication of two image channels can increase the visual effect of topography. Masking: unwanted areas can be removed from the image –one of the images is a mask, where a pixel is 0 if it is to be removed (and 1 otherwise) –the other is the image itself.
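A masking sketch with NumPy; the test image and the masked-out region are purely hypothetical:

import numpy as np

img = np.random.randint(0, 256, (512, 512)).astype(np.float64)   # assumed image
mask = np.ones_like(img)
mask[:, :100] = 0            # hypothetical unwanted area (e.g. clouds) set to 0

masked = img * mask          # unwanted pixels become 0, the rest keep their DN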