Object-oriented classification

Object-oriented classification Lecture 11

Source: http://usda-ars.nmsu.edu/PDF%20files/laliberteAerialPhotos.pdf

Why? Per-pixel classification is based only on pixel (spectral) values and ignores spatial autocorrelation; one pixel value may resemble many classes (one-to-many confusion), producing salt-and-pepper results. A crucial drawback of these per-pixel classification methods is that while the information content of the imagery increases with spatial resolution, the accuracy of land use classification may decrease. This is due to the increased within-class variability inherent in more detailed, higher spatial resolution data.

Object-oriented classification Uses spatial autocorrelation (to grow homogeneous regions, or regions with specified amounts of heterogeneity). Relies not only on pixel values but also on spatial measurements that characterize the shape of the region. Divides the image into segments or regions based on spectral and shape similarity or dissimilarity, i.e., moving from the image pixel level to the image object level. Once training objects are selected, methods such as nearest-neighbor, membership functions (fuzzy classification logic), or knowledge-based approaches can be used to assign all objects to classes. The classification process is fast because objects, not individual pixels, are assigned to specific classes. Primarily used for high spatial resolution image classification.

1. Image segmentation Image segmentation is the partitioning of an image into constituent parts using image attributes such as pixel intensity, spectral values, and/or textural properties. It produces an image representation in terms of edges and regions of various shapes and interrelationships. Segmentation algorithms are based on region growing/merging, simulated annealing, boundary detection, probability-based image segmentation, the fractal net evolution approach (FNEA), and more. In region growing/merging, neighboring pixels or small segments that have similar spectral properties are assumed to belong to the same larger segment and are therefore merged.
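The region growing/merging idea can be sketched as follows. This is a minimal illustration, not eCognition's actual algorithm: starting from a seed pixel, 4-connected neighbors are merged into the region while their value stays within a tolerance of the region's running mean.

```python
import numpy as np

def grow_region(image, seed, tol=10.0):
    """Return a boolean mask of pixels grown from `seed` (row, col)."""
    mask = np.zeros(image.shape, dtype=bool)
    stack = [seed]
    mask[seed] = True
    total, count = float(image[seed]), 1  # running sum and size of the region
    while stack:
        r, c = stack.pop()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # 4-neighborhood
            nr, nc = r + dr, c + dc
            if (0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]
                    and not mask[nr, nc]
                    and abs(image[nr, nc] - total / count) <= tol):
                mask[nr, nc] = True          # similar enough: merge pixel
                total += float(image[nr, nc])
                count += 1
                stack.append((nr, nc))
    return mask
```

Running this on an image with a bright and a dark patch grows the region only over pixels spectrally similar to the seed; the tolerance plays a role loosely analogous to a heterogeneity threshold.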

Software http://www.ecognition.com/products

Criteria for segmentation The scale parameter is an abstract value that determines the maximum allowed change of heterogeneity caused by fusing several objects; it is indirectly related to the size of the created objects. At a given scale parameter, heterogeneity is directly (linearly) dependent on object size: homogeneous areas result in larger objects, and heterogeneous areas result in smaller objects. A small scale number results in small objects; a larger scale number results in larger objects. This is the basis of multiresolution image segmentation. Color refers to the pixel values. Shape includes compactness and smoothness, two geometric features that can be used as "evidence": smoothness describes the similarity between the image object's borders and a perfect square, while compactness describes the "closeness" of the pixels clustered in an object by comparing it to a circle. A pixel neighborhood function is also used.

Pixel neighborhood function One criterion used to segment a remotely sensed image into image objects is the pixel neighborhood function, which compares an image object being grown with adjacent pixels. The information is used to determine whether an adjacent pixel should be merged with the existing image object or become part of a new image object. a) If the plane 4-neighborhood mode is selected, two image objects are created when the pixels under investigation are not connected at their plane (edge) borders. b) In the diagonal 8-neighborhood mode, pixels and objects are defined as neighbors if they are connected at a plane border or at a corner point. The diagonal mode should only be used if the structures of interest are of a scale close to the pixel size, e.g., road extraction from a coarse resolution image. In all other cases, the plane neighborhood mode is the appropriate choice. The mode should be decided before the first segmentation.
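The practical effect of the two modes can be shown with a small connected-components sketch (a hypothetical helper, not eCognition functionality): the same binary mask yields a different number of objects under 4- versus 8-connectivity.

```python
import numpy as np

def neighbors(r, c, mode):
    if mode == 4:   # plane neighbors: shared edge only
        offsets = ((-1, 0), (1, 0), (0, -1), (0, 1))
    else:           # diagonal mode: shared edge or corner point
        offsets = [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                   if (dr, dc) != (0, 0)]
    return [(r + dr, c + dc) for dr, dc in offsets]

def count_objects(mask, mode=4):
    """Count connected components of a binary mask under the given mode."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros(mask.shape, dtype=bool)
    n = 0
    for r in range(mask.shape[0]):
        for c in range(mask.shape[1]):
            if mask[r, c] and not seen[r, c]:
                n += 1                 # new object found; flood-fill it
                stack = [(r, c)]
                seen[r, c] = True
                while stack:
                    rr, cc = stack.pop()
                    for nr, nc in neighbors(rr, cc, mode):
                        if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                                and mask[nr, nc] and not seen[nr, nc]):
                            seen[nr, nc] = True
                            stack.append((nr, nc))
    return n

# Two pixels touching only at a corner: two objects in plane (4) mode,
# one object in diagonal (8) mode -- the road-extraction situation above.
diag = [[1, 0],
        [0, 1]]
```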

Color and shape These two criteria are used to create image objects (patches) of relatively homogeneous pixels in the remote sensing dataset using the general segmentation function (Sf), reconstructed here from the surrounding text as

Sf = w_color · h_color + (1 − w_color) · h_shape

where the user-defined weight for spectral color versus shape is 0 < w_color < 1. If the user wants to place greater emphasis on the spectral (color) characteristics in the creation of homogeneous objects (patches) in the dataset, then w_color is weighted more heavily (e.g., w_color = 0.8). Conversely, if the spatial characteristics of the dataset are believed to be more important in the creation of the homogeneous patches, then shape should be weighted more heavily.

Spectral (i.e., color) heterogeneity (h) of an image object is computed as the sum of the standard deviations of the spectral values of each layer k (i.e., band), σk, multiplied by the weight for each layer, wk:

h_color = Σ (k = 1 to m) wk · σk

Usually all bands receive equal weight, unless a certain band is known to be especially important. The color criterion is then computed as the weighted mean of all changes in standard deviation for each band k of the m bands of the remote sensing dataset, with the standard deviations σk weighted by the object sizes n_ob (i.e., the number of pixels) (Definiens, 2003):

Δh_color = Σ (k = 1 to m) wk · ( n_mg · σk(mg) − ( n_ob1 · σk(ob1) + n_ob2 · σk(ob2) ) )

where mg means merge (the total pixels of objects 1 and 2 here).
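The color-heterogeneity change above can be sketched directly in code. This assumes the two objects are given as per-pixel, per-band value arrays; it is an illustration of the formula, not the Definiens implementation.

```python
import numpy as np

def color_heterogeneity_change(obj1, obj2, weights):
    """Change in color heterogeneity when merging two image objects.

    obj1, obj2 : arrays of shape (n_pixels, m_bands) of spectral values.
    weights    : per-band weights w_k (length m_bands).
    """
    merged = np.vstack([obj1, obj2])
    n1, n2, nmg = len(obj1), len(obj2), len(merged)
    delta = 0.0
    for k, wk in enumerate(weights):
        # w_k * ( n_mg*sigma_mg - (n_ob1*sigma_ob1 + n_ob2*sigma_ob2) )
        delta += wk * (nmg * merged[:, k].std()
                       - (n1 * obj1[:, k].std() + n2 * obj2[:, k].std()))
    return delta
```

Merging two internally uniform but spectrally different objects gives a large positive change, so the merge is penalized; merging two objects with identical values gives zero change.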

Compactness and smoothness:

compactness = l / √n        smoothness = l / b

where n is the number of pixels in the object, l is the perimeter, and b is the shortest possible border length of a box bounding the object. The compactness weight makes it possible to separate objects that have quite different shapes but not necessarily a great deal of color contrast, such as clearcuts vs. bare patches within forested areas.
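A small sketch of the two shape measures, under the assumptions that l is counted as pixel edges bordering background and b is the perimeter of the object's bounding box (a plausible reading of the definitions above, not the exact Definiens code):

```python
import numpy as np

def shape_measures(mask):
    """Return (compactness, smoothness) for a binary object mask."""
    mask = np.asarray(mask, dtype=bool)
    rows, cols = np.nonzero(mask)
    n = int(mask.sum())                      # number of pixels in the object
    l = 0                                    # perimeter: exposed pixel edges
    for r, c in zip(rows, cols):
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]) \
                    or not mask[nr, nc]:
                l += 1
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    b = 2 * (h + w)                          # border length of the bounding box
    return l / np.sqrt(n), l / b
```

For a filled square, smoothness is exactly 1 (the object border equals its bounding box), while a ragged object of the same area has smoothness > 1.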

Classification based on Image Segmentation Logic takes into account spatial and spectral characteristics Jensen, 2005

2. Classification Classification of image objects is based on fuzzy systems: nearest-neighbor or membership functions. Membership functions, based on fuzzy logic, are used to determine whether an object belongs to a class: an object has a degree of membership in a class in the range 0 to 1, where 0 means it absolutely DOES NOT belong to the class and 1 means it absolutely DOES belong to the class.
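A minimal sketch of one such fuzzy membership function, with hypothetical ramp thresholds: a feature value below `low` gives membership 0, above `high` gives 1, and values in between ramp linearly.

```python
def membership(x, low, high):
    """Fuzzy membership in [0, 1]: 0 below `low`, 1 above `high`, linear ramp between."""
    if x <= low:
        return 0.0
    if x >= high:
        return 1.0
    return (x - low) / (high - low)
```

An object with a feature value in the ramp (e.g., mean brightness between the two thresholds) receives partial membership rather than a hard yes/no label, which is what distinguishes this from simple thresholding.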

Nearest Neighbor Based on sample objects within a defined feature space, the distance to each class's feature space or to each sample object is calculated for each image object. This allows a very simple, rapid, yet powerful classification, in which individual image objects are marked as typical representatives of a class (= training areas) and the rest of the scene is then classified accordingly ("click and classify"). Digitization of training areas is therefore no longer necessary.
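The nearest-neighbor step can be sketched as follows; the feature vectors and class labels here are hypothetical (e.g., mean band values per object), and the distance is plain Euclidean.

```python
import math

def classify(obj_features, samples):
    """Assign an object the class of the nearest sample object in feature space.

    samples : list of (feature_vector, class_label) pairs from training objects.
    """
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(samples, key=lambda s: dist(obj_features, s[0]))[1]

# Hypothetical sample objects: one "shrub" and one "grass" representative.
samples = [((0.2, 0.1), "shrub"), ((0.8, 0.9), "grass")]
```

Each unlabeled image object is then classified in one call, e.g. `classify((0.3, 0.2), samples)`, which is what makes object-level classification fast: the number of objects is far smaller than the number of pixels.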

3. An example Chihuahuan Desert Rangeland Research Center (CDRRC), in the northern part of the Chihuahuan Desert. Semidesert grassland with an increase in shrubs and a decrease in grasslands; honey mesquite (Prosopis glandulosa) is the main increaser. Study area: a 150 ha pasture, Jornada Experimental Range. Laliberte et al. 2004, Remote Sensing of Env.

Workflow in eCognition Input images → multiresolution segmentation → image object hierarchy (pixel level, Level 1, Level 2, Level 3), with feedback to the segmentation step. Training samples → standard nearest neighbor → creation of class hierarchy (Level 1, Level 2). Classification (membership functions, fuzzy logic) → classification-based segmentation → final merged classification.

Membership functions Classification using only 1 membership function (mean value of objects, similar to thresholding): dark background is misclassified as shrub. Classification using 3 membership functions (mean value of objects; mean difference to neighbors; mean difference to super-object): shrubs can be differentiated on dark as well as light backgrounds.

Image object hierarchy with 3 segmentation levels Original image: Quickbird panchromatic. Level 1: scale 10; Level 2: scale 100; Level 3: scale 300.

Level 2 classification Level 1 classification: shrubs

Shrub/grass dynamics

Conclusions From 1937 to 2003, shrub cover increased from 0.9% to 13.1% and grass cover decreased from 18.5% to 1.9%. Vegetation dynamics are related to precipitation patterns (the 1951-1956 drought) and historical grazing pressures. The image analysis underestimated shrub and grass cover; 87% of shrubs >2 m2 were detected.

4. Combining images and other datasets for classification in eCognition For example, in an urban area, combining the spectral image with elevation data (a DEM), the elevation information can be used to outline objects' shapes.

Source: http://www.definiens-imaging.com/documents/an/tsukuba.pdf

Roof surface materials Source: http://www.definiens-imaging.com/documents/publications/lemp-urs2005.pdf

Incorrectly classified: 4.6% (red)