3 Why? Per-pixel classification
- Based only on pixel (spectral) values
- Ignores spatial autocorrelation
- One-to-many problem (one pixel value can be similar to many classes)
- Salt-and-pepper effect
A crucial drawback of per-pixel classification methods is that while the information content of the imagery increases with spatial resolution, the accuracy of land-use classification may decrease. This is due to the increase in within-class variability inherent in more detailed, higher-spatial-resolution data.
4 Object-oriented classification
- Uses spatial autocorrelation (to grow homogeneous regions, or regions with specified amounts of heterogeneity)
- Uses not only pixel values but also spatial measurements that characterize the shape of the region
- Divides the image into segments or regions based on spectral and shape similarity or dissimilarity, i.e., moves from the image pixel level to the image object level
- Once training objects are selected, methods such as nearest-neighbor, membership functions (fuzzy classification logic), or knowledge-based approaches can be used to assign all objects to classes
- The classification process is fast because objects, not individual pixels, are assigned to specific classes
- Primarily used for high-spatial-resolution image classification
5 1. Image segmentation
- Image segmentation is the partitioning of an image into constituent parts using image attributes such as pixel intensity, spectral values, and/or textural properties. It produces an image representation in terms of edges and regions of various shapes and interrelationships.
- Segmentation algorithms are based on region growing/merging, simulated annealing, boundary detection, probability-based image segmentation, the fractal net evolution approach (FNEA), and more.
- In region growing/merging, neighboring pixels or small segments that have similar spectral properties are assumed to belong to the same larger segment and are therefore merged.
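The region growing/merging idea above can be sketched in a few lines. This is a minimal illustration, not eCognition's actual algorithm: a single region is grown from a seed pixel, and an edge-adjacent pixel is merged in whenever its value is close to the running region mean (the `tol` threshold and seed are assumptions for the example).

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """Grow one region from a seed pixel: a 4-neighbor joins the region
    if its value is within `tol` of the running region mean."""
    rows, cols = img.shape
    region = np.zeros((rows, cols), dtype=bool)
    region[seed] = True
    total, count = float(img[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):  # plane 4-neighborhood
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and not region[nr, nc]:
                if abs(img[nr, nc] - total / count) <= tol:
                    region[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return region

# Toy image: a bright 4x4 square on a dark background
img = np.zeros((8, 8))
img[2:6, 2:6] = 100.0
mask = region_grow(img, seed=(3, 3), tol=10.0)
print(mask.sum())  # → 16: the 16 similar bright pixels merge into one segment
```

The dark background pixels (value 0) fail the similarity test against the bright region mean (100), so growth stops exactly at the spectral boundary.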
7 Criteria for segmentation
- The scale parameter is an abstract value that determines the maximum possible change in heterogeneity caused by fusing several objects.
- The scale parameter is indirectly related to the size of the created objects.
- At a given scale parameter, heterogeneity depends directly (linearly) on object size: homogeneous areas result in larger objects, and heterogeneous areas result in smaller objects.
- A small scale parameter results in small objects; a larger scale parameter results in larger objects. This is the basis of multiresolution image segmentation.
- Color is the pixel (spectral) value.
- Shape includes compactness and smoothness, two geometric features that can be used as "evidence."
- Smoothness describes the similarity between the image object's borders and a perfect square.
- Compactness describes the "closeness" of pixels clustered in an object by comparing it to a circle.
- Pixel neighborhood function
8 Pixel neighborhood function
One criterion used to segment a remotely sensed image into image objects is the pixel neighborhood function, which compares an image object being grown with adjacent pixels. This information is used to determine whether an adjacent pixel should be merged with the existing image object or become part of a new image object.
a) If the plane 4-neighborhood function is selected, two image objects are created when the pixels under investigation are not connected along their plane (edge) borders.
b) In the diagonal 8-neighborhood, pixels and objects are defined as neighbors if they are connected along a plane border or at a corner point.
- The diagonal neighborhood mode should only be used if the structures of interest are of a scale similar to the pixel size, e.g., road extraction from a coarse-resolution image.
- In all other cases, the plane neighborhood mode is the appropriate choice.
- The neighborhood mode should be decided before the first segmentation.
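The difference between the two neighborhood modes can be demonstrated with connected-component labeling (here via `scipy.ndimage.label`, used as a stand-in for eCognition's internal logic): two pixels touching only at a corner form two objects under the plane 4-neighborhood but a single object under the diagonal 8-neighborhood.

```python
import numpy as np
from scipy import ndimage

# Two foreground pixels that touch only at a corner (a diagonal pair)
img = np.array([[1, 0],
                [0, 1]])

# Plane 4-neighborhood: only edge-connected pixels join the same object
# (scipy's default structuring element in 2-D is 4-connectivity)
_, n4 = ndimage.label(img)

# Diagonal 8-neighborhood: corner contact also connects pixels
_, n8 = ndimage.label(img, structure=np.ones((3, 3)))

print(n4, n8)  # → 2 1
```

This is why thin diagonal structures such as roads in coarse imagery fragment under the plane mode but stay connected under the diagonal mode.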
9 Color and shape
These two criteria are used to create image objects (patches) of relatively homogeneous pixels in the remote sensing dataset using the general segmentation function (Sf):

Sf = wcolor · hcolor + (1 − wcolor) · hshape

where the user-defined weight for spectral color versus shape is 0 < wcolor < 1. If the user wants to place greater emphasis on the spectral (color) characteristics in the creation of homogeneous objects (patches), then wcolor is weighted more heavily (e.g., wcolor = 0.8). Conversely, if the spatial characteristics of the dataset are believed to be more important in the creation of the homogeneous patches, then shape should be weighted more heavily.
10 Color criterion
Spectral (i.e., color) heterogeneity (h) of an image object is computed as the sum of the standard deviations of the spectral values of each layer (σk) (i.e., band), multiplied by the weight for each layer (wk):

h = Σk wk · σk

Usually all bands receive equal weight, unless a certain band is known to be especially important. The color criterion is then computed as the weighted mean of the changes in standard deviation for each band k of the m bands of the remote sensing dataset, with each standard deviation σk weighted by the object size nob (i.e., the number of pixels) (Definiens, 2003):

Δhcolor = Σk wk · (nmg · σk,mg − (nob1 · σk,ob1 + nob2 · σk,ob2))

where mg means merge (the merged object containing all the pixels of objects 1 and 2).
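The color criterion above can be computed directly from the pixel values of two candidate objects. This is an illustrative sketch of the Δhcolor formula (the function name and the equal band weights are assumptions), showing that merging spectrally similar objects adds little heterogeneity while merging dissimilar ones adds a lot.

```python
import numpy as np

def delta_h_color(obj1, obj2, weights=None):
    """Change in color heterogeneity caused by merging two image objects:
    sum_k w_k * (n_mg*sd_k(mg) - (n_ob1*sd_k(ob1) + n_ob2*sd_k(ob2))).
    obj1, obj2: arrays of shape (n_pixels, m_bands) of spectral values."""
    merged = np.vstack([obj1, obj2])
    m = merged.shape[1]
    w = np.ones(m) if weights is None else np.asarray(weights, float)
    sd1, sd2, sdm = obj1.std(axis=0), obj2.std(axis=0), merged.std(axis=0)
    return float(np.sum(w * (len(merged) * sdm
                             - (len(obj1) * sd1 + len(obj2) * sd2))))

rng = np.random.default_rng(0)
# Two spectrally similar 3-band objects vs. two very different ones
similar = delta_h_color(rng.normal(50, 2, (20, 3)), rng.normal(50, 2, (20, 3)))
different = delta_h_color(rng.normal(50, 2, (20, 3)), rng.normal(150, 2, (20, 3)))
print(similar < different)  # → True
```

In a region-merging segmentation, a merge is accepted only while this (shape-weighted) heterogeneity change stays below the square of the scale parameter, which is how the scale parameter caps object growth.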
11 Compactness and smoothness

compactness = l / √n
smoothness = l / b

where n is the number of pixels in the object, l is the object's perimeter, and b is the shortest possible border length of a box bounding the object. The compactness weight makes it possible to separate objects that have quite different shapes but not necessarily much color contrast, such as clearcuts vs. bare patches within forested areas.
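A minimal sketch of the two shape measures for a rasterized object (the perimeter here is counted as exposed pixel edges, and the bounding-box perimeter stands in for b; both are simplifying assumptions for the illustration):

```python
import numpy as np

def shape_criteria(mask):
    """Compactness (l / sqrt(n)) and smoothness (l / b) of a binary object.
    l = perimeter counted as exposed pixel edges, n = pixel count,
    b = perimeter of the object's bounding box."""
    n = int(mask.sum())
    padded = np.pad(mask.astype(int), 1)
    # Each pixel contributes one unit of perimeter per non-object neighbor
    l = sum(np.sum((padded == 1) & (np.roll(padded, s, axis=a) == 0))
            for a in (0, 1) for s in (1, -1))
    rows, cols = np.nonzero(mask)
    h = rows.max() - rows.min() + 1
    w = cols.max() - cols.min() + 1
    b = 2 * (h + w)
    return l / np.sqrt(n), l / b

square = np.ones((4, 4), dtype=bool)   # a maximally smooth, compact shape
compact, smooth = shape_criteria(square)
print(compact, smooth)  # → 4.0 1.0 (l = 16, n = 16, b = 16)
```

A ragged or elongated object of the same area has a longer border, so both ratios rise; this is what lets the shape criterion separate, say, rectangular clearcuts from irregular bare patches of similar brightness.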
12 Classification based on image segmentation
Classification logic based on image segmentation takes into account both spatial and spectral characteristics (Jensen, 2005).
13 2. Classification
Classification of image objects is based on fuzzy systems:
- Nearest-neighbor
- Membership functions, which are used to determine whether an object belongs to a class. These membership functions are based on fuzzy logic: an object has a degree of membership in a class in the range 0 to 1, where 0 means the object absolutely DOES NOT belong to the class and 1 means it absolutely DOES belong.
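A common fuzzy membership function is the trapezoid, shown here as a small sketch (the "shrub" class and its brightness thresholds are hypothetical, not values from the study): membership rises from 0 to 1, stays 1 over the core range, and falls back to 0.

```python
def trapezoid_membership(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], rising linearly
    on [a, b], fully 1 on [b, c], falling linearly on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

# Hypothetical "shrub" class defined on mean object brightness,
# with full membership between 40 and 60
print(trapezoid_membership(50, 30, 40, 60, 70))  # → 1.0 (absolutely belongs)
print(trapezoid_membership(35, 30, 40, 60, 70))  # → 0.5 (partial membership)
print(trapezoid_membership(80, 30, 40, 60, 70))  # → 0.0 (does not belong)
```

An object is typically assigned to the class in which it attains the highest membership value, with several such functions (one per object feature) combined per class.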
14 Nearest neighbor
Based on sample objects within a defined feature space, the distance to each class (or to each sample object) in feature space is calculated for every image object. This allows a very simple, rapid, yet powerful classification in which individual image objects are marked as typical representatives of a class (i.e., training areas), and the rest of the scene is then classified accordingly ("click and classify"). Digitizing training areas is therefore no longer necessary.
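The nearest-neighbor step reduces to a distance computation in feature space. A minimal sketch, assuming a hypothetical 2-D feature space of (mean NIR, mean red) per object; each image object receives the class of its closest sample object:

```python
import numpy as np

def classify_objects(objects, samples, labels):
    """Assign each image object the class label of its nearest sample
    object in feature space (Euclidean distance).
    objects: (n, d) feature vectors; samples: (s, d); labels: length-s."""
    objects = np.asarray(objects, float)
    samples = np.asarray(samples, float)
    # Pairwise distances between every object and every sample object
    dists = np.linalg.norm(objects[:, None, :] - samples[None, :, :], axis=2)
    return [labels[i] for i in dists.argmin(axis=1)]

# One clicked sample object per class ("click and classify")
samples = [[80, 30], [20, 60]]
labels = ["vegetation", "bare soil"]
objects = [[75, 35], [25, 55], [85, 25]]
print(classify_objects(objects, samples, labels))
# → ['vegetation', 'bare soil', 'vegetation']
```

Because distances are computed per object rather than per pixel, only a handful of comparisons are needed even for a large scene, which is why object-level classification is so fast.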
16 3. An example
Chihuahuan Desert Rangeland Research Center (CDRRC), Jornada Experimental Range
- Northern part of the Chihuahuan Desert
- Semidesert grassland
- Increase in shrubs, decrease in grasslands
- Honey mesquite (Prosopis glandulosa) is the main increaser
- 150 ha pasture
(Laliberte et al. 2004, Remote Sensing of Environment)
18 Workflow in eCognition
- Input images → multiresolution segmentation → image object hierarchy (pixel level; Levels 1, 2, 3), with feedback between segmentation and the hierarchy
- Creation of class hierarchy (Levels 1 and 2)
- Classification: training samples with standard nearest neighbor; membership functions (fuzzy logic)
- Classification-based segmentation → final merged classification
19 Membership functions
Classification using only 1 membership function:
- Mean value of objects (similar to thresholding)
- Dark background classified as shrub
Classification using 3 membership functions:
- Mean value of objects
- Mean difference to neighbors
- Mean difference to super-object
- Shrubs can be differentiated on dark as well as light backgrounds
20 Image object hierarchy with 3 segmentation levels
Original image: QuickBird panchromatic
- Level 1: scale 10
- Level 2: scale 100
- Level 3: scale 300
23 Conclusions
- Shrub cover increased from 0.9% to 13.1%
- Grass cover decreased from 18.5% to 1.9%
- Vegetation dynamics are related to precipitation patterns (drought) and historical grazing pressures
- Image analysis underestimated shrub and grass cover
- 87% of shrubs >2 m² were detected
24 4. Combining images and other datasets for classification in eCognition
For example, in an urban area, combining the spectral image with elevation data (DEM) allows the significant elevation information to be used in outlining an object's shape.