SCHOOL OF ENVIRONMENT
Translating satellite images into meaningful geospatial information: The data fusion approach
Mr. Amit A. Kokje, PhD candidate, School of Environment, The University of Auckland, New Zealand
Background
GIS is evolving rapidly and is now used in a wide range of applications. In urban areas characterised by rapid development, GI systems play a crucial role in decision making and planning. However, acquisition of geospatial data is still considered a major challenge, as conventional methods (surveying, manual updating) fail to deliver information in a timely and cost-effective manner. Very high resolution (VHR) satellite imagery, offering a detailed view and large coverage of earth features, can be an efficient source for rapid geospatial data acquisition.
Practical limitations: VHR images
The commercial trade-off between high spatial and high spectral resolution limits existing VHR satellites to four or fewer spectral bands (Herold et al., 2003). The spectral reflectance of many features, especially urban land-cover classes, lies outside or near the boundaries of the spectral range of VHR satellite images (Herold et al., 2003), causing difficulties in spectral-value-based classification of these features (Chen et al., 2009). [Figure: spectral separability of urban features for IKONOS, Landsat TM and AVIRIS.] As a result, sole use of VHR images for urban feature identification yields less satisfactory results.
Practical limitations: data fusion
Integration of additional data such as LiDAR, complementary to the spectral information of VHR images, has been attempted successfully. Yet many of these methods face limitations due to a lack of robustness in the data fusion procedures. Existing methods fail to incorporate the various interpretation parameters associated with both data sets, and most are customised for single-feature identification, limiting their applicability to other types of urban scene. Hence there is a need for a robust, standardised procedure for multi-sensor data fusion enabling rapid extraction of geospatially compatible data.
Research objectives
The research attempts to integrate elevation and intensity data from LiDAR with the spectral information of satellite images to identify the various land use / land cover (LULC) classes commonly found in an urban scene. The major focuses are:
- Identification of suitable feature segmentation parameters based on LiDAR products (elevation, slope and intensity) and satellite images.
- Increasing LULC classification accuracy using the proposed progressive feature identification and extraction method.
- Development of a standard procedure for urban land feature recognition in order to produce a GIS-ready LULC map.
Study area
Auckland (area: 673 km²), the largest city and major economic hub of New Zealand, was selected for this research. Of the 4.33 km² core central business district (CBD), a 3.75 km² area (174°45'17.60"E 36°50'21.35"S to 174°46'16.43"E 36°51'29.80"S) has been considered for the study.
Data used
LiDAR data: acquired September 2006 using an ALTM 3100EA scanner. Average point density 1 pt/m²; resolution 1 m; vertical and horizontal error +/- 0.25 m.
Satellite images: ortho-rectified QuickBird panchromatic (PAN) + multispectral images acquired on 28 July 2006. PAN resolution 0.6 m; multispectral resolution 2.4 m; spectral channels: near-infrared, red, green and blue.
Ancillary data: vector road maps and city council development plan maps for feature verification.
Data preparation
LiDAR data: merging of the LiDAR tiles; extraction of the Digital Surface Model (DSM) and Digital Terrain Model (DTM); derivation of the normalised Digital Surface Model (nDSM = DSM − DTM) and the intensity image.
Satellite images: geo-referencing followed by pan-sharpening of the multispectral image using PCA resolution merge.
Spectral indices: Normalised Difference Vegetation Index, NDVI = (NIR − R) / (NIR + R), and Normalised Difference Water Index, NDWI = (G − NIR) / (G + NIR).
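The derived products above can be sketched in a few lines of numpy. This is a minimal illustration, not the study's processing chain; the function names and array inputs are hypothetical, and the bands are assumed to be co-registered float rasters of equal shape.

```python
import numpy as np

def ndsm(dsm, dtm):
    """Normalised Digital Surface Model: feature height above terrain (nDSM = DSM - DTM)."""
    return np.asarray(dsm, float) - np.asarray(dtm, float)

def _normalised_difference(a, b):
    """Generic normalised difference (a - b) / (a + b), guarding against zero denominators."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    denom = a + b
    return np.where(denom == 0, 0.0, (a - b) / np.where(denom == 0, 1.0, denom))

def ndvi(nir, red):
    """NDVI = (NIR - R) / (NIR + R): high for vegetation."""
    return _normalised_difference(nir, red)

def ndwi(green, nir):
    """NDWI = (G - NIR) / (G + NIR): high for open water."""
    return _normalised_difference(green, nir)
```

In practice the bands would be read from the pan-sharpened QuickBird raster (e.g. with a library such as rasterio) before the indices are computed.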
Methodology
The workflow proceeds from multi-resolution segmentation to base-class identification (ground vs. above ground), and then to land feature identification using the LiDAR products and vector road data. The principal assumption for image segmentation and classification is that all features within the study area can be differentiated into ground and above-ground objects.
Image segmentation and base-class identification
Elevation, intensity and slope parameters extracted from the LiDAR data were used to group (segment) the image elements; the multi-resolution segmentation algorithm (Definiens eCognition Developer) was applied. For primary class recognition, an nDSM threshold was used to divide all image segments into two baseline classes: nDSM > threshold: above ground (red); nDSM < threshold: ground (yellow).
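The base-class split can be sketched as a simple threshold on per-segment mean nDSM. The 2 m threshold below is an illustrative assumption (the presentation does not state the value used), and the function name is hypothetical.

```python
import numpy as np

def base_classes(segment_ndsm, threshold=2.0):
    """Divide segments into the two baseline classes by mean nDSM height.
    threshold (metres) is an illustrative assumption, not the study's value."""
    h = np.asarray(segment_ndsm, float)
    return np.where(h > threshold, "above_ground", "ground")
```

All later feature rules then operate only within the relevant base class, mirroring the progressive scheme described above.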
Feature recognition: water (blue)
Various spectral, geometrical, scalar and elevation parameters were applied to the baseline ground and above-ground image segments to identify urban features. Based on an NDWI threshold, water was distinguished from the other ground segments. An nDSM correction parameter was then applied to reduce misclassification caused by the spectral similarity between water and shadow regions in the NDWI image.
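A minimal sketch of this rule: an NDWI threshold selects water candidates, and the nDSM correction suppresses shadow segments, which often fall on or beside raised features while true water lies at ground level. Both threshold values are illustrative assumptions.

```python
import numpy as np

def water_mask(segment_ndwi, segment_ndsm, ndwi_thresh=0.2, max_height=0.5):
    """Label ground segments as water where NDWI exceeds a threshold AND the
    segment sits at ground level (nDSM correction against shadow confusion).
    Threshold values are illustrative, not the study's."""
    w = np.asarray(segment_ndwi, float)
    h = np.asarray(segment_ndsm, float)
    return (w > ndwi_thresh) & (h < max_height)
```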
Feature recognition: wharves (red)
Wharves are large impervious surfaces associated with the harbour. Neighbourhood association with water (distance from reference features) was used to differentiate wharves from the rest of the ground segments.
Feature recognition: vegetation
Ground-level vegetation (light green) and trees (dark green). An NDVI threshold, in combination with nDSM values, was used to discriminate vegetation: grass among the ground segments and trees among the above-ground segments.
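The grass/tree discrimination can be sketched as below: NDVI selects vegetated segments, and the nDSM height separates ground-level grass from elevated tree canopy. The 0.3 NDVI and 2 m height thresholds are illustrative assumptions.

```python
import numpy as np

def vegetation_classes(segment_ndvi, segment_ndsm, ndvi_thresh=0.3, tree_height=2.0):
    """Label vegetated segments as 'grass' or 'tree' via NDVI + nDSM;
    everything else stays 'other'. Thresholds are illustrative assumptions."""
    v = np.asarray(segment_ndvi, float)
    h = np.asarray(segment_ndsm, float)
    out = np.full(v.shape, "other", dtype=object)
    veg = v > ndvi_thresh
    out[veg & (h < tree_height)] = "grass"
    out[veg & (h >= tree_height)] = "tree"
    return out
```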
Feature recognition: roads
Ground roads (dark blue) and above-ground road segments such as bridges and motorway crossings (black). Using vector road data, LiDAR intensity and nDSM values, roads were distinguished from other ground features. Pedestrian walkways (in parks) were identified using intensity information, and above-ground road segments were distinguished using nDSM values.
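For ground roads, the rule combines three cues: membership in a buffer around the vector road centrelines, the low LiDAR return intensity typical of asphalt, and ground-level nDSM. The sketch below assumes the buffer test has already been rasterised to a boolean mask; the intensity and height thresholds are illustrative assumptions.

```python
import numpy as np

def ground_road_mask(intensity, segment_ndsm, in_road_buffer,
                     max_intensity=40.0, max_height=0.5):
    """Flag ground segments as road where they lie inside the buffered vector
    road network AND show low (asphalt-like) LiDAR intensity AND sit at ground
    level. All thresholds and the buffer mask are illustrative assumptions."""
    i = np.asarray(intensity, float)
    h = np.asarray(segment_ndsm, float)
    b = np.asarray(in_road_buffer, bool)
    return b & (i < max_intensity) & (h < max_height)
```

Above-ground road segments (bridges, crossings) would instead be selected from the above-ground class by elevated nDSM within the road buffer.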
Feature recognition: buildings (magenta)
nDSM, slope and elevation parameters obtained from the 3D LiDAR data were used to identify buildings within the above-ground base class.
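One plausible reading of this rule: within the above-ground class, buildings are tall segments with the fairly planar (low mean slope) surfaces typical of roofs, which separates them from the rougher canopy of trees. The minimum-height and slope thresholds below are illustrative assumptions, not the study's values.

```python
import numpy as np

def building_mask(segment_ndsm, segment_slope, min_height=2.5, max_slope=30.0):
    """Within the above-ground class, keep segments that are tall enough and
    have low mean surface slope (degrees), i.e. roof-like planar surfaces.
    Thresholds are illustrative assumptions."""
    h = np.asarray(segment_ndsm, float)
    s = np.asarray(segment_slope, float)
    return (h > min_height) & (s < max_slope)
```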
Discussion
Elevation, slope and nDSM derived from LiDAR yield clear boundaries for various features, resulting in more accurate segmentation. Early separation of image segments into ground and above-ground classes proved beneficial for the feature recognition stage. When similar image parameters were applied to the baseline ground and above-ground classes, different features were identified, reducing the need for separate classification parameters for individual features.
What next?
The feature recognition stage resulted in identification of the major land features present in the study area: water, wharves, vegetation (trees and grass), roads (ground and above-ground segments) and buildings. Identification of the various land uses present in the study area will be attempted in the second stage, using the features recognised in the first stage. Stage one (feature identification) yields the land features; stage two (land use classification) will merge image objects using exclusive scaling parameters to derive detailed land uses and produce a thematic map.