
1 Dengsheng Lu Professor Center for Global Change and Earth Observations Michigan State University, East Lansing, Michigan Email: ludengsh@msu.edu April 30, 2014

2 Image classification procedure
- Research objectives and characteristics of the study area
- Collection of data sets (remote sensing, ancillary data, and field survey)
- Data preprocessing (e.g., geometric rectification; radiometric, atmospheric, and topographic calibration)
- Feature extraction (e.g., vegetation indices, textures, transformations, and data fusion) and selection
- Determination of a classification system
- Selection and refinement of training samples
- Image classification with a suitable classifier
- Post-classification processing
- Evaluation of the classified image
Lu, D., and Weng, Q., 2007. A survey of image classification methods and techniques for improving classification performance. International Journal of Remote Sensing, 28(5), 823-870.

3 Outline
- Land use/cover classification
  - From radiometric data to the combination of radiometric and spatial features: Landsat TM, radar, and QuickBird
  - From individual sensor data to the integration of multi-resolution/multi-sensor data: fusion of Landsat and radar data
  - From parametric to nonparametric classification algorithms
- Summary and discussion

4 Land use/cover classification: from spectral signature to the combination of spectral and spatial information
Three case studies:
- Landsat TM image
- Radar data (ALOS PALSAR L-band and RADARSAT C-band)
- QuickBird image

5 Land cover classification with Landsat data in Altamira, Para State Li, G., Lu, D., Moran, E., and Hetrick, S., 2011. Land-cover Classification in a Moist Tropical Region of Brazil with Landsat TM Imagery. International Journal of Remote Sensing. 32(23), 8207-8230.

6 Remotely sensed data used in research

7 Research objective
- The majority of optical sensor data have multispectral bands, so land cover classification is mainly based on spectral signatures
- Spatial information: texture and segmentation
- Objective: how to effectively use spatial information to improve land use/cover classification

8 Field data collection and organization
A field survey was conducted in 2009:
- Identify candidate sample locations in the laboratory
- Record the locations of different vegetation cover types using a global positioning system (GPS) device
- Describe vegetation stand structure (e.g., height, canopy cover, species composition) and take pictures
- Create representative region of interest (ROI) polygons

9 Land cover classification system
- Forest: upland forest, flooding forest, liana forest
- Successional vegetation: initial (SS1), intermediate (SS2), advanced (SS3)
- Agropasture
- Non-vegetated lands: water, wetland, urban

10 Land use/cover classification with Landsat TM image
- Image preprocessing: radiometric and atmospheric calibration for the TM image; image-to-image registration
- Identification of suitable vegetation indices
- Identification of suitable textural images
- Selection of training samples
- Classification with the maximum likelihood classifier
- Accuracy assessment

11 Vegetation indices used in research
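The slide's table of indices is not reproduced in this transcript. As a generic illustration of how such indices are computed per pixel (NDVI is the most common example, not necessarily one of the indices selected in this study):

```python
import numpy as np

def ndvi(nir, red, eps=1e-10):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).
    `eps` guards against division by zero on dark pixels."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)

# Tiny example: one vegetated pixel (high NIR reflectance), one bare pixel.
nir = np.array([0.5, 0.3])
red = np.array([0.1, 0.25])
print(ndvi(nir, red))  # vegetated pixel is close to ~0.67
```

Values near +1 indicate dense green vegetation; values near zero or below suggest bare soil, urban surfaces, or water.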

12 Texture measures used in research

13 Selection of best textural images
- Identify potential textures based on separability analysis
- Select the combination of textural images that maximizes the best texture combination (BTC) index:

  BTC = \frac{\sum_{i=1}^{n} STD_i}{\sum_{i=1}^{n} \sum_{j=1}^{n} R_{ij}}

where STD_i is the standard deviation of textural image i, R_ij is the correlation coefficient between two textural images i and j, and n is the number of textural images
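The BTC criterion on this slide (sum of standard deviations over sum of pairwise correlations) can be sketched in a few lines of NumPy; taking the absolute value of the correlations is an assumption here, made so that anticorrelated bands do not inflate the score:

```python
import numpy as np

def btc_score(bands):
    """Score one candidate texture combination: sum of per-band standard
    deviations divided by the sum of pairwise correlation magnitudes.
    Higher scores favor high-variance, weakly correlated textures."""
    x = np.asarray(bands, dtype=float)   # shape: (n_bands, n_pixels)
    stds = x.std(axis=1)
    r = np.corrcoef(x)                   # n x n correlation matrix
    iu = np.triu_indices(x.shape[0], k=1)  # unique pairs i < j
    return stds.sum() / np.abs(r[iu]).sum()
```

To pick the best combination, evaluate `btc_score` over each candidate subset of textural images and keep the highest-scoring one.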

14 Selection of variables for classification
Datasets used in classification:
- six TM spectral bands
- a combination of spectral bands and vegetation indices
- a combination of spectral bands and textural images
- a combination of spectral bands, vegetation indices, and textural images

15 Land use/cover classification Selection and refinement of training sample plots Classification with maximum likelihood algorithm Accuracy assessment

16 A comparison of TM band 4, band 5, two vegetation indices, and two textural images
- a and b: TM bands 4 and 5
- c and d: the second component from the tasseled cap transformation and the vegetation index based on bands 4, 2, 5, and 7
- e and f: textural images based on dissimilarity on bands 2 and 4 with a window size of 9x9 pixels

17 Comparison of classification results among different datasets with MLC
- Six TM spectral bands
- Combination of spectral bands and two vegetation indices
- Combination of spectral bands and two textural images
- Combination of spectral bands, two vegetation indices, and two textural images

18 Comparison of accuracy assessment results with MLC among different datasets

19 Conclusion of land cover classification with Landsat TM imagery
- This research shows the importance of textural images for improving vegetation classification; combining vegetation indices and textural images with the spectral bands further improves vegetation classification performance
- Incorporating vegetation indices alone into the spectral signatures does not significantly improve land cover classification performance

20 Land cover classification with radar data: combination of radiometric and textural images
Objective: examine the performance of radar data for land use/cover classification in the moist tropical region
Li, G., Lu, D., Moran, E., Dutra, L., and Batistella, M., 2012. A comparative analysis of ALOS PALSAR L-band and RADARSAT-2 C-band data for land-cover classification in a tropical moist region. ISPRS Journal of Photogrammetry and Remote Sensing, 70, 26-38.

21 Strategy for land cover classification with radar data

22 Preprocessing - speckle reduction
- Different speckle reduction methods (e.g., median, Lee-Sigma, Gamma-Map, local-region, and Frost) with various window sizes (e.g., 3x3, 5x5, 7x7, and 9x9) were examined
- The following criteria were used: speckle reduction; edge sharpness preservation; line and point target contrast preservation; retention of texture information; computational efficiency
- Lee-Sigma with a window size of 5x5 was finally selected for this research
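The core idea of the selected sigma filter can be sketched as follows. This is a minimal sketch, not the full Lee-Sigma algorithm (which adds refinements such as target-point preservation); the default window size and noise coefficient of variation are assumptions:

```python
import numpy as np

def lee_sigma_filter(img, win=5, cv=0.25):
    """Simplified sigma-filter speckle reduction: each pixel is replaced
    by the mean of the window pixels whose values fall within a
    two-sigma range of the center value, assuming multiplicative
    speckle noise with coefficient of variation `cv`."""
    img = np.asarray(img, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for r in range(img.shape[0]):
        for c in range(img.shape[1]):
            center = img[r, c]
            window = padded[r:r + win, c:c + win]
            lo, hi = center * (1 - 2 * cv), center * (1 + 2 * cv)
            sel = window[(window >= lo) & (window <= hi)]
            # Fall back to the original value if no neighbor qualifies.
            out[r, c] = sel.mean() if sel.size else center
    return out
```

Averaging only over pixels near the center value smooths speckle in homogeneous areas while leaving strong edges (which fall outside the two-sigma range) largely intact.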

23 Textural analysis
- Six GLCM (grey-level co-occurrence matrix)-based texture measures: variance, dissimilarity, homogeneity, contrast, entropy, and second moment
- Six different window sizes (i.e., 5x5, 9x9, 15x15, 19x19, 25x25, and 31x31)
- Applied to the HH and HV images as well as the derived NL image
- Separability analysis
- Best texture combination
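The GLCM measures named above can be sketched for a quantized single-band image. This is a one-offset sketch; production texture images are built by sliding a moving window over the scene and often averaging several offsets:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for one pixel offset (dx, dy),
    symmetrized and normalized. `img` must hold integer grey levels
    in [0, levels)."""
    img = np.asarray(img)
    m = np.zeros((levels, levels), dtype=float)
    rows, cols = img.shape
    for r in range(rows - dy):
        for c in range(cols - dx):
            m[img[r, c], img[r + dy, c + dx]] += 1
    m = m + m.T          # make the matrix symmetric
    return m / m.sum()   # normalize to joint probabilities

def glcm_measures(p):
    """Contrast, dissimilarity, homogeneity, entropy, and second moment
    computed from a normalized GLCM `p`."""
    i, j = np.indices(p.shape)
    nz = p[p > 0]        # skip zero cells in the entropy sum
    return {
        "contrast": float((p * (i - j) ** 2).sum()),
        "dissimilarity": float((p * np.abs(i - j)).sum()),
        "homogeneity": float((p / (1.0 + (i - j) ** 2)).sum()),
        "entropy": float(-(nz * np.log(nz)).sum()),
        "second_moment": float((p ** 2).sum()),
    }
```

For a perfectly uniform window the GLCM collapses to a single cell, so contrast and entropy are 0 while homogeneity and second moment are 1, which matches the intuition that these measures separate smooth from heterogeneous cover.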

24 Identification of textural images for land-cover classification: a summary of potential textural images

| Data | Potential single textures | Potential combinations | Selected textures |
|---|---|---|---|
| ALOS PALSAR L-band HH | SM25, ENT25, SM31 | VAR31-DIS31, ENT25-SM31, DIS31-ENT31, SM25-CON31, ENT19-SM25, VAR31-CON31 | SM25-CON31 |
| ALOS PALSAR L-band HV | CON25, CON31, CON19, DIS25 | CON25 with all selected textures with window sizes greater than 15x15; DIS25-CON31, VAR25-CON31, CON19-DIS25 | CON25-SM19 |
| ALOS PALSAR L-band NL | ENT31, DIS25, ENT25 | DIS25-CON31, CON25-DIS31, DIS19-DIS31, DIS25-ENT31, DIS15-DIS25, SM25-VAR31, VAR25-DIS25, SM25-DIS31 | SM25-VAR31 |
| RADARSAT-2 C-band HH | DIS25, HOM25, DIS15 | DIS15-CON31, DIS25-HOM31, DIS19-CON31, CON15-DIS25 | DIS25-HOM31 |
| RADARSAT-2 C-band HV | CON25, DIS25, HOM25, CON31 | DIS25 (or CON25) with most textures; HOM31-DIS31, CON19-HOM31 (or ENT31) | CON25-HOM31 |
| RADARSAT-2 C-band NL | SM31, ENT19 | ENT19-ENT31, SM19-ENT31 | SM19-ENT31 |

25 Scenarios based on PALSAR and RADARSAT data

| No. | Scenario | Example |
|---|---|---|
| 1 | Single polarization image: HH, HV, and NL (3 scenarios) | HH |
| 2 | Textural images: selected textural images corresponding to the single-polarization option (3 scenarios) | HH-text |
| 3 | Combination of a single-polarization image and its relevant textural images (3 scenarios) | HH&text |
| 4 | Combination of single-polarization options (2 scenarios) | HH&HV |
| 5 | Combination of textural images from different, individual polarization options (2 scenarios) | HH-&HV-text |
| 6 | Combination of single-polarization options and their relevant textural images (2 scenarios) | HH&HV&text |
| 7 | Combination of both HH and HV from L-band and C-band data | L & C HH&HV |
| 8 | Combination of textural images from HH and HV of L-band and C-band data | L & C HH-&HV-text |
| 9 | Combination of single-polarization options and their relevant textural images from L-band and C-band data | L & C HH&HV&text |

26 A comparison of HH, HV, and NL from ALOS PALSAR L-band and RADARSAT-2 C-band and their corresponding textural images
- a, b, c: ALOS PALSAR L-band HH image and HH-derived SM25 and CON31 textural images
- d, e, f: ALOS PALSAR L-band HV image and HV-derived CON25 and SM19 textural images
- g, h, i: ALOS PALSAR L-band NL image and NL-derived SM25 and VAR31 textural images
- j, k, l: RADARSAT-2 C-band HH image and HH-derived DIS25 and HOM31 textural images
- m, n, o: RADARSAT-2 C-band HV image and HV-derived CON25 and HOM31 textural images
- p, q, r: RADARSAT-2 C-band NL image and NL-derived SM19 and ENT31 textural images

27 Classification results among different scenarios

28 Comparison of kappa coefficients among different data sets and between ALOS PALSAR L-band and RADARSAT-2 C-band

29 Classification results from the combination of ALOS PALSAR L-band and RADARSAT-2 C-band data

30 Classification results based on a coarse classification system

31 A comparison of color composites and classification results
- Color composites based on ALOS PALSAR L-band (a) and RADARSAT C-band (b) HH, HV, and NL images assigned as red, green, and blue
- Land-cover classification images from ALOS PALSAR L-band (c) and RADARSAT C-band (d) HH, HV, and their corresponding textural images

32 Conclusions from radar data
The best texture measure varies with polarization option and wavelength, but the best window sizes were 25x25 and 31x31:
- A single texture performs poorly in vegetation classification
- A combination of two textural images improved vegetation separability
- A combination of three or more textural images did not significantly improve vegetation separability
Considering single polarization images:
- The HH image from either L-band or C-band performs better than HV, and NL images did not improve performance compared to HH
- Textural images from L-band HH, HV, or NL images provided similar or poorer performance than the corresponding radiometric images, but textural images from C-band perform better than the corresponding radiometric images
- Combining radiometric and textural images from either L-band or C-band polarizations improved classification compared to the individual datasets
- For L-band data, textural images were less important than radiometric bands; for C-band data, the reverse was true

33 Conclusions (cont.)
Considering the combinations of different polarization images:
- For L-band data, the combination of HH and HV images improved classification, but adding the NL image did not
- For C-band data, neither combining HH and HV nor adding NL improved classification compared to the individual polarization images
Considering combinations of different radar data: the combination of PALSAR L-band and RADARSAT-2 C-band HH and HV images yields very limited improvement, and combining their textural images does not improve the classification, but a combination of all radiometric and textural images did improve classification accuracy, by 6.6%
Comparison of classification results indicated that L-band data perform much better than C-band data, but neither dataset can effectively separate fine vegetation classes; both are valuable for coarse land-cover classification

34 Urban land use/cover classification in a complex urban-rural landscape with QuickBird imagery
Objective: how to effectively use spatial information in very high spatial resolution imagery to improve classification performance
Spatial features: texture and segmentation
Lu, D., Hetrick, S., and Moran, E., 2010. Land cover classification in a complex urban-rural landscape with QuickBird imagery. Photogrammetric Engineering and Remote Sensing, 76(10), 1159-1168.

35 Methods
- Identification of textural images
- Development of a segmented image
- Urban classification with maximum likelihood:
  - based on the QuickBird spectral image
  - based on the combination of spectral and textural images
- Segmentation-based classification
- ECHO (Extraction and Classification of Homogeneous Objects)

36 Comparison of textural images
- a: red-band image; b, c, and d: textural images derived with the dissimilarity texture measure on the red-band image with window sizes of 9x9, 15x15, and 21x21 pixels, respectively
Comparison of (a) the original red-band image and (b) the segmentation-based mean-spectral red-band image

37 Comparison of classified images based on QuickBird imagery in Brazil
- a: MLC on spectral bands
- b: ECHO
- c: segmentation-based classifier
- d: MLC on combined spectral and textural images

38 A comparison of accuracy assessment results

39 Summary
- Spectral information is more important than other inherent remote sensing features, especially in medium and coarse spatial resolution images
- As spatial resolution increases, as with QuickBird, spatial information becomes important
- Combining spectral and spatial information is valuable for improving land use/cover classification; the key is to identify suitable textural images
- Segmentation is another way to use spatial information while reducing spectral variation

40 -Land Use/Cover Classification- From Individual Sensor Data to the Integration of Multi-resolution/-sensor Data Objective: To identify which wavelength and polarization and which data fusion method—Principal component analysis (PCA), Wavelet-merging technique, High Pass Filter resolution-merging method (HPF), and normalized multiplication method (NMM)—yield better land cover classification in a moist tropical region Lu, D., Li, G., Moran, E., Dutra, L., and Batistella, M., 2011. A Comparison of Multisensor Integration Methods for Land-cover Classification in the Brazilian Amazon. GIScience & Remote Sensing. 48(3), 345-370.

41 Research problems Many data fusion methods are available (e.g. Pohl and van Genderen, 1998; J. Zhang, 2010), but  Which fusion method is suitable for integrating Landsat TM and radar data to improve land-cover classification, especially in moist tropical regions?  Which wavelength (e.g. L-band or C-band) and which polarization option (e.g. HH and HV) have better land-cover classification for the same data fusion method?

42 Study area - Altamira, Para State, Brazil
1. The study area covers 3,116 km²
2. Major deforestation began in the early 1970s, coincident with the construction of the Transamazon Highway
3. The dominant native vegetation types are mature moist forest and liana forest
4. Deforestation has led to a complex landscape consisting of different stages of secondary succession, pasture, and agricultural lands

43 Image collection and preprocessing
- Landsat TM image: radiometric and atmospheric calibration (improved dark-object subtraction method)
- Radar data: speckle reduction
- Image-to-image registration: Landsat TM (used as the reference image), ALOS PALSAR, and RADARSAT data

44 Data fusion methods Principal component analysis – PCA Wavelet-merging technique – Wavelet High pass filtering based resolution merging method - HPF Normalized multiplication method – NMM

45 Data fusion - PCA
- Transform the TM multispectral bands into six principal components (PCs)
- Remap the SAR HH (or HV, or NL) image into the data range of PC1
- Substitute PC1 with the remapped SAR image
- Apply an inverse PCA to the data
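The four steps above can be sketched with NumPy. This is a sketch under the assumption that the TM bands and the SAR image are already co-registered and resampled onto the same pixel grid:

```python
import numpy as np

def pca_substitution_fusion(ms_bands, sar):
    """PCA-substitution fusion: project multispectral bands onto their
    principal components, replace PC1 with the SAR image rescaled to
    PC1's data range, then apply the inverse transform."""
    x = np.stack([b.ravel() for b in ms_bands]).astype(float)  # (bands, pixels)
    mean = x.mean(axis=1, keepdims=True)
    xc = x - mean
    vals, vecs = np.linalg.eigh(np.cov(xc))
    vecs = vecs[:, np.argsort(vals)[::-1]]   # order eigenvectors, PC1 first
    pcs = vecs.T @ xc                        # forward PCA
    # Remap the SAR image into PC1's data range, then substitute it.
    s = sar.ravel().astype(float)
    p1 = pcs[0]
    s = (s - s.min()) / (np.ptp(s) + 1e-12) * (p1.max() - p1.min()) + p1.min()
    pcs[0] = s
    fused = vecs @ pcs + mean                # inverse PCA
    return [fb.reshape(ms_bands[0].shape) for fb in fused]
```

Because PC1 carries most of the shared brightness information, substituting it injects the SAR structure into every fused band while the remaining PCs preserve the multispectral color relationships.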

46 Data fusion - Wavelet
- HP and LP: high-pass and low-pass filters
- c and r: column and row decimation

47 Data fusion – HPF
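The slide's details are not preserved in this transcript, but the core HPF idea can be sketched as adding the high-frequency residual of the higher-resolution image to each multispectral band. The box filter and window size here are assumptions, and for simplicity both inputs are taken to be on the same grid (the real method works across resolutions):

```python
import numpy as np

def hpf_fusion(ms_band, pan, win=5):
    """High-pass-filter fusion sketch: subtract a local mean (box filter)
    from the high-resolution image to isolate its spatial detail, then
    add that detail to the multispectral band."""
    pan = np.asarray(pan, dtype=float)
    pad = win // 2
    padded = np.pad(pan, pad, mode="edge")
    low = np.empty_like(pan)
    rows, cols = pan.shape
    for r in range(rows):
        for c in range(cols):
            low[r, c] = padded[r:r + win, c:c + win].mean()
    highpass = pan - low                     # spatial detail only
    return np.asarray(ms_band, dtype=float) + highpass
```

Because only the high-frequency residual is injected, the fused band keeps the radiometry of the multispectral input, which is why HPF-style fusion tends to distort spectral signatures less than component-substitution methods.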

48 Data fusion – NMM

49 Selection of training samples
- 220 sample plots were selected
- A window size of 3x3 to 9x9 was used, depending on the patch size
- 10 to 30 plots for each class were used

50 Land cover classification with MLC
Maximum likelihood classification (MLC) assumes a normal or near-normal distribution for each feature of interest and an equal prior probability among the classes
- MLC is based on the probability that a pixel belongs to a particular class; it takes the variability of classes into account by using the covariance matrix
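The classifier described above follows directly from the Gaussian assumption: estimate a mean vector and covariance matrix per class from the training samples, then assign each pixel to the class with the highest log-likelihood. A minimal sketch with equal priors (the small ridge added to each covariance is an assumption for numerical stability):

```python
import numpy as np

class MaximumLikelihoodClassifier:
    """Gaussian maximum likelihood classifier: one multivariate normal
    (mean vector + covariance matrix) per class, equal priors."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.stats_ = {}
        for k in self.classes_:
            Xk = X[y == k]
            mu = Xk.mean(axis=0)
            cov = np.cov(Xk, rowvar=False) + 1e-6 * np.eye(X.shape[1])
            self.stats_[k] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
        return self

    def predict(self, X):
        scores = []
        for k in self.classes_:
            mu, icov, logdet = self.stats_[k]
            d = X - mu
            # Log-likelihood up to a constant: -0.5 * (log|C| + d' C^-1 d)
            scores.append(-0.5 * (logdet + np.einsum("ij,jk,ik->i", d, icov, d)))
        return self.classes_[np.argmax(scores, axis=0)]
```

The covariance term is what distinguishes MLC from a minimum-distance classifier: a class with large spread in feature space can still claim pixels far from its mean.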

51 Accuracy assessment
- A total of 212 test sample plots were collected from the field survey and the QuickBird image
- An error matrix was developed for each classification scenario
- Producer's and user's accuracy for each class, and overall accuracy and kappa coefficient for each scenario, were calculated from the error matrix
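All of the measures named above follow directly from the error matrix; a compact sketch (here rows are reference labels and columns are predicted labels):

```python
import numpy as np

def accuracy_report(reference, predicted, n_classes):
    """Error matrix plus producer's and user's accuracy per class,
    overall accuracy, and the kappa coefficient."""
    m = np.zeros((n_classes, n_classes), dtype=float)
    for r, p in zip(reference, predicted):
        m[r, p] += 1                          # rows: reference, cols: predicted
    total = m.sum()
    overall = np.trace(m) / total
    producers = np.diag(m) / m.sum(axis=1)    # 1 - omission error
    users = np.diag(m) / m.sum(axis=0)        # 1 - commission error
    # Kappa: agreement beyond what row/column marginals predict by chance.
    expected = (m.sum(axis=1) * m.sum(axis=0)).sum() / total ** 2
    kappa = (overall - expected) / (1 - expected)
    return m, producers, users, overall, kappa
```

Kappa discounts chance agreement, which is why it is reported alongside overall accuracy when comparing scenarios with very different class proportions.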

52 A comparison of color composites among TM, radar, and their fused images from different data fusion methods
- a and b: TM color composite (bands 4, 5, and 3 as RGB)
- c: PALSAR L-band HH, HV, and NL
- d: RADARSAT-2 C-band HH, HV, and NL
- e, f, g, and h: data fusion results from PCA, wavelet, HPF, and NMM based on TM and the PALSAR L-band HH image
- i, j, k, and l: data fusion results from PCA, wavelet, HPF, and NMM based on TM and the RADARSAT-2 C-band HH data

53 Comparison of accuracy assessment results

54

55 A comparison of classification results among different scenarios
- a: entire study area based on the original TM spectral data; the other five images show the rectangular area highlighted in (a)
- b: the highlighted area based on the original TM image
- c: based on ALOS PALSAR L-band HH, HV, and NL data
- d: based on the TM and ALOS PALSAR L-band HH wavelet fusion image
- e: based on RADARSAT-2 C-band HH, HV, and NL data
- f: based on the TM and RADARSAT-2 C-band HH wavelet fusion image

56 Conclusions based on data fusion methods
- The TM image provides much higher land-cover classification accuracy than the individual radar datasets; PALSAR L-band data provide better classification than RADARSAT-2 C-band data, but neither PALSAR nor RADARSAT data have the capability for detailed vegetation classification
- Different polarization options (HH and HV) and different wavelengths (L- or C-band) performed similarly when used for multi-sensor data fusion
- Compared to the TM data, wavelet multisensor fusion improved overall classification accuracy by 3.3%-5.7%; the HPF-based fusion method was similar to the TM image, while the PCA-based and NMM fusion methods reduced overall classification accuracy by 5.1%-6.1% and 7.6%-12.7%, respectively

57 -Land Use/cover Classification- From Parametric to Nonparametric Classification Algorithms Objective: explore which classification algorithm is suitable for a specific dataset in a study area Li, G., Lu, D., Moran, E., and Sant’Anna, S.J.S., 2012. A comparative analysis of classification algorithms and multiple sensor data for land use/land cover classification in the Brazilian Amazon. Journal of Applied Remote Sensing, 6(1), 061706 (Dec 14, 2012). doi:10.1117/1.JRS.6.061706.

58 Major factors affecting classification accuracy
When you decide to do a classification for your study area, consider:
- Ground truth data
- Classification system
- Remote sensing data
- Selection of remote sensing-derived variables
- Selection of a suitable classification algorithm

59 Land-cover classification with different algorithms based on various dataset scenarios

60 Classification algorithms
- Maximum likelihood classifier
- Classification tree analysis
- Fuzzy ARTMAP
- K-nearest neighbor
- Object-based classification
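Of the nonparametric algorithms listed, k-nearest neighbor is the simplest to sketch: classify each pixel's feature vector by majority vote among its k closest training samples. This brute-force Euclidean-distance version is for illustration only; practical implementations use spatial indexing for speed:

```python
import numpy as np

def knn_predict(X_train, y_train, X_test, k=3):
    """K-nearest-neighbor classifier: majority vote among the k training
    samples closest to each test sample in Euclidean distance."""
    X_train = np.asarray(X_train, dtype=float)
    X_test = np.asarray(X_test, dtype=float)
    y_train = np.asarray(y_train)
    preds = []
    for x in X_test:
        dists = np.linalg.norm(X_train - x, axis=1)
        nearest = y_train[np.argsort(dists)[:k]]     # labels of k neighbors
        labels, counts = np.unique(nearest, return_counts=True)
        preds.append(labels[np.argmax(counts)])      # majority vote
    return np.array(preds)
```

Unlike MLC, KNN makes no distributional assumption, which is one reason nonparametric classifiers handle multi-source feature stacks better.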

61 Comparison of classification results (1)

62 Comparison of classification results (2) textural images

63 Comparison of classification results (3)

64 Comparison of classification results (4)

65 A comparison of kappa coefficients among different classification methods based on different scenarios

66 A comparison of classification results from the best classification corresponding to each dataset
- a: MLC on the TM image
- b: enlarged area for the black box in image a
- c: CTA based on the TM image
- d: CTA based on ALOS PALSAR L-band HH, HV, and corresponding textural images
- e: KNN based on the combination of Landsat TM and PALSAR textural images
- f: Fuzzy ARTMAP based on the data fusion results from Landsat TM and the PALSAR L-band HH image

67 Time required for image classification

68 Conclusions based on comparison of classification algorithms
- MLC gives stable results across different datasets and is often the first choice when time is a constraint for LULC classification
- CTA and KNN can provide good results for most datasets and are often recommended
- If multi-source data are used, MLC and neural networks are not good choices: MLC requires a normal data distribution, and an ANN can be difficult to converge at the training stage
- Data fusion between TM and radar data improved classification for most classification algorithms, and ARTMAP provided the best results

69 Comments on land use/cover classification
- Spatial information is very important when high spatial resolution images are used
  - Texture: two textural images are needed
  - Segmentation: selection of suitable parameters for segmentation
- Combination of multi-sensor data: selection of a suitable data fusion method
- Classification algorithms: the maximum likelihood algorithm is good for spectral image classification but is not suitable for multisource data; in that case, nonparametric algorithms such as classification trees are suitable

70 NSF-funded project (2009-2012): Advancing Land Use and Land Cover Analysis by Integrating Optical and Polarimetric Radar Platforms
Research team
- United States: Emilio Moran, Dengsheng Lu, Guiying Li, Scott Hetrick
- Brazil: Luciano Dutra, Corina Freitas, Sidnei Sant'Anna

