Comparison of Object Based and Pixel Based Classification of High Resolution Satellite Images using Artificial Neural Networks B. Krishna Mohan and Shamsuddin Ladha, CSRE, IIT Bombay

Introduction High resolution images are a viable option for extracting spatial information and updating GIS databases. Many countries have launched or are launching satellites with high resolution multispectral sensor payloads. Prominent among them are Ikonos, Quickbird, Geo-Eye, Cartosat and others, and more recently Kompsat and the forthcoming Thaisat.

High Resolution Imagery From a digital image processing point of view, high spatial resolution allows the content of the image to be perceived in the form of objects. In contrast, low resolution images can only be analyzed pixel by pixel, since each pixel may itself be a combination of several categories.

A High Resolution Image Significant objects: water, buildings, pool, roads, bridge, huts, vegetation, farm. Resolution: 1 metre.

A Low Resolution Image Only large structures are seen as individual objects: the lake, the race course, roads (with no discernible width) and residential areas (with no individual buildings). Resolution: 10 metres.

Overview – Object Oriented Image Classification Object oriented segmentation and classification methods are a recent development in this direction. The image is decomposed into non-overlapping regions. In addition to spectral properties, the shape and textural properties of the regions are taken into consideration, so that regions rather than individual pixels are classified. Man-made objects have definite shapes (circular, rectangular, elongated, etc.) while natural objects are better distinguished by their spectral and textural characteristics.

Objectives of Our Study Develop a system for: segmentation of high resolution images; derivation of spatial, spectral and textural features; classification using a mixture of spatial, spectral and textural features; evaluation of results and comparison with per-pixel classification.

Object Oriented Classification Methodology Input image → preprocessing → region segmentation → connected component labeling → shape, texture and spectral features → neural network classification → classified image.

Step 1: Image Pre-processing Objectives: suppress noise; eliminate minute details that are of no interest. Methods: image smoothing; adaptive Gaussian/median filtering. Although optional, this step is very important in improving the accuracy of classification.
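A minimal preprocessing sketch in Python, assuming the image is already loaded as a NumPy array; the 3×3 window size is illustrative and not a value from the slides.

```python
import numpy as np
from scipy.ndimage import median_filter

def preprocess(image, size=3):
    """Suppress noise and minute details with a median filter,
    applied independently to each spectral band."""
    if image.ndim == 2:
        return median_filter(image, size=size)
    return np.stack([median_filter(image[..., b], size=size)
                     for b in range(image.shape[-1])], axis=-1)
```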

Adaptive Gaussian Filter The filter width (σ) adapts to local image conditions: σ varies from pixel to pixel so as to eliminate noise effectively while keeping image distortion small.
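The slides do not give the adaptation rule, so the sketch below is only one plausible interpretation: the amount of Gaussian smoothing at each pixel is driven by the local variance (stronger smoothing in flat areas, weaker near edges). The window size and the two σ values are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter

def adaptive_gaussian(band, window=7, sigma_flat=2.0, sigma_detail=0.5):
    """Blend two Gaussian-filtered versions of the band: strong smoothing
    where the local variance is low, weak smoothing where it is high."""
    band = band.astype(float)
    mean = uniform_filter(band, window)
    var = np.maximum(uniform_filter(band * band, window) - mean * mean, 0.0)
    w = 1.0 - var / (var.max() + 1e-12)   # 1 on flat pixels, 0 on busy ones
    return (w * gaussian_filter(band, sigma_flat)
            + (1.0 - w) * gaussian_filter(band, sigma_detail))
```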

Step 2: Gradient Computation Required to mark the boundaries of regions and limit their extent. Based on mathematical morphology. Can be applied to multiband images; implemented here for 3-band (color) images.
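A sketch of one common multiband morphological gradient (dilation minus erosion per band, then the per-pixel maximum across bands); the 3×3 structuring element is an assumption, not a value from the slides.

```python
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morphological_gradient(image, size=3):
    """Per-band morphological gradient; the maximum over bands gives a
    single edge-strength image for a 3-band (color) input."""
    img = np.atleast_3d(image).astype(np.int32)   # avoid uint8 wrap-around
    grads = [grey_dilation(img[..., b], size=(size, size)) -
             grey_erosion(img[..., b], size=(size, size))
             for b in range(img.shape[-1])]
    return np.max(np.stack(grads, axis=-1), axis=-1)
```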

Step 3: Seed Point Generation The segmentation process requires a set of seed pixels from which regions start growing. These are known as markers in the mathematical morphology literature. They are necessary to control the size and number of regions.

Step 3: Marker Generation Cluster the image data into K classes using the standard K-means algorithm. Consider the K cluster means (m_k, k = 1 to K). All pixels whose values lie within m_k ± d are selected as markers, and regions are grown from these pixels. This scheme worked better than the top-hat transform suggested by Meyer and Beucher.
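A sketch of this marker selection, assuming the multispectral pixels are clustered with scikit-learn's K-means and that pixels within a spectral distance d of their cluster mean become markers; the values of K and d are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans

def generate_markers(image, k=6, d=10.0):
    """Cluster pixel vectors into k classes and keep as markers the pixels
    whose distance to their cluster mean is below d (0 = not a marker)."""
    h, w, bands = image.shape
    X = image.reshape(-1, bands).astype(float)
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X)
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    markers = np.where(dist < d, km.labels_ + 1, 0)
    return markers.reshape(h, w)
```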

Step 4: Region Growing by Morphological Watershed Transform The principle is based on simulating the flooding of a terrain of varying topography. Flooding causes water to accumulate in catchment basins bounded by high gradients. The transform starts from the seed points (markers) and adds adjacent pixels until high gradients (region boundaries) are encountered.
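Marker-controlled watershed is available in scikit-image; a minimal sketch, assuming the gradient image and marker array produced in the previous steps:

```python
from skimage.segmentation import watershed

def grow_regions(gradient, markers):
    """Flood the gradient image from the marker pixels; floods from different
    markers meet on high gradients, which become the region boundaries."""
    return watershed(gradient, markers)
```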

Step 5: Connected Component Extraction Extracted regions are assigned the mean of the pixels falling within each region. Each region is given a separate label so that region properties can be computed. A multipass scanning algorithm is implemented to extract connected regions.
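The slides describe a multipass scanning implementation; the sketch below uses scikit-image's labelling instead, and shows how each region can be filled with its mean value for one band (a hypothetical helper, not the authors' code).

```python
import numpy as np
from skimage.measure import label, regionprops

def label_and_fill(mask, band):
    """Label 8-connected components of a binary mask, then replace every
    pixel of a component by the mean intensity of that component."""
    labels = label(mask, connectivity=2)
    out = np.zeros_like(band, dtype=float)
    for region in regionprops(labels, intensity_image=band):
        out[labels == region.label] = region.mean_intensity
    return labels, out
```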

Step 6: Computation of Regional Features Types of region features: shape, spectral and textural. Shape features (7 implemented): aspect ratio, convexity, form, solidity, compactness, extent, roundness. Textural features: ASM (angular second moment), CON (contrast), ENT (entropy) and IDM (inverse difference moment), computed from four directional gray level co-occurrence matrices and one average co-occurrence matrix.
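A sketch of the co-occurrence texture features for one region patch, using scikit-image; ASM, contrast and IDM (homogeneity) come directly from graycoprops, while entropy is computed from the normalised GLCM. The number of grey levels and the unit pixel offset are assumptions.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(patch, levels=32):
    """ASM, CON, ENT and IDM averaged over the 0, 45, 90 and 135 degree
    co-occurrence matrices of a grey-level patch."""
    patch = patch.astype(float)
    q = (patch / (patch.max() + 1e-12) * (levels - 1)).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1],
                        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                        levels=levels, symmetric=True, normed=True)
    ent = -np.sum(glcm * np.log2(glcm + 1e-12)) / glcm.shape[-1]
    return {'ASM': graycoprops(glcm, 'ASM').mean(),
            'CON': graycoprops(glcm, 'contrast').mean(),
            'ENT': ent,
            'IDM': graycoprops(glcm, 'homogeneity').mean()}
```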

Step 7: Connected Component Classification Artificial neural networks are used as classifiers: a multilayer feedforward network (perceptron/MLFF) and a radial basis function based network (RBF). Classification algorithm: train and test sample selection; network training; network accuracy computation using cross validation; classification of all components; generation of the classified image from the classified components.
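A sketch of the MLFF branch with scikit-learn, assuming per-region feature vectors and training labels are already available; the hidden-layer size, fold count and region-label convention are assumptions rather than values from the slides.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def classify_components(train_X, train_y, all_X, region_map):
    """Train a multilayer feedforward network on labelled regions, report a
    cross-validated accuracy, classify every region and paint the classes
    back onto the pixels (region_map is assumed to hold labels 1..N)."""
    clf = make_pipeline(StandardScaler(),
                        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                      random_state=0))
    accuracy = cross_val_score(clf, train_X, train_y, cv=5).mean()
    clf.fit(train_X, train_y)
    per_region_class = clf.predict(all_X)          # one class per region
    classified_image = per_region_class[region_map - 1]
    return classified_image, accuracy
```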

Perceptron Architecture [Figure: multilayer feedforward network with an input layer, hidden layers and an output layer.]

Data Sets Ikonos image window and Quickbird image window.

Input Image – Ikonos Image Significant objects: water, buildings, pool, roads, bridge, huts, vegetation, farm. Resolution: 1 metre.

Watershed Transformation Output – Ikonos Image

Object Based Classification Output using MLFF Network – Ikonos Image

Input Image – Quickbird Image Significant objects: lake, buildings, pool, roads, vegetation, trees, mountain, shadow. Image size: 2000 × 2000 pixels.

Watershed Transformation Output – Quickbird Image

Object Based Classification Output using MLFF Network – Quickbird Image

Object Based Classification Output using RBF Network – Quickbird Image

Pixel Based Classification Output using MLFF Network – Quickbird Image

Pixel Based Classification Output using RBF Network – Quickbird Image

Classification Statistics

Summary Object based classification of high resolution images was developed, with classification performed by neural networks; the results are superior to per-pixel methods.

Thank You