Hyperspectral Imaging
Alex Chen 1, Meiching Fong 1, Zhong Hu 1, Andrea Bertozzi 1, Jean-Michel Morel 2
1 Department of Mathematics, UCLA; 2 ENS Cachan, Paris

Classification of Materials in a Hyperspectral Image

Outline
• Overview of Hyperspectral Images and Dimension Reduction
• Principal Components Analysis
• K-means Clustering
• Classification of Materials
• Stable Signal Recovery

Overview of Hyperspectral Images and Dimension Reduction
• A standard RGB color image has three spectral bands (wavelengths of light). In contrast, a hyperspectral image typically has more than 200 spectral bands, covering not only the visible spectrum but also bands in the infrared and ultraviolet.
• The extra information in the spectral bands can be used to classify objects in an image with greater accuracy. Applications include military surveillance, mineral identification, and vegetation identification.
• However, most meaningful algorithms are too computationally expensive to apply to raw hyperspectral data. Because of the high information content of a hyperspectral image and the large degree of redundancy in the data, dimension reduction is an integral part of analyzing a hyperspectral image.
• Techniques exist for reducing dimensionality in both the spectral (principal components analysis) and spatial (clustering) domains.

Principal Components Analysis
• Principal components analysis (PCA) reduces the data stored in the typically more than 200 wavelengths of a hyperspectral image to a smaller subspace, typically 5–10 dimensions, without losing too much information.
• PCA considers all possible projections of the data and chooses the projection with the greatest variation in the first component (the leading eigenvector of the covariance matrix), the second greatest variation in the second component, and so on. (A minimal sketch of this computation follows this poster section.)
• These experiments ran PCA on hyperspectral data with 31 bands. In all tests (on eight images), the first four eigenvectors accounted for at least 97% of the total variation of the data:

  Component | Variation explained
  eig1      | 74.0%
  eig2      | 17.6%
  eig3      |  5.4%
  eig4      |  1.1%
  Total     | 98.1%

K-means Clustering
• Using the projection of the data onto the first few eigenvectors obtained from PCA, k-means clustering assigns each data point to a cluster; the color of each point is taken to be the color of the center of the cluster to which it belongs. (See the clustering sketch below.)
• These points can then be mapped back to the original space, giving a new image with k colors. This significantly reduces the amount of space needed to store the data.
• K-means can also be used to find patterns in the data: pixels representing similar items should be classified as the same. This use of k-means is discussed further in the Classification section.
• One significant drawback is that the number of clusters k must be specified a priori.

[Figure: Original image and image reconstructed with 15 colors.]

Classification of Materials
• Using Hypercube®, an application for hyperspectral imaging, the following data (210 bands) was classified using different algorithms (continued after the sketches below).

Stable Signal Recovery
• Using a result of Candès, Romberg, and Tao on (approximately) sparse signal recovery, it may be possible to compress a hyperspectral signature further before applying compression techniques such as PCA.
• In this method, the hyperspectral signature at a given pixel is transformed to the Fourier domain (or to some basis in which the signal is sparse), and a small number of measurements of the signal is taken.
• Given enough measurements, the signal can be reconstructed accurately. (See the recovery sketch below.)
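The PCA step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' code: the 100 x 100 x 31 cube is a random stand-in for real hyperspectral data, and all variable names are hypothetical.

```python
import numpy as np

# Hypothetical hyperspectral cube: rows x cols x bands (31 bands, as in the experiments above)
cube = np.random.rand(100, 100, 31)           # random stand-in for real data
X = cube.reshape(-1, cube.shape[-1])          # (n_pixels, n_bands)
X_centered = X - X.mean(axis=0)

# Eigendecomposition of the band-by-band covariance matrix
cov = np.cov(X_centered, rowvar=False)        # (n_bands, n_bands)
eigvals, eigvecs = np.linalg.eigh(cov)        # eigh returns ascending order
order = np.argsort(eigvals)[::-1]             # sort descending by variation
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Fraction of total variation captured by each component; on real data the poster
# reports about 98% in the first four components (random data will not show this)
explained = eigvals / eigvals.sum()
print(explained[:4], explained[:4].sum())

# Project onto the first k eigenvectors
k = 4
scores = X_centered @ eigvecs[:, :k]          # (n_pixels, k)
```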
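A matching sketch of the k-means step, again hypothetical rather than the authors' implementation: it reuses `cube`, `X`, `scores`, `eigvecs`, and `k` from the PCA sketch above, uses scikit-learn's KMeans, and chooses 15 colors to mirror the reconstructed image shown on the poster.

```python
from sklearn.cluster import KMeans

# k (the number of clusters) must be chosen a priori, the drawback noted above
n_colors = 15
km = KMeans(n_clusters=n_colors, n_init=10, random_state=0).fit(scores)

# Replace each pixel's projection with its cluster center, then map back to band space
quantized = km.cluster_centers_[km.labels_]   # (n_pixels, k)
X_rec = quantized @ eigvecs[:, :k].T + X.mean(axis=0)
image_rec = X_rec.reshape(cube.shape)         # image storing only n_colors distinct spectra
```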
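For the Stable Signal Recovery section: the Candès–Romberg–Tao result says a sparse signal can be recovered from a small number of random measurements by l1 minimization (basis pursuit). The toy sketch below uses made-up sizes (n = 128 coefficients, s = 5 nonzeros, m = 40 measurements) and solves the l1 problem as a linear program with SciPy; it illustrates the idea, not the poster's actual experiment.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

# Hypothetical sparse signal: n coefficients, only s nonzero (standing in for a
# signature that is sparse in the Fourier domain, as described above)
n, s, m = 128, 5, 40
x_true = np.zeros(n)
x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)

# m random measurements y = A x, with m much smaller than n
A = rng.normal(size=(m, n)) / np.sqrt(m)
y = A @ x_true

# Basis pursuit: minimize ||x||_1 subject to A x = y, written as a linear
# program with x = u - v and u, v >= 0
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * n))
x_rec = res.x[:n] - res.x[n:]
print(np.max(np.abs(x_rec - x_true)))  # small when m is large enough relative to s
```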
[Figure: Example of signal recovery of an approximately sparse signal (original signal vs. recovered signal).]

Classification of Materials (continued)
• Classification using "Absolute Difference": |ref - sig|, the band-by-band absolute difference between the reference and the pixel signature.
• Classification using "Correlation Coefficient": Cov(ref, sig) / (σ(ref) · σ(sig)).
• Significant features considered include roads, vegetation, and building rooftops. Nine points were chosen that seemed to best represent the various materials in the image.
• Ten algorithms were tested, with "Correlation Coefficient" giving the best results: most buildings and vegetation are properly classified. However, the main road near the top has many misclassified points, unlike with "Absolute Difference", though "Absolute Difference" does not perform as well in most other cases.

Interpretation of Results
• Running the algorithms with Hypercube presents the same problem as k-means: the number of clusters k must be preselected.
• Based on the results of the previous experiment, adding a point corresponding to "soil" (yellow) gives a better classification.

[Figure: "Correlation Coefficient" with an extra "soil" point.]

• One reason for the effectiveness of "Correlation Coefficient" is that brightness is not a factor in classification.
• In the spectral signature plot of three points at the right, points 2 and 3 are both vegetation, with 3 much brighter than 2; point 1 is a piece of road.
• "Absolute Difference" treats the difference in amplitude at each wavelength as significant (and thus misclassifies points 1 and 2 as the same material), while "Correlation Coefficient" considers only the relative shape of the signature (and thus correctly groups points 2 and 3). A sketch contrasting the two measures appears below.

This research was supported in part by NSF grant DMS and NSF VIGRE grant DMS.
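To close, a sketch making the brightness argument above concrete. The signatures here are synthetic Gaussian curves standing in for real vegetation spectra, the function names are our own, and summing across bands is one reasonable way to aggregate the per-band absolute differences.

```python
import numpy as np

def absolute_difference(ref, sig):
    """Per-band |ref - sig|, aggregated here by summing across bands:
    sensitive to overall brightness differences."""
    return np.sum(np.abs(ref - sig))

def correlation_coefficient(ref, sig):
    """Cov(ref, sig) / (sigma(ref) * sigma(sig)): compares spectral shape only."""
    ref_c = ref - ref.mean()
    sig_c = sig - sig.mean()
    return (ref_c @ sig_c) / (np.linalg.norm(ref_c) * np.linalg.norm(sig_c))

# Hypothetical signatures: the same vegetation spectrum at two brightness levels,
# like points 2 and 3 in the spectral signature plot discussed above
wavelengths = np.linspace(400, 1000, 210)          # 210 bands, as in the Hypercube data
veg_dim = np.exp(-((wavelengths - 800) / 150) ** 2)
veg_bright = 3.0 * veg_dim                         # same shape, three times brighter

print(absolute_difference(veg_dim, veg_bright))    # large: brightness dominates
print(correlation_coefficient(veg_dim, veg_bright))  # 1.0: identical shape
```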