Temporal Analysis of Forest Cover Using a Hidden Markov Model
Arnt-Børre Salberg and Øivind Due Trier
Norwegian Computing Center, Oslo, Norway
IGARSS 2011, July 27

Outline
► Motivation
► Hidden Markov model
▪ Concept
▪ Class sequence estimation
▪ Transition probabilities, clouds, and atmosphere
► Application: Tropical forest monitoring
► Conclusion

Introduction
► Time series of images are needed in order to detect changes in the spatial forest cover
► Time series analysis requires methodology that
▪ handles the natural variability between images
▪ overcomes problems with cloud cover in optical images (and missing data in Landsat-7)
▪ handles atmospheric disturbances
▪ does not propagate errors from one time instant to the next

Introduction: Change detection
► Naive
▪ Simply create forest cover maps from two years, and compare
▪ Errors in both maps are added. Not a good idea!
► Time series analysis
▪ Model what is going on by using all available images from the two years (and before, between, and after)

Hidden Markov Model (HMM)
► HMM used to model each location on the ground as being in one of the following classes: Ω = {forest, sparse forest, grass, soil}
► Markov property: P(ω_t | ω_{t-1}, …, ω_1) = P(ω_t | ω_{t-1})

Hidden Markov Model (HMM)
► Bayes rule for Markov models
► P(ω_t | ω_{t-1}): class transition probability
[State transition diagram: classes 1, 2, 3 with transition probabilities P(1|1), P(2|1), P(3|1)]
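For reference, Bayes rule for a first-order Markov chain of classes with conditionally independent observations factorizes the sequence posterior as (standard form; the exact expression on the slide is not recoverable, so this is a reconstruction in the notation above, with observations x_t and classes ω_t):

P(ω_1, …, ω_T | x_1, …, x_T) ∝ P(ω_1) p(x_1 | ω_1) ∏_{t=2}^{T} P(ω_t | ω_{t-1}) p(x_t | ω_t)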

Class sequence estimation
► Two popular methods for finding the class sequence:
1. Most likely class sequence
2. Minimum probability of class error

Most likely class sequence (MLCS)
► Finds the class sequence that maximizes the likelihood → maximum likelihood estimate
► The optimal sequence is efficiently found using the Viterbi algorithm
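Written out under the factorization above (notation assumed), the MLCS estimate is

(ω̂_1, …, ω̂_T) = arg max over (ω_1, …, ω_T) of P(ω_1) p(x_1 | ω_1) ∏_{t=2}^{T} P(ω_t | ω_{t-1}) p(x_t | ω_t),

which the Viterbi algorithm evaluates in O(T·K²) operations for K classes, rather than by enumerating all K^T possible sequences.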

Viterbi algorithm
[Trellis diagram: possible states (forest, sparse forest, grass, soil) at time t and at time t+1; arrows show the most probable sequence of previous states for each state at time t]
The annotated quantities in the trellis:
▪ The best sequence ending at state c, given the observations x_1, …, x_t
▪ The probability of jumping from state c to state k (this depends on the time interval)
▪ The probability of observing the actual observation, given that the state is k
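These three annotations are exactly the factors in the standard Viterbi recursion; in reconstructed notation (not copied from the slides),

δ_{t+1}(k) = max over c of [ δ_t(c) · P(k | c) · p(x_{t+1} | k) ],

where δ_t(c) is the score of the best sequence ending in class c at time t. A minimal log-domain sketch in Python follows; the array layout and the uniform initial prior are assumptions. Cloud-masked acquisitions can be treated as missing data by giving them a row of zero log-likelihoods, i.e. a unit emission likelihood, matching the masking strategy described on the Clouds slide:

```python
import numpy as np

def viterbi(log_emis, log_trans):
    """Most likely class sequence for one pixel.

    log_emis : (T, K) array, log p(x_t | class k); a row of zeros
               encodes a missing (e.g. cloud-masked) observation.
    log_trans: (T-1, K, K) array, log P(class_t = k | class_{t-1} = c),
               one matrix per interval (may vary with acquisition gap).
    Returns a (T,) array of class indices.
    """
    T, K = log_emis.shape
    delta = np.full((T, K), -np.inf)   # best log-score ending in each class
    psi = np.zeros((T, K), dtype=int)  # back-pointers
    delta[0] = np.log(1.0 / K) + log_emis[0]  # uniform prior (assumption)
    for t in range(1, T):
        # scores[c, k] = delta[t-1, c] + log P(k | c)
        scores = delta[t - 1][:, None] + log_trans[t - 1]
        psi[t] = np.argmax(scores, axis=0)
        delta[t] = scores[psi[t], np.arange(K)] + log_emis[t]
    # Backtrack from the best final state.
    path = np.zeros(T, dtype=int)
    path[-1] = np.argmax(delta[-1])
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path
```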

Minimum probability of class error
► If we are interested in obtaining a minimum probability of error class estimate (at time instant t), the MLCS method is not optimal, but the maximum a posteriori (MAP) estimate is
► The MAP estimate at time instant t: ω̂_t = arg max over m of P(ω_t = m | x_1, …, x_T)
► The posterior γ_t is found using the forward-backward algorithm
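For reference, the forward-backward quantities (standard definitions; notation assumed) are

α_t(k) = p(x_t | k) Σ_c P(k | c) α_{t-1}(c),
β_t(c) = Σ_k P(k | c) p(x_{t+1} | k) β_{t+1}(k),
γ_t(m) = P(ω_t = m | x_1, …, x_T) = α_t(m) β_t(m) / Σ_{m'} α_t(m') β_t(m').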

Class transition probabilities
► Landsat: the minimum time interval between two subsequent acquisitions is 16 days
► Let P_0(ω_t = m | ω_{t-1} = m') = P_0(m | m') denote the class transition probability corresponding to 16 days
► The class transition probability for any 16·Δt interval, in matrix form: P_{Δt} = P_0^{Δt} (see the sketch below)
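A small sketch of this scaling, assuming the matrix-power interpretation above; the transition values are made up for illustration, not taken from the paper:

```python
import numpy as np

# Hypothetical 16-day transition matrix over
# {forest, sparse forest, grass, soil}; rows sum to 1.
P0 = np.array([
    [0.990, 0.008, 0.001, 0.001],
    [0.010, 0.970, 0.015, 0.005],
    [0.002, 0.020, 0.960, 0.018],
    [0.001, 0.004, 0.025, 0.970],
])

# Transition matrix for an 80-day gap (Delta t = 5 base intervals).
P_80days = np.linalg.matrix_power(P0, 5)
```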

Clouds
► Clouds prevent us from observing the Earth's surface
► Clouds may be handled using two different strategies:
1. Cloud screening, and masking of cloud-contaminated pixels as missing observations
2. Including a cloud class in the HMM modeling framework
In this work we only consider strategy 1.
► Cloud screening is performed using an SVM

Data distributions, class transition probabilities, co-registration
► Landsat image data (bands 1-5 and 7) modeled using class-dependent multivariate Gaussian distributions
► Mean vector and covariance matrix estimated from training data
► Class transition probabilities assumed fixed
▪ May be estimated from the data (e.g., Baum-Welch)
► Precise co-registration needed
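A sketch of how the class-conditional emission log-likelihoods might be computed (function and variable names are illustrative, not from the paper); these feed directly into the Viterbi and forward-backward routines above:

```python
import numpy as np
from scipy.stats import multivariate_normal

def emission_loglik(pixels, means, covs):
    """Log-likelihood of each pixel under each class-conditional Gaussian.

    pixels: (N, D) array of Landsat band vectors (e.g. D = 6).
    means : list of K per-class mean vectors, estimated from training data.
    covs  : list of K per-class covariance matrices.
    Returns an (N, K) array of log p(x | class k).
    """
    return np.column_stack([
        multivariate_normal.logpdf(pixels, mean=m, cov=c)
        for m, c in zip(means, covs)
    ])
```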

Atmospheric disturbance
[Illustration: ground surface reflectance vs. top-of-the-atmosphere reflectance, with the atmosphere in between]
► We apply a re-training methodology for handling image data variations (IGARSS'11, MO4.T05)
► LEDAPS calibration (surface reflectance) will be investigated

Landsat 5 TM images (166/63), Amani, Tanzania

Results - Forest cover maps
[Forest cover maps; legend: forest, sparse forest, soil, grass; WorldView image for visual comparison]

Results - Forest cover change
[Forest cover change maps; WorldView (WV) image for visual comparison]

Landsat 5 TM images, Santarém, Brazil

Results - Forest cover maps, Santarém, Brazil

Results - Forest cover change maps, Santarém, Brazil

Multisensor possibilities
► Multitemporal observations from other sensors (e.g., radar) may naturally be modeled in the hidden Markov model
▪ Only the sensor data distributions are needed (see the example below)
► The multisensor images need to be geocoded to the same grid
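The example expression for this bullet is not recoverable, but a plausible sketch (the SAR model here is an assumption, not taken from the slides) is one class-conditional density per sensor:

p(y_t | ω_t) = N(y_t; μ_{ω_t}, Σ_{ω_t}) for optical acquisitions,
p(y_t | ω_t) = Gamma(y_t; L, σ²_{ω_t}/L) for L-look SAR intensity acquisitions.

The Viterbi and forward-backward recursions are unchanged; only the emission term p(y_t | ω_t) is swapped depending on which sensor produced the observation at time t.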

Temporal forest cover sequence
► Multisensor hidden Markov model
[Graphical model: hidden class sequence …, ω_{t-2}, ω_{t-1}, ω_t, ω_{t+1}, … (CLASSES) along the TIME axis; each class emits an observation y_t (OBSERVATIONS), coming from either the optical or the SAR sensor]

Conclusions
► Time series analysis of each pixel based on a hidden Markov model
► Finds the most likely sequence of land cover classes
► Change detection based on the classified sequence
▪ Does not propagate errors, since the whole sequence is classified simultaneously
▪ Regularized by the transition probabilities
► Handles cloud-contaminated images
► Multisensor possibilities

Acknowledgements
The experiment presented here was supported by a research grant from the Norwegian Space Centre.