Super-Resolution of Remotely-Sensed Images Using a Learning-Based Approach
Isabelle Bégin and Frank P. Ferrie

Abstract
Super-resolution addresses the problem of estimating a high-resolution version of a low-resolution image. In this research, a learning-based approach to super-resolution is applied to remotely-sensed images. These images contain fewer sharp edges than those previously used to test the method, making the problem more difficult. The approach uses a Markov network to model the relationship between an image/scene training pair (a low-resolution/high-resolution pair). Bayesian belief propagation is used to obtain the posterior probability of the scene given a low-resolution input image. Preliminary results on Landsat-7 images show that the framework works reasonably well when given a sufficiently large training set.

Problem
Is it possible to infer a high-resolution version of an image that is given only at low resolution? Approaches stronger than simple interpolation use Bayesian techniques or regularization, but they must assume prior probabilities or constraints, and they are independent of both the structures present in the image and the nature of the image itself. Freeman et al. propose instead that the statistical relationships between pairs of low-resolution and high-resolution images (image/scene pairs) can be learned. Given a new image to super-resolve, of the same nature as the images in the training set, a higher-resolution image can then be obtained from the statistics acquired from the training pairs. The most important features learned by the method are edges: the algorithm learns what a blurry edge should look like in its corresponding high-resolution image, so the training data need to contain many sharp edges, and the experiments described in Freeman et al. were done mostly on such training sets. The present research addresses the specific problem of super-resolving remotely-sensed images, which contain few sharp edges.

Training
The goal of this experiment was to test whether a learning-based approach to super-resolution can be used on remotely-sensed images, which contain fewer sharp edges than those previously used to test the method. The training set is a pair of blurred and non-blurred Landsat-7 images (Band #3). The 350 x 350 image was first blurred with a Gaussian filter, then subsampled and interpolated back to the original resolution.

Figure 2: Training set (left: sharp image; right: blurred image)

Discussion
Many factors influence the quality of the results. The framework works better with training images that contain many sharp edges, which is not the case for the images used in this experiment. No pre-processing was applied to the images in the examples shown; results may be improved by first enhancing the edges. Because the framework is statistical, the size and completeness of the training set have a large impact on the result: here the training set was small (a single 350 x 350 image) and therefore might not contain the full variety of structures possibly present in a remotely-sensed image. The fitting of the Gaussian mixture models is also very important. The models were obtained with the Expectation-Maximization (EM) algorithm, and the number of iterations, the structure of the covariances and the number of clusters can all modify the computation of the joint probabilities, which are the core of the framework. The size of the patches, the number of principal components and the number of closest patches considered will also affect the final result.
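To make the last point concrete, the following minimal sketch fits a Gaussian mixture with scikit-learn's EM implementation and shows where the settings discussed above enter. The data, the number of clusters, the covariance structure and the iteration count are all hypothetical choices for illustration, not values taken from this work.

# Minimal sketch of the GMM fitting step (illustrative settings only).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Stand-in for concatenated (x_k, x_j) scene-patch vectors after PCA,
# e.g. 20 principal components per patch (hypothetical dimensions).
pairs = rng.standard_normal((5000, 40))

gmm = GaussianMixture(
    n_components=8,          # number of clusters
    covariance_type="full",  # structure of the covariances
    max_iter=200,            # EM iterations
    random_state=0,
).fit(pairs)

# The fitted mixture models the joint density P(x_k, x_j);
# score_samples returns its logarithm at the given points.
log_joint = gmm.score_samples(pairs[:5])

Changing any of these settings changes the fitted joint density, and with it the conditional probabilities computed during inference.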
Conclusions
This research presented a learning-based method for super-resolution applied to remotely-sensed images. A Markov network was used to express the statistical dependencies between a low-resolution/high-resolution training pair, and the maximum a posteriori estimate of the scene was obtained by Bayesian belief propagation. Preliminary results show that such an approach can be used on remotely-sensed images provided that the training set is sufficiently large. Future work will include pre-processing of the images to enhance the edges they contain, as well as the use of a more complete training set.

Reference
Freeman, W.T., Pasztor, E.C. and Carmichael, O.T., "Learning Low-Level Vision", International Journal of Computer Vision, 40(1), pp. 25-47, 2000.

Theory
The statistical dependencies between image/scene training pairs (low-resolution and high-resolution images) are modeled by a Markov network in which each node describes an image patch or a scene patch. The topology of the network is shown in Figure 1, where statistical dependencies are shown by lines connecting nodes; scene patches are depicted in white and image patches in red. There are two types of connections: each scene patch is connected to its neighbours, and each scene patch is connected to its corresponding patch in the low-resolution image. Two nodes that are not directly connected are assumed to be statistically independent.

Figure 1: Markov network for the super-resolution problem

The Markov assumption allows a "message-passing" rule that involves only local computations. The maximum a posteriori (MAP) estimate for node j is calculated with the following equations (Freeman et al.):

x_j^{\mathrm{MAP}} = \arg\max_{x_j} P(y_j \mid x_j) \prod_k M_j^k \qquad (1)

M_j^k = \max_{x_k} \Big[ P(x_k \mid x_j) \, P(y_k \mid x_k) \prod_{l \neq j} \tilde{M}_k^l \Big] \qquad (2)

where M_j^k is the message passed to node j from its neighbour k and \tilde{M}_k^l is the corresponding message from the previous iteration. In principle, this rule should only be used on chains or trees, whereas the topology of the Markov network for the super-resolution problem contains loops. It was previously demonstrated that equations (1) and (2) can still be used in this situation; the network is thus treated as if it consisted of vertical or horizontal chains.

The advantage of this learning-based algorithm over other learning methods is that local support is added in the form of conditional probabilities: in other learning-based approaches, the closest patch in the training set is chosen regardless of its neighbours. This Bayesian technique also has the advantage of learning the prior probabilities instead of assuming them.

Results
A new image was blurred, subsampled and interpolated in the same way to form the input image. The result of the super-resolution algorithm (Figure 3b) approximates the high-resolution image (Figure 3c) reasonably well, and edges are better preserved than in the interpolated version of the input image (Figure 3a). For comparison, nearest-neighbour interpolation is shown in Figure 3d, where the edges appear choppier. However, thin structures (such as road networks) are lost in the super-resolution process; this might be caused by a small patch size as well as an incomplete training set.

Figure 3: (a) Input image (bilinear interpolation); (b) result of the super-resolution algorithm; (c) original full-resolution image; (d) nearest-neighbour interpolation

Construction of the Training Set
The steps involved in the construction of the training set and in the computation of the joint probabilities are the following (a sketch of the pipeline appears after the list):
1. Blur and subsample a set of high-resolution images. The set should be large enough to include the widest possible range of intensity structures.
2. Interpolate the images back to the original resolution.
3. Break the images and scenes into center-aligned patches.
4. Apply Principal Components Analysis to the training set.
5. Fit Gaussian mixtures to the joint probabilities P(x_k, x_j) (neighbouring scene patches k and j) and P(y_k, x_k) (image/scene pair), as well as to the prior probability of a scene patch P(x_j).
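The sketch below illustrates steps 1 to 4 with SciPy and scikit-learn. The blur width, subsampling factor, patch size and number of principal components are assumptions made for illustration (they are not reported here), and the patches are taken non-overlapping for brevity.

# Sketch of training-set construction, steps 1-4 (illustrative parameters).
import numpy as np
from scipy.ndimage import gaussian_filter, zoom
from sklearn.decomposition import PCA

def degrade(scene, sigma=1.0, factor=2):
    # Steps 1-2: blur, subsample, and interpolate back to full resolution.
    blurred = gaussian_filter(scene, sigma=sigma)
    low = blurred[::factor, ::factor]
    return zoom(low, factor, order=1)[:scene.shape[0], :scene.shape[1]]

def patches(img, size=7):
    # Step 3: break the image into square patches (non-overlapping here).
    h, w = img.shape
    return np.array([img[i:i + size, j:j + size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

scene = np.random.default_rng(1).random((350, 350))  # stand-in for a Landsat-7 band
image = degrade(scene)                               # low-resolution training image

# Step 4: Principal Components Analysis on the patch vectors.
pca = PCA(n_components=20).fit(np.vstack([patches(scene), patches(image)]))
X = pca.transform(patches(scene))   # scene-patch coefficients
Y = pca.transform(patches(image))   # image-patch coefficients
# Step 5, fitting the Gaussian mixtures, proceeds as in the earlier sketch.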
Computation of the Conditional Probabilities
Given the fitted models, inference on a new image proceeds as follows (a sketch of the message-passing loop appears after the list):
1. Break a new low-resolution (interpolated) image into patches and apply Principal Components Analysis.
2. For each image patch y_k, find the n closest patches in the training set. The n corresponding scene patches are used as candidates in the calculations.
3. Compute the conditional probabilities for each candidate from the fitted mixtures and the scene-patch prior:

P(y_k \mid x_k^l) = \frac{P(y_k, x_k^l)}{P(x_k^l)} \qquad (3)

P(x_k^l \mid x_j^m) = \frac{P(x_k^l, x_j^m)}{P(x_j^m)} \qquad (4)

where l and m run from 1 to n and index the candidates for patches x_k and x_j, respectively. P(y_k | x_k^l) is thus an n x 1 vector and P(x_k^l | x_j^m) is an n x n matrix. The messages M_j^k are n x 1 vectors whose elements are set to 1 for the first iteration.
4. The MAP estimate for patch j, x_j^{MAP}, is obtained from equations (1) and (2).
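The following minimal, self-contained sketch implements the max-product message passing of equations (1) and (2) over such candidate sets. It assumes the likelihood vectors of equation (3) and the compatibility matrices of equation (4) have already been computed; the function and variable names are hypothetical, the toy numbers are random, and a real run would use the grid neighbourhoods of Figure 1.

# Sketch of MAP inference by max-product message passing (eqs. 1 and 2).
import numpy as np

def map_inference(lik, compat, neighbours, n_iters=10):
    # lik[j]: n-vector of P(y_j | x_j^l), eq. (3).
    # compat[(k, j)]: n x n matrix with entry [l, m] = P(x_k^l | x_j^m), eq. (4).
    # msgs[(j, k)]: message from neighbour k to node j, initialised to 1.
    msgs = {(j, k): np.ones_like(lik[j])
            for j in neighbours for k in neighbours[j]}
    for _ in range(n_iters):
        new = {}
        for j in neighbours:
            for k in neighbours[j]:
                # Incoming messages at k from all its neighbours except j.
                others = [msgs[(k, l)] for l in neighbours[k] if l != j]
                incoming = np.prod(others, axis=0) if others else np.ones_like(lik[k])
                # Eq. (2): maximise over the candidates of x_k.
                new[(j, k)] = (compat[(k, j)] * (lik[k] * incoming)[:, None]).max(axis=0)
        msgs = new
    # Eq. (1): pick the MAP candidate index at each node.
    return {j: int(np.argmax(lik[j] * np.prod([msgs[(j, k)] for k in neighbours[j]], axis=0)))
            for j in neighbours}

# Toy run with two scene nodes and n = 3 candidates each.
rng = np.random.default_rng(0)
lik = {0: rng.random(3), 1: rng.random(3)}
compat = {(0, 1): rng.random((3, 3)), (1, 0): rng.random((3, 3))}
print(map_inference(lik, compat, {0: [1], 1: [0]}))

In practice the updates are iterated until the messages converge, and the messages are often normalised for numerical stability, which does not change the argmax in equation (1).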