Face Hallucination via Similarity Constraints
Hongliang Li, Senior Member, IEEE, Linfeng Xu, Member, IEEE, and Guanghui Liu


Outline
- Introduction
- Proposed Method
  - Framework of the Proposed Method
  - Similarity Constraints Computation
    - LR-LR Similarity Constraint
    - LR-HR Similarity Constraint
    - HR Smoothness Constraint
    - Spatial Similarity
- Experiments
- Conclusion

Introduction In many cases, the face images captured by live cameras are of low resolution due to environment or equipment limitations. To generate a high-resolution face image effectively, many methods have been presented in the last decade.

Introduction In this letter, a new face hallucination approach based on similarity constraints is proposed to hallucinate a high-resolution face image from an input low-resolution one. The proposed method formulates face hallucination as a local linear filtering process based on training LR-HR face image pairs.

Proposed Method A. Framework of the Proposed Method Let Z L and Z H denote the low-resolution and high-resolution training face images, respectively, where Z L is downsampled from Z H by an integer factor. Let I L be an input low-resolution face image, and let I H denote its high-resolution counterpart to be hallucinated.

Framework of the Proposed Method Fig.1. Framework of our face hallucination approach.

Framework of the Proposed Method Three stages are involved in this work. We first search an LR-HR face database for all patches, which are stored beforehand. The similarities between the input patch and each pair of LR-HR face patches are then measured under different constraint conditions. Finally, we hallucinate a high-resolution image by inferring the details lost from the input low-resolution image.

Framework of the Proposed Method Assume each image has been divided into N overlapping patches with identical spacing. Let {(Z L (j), Z H (j))} denote the set of training LR-HR patch pairs, where i and j are patch indices. For an input LR face patch I L (i), our goal is to utilize the training patch pairs to recover the missing high-frequency details in the hallucinated patch I H (i).
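The patch-division step above can be sketched as follows; this is a minimal illustration (function name and return format are our own), using the 4 × 4 patch size and 3-pixel overlap given later in the experiments:

```python
import numpy as np

def extract_patches(img, patch=4, overlap=3):
    """Divide an image into overlapping patches with identical spacing.

    The step between patch origins is (patch - overlap), e.g. 4x4 LR
    patches with a 3-pixel overlap give a step of 1 pixel.
    """
    step = patch - overlap
    h, w = img.shape
    patches, coords = [], []
    for y in range(0, h - patch + 1, step):
        for x in range(0, w - patch + 1, step):
            patches.append(img[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords
```

For an 8 × 8 image this yields a 5 × 5 grid of patches, one per valid origin.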

Framework of the Proposed Method and are the mean values of the input LR patch I L (i) and the HR patch I H (j), respectively. The second term (Z H (j) - )is to perform the normalization by subtracting the mean from the HR patch. is defined as a filter kernel that depends on I L, Z L, and Z H.

Framework of the Proposed Method C ij is to ensure that the sum of is equal to one. Here represents the neighborhood of patch i. It is noticed that there are four terms defined in the kernel W, which perform the similarity constraints, i.e., LR-LR similarity, LR-HRsimilarity, smoothness constraint and spatial similarity.

Proposed Method B. Similarity Constraint Computation 1) LR-LR Similarity Constraint Given an LR training face image, we have stored its corresponding HR training image beforehand. This means that all the missing high-frequency details in the LR image can be accurately estimated from its HR counterpart.

Similarity Constraints Computation The control parameter θ 1 adjusts the range of intensity similarity, which means that smaller allows large changes between the two LR patches. A straightforward computation of S is their Euclidean distance, which may result in poor performance in the case of the significant lighting variation or noise corruption.

Similarity Constraints Computation The distance can be expressed as where the operation denotes the l-norm distance.

Similarity Constraints Computation 2) LR-HR Similarity Constraint The LR-HR constraint is designed to measure the similarity between an input photo patch I L (i) and a HR patch Z H (j). Since HR patches usually contain a great of high frequency contents that are missed for the LR patches, it is difficult to compare their similarity directly based on their difference.

Similarity Constraints Computation We design a new descriptor called local appearance similarity (LAS) descriptor to measure the similarity between LR and HR patches. This descriptor is generated based on patch pairs similarity within a local region, which is illustrated in Fig. 2.

Similarity Constraints Computation Fig. 2. Illustration of computation.

Similarity Constraints Computation Given a LR patch I L (i) and a HR training patch Z H (j), i.e., the patches marked with solid yellow line, the LR-HR constraint is defined to measure the similarity between them. The final LAS descriptor for a patch is the concatenation of the matrix elements in terms of the raster scan order.

Similarity Constraints Computation Let and denote the 1 x d dimensional LAS descriptors for patches I L (i) and Z H (j).

Similarity Constraints Computation The parameter θ 2 and σ s adjust the descriptors similarity, and denote the neighborhoods of patches I L (i) and I H (j), respectively. In our work, we set unless otherwise specified. The final LAS descriptor will be a 25-dimensional vector.

Similarity Constraints Computation 3) HR Smoothness Constraint We tend to design a constraint to answer if those similar patches have good compatibilities with the neighboring ones. We call as a smoothness term, which aims to impose the smoothness constraint between neighboring hallucinated patches.

Similarity Constraints Computation The HR smoothness constraint can be formulated as where ∆ t and ∆ l denote the top and left overlapping regions for pairs of patches Z H (j) – I H (i t ) and Z H (j) – I H (i l ), respectively. Here, θ 3 is used to control the range of smoothness variation.

Similarity Constraints Computation 4) Spatial Similarity It is reasonable to assign small constraints for those patches that are far from the hallucinating patch I H (i). We define a new constraint to compute the similarity between Z H (j) and I L (i) based on the spatial distance.

Similarity Constraints Computation The parameter θ 4 adjusts the spatial similarity. D(i,j) is a spatial window function defined by the set of the neighborhood of t i (i.e., ).

Experiments Given an input LR face image, we divide it into a number of overlapping patches of size 4 × 4. The overlap is set to 3 pixels, which corresponds to 12 pixels in the HR face image. We employ the Laplacian cost function, i.e., l = 1, to compute the similarity constraints.
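With overlapping hallucinated patches in hand, the HR image must be reassembled. The slides do not state the blending rule, so the sketch below uses the common choice of averaging pixels in the overlap regions; function name and signature are our own:

```python
import numpy as np

def stitch(patches, coords, shape):
    """Reassemble hallucinated HR patches into one image, averaging
    pixels where patches overlap (a common choice, assumed here)."""
    acc = np.zeros(shape, float)   # running pixel sums
    cnt = np.zeros(shape, float)   # how many patches cover each pixel
    for p, (y, x) in zip(patches, coords):
        ph, pw = p.shape
        acc[y:y + ph, x:x + pw] += p
        cnt[y:y + ph, x:x + pw] += 1
    return acc / np.maximum(cnt, 1)
```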

Experiments We first perform the evaluation on a large number of face images taken from the FERET face database. About 1200 images of 873 persons were selected as training images and 300 images of 227 persons for testing. We compare our method with state-of-the-art methods, including bicubic interpolation, Liu et al. [3], Wang et al. [4], Ma et al. [7], and Zhang et al. [11].

Experiments Fig. 3. (a) Some examples of face hallucination results. (b) Locally enlarged results for the last two face images.

Experiments We also evaluate the proposed method on face images taken from the CMU+MIT face database. Fig. 4. Experimental results on some LR face images.

Experiments We also perform an objective evaluation of our method. Two quantitative metrics are used to measure the similarity between the original HR face image and the hallucinated one: peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM). The default SSIM parameters are set to K ssim = [ ] (constant terms), window = 8 (local window size), and L ssim = 100 (dynamic range of the pixel values), as recommended by the authors.
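PSNR is the standard definition; a minimal implementation for reference (the peak value of 255 assumes 8-bit images, whereas the slides use L ssim = 100 for SSIM):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio, in dB, between a reference image
    and a test image: 10 * log10(peak^2 / MSE)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```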

Experiments However, as discussed in [11] and [12], we also found that PSNR and SSIM are not always consistent with human perceptual quality.

Conclusion Built on our guided synthesis framework, this method provides an effective way to infer the missing high-frequency details of an input LR face image based on similarity constraints. Given the training set, four constraint functions are designed to learn the lost information from the most similar training examples. Experimental evaluation demonstrates the good performance of the proposed method on the face hallucination task.

Thank you for listening.