

Fast Image Replacement Using Multi-Resolution Approach Chih-Wei Fang and Jenn-Jier James Lien Robotics Lab Department of Computer Science and Information Engineering National Cheng Kung University

Motivation Photographs sometimes include unwanted objects. Data may also be lost when a network transmits media (e.g., video).

System: two modules, the texture analysis (or training) module and the texture synthesis module.

Texture Analysis Module (flowchart): the input training image (size W x H) is first classified as periodic or non-periodic. Non-periodic processing produces Mn patches and periodic processing produces Mp patches (Mp > Mn), so M = Mp or Mn depending on the case. Each patch is reduced to a Γ-shaped pattern, Principal Component Analysis (PCA) yields a weight vector (W1 ... WN) for each Γ-shaped pattern, and Vector Quantization (VQ) is used to speed up the search for the best-matching patches.

1. Sampling
Processing for non-periodic patterns: a pixel-by-pixel shifting method is used to divide the input texture into M patches, where M = (W - Wp + 1) x (H - Hp + 1):
- W is the width of the input image (or texture).
- H is the height of the input image.
- Wp is the width of the patch.
- Hp is the height of the patch.
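A minimal sketch (assuming NumPy arrays; the function name is hypothetical and not from the original slides) of the pixel-by-pixel shifting sampling for non-periodic patterns:

```python
import numpy as np

def sample_non_periodic(texture: np.ndarray, wp: int, hp: int) -> np.ndarray:
    """Return all Wp x Hp patches obtained by pixel-by-pixel shifting,
    giving M = (W - Wp + 1) * (H - Hp + 1) patches."""
    h, w = texture.shape[:2]
    patches = []
    for y in range(h - hp + 1):          # vertical shifts
        for x in range(w - wp + 1):      # horizontal shifts
            patches.append(texture[y:y + hp, x:x + wp])
    return np.stack(patches)             # shape: (M, Hp, Wp[, channels])
```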

Processing for periodic patterns: the pattern is divided into M patches, where M = W x H. Periodic patterns have continuous veins between two copies of the pattern placed side by side. When a patch reaches the boundary of the input image, the missing pixels are supplied from the opposite border so that a complete patch is still formed.
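A corresponding sketch for periodic patterns, assuming the wrap-around is implemented by tiling the opposite borders onto the right and bottom (the function name is hypothetical):

```python
import numpy as np

def sample_periodic(texture: np.ndarray, wp: int, hp: int) -> np.ndarray:
    """Return one patch per pixel (M = W * H); patches that cross the
    image border are completed with pixels from the opposite border."""
    h, w = texture.shape[:2]
    padded = np.concatenate([texture, texture[:hp - 1]], axis=0)
    padded = np.concatenate([padded, padded[:, :wp - 1]], axis=1)
    patches = [padded[y:y + hp, x:x + wp] for y in range(h) for x in range(w)]
    return np.stack(patches)
```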

Processing for the Γ-shaped pattern
- Taking the whole patch as training data may lead to an overestimation of the underlying structure of the patch and certainly increases the training time.
- In addition, matching patches by their whole contents generally produces unsatisfactory results, since the whole contents tend to differ considerably from the initial random values, which makes the rim effect more distinct.
Each patch is therefore represented by an ω-pixel-wide Γ-shaped pattern rather than by its full contents.
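A sketch of one plausible way to extract the Γ-shaped pattern, assuming it is the ω-pixel-wide band along the top and left of each patch (the slides do not give the exact layout, so this is an assumption; the function name is hypothetical):

```python
import numpy as np

def gamma_pattern(patch: np.ndarray, omega: int) -> np.ndarray:
    """Extract the Γ-shaped band (top omega rows plus left omega columns)
    of a patch and flatten it into one feature vector of K elements."""
    top = patch[:omega, :]            # top band: omega x Wp
    left = patch[omega:, :omega]      # left band below the top: (Hp - omega) x omega
    return np.concatenate([top.ravel(), left.ravel()])
```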

2. Principal Component Analysis (PCA)
Each Γ-shaped pattern has K elements. During training, PCA transforms the original K x M data matrix into an N x K eigenvector matrix, where M >> K > N, so there are N eigenvectors. This removes most of the time-consuming operations and hence dramatically increases performance, recombining the appearance of the features while maintaining the coherence of the characteristic content.
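A minimal PCA sketch via SVD, following the dimensions on the slide (M patterns of K elements reduced to N-dimensional weight vectors); the function name and use of SVD are assumptions, not necessarily the authors' exact implementation:

```python
import numpy as np

def pca_train(patterns: np.ndarray, n_components: int):
    """patterns: M x K matrix of flattened Γ-shaped patterns.
    Returns the mean pattern, the N x K eigenvector matrix, and the
    M x N weight vectors (projections of each training pattern)."""
    mean = patterns.mean(axis=0)
    centered = patterns - mean
    # SVD of the centered data; rows of vt are the principal directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                # N x K eigenvectors
    weights = centered @ basis.T             # M x N weight vectors
    return mean, basis, weights
```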

After the PCA process, the eigenvectors are sorted by their corresponding eigenvalues:
- The first several eigenvectors control the global geometrical structure.
- The middle eigenvectors control the local features.
- The last few eigenvectors mainly capture noise. This noise makes the photograph look more realistic, but it has no influence on the geometric structure.
Therefore, a good structural match only needs to compare the first several eigenvector weights.

3. Vector Quantization (VQ)
Searching for the best matching pattern in the eigenspace Ψ during the synthesis process is computationally expensive. To speed up synthesis, the training data is first projected onto the eigenspace Ψ to retrieve the weight vectors, and the weight vectors are then separated into C clusters by means of VQ. With this approach, the search time can be reduced from O(M) to approximately O(C + M/C), since only the closest of the C clusters needs to be searched.
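A plain k-means (Lloyd) sketch of the VQ clustering step, operating on the M x N weight vectors; the initialization and iteration count are assumptions, not the authors' exact method:

```python
import numpy as np

def vq_cluster(weights: np.ndarray, c: int, iters: int = 20, seed: int = 0):
    """Cluster the M x N weight vectors into C clusters.
    Returns the C x N codebook and the cluster label of every weight vector."""
    rng = np.random.default_rng(seed)
    codebook = weights[rng.choice(len(weights), c, replace=False)]
    for _ in range(iters):
        # Assign each weight vector to its nearest codeword (SSD distance).
        d = ((weights[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        # Update each codeword as the mean of its assigned vectors.
        for k in range(c):
            members = weights[labels == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, labels
```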

Texture Synthesis Module
1. Initialization: each pixel of the Wo x Ho-pixel output image is assigned a random value. A search window is then shifted over this image in scan-line order, from top-left to bottom-right.
2. Γ-shaped pattern projection: for each Wp x Hp-pixel search window (or patch), the Γ-shaped pattern is taken and projected onto the eigenspace Ψ to obtain an N-dimensional weight vector.
3. Similarity measure (based on SSD): the weight vector is used to find the closest cluster, and the best matching patch within this cluster is then found in the input image. This matching patch is used to fill the region of the search window.
4. Result: the best patches fill in the Wo x Ho-pixel output image.
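A sketch of one synthesis query combining the projection, cluster lookup, and SSD comparison described above; it reuses the outputs of the hypothetical pca_train and vq_cluster sketches, and the query vector would be built with gamma_pattern from the current (partially random) output window:

```python
import numpy as np

def best_match(window_gamma, mean, basis, codebook, labels, weights, patches):
    """Return the best-matching training patch for one Γ-shaped query.
    window_gamma : flattened Γ-shaped pattern of the current search window
    mean, basis  : PCA mean and N x K eigenvector matrix from training
    codebook     : C x N VQ codewords; labels: cluster id of every patch
    weights      : M x N weight vectors of the training patches
    patches      : the M training patches themselves
    """
    w = (window_gamma - mean) @ basis.T                   # project onto eigenspace Ψ
    cluster = ((codebook - w) ** 2).sum(axis=1).argmin()  # nearest cluster
    idx = np.flatnonzero(labels == cluster)
    ssd = ((weights[idx] - w) ** 2).sum(axis=1)           # SSD within that cluster
    return patches[idx[ssd.argmin()]]
```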

Texture Synthesis Module: Demo

Image Replacement (diagram): down-sampling with texture analysis produces the image pyramid I0, I1, I2, I3, I4 and the corresponding inverse mattes β0 ... β4; up-sampling with texture synthesis produces the reconstructed images C4 ... C0.

1. Preprocessing Using the Multi-Resolution Approach
The input image I0 and the inverse matte β0 are down-sampled (↓) l times to obtain the input image Ii and inverse matte βi at each level, i = 0 ... l. Bi denotes the background region at level i, and the background Bi is searched in order to fill the foreground (removed) region Fi.
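A minimal sketch for building the image and inverse-matte pyramids, assuming simple 2x2 averaging for the down-sampling step (the exact filter is not stated on the slides; the function name is hypothetical):

```python
import numpy as np

def build_pyramid(image: np.ndarray, matte: np.ndarray, levels: int):
    """Down-sample the image I0 and inverse matte beta0 `levels` times by
    2x2 averaging, returning the pyramids [I0 .. Il] and [beta0 .. betal]."""
    def shrink(a):
        h, w = (a.shape[0] // 2) * 2, (a.shape[1] // 2) * 2
        a = a[:h, :w].astype(float)
        return a.reshape(h // 2, 2, w // 2, 2, *a.shape[2:]).mean(axis=(1, 3))
    images, mattes = [image], [matte]
    for _ in range(levels):
        images.append(shrink(images[-1]))
        mattes.append(shrink(mattes[-1]))
    return images, mattes
```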

2. Training Process Based on the Background Region
If the same Γ-shaped pattern of the search window (or patch) were applied here, a fragment lying between two different textures would be matched by only one texture rather than by a mixture of the two textures. Hence, the shape of the search window is modified.

3. Completion Process
The reconstructed image Ci:
- First, the system reconstructs the image Ii at the coarsest level (i = l).
- The replaced part is reconstructed (or synthesized), giving Ii'.
- Only the reconstructed part of Ii' is preserved, at position Fi.
- The other parts are replaced by the original background Bi.
The system then applies the up-sampling (↑) technique from the reconstructed image Ci at level i and proceeds to replace only position Fi-1 of the level i-1 image Ii-1.
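A sketch of the coarse-to-fine completion loop, assuming image dimensions are multiples of 2^l (so pixel-replication up-sampling matches the next level exactly) and using a placeholder `synthesize` function for the texture-synthesis step described earlier:

```python
import numpy as np

def complete(images, mattes, synthesize):
    """Coarse-to-fine completion. `images`/`mattes` are the pyramids from
    build_pyramid (level 0 = full resolution); `synthesize(img, mask)` is a
    placeholder that fills the masked region by texture synthesis."""
    level = len(images) - 1
    current = images[level]
    for i in range(level, -1, -1):
        mask = np.asarray(mattes[i]) > 0.5                  # removed region Fi
        m = mask[..., None] if np.ndim(images[i]) == 3 else mask
        # Initial values: up-sampled result inside Fi, original background Bi outside.
        current = np.where(m, current, images[i])
        synthesized = synthesize(current, mask)             # reconstruct Ii'
        # Keep the synthesized pixels only at position Fi; elsewhere keep Bi.
        current = np.where(m, synthesized, images[i])
        if i > 0:
            # Up-sample by pixel replication to initialize the next finer level.
            current = np.repeat(np.repeat(current, 2, axis=0), 2, axis=1)
    return current
```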

Image Replacement: Demo

Experimental Results

Table 1: Average training time in Section 3 for input images of various sizes. Units of time: milliseconds (ms).
  Columns: size of input image (pixels): 64x64, 96x96, 128x128
  Rows: PCA; projecting to eigenspace; VQ; total time (ms)
  [numeric values not recoverable from the transcript]

Table 2: Average synthesizing time in Section 4 for output images of various sizes.
  Columns: size of output image (pixels): 200x200, 300x300, 400x400, 600x600
  Row: synthesis time (ms)
  [numeric values not recoverable from the transcript]

Table 3: The replacement information of each level in Figure 9 and Figure 12 for the multi-resolution approach. "~0" means close to 0 ms.
  Columns: level; width (pixels); height (pixels); number of whole patches; synthesis time (ms)
  [numeric values not recoverable from the transcript; the synthesis times of the coarser levels are ~0 ms]

Table 4: The times in Figure 9 and Figure 12 for the various processes. The total time of the training and synthesis processes includes the multi-resolution approach, PCA, projection (the patches are projected onto the eigenspace Ψ to obtain the weight vectors), VQ, and synthesis.

  Method                               Gray value (ms)   RGB space (ms)
  Training: multi-resolution           32                31
  Training: PCA                        -                 -
  Training: projecting to eigenspace   -                 -
  Training: VQ                         -                 -
  Synthesis: synthesis                 78                108
  Total time                           -                 -

  [remaining values not recoverable from the transcript]

Conclusions
- We developed a system comprising two modules, the texture analysis module and the texture synthesis module. The system can be applied to two different purposes: synthesizing a large image and replacing a locally removed region.
- Depending on whether the training pattern is non-periodic or periodic, different sampling methods are used to obtain different numbers of patches, in order to reduce the appearance of seams in the output synthesized image.
- Most of the time-consuming operations are removed: the analysis module reduces the dimensionality of the training data and clusters it, so the synthesis module can synthesize a large output image very quickly while keeping the geometric structures and veins continuous.

Conclusions
- No initial random values are needed: the same process can also be used to replace removed regions. Here, the multi-resolution approach is applied to image replacement without assigning initial random or approximate values.
- The image replacement process runs through the multi-resolution pyramid: the down-sampling step serves the analysis process by compiling the training data, and the up-sampling step serves the reconstruction process by assigning the initial values.
- This approach therefore enables the system to handle large removed regions and to obtain more realistic images (or textures) quickly.

Experimental Results Texture Synthesis (1)

Experimental Results Texture Synthesis (2)

Experimental Results Texture Synthesis (3)

Experimental Results Texture Synthesis (4)

Experimental Results Texture Synthesis (5)

Experimental Results Image Replacement (1)

Experimental Results Image Replacement (2)