Synthesizing Natural Textures
Michael Ashikhmin, University of Utah

Outline
- Wei and Levoy texture synthesis
- New texture synthesis algorithm
- User control
- Results and discussion

Wei and Levoy texture synthesis
The output image is initialized to random noise with a histogram equal to that of the input image. For each pixel in the output image, in scanline order:
- In the output image, an L-shaped neighborhood of the current pixel of a specific size is considered (see Figure 2).
- A search is performed in the input sample for a pixel with a neighborhood most similar to the one identified in the previous step.
- The current pixel value in the output image is copied from the position in the input sample identified as most similar by this search.
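The loop above can be sketched as follows. This is a minimal single-resolution Python version with hypothetical names: exhaustive search over the sample, toroidal (wraparound) neighborhoods, and none of the TSVQ acceleration the actual WL implementation uses.

```python
import numpy as np

def l_shape_offsets(half):
    # Causal L-shaped neighborhood: the full rows above the current
    # pixel plus the pixels to its left in the current row.
    offs = [(dy, dx) for dy in range(-half, 0) for dx in range(-half, half + 1)]
    offs += [(0, dx) for dx in range(-half, 0)]
    return offs

def wl_synthesize(sample, out_h, out_w, half=1, seed=0):
    """Sketch of Wei/Levoy single-resolution synthesis (exhaustive search)."""
    rng = np.random.default_rng(seed)
    h, w, _ = sample.shape
    # Noise initialization: pixels drawn from the sample itself, so the
    # output histogram matches the input histogram by construction.
    out = sample[rng.integers(0, h, (out_h, out_w)),
                 rng.integers(0, w, (out_h, out_w))].astype(np.float64)
    offs = l_shape_offsets(half)
    for y in range(out_h):
        for x in range(out_w):
            n_out = np.array([out[(y + dy) % out_h, (x + dx) % out_w]
                              for dy, dx in offs])
            best, best_d = (0, 0), np.inf
            # Exhaustive search over every sample position.
            for sy in range(h):
                for sx in range(w):
                    n_in = np.array([sample[(sy + dy) % h, (sx + dx) % w]
                                     for dy, dx in offs])
                    d = np.sum((n_out - n_in) ** 2)  # L2 distance
                    if d < best_d:
                        best, best_d = (sy, sx), d
            out[y, x] = sample[best]
    return out.astype(sample.dtype)
```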

Wei and Levoy texture synthesis
The similarity of two neighborhoods N1 and N2 is computed according to the L2 distance between them, a sum over all pixels in the neighborhood of squared differences of pixel values at each position:

D(N1, N2) = sum over p of [ (R1(p) - R2(p))^2 + (G1(p) - G2(p))^2 + (B1(p) - B2(p))^2 ]

Here R, G, and B are pixel values at position p in the red, green, and blue channels respectively, and subscripts refer to the image to which the particular neighborhood belongs.
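A direct transcription of this distance (hypothetical function name; each neighborhood is flattened to an array of RGB pixels):

```python
import numpy as np

def neighborhood_l2(n1, n2):
    """Sum over pixels of squared R, G, B differences.
    n1, n2: arrays of shape (num_pixels, 3) holding RGB values."""
    d = n1.astype(np.float64) - n2.astype(np.float64)
    return float(np.sum(d * d))
```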

Wei and Levoy texture synthesis
The WL algorithm runs into difficulties in cases where the texture to be synthesized should consist of an arrangement of small objects. The problem only gets worse with TSVQ acceleration which, as Wei and Levoy point out, tends to blur the output even more, making special measures to avoid this behavior all the more necessary.

New texture synthesis algorithm

Our algorithm is as follows:
- The array of original pixel positions is initialized to random valid positions in the input image.
- For each pixel in the output image, in scanline order:
  - In the output image, an L-shaped neighborhood of the current pixel of a specific size is considered.
  - For each pixel in this neighborhood, use its original position in the input image to generate a candidate pixel whose location is appropriately shifted.
  - Remove duplicate candidates.
  - Search the candidate list for a pixel with a neighborhood most similar to the current L-shaped neighborhood in the output image.

New texture synthesis algorithm
- The current pixel value in the output image is copied from the position in the input sample identified as most similar by this search, and this position is recorded in the array of original positions.
- If necessary, modify the algorithm for the last few rows and go over the top few rows again to improve tileability of the resulting texture.
Our method can be extended in a straightforward way to create a multiresolution synthesis algorithm similar to the one described by Wei and Levoy.
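The steps above can be sketched as follows. This is a minimal Python version with hypothetical names: boundary pixels simply skip out-of-range neighbors, and an empty candidate list falls back to a random source position.

```python
import numpy as np

def synthesize(sample, out_h, out_w, half=2, seed=0):
    """Sketch of the candidate-based algorithm: each output pixel records
    its source position, and candidates come from the appropriately
    shifted source positions of the causal (L-shaped) neighbors."""
    rng = np.random.default_rng(seed)
    h, w, _ = sample.shape
    # Causal L-shaped neighborhood offsets.
    offs = [(dy, dx) for dy in range(-half, 0) for dx in range(-half, half + 1)]
    offs += [(0, dx) for dx in range(-half, 0)]
    # Array of original positions, initialized to random valid positions.
    src = np.stack([rng.integers(0, h, (out_h, out_w)),
                    rng.integers(0, w, (out_h, out_w))], axis=-1)
    out = sample[src[..., 0], src[..., 1]].astype(np.float64)
    for y in range(out_h):
        for x in range(out_w):
            # Shifted candidates from already-synthesized neighbors;
            # the set removes duplicate candidates.
            cands = set()
            for dy, dx in offs:
                ny, nx = y + dy, x + dx
                if 0 <= ny < out_h and 0 <= nx < out_w:
                    cy, cx = src[ny, nx, 0] - dy, src[ny, nx, 1] - dx
                    if 0 <= cy < h and 0 <= cx < w:
                        cands.add((cy, cx))
            if not cands:  # e.g. at the very first pixel
                cands = {(int(rng.integers(0, h)), int(rng.integers(0, w)))}
            # Pick the candidate whose L-shaped neighborhood best matches
            # the current output neighborhood (L2 distance).
            best, best_d = None, np.inf
            for (cy, cx) in cands:
                d = 0.0
                for dy, dx in offs:
                    ny, nx = y + dy, x + dx
                    sy, sx = cy + dy, cx + dx
                    if (0 <= ny < out_h and 0 <= nx < out_w
                            and 0 <= sy < h and 0 <= sx < w):
                        d += np.sum((out[ny, nx] - sample[sy, sx]) ** 2)
                if d < best_d:
                    best, best_d = (cy, cx), d
            src[y, x] = best          # record the chosen source position
            out[y, x] = sample[best]  # copy its pixel value
    return out.astype(sample.dtype)
```

Because only a handful of candidates are scored per pixel, the inner search is constant-time rather than a scan over the whole sample.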

Multiresolution

User control
We believe that the simplest and most general way for a user to control the output of a texture synthesis algorithm is to provide a target image showing how the result should look. The procedure is the same as before, but we consider the complete square neighborhood of a pixel instead of the L-shaped causal one. The candidates in the input image are found as before.

User control
Computing the value used to choose the best candidate now proceeds in two steps:
- First, as in the basic algorithm, the pixel-by-pixel L2 difference of the top L-shaped parts of the neighborhoods in the input and output images is computed.
- We then add the L2 difference between the pixels in the bottom part of the candidate neighborhood in the input texture and the pixels in the output neighborhood under consideration.
If greater similarity is desired, one or more further iterations of the algorithm can be performed, with each iteration using the result of the previous one as the starting point of the synthesis. We believe the best approach is to simply show the user the result after some number of iterations and let the user judge when to stop the process.
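A sketch of this two-part score (hypothetical names; it assumes the output's bottom pixels still hold target-image values on the first pass, or the previous iteration's result afterwards, and that both neighborhoods lie fully inside their images):

```python
import numpy as np

def controlled_score(out, sample, y, x, cy, cx, half):
    """Score candidate (cy, cx) in `sample` against output position (y, x),
    comparing the complete square neighborhood in two parts."""
    top = [(dy, dx) for dy in range(-half, 0) for dx in range(-half, half + 1)]
    top += [(0, dx) for dx in range(-half, 0)]
    bottom = [(0, dx) for dx in range(0, half + 1)]
    bottom += [(dy, dx) for dy in range(1, half + 1)
               for dx in range(-half, half + 1)]
    d = 0.0
    # Step 1: L2 over the causal L-shaped top part (already synthesized).
    for dy, dx in top:
        d += np.sum((out[y + dy, x + dx].astype(float)
                     - sample[cy + dy, cx + dx].astype(float)) ** 2)
    # Step 2: add L2 over the bottom part, whose output pixels still hold
    # target-image values (first pass) or the previous iteration's result.
    for dy, dx in bottom:
        d += np.sum((out[y + dy, x + dx].astype(float)
                     - sample[cy + dy, cx + dx].astype(float)) ** 2)
    return d
```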

Results and discussion
We can see that the algorithm achieves good results for the class of natural textures it is designed for, as shown in the top half of Figure 9.

Results and discussion
There are situations where neither algorithm does a very good job, but the types of artifacts are quite different: excessive blurring for the WL algorithm and “too rough” images for our algorithm.

Results and discussion
Clouds and waves are too smooth for our algorithm. In some such cases the quality can be somewhat improved by increasing the neighborhood size; in others it is simpler to just use the original WL algorithm.

Results and discussion
Compare this result with the basic version and the user-controlled version. The period of the grid is controlled by the user input rather than by the original texture.

Results and discussion
The neighborhood size plays a somewhat different role in our method than in the original WL algorithm. Since we compare neighborhoods only among a few well-chosen candidates, the size need only be large enough to reliably distinguish the best choice among them. In general, the smoother the texture, the larger the neighborhood needed for perceptually better results.

Results and discussion
All images shown were created with a 5x5 neighborhood (a 5x2 L-shaped region). Note that most textures in the original Wei and Levoy paper required significantly larger neighborhoods, at least 9x9 pixels. We create at most a few dozen candidates instead of considering the complete search space of many thousands of pixels, which makes acceleration techniques such as the TSVQ used in the WL algorithm unnecessary for our method. Together, these two factors increase the speed of the algorithm substantially.

Results and discussion
The TSVQ-accelerated WL algorithm takes on the order of 20 to 30 seconds to synthesize a 200x200 texture. Our timing for the first pass of the user-controlled version of the algorithm is only about 0.3 seconds per iteration for a texture of this size, and about 1.5 seconds for a 500x500 image. The run time is slightly better (1.3 seconds) for the basic (no user control) version of the algorithm, since it deals with only two images instead of three (input, output, target).

Results and discussion

Unfortunately, regardless of its efficiency, our algorithm is not well suited for temporal texture synthesis in most cases. The problem is that most motions one would want to synthesize are rather smooth and do not contain a significant high-frequency component in the temporal domain, which we would need to mask transitions between the separate time-space patches generated by our method.