

Texture

Limitation of pixel-based processing

Edge detection with different thresholds

What is texture? There is no precise definition. The term is often used to refer to all the "details" in an image (for example, images are sometimes decomposed into shape + texture). In our case, we refer to texture as images or patterns with some kind of "structure".

What is Texture? For example, consider an image whose pixels are 50% black and 50% white. Three different images can have the same intensity distribution yet completely different textures.

What is texture? (cont'd) Textures range from pure repetition, to stochastic patterns, to a mixture of both.

Texture Texture refers to the surface quality or "feel" of an object - smooth, rough, soft, etc. Textures may be actual (felt with touch - tactile) or implied (suggested by the way an artist has created the work of art - visual).

What would we like to do with textures? Detect regions/images containing texture. Classify images by their texture. Segmentation: divide the image into regions of uniform texture. Synthesis: given a sample of a texture, generate random images with the same texture. Compression (especially fractal-based).

Internet source for textures omeCreative.aspx

Actual Texture Texture is the tactile quality of a surface or the representation of that surface. If it is the way something feels when you touch it, it is called real texture.

Simulated or Implied Texture Simulated texture is what your eyes tell you about how things in a drawing would feel if you could touch them. Photography is very good at translating real texture into implied or simulated texture, but painters and draftsmen can also learn to recreate the visual appearance of textures in very convincing ways.

OBJECT by Meret Oppenheim

Simplest Texture Discrimination Compare histograms: divide intensities into discrete ranges, then count how many pixels fall in each range.

How/Why to Compare The simplest comparison is SSD (sum of squared differences); there are many others. We can also view this probabilistically: a histogram is a set of samples from a probability distribution, and with many samples it approximates that distribution. We then test whether the two sets of samples were drawn from the same distribution, i.e., is the difference greater than we would expect if both samples came from the same distribution?

Chi-Square Chi-square distance between texton histograms hi and hj (Malik): χ²(hi, hj) = ½ Σk [hi(k) − hj(k)]² / [hi(k) + hj(k)]
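A minimal sketch of this histogram comparison in Python (assuming 8-bit grayscale images stored as NumPy arrays; the bin count and helper names are illustrative, and the chi-square form follows the formula above):

import numpy as np

def intensity_histogram(image, bins=16):
    # Divide intensities into discrete ranges and count pixels in each range.
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    return hist / hist.sum()  # normalize so images of different sizes compare fairly

def ssd_distance(h1, h2):
    # Simplest comparison: sum of squared differences between bins.
    return np.sum((h1 - h2) ** 2)

def chi_square_distance(h1, h2, eps=1e-10):
    # 0.5 * sum_k (h1[k] - h2[k])^2 / (h1[k] + h2[k]), guarding against empty bins.
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

Two patches of the same texture should give a small chi-square distance; patches of different textures a larger one.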

More Complex Discrimination Histogram comparison is very limiting: every pixel is treated independently, and everything happens at a tiny scale. Instead, use the outputs of filters at different scales.

What are the Right Filters? Multi-scale filters are good, since we don't know the right scale a priori. The easiest comparison is naive Bayes. Filter image one: (F1, F2, …). Filter image two: (G1, G2, …). Let S denote the event that images one and two have the same texture. Approximate P(F1, G1, F2, G2, … | S) by P(F1, G1 | S) · P(F2, G2 | S) · …

What are the Right Filters? The more independent the filters, the better: within an image, the output of one filter should be independent of the others, because our comparison assumes independence.
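As a loose illustration of this naive-Bayes-style comparison (the per-filter likelihood below is purely an assumption: a Gaussian on the difference in mean filter-response energy; filter_bank is a hypothetical list of 2D kernels such as those built in the sketches that follow):

import numpy as np
from scipy.ndimage import convolve

def log_likelihood_same_texture(img1, img2, filter_bank, sigma=1.0):
    # Approximate P(F1, G1, F2, G2, ... | S) by a product of per-filter terms,
    # i.e. a sum of per-filter log-likelihoods (the independence assumption).
    total = 0.0
    for f in filter_bank:
        F = convolve(img1.astype(float), f)  # responses of image one
        G = convolve(img2.astype(float), f)  # responses of image two
        # Assumed per-filter model: under S the mean response energies agree
        # up to Gaussian noise with standard deviation sigma.
        d = np.abs(F).mean() - np.abs(G).mean()
        total += -0.5 * (d / sigma) ** 2
    return total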

Difference of Gaussian Filters
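A short sketch of building such a difference-of-Gaussians filter (kernel size and the two standard deviations are illustrative assumptions; any pair with sigma2 > sigma1 gives a band-pass "spot" detector):

import numpy as np

def gaussian_kernel(size, sigma):
    # 2D Gaussian kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    g = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def dog_filter(size=15, sigma1=1.0, sigma2=2.0):
    # Difference of Gaussians: a narrow Gaussian minus a wider one.
    return gaussian_kernel(size, sigma1) - gaussian_kernel(size, sigma2)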

Spots and Oriented Bars (Malik and Perona)

Gabor Filters Gabor filters at different scales and spatial frequencies; the top row shows the anti-symmetric (odd) filters, the bottom row the symmetric (even) filters.
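A minimal sketch of a Gabor kernel (a Gaussian envelope multiplied by a sinusoid); the even/symmetric filters use a cosine carrier and the odd/anti-symmetric filters a sine carrier. All parameter values here are illustrative assumptions:

import numpy as np

def gabor_kernel(size=21, sigma=4.0, wavelength=8.0, theta=0.0, odd=False):
    # Gaussian envelope modulated by a sinusoid oriented at angle theta.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    xr = xx * np.cos(theta) + yy * np.sin(theta)  # coordinate along the carrier
    envelope = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    carrier = np.sin if odd else np.cos
    return envelope * carrier(2 * np.pi * xr / wavelength)

# A small bank at several orientations, frequencies, and both phases:
bank = [gabor_kernel(theta=t, wavelength=w, odd=o)
        for t in np.linspace(0, np.pi, 4, endpoint=False)
        for w in (4.0, 8.0)
        for o in (False, True)]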

Gabor Filters are Examples of Wavelets We know two bases for images: pixels, which are localized in space, and Fourier basis functions, which are localized in frequency. Wavelets are a little of both, which makes them good for measuring frequency locally.

Markov Model Captures local dependencies: each pixel depends on its neighborhood. Example: a 1D first-order model. By the chain rule, P(p1, p2, …, pn) = P(p1)·P(p2|p1)·P(p3|p2,p1)·…; under the first-order Markov assumption this becomes P(p1)·P(p2|p1)·P(p3|p2)·P(p4|p3)·…

Markov Chains A Markov chain is a sequence of random variables x1, x2, …, where xt is the state of the model at time t. Markov assumption: each state depends only on the previous one, with the dependency given by a conditional probability P(xt | xt−1). This is a first-order Markov chain; an N-th order Markov chain conditions on the previous N states, P(xt | xt−1, …, xt−N).
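A small sketch of sampling such a first-order chain over pixel values (the two-state transition matrix below is an illustrative assumption that produces a streaky 1D black/white "texture"):

import numpy as np

def sample_markov_chain(transition, initial, length, rng=None):
    # transition[i, j] = P(next state = j | current state = i)
    rng = rng or np.random.default_rng()
    states = [rng.choice(len(initial), p=initial)]
    for _ in range(length - 1):
        prev = states[-1]
        states.append(rng.choice(transition.shape[1], p=transition[prev]))
    return np.array(states)

# States 0 (black) and 1 (white) that tend to repeat their previous value.
T = np.array([[0.9, 0.1],
              [0.1, 0.9]])
row = sample_markov_chain(T, initial=np.array([0.5, 0.5]), length=64)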

Markov Random Field A Markov random field (MRF) is the generalization of a Markov chain to two or more dimensions. First-order MRF: the probability that pixel X takes a certain value depends only on the values of its four immediate neighbors A, B, C, and D, i.e. P(X | all other pixels) = P(X | A, B, C, D). Higher-order MRFs have larger neighborhoods.

Texture Synthesis [Efros & Leung, ICCV 99] Can apply a 2D version of text synthesis.

Synthesizing One Pixel Given a sample image and a partially generated image, what value should pixel x take? Find all the windows in the sample image that match x's neighborhood, considering only the pixels in the neighborhood that are already filled in. To synthesize x, pick one matching window at random and assign x the value of that window's center pixel.

Really Synthesizing One Pixel An exact neighbourhood match might not be present in the sample image, so we find the best matches using SSD error and randomly choose between them, preferring better matches with higher probability.
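A condensed sketch of this per-pixel step (the window size, error tolerance, and helper name are illustrative assumptions; only already-filled neighbors enter the SSD, as described above, and the pixel is assumed to lie at least half a window away from the image border):

import numpy as np

def synthesize_pixel(sample, out, mask, y, x, win=11, tol=0.1, rng=None):
    # Choose a value for out[y, x] by matching its partly-filled neighborhood
    # against every window of the sample texture.
    rng = rng or np.random.default_rng()
    h = win // 2
    nbr = out[y - h:y + h + 1, x - h:x + h + 1]
    valid = mask[y - h:y + h + 1, x - h:x + h + 1]  # which neighbors are filled in
    errs, centers = [], []
    for i in range(h, sample.shape[0] - h):
        for j in range(h, sample.shape[1] - h):
            w = sample[i - h:i + h + 1, j - h:j + h + 1]
            errs.append(np.sum(valid * (w - nbr) ** 2) / valid.sum())
            centers.append(w[h, h])
    errs = np.array(errs)
    # Keep every match within (1 + tol) of the best, then pick one at random.
    good = np.where(errs <= errs.min() * (1 + tol))[0]
    return centers[rng.choice(good)]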

Growing Texture Starting from the initial image, "grow" the texture one pixel at a time.

Window Size Controls Regularity

More Synthesis Results Increasing window size

More Results aluminum wire; reptile skin

Failure Cases Growing garbage; verbatim copying

Image-Based Text Synthesis

Image Segmentation Goal: identify groups of pixels that go together

The Goals of Segmentation Separate image into coherent “objects”

The Goals of Segmentation Separate image into coherent "objects". Group together similar-looking pixels for efficiency of further processing.

Segmentation A compact representation of image data in terms of a set of components. Components share "common" visual properties. Properties can be defined at different levels of abstraction.

Image Segmentation

Introduction to image segmentation Example 1: segmentation based on greyscale. A very simple 'model' of greyscale leads to inaccuracies in object labelling.
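A minimal sketch of such a greyscale segmentation (the global threshold value is an assumption; this is exactly the kind of overly simple model that mislabels objects whose grey levels overlap the background's):

import numpy as np

def threshold_segmentation(image, threshold=128):
    # Label each pixel foreground (1) or background (0) by its grey level alone.
    return (image >= threshold).astype(np.uint8)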

Introduction to image segmentation Example 2: segmentation based on texture. This enables object surfaces with varying patterns of grey to be segmented.

Introduction to image segmentation

Example 3: segmentation based on motion. The main difficulty of motion segmentation is that an intermediate step is required to (either implicitly or explicitly) estimate an optical flow field. The segmentation must be based on this estimate and not, in general, on the true flow.

Introduction to image segmentation

Example 4: segmentation based on depth. This example shows a range image, obtained with a laser range finder. A segmentation based on range (the object's distance from the sensor) is useful in guiding mobile robots.

Introduction to image segmentation (figure: original image, range image, segmented image)

General ideas Tokens: whatever we need to group (pixels, points, surface elements, etc.). Bottom-up segmentation: tokens belong together because they are locally coherent. Top-down segmentation: tokens belong together because they lie on the same visual entity (object, scene, …). These two are not mutually exclusive.

What is Segmentation? Clustering image elements that "belong together". Partitioning: divide into regions/sequences with coherent internal properties. Grouping: identify sets of coherent tokens in the image.
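As a hedged sketch of clustering-based grouping (plain k-means on per-pixel color features; the value of k, the iteration count, and the function name are assumptions rather than any method prescribed by the slides):

import numpy as np

def kmeans_segment(image, k=3, iters=20, rng=None):
    # Cluster pixel feature vectors (here: color values of an H x W x C image)
    # and return an H x W map of cluster labels.
    rng = rng or np.random.default_rng()
    feats = image.reshape(-1, image.shape[-1]).astype(float)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    for _ in range(iters):
        # Assign each pixel to its nearest cluster center.
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned pixels.
        for c in range(k):
            if np.any(labels == c):
                centers[c] = feats[labels == c].mean(axis=0)
    return labels.reshape(image.shape[:2])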

Basic ideas of grouping in human vision Gestalt properties. Figure-ground discrimination.

Examples of Grouping in Vision

Similarity

Symmetry

Common Fate

Proximity

Gestalt Factors

Image Segmentation – Toy Example