Texture perception Lavanya Sharan February 23rd, 2011.

Presentation transcript:

Texture perception Lavanya Sharan February 23rd, 2011

Typical texture perception display Image source: Landy & Graham (2004)

Typical texture perception display Image source: VPfaCGP Fig 8.5 These are examples of texture segregation/segmentation tasks. Similar tasks arise in visual search (e.g., find a T among Ls).
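A display of this kind is easy to generate. The sketch below (my own illustration, not from the lecture) renders a grid of oriented bars with one quadrant at the odd orientation — the classic orientation-defined segregation stimulus; all sizes and angles are arbitrary choices:

```python
import numpy as np

def bar(size, theta):
    """Binary image of a line segment through the centre at angle theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    dist = np.abs(-x * np.sin(theta) + y * np.cos(theta))   # distance to the line
    along = np.abs(x * np.cos(theta) + y * np.sin(theta))   # position along it
    return ((dist < 1.0) & (along <= half - 1)).astype(float)

def segregation_display(grid=8, cell=11, bg=np.pi / 4, fg=-np.pi / 4):
    """grid x grid array of bars; the top-left quadrant gets the odd orientation."""
    img = np.zeros((grid * cell, grid * cell))
    for r in range(grid):
        for c in range(grid):
            theta = fg if (r < grid // 2 and c < grid // 2) else bg
            img[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell] = bar(cell, theta)
    return img

display = segregation_display()
```

Observers report the odd quadrant "popping out" without scrutiny of individual elements, which is what makes these displays a probe of preattentive texture processing.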

Explaining performance at these tasks The ‘back pocket’ model. Image source: Landy & Graham (2004)

‘Back pocket’/LNL/FRF/Second-order model etc. Image source: Landy & Graham (2004) Input After 1st stage After 2nd stage Output

‘Back pocket’/LNL/FRF/Second-order model etc. Image source: Landy & Graham (2004) There is a lot of work on these models. They are not tied to specific features (e.g., line terminations), they explain performance on many texture segregation tasks, and they are biologically plausible.
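The three FRF (filter–rectify–filter) stages map directly onto code. Below is a minimal numpy sketch of one channel; the filter sizes, wavelength, and the squaring nonlinearity are illustrative choices, not values from Landy & Graham:

```python
import numpy as np

def gabor(size, wavelength, theta, sigma):
    """Odd-phase Gabor: the oriented linear filter of the first stage."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * xr / wavelength)

def gaussian(size, sigma):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return g / g.sum()

def conv2(img, kern):
    """'Same'-size linear 2-D convolution via zero-padded FFTs."""
    H, W = img.shape
    kh, kw = kern.shape
    s = (H + kh - 1, W + kw - 1)
    full = np.fft.irfft2(np.fft.rfft2(img, s) * np.fft.rfft2(kern, s), s)
    return full[kh // 2:kh // 2 + H, kw // 2:kw // 2 + W]

def frf_response(img, theta, wavelength=8.0):
    first = conv2(img, gabor(21, wavelength, theta, sigma=4.0))  # Filter (L)
    rectified = first ** 2                                       # Rectify (N)
    return conv2(rectified, gaussian(31, sigma=8.0))             # Filter (L)

# Orientation-defined texture: vertical stripes on the left, horizontal on the right.
n = 64
phase = np.sin(2 * np.pi * np.arange(n) / 8.0)
img = np.hstack([np.tile(phase, (n, 1)), np.tile(phase[:, None], (1, n))])
resp = frf_response(img, theta=0.0)  # channel tuned to the left half's orientation
```

Applied to this stimulus, the channel responds strongly on the half whose orientation matches its first-stage filter and weakly elsewhere; a decision stage only has to threshold this second-stage map to locate the texture boundary.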

For example, Malik & Perona (1990) Image source: VPfaCGP Fig 8.3

For example, Bergen & Adelson (1988)

Back pocket model works on most lab stimuli. Image source: Ben-Shahar 2006

A failure case for the back pocket model. Image source: Ben-Shahar 2006 (panels: textures and manual annotations) The orientation gradient is negligible across the perceptually salient boundaries.

Lab stimuli vs. Real world stimuli Image sources: Landy & Graham (2004); VPfaCGP Fig 8.2 Lab stimuli: lots of psychophysics, many computational models of perception. Real-world stimuli: hardly any psychophysics, very few computational models of perception (those that exist are mostly in computer vision).

Modeling texture appearance (Portilla & Simoncelli 2001) Like Heeger & Bergen, impose constraints iteratively. Four classes of constraints. Each set adds something about real world texture appearance. Analytical model (as opposed to patch-based models) allows a framework for understanding texture perception.
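The "impose constraints iteratively" idea can be shown with a toy two-constraint synthesizer. This is a drastic simplification of Portilla & Simoncelli, who use joint wavelet statistics; here the two constraint sets are just the pixel histogram and the Fourier amplitude spectrum:

```python
import numpy as np

def match_histogram(src, target):
    """Impose target's marginal pixel distribution on src by rank matching."""
    flat = np.empty(src.size)
    flat[np.argsort(src, axis=None)] = np.sort(target, axis=None)
    return flat.reshape(src.shape)

def match_spectrum(src, target):
    """Impose target's Fourier amplitude spectrum, keeping src's phase."""
    phase = np.angle(np.fft.fft2(src))
    mag = np.abs(np.fft.fft2(target))
    return np.real(np.fft.ifft2(mag * np.exp(1j * phase)))

def synthesize(target, iters=20, seed=0):
    """Start from noise; alternately project onto each constraint set."""
    x = np.random.default_rng(seed).standard_normal(target.shape)
    for _ in range(iters):
        x = match_spectrum(x, target)
        x = match_histogram(x, target)
    return x

target = np.tile(np.sin(2 * np.pi * np.arange(32) / 8.0), (32, 1))
synth = synthesize(target)
```

Each projection can undo the other, so the loop is repeated until the image approximately satisfies both; the full model does the same thing over a much richer set of wavelet-domain statistics.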

Shape from texture Under the assumption of isotropic texture patterns, one can estimate the slant and tilt of surfaces. Image source: VPfaCGP Fig 8.7
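A concrete instance of the isotropy argument (my own worked example, not from the slides): under orthographic projection, a circular texture element on a slanted plane projects to an ellipse whose aspect ratio is cos(slant), with the minor axis pointing in the tilt direction. So measuring ellipse axes in the image gives slant directly:

```python
import math

def slant_from_foreshortening(minor_axis, major_axis):
    """Isotropy + orthographic projection: a circle on the surface projects
    to an ellipse with aspect ratio minor/major = cos(slant)."""
    return math.degrees(math.acos(minor_axis / major_axis))

# Ellipses compressed to half their width imply about 60 degrees of slant.
slant = slant_from_foreshortening(0.5, 1.0)
```

Under perspective projection the foreshortening also varies across the image, which is exactly the extra distortion the Todd et al. slide below illustrates.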

Slant, tilt & perspective interact to produce texture distortions Image source: Todd et al. 2005

What about real world images? Torralba & Oliva (2002)

Summary ✓ Most perceptual studies treat texture as simple black-and-white shapes. ✓ We have learnt a lot from these stimuli. ✓ It is time to examine real-world textures; some methods to manipulate them exist (e.g., computer vision methods). ✓ Real-world texture overlaps with real-world materials. More next time.