Materials II Lavanya Sharan March 2nd, 2011

Computational thinking about materials: Physics-based vs. pseudo physics-based

Computational thinking about materials
Physics-based: precise, detailed, complex models.
Pseudo physics-based: quick, dirty, hacky models.

Computational thinking about materials
Physics-based: precise, detailed, complex models (BRDF, BTF, BSSRDF, etc.).
Pseudo physics-based: quick, dirty, hacky models (histogram statistics, contrast tricks, etc.).
Image sources: VPfaCGP Fig. 10.3, Fleming & Bülthoff (2005)

Computational thinking about materials
Physics-based: precise, detailed, complex models (BRDF, BTF, BSSRDF, etc.); useful for recreating appearance.
Pseudo physics-based: quick, dirty, hacky models (histogram statistics, contrast tricks, etc.); useful for conveying appearance.
Image sources: Henrik Wann Jensen, Motoyoshi et al. (2007)

Computational thinking about materials
Physics-based: large, unwieldy models for real-world stimuli; perceptual testing limited to toy worlds.
Pseudo physics-based: simple techniques to manipulate real-world stimuli; perceptual testing possible on real-world imagery.
Image sources: Boyaci et al. (2004), Doerschner et al. (2007), Sharan et al. (2008)

Computational thinking about materials
Physics-based: precise, detailed, complex models (BRDF, BTF, BSSRDF, etc.); useful for recreating appearance; large, unwieldy models for real-world stimuli; perceptual testing limited to toy worlds. 'Inverse optics', 'estimation'-based approaches.
Pseudo physics-based: quick, dirty, hacky models (histogram statistics, contrast tricks, etc.); useful for conveying appearance; simple techniques to manipulate real-world stimuli; perceptual testing possible on real-world imagery. 'Image statistics', 'classification'-based approaches.

Today: Physics-based and pseudo physics-based. Khan et al., SIGGRAPH 2006; Liu et al., CVPR 2010.

Image-based material editing Khan et al. (2006) Goal: change the material of an object in a photograph. Given: a single HDR image + an alpha matte to identify the object + a specification of the final material properties. [Figure: input and output]

Image-based material editing Why is this hard? Hugely underconstrained problem. Doing inverse optics from a single, unknown image is very difficult, and BRDF estimation techniques make many assumptions to get around this problem. [Figure: input and output]

Image-based material editing What are the unknowns here? The 3-D shape of the object (the input provides only silhouette information), the illumination on the object, and the current reflectance properties of the object (the input specifies only the final reflectance properties). In essence, nothing is known at input.

Image-based material editing Khan et al.’s clever solution. Somehow figure out 3-D shape and illumination, and use them to render the object in its new material. Make gross assumptions that violate physics but look perceptually okay. (Inverse optics folks will need smelling salts right about now.)

Image-based material editing Recovering shape. Depth map = luminance values. Compress the luminance and smooth the gradients to improve results. Key insight: the bas-relief ambiguity helps, as do the subsequent operations.
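The shape hack can be sketched in a few lines. This is an illustrative reconstruction, not Khan et al.'s actual code: the function name `depth_from_luminance`, the compression exponent, and the smoothing scheme are all assumptions.

```python
import numpy as np

def depth_from_luminance(lum, gamma=0.5, iters=20):
    """Treat (compressed) luminance as depth, then smooth.
    gamma and iters are illustrative choices, not values from the paper."""
    lum = lum.astype(float)
    rng = lum.max() - lum.min()
    d = (lum - lum.min()) / (rng + 1e-8)   # normalize to [0, 1]
    d = d ** gamma                          # compress bright values
    for _ in range(iters):                  # crude smoothing (wraps at borders)
        d = 0.25 * (np.roll(d, 1, 0) + np.roll(d, -1, 0) +
                    np.roll(d, 1, 1) + np.roll(d, -1, 1))
    return d
```

Because of the bas-relief ambiguity, the absolute depth scale hardly matters to a human observer; a smoothed, monotone function of luminance is often enough.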

Image-based material editing Recovering shape. If the final result is shown from the same viewpoint as before, and as a static image, you can get away with a lot.

Image-based material editing Recovering illumination. Environment map = background image pixels. Since we don't have a metallic sphere in the scene (the ideal way to capture illumination), approximate what the metallic sphere would have seen, i.e., the scene minus the object. Key insight: as long as the illumination is locally consistent, we are not very good at estimating it.
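A minimal sketch of "the scene minus the object": remove the object and diffuse the surrounding background pixels into the hole, so the filled image can stand in for what a probe sphere would have seen. The function name `fill_from_background` and the diffusion-based fill are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def fill_from_background(img, mask, iters=200):
    """mask == True marks the object region; fill it by diffusing the
    surrounding background values into the hole (toy hole-filling)."""
    out = img.astype(float).copy()
    out[mask] = out[~mask].mean()           # initialize the hole
    for _ in range(iters):
        avg = 0.25 * (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
                      np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = avg[mask]               # update only inside the hole
    return out
```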

Image-based material editing Recovering illumination. If the final result is shown in the same environment, this is a good approximation.

Image-based material editing Wait, what about specular highlights? Detect specular highlights, and paste them back in! Works because there is no presumed change in viewpoint, illumination or geometry.
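The highlight trick reduces to two steps: mark the brightest pixels as assumed highlights, then composite them back over the edited result. A toy sketch; the percentile threshold and function names are illustrative assumptions, not values from Khan et al.

```python
import numpy as np

def highlight_mask(lum, pct=99.0):
    """Mark the brightest pixels as (assumed) specular highlights."""
    return lum >= np.percentile(lum, pct)

def paste_highlights(edited, original, mask):
    """Composite the original highlight pixels over the edited image."""
    return np.where(mask, original, edited)
```

This only works because viewpoint, illumination, and geometry are all presumed unchanged, so the old highlights land in plausible places.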

Image-based material editing Totally fake but very believable translucency. Texture mapping to create translucency: use the estimate of geometry to warp the (filled-in) background pixels, then blur. Voilà! [Figure: original and result]
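The warp-and-blur step might look like the sketch below: displace each pixel by the local depth gradient (a crude stand-in for refraction through the estimated shape), then blur. `fake_translucency`, the displacement strength, and the box-blur are all illustrative assumptions.

```python
import numpy as np

def fake_translucency(background, depth, strength=5.0, blur=2):
    """Warp the (filled-in) background by the depth gradient, then blur."""
    gy, gx = np.gradient(depth)
    h, w = depth.shape
    ys, xs = np.indices((h, w))
    sy = np.clip(np.rint(ys + strength * gy).astype(int), 0, h - 1)
    sx = np.clip(np.rint(xs + strength * gx).astype(int), 0, w - 1)
    warped = background[sy, sx]
    for _ in range(blur):                   # crude blur (wraps at borders)
        warped = 0.25 * (np.roll(warped, 1, 0) + np.roll(warped, -1, 0) +
                         np.roll(warped, 1, 1) + np.roll(warped, -1, 1))
    return warped
```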

Image-based material editing Can manipulate gloss easily (using a known hack): luminance remapping to change perceived gloss. [Figure: original, specular result, diffuse result]
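The known hack is luminance remapping of the kind linked to histogram skew and perceived gloss (cf. Motoyoshi et al., 2007, cited earlier). A minimal sketch; the power-law remap and the specific gamma are illustrative assumptions:

```python
import numpy as np

def remap_gloss(lum, gamma=3.0):
    """gamma > 1 skews the luminance histogram positively (glossier look);
    gamma < 1 does the opposite. gamma value is illustrative."""
    lo, hi = lum.min(), lum.max()
    norm = (lum - lo) / (hi - lo + 1e-8)
    return norm ** gamma * (hi - lo) + lo

def skewness(x):
    """Sample skewness, used here to verify the histogram change."""
    x = x.ravel()
    return ((x - x.mean()) ** 3).mean() / (x.std() ** 3 + 1e-12)
```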

Image-based material editing Change textures entirely. Texture remapping using the estimate of geometry. [Figure: original and result]
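One plausible way to remap a texture through the estimated geometry is to derive surface normals from the depth map and use them as texture coordinates (lit-sphere style). This is a guess at the mechanism for illustration only; `remap_texture` and the normal-based lookup are not claimed to be the paper's method.

```python
import numpy as np

def remap_texture(texture, depth):
    """Index a texture by normals derived from the depth estimate."""
    gy, gx = np.gradient(depth)
    nz = 1.0 / np.sqrt(gx ** 2 + gy ** 2 + 1.0)
    nx, ny = -gx * nz, -gy * nz             # normal components in [-1, 1]
    th, tw = texture.shape[:2]
    u = ((nx + 1.0) / 2.0 * (tw - 1)).astype(int)
    v = ((ny + 1.0) / 2.0 * (th - 1)).astype(int)
    return texture[v, u]
```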

Image-based material editing Can do boring BRDF editing too. Given a BRDF specification, can render using the approximations of illumination and shape. [Figure: original, result I, result II]

Image-based material editing Contributions and criticisms.
+ Editing a single, unknown real-world photograph.
+ Producing very good results with pseudo-physics.
+ These hacks tell us something interesting about material perception.
- At the expense of physical correctness.
- Cannot explicitly change the viewpoint, illumination, or shape of the object in the original image.

Image-based material editing Contributions and criticisms.
+ Editing a single, unknown real-world photograph.
+ Producing very good results with pseudo-physics.
+ These hacks tell us something interesting about material perception.
- At the expense of physical correctness.
- Cannot explicitly change the viewpoint, illumination, or shape of the object in the original image.
[Figure: pseudo physics-based vs. physics-based. Image copyright: Ben Heine; image source: thepagansphinx.blogspot.com]

Real world material recognition Liu et al. (2010) Goal: Categorize the material in a given photograph. Given: A single uncalibrated image + matte + a training database of 10 material categories.

Real world material recognition Why is this hard? Hugely underconstrained problem. Doing inverse optics from a single, unknown image is very difficult, and BRDF estimation techniques make many assumptions to get around this problem. If you only need high-level categories, why not use object/texture recognition approaches? [Figure: stone, paper, fabric]

Real world material recognition What are the unknowns here? The 3-D shape of the object (the input provides only silhouette information), the illumination on the object, and the reflectance properties of the object. In essence, nothing is known at input.

Real world material recognition Liu et al.’s solution. Use learning. Sharan et al. created a dataset that was challenging even for humans. Let’s use that. Try a bunch of standard features, see if they work. Use a standard bag-of-words approach.
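The standard bag-of-words approach the slide refers to can be sketched as: learn a visual vocabulary by clustering local features, then describe each image by its normalized word histogram (which then feeds a classifier). A toy numpy version; function names, the tiny k-means, and all parameters are illustrative, not Liu et al.'s implementation.

```python
import numpy as np

def kmeans(feats, k, iters=10, seed=0):
    """Tiny k-means for building the visual vocabulary."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = feats[labels == j].mean(0)
    return centers

def bow_histogram(feats, centers):
    """Quantize each local feature to its nearest visual word and
    return the normalized word histogram for one image."""
    d = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    words = d.argmin(1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()
```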

Real world material recognition Previous work. Use learning. Use a 3-D texture database that is suboptimal for the material recognition problem. Try a bunch of standard features, see if they work. Use a standard bag-of-words approach. Choice of database matters, a lot. (It changes the problem, and changes the difficulty of the problem.)

Real world material recognition Database matters.

Real world material recognition Use standard features + some new ones.

[Figure: feature categories: micro-texture, shape, reflectance, color & texture]

Real world material recognition Use a standard LDA (latent Dirichlet allocation) model, with some tweaks.

Real world material recognition Beats the state of the art because of stronger features and a stronger model. Varma-Zisserman: 23.8%. Varma-Zisserman with Liu et al.'s features: 37.4%. Liu et al.: 44.6%.

Real world material recognition Trails human performance by a huge margin. Human performance: 92.3%. Liu et al.: 44.6%. The human result reproduces Sharan et al.'s perceptual finding on Mechanical Turk.

Real world material recognition Contributions and criticisms.
+ Categorizing a single, uncalibrated real-world photograph.
+ Better results than the state of the art.
+ Addresses the problem of material categorization, rather than recognizing a specific 3-D surface.
- Cannot explain human performance.
- Database too challenging; tailored for humans, not machines.

Real world material recognition Contributions and criticisms.
+ Categorizing a single, uncalibrated real-world photograph.
+ Better results than the state of the art.
+ Addresses the problem of material categorization, rather than recognizing a specific 3-D surface.
- Cannot explain human performance.
- Database too challenging; tailored for humans, not machines.
Explaining material perception: mid- and high-level vision theories needed; low-level explanations are insufficient. [Image copyright: Ben Heine; image source: thepagansphinx.blogspot.com]