Image Formation (approximately) Vision infers world properties from images. So we need to understand how images depend on these properties. Two key elements: –Geometry –Light –We consider only simple models of these.

(Russell Naughton) Camera Obscura "When images of illuminated objects... penetrate through a small hole into a very dark room... you will see [on the opposite wall] these objects in their proper form and color, reduced in size... in a reversed position, owing to the intersection of the rays". Da Vinci

Used to observe eclipses (e.g., Bacon) and by artists (e.g., Vermeer).

(Jack and Beverly Wilgus) Jetty at Margate, England, 1898.

Cameras First photograph due to Niépce. First on record: 1822.

Pinhole cameras Abstract camera model – a box with a small hole in it. Pinhole cameras work in practice. (Forsyth & Ponce)

The equation of projection (Forsyth & Ponce)

The equation of projection Cartesian coordinates: –We have, by similar triangles, that (x, y, z) -> (f x/z, f y/z, f) –Ignore the third coordinate, and get (x, y, z) -> (f x/z, f y/z).
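A minimal sketch of this projection in Python (the function name, focal length, and example points below are illustrative choices, not part of the slides):

```python
import numpy as np

def pinhole_project(points, f):
    """Pinhole projection: (x, y, z) -> (f*x/z, f*y/z).

    points: N x 3 array of scene points in camera coordinates (z > 0).
    f: focal length (distance from the pinhole to the image plane).
    """
    points = np.asarray(points, dtype=float)
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    return np.stack([f * x / z, f * y / z], axis=1)

# A point twice as far away lands half as far from the image center.
print(pinhole_project([[1.0, 2.0, 4.0], [1.0, 2.0, 8.0]], f=1.0))
# -> [[0.25, 0.5], [0.125, 0.25]]
```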

Distant objects are smaller (Forsyth & Ponce)

For example, consider one line segment from (x,0,z) to (x,y,z), and another from (x,0,2z) to (x,y,2z). These are the same length. The first projects in the image to a line from (fx/z, 0) to (fx/z, fy/z), and the second to a line from (fx/2z, 0) to (fx/2z, fy/2z), where we can rewrite the last point as (1/2)(fx/z, fy/z). The second line is half as long as the first.
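A quick numeric check of this claim, reusing the pinhole_project sketch above with the arbitrary choices f = 1, x = 1, y = 1, z = 1:

```python
# Assumes the pinhole_project sketch from the projection slide above.
near = pinhole_project([[1, 0, 1], [1, 1, 1]], f=1.0)  # -> (1.0, 0.0) and (1.0, 1.0): image length 1.0
far  = pinhole_project([[1, 0, 2], [1, 1, 2]], f=1.0)  # -> (0.5, 0.0) and (0.5, 0.5): image length 0.5
```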

Parallel lines meet. It is common to draw the image plane in front of the focal point; moving the image plane merely scales the image. (Forsyth & Ponce)

Vanishing points Each set of parallel lines meets at a different point – the vanishing point for this direction. Sets of parallel lines on the same plane lead to collinear vanishing points – the line is called the horizon for that plane.

For example, let's consider a line on the floor. We describe the floor with an equation like y = -1. A line on the floor is the intersection of that plane with a plane like x = az + b. Or, we can describe a line on the floor as (a, -1, b) + t(c, 0, d). (Why is this correct, and why does it have more parameters than the first way?) As a line gets far away, z -> infinity. If (x, -1, z) is a point on this line, its image is f(x/z, -1/z). As z -> infinity, -1/z -> 0. What about x/z? x/z = (az + b)/z = a + b/z -> a. So a point on the line approaches the image point (fa, 0). Notice this only depends on the slope a of the line x = az + b, not on the offset b. So two lines with the same slope have images that meet at the same point, (fa, 0), which is on the horizon.
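A small numerical illustration of this convergence (the slope, offsets, and focal length below are arbitrary choices):

```python
def image_of(x, y, z, f=1.0):
    """Pinhole projection (x, y, z) -> (f*x/z, f*y/z)."""
    return f * x / z, f * y / z

a, f = 0.5, 1.0                      # common slope of both lines; focal length
for b in (0.0, 3.0):                 # two different offsets: two parallel lines on the floor y = -1
    for z in (10.0, 100.0, 1000.0):  # walk out along the line x = a*z + b
        x = a * z + b
        print(b, z, image_of(x, -1.0, z, f))
# Both lines' image points approach (f*a, 0) = (0.5, 0): the shared vanishing point on the horizon.
```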

Properties of Projection Points project to points. Lines project to lines. Planes project to the whole image. Angles are not preserved. Degenerate cases: –A line through the focal point projects to a point. –A plane through the focal point projects to a line. –A plane perpendicular to the image plane projects to part of the image (bounded by its horizon).

Take out paper and pencil

Weak perspective (scaled orthographic projection) Issue –perspective effects, but not over the scale of individual objects –collect points into a group at about the same depth, then divide each point by the depth of its group (Forsyth & Ponce)

The Equation of Weak Perspective (x, y, z) -> (s x, s y), where the scale s = f / z0 is constant for all points (z0 is the reference depth of the group). Parallel lines no longer converge; they remain parallel.
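A minimal sketch of this in Python (function name and example points are illustrative; z0 is the assumed common depth of the group):

```python
import numpy as np

def weak_perspective_project(points, f, z0):
    """Scaled orthographic projection: drop z and scale x, y by s = f / z0,
    where z0 is a single reference depth shared by the whole group of points."""
    s = f / z0
    points = np.asarray(points, dtype=float)
    return s * points[:, :2]

# Parallel 3D lines stay parallel in the image: every point is scaled by the same s.
pts = np.array([[1.0, 0.0, 9.8], [1.0, 1.0, 10.2],
                [2.0, 0.0, 9.9], [2.0, 1.0, 10.1]])
print(weak_perspective_project(pts, f=1.0, z0=10.0))
```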

Pros and Cons of These Models Weak perspective: much simpler math. –Accurate when the object is small and distant. –Most useful for recognition. Pinhole perspective: much more accurate for scenes. –Used in structure from motion. When accuracy really matters, we must model real cameras.

Cameras with Lenses (Forsyth & Ponce)

Human Eye Lens. Fovea and surround. (See The Island of the Colorblind by Oliver Sacks.)

CCD Cameras

New Camera Design (Terry Boult)

Summary The camera loses information about depth. –A model of the camera tells us what information is lost. This will be important when we want to recover this information. Examples: –Motion: with multiple images. –Recognition: using a model. –Shape: how is the boundary of a smooth object related to its image?

Light A source emits photons. Photons travel in a straight line. When they hit an object they bounce off in a new direction or are absorbed (exceptions later). Then some reach the eye/camera.

Basic fact: Light is linear. Double the intensity of the sources, and you double the photons reaching the eye. Turn on two lights, and the photons reaching the eye are the same as the sum of the numbers when each light is on separately.

Modeling How Surfaces Reflect Light First, a language for describing light –striking a surface; –leaving a surface. Next, how do we model the relationship between the two? –This depends on the material, e.g., cloth or mirror.

Irradiance, E Light power per unit area (watts per square meter) incident on a surface. If the surface tilts away from the light, the same amount of light is spread over a bigger surface area (less irradiance).

Radiance, L Amount of light radiated from a surface into a given solid angle, per unit area (watts per square meter per steradian). Note: the area is the foreshortened area, as seen from the direction in which the light is being emitted.
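For reference, these two definitions are usually written (as in Forsyth & Ponce) as E = dΦ/dA, the incident power Φ per unit surface area, and L = d²Φ / (dω dA cos θ), the emitted power per unit solid angle ω per unit foreshortened area, where θ is the angle between the surface normal and the direction of emission.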

BRDF The bidirectional reflectance distribution function: the ratio of the radiance leaving a surface in a given direction to the irradiance arriving from another direction. It is the standard way to describe how a material reflects light.

BRDF Not Always Appropriate (Jensen, Marschner, Levoy, Hanrahan) BRDF vs. BSSRDF (don't ask)

Special Cases: Lambertian Albedo is the fraction of light reflected. Diffuse objects (cloth, matte paint). Brightness doesn't depend on viewpoint, but does depend on the angle θ between the light direction and the surface normal.
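A minimal sketch of Lambertian shading under simple assumptions (one distant light, unit-length direction vectors; the function name and values below are illustrative):

```python
import numpy as np

def lambertian_brightness(albedo, normal, light_dir, light_intensity=1.0):
    """Lambertian (diffuse) shading: brightness = albedo * intensity * cos(theta),
    where theta is the angle between the surface normal and the light direction.
    Independent of the viewing direction; clamped at zero when facing away."""
    n = np.asarray(normal, dtype=float)
    l = np.asarray(light_dir, dtype=float)
    n /= np.linalg.norm(n)
    l /= np.linalg.norm(l)
    return albedo * light_intensity * max(0.0, float(n @ l))

print(lambertian_brightness(0.8, normal=[0, 0, 1], light_dir=[0, 0, 1]))  # facing the light: 0.8
print(lambertian_brightness(0.8, normal=[0, 0, 1], light_dir=[1, 0, 1]))  # light at 45 degrees: ~0.57
```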

Lambertian Examples A scene (Oren and Nayar); a Lambertian sphere as the light moves (Steve Seitz).

Specular surfaces Another important class of surfaces is specular, or mirror-like. –Radiation arriving along a direction leaves along the specular direction (reflect about the normal). –Some fraction is absorbed, some reflected. –On real surfaces, energy usually goes into a lobe of directions. ( csNotes/Shading/Shading.html)

Specular surfaces ( csNotes/Shading/Shading.html) Brightness depends on viewing direction.

Phong's model Vision algorithms rarely depend on the exact shape of the specular lobe. Typically: –very, very small: mirror –small: blurry mirror –bigger: see only light sources as "specularities" –very big: faint specularities Phong's model: reflected energy falls off with cos^n(δθ), where δθ is the angle between the specular direction and the viewing direction. (Forsyth & Ponce)
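A sketch of a Phong-style specular term in that spirit (unit-length directions assumed; the coefficient k_s and exponent n below are illustrative, not from the slides):

```python
import numpy as np

def phong_specular(normal, light_dir, view_dir, k_s=0.5, n=50):
    """Phong-style specular term: reflect the light direction about the normal,
    then let reflected energy fall off as cos^n of the angle between that
    mirror direction and the viewing direction."""
    N = np.asarray(normal, dtype=float);    N /= np.linalg.norm(N)
    L = np.asarray(light_dir, dtype=float); L /= np.linalg.norm(L)
    V = np.asarray(view_dir, dtype=float);  V /= np.linalg.norm(V)
    R = 2.0 * float(N @ L) * N - L          # specular (mirror) direction
    return k_s * max(0.0, float(R @ V)) ** n

# Bright only when the viewer is close to the mirror direction; larger n gives a tighter highlight.
print(phong_specular([0, 0, 1], light_dir=[1, 0, 1], view_dir=[-1, 0, 1]))  # at the mirror direction: 0.5
print(phong_specular([0, 0, 1], light_dir=[1, 0, 1], view_dir=[0, 0, 1]))   # along the normal: ~0
```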

Lambertian + specular Two parameters: how shiny, what kind of shiny. Advantages: –easy to manipulate –very often quite close to true Disadvantages: –some surfaces are not well described this way, e.g., the underside of CDs, the feathers of many birds, blue spots on many marine crustaceans and fish, most rough surfaces, oil films (skin!), wet surfaces –Generally, there is very little advantage in modelling the behaviour of light at a surface in more detail: it is quite difficult to understand the behaviour of L+S surfaces (but in graphics???)

Lambertian + Specular + Ambient. Ambient to be explained.

Modeling Light Sources Light strikes a surface from every direction in front of the object. Light in a scene can be complex:

Can vary with direction. (from Debevec)

Also with position (from Langer and Zucker)

And Along a Straight Line Useful to use simplified models. (from Narasimhan and Nayar)

Simplest model: distant point source. All light in the scene comes from the same direction, with the same intensity. Consequences: shadows are black. Light is represented as a direction and an intensity.

Lambertian + Point Source (figure: surface normal and light direction, separated by the angle θ)

Ambient Component Assume each surface point receives equal light from all directions. Diffuse lighting, no cast shadows. Ambient + point source turns out to be a good approximation to the next model.
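A minimal sketch of that ambient-plus-point-source approximation (same assumptions as the Lambertian sketch above; the ambient and intensity values are illustrative):

```python
import numpy as np

def ambient_plus_point_source(albedo, normal, light_dir,
                              light_intensity=1.0, ambient=0.2):
    """Brightness = albedo * (ambient + intensity * max(0, N.L)):
    a constant ambient term plus a Lambertian term for one distant point source."""
    N = np.asarray(normal, dtype=float);    N /= np.linalg.norm(N)
    L = np.asarray(light_dir, dtype=float); L /= np.linalg.norm(L)
    return albedo * (ambient + light_intensity * max(0.0, float(N @ L)))

# Surfaces facing away from the source are no longer pure black: they keep the ambient term.
print(ambient_plus_point_source(0.8, normal=[0, 0, 1], light_dir=[0, 0, 1]))   # 0.8 * 1.2 = 0.96
print(ambient_plus_point_source(0.8, normal=[0, 0, -1], light_dir=[0, 0, 1]))  # 0.8 * 0.2 = 0.16
```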

Distant Light (e.g., the sky) Light is a function of direction, the same at every scene point. Sources can be point, elongated, or diffuse.

Conclusions Projection loses information; we can understand this with geometry. The light reaching the camera depends on surfaces and lighting; we can understand this with physics. Reflection also loses information. Our models are always simplified. Just because you can see doesn't mean the relation between the world and images is intuitive. "(The world) saw shadows black until Monet discovered they were coloured, …" (Maugham, Of Human Bondage)