CS262 – Computer Vision, Lecture 4: Image Formation
John Magee, 25 January 2017. Slides courtesy of Diane H. Theriault.

Question of the Day: Why is Computer Vision hard?

All this effort goes into making sure the lighting is good for a movie. Why is more light needed for a good-quality movie? What factors affect how much light reaches the film or image sensor? Why does your cell phone take such lousy pictures at a party? How does all of this affect computer vision?

How are images formed?
- Light is emitted from a light source.
- Light hits a surface.
- Light interacts with the surface.
- Reflected light enters the camera aperture.
- The camera sensor interprets the light.
Readings: Szeliski Ch. 2.2 (don't worry about all the details of the math); Shapiro & Stockman Ch. 6 and Ch. 2 (https://courses.cs.washington.edu/courses/cse576/99sp/book.html).

Light is emitted
A point light source radiates (emits) light uniformly in all directions. Properties of light:
- Color spectrum (wavelength distribution)
- Intensity (watts per unit area per unit solid angle)
Note: a solid angle is like a cone. Note: "area" light sources, like fluorescent lights, behave a little differently.

Light hits a surface
Surface orientation is very important for determining the amount of incident light! The amount of incident light that falls on a surface (irradiance) depends on:
- the size of the surface;
- the solid angle of light subtended by the surface, which depends on the distance to the light and the orientation of the surface.
Example: at distance 5 m and orientation 0 degrees, a surface subtends a solid angle of 16.4 degrees. Moving it to 2.5 m increases the solid angle to 22.6 degrees (attenuation: closer surfaces receive more light). Tilting it to 45 degrees at about 2.5 m shrinks the solid angle to 11.4 degrees (foreshortening).
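The two effects above can be sketched numerically. This is a minimal illustration, not a full radiometry model: it assumes a point source of a made-up radiant intensity, inverse-square attenuation with distance, and Lambert's cosine law for foreshortening.

```python
import math

def irradiance(intensity, distance, angle_deg):
    """Irradiance at a small surface patch from a point source (a sketch).

    intensity: radiant intensity of the source in W/sr (illustrative value)
    distance: distance from source to patch, in meters
    angle_deg: angle between the surface normal and the light direction
    """
    # Inverse-square attenuation: flux spreads over a sphere of area 4*pi*r^2.
    attenuation = 1.0 / (distance ** 2)
    # Foreshortening: a tilted patch subtends a smaller solid angle
    # (Lambert's cosine law).
    foreshortening = math.cos(math.radians(angle_deg))
    return intensity * attenuation * foreshortening

e0 = irradiance(10.0, 5.0, 0)    # baseline
e1 = irradiance(10.0, 2.5, 0)    # half the distance -> 4x the irradiance
e2 = irradiance(10.0, 2.5, 45)   # tilted 45 degrees -> scaled by cos(45 deg)
```

Halving the distance quadruples the irradiance (attenuation), and tilting the patch scales it by the cosine of the tilt (foreshortening), matching the example numbers in spirit.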

Light interacts with a surface
Some light is absorbed, depending on the surface color. What happens to the rest? The orientation of a surface is defined by its "normal vector," which sticks straight up out of the surface. The bidirectional reflectance distribution function (BRDF) expresses the amount, direction, and color spectrum of reflected light as a function of the amount, direction, and color spectrum of incoming light. A simplified BRDF can be modeled with two components:
- "Lambertian," "flat," or "matte" component: light radiated equally in all directions.
- "Specular," "shiny," or "highlight" component: radiated light is the incoming light reflected across the normal.
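The two-component model can be sketched as a small shading function. This is a simplified Phong-style sketch of the idea, not the exact model from the slides; the parameter names (albedo, k_spec, shininess) are illustrative assumptions.

```python
import numpy as np

def shade(normal, light_dir, view_dir, albedo, k_spec, shininess):
    """Two-component reflectance sketch: Lambertian + specular."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    # Lambertian ("matte") term: light radiated equally in all directions,
    # scaled by the cosine of the incidence angle.
    diffuse = albedo * max(np.dot(n, l), 0.0)
    # Specular ("shiny") term: the incoming light mirrored across the normal.
    r = 2 * np.dot(n, l) * n - l          # reflection of l about n
    specular = k_spec * max(np.dot(r, v), 0.0) ** shininess
    return diffuse + specular
```

With the light and viewer both head-on to the surface, both terms contribute fully; with grazing light, both vanish — the highlight only appears when the viewing direction lines up with the mirrored light direction.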

Reflected light enters a camera: the pinhole model
Key quantities: object location, focal distance (focal length), optical axis, focal plane (image plane), scene depth, center of projection, image location. The red triangle (behind the camera, from the center of projection to the image point) and the blue triangle (in front of the camera, from the center of projection to the object point) are similar. Therefore:

image location / focal length = object location / scene depth

Given any three terms, you can determine the fourth.
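The similar-triangles relation gives a one-line projection function. A minimal sketch (units are arbitrary but must match, and the sign flip of the real inverted image is ignored):

```python
def project(X, Z, f):
    """Pinhole projection by similar triangles: x / f = X / Z.

    X: object offset from the optical axis
    Z: scene depth (distance along the optical axis)
    f: focal length
    """
    return f * X / Z

# An object twice as far away projects to half the image size.
x_near = project(1.0, 2.0, 0.05)   # 1 m object at 2 m, 50 mm focal length
x_far = project(1.0, 4.0, 0.05)    # same object at 4 m
```

Rearranging the same equation recovers any of the four quantities from the other three, which is exactly the "given any three terms" observation above.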

Reflected light enters a camera: lenses
For a given focal length f, the thin-lens equation 1/f = 1/z_object + 1/z_image determines the image distance at which a point at a given depth is brought into focus. A "blur circle" or "circle of confusion" results when the projection of an object is not focused on the image plane. The size of the blur circle depends on the distance to the object and the size of the aperture. The allowable size of the blur circle (e.g., one pixel) determines the allowable range of depths in the scene (the "depth of field"). Note: the "F-number" or "f-stop" commonly used in photography is the ratio of focal length to aperture size. (http://www.dofmaster.com/dofjs.html)
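The thin-lens equation and the blur circle can be sketched together. This is an idealized thin-lens model with illustrative numbers; the blur-circle formula follows from similar triangles through the aperture.

```python
def image_distance(f, z_object):
    """Thin-lens equation: 1/f = 1/z_object + 1/z_image."""
    return 1.0 / (1.0 / f - 1.0 / z_object)

def blur_circle(f, aperture, z_focus, z_object):
    """Blur-circle diameter for a point at z_object when the sensor is
    placed to focus objects at z_focus (a sketch; lengths in meters)."""
    sensor = image_distance(f, z_focus)    # where the sensor plane sits
    i = image_distance(f, z_object)        # where this point actually focuses
    # The cone of light converging at distance i intersects the sensor
    # plane in a circle; similar triangles give its diameter.
    return aperture * abs(sensor - i) / i

# A point in the focused plane produces no blur at all.
no_blur = blur_circle(0.05, 0.05 / 2.8, z_focus=3.0, z_object=3.0)
```

Shrinking the aperture shrinks every blur circle proportionally, which is why stopping down (a larger F-number = focal length / aperture) increases the depth of field.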

The camera sensor interprets light
The image is quantized into pixels, converting the physical size of the projection into pixel coordinates. (See http://micro.magnet.fsu.edu/optics/lightandcolor/vision.html.) Readings: Szeliski 2.3; Shapiro & Stockman 2.2.

Now what? Interaction between light, objects, and the camera leads to images The way image values change hopefully tells us something about the objects, the light, and the camera

Image Gradients
"The way image values change" → the image derivative. The "gradient" at a particular point (x, y) is a vector that points in the direction of largest change. The gradient can be expressed in Cartesian (x, y) or polar (magnitude, angle) coordinates. Every point in an image may have a different gradient vector. Friday's lab and this week's homework will be devoted to image gradients and edges.

Discussion Questions:
- What influences are mixed together when we observe the light reflected from a surface?
- In order to infer surface orientation, what assumptions do we need to make? Can we construct restricted imaging conditions that make this job easier?
- In order to infer surface properties, what assumptions do we need to make? Can we construct restricted imaging conditions that make this job easier?
- What are some things we would like to know about objects that we can't directly observe, even if we could correctly reconstruct surface orientation, color, texture, and reflectance properties? (Hint: clothes.) What steps could we take to try to understand those things, given the image information?
- Think of some ways that we could define the scope of some tasks that we might be able to do, even if all we have is the image appearance and we can't infer scene structure, surface orientation, or surface properties.

Light incident on a surface
The amount of light that falls on a surface (irradiance) depends on the size of the surface and the solid angle of light subtended by the surface. Surfaces that are further away from the light subtend a smaller solid angle → attenuation. Surfaces that are turned away from the light subtend a smaller solid angle → foreshortening.

Image Gradients
The gradient is a vector like any other vector; it just happens to represent the way the values of the image are changing. One way to compute the gradient is with "finite differences": just compute the difference between each pixel and the previous one (horizontally and vertically). Switching from the Cartesian representation (x, y) to the polar representation (magnitude, direction) is often helpful, and very, very important. Friday's lab and this week's homework will be devoted to image gradients and edges.
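The finite-difference recipe above can be sketched in a few lines of NumPy. A minimal version (array shapes and the tiny test image are illustrative):

```python
import numpy as np

def image_gradient(img):
    """Finite-difference gradient of a grayscale image (a minimal sketch).

    Each pixel's gradient component is the difference to the previous
    pixel, horizontally (gx) and vertically (gy)."""
    img = img.astype(float)
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = img[:, 1:] - img[:, :-1]   # horizontal differences
    gy[1:, :] = img[1:, :] - img[:-1, :]   # vertical differences
    # Polar representation: magnitude and direction of steepest change.
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    return gx, gy, magnitude, direction

# A vertical step edge: the gradient points horizontally, across the edge.
img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10]])
gx, gy, mag, ang = image_gradient(img)
```

On this step edge the magnitude is nonzero only at the edge column, and the direction there is 0 radians (pointing in the +x direction, perpendicular to the edge) — the polar view makes "where" and "which way" the image changes immediately readable.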