1 Formation et Analyse d’Images Session 2 Daniela Hall 7 October 2004

2 Course Overview Session 1: –Homogeneous coordinates and tensor notation –Image transformations –Camera models Session 2: –Camera models –Reflection models –Color spaces Session 3: –Review color spaces –Pixel based image analysis –Gaussian filter operators Session 4: –Scale Space

3 Course overview Session 5: –Contrast description –Hough transform Session 6: –Kalman filter –Tracking of regions, pixels, and lines Session 7: –Stereo vision –Epipolar geometry Session 8: exam

4 Session Overview 1.Camera model 2.Light 3.Reflection models 4.Color spaces

5 Camera model Projective model relating scene coordinates, camera coordinates, and image coordinates.

6 Camera model The camera model is the composition of three transformations: the transformation from scene to camera coordinates, the projection of camera coordinates to retina coordinates, and the transformation from retina coordinates to image coordinates. This composition maps a scene point P_s to an image point P_i.

7 Transformation Scene - Camera (x_s, y_s, z_s) is the position of the origin of the camera system with respect to the scene coordinates (translation). R is the orientation of the camera system with respect to the scene system (3D rotation).

8 3D rotation Rotation around the x-axis (counter-clockwise), around the y-axis, and around the z-axis; a general rotation is the composition of the three.
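The rotation matrices themselves appear only as images in the slides; the sketch below gives the standard counter-clockwise rotation matrices about each axis and one possible composition (the angle values and the composition order are illustrative assumptions, not notation from the course).

```python
import numpy as np

def rot_x(a):
    """Counter-clockwise rotation about the x-axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s,  c]])

def rot_y(a):
    """Counter-clockwise rotation about the y-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[ c, 0, s],
                     [ 0, 1, 0],
                     [-s, 0, c]])

def rot_z(a):
    """Counter-clockwise rotation about the z-axis."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0],
                     [s,  c, 0],
                     [0,  0, 1]])

# A general rotation can be written as a product of the elementary ones,
# e.g. R = Rz @ Ry @ Rx (the order is a convention, not fixed by the slides).
R = rot_z(0.1) @ rot_y(0.2) @ rot_x(0.3)
```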

9 Projection Camera-Retina Imagine a 1D camera in a 2D space. The transformation M_RC (camera to retina) can be found by considering similar triangles. (Figure: optical axis z, camera point (x_c, z_c), focal distance F, retina coordinate x_r, origin O.)

10 Projection Camera-Retina
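The projection equations are given only as images in the original slides; below is a minimal sketch of the standard pinhole relation that follows from the similar triangles of slide 9, x_r = F · x_c / z_c, extended to 2D. It is an illustration of the classical form, not a copy of the slide's formula.

```python
import numpy as np

def project_to_retina(P_c, F):
    """Project a camera-frame point (x_c, y_c, z_c) onto the retina plane z = F.

    Similar triangles give x_r / F = x_c / z_c, hence x_r = F * x_c / z_c
    (and likewise for y). Assumes z_c > 0, i.e. the point lies in front of the camera.
    """
    x_c, y_c, z_c = P_c
    return np.array([F * x_c / z_c, F * y_c / z_c])

print(project_to_retina((0.2, 0.1, 2.0), F=0.05))   # metric retina coordinates
```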

11 Transformation Retina-Image A frame: the image is composed of pixels (picture elements). Pixels are in general not square; their physical size depends on the sensor material. (Figure: pixel grid with i columns and j rows, indexed from (0,0) to (i-1,j-1).)

12 Transformation Retina-Image Intrinsic camera parameters F: focal distance; C_i, C_j: optical image center (in pixels); D_i, D_j: physical size of the pixels on the retina (in pixels/mm); i, j: image coordinates (in pixels).
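A minimal sketch of the retina-to-image mapping implied by these parameters, assuming the common affine form i = C_i + D_i · x_r (with D_i in pixels/mm as stated on the slide); the actual equation on the slide is an image and may differ in sign or axis conventions.

```python
def retina_to_image(x_r, y_r, C_i, C_j, D_i, D_j):
    """Map metric retina coordinates (mm) to pixel coordinates.

    C_i, C_j : optical center in pixels; D_i, D_j : pixels per mm.
    """
    i = C_i + D_i * x_r
    j = C_j + D_j * y_r
    return i, j
```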

13 Camera model Equation: the image coordinates as a function of the scene coordinates (the matrix equation is shown as an image on the slide).
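Since the full camera equation is not in the transcript, the sketch below composes the three transformations the way slides 6-12 describe (scene→camera rigid motion, pinhole projection, retina→image mapping) in homogeneous coordinates. The matrix layout is the textbook pinhole form and is an assumption, not copied from the slide.

```python
import numpy as np

def camera_matrix(R_sc, t_sc, F, C_i, C_j, D_i, D_j):
    """Compose the 3x4 camera model mapping homogeneous scene points to image points.

    R_sc, t_sc : rotation and translation of the scene->camera transformation
                 (derived from the camera pose of slide 7)
    F          : focal distance (camera->retina projection, slides 9-10)
    C_*, D_*   : intrinsic parameters of the retina->image mapping (slide 12)
    """
    T = np.eye(4)
    T[:3, :3] = R_sc
    T[:3, 3] = t_sc                     # scene -> camera (rigid transformation)
    P = np.array([[F, 0, 0, 0],
                  [0, F, 0, 0],
                  [0, 0, 1, 0.]])       # camera -> retina (pinhole projection)
    K = np.array([[D_i, 0.0, C_i],
                  [0.0, D_j, C_j],
                  [0.0, 0.0, 1.0]])     # retina -> image (pixels)
    return K @ P @ T                    # 3x4 matrix: 12 coefficients, 11 DoF

def project(M, P_s):
    """Project a 3D scene point P_s with the camera model M."""
    p = M @ np.append(np.asarray(P_s, float), 1.0)
    return p[:2] / p[2]                 # homogeneous division gives (i, j)
```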

14 Transformation image-scene Problem: we need to know the depth z_s for each image position. The process of finding the camera matrix M (from scene to image) is called calibration. M has 12 coefficients; it is homogeneous (defined up to scale), so it has 11 degrees of freedom.

15 Calibration 1. Construct a calibration object whose 3D geometry is known. 2. Measure the image coordinates. 3. Determine correspondences between 3D scene points R_k and image points P_k. 4. We have 11 DoF and each correspondence gives 2 equations, so we need at least 5 ½ correspondences.

16 Calibration Each correspondence between a scene point R_k and an image point P_k gives two linear equations in the coefficients of M; for k = 1, ..., 6 this yields a system from which M can be computed.

17 Calibration using many points With exactly 5 ½ correspondences M has one solution. –The solution depends on precise measurements of the 3D and 2D points. –If you use another 5 ½ points you will get a different solution. A more stable solution is found by using a large number of points and solving by optimisation.

18 Calibration using many points For each point correspondence we know the image coordinates (i, j) and the scene point R. We want to know M. Solve the resulting over-determined system with your favorite algorithm (least squares, Levenberg-Marquardt, SVD, ...).
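The equation system itself appears only as images in the slides; the sketch below shows one standard way (direct linear transformation solved by SVD) to recover the 3×4 matrix M from many correspondences. It matches the "many points + optimisation" idea of slides 17-18, though the course may use a different parameterisation.

```python
import numpy as np

def calibrate_dlt(scene_pts, image_pts):
    """Estimate the 3x4 camera matrix M from n >= 6 correspondences.

    scene_pts : (n, 3) array of 3D points R_k in scene coordinates
    image_pts : (n, 2) array of measured image points (i_k, j_k)

    Each correspondence gives two linear equations in the 12 coefficients of M;
    the homogeneous system A m = 0 is solved in the least-squares sense by SVD.
    """
    A = []
    for (X, Y, Z), (i, j) in zip(scene_pts, image_pts):
        P = [X, Y, Z, 1.0]
        A.append(P + [0, 0, 0, 0] + [-i * p for p in P])   # i*(m3.P) = m1.P
        A.append([0, 0, 0, 0] + P + [-j * p for p in P])   # j*(m3.P) = m2.P
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)   # right singular vector of the smallest singular value
```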

19 Application: Rectifying images

20 Applications

21 Homography: projection from one plane to another The homography H_BA is bijective: Q_B = H_BA P_A

22 Homography computation H can be computed from 4 point correspondences. (Figure: points P_s1 ... P_s4 in the source image (observed) and their correspondences R_d1 ... R_d4 in the destination image (rectified).)

23 Homography computation H is a 3x3 matrix and has 8 degrees of freedom (homogeneous coordinates). The 4 point correspondences give 8 equations and one solution for H.
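As with calibration, the actual equation system is shown only as an image; below is a minimal DLT-style sketch that recovers the 3×3 homography from exactly four correspondences by SVD. The helper names and the formulation are assumptions of this sketch, not the course's notation.

```python
import numpy as np

def homography_from_points(src_pts, dst_pts):
    """Estimate the 3x3 homography H mapping src_pts to dst_pts (4 correspondences).

    Each correspondence gives two linear equations in the 9 coefficients of H;
    with 4 points the homogeneous system determines H up to scale.
    """
    A = []
    for (x, y), (u, v) in zip(src_pts, dst_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)

def apply_homography(H, p):
    """Map a 2D point with H (homogeneous division)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]
```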

24 Session Overview 1.Camera model 2.Light 3.Reflection models 4.Color spaces

25 Light N: surface normal; i: angle between incoming light and normal; e: angle between normal and camera; g: angle between light and camera. (Figure: light source, surface normal N and camera with angles i, e, g.)

26 Spectrum A light source is characterised by its spectrum. The spectrum gives the quantity of photons per frequency. Each frequency is described by its wavelength λ. The visible spectrum is 380 nm to 720 nm. Cameras can see a larger spectrum, depending on their CCD chip.

27 Albedo Albedo is the fraction of light that is reflected by a body or surface. The reflectance function (equation shown on the slide) relates the reflected light to the angles i, e and g of slide 25.

28 Session Overview 1.Camera model 2.Light 3.Reflection models 4.Color spaces

29 Reflectance functions Specular reflection –example mirror Lambertian reflection –diffuse reflection, example paper, snow

30 Specular reflection (Figure: light/camera geometry with surface normal N and angles i, e, g.)

31 Lambertian reflection
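The Lambertian model is shown only graphically in the slides; the sketch below implements the standard Lambertian (diffuse) intensity, proportional to the albedo and to the cosine of the incidence angle i, as an illustration of the diffuse case listed on slide 29.

```python
import numpy as np

def lambertian_intensity(albedo, light_intensity, N, L):
    """Diffuse reflected intensity under the Lambertian model.

    N : unit surface normal, L : unit vector towards the light source.
    The reflected intensity depends only on the incidence angle i (not on the
    viewing direction), via cos(i) = N . L, clamped at 0 for back-facing light.
    """
    cos_i = max(0.0, float(np.dot(N, L)))
    return albedo * light_intensity * cos_i

# Example: light at 60 degrees from the normal -> half of the frontal intensity.
print(lambertian_intensity(0.8, 1.0, np.array([0.0, 0.0, 1.0]),
                           np.array([np.sin(np.pi/3), 0.0, np.cos(np.pi/3)])))
```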

32 Di-chromatic reflectance model The reflected light R is the sum of the light reflected at the surface, R_s, and the light reflected from the material body, R_b. R_s has the same spectrum as the light source. The spectrum of R_b is "filtered" by the material (photons are absorbed, which changes the emitted light). Luminance depends on surface orientation. The spectrum of the chrominance is composed of the light source spectrum and the absorption of the surface material.

33 Color perception The retina is composed of rods and cones. Rods - provide "scotopic" or low intensity vision. –Provide our night vision ability for very low illumination, –Are a thousand times more sensitive to light than cones, –Are much slower to respond to light than cones, –Are distributed primarily in the periphery of the visual field.

34 Color perception Cones - provide "photopic" or high acuity vision. –Provide our day vision, –Produce high resolution images, –Determine overall brightness or darkness of images, –Provide our color vision, by means of three types of cones: "L" or red, long wavelength sensitive, "M" or green, medium wavelength sensitive, "S" or blue, short wavelength sensitive. Cones enable our day vision and color vision. Rods take over in low illumination. However, rods cannot detect color, which is why at night we see in shades of gray.

35 Color perception Rod Sensitivity- Peak at 498 nm. Cone Sensitivity- Red or "L" cones peak at 564 nm. - Green or "M" cones peak at 533 nm. - Blue or "S" cones peak at 437 nm. This diagram shows the wavelength sensitivities of the different cones and the rods. Note the overlap in sensitivity between the green and red cones.

36 Camera sensitivity The observed light intensity depends on: –the source spectrum S(λ) –the reflectance of the observed point (i,j): P(i,j,λ) –the receptive spectrum of the camera: c(λ) –the gain p0 (Figure: spectral sensitivity curves of vidicon and CCD sensors over wavelength λ in nm.)
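The slide's formula for the sensor response is an image; assuming the usual model in which the response is the gain times the integral over wavelength of the product of these three spectra, a discrete version looks as follows (the wavelength grid and the trapezoidal integration are illustrative choices, not taken from the slide).

```python
import numpy as np

def sensor_response(S, P, c, p0, wavelengths):
    """Observed intensity at one pixel as a weighted integral over wavelength.

    S, P, c     : arrays sampled on the same wavelength grid
                  (source spectrum, point reflectance, camera sensitivity)
    p0          : gain
    wavelengths : sampling grid in nm, e.g. np.arange(380, 721, 5)
    """
    return p0 * np.trapz(S * P * c, wavelengths)
```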

37 Classical RGB camera The filters follow a convention of the International Commission on Illumination (CIE). They are functions of λ: r(λ), g(λ), b(λ). They are close to the sensitivity of the human color vision system.

38 Color pixels

39 Color bands (channels) It is not possible to perceive the spectrum directly. Color is a projection of the spectrum onto the spectra of the sensors.

40 Session Overview 1.Camera model 2.Light 3.Reflection models 4.Color spaces

41 Color spaces RGB color space CMY color space YIQ color space HLS color space

42 RGB color space A CCD camera provides RGB images. The luminance axis is r=g=b (the diagonal of the RGB cube).

43 CMY color space Cyan, magenta, yellow CMYK: CMY + black color channel
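No conversion formula appears in the transcript; a common relation between RGB and CMY (for values normalised to [0,1]) is the channel complement, sketched below as an illustration rather than as the course's definition.

```python
import numpy as np

def rgb_to_cmy(rgb):
    """Convert normalised RGB in [0,1] to CMY as the complement of each channel."""
    return 1.0 - np.asarray(rgb, dtype=float)

print(rgb_to_cmy([1.0, 0.0, 0.0]))   # pure red -> [0, 1, 1] (no cyan, full magenta and yellow)
```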

44 YIQ color space This is an approximation of –Y: luminance, –I: red - cyan axis, –Q: magenta - green axis Used in US TVs (NTSC coding). Black-and-white TVs display only the Y channel.
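The transformation matrix is not reproduced in the transcript; the commonly quoted NTSC coefficients are shown below as an illustrative sketch (the course slides may round the values differently).

```python
import numpy as np

# Commonly quoted NTSC RGB -> YIQ matrix (coefficients rounded to three decimals).
RGB_TO_YIQ = np.array([[0.299,  0.587,  0.114],
                       [0.596, -0.274, -0.322],
                       [0.211, -0.523,  0.312]])

def rgb_to_yiq(rgb):
    """Convert a normalised RGB triple to YIQ; Y alone is the black-and-white signal."""
    return RGB_TO_YIQ @ np.asarray(rgb, dtype=float)

print(rgb_to_yiq([1.0, 1.0, 1.0]))   # white -> Y = 1, I and Q close to 0
```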

45 HLS space Hue, luminance, saturation space. L = R + G + B, S = 1 - 3*min(R,G,B)/L. (Figure: HLS space with axes L, S and T (teinte = hue).)
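A minimal sketch of the two formulas given on the slide, plus one common angular formulation of the hue; the hue formula is not on the slide and is included here only as an assumption.

```python
import numpy as np

def rgb_to_hls(R, G, B):
    """Compute (H, L, S) from the slide's definitions of L and S.

    L = R + G + B
    S = 1 - 3 * min(R, G, B) / L
    H (hue, in degrees) uses a standard angular formula; it is not defined on the slide.
    """
    L = R + G + B
    S = 1.0 - 3.0 * min(R, G, B) / L if L > 0 else 0.0
    num = 0.5 * ((R - G) + (R - B))
    den = np.sqrt((R - G) ** 2 + (R - B) * (G - B))
    H = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0))) if den > 0 else 0.0
    if B > G:
        H = 360.0 - H
    return H, L, S

print(rgb_to_hls(0.9, 0.2, 0.1))   # reddish pixel: small hue angle, high saturation
```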

46 Color distribution Color distribution can be studied by histograms. A histogram is a multi-dimensional table. We define a function from the continuous color space to the discrete histogram space; each pixel of the image then increments a cell of the histogram. Example: we define a histogram over RGB (3D) with cells of width 32 per dimension (8 cells per dimension). The pixel value (210,180,100) increments cell (6,5,3).
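A minimal sketch of this binning, assuming 8-bit RGB values and cells of width 32, so that (210,180,100) falls into cell (6,5,3) as in the example.

```python
import numpy as np

def rgb_histogram(image, cell_width=32):
    """Accumulate a 3D RGB histogram from an 8-bit image of shape (h, w, 3)."""
    n_cells = 256 // cell_width                    # 8 cells per dimension here
    hist = np.zeros((n_cells,) * 3, dtype=np.int64)
    cells = image.reshape(-1, 3) // cell_width     # map each pixel value to a cell index
    for r, g, b in cells:
        hist[r, g, b] += 1
    return hist

print(np.array([210, 180, 100]) // 32)             # -> [6 5 3], the cell of the example
```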

47 Colors of a surface A (specular) reflection has the color of the light source (R_s). Which color appears near the border of the reflection? –R_s and R_b are mixed. –A color histogram can be used to study this mix. –The histogram should contain two axes (in theory). –But reflectance in the real world is more complex than only R_s and R_b: there are also inter-reflections between neighboring objects.