Chapter 2 Image Formation. Reading: Szeliski, Chapter 2.


1 Chapter 2 Image Formation Reading: Szeliski, Chapter 2

2 What are we tuned to? The visual system is tuned to process structures typically found in the world.

3 What is a natural image?

4 The structure of ambient light

5 What is a natural image?

6 The visual system seems to be tuned to a set of images: What is a natural image?

7 The visual system seems to be tuned to a set of images: Did you see this image? What is a natural image?

8 The visual system seems to be tuned to a set of images: Demo inspired by D. Field

9 Six images. Not all of these images are the result of sampling a real-world plenoptic function. http://www.rowland.harvard.edu/images/ModPurp.jpg http://www.alexfito.com/

10 Proposition 1. The primary task of early vision is to deliver a small set of useful measurements about each observable location in the plenoptic function. Proposition 2. The elemental operations of early vision involve the measurement of local change along various directions within the plenoptic function. Goal: to transform the image into other representations (rather than pixel values) that makes scene information more explicit Cavanagh, Perception 95

11 What are “visual features”? Shape, color, texture, etc.

12 2.2 Photometric Image Formation Discrete color or intensity values Where do these values come from? – Geometry, projection – Camera optics, sensor properties – Lighting, surface properties

13 Images as Functions

14 We can think of an image as a function, f, from R^2 to R: – f(x, y) gives the intensity at position (x, y) – Realistically, we expect the image only to be defined over a rectangle, with a finite range: f: [a, b] × [c, d] → [0, 1]. A color image is just three functions pasted together. We can write this as a “vector-valued” function: f(x, y) = (r(x, y), g(x, y), b(x, y))

15 Images as functions

16 What is a digital image? We usually work with digital (discrete) images: – Sample the 2D space on a regular grid – Quantize each sample (round to nearest integer) If our samples are Δ apart, we can write this as: f[i, j] = Quantize{ f(iΔ, jΔ) } The image can now be represented as a matrix of integer values
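The sampling and quantization steps above can be sketched in a few lines of NumPy; the continuous function f below is a hypothetical smooth intensity pattern, and the grid size and spacing are illustrative.

```python
import numpy as np

# Sample a continuous intensity function f(x, y) on a regular grid with
# spacing delta, then quantize each sample to an 8-bit integer value.
def f(x, y):
    # Hypothetical continuous image: a smooth cosine pattern with values in [0, 1].
    return 0.5 * (1 + np.cos(x))

delta = 0.1
i, j = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
samples = f(i * delta, j * delta)                    # f(i*delta, j*delta)
digital = np.round(samples * 255).astype(np.uint8)   # quantize to integers
```

The result `digital` is exactly the "matrix of integer values" the slide describes: a 64×64 array of 8-bit samples.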

17 Photometric Image Formation: perspective projection, light scattering, lens optics, Bayer color filter array

18 Photometric Image Formation

19 2.2.1 Lighting Point light source – single location (small light bulb) – at infinity: the sun (a directional light) Area light source – a finite rectangular area emitting light equally in all directions Environment map – maps light direction to color

20 2.2.2 Reflectance and Shading many models for reflectance and shading BRDF: Bidirectional Reflectance Distribution Function

21 BRDF The BRDF is reciprocal For isotropic surfaces, there are no preferred directions for light transport

22 BRDF Light exiting a surface point: foreshortening factor

23 Diffuse Reflection Also called Lambertian or matte reflection – light is scattered uniformly in all directions, i.e. the BRDF is constant Think about the inverse problem
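With a constant BRDF, the reflected intensity depends only on the foreshortening term max(0, n · l). A minimal sketch, with illustrative albedo and light values:

```python
import numpy as np

# Lambertian (diffuse) shading: intensity = albedo * light * max(0, n . l),
# where n is the surface normal and l the direction toward the light.
def lambertian(normal, light_dir, albedo, light_intensity=1.0):
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * light_intensity * max(0.0, float(n @ l))

# Light hitting the surface head-on vs. grazing along it:
head_on = lambertian(np.array([0., 0., 1.]), np.array([0., 0., 1.]), albedo=0.8)
grazing = lambertian(np.array([0., 0., 1.]), np.array([1., 0., 0.]), albedo=0.8)
```

Note the shading is independent of the viewing direction, which is exactly what makes matte surfaces look equally bright from all viewpoints.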

24 Specular Reflection Depends on the direction of outgoing light Mirror surface: – Specular reflection direction

25 Specular Reflection Amount of reflected light depends on the model: – Phong model – Micro-facet model A larger specular exponent gives a more specular surface with sharper highlights; a smaller one gives a softer gloss

26 Phong Shading Model Diffuse, specular, and ambient terms Ambient: – Does not depend on surface orientation – Its color depends on both the ambient illumination and the object
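The three terms can be combined in a compact sketch of the Phong model; the coefficients k_a, k_d, k_s and the specular exponent below are illustrative values, not ones from the slides.

```python
import numpy as np

# Phong shading: ambient + diffuse + specular. The specular term raises the
# alignment between the mirror-reflection direction r and the view direction v
# to a "shininess" exponent; larger exponents give sharper highlights.
def phong(n, l, v, k_a=0.1, k_d=0.6, k_s=0.3, shininess=32):
    n, l, v = (u / np.linalg.norm(u) for u in (n, l, v))
    r = 2.0 * (n @ l) * n - l                 # mirror reflection of l about n
    diffuse = k_d * max(0.0, float(n @ l))
    specular = k_s * max(0.0, float(r @ v)) ** shininess
    return k_a + diffuse + specular

# Viewer looking straight down the reflection direction sees the full highlight:
intensity = phong(np.array([0., 0., 1.]), np.array([0., 0., 1.]), np.array([0., 0., 1.]))
```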

27 Phong Shading Model The recent advent of programmable pixel shaders makes the use of more complex models feasible.

28 Example

29 Realistic Rendering The recent advent of programmable pixel shaders makes the use of more complex models feasible. Ioannis Gkioulekas et al., SIGGRAPH ’13

30 Optics Lens, sensor Ideal pinhole camera More complex effects: focus, exposure, vignetting, aberration, …

31 Thin lens model Thin lens: low, equal curvature on both sides Optical axis

32 Thin lens model object Focus plane

33 Thin lens model object Circle of confusion Pinhole camera
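The thin lens geometry above can be checked numerically with the Gaussian lens equation 1/z_o + 1/z_i = 1/f: an object at distance z_o focuses at image distance z_i, and points off that plane blur into a circle of confusion. The distances below are illustrative.

```python
# Thin lens (Gaussian) equation: 1/z_o + 1/z_i = 1/f.
# Solve for the in-focus image distance z_i given object distance z_o
# and focal length f (all distances in the same units, e.g. mm).
def image_distance(z_o, f):
    return 1.0 / (1.0 / f - 1.0 / z_o)

z_i = image_distance(z_o=1000.0, f=50.0)   # object 1 m from a 50 mm lens
```

As z_o goes to infinity, z_i approaches f, which recovers the pinhole-camera intuition that distant scenes focus at the focal plane.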

34 Pinhole Camera Model object

35 Pinhole Camera Model object

36 Pinhole Camera Model object

37 2.3 3D to 2D Projection 3D perspective: the most commonly used projection in computer vision and computer graphics (book: pp. 32-60)

38 Pinhole Camera Model object

39 Pinhole Camera Model object

40 Pinhole Camera Model Using homogeneous (projective) coordinates
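In homogeneous coordinates the pinhole projection becomes a single matrix multiply followed by a perspective division: (X, Y, Z, 1) maps to (f X / Z, f Y / Z). A minimal sketch with f = 1 and an illustrative 3D point:

```python
import numpy as np

# Pinhole projection as a 3x4 matrix acting on homogeneous 3D points.
f = 1.0
P = np.array([[f,   0.0, 0.0, 0.0],
              [0.0, f,   0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

Xw = np.array([2.0, 4.0, 8.0, 1.0])   # homogeneous 3D point (X, Y, Z, 1)
x = P @ Xw
x = x[:2] / x[2]                       # perspective division by Z
```

The division by the last coordinate is where the non-linearity of perspective lives; everything before it is linear, which is the whole point of the homogeneous formulation.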

41 Camera Intrinsics Imperfect camera image sensor: s: possible skew between sensor axes a: aspect ratio (c_x, c_y): optical center f: focal length Five intrinsic parameters

42 Camera Intrinsics: Focal Length Actual focal lengths, e.g. 18-55 mm, are conventionally quoted relative to a sensor width of 35 mm Digital image: integer values in [0, W) × [0, H) The field of view is determined by the focal length and the sensor width
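The relation between focal length f, sensor width W, and field of view θ is tan(θ/2) = W / (2f). A short sketch, reusing the slide's 18-55 mm focal-length range and 35 mm conventional width:

```python
import math

# Field of view (in degrees) from focal length and sensor width:
#   tan(theta / 2) = W / (2 f)
def fov_degrees(f_mm, sensor_width_mm):
    return 2.0 * math.degrees(math.atan(sensor_width_mm / (2.0 * f_mm)))

wide = fov_degrees(18.0, 35.0)   # short focal length -> wide field of view
tele = fov_degrees(55.0, 35.0)   # long focal length  -> narrow field of view
```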

43 Extrinsic Parameters Mapping from the world coordinate system to the camera coordinate system

44 Extrinsic Parameters with Camera Matrix

45 2.3 Digital Camera Process chart

46 2.3 Digital Camera Process chart

47 2.3.2 Color Light from different parts of the spectrum is somehow integrated into discrete RGB color values

48 2.3.2 Color Primary and Secondary Colors Additive colors (projector, monitor) Subtractive colors (printing)

49 CIE color matching Commission Internationale de l’Éclairage (CIE) Color matching experiments: matching pure spectral colors against primaries at R = 700.0 nm, G = 546.1 nm, and B = 435.8 nm

50 XYZ Color Space Y=1 for pure R (1,0,0)

51 XYZ Color Space Y=1 for (1,1,1)

52 XYZ Color Space Chromaticity coordinates Yxy (luminance plus the two most distinctive chrominance components)
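The Yxy chromaticity coordinates follow directly from the XYZ tristimulus values: x = X/(X+Y+Z) and y = Y/(X+Y+Z), with Y kept as luminance. A minimal sketch:

```python
# Convert XYZ tristimulus values to Yxy: two chromaticity coordinates
# plus the luminance Y carried through unchanged.
def xyz_to_xyy(X, Y, Z):
    s = X + Y + Z
    return X / s, Y / s, Y

# Equal-energy white sits at chromaticity (1/3, 1/3):
x, y, Y = xyz_to_xyy(1.0, 1.0, 1.0)
```

Dropping the absolute scale is the point: (x, y) locates a color on the chromaticity diagram independently of how bright it is.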

53 Chromaticity Diagram

54 L*a*b* Color Space The human visual system’s response is roughly logarithmic After the non-linear mapping from XYZ to L*a*b* space, differences in luminance or chrominance are more perceptually uniform

55 L*a*b* Color Space

56 Color Cameras Spectral response functions should be designed so the camera generates the standard color values HDTV and newer monitors follow the ITU-R BT.709 standard

57 Color Filter Arrays Rather than using separate sensors for the three primary colors, a color filter array samples all three on a single sensor Bayer RGB pattern: (a) color filter array layout; (b) interpolated pixel values

58 Bayer Pattern, 1976 There are twice as many green filters as red or blue ones The human visual system is much more sensitive to high-frequency detail in luminance than in chrominance, and luminance is mostly determined by the green value

59 Color Balance Move the white point of a given image closer to pure white (R=G=B) – Multiply each RGB channel by a different factor – Or apply a color twist, a general 3x3 transform matrix – Exercise 2.9 (optional)
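The per-channel-gain option can be sketched as follows; the white-point estimate here is a hypothetical measurement (e.g. from a gray card), not part of the slides.

```python
import numpy as np

# Color balance by per-channel gains: choose gains so the estimated white
# point maps to a neutral gray with R = G = B.
def white_balance(img, white_rgb):
    white = np.asarray(white_rgb, dtype=float)
    gains = white.mean() / white          # one gain per channel
    return img * gains

img = np.array([[[0.8, 0.5, 0.4]]])       # 1x1 image with a reddish cast
balanced = white_balance(img, white_rgb=[0.8, 0.5, 0.4])
```

After balancing, the pixel that matched the estimated white point has equal R, G, and B values. The general 3×3 "color twist" the slide mentions subsumes this as the special case of a diagonal matrix.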

60 Gamma CRT monitors have a non-linear relationship between voltage and resulting brightness, characterized by gamma Pre-map the sensed luminance Y through an inverse gamma before transmission
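The round trip is easy to verify numerically: encoding with Y' = Y^(1/γ) and then applying a display's power-law response B = V^γ recovers the linear luminance. The γ = 2.2 value below is the conventional choice, used here for illustration.

```python
import numpy as np

# Gamma compensation: pre-map linear luminance through an inverse gamma;
# a power-law display response then reproduces the original values.
gamma = 2.2
Y = np.linspace(0.0, 1.0, 5)       # linear sensed luminance samples
encoded = Y ** (1.0 / gamma)       # inverse-gamma encoding (lifts dark values)
decoded = encoded ** gamma         # display response recovers linear Y
```

Because the encoding lifts dark values toward mid-gray, quantization and transmission noise land in a region that is compressed back down on display, which is the noise benefit the next slide describes.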

61 Gamma Compensation Noise added during transmission or quantization is reduced in the darker regions of the signal, where it would otherwise be most visible

62 Other Color Spaces XYZ, RGB for spectral content of color signals Others for image coding and computer graphics – YUV, YCrCb, HSV

63 YUV Color Space YUV for video transmission – Luma (Y) – Two lower-frequency chroma channels (U, V)
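One common definition of the transform (the analog BT.601 form; other standards use different coefficients) can be sketched directly:

```python
# RGB -> YUV using the BT.601 luma weights:
#   Y = 0.299 R + 0.587 G + 0.114 B,  U = 0.492 (B - Y),  V = 0.877 (R - Y)
def rgb_to_yuv(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luma
    u = 0.492 * (b - y)                     # blue-difference chroma
    v = 0.877 * (r - y)                     # red-difference chroma
    return y, u, v

# A gray input carries luminance but no chroma:
y, u, v = rgb_to_yuv(0.5, 0.5, 0.5)
```

Separating luma from chroma is what lets video systems transmit the chroma channels at lower bandwidth, as the slide notes.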

64 YCrCb Color Space Closely related to YUV, but with different scale factors so values fit within the 8-bit range of digital signals Useful for careful image de-blocking, etc.

65 HSV Color Space Hue: direction around a color wheel Saturation: scaled distance from the diagonal Value: mean or maximum color value More suitable for color picking
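The standard library's `colorsys` module implements this conversion (it uses max(R, G, B) as the value and expresses hue as a fraction of a full turn in [0, 1) rather than in degrees):

```python
import colorsys

# RGB -> HSV for two saturated primaries.
h, s, v = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)      # pure red: hue 0
h2, s2, v2 = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)   # pure green: a third of the way around
```

Both inputs come back fully saturated (s = 1) at full value (v = 1), differing only in hue, which is why HSV is convenient for color pickers.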

66 HSV Color Space

67 Color Ratios Suitable for algorithms that should only affect the value/luminance, not the saturation or hue After processing, scale RGB back by the color ratio Ynew/Yold Color FAQ: http://www.poynton.com/ColorFAQ.html
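A minimal sketch of this scheme: process only the luminance, then multiply every channel by Ynew/Yold so the color ratios (and hence hue and saturation) are preserved. The BT.601 luma weights are assumed here for Y.

```python
# Apply a luminance-only operation to an RGB color via the ratio Ynew/Yold.
def scale_by_luma(r, g, b, process):
    y_old = 0.299 * r + 0.587 * g + 0.114 * b   # luma (BT.601 weights)
    y_new = process(y_old)                       # luminance-only processing
    k = y_new / y_old                            # the color ratio Ynew/Yold
    return r * k, g * k, b * k

# Doubling the luminance doubles every channel, keeping the ratios intact:
r, g, b = scale_by_luma(0.4, 0.2, 0.1, lambda y: 2.0 * y)
```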


69 2.3.3 Compression Convert the signal into YCbCr (or a related variant) Compress the luminance signal with higher fidelity than the chrominance signals

