Computer vision: geometric models
Md. Atiqur Rahman Ahad
Based on: Computer vision: models, learning and inference. ©2011 Simon J.D. Prince.

To understand the structure of the environment from captured images, we need to maintain accurate and realistic models of:
– Lighting
– Surface properties
– Camera geometry
– Camera and object motion

2.1 Models of surface reflectance
The interaction of light with materials is key to imaging, so we need to develop models for surfaces. Light incident on a surface is:
– Absorbed,
– Reflected,
– Scattered, and/or
– Refracted

Bidirectional Reflectance Distribution Function (BRDF)
For an opaque (non-transparent, solid) surface with no subsurface scattering, reflectance is characterized by the Bidirectional Reflectance Distribution Function (BRDF). It is a 4D function that defines how light is reflected at an opaque surface.
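The transcript does not reproduce the definition; a standard form (an assumption here, following common usage rather than the slides) writes the BRDF as the ratio of reflected radiance to incident irradiance over the incoming and outgoing directions:

\[
f(\theta_i, \phi_i, \theta_o, \phi_o) \;=\; \frac{\mathrm{d}L_o(\theta_o, \phi_o)}{L_i(\theta_i, \phi_i)\,\cos\theta_i\,\mathrm{d}\omega_i}
\]

The four angles (two for the incoming direction, two for the outgoing direction) are what make it a 4D function.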

The goal is to estimate and infer properties of the surface. We need simple models, e.g.:
– Lambertian model
– Phong model

2.1.1 Lambertian reflectance model
The Lambertian model for surface reflectance is simple. It describes surfaces whose reflectance is independent of the observer's viewing direction. E.g.,
– Matte paint,
– Unpolished wood,
– Wool
exhibit the Lambertian model to a reasonable accuracy.

See Eq. 2.1. Albedo – the fraction of incident electromagnetic radiation that is reflected by the surface.
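Eq. 2.1 itself is not reproduced in the transcript; a standard statement of the Lambertian model (a sketch in common notation, not a verbatim copy of the book's equation) is

\[
l \;=\; \rho\,\max\!\left(0,\; \hat{\mathbf{n}} \cdot \hat{\mathbf{s}}\right) I,
\]

where \(\rho\) is the albedo, \(\hat{\mathbf{n}}\) the surface normal, \(\hat{\mathbf{s}}\) the unit direction towards the light source, and \(I\) the incoming irradiance; the observed intensity \(l\) does not depend on the viewing direction.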

2.1.2 Phong model / non-Lambertian reflectance
Many real-world materials have non-Lambertian reflectance. E.g., mirror-like surfaces: they reflect incoming light in a specific direction about the local surface normal at the point of incidence. Specular (mirror-like) components and Lambertian components – together!

See the equation. The Phong model has two parts:
– Diffuse part: Lambertian shading due to the illuminant direction
– Specular term: specular highlights
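A minimal numerical sketch of this two-term structure, assuming unit vectors and illustrative parameter names (rho_d, rho_s, alpha are not taken from the slides):

```python
import numpy as np

def phong_intensity(n, s, v, rho_d, rho_s, alpha, light=1.0):
    """Phong-style shading: Lambertian diffuse term plus a specular term.

    n, s, v : unit 3-vectors (surface normal, direction to light, direction to viewer)
    rho_d, rho_s : diffuse and specular reflectance coefficients (illustrative names)
    alpha : shininess exponent controlling the size of the specular highlight
    """
    n, s, v = (np.asarray(x, dtype=float) for x in (n, s, v))
    diffuse = rho_d * max(0.0, float(n @ s))            # Lambertian part
    r = 2.0 * (n @ s) * n - s                           # mirror reflection of the light direction
    specular = rho_s * max(0.0, float(r @ v)) ** alpha  # highlight around the mirror direction
    return light * (diffuse + specular)

# Example: light overhead, viewer slightly off-axis
print(phong_intensity(n=[0, 0, 1], s=[0, 0, 1],
                      v=[0.1, 0.0, 0.995], rho_d=0.7, rho_s=0.3, alpha=20))
```

Setting rho_s = 0 recovers the purely Lambertian case of the previous slides.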

2.2 Camera models
– Pinhole camera projection / model
– Epipolar geometry – consider 2 images, or central projections, of a 3D scene.
– Multi-view localization problem → correspondence problem
– Triangulation – to localize objects in world coordinates. Correspondence information is needed across cameras, and it is difficult to obtain.
– Planar scenes and homography [rotation matrix, translation matrix]
– Camera calibration

Motivation: sparse stereo reconstruction
Compute the depth at a set of sparse matching points.

Pinhole camera model
The real camera image is inverted. Instead, we model the impossible but more convenient virtual image.

Pinhole camera terminology
[Figure: pinhole camera geometry, with world coordinates and image plane coordinates labelled]

Normalized camera
By similar triangles:
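(The equation is on the slide image rather than in the transcript; in the notation of the later slides, where the world point is w = (u, v, w)^T, the similar-triangles argument for a normalized camera with unit focal length gives)

\[
x = \frac{u}{w}, \qquad y = \frac{v}{w}.
\]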

Focal length parameters
We can model both the effect of the distance to the focal plane and the density of the receptors with a single focal length parameter. In practice, the receptors may not be square, so use a different focal length parameter for the x and y dimensions.

Offset parameters
The current model assumes that pixel (0,0) is where the principal ray strikes the image plane (i.e., the center). Model an offset to the center.

Skew parameter
Finally, add a skew parameter. It accounts for the image plane being not exactly perpendicular to the principal ray.
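Collecting the focal length, offset and skew parameters gives the intrinsic matrix; written here in one common notation (\(\phi_x, \phi_y\) for the focal lengths, \(\gamma\) for the skew, \(\delta_x, \delta_y\) for the offsets – the exact symbols used on the slides are assumed):

\[
\boldsymbol{\Lambda} =
\begin{bmatrix}
\phi_x & \gamma & \delta_x \\
0 & \phi_y & \delta_y \\
0 & 0 & 1
\end{bmatrix}
\]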

Position and orientation of camera
The position w = (u, v, w)^T of a point in the world is generally not expressed in the frame of reference of the camera. Transform between the frame of reference of the world and the frame of reference of the camera using a 3D rigid transformation.
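With a rotation matrix \(\boldsymbol{\Omega}\) and translation vector \(\boldsymbol{\tau}\) (symbols assumed rather than quoted from the slides), the point expressed in the camera frame of reference is

\[
\mathbf{w}' = \boldsymbol{\Omega}\,\mathbf{w} + \boldsymbol{\tau}.
\]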

Complete pinhole camera model
– Intrinsic [inherent/essential] parameters (stored as the intrinsic matrix): focal length parameters (different for the x and y dimensions), offset, and skew.
– Extrinsic [external/non-essential] parameters: the position and orientation of the camera.
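A minimal numerical sketch of the complete model, composing the extrinsic rigid-body transform with the intrinsic matrix (the names project, K, R, t are illustrative, not taken from the book):

```python
import numpy as np

def project(points_w, K, R, t):
    """Project Nx3 world points through a pinhole camera (illustrative sketch).

    K : 3x3 intrinsic matrix (focal lengths, skew, principal point offset)
    R, t : extrinsic rotation (3x3) and translation (3,) mapping world -> camera frame
    Returns Nx2 pixel coordinates.
    """
    points_w = np.asarray(points_w, dtype=float)
    cam = points_w @ R.T + t           # world coordinates -> camera coordinates
    proj = cam @ K.T                   # apply intrinsics (homogeneous image coordinates)
    return proj[:, :2] / proj[:, 2:3]  # perspective division

# Example: 500-pixel focal length, no skew, principal point at (320, 240)
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)
print(project([[0.1, -0.2, 2.0]], K, R, t))   # a point 2 m in front of the camera
```

The perspective division in the last line is what makes the model non-linear.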

Complete pinhole camera model
For short: add noise – uncertainty in localizing the feature in the image.
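Adding noise turns the deterministic projection into a probabilistic observation model; one natural choice, consistent with the book's generative framing but written here as an assumption, is spherical Gaussian noise around the projected position:

\[
\Pr(\mathbf{x} \mid \mathbf{w}) = \mathrm{Norm}_{\mathbf{x}}\!\left[\,\mathrm{pinhole}\!\left[\mathbf{w}, \boldsymbol{\Lambda}, \boldsymbol{\Omega}, \boldsymbol{\tau}\right],\; \sigma^2\mathbf{I}\,\right].
\]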

Radial distortion
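The slide gives no formula; a common polynomial model (a sketch, with coefficient names beta1, beta2 chosen here for illustration) distorts normalized image coordinates radially:

```python
import numpy as np

def radial_distort(xy, beta1, beta2):
    """Apply a polynomial radial distortion model in normalized image coordinates.

    xy : Nx2 undistorted normalized coordinates
    beta1, beta2 : radial distortion coefficients (model and names follow a common
                   convention, not taken verbatim from the slides)
    """
    xy = np.asarray(xy, dtype=float)
    r2 = np.sum(xy**2, axis=1, keepdims=True)    # squared distance from the image centre
    factor = 1.0 + beta1 * r2 + beta2 * r2**2    # points move radially by this factor
    return xy * factor

print(radial_distort([[0.3, 0.4]], beta1=-0.2, beta2=0.05))
```

With this forward convention, negative coefficients give barrel distortion and positive coefficients give pincushion distortion.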

2.2.2 Epipolar geometry
The principle of triangulation in stereo imaging.

Epipolar constraint

The epipole is the point of intersection of the line joining the optical centres (that is, the baseline) with the image plane. Thus the epipole is the image, in one camera, of the optical centre of the other camera. The epipolar plane is the plane defined by a 3D point M and the optical centres C and C'. The epipolar line is the straight line of intersection of the epipolar plane with the image plane. It is the image in one camera of a ray through the optical centre and image point in the other camera. All epipolar lines intersect at the epipole.
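Algebraically, this geometry is summarized by the fundamental matrix F: for corresponding homogeneous image points \(\tilde{\mathbf{x}}\) and \(\tilde{\mathbf{x}}'\) in the two views,

\[
\tilde{\mathbf{x}}'^{\top}\,\mathbf{F}\,\tilde{\mathbf{x}} = 0,
\]

and \(\mathbf{l}' = \mathbf{F}\tilde{\mathbf{x}}\) is the epipolar line in the second image on which the correspondence must lie; the epipoles are the null vectors of \(\mathbf{F}\) and \(\mathbf{F}^{\top}\).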

The epipolar line is the line along which the corresponding point for X must lie.
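Once the correspondence has been found along the epipolar line, the 3D point can be recovered by triangulation. Below is a minimal linear (DLT-style) sketch, assuming known 3x4 projection matrices P1 and P2; the function name and interface are illustrative:

```python
import numpy as np

def triangulate(x1, x2, P1, P2):
    """Linear triangulation of one point from two views (standard DLT sketch).

    x1, x2 : (x, y) pixel coordinates of the same 3D point in each image
    P1, P2 : 3x4 camera projection matrices (assumed known from calibration)
    Returns the 3D point in world coordinates.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],   # each correspondence contributes two linear
        x1[1] * P1[2] - P1[1],   # equations in the homogeneous 3D point
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)   # least-squares solution = last right singular vector
    X = Vt[-1]
    return X[:3] / X[3]           # back from homogeneous coordinates
```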

Conclusion
– The pinhole camera model is a non-linear function that takes points in the 3D world and finds where they map to in the image.
– It is parameterized by intrinsic and extrinsic matrices.
– It is difficult to estimate intrinsic/extrinsic parameters and depth because the model is non-linear.
– Use homogeneous coordinates, where we can get closed-form solutions (initial solutions only).