# Camera Calibration (CS485/685 Computer Vision, Prof. Bebis)


Camera Calibration - Goal Estimate the extrinsic and intrinsic camera parameters (e.g., the effective focal lengths f/s_x and f/s_y).

Camera Calibration - How Using a set of known correspondences between point features in the world, (X_w, Y_w, Z_w), and their projections on the image, (x_im, y_im).

Calibration Object Calibration relies on one or more images of a calibration object: (1) a 3D object of known geometry, (2) located at a known position in space, and (3) yielding image features that can be located accurately.

Calibration object: example Two orthogonal grids of equally spaced black squares. Assume that the world reference frame is centered at the lower left corner of the right grid, with axes parallel to the three directions identified by the calibration pattern.

Calibration pattern: example (cont’d) Obtain 3D coordinates (X_w, Y_w, Z_w): given the size of the planes, the number of squares, etc. (all known by construction), the coordinates of each vertex can be computed in the world reference frame using trigonometry.

Calibration pattern: example (cont’d) Obtain 2D coordinates (x_im, y_im): the projection of the vertices on the image can be found by intersecting the edge lines of the corresponding square sides (or through corner detection).

Problem Statement Compute the extrinsic and intrinsic camera parameters from N corresponding pairs of points (X_w,i, Y_w,i, Z_w,i) and (x_im,i, y_im,i), i = 1, ..., N. This is a very well studied problem, and many different methods for camera calibration exist.

Methods (1) Indirect camera calibration (1.1) Estimate the elements of the projection matrix. (1.2) If needed, compute the intrinsic/extrinsic camera parameters from the entries of the projection matrix.

Methods (cont’d) (2) Direct camera calibration Direct recovery of the intrinsic and extrinsic camera parameters.

Method 1: Indirect Camera Calibration Review of basic equations. Note: (x_im, y_im) is replaced with (x, y) for simplicity.
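The equations on this slide did not survive the transcript. In the standard notation, with the 3 × 4 projection matrix M = (m_ij) mapping homogeneous world coordinates to homogeneous image coordinates, they are:

```latex
x = \frac{m_{11} X_w + m_{12} Y_w + m_{13} Z_w + m_{14}}{m_{31} X_w + m_{32} Y_w + m_{33} Z_w + m_{34}},
\qquad
y = \frac{m_{21} X_w + m_{22} Y_w + m_{23} Z_w + m_{24}}{m_{31} X_w + m_{32} Y_w + m_{33} Z_w + m_{34}}
```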

(Method 1) Step 1: solve for m_ij's M has 11 independent entries, since it is defined only up to an arbitrary scale factor (e.g., divide every entry by m_11). We therefore need at least 11 equations to compute M, i.e., at least 6 world-image point correspondences.

(Method 1) Step 1: solve for m ij ’s Each 3D-2D correspondence gives rise to two equations:
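Reconstructing the missing equations: multiplying each projection relation through by the common denominator gives, for correspondence i,

```latex
x_i\,(m_{31} X_{w,i} + m_{32} Y_{w,i} + m_{33} Z_{w,i} + m_{34}) - (m_{11} X_{w,i} + m_{12} Y_{w,i} + m_{13} Z_{w,i} + m_{14}) = 0 \\
y_i\,(m_{31} X_{w,i} + m_{32} Y_{w,i} + m_{33} Z_{w,i} + m_{34}) - (m_{21} X_{w,i} + m_{22} Y_{w,i} + m_{23} Z_{w,i} + m_{24}) = 0
```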

(Method 1) Step 1: solve for m_ij's This leads to a homogeneous system of equations A m = 0, where A is a 2N × 12 matrix (two equations per correspondence).

(Method 1) Step 1: solve for m_ij's (cont’d) The (least-squares) solution of A m = 0, subject to ||m|| = 1, is the right singular vector of A associated with its smallest singular value.
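A minimal numpy sketch of this step (the function name is my own, not from the slides): it stacks the two linear equations per correspondence into a 2N × 12 matrix and takes the SVD.

```python
import numpy as np

def estimate_projection_matrix(world_pts, image_pts):
    """DLT: estimate the 3x4 projection matrix M (up to scale) from
    N >= 6 world-image correspondences by solving A m = 0 with SVD."""
    A = []
    for (Xw, Yw, Zw), (x, y) in zip(world_pts, image_pts):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -x*Xw, -x*Yw, -x*Zw, -x])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -y*Xw, -y*Yw, -y*Zw, -y])
    A = np.asarray(A)                 # 2N x 12
    _, _, Vt = np.linalg.svd(A)
    m = Vt[-1]                        # right singular vector, smallest sigma
    return m.reshape(3, 4)            # M is recovered only up to scale
```

Note the points must be non-coplanar for the system to have rank 11.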

(Method 1) Step 2: find intrinsic/extrinsic parameters

Let’s define the following vectors: q_1, q_2, q_3 (the 3-vectors formed by the first three entries of rows 1-3 of M) and q_4 (the last column of M).

(Method 1) Step 2: find intrinsic/extrinsic parameters (cont’d) The solutions are as follows (see the book chapter for details); the remaining parameters are then easily computed.
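As an illustration of this step, here is a hedged numpy sketch of recovering the parameters from M. It uses the common positive-focal-length convention K = [[f_x, 0, o_x], [0, f_y, o_y], [0, 0, 1]] rather than the book's negative-focal convention, so signs differ from the slides; the function name is my own.

```python
import numpy as np

def decompose_projection_matrix(M):
    """Recover (fx, fy, ox, oy) and (R, T) from M = K [R | T],
    assuming K = [[fx,0,ox],[0,fy,oy],[0,0,1]] with fx, fy > 0."""
    M = np.asarray(M, float)
    M = M / np.linalg.norm(M[2, :3])   # third row of [R|T] has unit norm
    if M[2, 3] < 0:                    # require Tz > 0 (scene in front)
        M = -M
    q1, q2, q3, q4 = M[0, :3], M[1, :3], M[2, :3], M[:, 3]
    ox, oy = q1 @ q3, q2 @ q3          # r1, r2 are orthogonal to r3
    fx = np.sqrt(q1 @ q1 - ox**2)
    fy = np.sqrt(q2 @ q2 - oy**2)
    r1 = (q1 - ox * q3) / fx
    r2 = (q2 - oy * q3) / fy
    R = np.vstack([r1, r2, q3])
    T = np.array([(q4[0] - ox * q4[2]) / fx,
                  (q4[1] - oy * q4[2]) / fy,
                  q4[2]])
    return fx, fy, ox, oy, R, T
```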

Method 2: Direct Camera Calibration Review of basic equations. From world coordinates to camera coordinates: P_c = R P_w + T. For simplicity, we will replace -T' with T. Warning: this is NOT the same T as before!

Method 2: Direct Camera Calibration (cont’d) Review of basic equations. From camera coordinates to pixel coordinates, and relating world coordinates to pixel coordinates:
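The slide's equations are missing from the transcript. With the sign convention used later in these slides (negative effective focal lengths f/s_x, f/s_y), the world-to-pixel relations are:

```latex
x_{im} - o_x = -\frac{f}{s_x}\,
  \frac{r_{11} X_w + r_{12} Y_w + r_{13} Z_w + T_x}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + T_z},
\qquad
y_{im} - o_y = -\frac{f}{s_y}\,
  \frac{r_{21} X_w + r_{22} Y_w + r_{23} Z_w + T_y}{r_{31} X_w + r_{32} Y_w + r_{33} Z_w + T_z}
```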

Method 2: Direct Parameter Calibration Intrinsic parameters The intrinsic parameters f, s_x, s_y, o_x, and o_y are not independent. Define the following four independent parameters: f_x = f/s_x, α = s_y/s_x = f_x/f_y, o_x, and o_y.

Method 2: Main Steps (1) Assuming that o x and o y are known, estimate all other parameters. (2) Estimate o x and o y

(Method 2) Step 1: estimate f_x, α, R, and T To simplify notation, set (x, y) = (x_im - o_x, y_im - o_y). Combining the equations above (which share the same denominator), we have:
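Reconstructing the missing equation: multiplying each projection relation by the common denominator and equating (the minus signs cancel) gives

```latex
x\, f_y\,(r_{21} X_w + r_{22} Y_w + r_{23} Z_w + T_y)
  \;=\; y\, f_x\,(r_{11} X_w + r_{12} Y_w + r_{13} Z_w + T_x)
```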

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Each pair of corresponding points must satisfy the previous equation. Divide by f_y and re-arrange terms:

(Method 2) Step 1: estimate f x, α, R, and T (cont’d) where we obtain the following equation:
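The reconstructed equation (one per correspondence), with α = f_x/f_y, is

```latex
x\,(r_{21} X_w + r_{22} Y_w + r_{23} Z_w + T_y)
  \;-\; \alpha\, y\,(r_{11} X_w + r_{12} Y_w + r_{13} Z_w + T_x) = 0
```

which is linear in the eight unknowns v = (r_21, r_22, r_23, T_y, α r_11, α r_12, α r_13, α T_x).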

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Assuming N correspondences leads to a homogeneous system A v = 0, where A is an N × 8 matrix.
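A minimal numpy sketch of building and solving this N × 8 system (function name and recovered quantities follow the derivation above; `a` stands for α = f_x/f_y). The input (x, y) are image coordinates already shifted by (o_x, o_y).

```python
import numpy as np

def solve_direct_step1(world_pts, image_pts):
    """Build the N x 8 system A v = 0 with
    v = (r21, r22, r23, Ty, a*r11, a*r12, a*r13, a*Tx), a = fx/fy,
    solve it up to scale via SVD, and recover |gamma| and a."""
    A = []
    for (Xw, Yw, Zw), (x, y) in zip(world_pts, image_pts):
        A.append([x*Xw, x*Yw, x*Zw, x, -y*Xw, -y*Yw, -y*Zw, -y])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    v = Vt[-1]                       # unit-norm solution, scale gamma unknown
    gamma = np.linalg.norm(v[:3])    # |gamma|, since ||(r21,r22,r23)|| = 1
    alpha = np.linalg.norm(v[4:7]) / gamma
    return v, gamma, alpha
```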

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) The solution v̄ is the right singular vector of A associated with its smallest singular value; it equals the true parameter vector only up to an unknown scale factor γ.

Determine α and |γ|: since (r_21, r_22, r_23) and (r_11, r_12, r_13) are rows of a rotation matrix, they have unit norm; hence |γ| = ||(v̄_1, v̄_2, v̄_3)|| and α = ||(v̄_5, v̄_6, v̄_7)|| / |γ|.

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Determine r_21, r_22, r_23, r_11, r_12, r_13, T_y, and T_x (up to an unknown common sign) by dividing the first four components of v̄ by |γ| and the last four by α|γ|.

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Determine r_31, r_32, r_33: the third row can be estimated as the cross product of R_1 and R_2. The sign of R_3 is already fixed, since the entries of R_3 remain unchanged if the signs of all the entries of R_1 and R_2 are reversed. We have now estimated R; call the estimate R̂.

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Ensure the orthogonality of R. The computation of R does not explicitly take the orthogonality constraints into account, so the estimate R̂ cannot be expected to be orthogonal. Enforce orthogonality on R̂ using SVD: R̂ = U D Vᵀ; replace D with the identity I, giving R = U Vᵀ.
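This orthogonalization step is short enough to show in full; a sketch (function name my own):

```python
import numpy as np

def nearest_rotation(R_hat):
    """Project an approximate rotation matrix onto the set of
    orthogonal matrices: R_hat = U D V^T, then replace D with I."""
    U, _, Vt = np.linalg.svd(R_hat)
    return U @ Vt
```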

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Determine the sign of γ. Consider again the equation x = -f_x (r_11 X_w + r_12 Y_w + r_13 Z_w + T_x)/(r_31 X_w + r_32 Y_w + r_33 Z_w + T_z). Since the denominator is positive (the point lies in front of the camera) and f_x > 0, x and (r_11 X_w + r_12 Y_w + r_13 Z_w + T_x) must have opposite signs. Pick one correspondence: if x (r_11 X_w + r_12 Y_w + r_13 Z_w + T_x) > 0, reverse the signs of R_1, R_2, T_x, and T_y.

Determine T_z and f_x. Consider the projection equation for x again, and rewrite it in the form: x T_z + f_x (r_11 X_w + r_12 Y_w + r_13 Z_w + T_x) = -x (r_31 X_w + r_32 Y_w + r_33 Z_w).

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) We can obtain T_z and f_x by solving a system of equations like the one above, written for N points. Using SVD (i.e., the pseudo-inverse), the least-squares solution follows.
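A minimal numpy sketch of this least-squares solve (function name my own; `lstsq` is equivalent to the SVD/pseudo-inverse solution mentioned above):

```python
import numpy as np

def solve_tz_fx(world_pts, image_xs, R, Tx):
    """Solve the N x 2 system  x_i*Tz + fx*(R1.P_i + Tx) = -x_i*(R3.P_i)
    for (Tz, fx) in the least-squares sense."""
    A, b = [], []
    for P, x in zip(world_pts, image_xs):
        A.append([x, R[0] @ P + Tx])
        b.append(-x * (R[2] @ P))
    (Tz, fx), *_ = np.linalg.lstsq(np.asarray(A), np.asarray(b), rcond=None)
    return Tz, fx
```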

(Method 2) Step 1: estimate f_x, α, R, and T (cont’d) Determine f_y: since α = f_x / f_y, we have f_y = f_x / α.

(Method 2) Step 2: estimate o_x and o_y The computation of o_x and o_y is based on the following theorem. Orthocenter Theorem: Let T be the triangle on the image plane defined by the three vanishing points of three mutually orthogonal sets of parallel lines in space. Then (o_x, o_y) is the orthocenter of T.

(Method 2) Step 2: estimate o_x and o_y (cont’d) We can use the same calibration pattern to compute three vanishing points (use three pairs of parallel lines defined by the sides of the planes). None of the three mutually orthogonal directions should be near parallel to the image plane!
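Given the three vanishing points, the orthocenter is the intersection of two altitudes; a small numpy sketch (function name my own):

```python
import numpy as np

def orthocenter(p1, p2, p3):
    """Principal point (ox, oy) as the orthocenter of the triangle of
    three vanishing points (Orthocenter Theorem): intersect two altitudes."""
    p1, p2, p3 = map(np.asarray, (p1, p2, p3))
    d1 = p3 - p2          # altitude from p1 is perpendicular to edge p2-p3
    d2 = p1 - p3          # altitude from p2 is perpendicular to edge p3-p1
    # Solve (h - p1).d1 = 0 and (h - p2).d2 = 0 for the point h.
    A = np.vstack([d1, d2])
    b = np.array([d1 @ p1, d2 @ p2])
    return np.linalg.solve(A, b)
```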

Comments To improve the accuracy of camera calibration, it is a good idea to estimate the parameters several times (i.e., using different images) and average the results. Localization errors: the precision of calibration depends on how accurately the world and image points are located. Studying how localization errors "propagate" to the estimates of the camera parameters is very important.

Comments (cont’d) In theory, direct and indirect camera calibration should produce the same results. In practice, we obtain different solutions due to different error propagations. Indirect camera calibration is simpler and should be preferred when we do not need to compute the intrinsic/extrinsic camera parameters explicitly.

How should we estimate the accuracy of a calibration algorithm? Project known 3D points onto the image, compare the projections with the corresponding measured pixel coordinates, repeat for many points, and estimate the “re-projection” error!
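The procedure above can be sketched in a few lines of numpy (function name my own; here the estimated calibration is represented by its 3 × 4 projection matrix M):

```python
import numpy as np

def reprojection_error(M, world_pts, image_pts):
    """RMS re-projection error: project known 3D points with the
    estimated matrix M and compare with measured pixel coordinates."""
    world_pts = np.asarray(world_pts, float)
    image_pts = np.asarray(image_pts, float)
    P = np.hstack([world_pts, np.ones((len(world_pts), 1))])  # homogeneous
    proj = (M @ P.T).T
    proj = proj[:, :2] / proj[:, 2:3]        # perspective divide
    return np.sqrt(np.mean(np.sum((proj - image_pts) ** 2, axis=1)))
```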