
1 A NEW STRUCTURED LIGHT CALIBRATION METHOD OF PARTIALLY UNKNOWN GEOMETRY
José Manuel Iñesta (University of Alicante), José Martínez Sotoca (University Jaume I, Castellón), Mateo Buendía (University of Valencia), Spain

2

3 STRUCTURED LIGHT
A range retrieval method that is an alternative to stereo imaging: a light source with a known pattern is used instead of a second camera. The light pattern creates a set of landmarks on the objects, and the 3D positions of those landmarks are computed.
Pros: it makes the stereo correspondence problem easier to solve, and a light source is expected to be cheaper than a digital camera.
Cons: it is only valid in controlled environments, and it is sensitive to changes in lighting conditions and to the kind of surface.

4 EXPERIMENTAL SETTING: CALIBRATION PHASE
Figure: two calibration images are acquired with the camera and the projector, IMAGE 1 with the pattern projected onto the back plane and IMAGE 2 with it projected onto the front plane; the region between the two planes is the valid calibrated space.

5 EXPERIMENTAL SETTING: OPERATION PHASE
Figure: the camera and the projector view the object placed in front of the back plane, and the unknown depth z(i,j) of each pattern node on the object has to be retrieved.

6 THE INDEXATION PROBLEM
It is the problem in structured light that is dual to the correspondence problem in stereo vision: labelling the landmarks artificially created by the pattern when it is projected onto the scene. Once it is solved, the range data can be retrieved.
Different approaches help to solve it: colour codes, binary patterns, constraints. We have introduced a mark in the pattern that sets a reference for landmark indexation.

7 THE CALIBRATION PROBLEM
Problem to be solved: determining the translation and orientation of the co-ordinate axes of the imaging system with respect to the global co-ordinate system.
The presented approach is based on geometric considerations applied to the information provided by two previous calibration projections onto two reference planes. It does not require knowledge of the whole system geometry.

8 SYSTEM GEOMETRY
Figure: scheme of the system geometry, showing the back plane, the front plane, the object point r3, the light source, the camera's focal point and image plane with the projected points rp1, rp2, rp3, the distances D, D1, D2, b and f, and the problem magnitude z.

9-12 RANGE RETRIEVAL EQUATIONS
Figures: the system geometry diagram of slide 8 is revisited step by step, using the similarities between the triangles formed by the light source, the camera's focal point, the object point r3 and its image projections rp1, rp2, rp3, together with the distances D, D1 and D2 between the reference planes.

13 RANGE RETRIEVAL EQUATIONS
Using these similarities, it is possible to derive an expression in which the z value depends only on the image co-ordinates of the light dots for the object net and for both calibration nets (rp1, rp2, rp3) and on the distances between the planes (D, D1, D2).
In addition, if two given nodes (A and B) are taken on the calibration planes (1 and 2), an expression for the constant k can be derived from them.
This way, z is computed as a function only of distances between pixels and of the distance between both calibration planes, D.
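The equations themselves appear as images in the original slides and are not reproduced in this transcript. Purely as an illustration of the idea, the sketch below assumes a common two-reference-plane form in which z is a projective interpolation between the back and front planes, with k estimated from two nodes A and B seen on both calibration planes; the exact formula and all function names are assumptions, not the authors' expressions.

```python
import numpy as np

def scale_constant_k(rpA_back, rpB_back, rpA_front, rpB_front):
    """Illustrative estimate of k from two nodes A and B imaged on both
    calibration planes: the ratio of their pixel separations on the back
    and front planes (an assumed form, not the slide's exact expression)."""
    d_back = np.linalg.norm(np.asarray(rpA_back) - np.asarray(rpB_back))
    d_front = np.linalg.norm(np.asarray(rpA_front) - np.asarray(rpB_front))
    return d_back / d_front

def depth_from_two_planes(rp1, rp2, rp3, D, k):
    """Illustrative depth of an object node, measured from the back plane.

    rp1, rp2, rp3 : pixel co-ordinates of the same projected dot on the
                    back plane, the front plane and the object, respectively.
    D             : distance between the two calibration planes (e.g. mm).
    k             : scale constant relating the two planes in the image.
    """
    d13 = np.linalg.norm(np.asarray(rp3) - np.asarray(rp1))  # shift away from back plane
    d32 = np.linalg.norm(np.asarray(rp2) - np.asarray(rp3))  # remaining shift to front plane
    # Assumed projective interpolation between the two planes:
    return D * d13 / (d13 + k * d32)
```

With rp3 = rp1 the sketch returns z = 0 (a point on the back plane) and with rp3 = rp2 it returns z = D (a point on the front plane), which is the boundary behaviour one would expect from the valid calibrated space.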

14 SYSTEM ERRORS
Figure: upper view of the experimental setting, showing the back plane, the front plane, the light source, the camera, the projection angle θ, the distances D, D1, D2 and the points r, r', rp1, rp2, rp3.
ERROR SOURCES:
Discretization error (256x256 images): ≈ 0.8%
Calibration error (D = 500 ± 1 mm): ≈ 0.2%
Setting error (related to k): ≈ 2%
Others?
Overall relative error Δr ≈ 3%
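As a quick sanity check (assuming the quoted total is a worst-case linear sum of the listed contributions), 0.8% + 0.2% + 2% = 3%, which matches the Δr ≈ 3% figure; combining the same contributions in quadrature, as for independent errors, would give about 2.2% instead.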

15 SYSTEM ERRORS
In addition, the errors due to the experimental setting vary with the projection angle θ.
Figure: plot of the relative error e_r (%) measured on an object of known dimensions against the projection angle θ, for θ between 10 and 50 degrees.

16 SYSTEM ERRORS
Figure: the same object with the pattern projected at different angle values: θ = 10º, 20º, 30º, 35º, 40º.

17 THE PROJECTED PATTERN
The points where the z(i,j) values are to be computed are the nodes of a square grid. The line spacing and the line thickness have to be decided, and these parameters are problem and surface dependent: object sizes, expected surface topology, textures, precision needed, etc. A minimal way to generate such a pattern is sketched below.
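As an illustration of these choices (not the authors' actual pattern generator), the hypothetical sketch below draws a square grid with configurable spacing and line thickness, plus a small reference mark of the kind mentioned on slide 6; all names and default sizes are assumptions.

```python
import numpy as np

def square_grid_pattern(height=768, width=1024, spacing=32, thickness=2,
                        mark_size=8):
    """Binary square-grid pattern with a reference mark (illustrative only).

    spacing   : distance in pixels between grid lines (i.e. node spacing).
    thickness : line thickness in pixels.
    mark_size : side of a small filled square drawn next to the upper-left
                node, used here as the indexation reference mark.
    """
    img = np.zeros((height, width), dtype=np.uint8)
    rows = (np.arange(height) % spacing) < thickness   # horizontal grid lines
    cols = (np.arange(width) % spacing) < thickness    # vertical grid lines
    img[rows, :] = 255
    img[:, cols] = 255
    img[spacing:spacing + mark_size, spacing:spacing + mark_size] = 255
    return img
```

The spacing controls how densely z(i,j) is sampled over the surface, while the thickness mainly affects how easily the lines can be segmented in the images.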

18 OBJECT AND PATTERN SEGMENTATION
Object segmentation is carried out by a logic difference between the posterior reference net image and the object net image.
Pattern segmentation: a maxima detection in the profile lines inside the segmented object zone is performed; a semi-automatic mechanism has been devised for the reconstruction of discontinuities (if they appear); the reconstructed lines are skeletonised and the distorted pattern is rebuilt. A sketch of these steps follows below.
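The following sketch shows one plausible way to implement these steps with numpy, scipy and scikit-image; the threshold values and function names are assumptions, only row profiles are scanned for brevity, and the semi-automatic repair of discontinuities is not reproduced here.

```python
import numpy as np
from scipy.signal import find_peaks
from skimage.morphology import skeletonize

def segment_object(reference_net, object_net, threshold=30):
    """Object mask from the absolute difference between the reference-plane
    image and the object image (illustrative threshold)."""
    diff = np.abs(object_net.astype(np.int16) - reference_net.astype(np.int16))
    return diff > threshold

def segment_pattern(object_net, object_mask, min_height=50):
    """Binary pattern inside the object zone: detect intensity maxima along
    each image row (columns would be handled analogously), then thin the
    detected lines to one-pixel width."""
    pattern = np.zeros(object_net.shape, dtype=bool)
    for y in range(object_net.shape[0]):
        profile = np.where(object_mask[y], object_net[y], 0)
        peaks, _ = find_peaks(profile, height=min_height)
        pattern[y, peaks] = True
    return skeletonize(pattern)
```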

19 NODE MAPPING AND Z EXTRACTION
The node-seeking algorithm performs line tracking in the four cardinal directions, looking for points that satisfy a crossing condition. The first node is the one at the upper-left corner of the pattern mark. After node tracking, each node has the co-ordinates of its projection on the back reference plane (rp1), on the front reference plane (rp2) and on the object (rp3), so we have all the information needed to compute z(i,j); a sketch of the crossing test is shown below.
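The authors' exact crossing condition is not given in the transcript; as an illustration only, the hypothetical test below flags a skeleton pixel as a grid node when the skeleton continues in all four cardinal directions around it, and orders the detected nodes so that the upper-left one comes first.

```python
import numpy as np

def is_crossing(skeleton, y, x, reach=3):
    """Heuristic crossing test (an assumption): skeleton pixels must be
    present within `reach` pixels above, below, left and right of (y, x)."""
    if not skeleton[y, x]:
        return False
    up    = skeleton[max(y - reach, 0):y, x].any()
    down  = skeleton[y + 1:y + 1 + reach, x].any()
    left  = skeleton[y, max(x - reach, 0):x].any()
    right = skeleton[y, x + 1:x + 1 + reach].any()
    return up and down and left and right

def find_nodes(skeleton, reach=3):
    """All pixels satisfying the crossing condition, in row-major order,
    so the first returned node is the upper-left one."""
    ys, xs = np.nonzero(skeleton)
    return sorted((y, x) for y, x in zip(ys, xs)
                  if is_crossing(skeleton, y, x, reach))
```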

20 SURFACE TOPOGRAPHY
The z(i,j) values for the grid nodes are obtained; these values represent the object surface. If a denser description is needed, a 2D interpolation is performed (bilinear, cubic splines, Hermite polynomials, etc.), for instance as sketched below.
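A minimal example of such a densification step, assuming the node depths are stored in a regular 2D array z indexed by the grid node indices (the function and parameter names are illustrative):

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def densify_surface(z, factor=4, order=3):
    """Resample the node depths z(i, j) onto a grid `factor` times denser.
    order=1 gives bilinear interpolation, order=3 bicubic splines."""
    ni, nj = z.shape
    i, j = np.arange(ni), np.arange(nj)
    spline = RectBivariateSpline(i, j, z, kx=order, ky=order)
    fi = np.linspace(0, ni - 1, factor * (ni - 1) + 1)
    fj = np.linspace(0, nj - 1, factor * (nj - 1) + 1)
    return spline(fi, fj)
```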

21 USING THE DATA
In our project, we use the 3D data to analyse the shape of the human back. The first step is to extract the line corresponding to the spine on the skin surface. The method is based on 3D deformable models (active shape models) trained with hand-marked spine lines. Once this line is obtained, differential geometry is applied to study its features.

22 CONCLUSIONS
A new method for the calibration of a structured light system has been presented. Its main advantages are an easy calibration procedure and its ability to retrieve the range of large object surfaces. The main limitation of the method in its current state is the need for the mark in the net for indexation; this is not important for the back application and could be solved by grid coding. For the human back application the error of the method is around 3 mm, which is below the limit recommended by the experts.

23 FUTURE LINES
To evaluate the difficulty of grid coding and its advantages for other real-world applications in which the objects are a priori totally unknown. To improve the accuracy of the method (lens distortion, subpixel information, etc.). To change the indexation approach using a coded pattern (to allow multiple objects and position variability).

