1 Self-Calibration and Neural Network Implementation of Photometric Stereo Yuji IWAHORI, Yumi WATANABE, Robert J. WOODHAM and Akira IWATA.


1 Self-Calibration and Neural Network Implementation of Photometric Stereo — Yuji IWAHORI, Yumi WATANABE, Robert J. WOODHAM and Akira IWATA

2 Background: Neural network approach to Shape-from-Shading by Sejnowski et al. (1987); real-time implementation of photometric stereo using a LUT (lookup table) by Woodham (1994).

3 Background: Neural network based photometric stereo and extensions by Iwahori et al. (since 1995). Required condition: the calibration sphere and the test object have the same reflectance property under the same imaging conditions (images taken under multiple light sources with different directions).

4 Proposed Approach: Neural network implementation of photometric stereo for a rotational object with a non-uniform reflectance factor. No separate calibration object is required; instead, self-calibration is performed using controlled rotation of the target object itself.

5 Three Light Source Photometry: Let E_i (i = 1, 2, 3) be the image intensity under light source i, and let R_i(n) be the corresponding reflectance map for the unit surface normal vector n at each point, so that E_i = rho * R_i(n) with reflectance factor rho. Eliminating the effect of the reflectance factor gives E_i' = E_i / sqrt(E_1^2 + E_2^2 + E_3^2) = R_i(n) / sqrt(R_1(n)^2 + R_2(n)^2 + R_3(n)^2).
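
A minimal sketch of eliminating the reflectance factor, assuming the model E_i = rho * R_i(n); the function name and the toy numbers are illustrative, not from the paper:

```python
import numpy as np

def normalize_intensities(e1, e2, e3):
    """Normalize three-light-source intensities to cancel the
    per-pixel reflectance factor (albedo).

    Under E_i = rho * R_i(n), dividing each E_i by the Euclidean
    norm of (E_1, E_2, E_3) leaves a quantity that depends only on
    the surface normal n, not on rho.
    """
    norm = np.sqrt(e1**2 + e2**2 + e3**2)
    return e1 / norm, e2 / norm, e3 / norm

# The same surface point under two different albedos yields
# identical normalized intensities:
rho_a, rho_b = 0.3, 0.9
r = np.array([0.5, 0.7, 0.2])          # R_i(n) for some fixed normal n
ea = normalize_intensities(*(rho_a * r))
eb = normalize_intensities(*(rho_b * r))
print(np.allclose(ea, eb))             # True: albedo is eliminated
```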

6 Observation System: The target object is observed through a full 360 degrees of rotation under three separate illumination conditions. [Figure: turntable with the object, camera viewing along -z, and light sources; image axes x, y]

7 Occluding Boundary: The stereographic projection (f, g) is used to represent surface orientation at the occluding boundary. In stereographic projection, points on an occluding boundary lie on the circle f^2 + g^2 = 4. Except at such points, the surface gradient parameters (p, q), where the surface normal is n = (-p, -q, 1) / sqrt(1 + p^2 + q^2), are given by p = 4f / (4 - f^2 - g^2), q = 4g / (4 - f^2 - g^2) using (f, g).
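
The standard gradient-to-stereographic mapping can be sketched as follows (a sketch under the usual (p, q) and (f, g) conventions; the function names and test values are illustrative):

```python
import numpy as np

def gradient_to_stereographic(p, q):
    # (p, q) are the surface gradient parameters; (f, g) are
    # stereographic-projection coordinates, used because they stay
    # finite at occluding boundaries where p and q diverge.
    root = np.sqrt(1.0 + p**2 + q**2)
    f = 2.0 * p / (1.0 + root)
    g = 2.0 * q / (1.0 + root)
    return f, g

def stereographic_to_gradient(f, g):
    # Inverse map; valid except on the circle f^2 + g^2 = 4,
    # which corresponds to the occluding boundary itself.
    denom = 4.0 - f**2 - g**2
    return 4.0 * f / denom, 4.0 * g / denom

p, q = 1.5, -0.7
f, g = gradient_to_stereographic(p, q)
p2, q2 = stereographic_to_gradient(f, g)
print(np.allclose([p2, q2], [p, q]))   # True: round trip recovers (p, q)

# As the surface turns away from the viewer (p, q -> infinity),
# (f, g) approaches the circle of radius 2:
f_big, g_big = gradient_to_stereographic(1e8, 1e8)
print(abs(f_big**2 + g_big**2 - 4.0) < 1e-6)   # True
```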

8 Extraction of Feature Points: Geometric information — at an occluding boundary, the surface normal is perpendicular to both the tangent to the occluding contour itself and to the viewing direction. [Figure: image plane with axes x, y, viewing direction along z, occluding boundary contour and its surface normal]
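
The geometric constraint above determines the boundary normal directly as a cross product. A minimal sketch, assuming a camera viewing direction along -z as in the observation-system slide (the function name and the example tangent are illustrative):

```python
import numpy as np

def boundary_normal(tangent, view=(0.0, 0.0, -1.0)):
    """Surface normal at an occluding-boundary point.

    The normal is perpendicular to both the contour tangent and the
    viewing direction, so it is proportional to their cross product;
    the result is normalized to unit length.
    """
    n = np.cross(np.asarray(tangent, float), np.asarray(view, float))
    return n / np.linalg.norm(n)

# Tangent along the image y axis at a silhouette point: the normal
# comes out as a unit vector lying in the image plane, perpendicular
# to both the tangent and the viewing direction.
n = boundary_normal((0.0, 1.0, 0.0))
print(n)
```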

9 Extraction of Feature Points: Gaussian sphere. The current (f, g) is determined from the radius of the unit Gaussian sphere (= 1) and the horizontal distance to the rotation axis at each occluding boundary point. [Figure: unit Gaussian sphere with axes f, g]

10 Use of Photometric Constraint: The image irradiance of a tracked feature point ought to become gradually higher from the left occluding boundary to the rotation axis, and then become gradually lower towards the right occluding boundary. [Figure: image intensity vs. rotation angle from -90 to 90 degrees, increasing up to the rotation axis (0 degrees) and decreasing after it]
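
This constraint can be checked per tracked point as a unimodality test on the intensity-vs-angle curve. A sketch (function name, tolerance parameter, and the toy cosine/sine curves are illustrative; a stricter version would also require the peak to sit near the rotation axis at 0 degrees):

```python
import numpy as np

def satisfies_photometric_constraint(angles, intensities, tol=0.0):
    """True if intensity rises monotonically from the left occluding
    boundary (about -90 deg) up to its peak, then falls monotonically
    toward the right boundary (about +90 deg). `tol` allows small
    noise-level violations.
    """
    angles = np.asarray(angles, float)
    vals = np.asarray(intensities, float)[np.argsort(angles)]
    peak = int(np.argmax(vals))
    rising = np.all(np.diff(vals[:peak + 1]) >= -tol)
    falling = np.all(np.diff(vals[peak:]) <= tol)
    return bool(rising and falling)

angles = np.arange(-90, 91, 10)
good = np.cos(np.deg2rad(angles))           # peaks at the rotation axis
bad = np.abs(np.sin(np.deg2rad(angles)))    # dips at the rotation axis
print(satisfies_photometric_constraint(angles, good))  # True
print(satisfies_photometric_constraint(angles, bad))   # False
```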

11 Use of Photometric Constraint [Figure: image intensity vs. rotation angle (-90 to 90 degrees) for all points on an occluding boundary; curves satisfying the photometric constraint peak at the rotation axis]

12 Use of Photometric Constraint: Examples of plots for points which do not satisfy the photometric constraint. [Figure: image intensity vs. rotation angle (-90 to 90 degrees) for rejected points]

13 Extraction of Training Data Set: Points which happen to share the same value of (f, g) are sorted and the median value is selected as one unique point. For each (f, g) value, the representative feature point is selected for the training data set for NN learning, giving the relation of image irradiance to (f, g). [Figure: feature points plotted on the (f, g) plane]
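
The median-based selection can be sketched as a grouping step (the function name, the use of exact (f, g) tuples as bin keys, and the toy values are illustrative):

```python
from collections import defaultdict

def select_representatives(points):
    """Collapse feature points sharing the same (f, g) value.

    `points` is a list of ((f, g), intensity) pairs. For each (f, g)
    bin the intensities are sorted and the median element is kept as
    the single representative training sample, which suppresses
    outliers among repeated observations.
    """
    bins = defaultdict(list)
    for fg, intensity in points:
        bins[fg].append(intensity)
    training = {}
    for fg, vals in bins.items():
        vals.sort()
        training[fg] = vals[len(vals) // 2]   # median element
    return training

points = [((0.5, 0.1), 0.80), ((0.5, 0.1), 0.20), ((0.5, 0.1), 0.75),
          ((1.0, 0.0), 0.60)]
print(select_representatives(points))
# {(0.5, 0.1): 0.75, (1.0, 0.0): 0.6}
```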

14 Extraction of Training Data Set [Figure: image intensity vs. rotation angle (-90 to 90 degrees); points satisfying the photometric constraint are reduced to a unique combination per (f, g)]

15 Neural Network Learning [Figure: RBF neural network architecture with normalized intensities E1', E2', E3', radial basis units, and weighted connections w]

16 What this RBF NN does: Once learning is complete, the mapping that has been learned is represented by the weights connecting the units of the RBF neural network. The resulting network generalizes in that it predicts a surface normal given any (E1', E2', E3'). Thus, the resulting network can be used to estimate the surface shape of the target object.
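
A toy sketch of such a network, assuming Gaussian basis functions centered on the training inputs and a direct linear solve for the output weights; the class name, the sigma value, and the made-up (f, g) targets are all illustrative stand-ins, not the paper's training procedure:

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian RBF interpolation network.

    After fitting, the learned mapping from normalized intensities
    (E1', E2', E3') to gradient coordinates (f, g) lives entirely in
    the output weight matrix W, mirroring the idea that the trained
    network is represented by its weights.
    """
    def __init__(self, X, Y, sigma=0.5):
        self.centers = X
        self.sigma = sigma
        self.W = np.linalg.solve(self._phi(X), Y)  # exact interpolation

    def _phi(self, X):
        d2 = ((X[:, None, :] - self.centers[None, :, :])**2).sum(-1)
        return np.exp(-d2 / (2.0 * self.sigma**2))

    def predict(self, X):
        return self._phi(np.atleast_2d(X)) @ self.W

# Toy training pairs: unit-norm intensity triples -> made-up (f, g)
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 0],
              [1, 0, 1], [0, 1, 1], [1, 1, 1]], float)
X /= np.linalg.norm(X, axis=1, keepdims=True)
Y = np.stack([X[:, 0] - X[:, 1], X[:, 1] - X[:, 2]], axis=1)

net = RBFNet(X, Y)
print(np.allclose(net.predict(X), Y))        # True: fits training data
print(net.predict([0.6, 0.6, 0.52]).shape)   # generalizes: (1, 2)
```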

17 Experiment [Figure: input image of the test object under light source 1; rotation angles labeled 0, 90, and +-180 degrees]

18 Obtained Data Set as Needle Diagram [Figure: needle diagram of the recovered surface normals]

19 Feature Points onto (f, g) Space [Figure: feature points plotted on the (f, g) plane, axes ranging from -2 to 2]

20 Results [Figures: aspect and slope images of the recovered surface]

21 Another Example of Input Images

22 Results [Figures: aspect and slope images for the second example]

23 Conclusion: Neural network based photometric stereo using self-calibration was proposed. No calibration object is required to obtain the shape of the target object; geometric and photometric constraints are used instead. An empirical implementation has been performed. Detecting and correcting for cast shadows remains as future work.

