Obtaining Shape from Scanning Electron Microscope Using Hopfield Neural Network. Yuji Iwahori, Haruki Kawanaka, Shinji Fukui and Kenji Funahashi.

1/20 Obtaining Shape from Scanning Electron Microscope Using Hopfield Neural Network. Yuji Iwahori¹, Haruki Kawanaka¹, Shinji Fukui² and Kenji Funahashi¹. ¹Nagoya Institute of Technology, Japan; ²Aichi University of Education, Japan

2/20 Introduction. Recovering shape from Scanning Electron Microscope (SEM) images is a recent topic in computer vision. Under orthographic projection, the light source and the viewing point are at the same position. The object stand is rotated by a small angle during observation. Only these conditions can be used to recover the object shape. (Figure: 2D SEM image and the recovered 3D shape)

3/20 Previous Approaches (1)
- Photometric Stereo, and estimation using the temporal color space: use multiple images under different light source directions.
- Linear Shape from Shading, Photometric Motion: the viewing point (camera) and the light source should be widely separated, which is not the case in SEM.

4/20 Previous Approaches (2)
- Shape from Occluding Boundaries: limited to a simple convex closed curved surface.
- Shape from Silhouette: uses multiple images over a 360 degree rotation, and also fails on objects with locally concave shapes.
- Surface Reflectance and Shape from Images: uses a 90 degree rotation to obtain feature points; however, such a large rotation angle is not available in SEM.

5/20 New Proposed Approach. Uses optimization with two images observed through rotation of the object stand.
1. An appropriate initial vector is determined with a Radial Basis Function neural network (RBF-NN) from the two images taken during rotation.
2. Optimization is then performed with a Hopfield-like neural network (HF-NN).

6/20 Characteristics of SEM Image (1)
- Orthographic projection
- Rotation angle
- Reflectance property R(i): i is the incident angle with i < 70°, s ≈ 0.5, and R(i) is normalized to the range [0, 1].
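The formula for R(i) appeared as an image on this slide and is not recoverable from the transcript. As an illustration only, the sketch below assumes a secant-type secondary-electron emission model with the stated exponent s = 0.5, rescaled to [0, 1] over the valid range i < 70°; the model form is an assumption, not the paper's formula.

```python
import math

I_MAX = math.radians(70.0)  # validity limit stated on the slide
S = 0.5                     # parameter s from the slide

def reflectance(i):
    """Illustrative normalized reflectance for incident angle i (radians).

    Assumes R ~ (1/cos i)^s, a common secondary-electron emission model
    (NOT necessarily the slide's formula), rescaled so that R(0) = 0 and
    R(I_MAX) = 1.
    """
    raw = (1.0 / math.cos(i)) ** S
    raw_max = (1.0 / math.cos(I_MAX)) ** S
    return (raw - 1.0) / (raw_max - 1.0)
```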

7/20 Characteristics of SEM Image (2)
z = F(x, y), where F is the height distribution and p, q are the gradient parameters (p = ∂z/∂x, q = ∂z/∂y)
l = (0, 0, 1) : light source direction
n = (−p, −q, 1) / √(p² + q² + 1) : surface normal
cos i = n · l = n_z … (3)
From Eqs. (1), (2) and (3), the reflectance map R(p, q) is obtained.
(Figure: cross section of the reflectance map at q = 0)
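The normal and Eq. (3) above can be transcribed directly into code (the function names are mine):

```python
import math

def surface_normal(p, q):
    """Unit surface normal of z = F(x, y) for gradients p = dz/dx, q = dz/dy."""
    norm = math.sqrt(p * p + q * q + 1.0)
    return (-p / norm, -q / norm, 1.0 / norm)

def cos_incident(p, q):
    """cos i = n . l with l = (0, 0, 1), i.e. just the z component of n (Eq. 3)."""
    return surface_normal(p, q)[2]
```

For a flat, front-facing patch (p = q = 0) the incident angle is 0 and cos i = 1, as expected.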

8/20 Rotation Axis on Object Stand. Under orthographic projection, the gradient of the rotation axis is the same in both images observed during rotation. (Figure: example)

9/20 Estimation of Rotation Axis
1. Assume points A and B move to A′ and B′ during the rotation.
2. Set A and A′ to be the same pixel.
3. The rotation axis is then determined so that it is perpendicular to the line BB′ and passes through the point A.
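The steps above can be sketched in image coordinates as follows (point names follow the slide; the helper function itself is hypothetical):

```python
import math

def rotation_axis(a, b, b_prime):
    """Rotation axis through A, perpendicular to the line BB' (steps 2-3).

    a, b, b_prime: (x, y) image coordinates. Returns a point on the axis
    (A itself) and a unit direction vector perpendicular to BB'.
    """
    dx = b_prime[0] - b[0]
    dy = b_prime[1] - b[1]
    length = math.hypot(dx, dy)
    # Rotating the BB' direction by 90 degrees gives the axis direction.
    return a, (-dy / length, dx / length)
```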

10/20 Shape Recovery from Two Images Using Hopfield Neural Network. The Hopfield Neural Network (HF-NN) is a mutually connected network in which the connections between neurons are symmetric. The HF-NN can be applied to solve the optimization problem of minimizing an energy function. (Figure: three mutually connected neurons m₁, m₂, m₃)

11/20 Energy Function to be Minimized. (p, q, z) are the unknown variables; C₁, C₂, C₃ are regularization parameters and D is the target region of the object.
- E₁: the smoothness constraint.
- E₂: the error between the observed image brightness I(x, y) and the reflectance map R(p, q).
- E₃: the error of the geometric relation between z and (p, q).
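The energy formula itself was an image on the slide. A standard discrete formulation consistent with the verbal description is sketched below; the forward-difference discretization and the exact weighting are assumptions, not the paper's definition.

```python
import numpy as np

def energy(p, q, z, img, R, c1=1.0, c2=1.0, c3=1.0):
    """E = sum over D of C1*E1 + C2*E2 + C3*E3 (one plausible discretization).

    E1: smoothness of (p, q); E2: brightness error (I - R(p, q))^2;
    E3: geometric consistency of the height z with (p, q).
    p, q, z, img are 2-D arrays over the target region D.
    """
    def dx(a):  # forward difference along x, replicated at the border
        return np.diff(a, axis=1, append=a[:, -1:])

    def dy(a):  # forward difference along y, replicated at the border
        return np.diff(a, axis=0, append=a[-1:, :])

    e1 = dx(p) ** 2 + dy(p) ** 2 + dx(q) ** 2 + dy(q) ** 2
    e2 = (img - R(p, q)) ** 2
    e3 = (dx(z) - p) ** 2 + (dy(z) - q) ** 2
    return float(np.sum(c1 * e1 + c2 * e2 + c3 * e3))
```

A flat, front-facing surface whose brightness exactly matches the reflectance map has zero energy under all three terms.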

12/20 Initial Vector for Optimization (1). A Radial Basis Function neural network (RBF-NN) is introduced to obtain an approximation of the gradient (p, q). The same pixel (x, y) is assumed to correspond during the rotation. Integrating the estimated gradient along the x direction yields the height distribution. (Figure: the RBF-NN maps the intensities I₁(x, y), I₂(x, y) to the normal components n_x, n_z.)
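The integration step can be sketched as a row-wise cumulative sum (a minimal sketch; the boundary condition z = 0 at the left edge is an assumption):

```python
import numpy as np

def height_from_gradient(p, dx=1.0):
    """Integrate the NN-estimated gradient p = dz/dx along each image row.

    Each row starts from z = 0 at its left edge; dx is the pixel spacing.
    """
    return np.cumsum(p, axis=1) * dx
```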

13/20 Initial Vector for Optimization (2)
- How to make the dataset for the RBF-NN: a sphere is used to generate I₁ and I₂ through the reflectance map R(p, q), since a sphere contains every combination of surface gradients.
- How to use the learned RBF-NN: the corresponding points of the target object are assumed to be the same pixel during the rotation.
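Generating the sphere dataset can be sketched as follows; the reflectance map R, the sampling scheme and all names are placeholders, and the rotation is assumed to be about the image y axis:

```python
import numpy as np

def sphere_training_data(n, theta, R, seed=0):
    """Training pairs ((I1, I2) -> (p, q)) rendered from a unit sphere.

    Samples visible points on z = sqrt(1 - x^2 - y^2), whose gradients
    cover all orientations, and renders brightness I1 = R(p, q) before
    and I2 after a rotation by `theta` (radians) about the y axis.
    """
    rng = np.random.default_rng(seed)
    xy = rng.uniform(-0.9, 0.9, size=(n, 2))
    x, y = xy[:, 0], xy[:, 1]
    keep = x ** 2 + y ** 2 < 0.81        # stay away from the limb
    x, y = x[keep], y[keep]
    z = np.sqrt(1.0 - x ** 2 - y ** 2)
    p, q = -x / z, -y / z                # sphere surface gradients
    # Rotate the unit normal (-p, -q, 1)/|.| about the y axis by theta.
    norm = np.sqrt(p ** 2 + q ** 2 + 1.0)
    nx, ny, nz = -p / norm, -q / norm, 1.0 / norm
    c, s = np.cos(theta), np.sin(theta)
    nx2, nz2 = nx * c + nz * s, -nx * s + nz * c
    p2, q2 = -nx2 / nz2, -ny / nz2
    inputs = np.stack([R(p, q), R(p2, q2)], axis=1)
    targets = np.stack([p, q], axis=1)
    return inputs, targets
```

With theta = 0 both rendered intensities coincide, which makes a convenient sanity check before fitting the RBF-NN.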

14/20 Updating Equation using HF-NN. The updating equation is applied iteratively to optimize the energy function, i.e. until each partial derivative becomes 0.

15/20 Iteration for Optimization. The optimization is applied to each of the two images repeatedly. The height z′ under the rotation angle is computed, and the gradients are recalculated from the height repeatedly during rotation. C₁, the weight of the smoothness constraint E₁, is gradually reduced. Optimization is terminated when the value of the energy function converges in comparison with that of the previous step.
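The formula for z′ was an image on the slide. Under orthographic projection with a rotation about the y (rotation) axis, the standard rigid transform gives one plausible reconstruction; the sign convention is an assumption.

```python
import math

def rotated_height(x, z, theta):
    """Height of a surface point as seen after rotating the stand by theta.

    Standard rotation about the y axis (an assumption; the slide's exact
    formula is not recoverable):
        x' =  x cos(theta) + z sin(theta)
        z' = -x sin(theta) + z cos(theta)
    """
    return -x * math.sin(theta) + z * math.cos(theta)
```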

16/20 Experiments (synthetic image)
- Rotation angle: 10°; image size: 64 × 64 pixels; rotation axis along the center of the image.
- RBF-NN: 2000 learning data, 15 learning epochs.
(Figures: input images; theoretical, initial and recovered heights, with MSE relative to the maximum height.)

17/20 Experiments (SEM image)
- Rotation angle: 10°; rotation axis set from the known feature points A and B.
(Figures: input images; theoretical height, initial height and recovered height, compared with the relaxation method; MSE relative to the theoretical depth.)

18/20 Experiments (SEM image)
- Rotation angle: 10°; rotation axis set from the known feature points A and B.
(Figures: input images; initial height and recovered height.)

19/20 Conclusion
- A new method is proposed to recover shape from SEM images.
- The HF-NN is introduced to solve the optimization problem.
- The energy function is formulated from two images taken during rotation.
- The initial vector is obtained using the RBF-NN.

20/20 Future Work
- Obtaining more accurate results using more images.
- Treatment of inter-reflection.
Thank you