3D Motion Determination Using µIMU And Visual Tracking
Lam Kin Kwok, Mark
Supervised by Prof. Li
Centre for Micro and Nano Systems, The Chinese University of Hong Kong
14 May 2010

Outline
- Brief summary of previous work
- Detail of the Visual Tracking System (VTS)
  - Perspective Camera Model
  - Procedure of Pose Estimation
- Current Results of VTS
- Conclusion
- Future Plan

Previous Work
- Implemented the Harris corner finding algorithm
  - Automatically finds good features
- Improved the performance of the LK tracking method
  - Reduced the noise generated by inconstant lighting
- Gathered information on high-speed cameras (>60 fps)
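The corner-finding step can be sketched in plain numpy. This is a minimal illustration of the Harris response, not the deck's actual implementation (which presumably used a library routine); the names `harris_response` and `box_sum` are invented here.

```python
import numpy as np

def box_sum(a, r):
    """Sum of each (2r+1)x(2r+1) window of a, zero-padded at the borders."""
    out = np.zeros_like(a)
    p = np.pad(a, r)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out

def harris_response(img, k=0.04, r=1):
    """Harris corner response R = det(M) - k*trace(M)^2 per pixel,
    where M is the windowed structure tensor of the image gradients.
    Large positive R marks corners; edges give negative R."""
    Iy, Ix = np.gradient(img.astype(float))
    Sxx = box_sum(Ix * Ix, r)
    Syy = box_sum(Iy * Iy, r)
    Sxy = box_sum(Ix * Iy, r)
    det = Sxx * Syy - Sxy * Sxy
    tr = Sxx + Syy
    return det - k * tr * tr
```

On a synthetic white square, the response peaks at the square's corners while edges and flat regions score low, which is what makes these points "good features" to track.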

Detail of the Visual Tracking System
1. Select a ROI from the captured image
2. Extract good features (Harris algorithm)
3. Motion tracking (LK tracking method)
4. Pose estimation: position and orientation in camera coordinates
5. Coordinate transformation: final pose of the camera in world coordinates

Perspective Camera Model
[Figure: a square-grid target projected through the optical center C onto the image plane]
- C: optical center
- f: focal length
- l_i: distance between the 3D feature points and the optical center
- P_i: 3D feature points on the square grid
- p_i: corresponding 2D projected image points
- {W}: world coordinate frame; {C}: camera coordinate frame; {I}: image coordinate frame (axes u, v)

Perspective Camera Model
Relationship between an image point and a 3D scene point: under the pinhole model, a scene point with camera coordinates (^cX, ^cY, ^cZ) in {C} projects onto the image plane at x = f·^cX/^cZ (and likewise y = f·^cY/^cZ).
[Figure: side view of the projection through the optical center, focal length f]
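The projection relationship above can be written as a small sketch; the function name and interface are illustrative, not from the deck.

```python
import numpy as np

def project(points_c, f, u0=0.0, v0=0.0):
    """Pinhole projection of 3D points given in camera coordinates {C}:
    u = f*X/Z + u0, v = f*Y/Z + v0 (Z > 0 in front of the camera)."""
    P = np.asarray(points_c, float)
    u = f * P[:, 0] / P[:, 2] + u0
    v = f * P[:, 1] / P[:, 2] + v0
    return np.stack([u, v], axis=1)
```

Note the depth ambiguity this creates: scaling a point by any positive factor leaves its projection unchanged, which is why the pose-estimation steps below must recover the distances d_i explicitly.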

Pose Estimation Procedure
- Step 1: Calibration and measurement — calibrate the camera (obtain the intrinsic parameters) from the target image and the known target dimensions.
- Step 2: Recover the pose with respect to the camera coordinate frame — calculate the distance between the target and the camera.
- Step 3: Recover the transformation matrix between the camera and world coordinate frames.
- Step 4: Transform the result into the world coordinate frame to obtain the final pose.

Pose Estimation (Step 1)
A square pattern with known dimensions is used to calibrate the camera.

Pose Estimation (Step 2)
Image to camera coordinate transformation: an image point p_i = (u_i, v_i) in {I} corresponds to the direction (u_i − u_0, v_i − v_0, f) in the camera frame {C}, where (u_0, v_0) is the image principal point and f is the focal length.
[Figure: projected points p_1…p_4 on the image plane and the camera frame {C}]
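This image-to-camera mapping amounts to back-projecting each pixel to a unit viewing vector; a minimal sketch (the function name is illustrative):

```python
import numpy as np

def pixel_to_ray(u, v, f, u0, v0):
    """Unit viewing vector ^cu in the camera frame {C} for image point (u, v):
    the direction (u - u0, v - v0, f), normalized to unit length."""
    d = np.array([u - u0, v - v0, f], float)
    return d / np.linalg.norm(d)
```

The pixel at the principal point maps to the optical axis (0, 0, 1), and every other pixel maps to a unit vector tilted away from it.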

Pose Estimation (Step 2)
- The areas of the triangles formed by the target points are given (from the target dimensions measured in Step 1).
- The tetrahedron with apex at the optical center C, base triangle on the target, and height h above the target plane has volume V = (1/3)·A·h.
- Each unknown point ^cP_i is represented along its unit viewing vector ^cu_i as ^cP_i = d_i·^cu_i.

Pose Estimation (Step 2)
- The same volumes can also be computed directly from the vectors via the scalar triple product: V = (1/6)·|^cP_i · (^cP_j × ^cP_k)|.
- Equating the two volume expressions allows d_2, d_3, d_4 to be expressed as functions of d_1.
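The two volume formulas on this slide can be checked numerically; the sketch below (names invented here) computes the same tetrahedron volume both ways, with the optical center at the origin.

```python
import numpy as np

def tet_volume_triple(p1, p2, p3):
    """Tetrahedron volume with apex at the optical center (origin):
    V = |p1 . (p2 x p3)| / 6 (scalar triple product)."""
    return abs(np.dot(p1, np.cross(p2, p3))) / 6.0

def tet_volume_area_height(p1, p2, p3):
    """Same volume as V = (1/3)*A*h: area A of base triangle (p1, p2, p3)
    times the height h of the origin above the base plane."""
    n = np.cross(p2 - p1, p3 - p1)          # normal of the base plane
    A = np.linalg.norm(n) / 2.0             # triangle area
    h = abs(np.dot(p1, n)) / np.linalg.norm(n)  # distance origin -> plane
    return A * h / 3.0
```

Since the areas A are known from the target dimensions and h is common to all the tetrahedra, equating the two expressions links the unknown distances d_i together.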

Pose Estimation (Step 2)
- The known line segment s_1k between P_1 and P_k gives a squared distance via the law of cosines: s_1k² = d_1² + d_k² − 2·d_1·d_k·(^cu_1 · ^cu_k).
- Substituting the parametric expressions d_k(d_1) and simplifying yields an equation in d_1 alone.
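The law-of-cosines constraint can be verified against a direct distance computation; a small sketch (function name invented here):

```python
import numpy as np

def segment_length_sq(d1, dk, u1, uk):
    """Squared distance between ^cP_1 = d1*^cu_1 and ^cP_k = dk*^cu_k,
    by the law of cosines: s^2 = d1^2 + dk^2 - 2*d1*dk*(u1 . uk)."""
    return d1 * d1 + dk * dk - 2.0 * d1 * dk * np.dot(u1, uk)
```

Because s_1k is known from the target's dimensions while u_1 · u_k comes from the image, this equation is what ties the unknown distances to the measurements.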

Pose Estimation (Step 2)
Solve for d_1 and substitute it back to obtain the 3D coordinates of the feature points: ^cP_i = d_i·^cu_i.

Pose Estimation (Step 3)
- The transformation matrix ^wT_o (object frame {O} to world frame {W}) is given.
- The transformation matrix ^oT_c, relating the camera frame {C} to the object frame, is obtained from Step 2.
{W}: world coordinate frame; {O}: object coordinate frame; {C}: camera coordinate frame

Pose Estimation (Step 4)
The final pose of the camera in the world frame follows by composing the transformations: ^wT_c = ^wT_o · ^oT_c.
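The composition of Steps 3 and 4 is a product of 4×4 homogeneous transforms; a sketch with placeholder numbers (the rotations and translations here are invented for illustration):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# ^wT_c = ^wT_o @ ^oT_c : chain object->world with camera->object.
Rz90 = np.array([[0.0, -1.0, 0.0],
                 [1.0,  0.0, 0.0],
                 [0.0,  0.0, 1.0]])
wTo = make_T(np.eye(3), [1.0, 2.0, 3.0])  # given object pose in {W}
oTc = make_T(Rz90, [0.0, 0.0, 1.0])       # camera pose from Step 2
wTc = wTo @ oTc                           # final camera pose in {W}
```

The last column of wTc is the camera's position in world coordinates, and its upper-left 3×3 block is the camera's orientation.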

Current Results of VTS
[Figure: experimental setup — webcam connected to a motion-recording computer, observing a feature target, with a ruler for reference]

Conclusion
- Accuracy depends heavily on the image points
  - Increase the image resolution (currently 640 × 480 pixels)
- Use optimization methods to increase accuracy
  - e.g., Gauss-Newton with a line search
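A generic Gauss-Newton iteration (without the line search the slide mentions) can be sketched as follows; the pose-refinement residuals would replace the toy line-fitting ones used here for illustration.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimize ||r(x)||^2 by Gauss-Newton:
    x <- x - (J^T J)^{-1} J^T r at each iteration."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Toy example: fit y = a*x + b (linear, so one step already converges).
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([1.0, 3.0, 5.0])
res = lambda p: p[0] * xs + p[1] - ys
jac = lambda p: np.stack([xs, np.ones_like(xs)], axis=1)
p = gauss_newton(res, jac, [0.0, 0.0])
```

In the VTS setting, x would be the pose parameters and r(x) the reprojection errors of the tracked features; adding a line search would guard each step against overshooting.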

Future Plan
- Further develop the method and test its performance
- Fuse the visual data with the µIMU data
- Develop the optimization method after completing the data fusion

References
[1] Abidi, M. A., Chandra, T., "A new efficient and direct solution for pose estimation using quadrangular targets: algorithm and evaluation," IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 17, No. 5.
[2] Abidi, M. A., Chandra, T., "Pose estimation for camera calibration and landmark tracking," IEEE International Conference on Robotics and Automation.
[3] Forsyth, D., Ponce, J., "Computer Vision: A Modern Approach," Prentice Hall, 2003.

Thanks for your attention