Source: Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on. Authors: Paucher, R.; Turk, M. Adviser: Chia-Nian Shyi.

Presentation transcript:

Source: Computer Vision and Pattern Recognition Workshops (CVPRW), 2010 IEEE Computer Society Conference on. Authors: Paucher, R.; Turk, M. Adviser: Chia-Nian Shyi. Speaker: 趙家寬. Date: 2010/12/28

Introduction  Augmented reality systems fall into two categories: pose computation (from sensors) and object recognition (from vision).  A more flexible approach is to estimate the full 6-DOF pose of the camera.

Introduction  Build a local image database of the environment beforehand.  Match the cell phone image against database images to find point correspondences.

Database building  For each image, store the pose of the camera (rotation and position) and its intrinsic parameters.  Extract Speeded-Up Robust Features (SURF).  Use a stereo camera to capture the database images, so depth is available.  Store the 3D position of each feature.
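As a concrete sketch of what one database record from this slide might hold; field names and types are illustrative, not from the paper:

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Feature:
    descriptor: Tuple[float, ...]        # 64-D SURF descriptor
    xyz: Tuple[float, float, float]      # 3D position from the stereo camera

@dataclass
class DatabaseImage:
    rotation: Tuple[Tuple[float, ...], ...]  # 3x3 camera rotation
    position: Tuple[float, float, float]     # camera center in the world frame
    focal: float                             # intrinsic parameters
    principal_point: Tuple[float, float]
    features: List[Feature] = field(default_factory=list)

# One entry: a camera at the world origin with an identity orientation.
entry = DatabaseImage(
    rotation=((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)),
    position=(0.0, 0.0, 0.0),
    focal=800.0,
    principal_point=(320.0, 240.0),
)
entry.features.append(Feature(descriptor=(0.0,) * 64, xyz=(1.0, 2.0, 5.0)))
```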

Cell phone sensors and pose estimation  The Nokia N97 phone includes a GPS receiver, an accelerometer, a magnetometer, and a rotation sensor.  The accelerometer outputs the second derivative of position: three channels giving the acceleration along the three cell phone axes, which mixes the acceleration due to the user with the gravity acceleration.
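A common way to separate the two components mentioned above, gravity and user acceleration, is a low-pass filter on the accelerometer stream; this is a standard technique offered here as a sketch, not necessarily what the paper does:

```python
def lowpass_gravity(samples, alpha=0.9):
    """Exponential moving average over (ax, ay, az) samples: the slowly
    varying output tracks gravity; subtracting it from each sample leaves
    the user-induced acceleration."""
    g = list(samples[0])
    gravity, user = [], []
    for ax, ay, az in samples:
        g = [alpha * gi + (1.0 - alpha) * si
             for gi, si in zip(g, (ax, ay, az))]
        gravity.append(tuple(g))
        user.append((ax - g[0], ay - g[1], az - g[2]))
    return gravity, user

# A phone lying still and flat: the reading is pure gravity along z.
grav, usr = lowpass_gravity([(0.0, 0.0, 9.81)] * 20)
```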

Cell phone sensors and pose estimation  The rotation sensor gives the tilt of the phone, i.e. two of the three parameters of the cell phone rotation matrix.  The advantage of using this sensor rather than the accelerometer is that its output is not sensitive to user movements.

Cell phone sensors and pose estimation  The last parameter to compute is the amount of rotation around a vertical axis.  The magnetometer (digital compass) outputs this angle, which supplies the one remaining parameter and completes the full orientation of the cell phone.
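The two tilt angles from the rotation sensor and the compass heading together determine a full rotation matrix. A minimal sketch of the composition, assuming a Z-Y-X Euler convention; the actual axes depend on the phone's sensor frames:

```python
import math

def rotation_from_angles(roll, pitch, yaw):
    """Compose R = Rz(yaw) @ Ry(pitch) @ Rx(roll). The convention here is
    illustrative; roll/pitch come from the tilt sensor, yaw from the
    magnetometer."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    Rx = [[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]]
    Ry = [[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]]
    Rz = [[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]]

    def mul(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return mul(Rz, mul(Ry, Rx))

# Flat phone, quarter turn of the compass about the vertical axis.
R = rotation_from_angles(0.0, 0.0, math.pi / 2)
```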

Database search restriction  Two criteria select only the relevant database images.  For a database image to be considered, its center (a 3D point) has to be visible to the cell phone camera.  Because the phone orientation estimated by the sensors is uncertain, the field of view of the cell phone camera has to be extended somewhat for this test.

Database search restriction
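The visibility criterion above can be sketched as an angular test against a field of view enlarged by an uncertainty margin; the function name and margin parameter are illustrative:

```python
import math

def in_extended_fov(point, cam_pos, cam_forward, half_fov, margin):
    """Criterion 1 sketch: is `point` (a database image center) within the
    phone camera's cone of view, widened by `margin` radians to absorb the
    sensor orientation uncertainty? `cam_forward` is a unit vector; all
    coordinates are in the world frame."""
    v = [p - c for p, c in zip(point, cam_pos)]
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return True  # coincident with the camera center: trivially visible
    cosang = sum(vi * fi for vi, fi in zip(v, cam_forward)) / norm
    return math.acos(max(-1.0, min(1.0, cosang))) <= half_fov + margin

# A point straight ahead passes; a point behind the camera does not.
ahead = in_extended_fov((0.0, 0.0, 5.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                        math.radians(30), math.radians(10))
behind = in_extended_fov((0.0, 0.0, -5.0), (0.0, 0.0, 0.0), (0.0, 0.0, 1.0),
                         math.radians(30), math.radians(10))
```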

 The second criterion prevents bad image-matching configurations.  Matching is done with features that are rotation-invariant only to some extent, so it fails if there is too much rotation between the two images.
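The second criterion can be sketched by measuring the angle of the relative rotation between the two camera poses and rejecting pairs above a threshold; the threshold value is illustrative:

```python
import math

def rotation_angle(R_rel):
    """Angle of a 3x3 relative rotation: acos((trace - 1) / 2)."""
    t = R_rel[0][0] + R_rel[1][1] + R_rel[2][2]
    return math.acos(max(-1.0, min(1.0, (t - 1.0) / 2.0)))

def compatible_views(R_rel, max_angle):
    """Criterion 2 sketch: reject the pair when the relative rotation
    exceeds what the partially rotation-invariant features tolerate."""
    return rotation_angle(R_rel) <= max_angle

# A 90-degree relative rotation fails a 60-degree tolerance.
Rz90 = [[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
angle = rotation_angle(Rz90)
ok = compatible_views(Rz90, math.radians(60))
```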

Outlier removal process  The most common way to remove outliers between two views is Random Sample Consensus (RANSAC).  Algorithms that use fewer than 8 points are too time-consuming, even though they are more precise.  The 8-point algorithm uses a linear criterion that gives very poor results in the presence of noise.

Outlier removal process  First remove the most obvious wrong matches with Least Median of Squares.  Then use the 8-point algorithm combined with RANSAC to remove the remaining outliers.
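The RANSAC scheme these slides rely on can be illustrated with a minimal consensus loop. The paper fits a fundamental matrix with the 8-point algorithm; a 2D line model is substituted here only to keep the sketch self-contained:

```python
import random

def ransac_line(points, iters=200, thresh=0.1, seed=0):
    """Generic RANSAC loop: repeatedly fit a model to a minimal sample
    (two points for a line, eight matches for a fundamental matrix) and
    keep the largest consensus set."""
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2 and y1 == y2:
            continue
        # Implicit line a*x + b*y + c = 0 through the two sampled points.
        a, b = y2 - y1, x1 - x2
        c = -(a * x1 + b * y1)
        norm = (a * a + b * b) ** 0.5
        inliers = [p for p in points
                   if abs(a * p[0] + b * p[1] + c) / norm <= thresh]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    return best_inliers

# Ten collinear points plus two gross outliers.
pts = [(float(x), 2.0 * x + 1.0) for x in range(10)] + [(3.0, 40.0), (7.0, -5.0)]
inliers = ransac_line(pts)
```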

Pose computation  A set of presumably correct matches between the cell phone image and a database image makes it possible to compute the relative pose between the two views.  The reprojection error of the database 3D points in the cell phone image is the most meaningful criterion to minimize.
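The criterion named above, reprojection error, can be written down directly for a pinhole camera without distortion; this is a sketch of the quantity being minimized, not the paper's implementation:

```python
def reprojection_error(points3d, points2d, R, t, f, cx, cy):
    """Mean squared pixel error of database 3D points reprojected into the
    cell phone image under pose (R, t), with focal length f and principal
    point (cx, cy)."""
    err = 0.0
    for (X, Y, Z), (u, v) in zip(points3d, points2d):
        # Transform the world point into the camera frame, then project.
        xc = R[0][0] * X + R[0][1] * Y + R[0][2] * Z + t[0]
        yc = R[1][0] * X + R[1][1] * Y + R[1][2] * Z + t[1]
        zc = R[2][0] * X + R[2][1] * Y + R[2][2] * Z + t[2]
        du = f * xc / zc + cx - u
        dv = f * yc / zc + cy - v
        err += du * du + dv * dv
    return err / len(points3d)

# Perfect pose: the observed pixels match the projections exactly.
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
pts3d = [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0)]
pts2d = [(320.0, 240.0), (480.0, 240.0)]  # f=800 gives a 160 px offset
e = reprojection_error(pts3d, pts2d, I3, (0.0, 0.0, 0.0), 800.0, 320.0, 240.0)
```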

Pose computation  Initialization  Final pose estimation

Results and performance  All the 3D virtual objects used for the AR experiments are planar rectangles.

Results and performance  Results are better when the rotation between the two images is lower (close to 0 degrees).

Conclusion  Since there is no location estimate from the sensors, the user has to start the application at a known location; otherwise the application has to search all the database images.  Porting the code to an architecture with native support for floating-point numbers would improve the frame rate drastically.