Presentation transcript:

Presented by: Adi Vainiger & Eyal Yaacoby, under the supervision of Netanel Ratner
Laboratory of Computer Graphics & Multimedia

Project goal: Design an algorithm that recognizes possible collision trajectories of vehicles, using video taken from a camera directed toward the rear of the direction of driving.

Introduction: Driving is a task that requires attention distribution. One of its many challenges is identifying possible collision trajectories of vehicles approaching from behind. Thus, there is a need for a system that automatically recognizes vehicles that are about to collide with the user and warns him/her. Our solution is an algorithm that takes the video feed from a single simple camera, recognizes moving vehicles in the video, and predicts whether they are about to collide with the user. Part A of this project focuses on the algorithm itself, without taking real-time constraints into account.

System outline: The system takes video from a camera mounted at an angle to the direction of movement. For each time window (~2.5 seconds) in the video, the system looks at pairs of frames one second apart (Frame i-N through Frame i). Each such pair is processed by stage (1) Feature Detection and Matching and stage (2) 3D Reconstruction; once enough reconstructions have accumulated, the algorithm performs stage (3) Recognition and Differentiation Between Static and Moving Objects and stage (4) Collision Detection, which raises the alert.

(1) Feature Detection & Matching: We find interest points and their descriptors in each of the two frames, then match them between the frames to produce a set of matches. This stage was implemented using the ASIFT algorithm. Though slower (~50x) than SIFT, ASIFT was chosen because it is more accurate and finds more features.

(2) 3D Reconstruction: The 3D world is reconstructed from the two frames, based on the pinhole camera model:
1. Calculate the fundamental matrix F for each pair of frames.
2. Estimate the essential matrix using the calibration information of the camera (for intrinsic matrix K, E = K^T F K), and extract the transformation between the frames (rotation and translation) from the essential matrix.
3. Triangulate the matched points (first-order triangulation).
The data flow of this stage is therefore: Matches → Fundamental matrix → Estimated transformation between frames → Triangulation → 3D reconstructed points. A hedged code sketch of stages (1) and (2) is given below.
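The following is a minimal Python/OpenCV sketch of stages (1) and (2), for illustration only. Assumptions: plain SIFT stands in for ASIFT (ASIFT is not part of the core OpenCV API), cv2.triangulatePoints performs linear rather than first-order triangulation, and the intrinsic matrix K is a placeholder that would come from camera calibration; none of the names here are from the project's code.

```python
import cv2
import numpy as np

def reconstruct_pair(img1, img2, K):
    """Two-frame reconstruction sketch: match features, estimate F, derive E
    from the calibration, recover the relative pose, and triangulate."""
    # (1) Feature detection & matching. SIFT stands in for ASIFT here.
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # ratio test
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # (2a) Fundamental matrix for the pair, with RANSAC outlier rejection.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    pts1, pts2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

    # (2b) Essential matrix from the calibration (same camera in both frames),
    # then the transformation (R, t) between the frames.
    E = K.T @ F @ K
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)

    # (2c) Triangulate the inlier matches into 3D points.
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])
    pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
    return (pts4d[:3] / pts4d[3]).T, R, t  # Nx3 points and relative pose
```

Running this over every frame pair in a time window yields the N-1 reconstructions that stages (3) and (4) operate on.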
(3) Recognition and Differentiation Between Static and Moving Objects: Over a window of N frames (N-1 frame pairs), the fundamental matrix is estimated from the static feature points only, and the 3D world is reconstructed based on the static points alone (the slides' Hebrew caption: "reconstruction of the three-dimensional world based on the static points only"); the dynamic feature points are then reconstructed on top of these estimates. The N-1 sets of 3D reconstructed points are matched per tracked point, and a variance is calculated for each point. Differentiation of moving points from static points is based on the normalized variance of the reconstructed matches of each point: high variance indicates a dynamic point reconstruction, low variance a static one. We normalize the variance by the viewing angle and the distance from the camera, as the reconstruction ambiguity (low vs. high) correlates well with them.

(4) Collision Detection: On a collision course, the lines between the camera centers and the object are almost parallel; thus the object's reconstructions will be very distant from one another, as shown in the results. We therefore estimate whether the dynamic points are moving towards the camera using their scattering throughout the reconstructions (a hedged sketch of stages (3) and (4) appears at the end of this transcript).

Reconstruction results (shown as figures in the slides): simulation scenarios covering a collision direction and a same-direction case.

Collision detection results: Scenario 4 is a collision scenario and the rest are non-collision scenarios. Ideal results for the synthetic environment: 2% false negatives, 12% false positives.

Real movie results (shown as figures in the slides): an example 3D reconstruction of the world, and static & moving object differentiation (red points – high variance → dynamic points; green points – low variance → static points).

Conclusions: On the synthetic environment, the system produces good results. When turning to real movies, we had several issues: matching features on dynamic objects did not work (due to rolling shutter), and the classification did not work well. However, under certain conditions we still get valuable results. Further research should allow much better results; we believe that a tracking algorithm can solve most of the issues that we saw.

Our thanks to Hovav Gazit and the CGM Lab for the support.
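To close, here is the hedged sketch of stages (3) and (4) referenced above. The normalization by distance and angle is an assumed form (the slides only state that the variance is normalized by both), and the thresholds are illustrative, not the project's values.

```python
import numpy as np

def classify_points(recons, depths, angles, var_thresh=1.0):
    """Stage (3) sketch: recons has shape (N-1, P, 3) -- each tracked point's
    3D position in each pairwise reconstruction, already matched per point.
    Returns True for points classified as dynamic."""
    var = recons.var(axis=0).sum(axis=1)  # per-point variance over the reconstructions
    # Assumed normalization: ambiguity grows with distance and shallow viewing angle.
    normalized = var / (depths**2 * np.sin(angles)**2 + 1e-9)
    return normalized > var_thresh  # high normalized variance -> dynamic point

def collision_alert(dyn_recons, scatter_thresh=5.0):
    """Stage (4) sketch: on a collision course the rays to the object are
    nearly parallel, so its reconstructions scatter widely; alert on spread."""
    spread = np.linalg.norm(dyn_recons - dyn_recons.mean(axis=0), axis=-1)
    return bool(spread.mean() > scatter_thresh)
```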