Miroslav Hlaváč, Martin Kozák, 27. 07. 2011: Fish position determination in 3D space by stereo vision



Project goals
– Design a low-budget system to determine the 3D position of fish in a water environment in real time
– Explore the capabilities of a two-camera system
– Explore the capabilities of the Kinect depth sensor
– Test both systems under different conditions
– Compare the results from the cameras and the Kinect
– The designed system will be used to track differences in fish motion

Used equipment and software
– Aquarium (60×30×30 cm) – a similar one is planned to be used in the real application of this project
– Two Microsoft LifeCam Studio webcams
– Calibration object (chessboard)
– Kinect for Xbox 360
– Rubber testing object
– Matlab

Two-camera system
– The system of two cameras emulates human eyes
– The cameras must be calibrated to determine the system parameters
– These parameters are then used to compute 3D coordinates from two different views of the scene (epipolar geometry)

Epipolar geometry
– The position of a point can be determined from one image, but to recover its depth we need information from the second camera
– Select a point in the left image and find the corresponding point on the epipolar line in the right image
– Compute the 3D coordinates from this pair of points by triangulation (see the sketch below)
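
As an illustration of the triangulation step, a minimal sketch in Python with OpenCV (the project itself was implemented in Matlab, so this is not the authors' code; the camera matrices, relative pose and pixel coordinates below are hypothetical placeholders that would in practice come from the calibration described on the next slide):

import cv2
import numpy as np

# Hypothetical intrinsics and relative pose of the two webcams;
# the real values come from the stereo calibration step.
K1 = np.array([[800.0,   0.0, 320.0],
               [  0.0, 800.0, 240.0],
               [  0.0,   0.0,   1.0]])
K2 = K1.copy()
R = np.eye(3)                          # rotation of the right camera w.r.t. the left
T = np.array([[100.0], [0.0], [0.0]])  # translation between the cameras (e.g. in mm)

# Projection matrices P = K [R | t]; the left camera is the reference frame
P1 = K1 @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K2 @ np.hstack([R, T])

# One pair of corresponding points (pixel coordinates) in the left and right image
pt_left = np.array([[312.0], [245.0]])
pt_right = np.array([[287.0], [251.0]])

# Triangulation returns a homogeneous 4x1 point; divide by the last coordinate
X_h = cv2.triangulatePoints(P1, P2, pt_left, pt_right)
X = (X_h[:3] / X_h[3]).ravel()
print('3D position:', X)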

Camera calibration
– Two sets of parameters are estimated for the cameras (a sketch of the estimation follows below)
– Extrinsic: rotation and translation between the cameras
– Intrinsic: focal length, skew and pixel distortion for each camera
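
For illustration, a compressed sketch of this calibration step in Python with OpenCV (the authors used Matlab and its calibration tools; the image file names, the 9×6 chessboard pattern and the 25 mm square size are assumptions):

import glob
import cv2
import numpy as np

pattern = (9, 6)                                   # inner chessboard corners (assumed)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # 25 mm squares (assumed)

obj_pts, img_pts_l, img_pts_r = [], [], []
for fl, fr in zip(sorted(glob.glob('left_*.png')), sorted(glob.glob('right_*.png'))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    ok_l, corners_l = cv2.findChessboardCorners(gl, pattern)
    ok_r, corners_r = cv2.findChessboardCorners(gr, pattern)
    if ok_l and ok_r:                               # keep only views where both cameras see the board
        obj_pts.append(objp)
        img_pts_l.append(corners_l)
        img_pts_r.append(corners_r)

size = gl.shape[::-1]                               # image size as (width, height)
# Intrinsic parameters of each camera (focal length, principal point, distortion)
_, K1, d1, _, _ = cv2.calibrateCamera(obj_pts, img_pts_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(obj_pts, img_pts_r, size, None, None)
# Extrinsic parameters: rotation R and translation T between the two cameras
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, img_pts_l, img_pts_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)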

Kinect
– Gaming device for the Xbox 360
– Projects an IR light pattern onto the scene through a special grid
– Computes depth information from the distortion of the projected pattern
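
The slide does not state the relation, but the underlying principle is the same triangulation as for a stereo pair: for a baseline b between the IR projector and the IR camera, a focal length f (in pixels) and an observed shift (disparity) d of a pattern element, the depth is roughly

z ≈ f · b / d

so the further a pattern element is shifted, the closer the surface it falls on.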

Camera results 1
– Manual selection of corresponding points
– The white point on the rubber testing object is selected manually and its 3D trajectory is computed
– The accuracy of the 3D coordinates is ±0.5 mm

Camera results 2
– We developed an online tracking system running at 7 fps
– Automatic selection of corresponding points
– Image thresholding
– Binary image opening to eliminate small distortions
– Computing the mean position of the white pixels gives the corresponding point in both images (see the sketch below)
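
A minimal sketch of this detection step in Python with OpenCV (again not the authors' Matlab implementation; the threshold value and the 5×5 opening kernel are assumptions):

import cv2
import numpy as np

def marker_centroid(gray_frame, thresh=200):
    """Mean position of the white pixels after thresholding and binary opening."""
    _, bw = cv2.threshold(gray_frame, thresh, 255, cv2.THRESH_BINARY)
    # Binary opening removes small bright speckles (reflections, noise)
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(bw, binaryImage=True)
    if m['m00'] == 0:
        return None                     # marker not visible in this frame
    return (m['m10'] / m['m00'], m['m01'] / m['m00'])

# The centroid found in the left frame and the one found in the right frame
# form the pair of corresponding points passed to the triangulation step above.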

Kinect accuracy
– Dependence of the Kinect-measured distance on the real distance for different water depths
– The Kinect accuracy along the x-axis in water is independent of depth
– The x-axis accuracy is ±3.5 pixels
[Plots: measured distance [cm] vs. real distance [cm]; object size [pixel] vs. shift from the center of view [cm]]

Kinect results
– We developed an online tracking system running at 30 fps
– The maximum measurable water depth in clear water is 40 cm
– The maximum measurable water depth in dirty water is 20 cm
– The depth of the fish is obtained by depth thresholding (see the sketch below)
– The minimum measurable distance from the sensor is 80 cm
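
A sketch of the depth-thresholding step (assuming the Kinect depth map is already available as a NumPy array in millimetres, for example through libfreenect; the band limits below are assumptions):

import numpy as np

def fish_position(depth_mm, near=800, far=1200):
    """Segment the pixels whose depth falls inside the expected band and
    return their mean image position together with their mean depth."""
    mask = (depth_mm > near) & (depth_mm < far)   # depth thresholding
    if not mask.any():
        return None                               # no fish found in this frame
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean(), depth_mm[mask].mean()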

Kinect vs. cameras
Kinect:
(+) No need for calibration
(+) Depth map is a direct output
(+) No dependence on color or outside light
(-) Limited maximum water depth
(-) IR-reflecting materials cause errors in the depth map
(-) Lower accuracy in water
(-) Minimum distance of 80 cm
Cameras:
(+) Precision
(+) Environment independence
(-) Image segmentation needed
(-) Localization of corresponding points
(-) Calibration for each new system position
(-) Requires more processing power

Conclusion
– Both systems are usable for online 3D determination of fish position in water
– We would recommend the Kinect in environments where accuracy is not the main concern, the water is shallow and clean, and more mobility is needed
– The cameras offer higher accuracy and environment independence, but they require more processing power (detection of corresponding points) and an initial calibration

Acknowledgement
We would like to thank Ing. Petr Císař, Ph.D. for leading us through this project and for his advice.