InerVis Mobile Robotics Laboratory, Institute of Systems and Robotics, ISR – Coimbra. Contact person: Jorge Lobo.

Presentation transcript:

Human inertial sensor: the vestibular system. Within the inner ear, the vestibular system measures tilt and angular acceleration.

Integration of Vision and Inertial Sensing, PhD 2003–2007.

Selected publications:
- Jorge Lobo and Jorge Dias, "Relative Pose Calibration Between Visual and Inertial Sensors", International Journal of Robotics Research, Special InerVis Issue, in press.
- Peter Corke, Jorge Lobo and Jorge Dias, "An Introduction to Inertial and Visual Sensing", International Journal of Robotics Research, Special InerVis Issue, in press.
- Luiz G. B. Mirisola, Jorge Lobo and Jorge Dias, "Stereo Vision 3D Map Registration for Airships using Vision-Inertial Sensing", 12th IASTED Int. Conf. on Robotics and Applications, Honolulu, USA, August.
- Jorge Lobo, João Filipe Ferreira and Jorge Dias, "Bioinspired Visuo-vestibular Artificial Perception System for Independent Motion Segmentation", Second International Cognitive Vision Workshop, ECCV, 9th European Conference on Computer Vision, Graz, Austria, May.
- Jorge Lobo and Jorge Dias, "Relative Pose Calibration Between Visual and Inertial Sensors", ICRA 2005 Workshop on Integration of Vision and Inertial Sensors (2nd InerVis), Barcelona, Spain, April 18.
- Jorge Lobo and Jorge Dias, "Inertial Sensed Ego-motion for 3D Vision", Journal of Robotic Systems, vol. 21, issue 1, pp. 3–12, January.
- Jorge Lobo and Jorge Dias, "Vision and Inertial Sensor Cooperation, Using Gravity as a Vertical Reference", IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 25(12), December.
- Jorge Lobo, Carlos Queiroz and Jorge Dias, "World Feature Detection and Mapping using Stereovision and Inertial Sensors", Robotics and Autonomous Systems, vol. 44, issue 1, pp. 69–81, July.

Integrating Inertial Sensors with Artificial Vision. Key contributions:
- a common framework for inertial-vision sensor integration;
- calibration methods for integrated inertial and vision systems;
- vertical feature segmentation and 3D mapping;
- ground plane segmentation;
- 3D depth map registration;
- independent motion segmentation.
MEMS inertial sensors:
- Analog Devices ADXL202 dual-axis ±2 g accelerometer.
- Analog Devices ADXRS150 angular rate sensor (gyroscope). (October 2002)
- Xsens MTi IMU.

Data from inertial sensors and camera images:
Inertial sensor data goes through INS calculations; camera images go through image processing, with features matched across images. Sensor data fusion over time combines both streams into a dynamic 3D(t) reconstructed model of the real world.

Sensor calibration:
- Both sensors are used to measure the vertical direction.
- N static poses observing a vertical target: full camera calibration, and the IMU↔CAM rotation estimated; with N observations at different camera positions the unknown rotation is determined.
- 2N static poses with N rotations about the IMU: the IMU↔CAM translation is estimated.
- Swinging pendulum sequence.

Independent motion segmentation by optical flow consistency, from the images, the camera motion and the depth map:
1) Compute image optical flow (Lucas–Kanade).
2) Estimate the optical flow expected from the 3D data and the reconstructed camera motion, assuming a static scene.
3) Subtract the two and threshold the residual to segment independent motion.

Registering stereo depth maps, background subtraction (segmented motion voxels = raw voxels - background voxels):
1) Quantise the registered point cloud to voxel space and accumulate occupancy votes over all frames.
2) Threshold to obtain the background voxels (apply thinning and growing transformations for noise filtering).
3) Intersect the current frame's voxels with the complement of the background voxels to obtain the voxels from moving objects.
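The IMU↔CAM rotation step, where both sensors observe the vertical direction at N static poses, can be posed as an absolute-orientation problem on paired unit vectors. The sketch below uses the generic Kabsch/SVD solution, not the presentation's own formulation; the function and variable names are hypothetical.

```python
import numpy as np

def estimate_imu_cam_rotation(v_cam, v_imu):
    """Estimate the rotation R with v_cam ≈ R @ v_imu from N paired
    unit vertical-direction observations (both arrays are N x 3),
    using the Kabsch/SVD solution to the absolute-orientation problem."""
    # Normalise to unit vectors in case the inputs are raw measurements.
    v_cam = v_cam / np.linalg.norm(v_cam, axis=1, keepdims=True)
    v_imu = v_imu / np.linalg.norm(v_imu, axis=1, keepdims=True)
    H = v_imu.T @ v_cam                       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered matrix.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    return Vt.T @ D @ U.T
```

A single pose constrains the rotation only up to a turn about the vertical, which is why the slide stresses N observations at different camera positions.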
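The optical-flow consistency steps can be illustrated with a small sketch: predict the flow a static scene would produce from the depth map and the camera motion, then subtract it from the measured flow and threshold the residual. This is a simplified pinhole-camera illustration under assumed conventions, not the presentation's implementation; all names are hypothetical.

```python
import numpy as np

def predict_static_flow(depth, K, R, t):
    """Flow each pixel would have if the scene were static: back-project
    every pixel with its depth, apply the camera motion (R, t), re-project."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T.astype(float)
    X = np.linalg.inv(K) @ (pix * depth.reshape(1, -1))  # 3D points, camera frame
    Xc = R @ X + t.reshape(3, 1)                         # after camera motion
    proj = K @ Xc
    proj = proj[:2] / proj[2]                            # perspective divide
    return (proj - pix[:2]).T.reshape(H, W, 2)

def segment_independent_motion(flow_measured, flow_predicted, thresh=1.0):
    """Subtract the predicted static-scene flow from the measured optical
    flow and threshold the residual magnitude to flag moving pixels."""
    residual = flow_measured - flow_predicted
    return np.linalg.norm(residual, axis=-1) > thresh
```

In practice the measured flow would come from a Lucas–Kanade tracker (e.g. OpenCV's `calcOpticalFlowPyrLK`) and the depth map from the stereo pair.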
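The voxel-space background subtraction can likewise be sketched in a few lines: quantise each registered point cloud into an occupancy grid, accumulate votes over the frames, threshold to get the background, and intersect the current frame with its complement. The thinning/growing noise filter mentioned on the slide is omitted here, and all names are hypothetical.

```python
import numpy as np

def voxelize(points, voxel_size, origin, dims):
    """Quantise an N x 3 registered point cloud into a boolean occupancy grid."""
    idx = np.floor((points - origin) / voxel_size).astype(int)
    keep = np.all((idx >= 0) & (idx < dims), axis=1)  # drop points outside grid
    grid = np.zeros(dims, dtype=bool)
    grid[tuple(idx[keep].T)] = True
    return grid

def segment_moving_voxels(frames, current, min_votes):
    """Accumulate occupancy votes over all frames, threshold to obtain the
    background, and keep the current-frame voxels not in the background."""
    votes = np.sum(frames, axis=0)       # per-voxel occupancy count
    background = votes >= min_votes
    return current & ~background         # voxels from moving objects
```

Voxels occupied in most frames are voted into the background, so only objects that moved during the sequence survive the final intersection.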