Improving and Filtering Laser Data for Extrinsic Laser Range Finder/Camera Calibration Sukhum Sattaratnamai Advisor: Dr. Nattee Niparnan

Outline Introduction (LRF-Camera System, Applications) Related work (LRF-Camera Calibration Methods) Our Problem (Challenge, Proposed method) Scope & Work plan

Nice Point Cloud

Point Cloud Data Hard to classify the objects without color information

Color Information Gives rich information about the environment

Laser Range Finder Gives depth data along its scan plane and can be used to generate a 3D point cloud
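As a minimal sketch of that conversion (the scan geometry below is hypothetical, not the sensor used in this work), the polar returns (range, bearing) of one scan can be turned into Cartesian points in the LRF frame:

```python
import numpy as np

# Hypothetical scan: 181 beams over a 180-degree field of view,
# all returning 2 m (placeholder data, not real sensor output).
angles = np.deg2rad(np.linspace(-90.0, 90.0, 181))
ranges = np.full_like(angles, 2.0)

# Polar -> Cartesian in the scan plane (z = 0 in the LRF frame).
points_lrf = np.stack([ranges * np.cos(angles),
                       ranges * np.sin(angles),
                       np.zeros_like(ranges)], axis=1)
print(points_lrf.shape)  # (181, 3)
```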

Camera Camera Model

LRF-Camera System

LRF-Camera System

LRF-Camera Calibration Problem Definition [Ganhua Li, 2007] Find the transformation [R|t] of the camera w.r.t. the LRF
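As a rough illustration of what the unknown [R|t] does (the intrinsic matrix K and all numbers below are hypothetical, for illustration only): a laser point is mapped into the camera frame and then projected with the pinhole model.

```python
import numpy as np

def project_lrf_point(p_lrf, R, t, K):
    """Map a 3D point from the LRF frame into the camera image:
    p_cam = R @ p_lrf + t (the extrinsics the calibration solves for),
    followed by pinhole projection with the intrinsic matrix K."""
    p_cam = R @ p_lrf + t
    uvw = K @ p_cam
    return uvw[:2] / uvw[2]  # pixel coordinates (u, v)

# Hypothetical intrinsics and extrinsics, for illustration only.
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                      # no rotation between the frames (assumed)
t = np.array([0.0, -0.10, 0.0])    # camera 10 cm above the LRF (assumed)
print(project_lrf_point(np.array([0.5, 0.0, 2.0]), R, t, K))
```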

Applications Transportation Surveillance Tourism Robotics

Precision? “Stanley: The Robot that Won the DARPA Grand Challenge”

Precision? Accident

Objective A laser data post-processing method that lets the calibration method give the most accurate result

Related work Projection Error (2D), Point-to-Plane Distance (3D)
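A hedged sketch of the two error measures these works minimize (the line, plane, and point values below are made up for illustration): the 2D criterion measures how far a projected laser point falls from an image line on the calibration target, and the 3D criterion measures how far a laser point, transformed into the camera frame, lies from the estimated calibration plane.

```python
import numpy as np

def point_to_line_error_2d(uv, line):
    """2D projection error: distance from a projected laser point (u, v)
    to an image line (a, b, c) satisfying a*u + b*v + c = 0."""
    a, b, c = line
    return abs(a * uv[0] + b * uv[1] + c) / np.hypot(a, b)

def point_to_plane_error_3d(p_cam, n, d):
    """3D error: distance from a laser point (already in the camera frame)
    to the calibration plane n . x = d, where n is a unit normal."""
    return abs(np.dot(n, p_cam) - d)

# Made-up measurements for illustration.
print(point_to_line_error_2d(np.array([330.0, 245.0]), (0.0, 1.0, -240.0)))  # 5.0 px
print(point_to_plane_error_3d(np.array([0.1, 0.0, 2.05]),
                              np.array([0.0, 0.0, 1.0]), 2.0))               # 0.05 m
```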

Related work (2D) Wasielewski, S.; Strauss, O., "Calibration of a multi-sensor system laser rangefinder/camera," Intelligent Vehicles '95 Symposium, 1995

Related work (2D) Mei, C.; Rives, P., "Calibration between a central catadioptric camera and a laser range finder for robotic applications," ICRA 2006

Related work (2D) Ganhua Li; Yunhui Liu; Li Dong; Xuanping Cai; Dongxiang Zhou, "An algorithm for extrinsic parameters calibration of a camera and a laser range finder using line features," IROS 2007

Related work (3D) Qilong Zhang; Pless, R., "Extrinsic calibration of a camera and laser range finder (improves camera calibration)," IROS 2004

Related work (3D) Dupont, R.; Keriven, R.; Fuchs, P., "An improved calibration technique for coupled single-row telemeter and CCD camera," 3DIM 2005

Comparison 2004 vs 2007

Our Problem Propose an autonomous data improving and filtering method which leads to a more accurate calibration result

LRF-Camera System Laser Range Finder Camera

Challenge Sensor Model [Kneip, L.; 2009] The laser range finder samples the environment discretely, and laser data are noisy: mixed pixels

Challenge Laser beams are invisible: point-line constraints. No ground truth is available: an autonomous process must improve and filter the data

Proposed method Data improvement: reduce angular error

Proposed method Data filtering: remove outliers. In the case of a mixed pixel, a neighboring point may be selected instead; in the case of a moving calibration object, the data pairs are removed
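A minimal sketch of one common mixed-pixel heuristic, assuming a hypothetical jump threshold; it illustrates the idea of flagging a return that sits between two large range discontinuities, and is not the exact filtering rule proposed in this work.

```python
import numpy as np

def flag_mixed_pixels(ranges, jump=0.3):
    """Flag returns that sit between two large range discontinuities,
    a common symptom of mixed pixels. The threshold `jump` (metres)
    is an illustrative value, not the tuned parameter of this work."""
    r = np.asarray(ranges, dtype=float)
    flags = np.zeros(len(r), dtype=bool)
    for i in range(1, len(r) - 1):
        if abs(r[i] - r[i - 1]) > jump and abs(r[i] - r[i + 1]) > jump:
            flags[i] = True
    return flags

scan = [2.0, 2.0, 2.0, 3.5, 5.0, 5.0, 5.0]  # one spurious return at the edge
print(flag_mixed_pixels(scan))              # only the 3.5 m return is flagged
```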

Scope of the research Propose an autonomous laser data improving and filtering method for extrinsic LRF/camera calibration. The laser range finder and camera can be placed at arbitrary positions as long as they share a common detection area. The environment is suitable for the laser range finder and the camera, so that both can detect the calibration object

Work Plan Study work in the related fields Develop the data improvement method Develop the data filtering method Test the proposed method Prepare for and engage in the thesis defense

Thank you

Bundle adjustment Conceived in the field of photogrammetry during the 1950s and increasingly used by computer vision researchers in recent years. Mature bundle adjustment algorithms are comparatively efficient even on very large problems. Bundle adjustment boils down to minimizing the re-projection error between the observed and predicted image point locations. Visual reconstruction attempts to recover a model of a 3D scene from multiple images and also recovers the poses of the cameras that took the images
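As a hedged sketch of the quantity being minimized (the camera intrinsics, points, and observations below are hypothetical), the re-projection residual for one camera stacks the differences between observed pixel locations and the projections of the current 3D point estimates; a nonlinear least-squares solver then adjusts the poses and points.

```python
import numpy as np

def reprojection_residuals(points_3d, observations, R, t, K):
    """Residuals minimized by bundle adjustment for one camera: the
    differences between observed pixel locations and the projections
    of the current 3D point estimates (illustrative sketch only)."""
    residuals = []
    for X, uv_obs in zip(points_3d, observations):
        x = K @ (R @ X + t)
        residuals.append(uv_obs - x[:2] / x[2])
    return np.concatenate(residuals)  # feed to a nonlinear least-squares solver

# Hypothetical camera and observations, for illustration only.
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
pts = [np.array([0.0, 0.0, 4.0]), np.array([0.5, 0.0, 4.0])]
obs = [np.array([320.0, 240.0]), np.array([382.0, 240.0])]
print(reprojection_residuals(pts, obs, np.eye(3), np.zeros(3), K))
```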