Presentation: Shashank Gundu

Outline: Introduction, Related work, Hardware Platform, Experiments, Conclusion.

The camera-based navigation of a quadcopter presented here builds on SLAM. SLAM (simultaneous localization and mapping) is the computational problem of constructing or updating a map of an unknown environment while simultaneously keeping track of the robot's location within it. Much past research on micro aerial vehicles (MAVs), for example on collaborative construction tasks, relies on external motion-capture systems and therefore does not carry over to flying in unknown, GPS-denied environments. The key challenges are to localize the robot purely from its own sensor data and to navigate it robustly even under potential sensor loss. This requires a solution to the SLAM problem as well as robust state estimation and control methods.

For solving the SLAM problem on MAVs, different types of sensors have been used, such as laser range scanners, monocular cameras, stereo cameras, and RGB-D sensors. Monocular cameras offer two major advantages over the other modalities: the amount of information they acquire is very large relative to their low weight, power consumption, size, and cost, and, in contrast to depth-measuring devices, the range of a monocular camera is nearly unlimited, allowing a monocular SLAM system to operate both in small, confined spaces and in large, open environments. The contribution of this paper is two-fold: first, a maximum-likelihood estimator is derived that determines the map scale in closed form from metric distance measurements; second, a robust technique is provided to deal with the large delays in the controlled system. The platform is a Parrot AR.Drone, which is available for about $300, weighs only 420 g, and comes with a protective hull.

Work on autonomous flight with quadrocopters can be categorized into different research areas. One part of the community focuses on accurate quadrocopter control, and a number of impressive results rely on advanced external tracking systems. This work instead focuses on autonomous flight without prior knowledge of the environment and without GPS, using only onboard sensors. Previous results of this kind were obtained with a lightweight laser scanner, a Kinect, or a stereo rig mounted on the quadrocopter as the primary sensor. Here the focus is on a monocular camera for pose estimation. Stabilizing controllers based on optical flow have been integrated into commercially available hardware to eliminate drift, and various monocular SLAM methods have been investigated on quadrocopters, with both off-board and on-board processing.

Monocular SLAM cannot recover the scale of the map from vision alone, so a particular challenge is that the scale has to be estimated from additional metric sensors such as an IMU or GPS. The current state of the art is to estimate the scale with an extended Kalman filter (EKF) that carries scale and offset in its state. The closed-form maximum-likelihood estimator used here, in contrast, provides a consistent estimate of the scale of the visual map and requires fewer computational resources. It can be combined with any monocular SLAM algorithm and with any sensor providing metric position or velocity measurements, such as an ultrasonic or pressure altimeter or occasional GPS measurements.

The system combines three components: monocular SLAM for visual tracking, an extended Kalman filter (EKF) for data fusion and prediction, and PID control for pose stabilization and navigation. A sketch of how these pieces fit together is given below.
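To make the pipeline concrete, here is a minimal sketch of the per-frame data flow. Every class used (drone, tracker, ekf, pids) is a hypothetical stand-in for the corresponding component described on these slides (AR.Drone interface, PTAM-style tracker, EKF, per-axis PID controllers), not the authors' actual code.

```python
# Illustrative per-frame data flow: monocular SLAM -> EKF fusion -> PID control.
# All interfaces below are hypothetical placeholders for the real components.

def process_frame(drone, tracker, ekf, pids, goal, now, dt_control):
    frame = drone.latest_frame()            # forward camera image (streamed at 18 fps)
    navdata = drone.latest_navdata()        # gyro/velocity at 200 Hz, ultrasound at 25 Hz

    visual_pose = tracker.track(frame)      # monocular SLAM pose; None if tracking fails
    ekf.add_observations(now, navdata, visual_pose)

    state = ekf.predict(now + dt_control)   # predict ahead to compensate for delays
    errors = [g - s for g, s in zip(goal, state.pose_xyz_yaw())]
    # One PID controller per degree of freedom: x, y, z, yaw.
    drone.send_command([pid.update(e) for pid, e in zip(pids, errors)])
```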

Sensors: The AR.Drone is equipped with a 3-axis gyroscope (a gyro sensor, also called an angular-rate or angular-velocity sensor, measures the angular velocity of an object) and an accelerometer, an ultrasound altimeter (complemented by an air-pressure sensor, allowing for more stable flight and hovering), and two cameras. The first camera points forward, covers a field of view of 73.5° × 58.5°, has a resolution of 320 × 240, and uses a rolling shutter with a delay of 40 ms; its video is streamed to a laptop at 18 fps using lossy compression. The second camera points downward, covers a field of view of 47.5° × 36.5°, and is used by the onboard software to estimate the horizontal velocity. The quadcopter sends the gyroscope measurements and the estimated horizontal velocity at 200 Hz and the ultrasound measurements at 25 Hz to the laptop; the raw accelerometer data cannot be accessed directly.

The pose prediction is computed from measurements and control commands that arrive with significant delays, on the order of 25 ms to 125 ms depending on the channel. To compensate for these delays, a history of observations and sent control commands between t − ∆t_vis and t + ∆t_control is kept, and the EKF state is re-calculated when required.

Extended Kalman filter: To fuse all available data, an extended Kalman filter (EKF) is employed, based on a full motion model of the quadrocopter's flight dynamics and its reaction to control commands that was derived and calibrated beforehand. The EKF is also used to compensate for the different time delays in the system, which arise from wireless LAN communication and from the computationally complex visual tracking. Height and horizontal-velocity measurements arrive with the same delay, which is slightly larger than the delay of the attitude measurements. All timing values given here are typical values for a good connection; the exact values depend on the wireless connection quality and are determined with regular ICMP echo requests sent to the quadrocopter.

First, all incoming data is time-stamped and stored in an observation buffer. Control commands are then calculated using a prediction of the quadrocopter's pose at t + ∆t_control. For this prediction, the EKF starts from its saved state at t − ∆t_vis (i.e., after the last visual observation or unsuccessfully tracked frame) and predicts ahead up to t + ∆t_control, applying the previously issued control commands and integrating the stored sensor measurements as observations. With this approach, delayed and missing observations can be compensated for, at the expense of recalculating the last cycles of the EKF; a simplified sketch of this replay scheme follows below.
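The following is a minimal sketch of the buffer-and-replay idea under stated assumptions: a 1-D constant-velocity Kalman filter stands in for the full EKF, commands are treated as accelerations, and all names are hypothetical. It only illustrates the structure of time-stamping observations and re-running the filter from the saved state up to the prediction time.

```python
# Delay compensation sketch: buffer time-stamped observations and commands, then
# replay them from the last saved filter state up to t + dt_control when a new
# prediction is needed. A 1-D constant-velocity Kalman filter replaces the full EKF.

import numpy as np
from collections import deque

class ReplayKF1D:
    def __init__(self, q=0.1, r=0.05):
        self.q, self.r = q, r                        # process / measurement noise
        self.buffer = deque()                        # entries: (stamp, kind, value)
        self.saved = (0.0, np.zeros(2), np.eye(2))   # (stamp, state [pos, vel], covariance)

    def add(self, stamp, kind, value):               # kind is "cmd" or "obs"
        self.buffer.append((stamp, kind, value))

    def predict_at(self, t_target):
        """Replay buffered commands/observations from the saved state up to t_target."""
        t, x, P = self.saved
        for stamp, kind, value in sorted(self.buffer):
            if stamp > t_target:
                break
            dt = stamp - t
            F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity prediction
            x, P = F @ x, F @ P @ F.T + self.q * dt * np.eye(2)
            if kind == "cmd":
                x[1] += value * dt                   # command acts as an acceleration
            else:                                    # metric position observation
                H = np.array([[1.0, 0.0]])
                S = H @ P @ H.T + self.r
                K = P @ H.T / S
                x = x + (K * (value - H @ x)).ravel()
                P = (np.eye(2) - K @ H) @ P
            t = stamp
        dt = t_target - t                            # final open-loop step to t + dt_control
        return np.array([[1.0, dt], [0.0, 1.0]]) @ x
```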

PID control: Based on the position and velocity estimates from the EKF at t + ∆t_control, PID control is applied to steer the quadrocopter towards the desired goal location in a global coordinate system. According to the state estimate, the generated control commands are rotated into the robot-centric coordinate system and sent to the quadrocopter. For each of the four degrees of freedom, a separate PID controller is employed, with suitable gains determined experimentally (a minimal controller sketch follows below).

Scale estimation: A key contribution of this paper is a closed-form solution for estimating the scale λ ∈ ℝ⁺ of a monocular SLAM system. For this, it is assumed that the drone can make noisy measurements of absolute distances or velocities with additional metric sensors such as the ultrasound altimeter.
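Below is a minimal per-axis PID controller sketch, assuming a fixed control period; the gains and example values are placeholders, since the actual gains were determined experimentally by the authors.

```python
# Minimal PID controller, one instance per degree of freedom (x, y, z, yaw).
# Gains and the example values below are purely illustrative.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, error):
        self.integral += error * self.dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

controllers = [PID(kp=0.5, ki=0.0, kd=0.3, dt=0.02) for _ in range(4)]
goal  = [1.0, 0.0, 1.2, 0.0]       # x, y, z [m], yaw [rad] in the global frame
state = [0.4, 0.1, 1.0, 0.05]      # predicted pose from the EKF at t + dt_control
command = [c.update(g - s) for c, g, s in zip(controllers, goal, state)]
print(command)
```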

As a first step, the quadrocopter measures at regular intervals the d-dimensional distance traveled, both using only the visual SLAM system (subtracting start and end position) and using only the available metric sensors (subtracting start and end position, or integrating over estimated speeds). Each interval yields a pair of samples x_i, y_i ∈ ℝ^d, where x_i is scaled according to the visual map and y_i is in metric units. As both x_i and y_i measure the motion of the quadrocopter, they are related according to x_i ≈ λ y_i; a small estimation sketch is given below.
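As an illustration, the sketch below estimates λ from such sample pairs with a plain least-squares fit, which assumes (for simplicity) that the noise sits only in the visual measurements x_i; the closed-form maximum-likelihood estimator in the paper additionally models noise in both x_i and y_i, so this is a simplified stand-in rather than the authors' estimator.

```python
# Simplified scale estimation: x_i (visual-map units) and y_i (metric units) measure
# the same motion, so x_i ≈ λ * y_i. Least squares over λ, assuming noise only in x_i.

import numpy as np

def estimate_scale(x, y):
    """x, y: arrays of shape (n, d) holding paired interval measurements."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(x * y) / np.sum(y * y))    # argmin over λ of Σ ||x_i − λ y_i||²

# Synthetic example: true scale 2.5, noisy visual measurements.
rng = np.random.default_rng(0)
y = rng.normal(size=(100, 3))                      # metric motion per interval
x = 2.5 * y + 0.05 * rng.normal(size=y.shape)      # visual-map motion with noise
print(estimate_scale(x, y))                        # close to 2.5
```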

State prediction and observation: This slide describes the state space, the observation models, and the motion model used in the EKF. Odometry observation model: the quadrocopter measures its horizontal speed in its local coordinate system, which is transformed into the global frame (a small illustration follows below). Visual observation model: when parallel tracking and mapping successfully tracks a video frame, the pose estimate is scaled by the current estimate of the scaling factor λ* and transformed from the coordinate system of the front camera to the coordinate system of the quadrocopter. Prediction model: the prediction model describes how the state vector evolves from one time step to the next; in particular, the quadrocopter's horizontal acceleration ẍ, ÿ is approximated based on its current state x_t, and its vertical acceleration, yaw-rotational acceleration, and roll/pitch rotational speeds are likewise estimated from x_t.
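As a small illustration of the odometry observation model, the sketch below rotates a horizontal velocity measured in the drone's body frame into the global frame using the yaw angle ψ; this is a generic planar rotation, shown under the assumption that yaw fully describes the horizontal orientation, not the paper's exact formulation.

```python
# Rotate the horizontal velocity from the body frame into the global frame using yaw.

import numpy as np

def body_velocity_to_global(vx_body, vy_body, psi):
    c, s = np.cos(psi), np.sin(psi)
    R = np.array([[c, -s],
                  [s,  c]])                  # planar rotation by the yaw angle psi
    return R @ np.array([vx_body, vy_body])

# Example: flying "forward" at 1 m/s while yawed 90° to the left.
print(body_velocity_to_global(1.0, 0.0, np.pi / 2))   # approximately [0, 1]
```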

The experiments were conducted in different environments, i.e., both indoors in rooms of varying size and visual appearance and outdoors under the influence of sunlight and wind. The results cover the convergence behavior and accuracy of the scale estimation as well as the accuracy of the motion model. As ground truth at time t, the state of the EKF after all odometry and visual pose information up to t has been received and integrated is used; this state is not used for drone control, since in practice it only becomes available about 250 ms after the control command for time t has been computed and sent to the quadrocopter.

Scale estimation accuracy: The plots show the mean and standard deviation of the estimation error e, corresponding to the estimated length of 1 m, for horizontal and for vertical motion. The scale can be estimated accurately in both cases; it is, however, more accurate and converges faster when the quadrocopter moves vertically.

State prediction accuracy: A qualitative evaluation of the accuracy of the predicted quadrocopter state used for control is given. Testing environments: the top row shows an image of the quadrocopter in flight, the bottom row the corresponding image from the quadrocopter's frontal camera. This demonstrates that the system can operate robustly in different real-world environments.

Positioning accuracy and convergence speed evaluate the performance of the complete system in terms of position control. In particular, the average time to approach a given goal location and the average positioning error while holding that position were measured. Given the large delay in the system, the pose stability of the quadrocopter heavily depends on an accurate prediction from the EKF. To evaluate the convergence speed, the drone was repeatedly flown over a given distance and the convergence time was measured, i.e., the time required to reach the target position and hold it for at least 5 s. To determine the stability, the quadrocopter was instructed to hold a target position for 60 s in different environments, and the root-mean-square error (RMSE) of the position was measured (a small sketch of this computation follows below). To verify that incorporating a visual SLAM system eliminates odometry drift, the estimated trajectory was compared with and without the visual SLAM system.
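A small sketch of how the hold-position RMSE could be computed from a log of estimated positions; the logging format (one x, y, z sample per row) is an assumption for illustration, not taken from the paper.

```python
# Root-mean-square error of the position while holding a target location.

import numpy as np

def position_rmse(positions, target):
    positions = np.asarray(positions, dtype=float)   # shape (n, 3): x, y, z samples
    target = np.asarray(target, dtype=float)
    return float(np.sqrt(np.mean(np.sum((positions - target) ** 2, axis=1))))

# Example with a tiny synthetic log around a target at (0, 0, 1).
log = [[0.05, -0.02, 1.01], [-0.03, 0.04, 0.98], [0.02, 0.01, 1.02]]
print(position_rmse(log, [0.0, 0.0, 1.0]))           # a few centimetres
```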

A visual navigation system for a low-cost quadrocopter using off-board processing was presented. The system enables the quadrocopter to navigate visually in unstructured, GPS-denied environments and requires neither artificial landmarks nor prior knowledge about the environment. The contribution of this paper is two-fold: first, a robust solution for visual navigation with a low-cost quadrocopter; second, a maximum-likelihood estimator derived in closed form to recover the absolute scale of the visual map, providing an efficient and consistent alternative to the predominant filtering-based methods. Furthermore, the approach deals robustly with communication delays of up to 400 ms. The system was tested in extensive experiments in different real-world indoor and outdoor environments, demonstrating accurate and drift-free visual navigation.