ECE 7340: Building Intelligent Robots
QUALITATIVE NAVIGATION FOR MOBILE ROBOTS
Tod S. Levitt, Daryl T. Lawton
Presented by: Aniket Samant


INTRODUCTION
REQUIREMENTS OF A MOBILE ROBOT
- Organize its visual memory around local co-ordinate systems of landmarks as the primary means of defining locations
- Account for the nature of visual events, and represent poor range and angle measurements
- Maintain memory structures that associate landmark systems along the paths of motion the robot executed when it saw the landmarks
- Admit inference processes over visual memory that robustly perform navigation and guidance

INTRODUCTION
GEOGRAPHICAL INFORMATION FOR NAVIGATION
- Builds a memory of the environment the robot passes through
- Contains sufficient information to allow the robot to re-trace its paths
- Constructs or updates an a posteriori map of the area the robot passes through
- Utilizes runtime perceptual inferences and a priori map data for navigation and guidance

INTRODUCTION
DEFINITIONS
- PLACE: A region in space from anywhere within which a fixed set of landmarks can be observed
- VIEWFRAMES: Data in terms of relative angles and angular errors between landmarks, plus coarse estimates of the landmarks' ranges
- LANDMARK PAIR BOUNDARY (LPB): The line connecting 2 landmarks, creating a virtual division of the ground surface
- ORIENTATION REGIONS: Regions on the ground determined by a set of LPBs
- PATHS: Sequences of observations of landmarks, viewframes, LPBs, and other visual events
- HEADINGS: World states created by robotic actions

LANDMARKS
DEFINITION OF A LANDMARK
- Distinctive visual events
- Define a single direction or a set of directions in 3-D space
- Visually re-acquirable
- Robustly acquired and closely related to the navigation task

LANDMARKS
DETERMINATION OF LANDMARKS WITH SENSORS
- Knowing the relation between gravity and positions in an image, orient each pixel with respect to the direction of gravity
- Filter out the set of n most interesting structures above and near the global horizon line
- Define a local horizon from the sensor's orientation to the immediate ground plane

LANDMARKS
ASSUMPTIONS FOR NAVIGATION
- The sensor has a wide (fully panoramic) field of view
- The robot has access to the position and orientation of the immediate ground plane
- Camera motion is restricted to translation

LANDMARKS (figure)

VIEWFRAMES
GENERATION OF VIEWFRAMES
- Relative solid angles between distinguished points are found using a sensor-centered co-ordinate system
- Distinguished points are obtained:
  1. Statistically – the centroid of the landmark's projection
  2. Structurally – a vertex or high-curvature boundary point
  3. Perceptually – a unique visual feature of the landmark
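As a rough sketch of the first two steps above, assuming (x, y) image points for a landmark's projection and bearings in radians (function names are illustrative, not from the paper):

```python
import math

# Illustrative sketch: a statistical distinguished point as the centroid
# of a landmark's projected image region, and the relative angle between
# two landmark bearings in a sensor-centered frame.

def centroid(pixels):
    """Centroid of the (x, y) image points in a landmark's projection."""
    n = len(pixels)
    return (sum(x for x, _ in pixels) / n, sum(y for _, y in pixels) / n)

def relative_angle(bearing_i, bearing_j):
    """Angle from landmark i to landmark j, wrapped to [0, 2*pi)."""
    return (bearing_j - bearing_i) % (2.0 * math.pi)
```

A viewframe would store one such relative angle (plus its error bound) per landmark pair.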

VIEWFRAMES
- Fixes an orientation in azimuth and elevation
- Takes the direction opposite to the current heading as the zero-degree axis
Notation:
- L_i = landmarks viewed from left to right
- Ang_ij = solid angle between landmarks i and j
- θ_ij = planar angle between landmarks i and j
- e_ij = angular error between i and j, due to pan/tilt error

VIEWFRAMES
- No need to know co-ordinates of landmarks to deduce approximate angles and distances
- No need to know the path between the previous and the current location
- Localization is a geographic region determined by representing the valid set of angles between landmarks
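A minimal sketch of this localization idea: a hypothesized position is consistent if each predicted landmark-pair angle falls within the observed angle's error bound. Landmark coordinates appear here only to simulate observations; the names and the data layout are illustrative, not the paper's.

```python
import math

def predicted_angle(pos, li, lj):
    """Planar angle between landmarks li and lj as seen from pos."""
    a = math.atan2(li[1] - pos[1], li[0] - pos[0])
    b = math.atan2(lj[1] - pos[1], lj[0] - pos[0])
    # Wrapped absolute difference in [0, pi].
    return abs((b - a + math.pi) % (2 * math.pi) - math.pi)

def consistent(pos, observations):
    """observations: list of (li, lj, theta_ij, e_ij) tuples.

    True if pos lies in the region of positions that could have produced
    every observed angle within its error bound.
    """
    return all(abs(predicted_angle(pos, li, lj) - th) <= e
               for li, lj, th, e in observations)
```

Scanning a grid of candidate positions with `consistent` traces out the geographic region the slide refers to.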

VIEWFRAMES (figure)

ORIENTATION REGIONS
The orientation of the LPB is [L_1 L_2] if:
- Landmark L_1 is on the left
- Landmark L_2 is on the right
- The angle from L_1 to L_2 (left to right) is less than 180 degrees
Orientation of LPB(L_1, L_2) = +1 if θ_12 < π; 0 if θ_12 = π; -1 if θ_12 > π
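The sign test above translates directly into code; a minimal sketch, with an illustrative function name and angles in radians:

```python
import math

def lpb_orientation(theta_12, eps=1e-9):
    """Orientation of LPB(L1, L2): +1 if theta_12 < pi, 0 if equal, -1 if > pi."""
    if abs(theta_12 - math.pi) <= eps:
        return 0
    return 1 if theta_12 < math.pi else -1
```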

ORIENTATION REGIONS
- Number of orientation regions = N + (number of crossings, counted with multiplicities) + 1, where N is the number of LPBs
- Multiplicity of a crossing = (number of LPBs crossing there – 1); the multiplicity is 1 where exactly 2 LPBs cross
- If all LPB crossings generated by pairs of landmarks in the viewframe have multiplicity 1, the number of orientation regions depends only on the number of landmarks
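This count is the standard line-arrangement identity (regions = 1 + lines + crossings counted with multiplicity); a hypothetical helper, not from the paper:

```python
def orientation_regions(n_lpbs, crossing_multiplicities):
    """Regions in an arrangement of n_lpbs lines.

    crossing_multiplicities: one entry per crossing point, equal to
    (number of LPBs through that point - 1), as defined on the slide.
    """
    return 1 + n_lpbs + sum(crossing_multiplicities)
```

For example, two crossing LPBs give 4 regions, and three LPBs in general position give 7.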

ORIENTATION REGIONS
The LPB formed by two landmarks L_1 and L_2 is LPB(L_1, L_2). Qualitative moves:
- l[L_1 L_2] = cross LPB(L_1, L_2) to the left of L_1
- r[L_1 L_2] = cross LPB(L_1, L_2) to the right of L_2
- b[L_1 L_2] = cross LPB(L_1, L_2) between L_1 and L_2
- a[L_1 L_1] = head towards landmark L_1
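The three crossing labels can be recovered by projecting the crossing point onto the line through the two landmarks; a sketch under the assumption of 2-D (x, y) coordinates (the a[...] move is a motion command toward a landmark rather than a crossing, so it is omitted):

```python
def lpb_crossing_type(p, l1, l2):
    """Classify where a path crosses the line through landmarks l1 and l2.

    Projects crossing point p onto the l1->l2 line; the parameter t is
    0 at l1 and 1 at l2.
    """
    dx, dy = l2[0] - l1[0], l2[1] - l1[1]
    t = ((p[0] - l1[0]) * dx + (p[1] - l1[1]) * dy) / (dx * dx + dy * dy)
    if t < 0.0:
        return "l"   # crossed to the left of L1
    if t > 1.0:
        return "r"   # crossed to the right of L2
    return "b"       # crossed between L1 and L2
```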

VISUAL MEMORY FOR NAVIGATION
PURPOSES OF LONG-TERM MEMORY (LTM)
- Represent places in the world the robot has passed through, so that the LTM is useful for getting back to those places
- Facilitate visual re-acquisition of landmarks so that the robot can perform recognition processes more efficiently
- Respond to queries about spatial relational information between places and/or landmarks of interest
- Formulate and update map knowledge about the world

VISUAL MEMORY FOR NAVIGATION (figure)

LTM ARCHITECTURE
- Four databases: a priori terrain grids, a priori cultural feature networks (e.g. roads, rivers), viewpaths, and landmarks
- Landmark data: time of acquisition, the viewframe it occurs in, pointers to map or grid data tying the landmark to absolute co-ordinate systems, and perceptual data
- Viewpath maintenance: the process that attempts to match re-acquisitions of landmarks to infer the closeness of visual places in memory
- Spatial reasoning module: computes relationships between objects in visual memory in response to tasks and queries from the vision system
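The landmark data described above could be held in a record like the following; the field names are hypothetical, chosen only to mirror the items listed on the slide:

```python
from dataclasses import dataclass, field

@dataclass
class LandmarkRecord:
    """One landmark sighting in the LTM landmark database (illustrative)."""
    landmark_id: int
    acquired_at: float            # time of acquisition
    viewframe_id: int             # viewframe the sighting occurs in
    map_refs: list = field(default_factory=list)   # pointers into a priori map/grid data
    perceptual_data: dict = field(default_factory=dict)
```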

PATH PLANNING AND EXECUTION
HEADING TYPES
Type: specifies the co-ordinate system in which the direction components are given
- Metric heading – correspondence of sensor position to a priori map or grid data
- Viewframe heading – headings computed between viewframes that share common landmarks
Destination goals: descriptions of the places the heading is intended to point the robot or vision platform toward
- A set of absolute world co-ordinates
- A viewframe localization
- An orientation region
- A set of landmarks

PATH PLANNING AND EXECUTION
Direction functions
- Accept runtime data and return true if the heading is maintained, false otherwise
- For metric headings: compare the desired heading with the sensor reading
- For viewframe headings: measure the relative angle between the heading vector and observed landmarks
Termination criteria
- Runtime-computable conditions indicating that, if the heading continues to be maintained, its direction function can no longer return true
- Conditions: reaching the destination goal, crossing desired LPBs, or recognising a set of landmarks and relative angles
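A minimal sketch of a metric-heading direction function, assuming headings in radians and an illustrative tolerance (neither the name nor the threshold is from the paper):

```python
import math

def metric_heading_ok(desired, sensed, tol=0.1):
    """Direction function for a metric heading: True while the sensed
    heading stays within tol radians of the desired heading."""
    err = abs((sensed - desired + math.pi) % (2 * math.pi) - math.pi)
    return err <= tol
```

The wrap-around in `err` keeps the comparison correct across the 0/2π boundary; a viewframe version would compare relative angles to landmarks instead of compass readings.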

PATH PLANNING AND EXECUTION (figure)

QUALITATIVE PATH PLANNING
- Carry a compass as we move through the environment and mark the direction of North relative to observed landmarks
- Propagate paths outward from the start and goal regions
- Store the initial direction for each adjacent region
- Choose the adjacent region whose relative compass heading is closest to the initial step from the start or goal
- Repeat the process until we get a path
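The propagation step can be sketched as a search over an adjacency graph of orientation regions; here a one-sided breadth-first wavefront stands in for the two-sided start-and-goal propagation, and the graph itself is illustrative:

```python
from collections import deque

def find_region_path(adj, start, goal):
    """Breadth-first wavefront over orientation-region adjacency.

    adj: dict mapping each region to the regions sharing an LPB boundary.
    Returns the region sequence from start to goal, or None if unreachable.
    """
    prev = {start: None}
    frontier = deque([start])
    while frontier:
        r = frontier.popleft()
        if r == goal:
            path = []
            while r is not None:      # walk predecessors back to start
                path.append(r)
                r = prev[r]
            return path[::-1]
        for nxt in adj.get(r, ()):
            if nxt not in prev:
                prev[nxt] = r
                frontier.append(nxt)
    return None
```

The compass headings on the slide would act as a tie-breaker when several adjacent regions are reachable, preferring the one closest to the initial step's direction.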

QUALNAV SIMULATOR
FUNCTIONS
- Manipulates an a priori grid and cultural feature data
- Allows the interactive specification of landmarks and vehicle locations
- Computes visibility of landmarks in 3-D terrain data
- Helps in 2-D/3-D conversions
- Supports a number of color graphics displays
- Calculates viewframe localizations relative to visible landmarks

QUALNAV SIMULATOR
- The simulator level keeps track of actual landmark locations and performs line-of-sight calculations
- The planning level records simulated range and actual error for landmark sightings
- The execution level simulates actual robot motion and vision

QUALNAV SIMULATOR (figure)