PLANAR VEHICLE TRACKING USING A MONOCULAR BASED MULTIPLE CAMERA VISUAL POSITION SYSTEM Anthony Hinson April 22, 2003.


PLANAR VEHICLE TRACKING USING A MONOCULAR BASED MULTIPLE CAMERA VISUAL POSITION SYSTEM Anthony Hinson April 22, 2003

Center for Intelligent Machines and Robotics Slide 2 of 97 Overview Introduction Image Processing –Primitive –Statistical Planar Visual Positioning –Fundamentals –Application

Center for Intelligent Machines and Robotics Slide 3 of 97 Overview Testing and Results –Simulation –Actual Conclusions Graphical User Interface Future Work –Surface Positioning –Time Based Models Demonstration and Questions

Center for Intelligent Machines and Robotics Slide 4 of 97 Link Page Introduction Primitive Image Processing Statistical Image Processing Testing Planar Positioning Fundamentals Planar Positioning Application

Center for Intelligent Machines and Robotics Slide 5 of 97 Link Page ConclusionsFuture Work Graphical User Interface

Center for Intelligent Machines and Robotics Slide 6 of 97 Introduction Simple Monocular Vision Based Position System for Tracking of Indoor and Outdoor Vehicles Concept

Center for Intelligent Machines and Robotics Slide 7 of 97 Introduction Uses Single or Multiple Cameras to Determine Vehicle Position and Orientation Concept

Center for Intelligent Machines and Robotics Slide 8 of 97 Introduction Vehicle Position and Orientation Determined Via Tracking Features On Top of the Vehicle Concept

Center for Intelligent Machines and Robotics Slide 9 of 97 Introduction Advantages –Well Suited for Indoor Vehicles –Accurate Position Information –Easy to Implement –Non-Intrusive to Environment or Vehicle –Not Specific to Certain Hardware –One-Time Setup Concept

Center for Intelligent Machines and Robotics Slide 10 of 97 Introduction Advantages –Video Feed Can Be Used for Monitoring and Positioning Simultaneously Concept

Center for Intelligent Machines and Robotics Slide 11 of 97 Introduction Disadvantages –Reliability is Dependent on Environmental Conditions –Accuracy Decreases with Range –Planar Positioning System (2D Only) Concept

Center for Intelligent Machines and Robotics Slide 12 of 97 Image Processing Initial Image Processing Work –Some Routines Good for Basic Image Enhancement –Largely Ineffective for Feature Extraction Primitive

Center for Intelligent Machines and Robotics Slide 13 of 97 Image Processing ColorBias –Process – Shifts Individual Color Channel Values –Usage – Used for Hue Correction –Synopsis – Reasonably Fast and Effective Primitive Modified Image / Original Image ColorBias

Center for Intelligent Machines and Robotics Slide 14 of 97 Image Processing ProgressiveSmooth –Process – Performs Weighted Averaging with Neighboring Pixels –Usage – Used for Noise Removal and Anti-Aliasing –Synopsis – Effective but Slow Primitive Modified Image / Original Image ProgressiveSmooth
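The weighted neighborhood averaging described above can be sketched as follows. The slides do not give the kernel, so the Gaussian-like 3x3 weights (and the function name) are assumptions:

```python
def progressive_smooth(img):
    """Smooth a 2D grayscale image (list of lists) with a 3x3 weighted kernel.

    Assumed weights: center 4, edge neighbors 2, corners 1 (Gaussian-like).
    Border pixels renormalize over the neighbors that exist, so a flat
    image stays flat.
    """
    h, w = len(img), len(img[0])
    kernel = [(-1, -1, 1), (-1, 0, 2), (-1, 1, 1),
              (0, -1, 2),  (0, 0, 4),  (0, 1, 2),
              (1, -1, 1),  (1, 0, 2),  (1, 1, 1)]
    out = [row[:] for row in img]
    for y in range(h):
        for x in range(w):
            total = weight = 0
            for dy, dx, k in kernel:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    total += k * img[ny][nx]
                    weight += k
            out[y][x] = total / weight   # weighted average of valid neighbors
    return out
```

This matches the "effective but slow" synopsis: a pure-Python triple loop is easy to verify but far slower than a vectorized convolution.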

Center for Intelligent Machines and Robotics Slide 15 of 97 Image Processing ColorDistinguish –Process – Removes Pixels that Are not Within the User-Specified Range –Usage – Color Feature Extraction –Synopsis – Limited Functionality / No Longer Used Primitive Modified Image / Original Image ColorDistinguish

Center for Intelligent Machines and Robotics Slide 16 of 97 Image Processing ColorRemove –Process – Removes Pixels that Are Within the User-Specified Range –Usage – Removes Unwanted Colors –Synopsis – Limited Functionality / No Longer Used Primitive Modified Image / Original Image ColorRemove

Center for Intelligent Machines and Robotics Slide 17 of 97 Image Processing Threshold –Process – Removes Pixels with Values Less Than User-Specified Boundary –Usage – Removes Dark Pixels / Was Typically Used to Enhance Edge Information –Synopsis – No Longer Used Primitive Modified Image / Original Image Threshold

Center for Intelligent Machines and Robotics Slide 18 of 97 Image Processing EdgeDetect –Process – Calculates Color Discrepancy Between Adjacent Pixels –Usage – Finds Edges of Color Boundaries –Synopsis – Relatively Fast and Effective Primitive Modified Image / Original Image EdgeDetect
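A minimal sketch of an adjacent-pixel discrepancy detector of this kind; the thesis names no specific operator, so the forward-difference form below (and the function name) is an assumption:

```python
def edge_detect(img):
    """Edge strength as the color discrepancy between adjacent pixels.

    img is a 2D list of grayscale values; the output at each pixel is the
    sum of absolute differences with its right and lower neighbors, so
    uniform regions map to 0 and color boundaries to large values.
    """
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = abs(img[y][x] - img[y][x + 1]) if x + 1 < w else 0
            dy = abs(img[y][x] - img[y + 1][x]) if y + 1 < h else 0
            out[y][x] = dx + dy
    return out
```

For RGB input the same differences would be taken per channel and summed, which is one plausible reading of "color discrepancy."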

Center for Intelligent Machines and Robotics Slide 19 of 97 Image Processing ScreenText –Process – Writes Alphanumeric Characters to a Video Pixel Array –Usage – Currently Used to Display Range Data in Video Stream –Synopsis – Works Very Well Primitive Modified Image / Original Image ScreenText

Center for Intelligent Machines and Robotics Slide 20 of 97 Image Processing Primitive Image Processing Functions Insufficient for Visual Positioning –Work Reasonably Well on Simulated Images –Work Poorly on Experimental Images Primitive

Center for Intelligent Machines and Robotics Slide 21 of 97 Image Processing Desired Capabilities of Feature Classifier –Capable of Handling Simulated Data –Capable of Handling Experimental Data –Fast Processing Speed Statistical

Center for Intelligent Machines and Robotics Slide 22 of 97 Image Processing Color Space (RGB Space) –All Possible Digital Colors Represented by Cube with Dimension of 256 –Each Axis Represents Color Statistical

Center for Intelligent Machines and Robotics Slide 23 of 97 Image Processing In RGB Space –Color Distributions Have Physical Meaning –Distributions Can be Represented by 3D Shapes in RGB Space Statistical

Center for Intelligent Machines and Robotics Slide 24 of 97 Image Processing In RGB Space –Data from an Image Can be Displayed as Data Points in RGB Space Statistical

Center for Intelligent Machines and Robotics Slide 25 of 97 Image Processing Color Classifiers Used in This Research –Color Range –Normalized Color Direction –3D Gaussian Color Distribution –2D Normalized Gaussian Color Distribution Statistical

Center for Intelligent Machines and Robotics Slide 26 of 97 Image Processing Color Range –Basically Same as ColorDistinguish –Distribution Defined by High & Low Values for Each Color Channel Separately –Distribution is Represented by a Box in RGB Space Statistical

Center for Intelligent Machines and Robotics Slide 27 of 97 Image Processing Color Range –High/Low Values Determined By 1-D Gaussian Distributions for Each Color Channel High Value = μ + nσ Low Value = μ − nσ –Pixels Located Inside the Box are Considered to be Target Color Statistical
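The box-classifier test above can be sketched as follows; the function names and the default n = 2 are assumptions, not from the thesis:

```python
import math

def channel_stats(pixels):
    """Per-channel (mean, standard deviation) of the training feature pixels."""
    n = len(pixels)
    stats = []
    for c in range(3):
        mean = sum(p[c] for p in pixels) / n
        var = sum((p[c] - mean) ** 2 for p in pixels) / n
        stats.append((mean, math.sqrt(var)))
    return stats

def in_color_range(pixel, stats, n_sigma=2.0):
    """True when the pixel falls inside the mu +/- n*sigma box in RGB space."""
    return all(m - n_sigma * s <= pixel[c] <= m + n_sigma * s
               for c, (m, s) in enumerate(stats))
```

Because each channel is bounded independently, the acceptance region is an axis-aligned box, which is why the slides note it is fast but imprecise for tilted or elongated color clusters.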

Center for Intelligent Machines and Robotics Slide 28 of 97 Image Processing Color Range –Advantages Very Fast –Disadvantages Not Very Precise Typically Yields High Error Statistical

Center for Intelligent Machines and Robotics Slide 29 of 97 Image Processing Color Range Sample Image Statistical Processed Image / Original Image

Center for Intelligent Machines and Robotics Slide 30 of 97 Image Processing Color Range in RGB Space –Black: Correctly Classified Non-Feature Pixels –White: Correctly Classified Feature Pixels –Blue: Missed Feature Pixels Statistical

Center for Intelligent Machines and Robotics Slide 31 of 97 Image Processing Color Direction –Searches for Pixels Using Color Vectors in RGB Space –Distribution is Defined as a Target Color and Range –Resulting Distribution Shape is a Conic Section Statistical

Center for Intelligent Machines and Robotics Slide 32 of 97 Image Processing Color Direction –Color Normalization Equations Convert Discrete Color Value to Normalized Color Direction Vector Statistical

Center for Intelligent Machines and Robotics Slide 33 of 97 Image Processing Color Direction –Distribution Defined By: Target Color (Mean of Normalized Feature Pixels) Statistical

Center for Intelligent Machines and Robotics Slide 34 of 97 Image Processing Color Direction –Distribution Defined By: Color Direction Variance (Each Color Separate) Statistical

Center for Intelligent Machines and Robotics Slide 35 of 97 Image Processing Color Direction –Distribution Defined By: Any Pixel with a Color Direction Between μ + nσ and μ − nσ is Considered to be a Feature Pixel Statistical
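Putting the normalization and the μ ± nσ membership test together, a minimal sketch (function names and the default n are assumptions):

```python
import math

def normalize_color(pixel):
    """Convert a discrete (R, G, B) value to a normalized color-direction
    vector r = R / (R + G + B), etc., discarding brightness."""
    total = sum(pixel)
    if total == 0:
        return (0.0, 0.0, 0.0)   # pure black has no direction (assumed convention)
    return tuple(c / total for c in pixel)

def direction_stats(pixels):
    """(mean, std dev) of each normalized channel over training feature pixels."""
    dirs = [normalize_color(p) for p in pixels]
    n = len(dirs)
    stats = []
    for c in range(3):
        mean = sum(d[c] for d in dirs) / n
        var = sum((d[c] - mean) ** 2 for d in dirs) / n
        stats.append((mean, math.sqrt(var)))
    return stats

def in_color_direction(pixel, stats, n_sigma=2.0):
    """True if every normalized channel lies between mu - n*sigma and
    mu + n*sigma: the conic membership test from the slides."""
    d = normalize_color(pixel)
    return all(m - n_sigma * s <= d[c] <= m + n_sigma * s
               for c, (m, s) in enumerate(stats))
```

Because the test runs on normalized directions, a dim pixel and a bright pixel of the same hue classify identically, which is the brightness invariance the next slide lists as the method's main advantage.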

Center for Intelligent Machines and Robotics Slide 36 of 97 Image Processing Color Direction –Advantages Discards Brightness Information Can Find Colors in the Light or Shadows Inherently Compensates for Scattered Color Data –Disadvantages More Likely to Have False Hits on Similar Colored Objects in Scene Statistical

Center for Intelligent Machines and Robotics Slide 37 of 97 Image Processing Color Direction Sample Image Statistical Processed Image / Original Image

Center for Intelligent Machines and Robotics Slide 38 of 97 Image Processing Color Direction RGB Space –Black: Correctly Classified Non-Feature Pixels –White: Correctly Classified Feature Pixels –Blue: Missed Feature Pixels –Red: False Hit Pixels Statistical

Center for Intelligent Machines and Robotics Slide 39 of 97 Image Processing 3D Gaussian Distribution –Classifies Data According to a Normal Distribution –Classifier is Represented by a 3D Ellipsoid in RGB Space Statistical

Center for Intelligent Machines and Robotics Slide 40 of 97 Image Processing 3D Gaussian Distribution –Classifier’s Shape and Position are Defined By: Mean Color of Feature Data Variance Within Each Color Channel Covariance Between Color Channels Statistical

Center for Intelligent Machines and Robotics Slide 41 of 97 Image Processing 3D Gaussian Distribution –Probability Density Function (PDF) –Where Statistical

Center for Intelligent Machines and Robotics Slide 42 of 97 Image Processing 3D Gaussian Distribution –Variance Calculations Statistical

Center for Intelligent Machines and Robotics Slide 43 of 97 Image Processing 3D Gaussian Distribution –Exponential Part of PDF Can be Used to Assess Membership of Pixel to the Distribution –r is known as Mahalanobis Distance Statistical

Center for Intelligent Machines and Robotics Slide 44 of 97 Image Processing 3D Gaussian Distribution –Mahalanobis Distance The Number of Standard Deviations the Current Pixel is from the Mean Any Pixel with an r Less Than the User-Specified Value is Considered a Member of the Distribution Statistical
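A sketch of fitting the 3D Gaussian and evaluating the Mahalanobis distance r. In practice the inverse covariance would come from a linear-algebra routine (e.g. `numpy.linalg.inv`), so the function below takes it as an argument; names are assumptions:

```python
import math

def mean_and_cov(pixels):
    """Mean color vector and 3x3 covariance matrix of the feature pixels."""
    n = len(pixels)
    mu = [sum(p[c] for p in pixels) / n for c in range(3)]
    cov = [[sum((p[i] - mu[i]) * (p[j] - mu[j]) for p in pixels) / n
            for j in range(3)] for i in range(3)]
    return mu, cov

def mahalanobis(pixel, mu, cov_inv):
    """r = sqrt((x - mu)^T Sigma^-1 (x - mu)): how many standard deviations
    the pixel lies from the mean of the fitted distribution."""
    d = [pixel[c] - mu[c] for c in range(3)]
    q = sum(d[i] * cov_inv[i][j] * d[j] for i in range(3) for j in range(3))
    return math.sqrt(q)
```

A pixel is accepted when `mahalanobis(pixel, mu, cov_inv) < r_max` for the user-specified threshold; the level sets of r are exactly the ellipsoids the slides describe.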

Center for Intelligent Machines and Robotics Slide 45 of 97 Image Processing 3D Gaussian Distribution –Advantages Very Accurate for Most Distributions Compensates for Data Clusters at Any Location and Orientation in RGB Space –Disadvantages Color Distribution Must Be Relatively Gaussian in Distribution Statistical

Center for Intelligent Machines and Robotics Slide 46 of 97 Image Processing 3D Gaussian Distribution Sample Image Statistical Processed Image / Original Image

Center for Intelligent Machines and Robotics Slide 47 of 97 Image Processing 3D Gaussian RGB Space –Black: Correctly Classified Non-Feature Pixels –White: Correctly Classified Feature Pixels –Blue: Missed Feature Pixels –Red: False Hit Pixels Statistical

Center for Intelligent Machines and Robotics Slide 48 of 97 Image Processing 2D Normalized Gaussian Distribution –Hybrid of 3D Gaussian and Color Direction –Converts 3D Color Cube to 2D Color Triangle Statistical

Center for Intelligent Machines and Robotics Slide 49 of 97 Image Processing 2D Normalized Gaussian Distribution –Color Data Reduced to 2 Dimensions Removes Brightness Information Bivariate Gaussian Classifier –Classifier Shape is an Ellipse within the Color Triangle Statistical

Center for Intelligent Machines and Robotics Slide 50 of 97 Image Processing 2D Normalized Gaussian Distribution –Color Data Flattening (Convert RGB Coordinates to XY Coordinates) Statistical
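Since the normalized components sum to one, the blue component is redundant, so one simple flattening keeps (r, g) as the XY coordinates. The thesis defines its own flattening equations, so treat this mapping as an assumed stand-in with the same effect (brightness removed, two dimensions kept):

```python
def flatten_rgb(pixel):
    """Flatten an (R, G, B) value onto the 2D chromaticity triangle.

    After normalizing so r + g + b = 1, the b component is redundant, so
    (r, g) can serve as the XY coordinates of the flattened point.
    """
    total = sum(pixel)
    if total == 0:
        return (0.0, 0.0)   # pure black carries no chromaticity (assumed convention)
    return (pixel[0] / total, pixel[1] / total)
```

The bivariate Gaussian classifier is then fitted to these 2D points exactly as the 3D version is fitted to RGB triples, giving the ellipse-in-triangle shape described above.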

Center for Intelligent Machines and Robotics Slide 51 of 97 Image Processing 2D Normalized Gaussian Distribution –Multivariate Distribution –Where Statistical

Center for Intelligent Machines and Robotics Slide 52 of 97 Image Processing 2D Normalized Gaussian Distribution –Advantages Same As Color Direction Classifier Allows for Better Classification Than Color Direction –Disadvantages Same As Color Direction Classifier Slower Than Color Direction Statistical

Center for Intelligent Machines and Robotics Slide 53 of 97 Image Processing 2D Normalized Gaussian Distribution Sample Image Statistical Processed Image / Original Image

Center for Intelligent Machines and Robotics Slide 54 of 97 Image Processing RGB Space 2D Normalized Gaussian Distribution –Black: Correctly Classified Non-Feature Pixels –White: Correctly Classified Feature Pixels –Blue: Missed Feature Pixels –Red: False Hit Pixels Statistical

Center for Intelligent Machines and Robotics Slide 55 of 97 Planar Positioning Planar Positioning Concepts –Camera View Compresses 3D View to 2D –Each Pixel Represents a Vector to an Object in Space –Distance to the Object is Unknown –Point at Where Pixel Vector Intersects Object in Space Must be Found Concepts

Center for Intelligent Machines and Robotics Slide 56 of 97 Planar Positioning Planar Positioning Concepts –Intersection Can be Found if Pixel Vector Intersects a Plane –Each Pixel Will Represent a Finite Area on the Plane Concepts

Center for Intelligent Machines and Robotics Slide 57 of 97 Planar Positioning Quantities Needed For Reconstruction of 3D Data –Extrinsic Camera Properties X, Y, and Z Coordinates of Camera Pan, Tilt, and Slant Angles of Camera –Intrinsic Camera Properties Field of View in Horizontal and Vertical –Video Capture Device Properties Resolution of Video Capture –Planar Properties Coordinates of Plane (D;A,B,C) Concepts

Center for Intelligent Machines and Robotics Slide 58 of 97 Planar Positioning Determining Required Input Data –Tracking Plane Must be Defined (Typically Parallel or Coincident with Ground) –Camera Must be Placed in Position to Be Able to See Plane –Video Capture Hardware Must be Initialized to Determine Capture Resolution Procedure

Center for Intelligent Machines and Robotics Slide 59 of 97 Planar Positioning Creating Tracking Data Lookup Table (LUT) Procedure Define Pixel Grid –Camera Placed at Home Location –Image Plane Assumed to be at Unit Distance from Origin in Y-Direction

Center for Intelligent Machines and Robotics Slide 60 of 97 Planar Positioning Creating Tracking Data Lookup Table (LUT) Procedure Define Pixel Grid –Image Plane Boundaries Determined Trigonometrically Using Fields of View in Horizontal and Vertical

Center for Intelligent Machines and Robotics Slide 61 of 97 Planar Positioning Creating Tracking Data Lookup Table (LUT) Procedure Define Pixel Grid –Image Plane Area Divided Into Pixel Grid Corresponding to Capture Resolution –Intersections of Gridlines are Referred to as Pixel Grid Nodes

Center for Intelligent Machines and Robotics Slide 62 of 97 Planar Positioning Creating Tracking Data Lookup Table (LUT) Procedure Define Pixel Grid –Pixel Grid Node Locations are Recorded in Homogeneous Coordinates Format (w;x,y,z)
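The pixel-grid construction from the last four slides can be sketched as follows. The (w; x, y, z) tuple layout and the unit-distance image plane in +Y follow the slides; the function name, degree-based FOV arguments, and node ordering are assumptions:

```python
import math

def pixel_grid_nodes(fov_h_deg, fov_v_deg, cols, rows):
    """Pixel-grid nodes on an image plane one unit from the origin in +Y.

    The plane half-extents follow trigonometrically from the horizontal
    and vertical fields of view; nodes are recorded in homogeneous
    (w; x, y, z) form with w = 1, ordered top-to-bottom, left-to-right.
    """
    half_w = math.tan(math.radians(fov_h_deg) / 2.0)
    half_h = math.tan(math.radians(fov_v_deg) / 2.0)
    nodes = []
    for r in range(rows + 1):                  # one more node row than pixel rows
        z = half_h - 2.0 * half_h * r / rows
        for c in range(cols + 1):              # one more node column than pixel columns
            x = -half_w + 2.0 * half_w * c / cols
            nodes.append((1.0, x, 1.0, z))     # (w; x, y, z), image plane at y = 1
    return nodes
```

An R x C capture resolution yields (R + 1) x (C + 1) nodes, since gridline intersections bound the pixels rather than sit at their centers.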

Center for Intelligent Machines and Robotics Slide 63 of 97 Planar Positioning Translate & Rotate Pixel Grid –Pixel Grid Points Translated to Camera XYZ Location By Multiplying Each Point by Translation Matrix Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 64 of 97 Planar Positioning Translate & Rotate Pixel Grid –Pixel Grid Points Rotated to Camera Orientation By Multiplying Each Point by Three Rotation Matrices Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 65 of 97 Translation and Rotation Matrices Planar Positioning Procedure Creating Tracking Data Lookup Table (LUT)
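A sketch of the matrices, using the conventional (x, y, z, w) column ordering rather than the (w; x, y, z) record format the slides use for storage. Only the Z-axis (pan) rotation is written out; tilt and slant use the analogous X- and Y-axis forms. Names are assumptions:

```python
import math

def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix (moves grid to camera XYZ)."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def rot_z(theta):
    """4x4 rotation about the Z axis (pan angle), in radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0, 0],
            [s,  c, 0, 0],
            [0,  0, 1, 0],
            [0,  0, 0, 1]]

def apply(m, p):
    """Multiply a 4x4 matrix by a homogeneous point (x, y, z, w)."""
    return tuple(sum(m[i][j] * p[j] for j in range(4)) for i in range(4))
```

Each pixel-grid node is pushed through the translation matrix and then the three rotation matrices in turn, which places the whole image plane at the camera's measured position and orientation.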

Center for Intelligent Machines and Robotics Slide 66 of 97 Planar Positioning Create Pixel Node Vectors –Vectors Created Between Focal Point of Camera and Pixel Grid Nodes Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 67 of 97 Planar Positioning Create Pixel Node Vectors –Vectors Represented in Terms of Plücker Line Coordinates or Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 68 of 97 Planar Positioning Create Planar Intersection Points –Intersection Between All Vectors and Plane Can be Found Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 69 of 97 Planar Positioning Projective Geometry –Intersection of Line and Plane Determine a Point Equation of LineEquation of Plane Intersection of Line and Plane = Procedure Creating Tracking Data Lookup Table (LUT)
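The line-plane intersection can be written compactly in homogeneous coordinates: for the line through points p and q and a plane π, the point X = (π·q)p − (π·p)q lies on both. This is algebraically equivalent to the Plücker-coordinate formulation the slides reference; the function name and the (x, y, z, w) ordering are assumptions:

```python
def line_plane_intersection(p, q, plane):
    """Intersect the line through homogeneous points p and q with a plane.

    p, q are (x, y, z, w); plane is (A, B, C, D) for Ax + By + Cz + Dw = 0.
    X = (plane . q) p - (plane . p) q satisfies plane . X = 0 and lies on
    the line, so it is the intersection point.
    """
    dp = sum(plane[i] * p[i] for i in range(4))
    dq = sum(plane[i] * q[i] for i in range(4))
    x = tuple(dq * p[i] - dp * q[i] for i in range(4))
    if x[3] == 0:
        return None                      # line parallel to the plane
    return (x[0] / x[3], x[1] / x[3], x[2] / x[3])   # dehomogenize
```

In the LUT procedure, p is the camera focal point and q a pixel-grid node, so each call yields one node's footprint corner on the tracking plane.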

Center for Intelligent Machines and Robotics Slide 70 of 97 Planar Positioning Determine Pixel Areas –Each Pixel Node Intersection Point Corresponds to the Corner of a Pixel Area Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 71 of 97 Planar Positioning Calculate Pixel Centroids –Pixel Centroid is the Average of the Four Corners of Pixel Area –The Centroid Represents the Coordinates that the Pixel Represents in Space Procedure Creating Tracking Data Lookup Table (LUT)

Center for Intelligent Machines and Robotics Slide 72 of 97 Planar Positioning Calculate Pixel Centroids –Error Represented by Maximum Distance from Centroid to Area Vertex Procedure Creating Tracking Data Lookup Table (LUT)
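The centroid and error-radius computation above is simple enough to sketch directly; the in-plane 2D coordinates and the function name are assumptions:

```python
def pixel_centroid_and_error(corners):
    """Centroid of a pixel area's four planar corners, plus the error
    radius: the maximum distance from the centroid to any corner."""
    n = len(corners)
    cx = sum(p[0] for p in corners) / n
    cy = sum(p[1] for p in corners) / n
    err = max(((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in corners)
    return (cx, cy), err
```

The centroid becomes the LUT entry for that pixel; the error radius grows with range as each pixel's footprint on the plane stretches, which is the accuracy-versus-distance tradeoff noted in the introduction.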

Center for Intelligent Machines and Robotics Slide 73 of 97 Planar Positioning Environment Setup for Planar Positioning –Vehicle Drive Path Must be Planar –Cameras Must Cover All Drive Areas Application

Center for Intelligent Machines and Robotics Slide 74 of 97 Planar Positioning Vehicle Setup for Planar Positioning –Vehicle Must Have 2 Tracking Features Distinguishable from Rest of Image Residing in a Plane Parallel to Ground Application

Center for Intelligent Machines and Robotics Slide 75 of 97 Planar Positioning Setup for Planar Positioning –Camera Properties Must be Precisely Defined Intrinsic Extrinsic –Environment Must be Accurately Mapped Boundaries Obstacles –Tracking Plane Must be Defined as the Plane the Tracking Features are in Application

Center for Intelligent Machines and Robotics Slide 76 of 97 Planar Positioning Using Planar Positioning –Tracking Information is Displayed for Allowed Areas Application

Center for Intelligent Machines and Robotics Slide 77 of 97 Testing & Results Test #1: Simulated Warehouse Test #1 –Three Camera Views Camera2 Camera3 Camera6

Center for Intelligent Machines and Robotics Slide 78 of 97 Testing & Results Test #1 Initial Test –Gridline Match up Check to See if Grid Lines Up With Walls

Center for Intelligent Machines and Robotics Slide 79 of 97 Testing & Results Test #1: Simulated Warehouse Test #1 –Initial Test Gridline Match up

Center for Intelligent Machines and Robotics Slide 80 of 97 Testing & Results Results –Red: Measured –Green: Camera2 –Blue: Camera3 –Magenta: Camera6 Test #1

Center for Intelligent Machines and Robotics Slide 81 of 97 Testing & Results Test #1 Results –Error Typically Less Than 1% –Some Feature Classifier Breakdown at Far Distances

Center for Intelligent Machines and Robotics Slide 82 of 97 Testing & Results Test #2: Desktop Rover Test #2 –Miniature Remote Control Tank-like Vehicle

Center for Intelligent Machines and Robotics Slide 83 of 97 Testing & Results Test #2 Test Setup –2 Cameras –Poster board grid 4x4 Major Gridlines 1x1 Minor Gridlines

Center for Intelligent Machines and Robotics Slide 84 of 97 Testing & Results Test #2 Initial Gridline Test –Software Gridlines Overlay Match Existing Gridlines Well

Center for Intelligent Machines and Robotics Slide 85 of 97 Testing & Results Test #2

Center for Intelligent Machines and Robotics Slide 86 of 97 Testing & Results Test #2 Results –Blue: Camcorder –Magenta: Sony CCD –Yellow: Calculated Position –Red: Measured Position

Center for Intelligent Machines and Robotics Slide 87 of 97 Testing & Results Test #2 Results –1% to 2% Error Typically –Slightly Higher Error from Sony CCD Camera at More Distant Locations –Vehicle Location Lost Occasionally from Camcorder Video

Center for Intelligent Machines and Robotics Slide 88 of 97 Testing & Results Test #3 Test #3: Remote Controlled Truck –Inexpensive Radio Controlled Truck

Center for Intelligent Machines and Robotics Slide 89 of 97 Testing & Results Test #3 Test Setup –3 Camera Test Panasonic Camcorder Sony XC711 Industrial Camera X10 Wireless Camera (Onboard) –Vehicle Tested on Tiled Floor Space 8x8 Inch Floor Tiles Used as External Reference Point for Analyzing Tracking Data

Center for Intelligent Machines and Robotics Slide 90 of 97 Testing & Results Test #3 Initial Gridline Test –Gridlines Match Tile Grid Well

Center for Intelligent Machines and Robotics Slide 91 of 97 Testing & Results Test #3

Center for Intelligent Machines and Robotics Slide 92 of 97 Testing & Results Test #3 Results –Blue: Camcorder –Magenta: Sony CCD –Yellow: Calculated Position –Red: Measured Position

Center for Intelligent Machines and Robotics Slide 93 of 97 Testing & Results Test #3 Results –Significant Classifier Breakdown with Distance or Lighting Changes –Problematic Camera Model for Sony CCD Camera –Data from Sony CCD Camera Stayed Within 3% Error

Center for Intelligent Machines and Robotics Slide 94 of 97 Testing & Results Test #4 Test #4: Warehouse Test –Lighting Conditions Very Poor –Feature Color Information Washed Out –Test had to be Discarded

Center for Intelligent Machines and Robotics Slide 95 of 97 Conclusions Planar Visual Position System Works Well When: –Vehicle and Environment are Measured Well –Camera Properties are Known –Classifiers are Well-Defined Classification Technique Needs to Be Improved –Works Well with Simulations and Controlled Environments –Classifier Breaks Down when Conditions Become Bad

Center for Intelligent Machines and Robotics Slide 96 of 97 Future Work Adapt Live Video Capabilities Surface Positioning –Extend to Non-Planar Surfaces

Center for Intelligent Machines and Robotics Slide 97 of 97 Questions & Demo Questions ?