Robust Lane Detection and Tracking

Robust Lane Detection and Tracking Prasanth Jeevan, Esten Grotli (Esten is at the CDC conference) The bulk of the talk introduces this one algorithm (high level and quick), then presents our results

Motivation Autonomous driving Driver assistance (collision avoidance, more precise driving directions)

Some Terms Lane detection - draws the boundaries of a lane in a single frame Lane tracking - uses temporal coherence to track boundaries across a frame sequence Vehicle orientation - position and orientation of the vehicle within the lane boundaries

Goals of our lane tracker Recover the lane boundary for straight or curved lanes in a suburban environment Recover the orientation and position of the vehicle within the detected lane boundaries Use temporal coherence for robustness The first two follow from the lane model

Starting with lane detection Extends the lane-detection work of Lopez et al. (2005) Ridgel feature Hyperbola lane model RANSAC for model fitting Real-time Our extension: temporal coherence for lane tracking If you recall, we implemented another lane tracker (Zhou) with the goal of making it real-time; it was really far from real-time

The Setup Data: University of Sydney (Berkeley-Sydney Driving Team) 640x480, grayscale, 24 fps Suburban area of Sydney Lane Model: Hyperbola 2 lane boundaries 4 parameters 2 for vehicle position and orientation 2 for lane width and curvature Features: Ridgels Picks out the center line of lane markers More robust than simple gradient vectors and edges Fitting: RANSAC Robustly fit the lane model to ridgel features Lots of lane trackers test on high-speed highway video

Setup Faded markings, one-sided markings, lots of clutter

Setup Usually by testing on the highway, researchers try to show that they can handle other lane markings or shadows on the road

Setup Dynamic range of the image also changes

Lane Model Assumes a flat road and constant curvature L and K are the lane width and road curvature θ and x0 are the vehicle's orientation and position The pitch of the camera is assumed to be fixed Our model will estimate these parameters

Lane Model v is the image row of a lane boundary point uL and uR are the image columns of the left and right lane boundaries, respectively
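To make the model concrete, here is a minimal sketch of a hyperbola-pair boundary evaluator. The parameterization (c, b, k, vh, width_slope) is illustrative, not necessarily the exact (x0, θ, L, K) form of the talk: both boundaries share the horizon row vh and the curvature term k, and differ only in their linear slope.

```python
def lane_boundaries(v, c, b, k, vh, width_slope):
    """Columns (uL, uR) of the two lane boundaries at image row v.

    Hyperbola-pair sketch: u(v) = c + slope*(v - vh) + k/(v - vh).
    Both boundaries share the horizon row vh and the curvature term k;
    their linear slopes differ by width_slope, which encodes the lane
    width.  These parameter names are illustrative placeholders, not
    the exact (x0, theta, L, K) parameterization of the talk.
    """
    if v <= vh:
        raise ValueError("rows must lie below the horizon row vh")
    hyper = k / (v - vh)  # curvature term: dominates near the horizon
    uL = c + (b - width_slope / 2.0) * (v - vh) + hyper
    uR = c + (b + width_slope / 2.0) * (v - vh) + hyper
    return uL, uR
```

With k = 0 the model degenerates to two straight lines meeting at the horizon; a nonzero k bends both boundaries increasingly as v approaches vh, which is why a single hyperbola per boundary can cover both straight and constant-curvature lanes.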

Ridgel Feature Center line of elongated high-intensity structures (lane markers) Originally proposed for rigid registration of CT and MRI head volumes

Ridgel Feature Recovers the dominant gradient orientation at each pixel Invariant under monotonic grey-level transforms (shadows) and rigid movements of the image Purple lines denote the dominant gradient orientation
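One common single-scale ridgeness measure, in the spirit of (though not necessarily identical to) the one used here, is the negative divergence of the normalized gradient field: it peaks where gradients on the two sides of a bright stripe point toward each other, i.e. at the marker's center line. Because only gradient directions enter, it is unchanged by monotonic grey-level transforms. A practical version would add multiscale smoothing; this sketch is for illustration only.

```python
import numpy as np

def ridgeness(img, eps=1e-9):
    """Ridgeness sketch: -div of the normalized gradient field.

    Large positive values mark the center lines of bright elongated
    structures (lane markers), where the gradient vectors on either
    side converge.  Single-scale, unsmoothed: an illustrative stand-in
    for the multiscale ridgeness used in the actual system.
    """
    gy, gx = np.gradient(img.astype(float))   # per-axis image gradients
    mag = np.sqrt(gx ** 2 + gy ** 2) + eps
    nx, ny = gx / mag, gy / mag               # keep direction, drop magnitude
    dnx = np.gradient(nx, axis=1)             # d(nx)/dx
    dny = np.gradient(ny, axis=0)             # d(ny)/dy
    return -(dnx + dny)                       # converging gradients -> positive
```

On a synthetic image with a smooth vertical bright stripe, the response is maximal along the stripe's center column, which is exactly the behavior wanted for picking out lane-marker centerlines.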

Fitting with RANSAC Need a minimum of four ridgels to solve for L, K, θ, and x0 Robust to clutter (outliers)

Fitting with RANSAC Error function combines a distance measure based on the number of pixels between the feature and the boundary, and the difference in orientation between the ridgel and the closest lane boundary point
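The RANSAC loop itself can be sketched as below. For simplicity this sketch fits a single boundary of the form u = c + b(v - vh) + k/(v - vh) with a known horizon row vh, so the minimal sample is three points rather than four ridgels, and only the pixel-distance term of the error is scored (the orientation term is omitted). The thresholds and iteration count are illustrative, not the talk's values.

```python
import random

def fit_hyperbola(samples, vh):
    """Solve u = c + b*(v - vh) + k/(v - vh) from three (v, u) points.
    The model is linear in (c, b, k), so this is a 3x3 linear system,
    solved by Cramer's rule to stay dependency-free."""
    rows = [(1.0, v - vh, 1.0 / (v - vh)) for v, _ in samples]
    us = [u for _, u in samples]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3([list(r) for r in rows])
    if abs(d) < 1e-12:          # degenerate sample (e.g. repeated rows)
        return None
    params = []
    for col in range(3):        # replace one column with u to get each unknown
        m = [list(r) for r in rows]
        for i in range(3):
            m[i][col] = us[i]
        params.append(det3(m) / d)
    return tuple(params)        # (c, b, k)

def ransac_lane(points, vh, iters=200, tol=2.0, seed=0):
    """Return the (c, b, k) hypothesis with the most inliers within tol px."""
    rng = random.Random(seed)
    best, best_inliers = None, -1
    for _ in range(iters):
        model = fit_hyperbola(rng.sample(points, 3), vh)
        if model is None:
            continue
        c, b, k = model
        inliers = sum(1 for v, u in points
                      if abs(c + b * (v - vh) + k / (v - vh) - u) <= tol)
        if inliers > best_inliers:
            best, best_inliers = model, inliers
    return best, best_inliers
```

The full four-parameter version works the same way: draw a minimal sample of ridgels, solve for (L, K, θ, x0), and keep the hypothesis that the most ridgels agree with under the combined distance-plus-orientation error.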

Temporal Coherence At 24 fps the lane boundaries in sequential frames are highly correlated Clutter can be removed more intelligently based on this coherence It doesn't make sense to use global (whole-image) fixed thresholds to process a (slowly) varying scene

Classifying and removing ridgels Using the previous lane boundary, dynamically classify left and right ridgels per row by image-gradient comparison "Far left" and "far right" ridgels are removed
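The gating step can be sketched as follows: given the previous frame's boundary column for each row, ridgels are labeled left or right of it, and anything beyond a margin is discarded as clutter. The margin value and data layout are illustrative, and the per-row gradient comparison from the talk is omitted here.

```python
def gate_ridgels(ridgels, prev_boundary_u, margin=30.0):
    """Classify ridgels against the previous lane boundary, drop clutter.

    ridgels: list of (v, u) image points.
    prev_boundary_u: function mapping image row v -> boundary column u
        from the previous frame's fitted model.
    Returns (left, right): ridgels within `margin` pixels on each side of
    the predicted boundary; "far left" / "far right" ridgels are removed.
    Illustrative sketch: margin and layout are assumptions, and the
    image-gradient comparison used in the talk is not modeled.
    """
    left, right = [], []
    for v, u in ridgels:
        d = u - prev_boundary_u(v)
        if -margin <= d < 0:
            left.append((v, u))
        elif 0 <= d <= margin:
            right.append((v, u))
        # |d| > margin: far-left/far-right clutter, discarded
    return left, right
```

This is where temporal coherence pays off: instead of a global fixed threshold, each row gets its own acceptance window centered on where the boundary was a frame ago.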

Velocity Measurements An optical encoder provides velocity A model for vehicle motion updates the lane model parameters θ and x0 for the next frame
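With speed from the encoder, a simple planar constant-curvature motion model can predict how the pose parameters evolve between frames. This is an illustrative sketch, not the authors' exact model: driving a distance v·dt along a lane of curvature K rotates the vehicle relative to the lane by v·dt·K, while a nonzero heading drifts the lateral offset.

```python
import math

def predict_pose(x0, theta, speed, K, dt):
    """Predict lateral offset x0 and orientation theta for the next frame.

    speed: vehicle speed from the optical encoder (m/s)
    K: road curvature (1/m), dt: frame period (s).
    Simple constant-curvature kinematics; an illustrative assumption,
    not the talk's exact vehicle motion model.
    """
    ds = speed * dt                      # arc length traveled this frame
    x0_next = x0 + ds * math.sin(theta)  # heading drifts the lateral offset
    theta_next = theta + ds * K          # lane curvature rotates the heading
    return x0_next, theta_next
```

The predicted (x0, θ) then seeds the next frame's fit, so RANSAC starts from a model that is already close to the truth instead of from scratch.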

Results, original algorithm Ignore the first few frames because there isn't any lane We noticed that curb detection is quite good The constant-curvature assumption may not be valid

Results, algorithm with temporal coherence

Conclusion More robust by incorporating temporal coherence Still needs work Theoretical speed-up by pruning ridgel features The ridgel feature is robust Lane model assumptions may not hold on non-highway roads

Future Work Implement in C, possibly using OpenCV Cluster ridgels together based on location Possibly work with the Berkeley-Sydney Driving Team to use other sensors (LIDAR, IMU, etc.) to make this more robust

Acknowledgements Allen Yang Dr. Jonathan Sprinkle University of Sydney Professor Kosecka

Important works reviewed/considered Zhou et al. 2006: particle filter and tabu search; hyperbolic lane model; Sobel edge features Zu Kim 2006: particle filtering and RANSAC; cubic-spline lane model; no vehicle orientation/position estimation; template image matching for features