
Vision-Based Detection, Tracking and Classification of Vehicles using Features and Patterns with Automatic Camera Calibration
Neeraj K. Kanhere
Committee members: Dr. Stanley Birchfield (Advisor), Dr. John Gowdy, Dr. Robert Schalkoff, Dr. Wayne Sarasua
Clemson University, July 10th 2008

Why detect and track vehicles?
Intelligent Transportation Systems (ITS)
Data collection for transportation engineering applications
Incident detection and emergency response
Non-vision sensors:
Inductive loop detectors
Piezoelectric and fiber-optic sensors
The Infra-Red Traffic Logger (TIRTL)
Radar
Laser
Other sensors provide classification into up to 13 classes when traffic is sparse.
Problems with intrusive technologies:
Safety
Easily damaged by street sweepers
Road geometry sometimes makes data collection impossible
Impractical for tracking
Vision-based sensors:
No traffic disruption for installation and maintenance
Wide-area detection with a single sensor
Rich information for manual inspection

Available commercial video systems
Autoscope (Econolite)
Citilog
Vantage (Iteris)
Traficon
All systems rely on manually specified detection zones, which are prone to errors due to spillover and occlusions.

Problems with commercial systems
(Video)

Related research
Region/contour (Magee 04, Gupte et al. 02): computationally efficient; good results when vehicles are well separated
3D model (Ferryman et al. 98): large number of models needed for different vehicle types; limited experimental results
Markov random field (Kamijo et al. 01): good results on low-angle sequences, but accuracy drops by 50% when the sequence is processed in true order
Feature tracking (Kim 08, Beymer et al. 97): handles partial occlusions; good accuracy for free-flowing as well as congested traffic conditions

Overview of the research
Scope of this research includes three problems:
Vehicle detection and tracking (features and patterns)
Camera calibration
Vehicle classification and traffic parameter extraction

Problem of depth ambiguity
Pinhole camera model: all points along a ray through the focal point map to the same image location.
(Diagram: image plane, focal point, road)
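The ambiguity is easy to verify numerically: under the pinhole model, any two 3D points on the same ray through the focal point project to the same pixel. A minimal NumPy sketch with an illustrative, assumed camera (not values from the talk):

```python
import numpy as np

# Assumed pinhole camera: focal length 1000 px, principal point (640, 360),
# camera center 10 m behind the world origin along the optical axis.
K = np.array([[1000., 0., 640.],
              [0., 1000., 360.],
              [0., 0., 1.]])
Rt = np.hstack([np.eye(3), np.array([[0.], [0.], [10.]])])
P = K @ Rt  # 3x4 projection matrix

def project(X):
    """Project a 3D point (meters) to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two different 3D points on the same viewing ray (the camera center is
# at (0, 0, -10); both points lie on the ray with direction (2, 1, 20)):
p1 = project(np.array([2.0, 1.0, 10.0]))
p2 = project(np.array([4.0, 2.0, 30.0]))
# p1 and p2 are the same pixel, so depth cannot be recovered from one view.
```

Resolving this ambiguity is exactly what the plumb-line projection in the following slides is for.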

Problem of depth ambiguity
(Perspective view and top view of a four-lane road)
An image point on the roof of the trailer appears to lie in the second lane.

Problem of depth ambiguity
(Perspective view and top view)
As the vehicle moves, the same image point now appears to lie in the last lane.

Problem of depth ambiguity
(Side-by-side perspective and top views of lanes 1-4)

Problem of scale change
Grouping based on pixel distances fails when there is a large scale change in the scene.

Feature segmentation using 3D coordinates
(Block diagram: a background model and calibration feed background subtraction; features are then grouped by normalized cuts on an affinity matrix, single-frame estimation with a rigid motion constraint, and correspondence across frames.)
Neeraj Kanhere, Stanley Birchfield and Shrinivas Pundlik (CVPR 2005)
Neeraj Kanhere, Stanley Birchfield and Wayne Sarasua (TRR 2006)

Improved real-time implementation
(Pipeline: image frame → feature tracking and background subtraction → filtering → PLP estimation → grouping of stable features → grouping of unstable features → correspondence, validation and classification → vehicle trajectories and data, with calibration as an input.)
Differences from the earlier system:
Shadows are handled as a preprocessing step rather than in post-processing
Focus on segmenting the features that can be segmented reliably, rather than segmenting all features
Neeraj Kanhere and Stanley Birchfield (IEEE Transactions on Intelligent Transportation Systems, 2008)

Offline camera calibration
1) The user draws two lines (red) corresponding to the edges of the road
2) The user draws a line (green) corresponding to a known length along the road
3) Using either the road width or the camera height, a calibrated detection zone is computed

Background subtraction and filtering
Tracked features are classified as background features, vehicle features, or shadow features.
Only vehicle features are considered in further processing, reducing distraction from shadows.
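As a toy illustration of the subtraction step (a simple thresholded frame difference; the threshold and images are made up, and the real system additionally separates vehicle, shadow, and background features):

```python
import numpy as np

def subtract_background(frame, background, threshold=30):
    """Mark pixels as foreground where the frame differs from the
    background model by more than a threshold (grayscale assumed)."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > threshold

# Flat background with one bright 2x2 "vehicle" region:
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1:3, 1:3] = 200
mask = subtract_background(frame, background)  # True only inside the region
```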

Plumb line projection (PLP)
The PLP of a feature is its vertical projection onto the road in the foreground image. From this projection, an estimate of the 3D location of the feature is obtained.
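Assuming a calibrated pinhole camera over a flat road (z = 0), the PLP construction can be sketched as follows. The camera height, tilt, and focal length here are illustrative values, not ones from the dissertation:

```python
import numpy as np

# Illustrative camera: 10 m above the road, tilted 30 degrees down,
# focal length 800 px, principal point (320, 240). The road is z = 0.
phi = np.deg2rad(30)
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
R = np.array([[1., 0., 0.],
              [0., -np.sin(phi), -np.cos(phi)],
              [0.,  np.cos(phi), -np.sin(phi)]])
C = np.array([0., 0., 10.])
P = K @ np.hstack([R, (-R @ C)[:, None]])  # 3x4 projection matrix

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

def backproject_to_road(P, u):
    """Intersect the viewing ray of pixel u with the road plane z = 0."""
    H = P[:, [0, 1, 3]]                    # homography for points on z = 0
    X = np.linalg.solve(H, np.append(u, 1.0))
    return np.array([X[0] / X[2], X[1] / X[2], 0.0])

def feature_3d_from_plp(P, u_feat, u_plp):
    """The PLP pixel u_plp is assumed to lie on the road directly below
    the feature, which fixes (x, y); the height z then follows from the
    feature's own viewing ray."""
    x, y, _ = backproject_to_road(P, u_plp)
    # Solve (P0 - u P2).X = 0 and (P1 - v P2).X = 0 for the unknown z.
    A = np.array([P[0] - u_feat[0] * P[2], P[1] - u_feat[1] * P[2]])
    b = -(A[:, 0] * x + A[:, 1] * y + A[:, 3])
    z = np.linalg.lstsq(A[:, 2:3], b, rcond=None)[0][0]
    return np.array([x, y, z])

# Synthetic check: a feature 1.5 m above the road, 20 m down the lane.
X_true = np.array([2.0, 20.0, 1.5])
u_feat = project(P, X_true)
u_plp = project(P, np.array([2.0, 20.0, 0.0]))  # its plumb point on the road
X_est = feature_3d_from_plp(P, u_feat, u_plp)
```

In practice the plumb point comes from the bottom of the foreground blob below the feature, so the estimate degrades when that point is not truly on the road, which motivates the error analysis on the next slide.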

Error in 3D estimation with PLP
The error in the estimated 3D location is greater for points higher above the road.

Selecting stable features
A feature is stable if its estimated height is small (close to the ground) and the slope at its plumb line projection is small.

Grouping of stable features
Within each lane: seed growing is used to group features with similar Y coordinate.
Across lanes: groups with similar Y coordinate are merged if their combined width is acceptable.
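The within-lane step can be sketched as one-dimensional seed growing over the features' road-plane Y coordinates; the 2 m tolerance below is an assumed value:

```python
def group_by_y(ys, tol=2.0):
    """Group road-plane Y coordinates (meters): a feature joins the
    current group if it lies within tol of the group's last member,
    otherwise it seeds a new group."""
    groups = []
    for y in sorted(ys):
        if groups and y - groups[-1][-1] <= tol:
            groups[-1].append(y)   # grow the current group
        else:
            groups.append([y])     # start a new seed
    return groups

# Two vehicles roughly 12 m apart along the lane:
groups = group_by_y([10.1, 23.0, 10.7, 23.4, 11.3])
# groups -> [[10.1, 10.7, 11.3], [23.0, 23.4]]
```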

Grouping unstable features
The 3D location of an unstable feature is estimated with respect to each stable group by applying the rigid motion constraint to the centroid of the stable feature group.

Grouping unstable features
A likelihood that the unstable feature belongs to each group i is computed from its estimated 3D location, combining a score for the group, the validity of the location, and bias terms for large vehicles.
The unstable feature is assigned to the best-matching stable group a if it is likely to belong to a and unlikely to belong to the second-best matching group b.

Overview of the research
Scope of this research includes three problems:
Vehicle detection and tracking (features and patterns)
Camera calibration
Vehicle classification and traffic parameter extraction

Combining pattern recognition
Feature grouping:
Works under varying camera placement
Eliminates false counts due to shadows, but headlight reflections are still a problem
Needs calibration
Handles lateral occlusions but fails in the case of back-to-back occlusions
Pattern recognition:
Needs a trained detector for significantly different viewpoints
Is not distracted by headlight reflections
Does not need calibration
Handles back-to-back occlusions but has difficulty with lateral occlusions

Combining pattern recognition
(Figures: a lateral occlusion and a back-to-back occlusion between vehicles A and B)
Feature grouping handles lateral occlusions but fails on back-to-back occlusions; pattern recognition handles back-to-back occlusions but has difficulty with lateral ones.

Boosted Cascade Vehicle Detector (BCVD)
Training: offline supervised training of the detector using positive and negative training samples.
Run-time: vehicles are detected in new images.
Cascade architecture: Stage 1 → Stage 2 → … → Stage n → detection, with sub-windows rejected at each stage.

Rectangular features with integral images
Haar-like rectangular features allow fast computation and fast scaling.
With regions A-D and integral-image values at corners 1-4:
sum(A) = val(1)
sum(A+B) = val(2)
sum(A+C) = val(3)
sum(A+B+C+D) = val(4)
sum(D) = val(4) - val(3) - val(2) + val(1)
Viola and Jones, CVPR 2001
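The corner identity on the slide can be checked with a small integral-image sketch:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of img[:y, :x] (one row/column of padding)."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def rect_sum(ii, y0, x0, y1, x1):
    """Sum of img[y0:y1, x0:x1] from four lookups, mirroring
    val(4) - val(3) - val(2) + val(1)."""
    return ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]

rng = np.random.default_rng(0)
img = rng.integers(0, 255, size=(20, 20))
ii = integral_image(img)
# rect_sum(ii, 5, 5, 12, 15) equals img[5:12, 5:15].sum(), in constant time.
```

This constant-time rectangle sum is what makes evaluating thousands of Haar-like features per sub-window cheap enough for a real-time cascade.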

Sample results for static vehicle detection
(Note: detection is subject to a minimum window size, and the detector was not trained for the view from behind.)

Overview of the research
Scope of this research includes three problems:
Vehicle detection and tracking
Camera calibration
Vehicle classification and traffic parameter extraction

Two calibration approaches
Direct estimation of the projective transform M[3x4] from image-world correspondences:
The goal is to estimate the 11 elements of a matrix that maps points in 3D to the 2D image plane
Harder to incorporate scene-specific knowledge
Estimation of the parameters of an assumed camera model (f, h, Φ, θ, …):
The goal is to estimate camera parameters such as focal length and pose
Easier to incorporate known quantities and constraints

Direct estimation of the projective matrix
Advantage: no need to assume zero roll, square pixels, etc.
At least six point correspondences are required to estimate the 11 unknown parameters of the projective matrix.
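A standard way to do this is the direct linear transform (DLT): each correspondence contributes two linear equations in the 12 entries of the matrix, and the solution is the null vector of the stacked system (11 degrees of freedom once scale is fixed). A sketch with a synthetic, assumed camera:

```python
import numpy as np

def estimate_projection_dlt(world_pts, image_pts):
    """Estimate the 3x4 projection matrix from >= 6 world-image
    correspondences via SVD (solution up to scale; scale fixed below)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        Xh = np.array([X, Y, Z, 1.0])
        A.append(np.concatenate([Xh, np.zeros(4), -u * Xh]))
        A.append(np.concatenate([np.zeros(4), Xh, -v * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    P = Vt[-1].reshape(3, 4)
    return P / P[-1, -1]

def project(P, X):
    x = P @ np.append(np.asarray(X, float), 1.0)
    return x[:2] / x[2]

# Synthetic ground truth (illustrative values) and 7 non-coplanar points:
P_true = np.array([[800., 0., 320., 0.],
                   [0., 800., 240., 2400.],
                   [0., 0., 1., 10.]])
world = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1),
         (1, 1, 1), (2, 1, 3), (1, 3, 2)]
image = [project(P_true, X) for X in world]
P_est = estimate_projection_dlt(world, image)  # matches P_true up to scale
```

The correspondences must not all lie in one plane, which is why road-only measurements are not enough for this mode.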

Camera calibration modes
Assumptions: flat road surface, zero skew, square pixels, and principal point at the image center
Known quantities: width (W), length (L), or camera height (H)

Camera calibration modes
Assumptions: flat road surface, zero skew, square pixels, principal point at the image center, and zero roll angle
Known quantities: W, L, or H

Camera calibration modes
Assumptions: flat road surface, zero skew, square pixels, principal point at the image center, and zero roll angle
Known quantities: {W, L}, {W, H}, or {L, H}

Previous approaches to automatic calibration
Dailey et al. (2000), Schoepflin and Dailey (2003), Song et al. (2006), Zhang et al. (2008)
Common limitations:
Need a background image
Sensitive to image-processing parameters
Affected by spillover
Do not work at night

Our approach to automatic calibration
Underlying assumptions: zero roll, square pixels, and sufficient pan angle
Does not depend on road markings
Does not require scene-specific parameters such as lane dimensions
Works in the presence of significant spillover (low camera height)
Works under night-time conditions (no ambient light)
Neeraj Kanhere, Stanley Birchfield and Wayne Sarasua (TRR 2008)

Estimating vanishing points
The vanishing point in the direction of travel is estimated from vehicle tracks.
The orthogonal vanishing point is estimated from strong gradients or from headlights.

Automatic calibration algorithm
Estimated quantities: focal length (pixels), pan angle, tilt angle, and camera height.
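Under the stated assumptions (zero roll, square pixels, principal point at the image center), focal length, tilt, and pan follow in closed form from the two vanishing points; recovering the camera height additionally needs a known length in the scene and is omitted here. A sketch in principal-point-centered pixel coordinates, with illustrative numbers:

```python
import numpy as np

def calibrate_from_vps(vp_road, vp_ortho):
    """vp_road: vanishing point of the direction of travel; vp_ortho:
    vanishing point of the perpendicular road direction. Both are in
    pixels with the principal point at the origin; zero roll assumed."""
    u0, v0 = vp_road
    u1, v1 = vp_ortho
    f = np.sqrt(-(u0 * u1 + v0 * v1))   # orthogonality of the two directions
    tilt = np.arctan2(-v0, f)
    pan = np.arctan2(-u0 * np.cos(tilt), f)
    return f, tilt, pan

# Synthetic check: forward-project the vanishing points of an assumed
# camera (f = 1000 px, tilt 15 deg, pan 10 deg) and recover its parameters.
f_t, tilt_t, pan_t = 1000.0, np.deg2rad(15), np.deg2rad(10)
u0 = -f_t * np.tan(pan_t) / np.cos(tilt_t)
v0 = -f_t * np.tan(tilt_t)               # both VPs lie on the horizon line
u1 = f_t / (np.cos(tilt_t) * np.tan(pan_t))
f, tilt, pan = calibrate_from_vps((u0, v0), (u1, v0))
```

Note the pan angle must be nonzero (the "sufficient pan angle" assumption): as pan goes to zero, the orthogonal vanishing point moves off to infinity and the estimate degenerates.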

Overview of the research
Vehicle detection and tracking
Camera calibration
Vehicle classification and traffic parameter extraction

Vehicle classification based on axle counts
The FHWA highway manual lists 13 vehicle classes based on axle counts:
Motorcycles
Passenger cars
Other two-axle, four-tire single-unit vehicles
Buses
Two-axle, six-tire, single-unit trucks
Three-axle single-unit trucks
Four or more axle single-unit trucks
Four or fewer axle single-trailer trucks
Five-axle single-trailer trucks
Six or more axle single-trailer trucks
Five or fewer axle multi-trailer trucks
Six axle multi-trailer trucks
Seven or more axle multi-trailer trucks

Vehicle classification based on length
(Length data courtesy of Steven Jessberger, FHWA)

Vehicle classification based on length
Four classes for length-based classification, formed by grouping the 13 FHWA axle-based classes:
Motorcycles
Passenger cars
Other two-axle, four-tire single-unit vehicles
Buses
Two-axle, six-tire, single-unit trucks
Three-axle single-unit trucks
Four or more axle single-unit trucks
Four or fewer axle single-trailer trucks
Five-axle single-trailer trucks
Six or more axle single-trailer trucks
Five or fewer axle multi-trailer trucks
Six axle multi-trailer trucks
Seven or more axle multi-trailer trucks
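The binning itself is simple once a vehicle length is estimated; the thresholds below are illustrative assumptions, not the boundaries used in this work (agencies differ on where to place them):

```python
# Hypothetical length bins in meters (assumed for illustration only).
BINS = [(0.0, 3.0, "motorcycle"),
        (3.0, 6.5, "passenger vehicle"),
        (6.5, 12.0, "single-unit truck"),
        (12.0, float("inf"), "combination truck")]

def classify_by_length(length_m):
    """Return the length-based class label for an estimated vehicle length."""
    for lo, hi, label in BINS:
        if lo <= length_m < hi:
            return label

# classify_by_length(4.5) -> "passenger vehicle"
```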

Traffic parameters
Volumes
Lane counts
Speeds
Classification (three classes)

Results

Quantitative results

Results for automatic camera calibration

Demo

Conclusion
Research contributions:
A system for detection, tracking and classification of vehicles
Combination of feature tracking and background subtraction to group features in 3D
A pattern recognition-based approach to detection and tracking of vehicles
An automatic camera calibration technique that needs no pavement markings and works even in the absence of ambient light
Future work should be aimed at:
Extending automatic calibration to handle non-zero roll
Improving and extending vehicle classification
Long-term testing of the system in day and night conditions
A framework for combining pattern recognition with features

Questions and Discussion

Thank You

Previous approaches to automatic calibration
Dailey et al. (2000): avoids calculating camera parameters; based on assumptions that reduce the problem to 1-D geometry; uses parameters of the distribution of vehicle lengths
Schoepflin and Dailey (2003): lane activity map with peaks at lane centers; uses two vanishing points; the activity map is sensitive to spillover, and correcting it requires a background image
Song et al. (2006): assumes a known camera height; needs a background image; depends on detecting road markings
Common to all: do not work at night
