Free Space Detection for autonomous navigation in daytime foggy weather Nicolas Hautière, Jean-Philippe Tarel, Didier Aubert.

Presentation transcript:

Free Space Detection for autonomous navigation in daytime foggy weather Nicolas Hautière, Jean-Philippe Tarel, Didier Aubert

2 Light under Daytime Fog
[Slide diagram: daylight, scattering by fog, atmospheric veil, direct transmission]

3 Light attenuation by the atmosphere
- Koschmieder's law: L = L_0 e^{-βd} + A_∞ (1 - e^{-βd}),
  where L is the apparent luminance, L_0 the object luminance, A_∞ the atmospheric luminance, d the object distance and β the extinction coefficient.
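As a side note (not part of the original slides), the law above translates directly into code; the function below is a minimal sketch, with L0, A_inf, beta and d matching the symbols defined on this slide.

```python
import numpy as np

def apparent_luminance(L0, A_inf, beta, d):
    """Koschmieder's law: luminance observed at distance d through fog.

    L0    : intrinsic object luminance
    A_inf : atmospheric (sky) luminance
    beta  : extinction coefficient [1/m]
    d     : object distance [m]
    """
    t = np.exp(-beta * d)               # direct transmission
    return L0 * t + A_inf * (1.0 - t)   # attenuated signal + atmospheric veil
```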

4 Visibility Range under Daytime Fog
- From Koschmieder's law, the contrast of an object against the sky is C = (L - A_∞)/A_∞ = C_0 e^{-βd}: the intrinsic contrast C_0 is attenuated exponentially with distance.
- Visibility distance: "the greatest distance at which a black object of suitable dimensions can be recognized by day against the horizon sky" (CIE, 1987).
- For a black object (C_0 = 1) and a visibility contrast threshold of 5%: V_met = -ln(0.05)/β ≈ 3/β.
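A small illustrative helper (an addition of this write-up, not from the slides): solving C_0 e^{-βV} = 0.05 for V gives the meteorological visibility distance used throughout the presentation.

```python
import numpy as np

def visibility_distance(beta, contrast_threshold=0.05):
    """Meteorological visibility distance from the extinction coefficient.

    Solves contrast_threshold = exp(-beta * V) for V (black object, C0 = 1).
    With the CIE 5% threshold this is the classical V_met ≈ 3 / beta.
    """
    return -np.log(contrast_threshold) / beta

# e.g. beta = 0.05 m^-1  ->  V_met ≈ 60 m (the value used later in the slides)
```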

5 Flat road assumption
- Assuming a flat road, the depth of a road point imaged on row v is d = λ/(v - v_h) for v > v_h, with λ = H f / t_p,
  where v_h is the horizon line, H the camera mounting height, f the focal length and t_p the pixel size.
[Slide figure: pinhole camera above the road plane, image plane with coordinates (u, v), horizon line v_h, camera height H]
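For illustration only, the flat-road depth can be coded as below; the parameter names H, f and t_p follow the slide, and the grouping λ = H f / t_p is the assumed definition of λ.

```python
def road_point_depth(v, v_h, H, f, t_p):
    """Depth of a road pixel on image row v under the flat-road assumption.

    v   : image row of the road point (must lie below the horizon, v > v_h)
    v_h : image row of the horizon line
    H   : camera mounting height above the road [m]
    f   : focal length [m]
    t_p : pixel size [m]
    """
    if v <= v_h:
        raise ValueError("row must lie below the horizon line (v > v_h)")
    lam = H * f / t_p          # lambda, grouping the camera parameters
    return lam / (v - v_h)
```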

6 Method: instantiation of Koschmieder's law [Hautière et al., 2006a]
- Processing pipeline on a B&W image: extraction of a region of interest, fitting of a measurement bandwidth, measurement and derivation of the intensity curve, extraction of its inflection point, estimation of the meteorological visibility distance (e.g. V_met = 50 m), exploitation of the atmospheric veil.
- Assuming that the camera response function is linear, Koschmieder's law becomes, within the image space: I = R e^{-βd} + A_∞ (1 - e^{-βd}).
- Substituting the flat-road depth d = λ/(v - v_h) and taking the inflection point v_i of the resulting intensity curve I(v) gives the estimate β = 2 (v_i - v_h) / λ.
Hautière, N., Tarel, J.-P., Lavenant, J. and Aubert, D. (2006). Automatic Fog Detection and Measurement of the Visibility Distance through use of an Onboard Camera. Machine Vision and Applications, 17(1):8-20.
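The sketch below is an assumed, simplified reading of this pipeline (row-wise intensity profile, inflection point, then β and V_met); the actual method also fits a measurement bandwidth and a region of interest, which are omitted here, and the function name and arguments are choices of this write-up.

```python
import numpy as np

def estimate_beta_from_profile(intensity_column, v_h, lam):
    """Inflection-point estimate of the extinction coefficient (sketch).

    intensity_column : median intensity of each image row inside the
                       measurement bandwidth (1-D array indexed by row v)
    v_h              : horizon row
    lam              : lambda = H * f / t_p (flat-road constant)
    """
    profile = np.asarray(intensity_column, dtype=float)
    d1 = np.abs(np.gradient(profile))            # |dI/dv| along the rows
    rows = np.arange(len(profile))
    below = rows > v_h                            # only rows below the horizon
    v_i = rows[below][np.argmax(d1[below])]       # steepest change = inflection point
    beta = 2.0 * (v_i - v_h) / lam                # slide 6 estimate
    v_met = -np.log(0.05) / beta                  # CIE 5% contrast threshold
    return beta, v_met
```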

7 Recovery of the object luminance (1)
- The extinction coefficient β is now determined.
- The atmospheric luminance A_∞ is given by the measurement bandwidth above the horizon line.
- Let us compute R by inverting Koschmieder's law: R = I e^{βd} + A_∞ (1 - e^{βd}).
- There is still one unknown: the depth d.
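A one-line restoration helper, added here for clarity (not from the slides): it simply applies the inverted law above to an observed intensity I at an assumed depth d.

```python
import numpy as np

def restore_luminance(I, A_inf, beta, d):
    """Invert Koschmieder's law to recover the intrinsic luminance R.

    I     : observed intensity (scalar or array)
    A_inf : atmospheric luminance (sky intensity)
    beta  : extinction coefficient [1/m]
    d     : assumed depth of the pixel [m]
    """
    e = np.exp(beta * d)
    return I * e + A_inf * (1.0 - e)   # equivalently A_inf + (I - A_inf) * e
```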

8 Recovery of the object luminance (2)
- The previous equation may be rewritten as: R = A_∞ + (I - A_∞) e^{βd}.
- The contrast after restoration with respect to the background sky is thus: |R - A_∞|/A_∞ = (|I - A_∞|/A_∞) e^{βd}.
- The contrast restoration is therefore exponential in the depth d (illustration with β = 0.05, i.e. a visibility of about 60 m, and A_∞ = 255).
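To make the exponential behaviour concrete, here is a tiny numeric check using the slide's values β = 0.05 and A_∞ = 255; the chosen distances are arbitrary examples of this write-up.

```python
import numpy as np

beta, A_inf = 0.05, 255.0            # slide values: V_met ≈ 60 m, sky at 255
for d in (10.0, 30.0, 60.0):
    gain = np.exp(beta * d)          # contrast w.r.t. the sky is multiplied by e^(beta*d)
    print(f"d = {d:5.1f} m  ->  contrast gain = {gain:5.2f}")
# prints roughly 1.65 at 10 m, 4.48 at 30 m and 20.09 at 60 m
```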

9 Free space segmentation
- Under the flat-world assumption, vertical objects are falsely restored, since their distance is largely overestimated.
- Their intensity becomes null after the restoration process.
- This drawback can be turned to our advantage to segment the vertical objects.
- The free space is thus segmented by extracting the biggest connected component in front of the vehicle (whatever the segmentation method used).
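Below is a heavily simplified sketch of this idea, assuming an 8-bit grayscale image and using SciPy's connected-component labelling; the threshold value and the "touches the bottom row" criterion are choices of this write-up, not necessarily those of the authors.

```python
import numpy as np
from scipy import ndimage

def segment_free_space(image, v_h, lam, beta, A_inf, dark_threshold=1.0):
    """Sketch of the free-space segmentation idea from slide 9.

    Every pixel below the horizon is restored as if it lay on the road plane.
    Vertical objects get a largely overestimated depth, so their restored
    intensity collapses to zero, while the road itself is restored plausibly.
    The free space is then the largest connected component of surviving
    pixels that touches the bottom of the image (in front of the vehicle).
    """
    h, w = image.shape
    v = np.arange(h, dtype=float).reshape(-1, 1)
    below = v > v_h

    # Flat-road depth for each row strictly below the horizon line.
    depth = np.where(below, lam / np.where(below, v - v_h, 1.0), 0.0)

    restored = A_inf + (image.astype(float) - A_inf) * np.exp(beta * depth)
    restored = np.clip(restored, 0.0, 255.0)

    mask = below & (restored > dark_threshold)   # pixels that survived restoration
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask, dtype=bool)

    bottom = labels[-1, :]
    bottom = bottom[bottom > 0]
    if bottom.size == 0:
        return np.zeros_like(mask, dtype=bool)
    free_label = np.bincount(bottom).argmax()    # component covering most of the bottom row
    return labels == free_label
```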

10 Free space segmentation (foggy weather)

11 Free space segmentation (rainy weather)

12 Thank you for your attention