Perceptual Hysteresis Thresholding: Towards Driver Visibility Descriptors. Nicolas Hautière, Jean-Philippe Tarel, Roland Brémond. Laboratoire Central des Ponts et Chaussées, Paris, France

Presentation overview
1. Introduction
2. Angular resolution of a camera
3. Human visual system modeling
4. Discrete Cosine Transform
5. Design of a visibility criterion
6. Perceptual hysteresis thresholding
7. Towards road visibility descriptors
8. Conclusion

Introduction
 Most of the information used in driving is visual; reduced visibility thus leads to accidents.
 Reductions in visibility may have a variety of causes (road geometry, obstacles, adverse weather or lighting conditions).
 Different proposals exist in the literature to mitigate the danger of each of these situations using in-vehicle cameras.
 One objective is to warn drivers when they are driving too fast for the measured visibility conditions.
 Detecting the visible edges in the image is a critical step in assessing driver visibility.
 We propose such a technique, based on the Contrast Sensitivity Function (CSF) of the Human Visual System (HVS).

Angular resolution of a camera
Cycle per degree (cpd): this unit measures how well the details of an object can be seen separately, without blurring; it is the number of line pairs that can be distinguished within one degree of visual field.
♦ Let us express the angular resolution of a camera in cpd.
♦ With the notations of Fig. 1 (focal length f, pixel size t_pix), the length d on the sensor covered by a visual field of 1° is d = 2 f tan(0.5°).
♦ To obtain the maximum angular resolution r* of the camera in cpd, we divide d by the size of two pixels (one black and white alternation) of the CCD array: r* = d / (2 t_pix).
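The computation can be sketched in Python (the formulas d = 2 f tan(0.5°) and r* = d / (2 t_pix) are a reconstruction from the slide text, since the original equations are not reproduced in this transcript):

```python
import math

def max_angular_resolution_cpd(focal_mm, pixel_um):
    """Maximum angular resolution r* of a camera, in cycles per degree.

    d is the length on the sensor covered by a 1-degree visual field;
    one cycle (a black/white alternation) needs two pixels, hence the
    division of d by 2 * t_pix.
    """
    d_mm = 2.0 * focal_mm * math.tan(math.radians(0.5))  # d = 2 f tan(0.5 deg)
    t_pix_mm = pixel_um * 1e-3
    return d_mm / (2.0 * t_pix_mm)                       # r* = d / (2 t_pix)
```

For the sensor mentioned later in the slides (t_pix = 8.3 μm, f = 8.5 mm), this gives roughly 9 cpd.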

Human visual system modeling
♦ Our ability to discern low-contrast patterns varies with the size of the pattern, i.e. with its spatial frequency f (cpd).
♦ The CTF (Contrast Threshold Function) is a measure of the minimum contrast needed for an object (a sinusoidal grating) to become visible.
♦ The CTF is defined as 1/CSF, where the CSF is a Contrast Sensitivity Function (see Fig. 2). In this paper, we use the Mannos CSF, plotted against spatial frequency [cpd] in Fig. 3 and expressed by CSF(f) = 2.6 (0.0192 + 0.114 f) exp(-(0.114 f)^1.1).
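The Mannos CSF and the derived CTF can be sketched as follows (the coefficients are those of the published Mannos-Sakrison model, which we assume is the form used on the slide):

```python
import math

def csf_mannos(f_cpd):
    """Mannos-Sakrison contrast sensitivity function at frequency f (cpd)."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-(0.114 * f_cpd) ** 1.1)

def ctf(f_cpd):
    """Contrast threshold function, defined as 1 / CSF."""
    return 1.0 / csf_mannos(f_cpd)
```

The CSF peaks near 8 cpd (sensitivity close to 1) and falls off at lower and higher frequencies, so the CTF is U-shaped, as sketched in Fig. 2.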

Discrete Cosine Transform
 Let A = {a_ij} be a block of the original image and B = {b_ij} the corresponding block in the transformed image:
b_ij = (2/n) c_i c_j Σ_{x=0}^{n-1} Σ_{y=0}^{n-1} a_xy cos[(2x+1)iπ/(2n)] cos[(2y+1)jπ/(2n)],
where c_0 = 1/√2 and c_i = 1 for i = 1, ..., n-1.
 The maximum frequency of the DCT is obtained for the maximum resolution of the sensor, i.e. r* cpd.
 To express the b_ij in cpd, we use the scale factor obtained by computing the ratio between (1) and (4).
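A sketch of the block DCT and of a frequency mapping in Python (the orthonormal normalisation matches the c_0 = 1/sqrt(2) convention of the slide; the index-to-cpd mapping below, which sends the highest index to the sensor resolution r*, is only an assumption, since the slide's scale factor refers to equations (1) and (4) of the paper, not reproduced here):

```python
import numpy as np

def dct2(block):
    """2-D DCT-II of an n x n block with c_0 = 1/sqrt(2), c_i = 1 otherwise."""
    n = block.shape[0]
    c = np.ones(n)
    c[0] = 1.0 / np.sqrt(2.0)
    x = np.arange(n)
    B = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            cos_i = np.cos((2 * x + 1) * i * np.pi / (2 * n))
            cos_j = np.cos((2 * x + 1) * j * np.pi / (2 * n))
            B[i, j] = (2.0 / n) * c[i] * c[j] * np.sum(block * np.outer(cos_i, cos_j))
    return B

def coeff_frequency_cpd(i, j, n, r_star):
    """Hypothetical frequency (cpd) of coefficient (i, j): radial index,
    scaled so that the highest index n-1 maps to the resolution r*."""
    return float(np.hypot(i, j)) * r_star / (n - 1)
```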

Design of a visibility criterion: DCT vs. CTF
 We can now plot the DCT coefficients with respect to the CTF curve.
Fig. 4: curves of the CSF (solid) and of the CTF (dashed) for the sensor used to grab the images (t_pix = 8.3 μm, f = 8.5 mm).
Fig. 5: plot of the DCT in the marked blocks with respect to the CTF.

Design of a visibility criterion: Visibility Level Definition
 Visibility can be related to the contrast C of a target against its background of luminance L_b: C = ΔL / L_b, where ΔL is the luminance difference between target and background.
 For suprathreshold contrasts, the Visibility Level (VL) of a target can be quantified by the ratio VL = C_actual / C_threshold.
 As L_b is the same in both conditions, this equation reduces to VL = ΔL / ΔL_threshold.
 ΔL_threshold depends on many parameters and can be estimated using Adrian's empirical target visibility model (Adrian, 1989).
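The cancellation of L_b can be checked numerically (a sketch; the luminance values are invented for illustration):

```python
def weber_contrast(delta_L, L_b):
    """Weber contrast C = dL / L_b of a target against its background."""
    return delta_L / L_b

def visibility_level(delta_L, delta_L_threshold, L_b):
    """VL = C_actual / C_threshold; L_b cancels, leaving dL / dL_threshold."""
    return weber_contrast(delta_L, L_b) / weber_contrast(delta_L_threshold, L_b)

# The same target seen against backgrounds of 10 and 50 cd/m^2:
v1 = visibility_level(2.0, 0.25, L_b=10.0)
v2 = visibility_level(2.0, 0.25, L_b=50.0)
# v1 and v2 both reduce to 2.0 / 0.25, whatever the background luminance
```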

Design of a visibility criterion: Visibility Level for Periodic Targets
 We propose a new definition of the VL, denoted VL_p, valid for periodic targets, i.e. sinusoidal gratings.
 We first consider the ratio r_ij between a DCT coefficient of the block and the corresponding coefficient of the CTF: r_ij = b_ij / CTF_ij.
 By definition of the CSF, r_ij ≥ 1 means that the block contains visible edges.
 To define VL_p, we choose the greatest r_ij: VL_p = max_{i,j} r_ij.
Fig. 6: map of blocks with VL_p ≥ 1.
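A self-contained sketch of VL_p in Python (the Mannos CSF form and the coefficient-to-cpd mapping are assumptions carried over from the earlier slides; the paper's own scale factor is not reproduced in this transcript):

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (c_0 = 1/sqrt(2), c_i = 1 otherwise)."""
    i, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    M = np.sqrt(2.0 / n) * np.cos((2 * x + 1) * i * np.pi / (2 * n))
    M[0, :] /= np.sqrt(2.0)
    return M

def ctf_mannos(f_cpd):
    """Contrast threshold function 1/CSF, with the Mannos CSF."""
    return 1.0 / (2.6 * (0.0192 + 0.114 * f_cpd)
                  * np.exp(-(0.114 * f_cpd) ** 1.1))

def vl_p(block, r_star):
    """VL_p: the greatest ratio r_ij = |b_ij| / CTF_ij over the AC
    coefficients of an n x n block (r_star is the sensor resolution, cpd)."""
    n = block.shape[0]
    M = dct_matrix(n)
    B = np.abs(M @ block @ M.T)             # |b_ij|
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    f = np.hypot(i, j) * r_star / (n - 1)   # assumed index-to-cpd mapping
    r = B / ctf_mannos(f)
    r[0, 0] = 0.0                           # the DC term carries no edge
    return float(r.max())
```

A block containing a sharp step yields VL_p ≥ 1 (visible edge), while a uniform block yields VL_p ≈ 0.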

Perceptual hysteresis thresholding: Edge Detection by Segmentation
 The proposed approach may be used with different edge detectors (Canny-Deriche, zero-crossing approaches, Sobel).
 We propose an alternative method, which consists in finding the border F that maximizes the contrast C(s_0) between two parts of a block, without adding a threshold on this contrast value. The edges are the pixels on this border.
 This approach is based on Köhler's binarization method and is detailed in [16].
[16] N. Hautière, D. Aubert, and M. Jourlin. Measurement of local contrast in images, application to the measurement of visibility distance through use of an onboard camera. Traitement du Signal, 23(2):145-158, September 2006.
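A simplified reading of Köhler's binarization can be sketched as follows (hypothetical implementation: for every candidate grey level s, the associated border is the set of 4-adjacent pixel pairs straddling s, each contributing the contrast min(|f(x) - s|, |f(y) - s|); s_0 maximizes the mean contrast along its border):

```python
import numpy as np

def kohler_threshold(block):
    """Return (s0, C(s0)): the grey level whose associated border has
    maximal mean contrast, and that contrast.  Integer-step scan of the
    grey-level range; a sketch after [16], not the paper's exact code."""
    best_s, best_c = None, -1.0
    # horizontal and vertical 4-neighbour pixel pairs
    pairs = [(block[:, :-1], block[:, 1:]), (block[:-1, :], block[1:, :])]
    for s in np.arange(block.min(), block.max()):
        contrasts = []
        for a, b in pairs:
            lo, hi = np.minimum(a, b), np.maximum(a, b)
            on_border = (lo <= s) & (hi > s)       # pair straddles s
            if on_border.any():
                contrasts.append(np.minimum(s - lo[on_border],
                                            hi[on_border] - s))
        if contrasts:
            c = float(np.concatenate(contrasts).mean())
            if c > best_c:
                best_c, best_s = c, float(s)
    return best_s, best_c
```

On a block whose left half is 0 and right half is 10, the scan picks s_0 = 5, the level equidistant from both regions, without any user-supplied contrast threshold.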

Perceptual hysteresis thresholding: Hysteresis Thresholding on the VL_p
 In the usual hysteresis thresholding, a high threshold and a low threshold on the gradient magnitude are set.
 We propose to replace these thresholds with thresholds on the VL_p (cf. Fig. 7: principle of thresholding by hysteresis).
 The algorithm is thus as follows:
1. All possible edges are extracted;
2. The edges are selected according to their VL_p values, using a low threshold t_L and a high threshold t_H.
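The two-step algorithm above can be sketched on a per-pixel VL_p map (a sketch; in the paper the VL_p is computed per block, and the candidate edge set comes from the segmentation of the previous slide):

```python
import numpy as np

def hysteresis_on_vl(vl_map, t_low=1.0, t_high=10.0):
    """Keep pixels with VL >= t_high (seeds) plus any pixel with
    VL >= t_low that is 8-connected, through kept pixels, to a seed."""
    strong = vl_map >= t_high
    weak = vl_map >= t_low
    keep = strong.copy()
    stack = list(zip(*np.nonzero(strong)))
    h, w = vl_map.shape
    while stack:                      # flood fill from the strong seeds
        y, x = stack.pop()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and weak[ny, nx] and not keep[ny, nx]:
                    keep[ny, nx] = True
                    stack.append((ny, nx))
    return keep
```

Weak responses attached to a strongly visible edge survive, while isolated weak responses (noise) are discarded.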

Perceptual hysteresis thresholding: Result Samples (shown for t_L = 1, t_H = 10 and t_L = 1, t_H = 20)
No noisy features are detected regardless of the lighting conditions, even though the thresholds are fixed.  The method is thus clearly adaptive.

Perceptual hysteresis thresholding: Contrast Detection Threshold of the Human Eye
 The value of t_L is easy to choose, because it can be related to the HVS: setting t_L = 1 should be appropriate for most applications.
 The hysteresis thresholding now has only one free parameter!
 The value of t_H depends on the application.
 For lighting engineering, the CIE published guidelines for setting the VL according to the visual task complexity: VL = 7 is an adequate value for the night-time driving task.
 We can thus set t_H = 7 as a starting point. However, a psychophysical validation is necessary.

Towards road visibility descriptors
 Once visible edges have been extracted, they can be used in the context of an onboard camera to derive driver visibility descriptors, e.g. visibility distance estimation.
 Three steps remain to complete and validate the algorithm from a psychophysical point of view:
 An extension to color images may be necessary;
 The CSF is valid for a given adaptation level of the HVS, so it would be interesting to automatically select the proper CSF;
 Our results should be compared with sets of edges manually extracted by different people.

Conclusion
 We presented a visible-edge selector and used it for in-vehicle applications.
 It offers an alternative to traditional hysteresis filtering: the thresholds on the gradient magnitude are replaced by visibility levels.
 The low threshold can generally be fixed at 1, and some guidelines for setting the high threshold are proposed.
 This algorithm may be used to develop sophisticated driver visibility descriptors.
 Thereafter, it can be fused with other visibility descriptors to develop driving assistance systems that take all the visibility conditions into account.

Thank you for your attention! This work is partly funded by the French ANR project DIVAS, dealing with vehicle-infrastructure cooperative systems.