Robust statistical method for background extraction in image segmentation
Doug Keen
March 29, 2001


Source Paper
Rodriguez, Arturo A. and Mitchell, O. Robert, "Robust statistical method for background extraction in image segmentation," Stochastic and Neural Methods in Signal Processing, Image Processing, and Computer Vision, Vol. 1569, 1991.

Problem
Given a digital image, how can we differentiate objects of interest from the background?

Problem
One simple and fast method is thresholding:
– Create a graytone histogram of a sample of the background
– Find threshold values for the left and right shoulders of the background histogram
– Compare the graytone value of every other pixel in the image to these thresholds; if a pixel falls between the left and right background thresholds, that pixel belongs to the background
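The bullets above can be sketched as follows. This is a minimal illustration, not the paper's method: the spread factor `k` and the use of a plain mean/standard deviation to locate the histogram shoulders are simplifying assumptions.

```python
import numpy as np

def classify_against_background(image, background_sample, k=2.5):
    """Label a pixel as background (True) if its graytone falls between
    left and right shoulder thresholds derived from a background sample.
    k is a hypothetical spread factor, not a value from the paper."""
    mu = background_sample.mean()
    sigma = background_sample.std()
    t_left, t_right = mu - k * sigma, mu + k * sigma
    return (image >= t_left) & (image <= t_right)

# Toy image: near-uniform background with a bright 3x3 object patch.
img = np.full((8, 8), 100.0)
img += np.arange(8) * 0.5        # slight column gradient in the background
img[2:5, 2:5] = 200.0            # bright object
bg = img[0, :]                   # sample a background-only row
mask = classify_against_background(img, bg)
print(mask[0, 0], mask[3, 3])    # True False (background vs object pixel)
```

Note that the object pixels fail the test only because they lie far outside the background's histogram; a subtler object would need the local, block-based approach described later in these slides.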

Problems with thresholding
– Local background variations may be small, but background variations across the entire image may be substantial
– Thresholding also assumes just a single background

Problems with thresholding
In applications with a fixed background, one could do an empirical analysis of a background-only image.
– But once an object is introduced, the graytone properties of the background could change (due to the reflectance properties of the object, scene illumination, automatic gain control of the camera, and/or shadows)

Alternate solution
A background extraction method based on local statistical measurements performed on log-transformed image data:
– Works despite smooth changes in background
– Doesn't matter if objects are darker or brighter than the background
– Works with multiple backgrounds
– Works without a priori knowledge of the background of the image

Log-Transformation
Log-transformation can help reduce various illumination effects in the image:
g(x, y) = i(x, y) · f(x, y)
where g(x, y) is the grayscale value of a pixel, i(x, y) is the illumination component, and f(x, y) is the reflectance component.
A log-transformation causes multiplicative illumination effects to become additive:
log g(x, y) = log i(x, y) + log f(x, y)
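A small numerical check of this property: under two different illumination levels, the raw object/background contrast changes, but the log-domain contrast is identical. The reflectance values and illumination factors below are arbitrary toy numbers.

```python
import numpy as np

# Multiplicative illumination: g = i * f. After a log transform, the
# illumination becomes an additive offset, so a change in illumination
# no longer scales the object/background contrast.
f = np.array([50.0, 50.0, 200.0, 200.0])   # reflectance: background, object
i_dim, i_bright = 0.5, 2.0                  # two illumination levels

g_dim, g_bright = i_dim * f, i_bright * f
# Raw contrast (difference) between object and background depends on i:
print(g_dim[2] - g_dim[0], g_bright[2] - g_bright[0])      # 75.0 300.0
# Log-domain contrast is the same under both illuminations:
print(np.log(g_dim[2]) - np.log(g_dim[0]),
      np.log(g_bright[2]) - np.log(g_bright[0]))
```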

Log-Transformation
In an image with graytone values g ∈ [g_L, g_R], the log-transformation is expressed as:
L(g) = P · (g_R − g_L) · log(g / g_L) / log(g_R / g_L)
where P is a fraction of the original gray-scale resolution.
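A sketch of this transform. The exact formula in the source paper is not reproduced in the transcript; the normalized form below is an assumption consistent with the slide's description (it maps [g_L, g_R] onto a range that is a fraction P of the original resolution).

```python
import numpy as np

def log_transform(g, g_left, g_right, P=1.0):
    """Log-transform graytones in [g_left, g_right] so the output range
    is a fraction P of the original resolution (g_right - g_left).
    Maps g_left -> 0 and g_right -> P * (g_right - g_left)."""
    g = np.asarray(g, dtype=float)
    scale = P * (g_right - g_left) / np.log(g_right / g_left)
    return scale * np.log(g / g_left)

vals = log_transform([10, 100], g_left=10, g_right=100, P=1.0)
print(vals)   # endpoints map to 0 and P * (g_right - g_left) = 90
```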

Background Extraction Segmentation Method
Step 1: Decompose the image into a grid of non-overlapping blocks. Blocks along the periphery are boundary blocks; blocks that are not boundary blocks are interior blocks.
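Step 1 can be sketched as below; the block size and the dict-based return type are implementation choices, not from the paper.

```python
import numpy as np

def decompose_into_blocks(image, block_size):
    """Split an image into non-overlapping blocks and label each block as
    'boundary' (on the periphery of the grid) or 'interior'.
    Returns a dict mapping (row, col) grid indices to (block, label)."""
    h, w = image.shape
    nrows, ncols = h // block_size, w // block_size
    blocks = {}
    for r in range(nrows):
        for c in range(ncols):
            blk = image[r*block_size:(r+1)*block_size,
                        c*block_size:(c+1)*block_size]
            on_edge = r in (0, nrows - 1) or c in (0, ncols - 1)
            blocks[(r, c)] = (blk, "boundary" if on_edge else "interior")
    return blocks

grid = decompose_into_blocks(np.zeros((16, 16)), block_size=4)
labels = [lab for _, lab in grid.values()]
print(labels.count("boundary"), labels.count("interior"))   # 12 4
```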

Step 2: For each block in the image, calculate the mean graytone and the left and right standard deviations of the log-transformed histogram (denoted by μ, T_left, and T_right respectively).
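One way to compute these per-block statistics is below. Interpreting T_left and T_right as one-sided standard deviations taken from the pixels below and above the mean is an assumption; the paper's exact estimator may differ.

```python
import numpy as np

def block_statistics(block):
    """Mean graytone plus one-sided ('left' and 'right') standard
    deviations of a block, computed separately from the pixels at or
    below and at or above the mean."""
    vals = np.asarray(block, dtype=float).ravel()
    mu = vals.mean()
    left = vals[vals <= mu]
    right = vals[vals >= mu]
    t_left = np.sqrt(np.mean((left - mu) ** 2))
    t_right = np.sqrt(np.mean((right - mu) ** 2))
    return mu, t_left, t_right

# A right-skewed block: the right deviation exceeds the left one.
mu, t_l, t_r = block_statistics([4, 5, 5, 6, 6, 14])
print(mu, t_l, t_r)
```

Tracking the two shoulders separately is what lets the method handle asymmetric background histograms, where a single symmetric standard deviation would misplace one threshold.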

Step 2 (cont.): A block can be considered homogenous if Q percent or more of its pixels belong to the same class; specifically, if T_left and T_right both fall below a bound proportional to S, the standard deviation of the entire image. Homogenous blocks can be object-homogenous or background-homogenous. If a block is not homogenous, it is considered uncertain.

Step 2 (cont.): Homogenous boundary blocks are assumed to be background blocks, and their measured statistical parameters are taken as their background distributions.

Step 3: Non-homogenous boundary blocks are examined by starting from an arbitrary homogenous boundary block and proceeding clockwise (or counterclockwise) around the boundary blocks.
– The background parameters of a non-homogenous boundary block are estimated from the two nearest background-homogenous blocks
– Once its background parameters have been estimated, the block is marked background-homogenous

Step 4: Interior blocks are then examined in a sequence that ensures the block being examined has 3 adjacent blocks that have already had their background parameters estimated: one horizontally adjacent, one vertically adjacent, and one diagonally adjacent between the two other adjacent blocks. One such configuration (X = block with estimated parameters, Y = block being examined; other orientations are analogous):

X X
X Y
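The paper does not spell out the scan order in this transcript, but a plain raster scan of the interior (after the boundary blocks are done) is one order that satisfies the requirement, as this check confirms:

```python
# After the boundary blocks are handled, a raster scan of interior
# blocks guarantees that the left (horizontal), top (vertical), and
# top-left (diagonal) neighbours already have estimated parameters.
nrows, ncols = 5, 6
done = {(r, c) for r in range(nrows) for c in range(ncols)
        if r in (0, nrows - 1) or c in (0, ncols - 1)}   # boundary blocks

ok = True
for r in range(1, nrows - 1):
    for c in range(1, ncols - 1):
        ok &= {(r, c - 1), (r - 1, c), (r - 1, c - 1)} <= done
        done.add((r, c))
print(ok)   # True: every interior block saw 3 finished neighbours
```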

Step 4 (cont.): A homogenous interior block is considered object-homogenous if its measured graytone mean and the background mean of a vertically or horizontally adjacent block are significantly different; otherwise it is background-homogenous. Background parameters of background-homogenous blocks are measured, while background parameters of object-homogenous or uncertain blocks are estimated.

Step 5: Once the background parameters of each block have been measured or estimated, calculate the left and right shoulder thresholds of the background distribution:
t_left = μ − k · T_left
t_right = μ + k · T_right
where k is a prespecified constant (2.5 in the case of this study's experiments). Assign these threshold values to the center of each block.

Step 5 (cont.): Threshold values are positioned at the block centers (indicated by dots in the original figure).

Step 6: The left and right shoulder thresholds for a pixel (x, y) are obtained by bilinear interpolation of the left and right shoulder thresholds assigned to the four block centers surrounding that pixel. Let L[g(x, y)] be the log-transformed graytone of pixel (x, y):
– Pixel (x, y) is darker than the background if L[g(x, y)] < t_left
– Pixel (x, y) is lighter than the background if L[g(x, y)] > t_right
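The interpolation step can be sketched as follows. The coordinate convention (thresholds anchored at block centers, clamping at the image border) is an implementation assumption.

```python
import numpy as np

def interp_threshold(block_thresh, block_size, x, y):
    """Bilinearly interpolate a per-block threshold surface to pixel
    (x, y). block_thresh[r, c] holds the threshold assigned to the
    centre of block (r, c)."""
    # Continuous position in block-centre coordinates.
    gy = (y - block_size / 2) / block_size
    gx = (x - block_size / 2) / block_size
    r0 = int(np.clip(np.floor(gy), 0, block_thresh.shape[0] - 2))
    c0 = int(np.clip(np.floor(gx), 0, block_thresh.shape[1] - 2))
    dy, dx = gy - r0, gx - c0
    t = block_thresh
    return ((1 - dy) * (1 - dx) * t[r0, c0] + (1 - dy) * dx * t[r0, c0 + 1]
            + dy * (1 - dx) * t[r0 + 1, c0] + dy * dx * t[r0 + 1, c0 + 1])

thresh = np.array([[10.0, 20.0], [30.0, 40.0]])   # 2x2 grid of blocks
# The midpoint of the four block centres averages all four values.
print(interp_threshold(thresh, block_size=8, x=8, y=8))   # 25.0
```

Interpolating the thresholds, rather than applying one threshold per block, is what lets the classification vary smoothly across the image instead of jumping at block edges.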

Step 6 (cont.): To preserve local brightness relationships between object pixels and background pixels, keep two floating histograms as pixels are being classified: one of bright pixels (F_b) and one of dark pixels (F_d). The abscissa of these floating histograms represents the difference between the log-transformed data value of a pixel and the log-transformed value of the corresponding background shoulder threshold at that pixel.
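A toy illustration of what enters the two floating histograms. Holding the thresholds constant across pixels and taking unsigned distances are simplifications for clarity; in the method, each pixel uses its own interpolated thresholds.

```python
import numpy as np

# Each classified pixel contributes the difference between its
# log-transformed value and the shoulder threshold it crossed, so the
# histograms record how far pixels sit beyond the local background.
log_vals = np.array([12.0, 30.0, 55.0, 90.0, 110.0])
t_left, t_right = 40.0, 80.0       # per-pixel thresholds (constant here)

dark = log_vals[log_vals < t_left]
bright = log_vals[log_vals > t_right]
F_d = t_left - dark                # distances below the left shoulder
F_b = bright - t_right             # distances above the right shoulder
print(F_d.tolist(), F_b.tolist())  # [28.0, 10.0] [10.0, 30.0]
```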

Step 7: Once every pixel has been classified by the background extraction procedure, calculate the mean and the left and right standard deviations of F_b and F_d, then calculate left and right shoulder thresholds of F_b and F_d in the same manner as in Step 5.

Step 7 (cont.): Final pixel classification can then be obtained by comparing each pixel's floating-histogram value against these thresholds, assigning the label associated with each of the descriptive categories.

Experimental Results
See figures in handout.

Assessment
Pros:
– No a priori knowledge required
– Works with non-uniform backgrounds
– Performs successfully without parameter adjustments or human interaction
– Works under different lighting conditions
– Doesn't matter if the object is darker or brighter than the background; there doesn't even have to be an object
Cons:
– Only tested on industrial scenes and moderately complex outdoor scenes; unproven on natural scenes
– Estimation of background parameters may be error-prone
– Only works on grayscale images

Possible Improvements
– Color image support
– Processing spatial and local information in addition to brightness information to reduce misclassified pixels
– Robust, real-time performance on natural scenes