J. Mike McHugh, Janusz Konrad, Venkatesh Saligrama and Pierre-Marc Jodoin, IEEE Signal Processing Letters. Professor: Jar-Ferr Yang. Presenter: Ming-Hua Tang.


- Introduction
- Background subtraction as a hypothesis test
- Foreground modeling
- Markov modeling of change labels
- Experimental results

- Change detection is based on thresholding intensity differences.
- We adapt the threshold to varying video statistics by means of two statistical models.
- In addition to a nonparametric background model, we introduce a foreground model based on a small spatial neighborhood to improve discrimination sensitivity.

- We also apply a Markov model to the change labels to improve the spatial coherence of the detections.
- Our approach, by using a spatially variable detection threshold, offers improved spatial coherence of the detections.

- Background subtraction involves two distinct processes that work in a closed loop (a minimal sketch follows):
  1. Background modeling: a model of the background in the field of view of a camera is created and periodically updated.
  2. Foreground detection: a decision is made as to whether a new intensity fits the background model; the resulting change label field is fed back into background modeling.
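A minimal sketch of this closed loop in Python, assuming grayscale frames stored as 2-D NumPy arrays; the median-plus-threshold detector inside is only a placeholder for the background and foreground models developed on the following slides, and the function and parameter names are illustrative rather than taken from the paper.

```python
import numpy as np

def process_sequence(frames, N=50, theta=15.0):
    """Alternate background modeling and foreground detection in a closed loop."""
    history = []                                  # recent frames kept by the background model
    results = []
    for frame in frames:
        frame = np.asarray(frame, dtype=np.float64)
        if history:
            # Foreground detection: does the new intensity fit the background model?
            # (median-plus-threshold is only a placeholder detector)
            background = np.median(np.stack(history), axis=0)
            labels = np.abs(frame - background) > theta
        else:
            labels = np.zeros(frame.shape, dtype=bool)
        results.append(labels)
        # Background modeling: update the model with the newest frame; the change
        # labels fed back here could be used to exclude foreground pixels.
        history = (history + [frame])[-N:]
    return results
```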

- At each pixel location n of frame k, this model uses the intensities from the N most recent frames to estimate the background PDF:
  $P\bigl(I_n^k \mid B\bigr) \approx \frac{1}{N}\sum_{i=k-N}^{k-1} K_\sigma\bigl(I_n^k - I_n^i\bigr)$
- $K_\sigma$ is a zero-mean Gaussian with variance $\sigma^2$ that, for simplicity, we consider constant throughout the sequence.
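A minimal sketch of this kernel density estimate, assuming a Gaussian kernel with a fixed standard deviation sigma; names and default values are illustrative, not from the paper.

```python
import numpy as np

def background_pdf(frame, history, sigma=20.0):
    """Kernel density estimate of P(I_n | background) at every pixel,
    built from the N most recent frames stored in `history`."""
    history = np.stack(history).astype(np.float64)               # shape (N, H, W)
    diff = np.asarray(frame, dtype=np.float64)[None] - history   # I_n^k - I_n^i
    kernel = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
    return kernel.mean(axis=0)                                   # average over the N frames
```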

- Change labels can be estimated by testing the intensity of each pixel of the new frame against the background model.
- Without an explicit foreground model, the foreground likelihood is usually considered uniform.
- This test is prone to randomly scattered false positives, even for low θ.
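A hedged sketch of the basic test: with a uniform foreground likelihood, the hypothesis test reduces to comparing the background likelihood against a global threshold. The threshold value is an assumption, and background_pdf is the sketch from the previous slide.

```python
import numpy as np

def basic_change_labels(frame, history, theta=1e-3, sigma=20.0):
    """True = foreground. Reuses background_pdf from the previous sketch."""
    p_b = background_pdf(frame, history, sigma)
    # With a uniform foreground likelihood, the ratio test reduces to
    # comparing the background likelihood against a global threshold.
    return p_b < theta
```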

- We propose a foreground model based on a small spatial neighborhood in the same frame.
- Let $E_n^k$ denote the change label at pixel n.
- Define the set of neighboring pixels currently labeled as foreground.
- Calculate the foreground probability from their intensities using the kernel-based method.
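A sketch of one plausible form of this foreground model, assuming the kernel estimate is taken over the intensities of foreground-labeled pixels inside a small square window; the window radius and the uniform fallback used when no foreground neighbors exist are assumptions.

```python
import numpy as np

def foreground_pdf(frame, labels, sigma=20.0, radius=2, uniform=1.0 / 256.0):
    """Kernel density estimate of P(I_n | foreground) from foreground-labeled
    pixels in a (2*radius+1)^2 window; uniform fallback when none exist."""
    frame = np.asarray(frame, dtype=np.float64)
    H, W = frame.shape
    p_f = np.full((H, W), uniform)
    for y in range(H):
        for x in range(W):
            y0, y1 = max(0, y - radius), min(H, y + radius + 1)
            x0, x1 = max(0, x - radius), min(W, x + radius + 1)
            nbr_labels = labels[y0:y1, x0:x1]
            if not nbr_labels.any():
                continue                              # no foreground neighbors: keep uniform
            nbr_vals = frame[y0:y1, x0:x1][nbr_labels]
            diff = frame[y, x] - nbr_vals
            kernel = np.exp(-0.5 * (diff / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))
            p_f[y, x] = kernel.mean()
    return p_f
```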

- At each iteration, this results in a refined likelihood ratio test.
- Since we introduce positive feedback, the threshold θ must be carefully selected to keep errors from compounding.
- False negatives will be corrected by the Markov model if several neighbors are correctly detected.
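A hedged sketch of the refined test with feedback, reusing background_pdf and foreground_pdf from the earlier sketches; the number of passes and the value of θ are assumptions.

```python
import numpy as np

def refined_change_labels(frame, history, init_labels, theta=1.0,
                          sigma=20.0, n_iter=2):
    """Iterate the likelihood ratio test; the labels from each pass feed the
    foreground model of the next pass (positive feedback)."""
    p_b = background_pdf(frame, history, sigma)
    labels = init_labels
    for _ in range(n_iter):
        p_f = foreground_pdf(frame, labels, sigma)     # feedback through the labels
        labels = (p_f / np.maximum(p_b, 1e-12)) > theta
    return labels
```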

- A pixel surrounded by foreground labels should be more likely to receive a foreground label than a pixel with background neighbors.
- Suppose that the label field realization is known for all pixels m except n. Then the decision rule at n compares the likelihood ratio against the ratio of the a priori label probabilities given the neighboring labels (see the sketch below).
- The intensities are assumed to be spatially mutually independent conditioned on the label field.
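The per-pixel decision can be sketched as below; prior_ratio stands for the ratio of the two label priors given the neighboring labels and is computed in a later sketch (the name is illustrative, not from the paper).

```python
import numpy as np

def map_decision(p_f, p_b, prior_ratio, eps=1e-12):
    """Foreground where the likelihood ratio exceeds the ratio of label priors.
    prior_ratio[n] stands for P(E_n = 0 | neighbors) / P(E_n = 1 | neighbors)."""
    return (p_f / np.maximum(p_b, eps)) > prior_ratio
```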

- Since E is an MRF, the a priori probabilities on the right-hand side are Gibbs distributions characterized by the natural temperature γ, the cliques c, and a potential function V defined on c.

- Z and T(γ) are the normalization constant and the natural temperature, respectively.
- The potential function V(c) is defined over C, the set of all cliques in the image. In this work, we take C to include all 2-element cliques of the second-order Markov neighborhood.
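The 2-element cliques of the second-order neighborhood pair each pixel with its eight nearest neighbors. The illustrative helper below counts, at every pixel, how many of those neighbors are currently labeled foreground and how many background; zero-padding at the image border is an assumption.

```python
import numpy as np

def neighbor_counts(labels):
    """Count foreground (and background) labels among the 8 nearest neighbors
    of every pixel; the image border is zero-padded (treated as background)."""
    fg = labels.astype(np.int32)
    H, W = fg.shape
    padded = np.pad(fg, 1)
    n_f = np.zeros_like(fg)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            n_f += padded[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
    n_b = 8 - n_f
    return n_f, n_b
```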

- Since the labels are binary, we choose to use the Ising potential function.
- With Z canceled, the ratio of Gibbs priors reduces to a function of the labels of the neighbors of n.

- Let N_F and N_B denote the number of foreground and background neighbors of n.
- γ is selected by the user to control the nonlinear behavior: smaller values of γ strengthen the influence of the MRF model on the estimate, while larger values weaken it.
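A hedged sketch of how the prior ratio and the final labels might be computed: the exponential form exp((N_B - N_F)/γ) is an assumption consistent with the description above (an Ising-style pairwise potential in which smaller γ gives the MRF more influence), not the paper's exact expression; neighbor_counts is the helper from the previous sketch.

```python
import numpy as np

def gibbs_prior_ratio(n_f, n_b, gamma=1.0):
    """P(E_n = 0 | neighbors) / P(E_n = 1 | neighbors); the exponential form is an
    assumed Ising-style ratio, with gamma acting as the temperature."""
    return np.exp((n_b - n_f) / gamma)

def mrf_change_labels(p_f, p_b, labels, gamma=1.0, eps=1e-12):
    """Final labels: likelihood ratio compared against the Gibbs prior ratio."""
    n_f, n_b = neighbor_counts(labels)               # from the earlier sketch
    prior_ratio = gibbs_prior_ratio(n_f, n_b, gamma)
    return (p_f / np.maximum(p_b, eps)) > prior_ratio
```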

Experimental results (figure): (b) probabilities; (c) intermediate change labels; (d) labels computed using the additional MRF model.