Imaging and Depth Estimation in an Optimization Framework


Imaging and Depth Estimation in an Optimization Framework. Avinash Kumar (200507010), International Institute of Information Technology, Hyderabad, 2007. Advisor: Dr. C. V. Jawahar.

Thesis: Motivation. Computer vision algorithms can be applied to real-life problems only if they are FAST and ACCURATE. FAST: an optimization framework is required. ACCURATE: better algorithms are required.

Thesis: Objective. Propose new algorithms in an optimization framework for the following three computer vision problems. 1. Omnifocus Imaging: completely sharp images.

Thesis: Objective. 2. Depth Estimation: obtain 3D depth from 2D images.

Thesis: Objective. 3. Background Subtracted Imaging: remove static/unwanted image regions.

Thesis: Outline and Contributions. An optimization framework is applied to three problems:
- Imaging (Omnifocus Imaging): camera calibration, a generative focus measure, and discrete optimization
- Depth Estimation: discrete optimization
- Background Subtraction: edge-based and discrete optimization approaches

Optimization Framework: the labeling problem; Markov Random Fields (MRF); Maximum a Posteriori (MAP)-MRF labeling; energy functions; graph cuts for minimizing energy functions.

Labeling Problem. Input: a set of sites S and labels L.

Labeling Problem. Problem: optimally assign labels to sites. The total number of possible assignments is |L|^|S|, so exhaustive search is infeasible; the optimal assignment is encoded in an energy function.

Markov Random Fields (MRF). Given a labeling l, it belongs to a Markov Random Field iff the following two conditions hold: (1) positivity, P(l) > 0 for all l; (2) Markovianity, P(l_p | l_{S−{p}}) = P(l_p | l_{N_p}), i.e., the label at a site depends only on the labels of its neighbors.

How to ensure an optimal labeling? A labeling l can be realized if some observation d is given. Observation: d; unknown labeling: l, which belongs to an MRF. Given d, what is the optimal l?

Bayesian Justification. For optimality, the labeling is posed in a Bayesian framework with observation d and unknown labeling l. The Maximum A Posteriori (MAP) estimate is l* = argmax_l P(l | d) = argmax_l P(d | l) P(l).

Bayesian Justification. The prior P(l) is well defined since l belongs to a Markov Random Field (MRF).

Energy Function E. Taking the negative logarithm of the MAP objective turns the maximization into an energy minimization problem: E(l) = Assignment Cost + Separation Cost.

Energy Function E. Assignment cost = cost of assigning label l_i given observation d_i. Separation cost = cost of assigning labels l_i and l_j to neighboring sites.
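The two costs can be made concrete in a short sketch. This is an illustrative Potts-model energy; the function name, array shapes, and the choice of a 4-connected neighborhood are assumptions, not the thesis's exact formulation:

```python
import numpy as np

def labeling_energy(labels, data_cost, smooth_weight=1.0):
    """E(l) = sum_p D_p(l_p) + sum_{(p,q) in N} V(l_p, l_q).

    labels:    (H, W) integer label per site (pixel)
    data_cost: (H, W, L) assignment cost of each label at each site
    V is the Potts model: smooth_weight if neighboring labels differ, else 0.
    """
    H, W = labels.shape
    # Assignment cost: cost of giving each site its chosen label.
    rows, cols = np.indices((H, W))
    assignment = data_cost[rows, cols, labels].sum()
    # Separation cost over 4-connected neighbors (right and down pairs).
    separation = smooth_weight * (
        (labels[:, :-1] != labels[:, 1:]).sum()
        + (labels[:-1, :] != labels[1:, :]).sum()
    )
    return assignment + separation
```

The minimizer of this energy over all label maps is the MAP labeling.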

Graph Cuts for Minimization. A minimum cut on a source-sink graph G corresponds to a local minimum of E(l); the alpha-expansion algorithm, built on repeated minimum cuts, finds an approximate global minimum of E(l).
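Alpha expansion itself requires a max-flow solver. As a self-contained stand-in, the sketch below minimizes the same Potts energy greedily with iterated conditional modes (ICM); this is a simpler local minimizer, plainly not the graph-cut algorithm the thesis uses, but it illustrates what "minimizing the labeling energy" means:

```python
import numpy as np

def icm(data_cost, smooth_weight=1.0, n_iters=10):
    """Greedy minimizer of the MAP-MRF energy with a Potts prior.

    A simple stand-in for graph cuts / alpha expansion: each sweep sets
    every site to the label minimizing its local (data + Potts) cost.
    data_cost: (H, W, L) array. Returns an (H, W) label map.
    """
    H, W, L = data_cost.shape
    labels = data_cost.argmin(axis=2)          # start from the data term alone
    for _ in range(n_iters):
        changed = False
        for y in range(H):
            for x in range(W):
                cost = data_cost[y, x].copy()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < H and 0 <= nx < W:
                        # Potts penalty of each candidate label vs. neighbor
                        cost += smooth_weight * (np.arange(L) != labels[ny, nx])
                best = cost.argmin()
                if best != labels[y, x]:
                    labels[y, x] = best
                    changed = True
        if not changed:
            break
    return labels
```

Unlike alpha expansion, ICM can get stuck in poor local minima, which is why graph cuts are preferred in practice.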

Lens Imaging: a lens forms an image of an object along the optical axis.

Two Limitations: Depth of Field (DoF) and narrow Field of View (FoV).

Depth of Field. A range of depths gets focused at the same pixel location (1 CCD cell = 1 pixel) on the sensor plane.

Sample Image: limited depth of field. But we want images where all objects are in FOCUS!

Desired Image: something like this, where ALL objects are in FOCUS!

Limited Field of View. The FoV is limited by the dimensions of the sensor plane (1 CCD cell = 1 pixel): scene regions outside it do not get imaged.

Sample Image SMALL Field of View

Desired Image LARGE Field Of View

Solving DoF and FoV Simultaneously. Move the sensor plane and take multiple images; these images focus different depths.

Solving DoF and FoV Simultaneously. For each object, select the image in which it is best focused; rotate the camera about the optic center to obtain a larger FoV.

A Modified Imaging System: the Non-Frontal Imaging CAMera (NICAM) [Ahuja'93]. A conventional camera with a frontal sensor captures a single depth in focus.

Non-Frontal Imaging CAMera (NICAM). With the modified camera's non-frontal (tilted) sensor, multiple depths are captured in focus simultaneously.

Non-Frontal Imaging CAMera (NICAM) Rotate NICAM to get large FOV

Calibration. Required for pixel-level registration of NICAM images: (1) pan centering of NICAM; (2) calculating the tilt of the sensor plane.

Various Coordinate Systems. A checkerboard is placed in front of the camera; the camera is placed on a rotating stage; coordinate systems are assigned to the world, the stage, and the camera.

Outline Pan Centering Tilt of Lens

Pan Centering. Align the optic center with the rotation axis; the camera is movable on the stage.

Pan Centering: Algorithm STEP 1: Take images of a checkerboard pattern by rotating NICAM

Pan Centering: Algorithm. STEP 2: Use the MATLAB Calibration Toolbox to obtain the extrinsic parameters of each camera position. The extrinsic parameters (rotation R and translation t) express the camera coordinate system (c) in a fixed world coordinate system (w) attached to the checkerboard.

Pan Centering: Algorithm. STEP 3: Since the camera is not pan centered, the camera positions lie on an elliptical arc.

Pan Centering: Algorithm. STEP 4: Project the points onto the XY plane and fit a circle to obtain its center. Centering error ≈ 8 mm: the camera is off center by this amount.
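The circle-fitting step can be sketched with a standard algebraic (Kåsa) least-squares fit; the thesis does not specify which fitting method was used, so this is only one reasonable choice:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic least-squares circle fit (Kasa method).

    Solves x^2 + y^2 + a*x + b*y + c = 0 in the least-squares sense,
    then recovers the center (cx, cy) = (-a/2, -b/2) and radius r.
    """
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    rhs = -(xs**2 + ys**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r
```

The distance from the fitted center to the rotation axis gives the centering error to be corrected in STEP 5.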

Pan Centering: Algorithm. STEP 5: Move the camera in the X and Y directions on the stage by the centering error calculated above, and repeat STEPS 1 to 4 until the centering error is very small. After convergence, centering error ≈ 0.2 mm.

Tilt of the Lens. Define the following transformations between coordinate systems (CS): board CS to camera CS; board CS to world CS; world CS to stage CS; stage CS to camera CS.

Tilt of the Lens. {R, T} are unknown rotation and translation matrices (3 rotation variables each for two of the transformations); since pan centering is done, the translation matrix is 0. The stage rotation is about the Xs/Xw axis in θ-angle increments. The board-to-camera transformation is obtained from the MATLAB Calibration Toolbox during pan centering.

Tilt of the Lens. Thus, we formulate an equality between the composed coordinate-system transformations; after simplification, we obtain constraint equations relating the unknown rotations.

Tilt of the Lens. The error minimization function is built from the rotation angles associated with the rotation matrices. With 6 variables, use more than 6 images to formulate an overdetermined set of equations; minimize using fminsearch (MATLAB).
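A Python analogue of the fminsearch step can be sketched with scipy (Nelder-Mead is the algorithm behind MATLAB's fminsearch). The error function below is a placeholder with an assumed structure, since the slide's exact expression is not reproduced in this transcript:

```python
import numpy as np
from scipy.optimize import minimize

def rotation_matrix(rx, ry, rz):
    """Rotation from three Euler angles (Z * Y * X convention)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def calibration_error(angles, observed_rotations):
    """Placeholder error: residual between predicted and observed rotations.

    `angles` holds the 6 unknown rotation variables (two sets of Euler
    angles); `observed_rotations` stands in for the per-image extrinsics
    from the calibration toolbox.
    """
    R_pred = rotation_matrix(*angles[:3]) @ rotation_matrix(*angles[3:])
    return sum(np.linalg.norm(R_pred - R_obs) ** 2
               for R_obs in observed_rotations)

# Nelder-Mead, the simplex method behind fminsearch:
# result = minimize(calibration_error, x0=np.zeros(6),
#                   args=(observed_rotations,), method="Nelder-Mead")
```

With more than 6 images, the summed residual makes the system overdetermined, exactly as the slide describes.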

Results: final rotation angles between the coordinate systems.

Results: registered images of a checkerboard.

Conclusions. A new algorithm for pan centering is proposed; an optimization technique to obtain the lens tilt is given; accurate registration of NICAM images is achieved.

Omnifocus Imaging: extend the depth of field of the image. Input: a set of multi-focus images. Output: an image where EVERYTHING is in FOCUS.

Multi-focused Images: images with different depths in focus.

Omni-focused Image: an image with all depths in focus.

Algorithm Outline Capture multi focus images using NICAM. Register the multi focus images. Find the image in which a pixel is most focused. Extract the pixel and paste on a new image. Repeat the procedure for all pixels in the image. Obtain an Omni focus image.
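Steps 3-5 of the outline amount to a winner-take-all composite, sketched below; the names and array shapes are illustrative, and the thesis replaces this per-pixel argmax with a smoothed discrete-optimization labeling:

```python
import numpy as np

def omnifocus_composite(frames, focus_measure):
    """Fuse registered multi-focus frames into one all-in-focus image.

    frames:        (N, H, W) stack of registered multi-focus images
    focus_measure: (N, H, W) per-pixel focus score for each frame
    For every pixel, copy the intensity from the frame where that pixel
    scores as most focused (winner-take-all; the thesis instead smooths
    this choice with graph cuts).
    """
    best = focus_measure.argmax(axis=0)        # (H, W) index of best frame
    rows, cols = np.indices(best.shape)
    return frames[best, rows, cols]
```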

Imaging Optics. A lens images an object at distance u onto a focused image f at image distance v; sensor planes at multiple locations S_i record a blurred image g, with a blur circle whose size depends on the aperture diameter D.

Imaging: Some Details. u = object distance, v = image distance, F = focal length, S_i = distance of the i-th sensor plane, R = radius of blur, f = focused image, g = blurred image, D = aperture diameter. Thin lens equation: 1/u + 1/v = 1/F. Blur radius: R = (D/2)·|S_i − v|/v. Gaussian blur kernel: h(x, y) = (1/2πσ²)·exp(−(x² + y²)/2σ²), with σ proportional to R. Blurred image formation: g = f * h.
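The optics above can be sketched numerically. The thin-lens and blur-radius expressions are the standard depth-from-defocus forms; the proportionality constant k linking σ to R is camera-dependent and assumed here:

```python
import numpy as np

def blur_radius(u, F, s_i, D):
    """Blur-circle radius at sensor distance s_i for an object at depth u.

    Thin lens: 1/u + 1/v = 1/F gives the in-focus image distance v.
    By similar triangles, the blur circle has radius
        R = (D / 2) * |s_i - v| / v.
    """
    v = 1.0 / (1.0 / F - 1.0 / u)
    return (D / 2.0) * abs(s_i - v) / v

def gaussian_psf(R, k=0.5, size=9):
    """Gaussian blur kernel with std. dev. sigma = k * R
    (k is a camera-dependent constant, assumed here)."""
    sigma = max(k * R, 1e-6)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return h / h.sum()

# Blurred image formation: g = f * h, i.e. convolve the focused image f
# with the kernel h (e.g. scipy.ndimage.convolve(f, h)).
```

When the sensor sits exactly at the in-focus distance (s_i = v), the blur radius is zero and the pixel is sharp.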

Focus Measure. A focus measure is a metric for finding the BEST FOCUSED pixel. The conventional metric is the energy of the gradient: the higher the gradient, the more focused the image. Calculate the intensity gradient and assign each pixel to the image in which the gradient is maximized.
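A minimal sketch of the energy-of-gradient measure (illustrative; in practice it is evaluated over a local window around each pixel rather than over the whole image):

```python
import numpy as np

def energy_of_gradient(image):
    """Conventional focus measure: sum of squared intensity gradients.

    A sharper (better focused) image has stronger local gradients, so the
    frame maximizing this score around a pixel is taken as best focused.
    """
    gy, gx = np.gradient(image.astype(float))
    return float((gx**2 + gy**2).sum())
```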

But there are drawbacks! Near an edge, the gradient can be maximized at a defocused pixel: a focused uniform region shows no gradient, while its defocused version picks up a high gradient from the blurred edge.

A Generative Focus Measure. INPUT: a set of multi-focus images captured from NICAM. The digit '3' in the example is blurred by varying amounts in each image.

A Generative Focus Measure. STEP 1: Select a pixel location p = (x, y) in image frame k, and assume I(x, y) is a focused pixel.

A Generative Focus Measure. STEP 2: Calculate the radius R of the blur produced at location (x, y) in image frame k−d, and similarly in frame k+d. From the imaging optics, this gives the standard deviation of the Gaussian blur.

A Generative Focus Measure. STEP 3: Obtain the Gaussian blur kernel and blur p(x, y) to obtain new intensities p' and p'' for the k−d and k+d frames respectively.

A Generative Focus Measure. STEP 4: Compare p' and p'' with the actual intensities at location (x, y) in the k−d and k+d frames. Let the actual intensities be q(x, y) in frame k−d and r(x, y) in frame k+d; the comparison criterion checks how closely the predicted intensities match the observed ones.

A Generative Focus Measure. STEP 5: If the criterion is satisfied, k becomes a candidate frame in which p could be focused, and its focus measure is calculated. Due to ambiguity, there can be multiple such candidate frames for each pixel (proof in the thesis).
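Steps 1-5 can be sketched as follows. The tolerance-based comparison criterion and all names here are assumptions, since the slide's exact formulas are not shown in this transcript; the idea is only that a frame is a focus candidate if its predicted blurred intensities match the observed neighbors:

```python
import numpy as np

def gaussian_blur_value(patch, sigma):
    """Intensity at the center of `patch` after Gaussian blurring."""
    size = patch.shape[0]
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2 * max(sigma, 1e-6) ** 2))
    h /= h.sum()
    return float((patch * h).sum())

def is_focus_candidate(patch_k, q, r, sigma_minus, sigma_plus, tol=5.0):
    """Generative test that frame k is focused at the patch center.

    Assume the center pixel is focused in frame k, predict its blurred
    intensities p' and p'' in frames k-d and k+d (blur std. devs. derived
    from the optics), and compare with the observed intensities q and r.
    The tolerance criterion is an assumption, not the thesis's formula.
    """
    p1 = gaussian_blur_value(patch_k, sigma_minus)   # prediction for k-d
    p2 = gaussian_blur_value(patch_k, sigma_plus)    # prediction for k+d
    return abs(p1 - q) < tol and abs(p2 - r) < tol
```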

Discrete Optimization. To obtain a smooth and fast solution, omnifocus imaging is formulated in a discrete optimization framework. The energy minimization function takes the standard MAP-MRF form E(l) = Σ_p D_p(l_p) + Σ_{(p,q)} V(l_p, l_q), with labels {1, …, N}, where N is the number of multi-focus images. Graph cuts are applied to minimize E.

Results: Synthetic. Two checkerboards at different depths; a single checkerboard.

Results: Real Data Set1

Results: Real Data Set2

Conclusions. Omnifocus imaging is proposed in an optimization framework; a new generative focus measure is proposed; convergence is fast, since graph cuts take only a few seconds to minimize E.

Outline Depth Estimation Labeling Problem Discrete Optimization Approach Results

Depth Estimation: estimate the 3D depth of objects in a scene. Input: a set of multi-focus images. Output: a depth map of the scene.

Input: a set of multi-focus images captured from NICAM.

It's a Labeling Problem! Objects at different depths produce different sets of multi-focus images; the label set is the set of possible depth values in the 3D world.

Discrete Optimization Approach. The cost of assigning a depth label to pixel p is derived from the focus measure at p across the set of multi-focus images; the smoothness term uses the Potts model. Graph cuts are applied to E to obtain the depth map.
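The data term can be sketched as a winner-take-all labeling (illustrative names and shapes; the thesis adds the Potts smoothness term and minimizes with graph cuts rather than this per-pixel argmin):

```python
import numpy as np

def depth_map(focus_measure, depths):
    """Winner-take-all depth labeling from per-frame focus measures.

    focus_measure: (N, H, W) score of each depth label at each pixel
    depths:        (N,) depth value associated with each label
    The data cost of a depth label is the negative focus measure; the
    thesis additionally imposes a Potts smoothness term minimized by
    graph cuts, which this per-pixel argmin omits.
    """
    labels = (-focus_measure).argmin(axis=0)   # = focus_measure.argmax(axis=0)
    return np.asarray(depths)[labels]
```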

Results 1

Results 2

Results 3

Conclusions. Depth estimation is proposed in an optimization framework; smooth depth maps with sharp depth boundaries, which are otherwise difficult to obtain, are recovered; convergence is fast, since graph cuts take only a few seconds to minimize E.

Outline Background Subtraction Intermodal Train Monitoring Edge and Continuous Optimization Approach Discrete Optimization Approach Results and Comparison

Background Subtraction. Background: static/slow-moving objects. Foreground: moving objects.

Gaussian Mixture Model (GMM). Intensities in an image are modeled as a mixture of K Gaussians; the weights and parameters of each Gaussian are learned over time. This is able to model multimodal background distributions.
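A simplified per-pixel sketch of the idea (grayscale, Stauffer-Grimson-style updates). The thresholds, learning rate, and the background decision rule are assumptions; the full model sorts components by weight and variance before classifying:

```python
import numpy as np

class PixelGMM:
    """Per-pixel mixture of K Gaussians for background modeling
    (a simplified, grayscale Stauffer-Grimson-style update)."""

    def __init__(self, k=3, alpha=0.05, match_sigmas=2.5):
        self.means = np.linspace(0.0, 255.0, k)
        self.vars = np.full(k, 900.0)            # initial variance (sigma=30)
        self.weights = np.full(k, 1.0 / k)
        self.alpha = alpha                       # learning rate
        self.match_sigmas = match_sigmas

    def update(self, x):
        """Update the mixture with intensity x; return True if x looks
        like background (it matches a sufficiently weighted Gaussian)."""
        d = np.abs(x - self.means)
        matches = d < self.match_sigmas * np.sqrt(self.vars)
        if matches.any():
            i = int(np.argmax(matches))          # first matching Gaussian
            self.means[i] += self.alpha * (x - self.means[i])
            self.vars[i] += self.alpha * ((x - self.means[i]) ** 2 - self.vars[i])
            self.weights += self.alpha * (matches.astype(float) - self.weights)
            self.weights /= self.weights.sum()
            # background if the matched Gaussian carries enough weight
            return self.weights[i] > 0.5 / len(self.weights)
        # no match: replace the lowest-weight Gaussian with the new value
        j = int(np.argmin(self.weights))
        self.means[j], self.vars[j] = x, 900.0
        self.weights[j] = 0.05
        self.weights /= self.weights.sum()
        return False
    
```

An image-level subtractor runs one such mixture per pixel; values that match no well-supported Gaussian are flagged as foreground.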

Intermodal Train Monitoring pipeline: intermodal train video → image frame extraction → background subtraction → train velocity estimation → train mosaic creation → load type detection and gap length detection.

Edge & Continuous Optimization STEP 1: Apply Edge Detection on each frame

Edge & Continuous Optimization STEP 2: Detect Top Edge

Edge & Continuous Optimization STEP 3: Detect Side Edges

Edge & Continuous Optimization. STEP 4: GMM learning near the edges of the gaps; thus the background is subtracted.

Discrete Optimization. Background subtraction is modeled as velocity estimation. The energy function has the same data-plus-smoothness form as before; the label set f is the unknown velocity of the train in pixel shifts per frame, ranging up to the maximum possible velocity v.

Pictorial Explanation. At any pixel location, take a window of size w_p and cross-correlate it across frames with different velocity shifts v.
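The window cross-correlation can be sketched as a minimal 1 − NCC data cost over candidate shifts (names and the cost form are illustrative; the multi-frame averaging and weighting of the DT0-DT2 variants below are omitted):

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a**2).sum() * (b**2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def velocity_data_cost(frames, t, y, x, w, v_max):
    """Data cost of each candidate velocity at pixel (y, x) of frame t.

    Take a (2w+1)-wide window around (y, x) in frame t and cross-correlate
    it with windows in frame t+1 shifted by v pixels, for v = 0..v_max.
    Cost = 1 - NCC, so a good match gives a low cost.
    """
    ref = frames[t, y - w:y + w + 1, x - w:x + w + 1]
    costs = []
    for v in range(v_max + 1):
        win = frames[t + 1, y - w:y + w + 1, x - w + v:x + w + 1 + v]
        costs.append(1.0 - ncc(ref, win))
    return np.array(costs)
```

Graph cuts then smooth these per-pixel costs into a coherent velocity labeling.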

Data Term 0 (DT0): average the NCC over (2n+1) frames.

Data Term 1 (DT1): NCC weighted by the distance of each frame from the reference frame.

Data Term 2 (DT2): from DT1, choose the maximum from each side of the reference frame.

Results: input frames, velocity map, velocity map after graph cuts, and background-subtracted frames.

Comparison: input frames, template-based, GMM-based, and the proposed method. The proposed technique performs better than conventional techniques.

Conclusions. Edge-based and GMM-based background subtraction techniques are proposed; background subtraction for train monitoring is formulated in a discrete optimization framework; the proposed discrete optimization technique gives better results than existing methods.

Thank You !!