Semantic Alignment Spring 2009 Ben-Gurion University of the Negev

Sensor Fusion, Spring 2009. Instructor: Dr. H. B. Mitchell

Semantic Alignment. For fusion to take place, the inputs must be converted into a common representational format. Semantic alignment requires that all measurements refer to the same object or phenomenon.

Semantic Alignment: Image Fusion. Semantic alignment is generally not required if the images are captured by cameras of the same type. In fact, even sensors of different types are often regarded as measuring the same phenomenon and are therefore treated as semantically aligned. Example: infrared and visible-light cameras.

Semantic Alignment: Feature Map Fusion. Multiple feature maps may be generated by:
- The same feature operator acting on multiple input images. The feature maps refer to the same phenomenon, so no semantic alignment is required.
- Multiple feature operators acting on a single input image, where the operators all measure the same object or phenomenon (but using different algorithms). No semantic alignment is required.
- Multiple feature operators acting on a single input image, where the operators measure different objects or phenomena. Semantic alignment is required.

Semantic Alignment: Feature Map Fusion. Example: the Canny and Sobel edge operators acting on the same input image. Both feature operators refer to the same phenomenon, so semantic alignment is not required. However, radiometric normalization is required: the dynamic ranges of the Sobel and Canny outputs are very different.
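To make the radiometric normalization concrete, here is a minimal sketch using OpenCV and NumPy. Min-max rescaling to [0, 1] is one simple choice of normalization; the function name and the Canny/Sobel parameters are illustrative, not taken from the slides.

```python
import cv2
import numpy as np

def normalized_edge_maps(gray):
    """Return Sobel and Canny edge maps rescaled to a common [0, 1] range."""
    # Sobel gradient magnitude: float-valued, image-dependent dynamic range
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    sobel = np.hypot(gx, gy)

    # Canny output: binary, valued 0 or 255
    canny = cv2.Canny(gray, 100, 200).astype(np.float64)

    def rescale(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

    return rescale(sobel), rescale(canny)
```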

Semantic Alignment: Feature Map Fusion. Example: an edge operator and a blob detector acting on the same input image. The feature operators refer to different phenomena, so semantic alignment is required. The theory of ATR (automatic target recognition) suggests that edge and blob responses are causally linked to "presence of target" scores, or likelihoods. No radiometric alignment is required if we use the same semantic alignment algorithm for both.

Semantic Alignment: Likelihood. Feature map: F(m,n) = strength of the feature operator at pixel (m,n). Likelihood: L(m,n) = likelihood, or probability, that a target exists at pixel (m,n). If we have training data with ground truth, we can learn the likelihood function L(m,n).

Platt Calibration. Given training data, i.e. K examples of edge values x_1, ..., x_K with corresponding ground-truth labels y_1, ..., y_K in {0, 1}, suppose the likelihood curve follows a sigmoid, L(x) = 1 / (1 + exp(Ax + B)). Find the optimal sigmoid parameters A and B using the method of maximum likelihood.
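A minimal sketch of the maximum-likelihood fit, assuming SciPy's general-purpose optimizer; the parameterization L(x) = 1 / (1 + exp(Ax + B)) is the standard Platt form, and the helper name is illustrative.

```python
import numpy as np
from scipy.optimize import minimize

def platt_calibration(scores, labels):
    """Fit L(x) = 1 / (1 + exp(A*x + B)) to {0,1} labels by maximum likelihood."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)

    def neg_log_likelihood(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(a * scores + b))
        eps = 1e-12  # guards against log(0)
        return -np.sum(labels * np.log(p + eps)
                       + (1 - labels) * np.log(1 - p + eps))

    a, b = minimize(neg_log_likelihood, x0=[-1.0, 0.0]).x
    return lambda x: 1.0 / (1.0 + np.exp(a * np.asarray(x) + b))
```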

Histogram Calibration. Given the same training data (K edge values x_1, ..., x_K with corresponding ground-truth labels y_1, ..., y_K), assume no knowledge of the shape of the likelihood curve. Bin the edge values; the likelihood for each bin is then the relative number of target (positive) examples among the edges falling in that bin.
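A minimal sketch of histogram calibration, assuming equal-width bins over the observed score range; the bin count and helper name are illustrative choices.

```python
import numpy as np

def histogram_calibration(scores, labels, n_bins=10):
    """Estimate L(x) as the fraction of positive examples in each score bin."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    edges = np.linspace(scores.min(), scores.max(), n_bins + 1)
    # Assign every score to a bin; clip keeps the maximum score in the last bin
    idx = np.clip(np.digitize(scores, edges) - 1, 0, n_bins - 1)
    likelihood = np.array([labels[idx == b].mean() if np.any(idx == b) else 0.0
                           for b in range(n_bins)])
    return lambda x: likelihood[np.clip(np.digitize(x, edges) - 1, 0, n_bins - 1)]
```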

Likelihood: Isotonic Regression. Isotonic regression assumes the likelihood curve is monotonically increasing (or decreasing). It therefore represents an intermediate case between Platt calibration and histogram calibration: a monotonically increasing likelihood curve of unknown shape. A simple algorithm for isotonic curve fitting is PAV (the pool adjacent violators algorithm).

Likelihood: Isotonic Regression. Find the monotonically increasing function f which minimizes sum_k (f(x_k) - y_k)^2, using the PAV algorithm. This works iteratively as follows: arrange the examples such that x_1 <= x_2 <= ... <= x_K and initialize f(x_k) = y_k. If f is isotonic, then f* = f and stop. If f is not isotonic, there must exist an index l such that f(x_{l-1}) > f(x_l); eliminate this pair by pooling the two entries into a single entry with value (f(x_{l-1}) + f(x_l)) / 2, which is now isotonic. Repeat until no violations remain.

Likelihood: Isotonic Regression. Worked example (the slide shows a table with columns #, score, initial value, and iterations): in the first iteration, entries 12 and 13 are removed by pooling the two entries together and giving each the value 0.5. This introduces a new violation between entry 11 and the group 12-13, which are pooled together, forming a pool of 3 entries with value 0.33.
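A minimal sketch of PAV as described above; the helper name and the toy input are illustrative, and the slide's own table is not reproduced.

```python
import numpy as np

def pav(y):
    """Least-squares isotonic (monotonically increasing) fit by pooling adjacent violators.

    y: ground-truth values already sorted by their scores x_1 <= ... <= x_K.
    Each pool stores (sum, count); adjacent pools whose means violate
    monotonicity are merged, exactly the pooling described on the slides.
    """
    pools = [[v, 1] for v in y]
    i = 0
    while i < len(pools) - 1:
        if pools[i][0] / pools[i][1] > pools[i + 1][0] / pools[i + 1][1]:
            # Merge the violating pair into a single pool with the averaged value
            pools[i][0] += pools[i + 1][0]
            pools[i][1] += pools[i + 1][1]
            del pools[i + 1]
            i = max(i - 1, 0)  # merging may create a new violation on the left
        else:
            i += 1
    return np.concatenate([np.full(c, s / c) for s, c in pools])
```

For example, pav([0, 0, 1, 0, 1, 1]) pools the violating pair into [0, 0, 0.5, 0.5, 1, 1].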

Semantic Alignment: Decision Map Fusion. Multiple decision maps may be generated by:
- The same decision operator acting on multiple feature maps. If the feature maps are semantically equivalent, the decision maps are semantically equivalent and no semantic alignment is required. If the feature maps are not semantically equivalent, the decision maps are not semantically equivalent either, and semantic alignment is required.
- Multiple decision operators acting on a single feature map. If the decision operators all refer to the same object or phenomenon, the decision maps are semantically equivalent and no semantic alignment is required. If the decision operators refer to different objects or phenomena, semantic alignment is required.

Association. Given two decision maps A and B, let a_h be the h-th label in A and b_k be the k-th label in B. Often a one-to-one relationship exists between the a_h and the b_k; finding this relationship is often the decision fusion itself.

Association. Given a test target and a model: for each point on the boundary of the test target we wish to find the corresponding point on the boundary of the model, estimate the transformation between them, and measure the similarity of model and target.

Association. Compute matching costs using the chi-squared distance between the boundary-point descriptors, C_ij = (1/2) sum_k [h_i(k) - h_j(k)]^2 / [h_i(k) + h_j(k)], and recover correspondences by solving the linear assignment problem with costs C_ij [Jonker & Volgenant 1987]. The result is an association of each point on the test target with a point on the model.
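A minimal sketch, assuming SciPy's linear_sum_assignment (a Hungarian-style solver) in place of the Jonker-Volgenant code cited on the slide, and per-point histograms as the boundary descriptors.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_boundary_points(h_test, h_model):
    """Associate boundary points via chi-squared histogram costs.

    h_test: (N, K) descriptor histograms of the test target;
    h_model: (N, K) histograms of the model (equal point counts are
    assumed here for simplicity).
    """
    eps = 1e-12  # avoids division by zero for empty histogram bins
    diff = h_test[:, None, :] - h_model[None, :, :]
    denom = h_test[:, None, :] + h_model[None, :, :] + eps
    # C[i, j] = 0.5 * sum_k (h_i(k) - h_j(k))^2 / (h_i(k) + h_j(k))
    cost = 0.5 * np.sum(diff ** 2 / denom, axis=2)
    rows, cols = linear_sum_assignment(cost)  # minimum-cost one-to-one matching
    return rows, cols, cost[rows, cols].sum()
```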

Assignment: Key Point Association. Given two images A and B, we may perform spatial alignment by matching key points in A with their corresponding key points in B.

Image Alignment. In spatial alignment, the assignment problem is made difficult by the large number of key points in A which have no corresponding key point in B, and vice versa.
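One common remedy is Lowe's ratio test, which discards a match unless it is clearly better than the second-best candidate. A minimal sketch using OpenCV, with ORB standing in for SIFT and the conventional 0.75 threshold; both choices are illustrative, not from the slides.

```python
import cv2

def match_keypoints(img_a, img_b, ratio=0.75):
    """Match ORB keypoints between two grayscale images with a ratio test."""
    orb = cv2.ORB_create()
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)

    # Hamming distance suits ORB's binary descriptors; k=2 keeps the two
    # best candidates in B for every keypoint in A
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des_a, des_b, k=2)

    good = []
    for pair in matches:
        # Keep a match only if it clearly beats the runner-up; this rejects
        # keypoints in A that have no true counterpart in B
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return kp_a, kp_b, good
```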

Association: Multiple Sensor Tracking. Two targets, 1 and 2, are detected by two sensors, A and B, which track them separately. Ideally we have two tracks from A (A1 and A2) and two tracks from B (B1 and B2). We may only combine them (and obtain more accurate tracks) if we can correctly assign A1 and A2 to B1 and B2. With false and missing tracks the task becomes very difficult.
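A minimal sketch of track-to-track association, assuming a Euclidean cost between track state estimates and a gating threshold that rejects implausible pairs; both the cost and the threshold are illustrative assumptions, not from the slides.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tracks(tracks_a, tracks_b, gate=10.0):
    """Assign sensor-A tracks to sensor-B tracks by nearest state estimates.

    tracks_a: (Na, d) array of track states from sensor A;
    tracks_b: (Nb, d) array from sensor B. Pairs whose distance exceeds
    `gate` are dropped, a crude guard against false and missing tracks.
    """
    cost = np.linalg.norm(tracks_a[:, None, :] - tracks_b[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] <= gate]
```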

Binary Map Fusion. Given K binary maps B_1, ..., B_K which we suppose are semantically equivalent, i.e. the labels {0, 1} in B_k correspond to the labels {0, 1} in B_l, we may fuse them together by majority voting. However, this tends to give an image with a broken appearance.
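A minimal voting sketch in NumPy; the tie-breaking rule (ties go to label 1) is an arbitrary illustrative choice.

```python
import numpy as np

def vote_fusion(maps):
    """Fuse K semantically equivalent binary maps by majority vote.

    maps: (K, H, W) array of {0, 1} maps. A pixel is labelled 1 when at
    least half of the K maps label it 1.
    """
    maps = np.asarray(maps)
    return (2 * maps.sum(axis=0) >= maps.shape[0]).astype(np.uint8)
```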

Binary Map Fusion. Alternatively, we may fuse using distance maps: d(i,j|1) = the Euclidean distance from pixel (i,j) to the nearest black (label 1) pixel, and d(i,j|0) = the Euclidean distance to the nearest white (label 0) pixel. Add the distance maps across the K inputs and then assign each pixel the label with the smallest summed distance.
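A minimal sketch using SciPy's Euclidean distance transform; the helper name is illustrative. Note that distance_transform_edt measures the distance to the nearest zero element, hence the inversion when measuring the distance to the nearest 1-pixel.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def distance_map_fusion(maps):
    """Fuse K binary maps via summed Euclidean distance maps.

    For each map, d(i,j|1) is the distance to the nearest 1-pixel and
    d(i,j|0) the distance to the nearest 0-pixel; each pixel takes the
    label with the smaller distance summed over the K maps.
    """
    maps = np.asarray(maps)
    d1 = sum(distance_transform_edt(1 - m) for m in maps)  # distance to nearest 1
    d0 = sum(distance_transform_edt(m) for m in maps)      # distance to nearest 0
    return (d1 < d0).astype(np.uint8)
```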
