Probabilistic Graph and Hypergraph Matching


Probabilistic Graph and Hypergraph Matching
Ron Zass & Amnon Shashua, School of Engineering and Computer Science, The Hebrew University, Jerusalem

Example: Object Matching
No global affine transform; local affine transforms plus small non-rigid motion. Match by local features and local structure. Images from: www.operationdoubles.com/one_handed_backhand_tennis.htm

Hypergraph Matching in Computer Vision
In graph matching, we describe objects as graphs (features → nodes, distances → edges) and match objects by matching graphs. Problem: distances are not affine invariant.

Hypergraph Matching in Computer Vision
Affine-invariant properties are properties of four or more points. Example: the area ratio Area1 / Area2 of the two triangles spanned by four points (figure: points 1–4 with triangle areas Area1 and Area2). Describe objects as hypergraphs (features → nodes, area ratios → hyperedges) and match objects by hypergraph matching. In general, if n points are required to solve for the local transformation, d = n+1 points are required for an invariant property.
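Since the slide only names the property, here is a minimal Python sketch (ours, not from the talk) of why the area ratio is affine invariant: an affine map x ↦ Ax + b scales every triangle area by |det A|, so the ratio of two triangle areas over the same four points is unchanged. The function names and choice of triangles are illustrative.

```python
import numpy as np

def triangle_area(p, q, r):
    """Unsigned area of triangle (p, q, r) via the 2D cross product."""
    return 0.5 * abs((q[0] - p[0]) * (r[1] - p[1]) - (r[0] - p[0]) * (q[1] - p[1]))

def area_ratio(p1, p2, p3, p4):
    """Ratio of two triangle areas spanned by an ordered 4-tuple.
    An affine map scales every area by |det A|, so the ratio is invariant."""
    return triangle_area(p1, p2, p3) / triangle_area(p1, p3, p4)

# Sanity check: apply a random affine transform and compare.
rng = np.random.default_rng(0)
pts = rng.random((4, 2))
A, b = rng.random((2, 2)), rng.random(2)
mapped = pts @ A.T + b
print(area_ratio(*pts), area_ratio(*mapped))  # equal up to rounding
```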

Related Work: Hypergraph Matching
Wong, Lu & Rioux, PAMI 1989; Sabata & Aggarwal, CVIU 1996; Demko, GbR 1998; Bunke, Dickinson & Kraetzl, ICIAP 2005. All search for an exact matching: edges are matched only to edges with the exact same label, via search algorithms for the largest sub-isomorphism. This is unrealistic. We are interested in an inexact matching: edges are matched to edges with similar labels, and we find the best matching according to some score function.

Related Work: Inexact Graph Matching
A popular line of work is continuous relaxation: as an SDP problem (Schellewald & Schnörr, CVPR 2005); as a spectral decomposition problem (Leordeanu & Hebert, ICCV 2005; Cour, Srinivasan & Shi, NIPS 2006); as iterative linear approximations using Taylor expansions (Gold & Rangarajan, CVPR 1995); and many more. Some continuous relaxations may be interpreted as soft matching. Our work differs: we assume a probabilistic interpretation of the input and extract a probabilistic matching.

From Soft to Hard
Given the optimal soft solution X, the nearest hard matching is found by solving a Linear Assignment Problem. Each of the two steps (soft matching, then nearest hard matching) is optimal on its own; the overall hard matching is not optimal (the problem is NP-hard). (Diagram: soft matching → hard matching.)
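As an illustration of the soft-to-hard step, here is a minimal sketch using SciPy's linear assignment solver to pick the hard matching of maximal total probability; the matrix X below is invented for the example.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Soft matching: rows are vertices of G, columns are vertices of G'.
X = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.6, 0.3],
              [0.2, 0.2, 0.6]])

# Hungarian algorithm: maximize the total matching probability.
rows, cols = linear_sum_assignment(X, maximize=True)
print(list(zip(rows, cols)))  # hard matching: [(0, 0), (1, 1), (2, 2)]
```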

Hypergraph Matching
Two directed hypergraphs of degree d, G = (V, E) and G' = (V', E'). A hyperedge is an ordered d-tuple of vertices; this includes the undirected version as a special case. A matching m : V → V' induces an edge matching m : E → E' by m(v1, …, vd) = (m(v1), …, m(vd)).

Probabilistic Hypergraph Matching
Input: S, the probability that an edge e ∈ E matches an edge e' ∈ E'. Output: X, the probability that two vertices match. We will derive an algebraic connection between S and X, and then use it to find the optimal X.

Kronecker Product
The Kronecker product of an i×j matrix A and a k×l matrix B is the ik×jl block matrix A ⊗ B = [a11·B … a1j·B; …; ai1·B … aij·B].
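For instance, NumPy's np.kron implements this product directly; the short sketch below just checks the ik×jl shape and the block structure.

```python
import numpy as np

A = np.arange(6).reshape(2, 3)   # 2 x 3
B = np.arange(8).reshape(4, 2)   # 4 x 2
K = np.kron(A, B)                # (2*4) x (3*2) = 8 x 6
print(K.shape)                   # (8, 6)
# Block (r, c) of K equals A[r, c] * B:
assert np.allclose(K[:4, :2], A[0, 0] * B)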

S ↔ X Connection
Assumption: vertex-to-vertex matches are probabilistically independent. Proposition (S ↔ X connection): S = X ⊗ ⋯ ⊗ X (d times). Proof: the probability that edge (v1, …, vd) matches (v1', …, vd') is the product of the vertex match probabilities, which is exactly the corresponding entry of the d-fold Kronecker product.

S ↔ X Connection for Graphs
(Figure: for d = 2, S is a (V×V) by (V'×V') matrix and equals X ⊗ X, where X is V by V'.)

Globally Optimal Soft Hypergraph Matching
Find the X whose d-fold Kronecker power is nearest to S, where X is a valid matrix of probabilities: X ≥ 0, with row sums and column sums at most 1. With these inequalities, a vertex can be left unmatched; with equalities, all vertices must be matched.

Cour, Srinivasan & Shi 2006
Our result can explain some previously used heuristics. Cour et al. 2006 preprocessing: replace S with the nearest doubly stochastic matrix (in relative entropy) before running any other graph matching algorithm. Proposition: for X ≥ 0, X is doubly stochastic iff X ⊗ X is doubly stochastic.

Globally Optimal Soft Hypergraph Matching
We use the relative entropy (maximum likelihood) error measure, minimizing the relative entropy between S and X ⊗ ⋯ ⊗ X. This gives a global optimum, efficiently.

Globally Optimal Soft Hypergraph Matching
Define Y, the marginal of S over hyperedges: for every vertex pair (i, i'), Y sums the entries of S over all pairs of hyperedges containing i and i'. The result is a convex problem with only |V|×|V'| inputs and outputs!
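A minimal sketch of this reduction for the graph case (d = 2), assuming S is stored as a 4-way array S[i, j, i', j'] holding the probability that edge (i, j) matches edge (i', j'); that index convention is our assumption, not spelled out on the slide.

```python
import numpy as np

nV, nVp = 4, 5
# S[i, j, ip, jp]: probability that edge (i, j) matches edge (ip, jp).
# Random values here stand in for real edge-to-edge probabilities.
S = np.random.rand(nV, nV, nVp, nVp)

# Y[i, ip] sums S over all edges containing i and all edges containing ip.
Y = np.einsum('ijkl->ik', S)
print(Y.shape)  # (4, 5): the problem now lives in |V| x |V'|
```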

Globally Optimal Soft Hypergraph Matching
Define k = Σij Xij, the number of matches. X*(k) is convex in k. We give an optimal solution for X*(k), and solve for k numerically (a convex minimization in a single variable).

Globally Optimal Soft Hypergraph Matching
Define three sub-problems (j = 1, 2, 3), one per constraint set: each has an optimal closed-form solution.

Successive Projections [Tseng 93; Censor & Reich 98]
Set an initial X. For t = 1, 2, … until convergence: for j = 1, 2, 3, apply the closed-form solution of sub-problem j. The result is optimal!

Globally Optimal Soft Hypergraph Matching
When the hypergraphs are of the same size and all vertices have to be matched, our algorithm reduces to the Sinkhorn algorithm for the nearest doubly stochastic matrix in relative entropy.
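For that special case, a minimal Sinkhorn sketch: alternating row and column normalization, which converges to the nearest doubly stochastic matrix in relative entropy for a strictly positive input. The fixed iteration count stands in for a proper convergence test.

```python
import numpy as np

def sinkhorn(Y, iters=200):
    """Alternately normalize rows and columns of a positive matrix."""
    X = Y.astype(float).copy()
    for _ in range(iters):
        X /= X.sum(axis=1, keepdims=True)  # row normalization
        X /= X.sum(axis=0, keepdims=True)  # column normalization
    return X

X = sinkhorn(np.random.rand(5, 5) + 1e-6)
print(X.sum(axis=0), X.sum(axis=1))        # both approximately all-ones
```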

Sampling
Given Y, the problem size reduces to |V|×|V'|, and calculating Y is a simple sum over all hyperedges. Problem: computing S, the hyperedge-to-hyperedge correlation, is expensive. Sampling heuristic: for each vertex, use only the z closest hyperedges. The heuristic applies to transformations that are locally affine (but not globally affine). This costs O(|V|·|V'|·z²) correlations.
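One hedged way to realize the sampling heuristic, sketched for degree-3 hyperedges by pairing each vertex with two of its z nearest neighbors; the exact hyperedge construction in the original work may differ.

```python
import numpy as np

def sample_hyperedges(points, z=5):
    """Degree-3 hyperedges: each vertex paired with two of its z nearest
    neighbors, so hyperedges stay local (where the transform is ~affine)."""
    n = len(points)
    d2 = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    edges = []
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:z + 1]  # skip the point itself
        for a in range(len(nbrs)):
            for b in range(a + 1, len(nbrs)):
                edges.append((i, nbrs[a], nbrs[b]))
    return edges

pts = np.random.rand(30, 2)
print(len(sample_hyperedges(pts)))  # 30 * C(5, 2) = 300 hyperedges
```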

Runtime
(Plots: runtime without edge correlations vs. number of points, and runtime with hyperedge correlations (50 points) vs. hyperedges per vertex; curves: our scheme (graphs), our scheme (hypergraphs), Spectral Matching [Leordeanu05].)

Experiments on Graphs
Two duplicates of 25 points; graphs based on distances; additional random points. (Plots: accuracy vs. number of additional points, added to a single graph and to both graphs; curves: our scheme, Spectral Matching [Leordeanu05], Spectral Matching [Leordeanu05] with [Cour06] preprocessing.)

Experiments on Graphs
Mean distance between neighboring points is 1; one duplicate is distorted with random noise. Spectral matching uses the Frobenius norm, so it should have better resilience to additive noise; due to its globally optimal solution, relative entropy shows comparable results. (Plots: accuracy vs. noise; curves: our scheme, Spectral Matching [Leordeanu05], Spectral Matching [Leordeanu05] with [Cour06] preprocessing.)

Limitations of Graphs
Affine transformations don't preserve distances. (Plots: accuracy under an affine transformation plus random distortion, with additional points added to a single graph and to both graphs; curves: our scheme (hypergraphs, z = 60), our scheme (graphs), Spectral Matching [Leordeanu05], Spectral Matching [Leordeanu05] with [Cour06] preprocessing.)

Feature Matching in Computer Vision
Describe objects by local features (e.g., SIFT) and match objects by matching features. Based solely on local appearance, different features might look the same, and the same feature might look different.

Global Affine Transformation
(Figure: spectral graph matching based on distances gives 10/33 mismatches; hypergraph matching based on area ratios gives no mismatches.) Images from: www.robots.ox.ac.uk/vgg/research/affine/index.html

Non-rigid Matching
Match the first and last frames of a 200-frame video (6 seconds), using features from [Torresani & Bregler, Space-Time Tracking, 2002]. Videos and points from: movement.stanford.edu/nonrig/

Non-rigid Matching
(Figure: matching results.) Videos and points from: movement.stanford.edu/nonrig/

Summary
Structure translates to hypergraphs, not graphs. The probabilistic interpretation leads to a simple connection between input and output: S = X ⊗ ⋯ ⊗ X. The solution is globally optimal under relative entropy (maximum likelihood), and efficient for both graphs and hypergraphs. It applies to graph matching problems as well.

Probabilistic interpretation of graph and hypergraph matching. Soft matching criterion. Explains previous heuristics. Efficient, globally optimal soft matching.

Why Soft Matching?
Soft matching retains matching ambiguities until more data comes in to disambiguate the matching. Example: tracking. Frame-to-frame tracking may be ambiguous; structure information from later frames may resolve these ambiguities.


Future Work
The efficiency of the sampling scheme has to be studied further. Hyperedges of mixed degree, including d = 1 (feature matching): straightforward from a theoretical point of view, but balancing different types of measurements has to be addressed. Other error norms: the Frobenius norm is connected to the hard matching problem and offers resilience to additive noise.

Sampling
(Plot: sampling results; mean noise = 1, 50 additional points added to a single graph.)