# Solving Markov Random Fields using Second Order Cone Programming Relaxations

M. Pawan Kumar, Philip Torr, Andrew Zisserman


Aim: accurate MAP estimation of pairwise Markov random fields.

[Figure: a four-variable MRF V1, V2, V3, V4 with label set L = {0, 1}; unary and pairwise costs shown on the slide.]

Random variables V = {V1, ..., V4}, label set L = {0, 1}, labelling m = {1, 0, 0, 1}.

The cost of a labelling is the sum of its unary and pairwise costs; for this labelling, Cost(m) = 2 + 1 + 2 + 1 + 3 + 1 + 3 = 13. The minimum-cost labelling is the MAP estimate, since Pr(m) ∝ exp(-Cost(m)).

Objectives: applicable to all neighbourhood relationships; applicable to all forms of pairwise costs; guaranteed to converge.

Motivation: subgraph matching (Torr, 2003; Schellewald et al., 2005).

[Figure: match a small graph G1 into a larger graph G2. The MRF has variables V1, V2, V3, one per vertex of G1, each taking a label from {A, B, C, D}, the vertices of G2.]

Unary costs are uniform. Pairwise costs form a Potts model: if |d(m_i, m_j) - d(V_i, V_j)| is below a threshold (YES), the pair of labels takes the low Potts cost; otherwise (NO) it takes the high cost.

Motivation: matching pictorial structures (Felzenszwalb et al., 2001).

[Figure: a model with parts P1, P2, P3 matched to an image; each label is a part pose (x, y, orientation, scale). Part likelihood from outline and texture; spatial prior over part configurations.]

Unary potentials are negative log likelihoods of the parts. Pairwise costs form a Potts model: valid pairwise configurations take the low cost (YES), invalid ones the high cost (NO).

Outline

- Integer Programming Formulation
- Previous Work
- Our Approach
  - Second Order Cone Programming (SOCP)
  - SOCP Relaxation
  - Robust Truncated Model
- Applications
  - Subgraph Matching
  - Pictorial Structures

Integer Programming Formulation. Consider two variables V1, V2 with labels {0, 1} and labelling m = {1, 0}. Unary cost vector u = [5 2 ; 2 4]^T, where 5 is the cost of V1 = 0, 2 is the cost of V1 = 1, and 2, 4 are the corresponding costs for V2.

Label vector x ∈ {-1, 1}^(|V||L|): x_i = 1 if the corresponding (variable, label) assignment is used, and x_i = -1 otherwise. For m = {1, 0}, x = [-1 1 ; 1 -1]^T (V1 ≠ 0, V1 = 1). Recall that the aim is to find the optimal x.

With this encoding, the sum of unary costs = (1/2) Σ_i u_i (1 + x_i).

Pairwise cost matrix P: entry P_ij is the cost of the pair of assignments i and j (zero within a single variable). With indices ordered (V1=0, V1=1, V2=0, V2=1):

P =
[ 0 0 0 3 ]
[ 0 0 1 0 ]
[ 0 1 0 0 ]
[ 3 0 0 0 ]

for example, 0 is the cost of (V1 = 0, V2 = 0) and 3 is the cost of (V1 = 0, V2 = 1).

Sum of pairwise costs = (1/4) Σ_ij P_ij (1 + x_i)(1 + x_j) = (1/4) Σ_ij P_ij (1 + x_i + x_j + X_ij), where X = x x^T, i.e. X_ij = x_i x_j.

Constraints. Each variable should be assigned a unique label: Σ_{i ∈ V_a} x_i = 2 - |L| for every variable V_a. Marginalization constraint: Σ_{j ∈ V_b} X_ij = (2 - |L|) x_i for every i and every variable V_b.
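As a sanity check, both constraints can be verified to hold for every integral labelling of a toy problem (a minimal sketch; the two-variable, two-label instance is a hypothetical example, not from the paper):

```python
import itertools
import numpy as np

# Toy instance: |V| = 2 variables, |L| = 2 labels.
# x is indexed by (variable, label) pairs: (V1,0), (V1,1), (V2,0), (V2,1).
nV, L = 2, 2
for m in itertools.product(range(L), repeat=nV):         # every labelling
    x = np.array([1.0 if m[a] == l else -1.0
                  for a in range(nV) for l in range(L)])
    X = np.outer(x, x)                                   # X = x x^T
    # Uniqueness: sum of x over the labels of one variable is 2 - |L|.
    for a in range(nV):
        assert x[a*L:(a+1)*L].sum() == 2 - L
    # Marginalization: sum of X_ij over j in V_b equals (2 - |L|) x_i.
    for i in range(nV * L):
        for b in range(nV):
            assert X[i, b*L:(b+1)*L].sum() == (2 - L) * x[i]
print("constraints hold for all integral labellings")
```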

Integer Programming Formulation (Chekuri et al., SODA 2001):

x* = argmin (1/2) Σ_i u_i (1 + x_i) + (1/4) Σ_ij P_ij (1 + x_i + x_j + X_ij)

subject to
- Σ_{i ∈ V_a} x_i = 2 - |L| for all V_a (convex)
- Σ_{j ∈ V_b} X_ij = (2 - |L|) x_i for all i, V_b (convex)
- x_i ∈ {-1, 1}, X = x x^T (non-convex)
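The objective can be evaluated directly from u, P and x. A minimal sketch on the two-variable example above, assuming (as a convention not stated on the slides) that each pairwise cost is stored once in P rather than symmetrically, so that the 1/4 factor recovers the edge cost exactly:

```python
import numpy as np

# Indexing: 0 = (V1,0), 1 = (V1,1), 2 = (V2,0), 3 = (V2,1).
u = np.array([5.0, 2.0, 2.0, 4.0])     # unary cost vector from the slides
P = np.zeros((4, 4))                   # pairwise costs, each edge stored once
P[0, 3] = 3.0                          # cost of (V1 = 0, V2 = 1)
P[1, 2] = 1.0                          # cost of (V1 = 1, V2 = 0)

def cost(x):
    """Integer-program objective for a label vector x in {-1, 1}^4."""
    X = np.outer(x, x)                 # X = x x^T, so X_ij = x_i x_j
    unary = 0.5 * np.sum(u * (1 + x))
    pairwise = 0.25 * np.sum(P * (1 + x[:, None] + x[None, :] + X))
    return unary + pairwise

x = np.array([-1.0, 1.0, 1.0, -1.0])   # labelling m = {1, 0}
print(cost(x))                         # 2 + 2 + 1 = 5.0
```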


Linear Programming Formulation (Chekuri et al., SODA 2001). Retain the convex part and relax the non-convex constraints: replace x_i ∈ {-1, 1} by x_i ∈ [-1, 1] and drop X = x x^T:

x* = argmin (1/2) Σ_i u_i (1 + x_i) + (1/4) Σ_ij P_ij (1 + x_i + x_j + X_ij)

subject to Σ_{i ∈ V_a} x_i = 2 - |L|, Σ_{j ∈ V_b} X_ij = (2 - |L|) x_i, x_i ∈ [-1, 1].

Feasible regions (1-D view): IP: x ∈ {-1, 1}, X = x². Relaxation 1: x ∈ [-1, 1], X = x². Relaxation 2 (LP): x ∈ [-1, 1], X unconstrained.

Algorithms based on this LP relaxation:

- Bounded algorithms proposed by Chekuri et al., SODA 2001
- α-expansion - Komodakis and Tziritas, ICCV 2005
- TRW - Wainwright et al., NIPS 2002
- TRW-S - Kolmogorov, AISTATS 2005

Efficient, because it uses Linear Programming, but not accurate.

Semidefinite Programming Formulation (Lovász and Schrijver, SIAM Optimization, 1990). Again retain the convex part of the integer program and relax the non-convex constraints: x_i ∈ {-1, 1} becomes x_i ∈ [-1, 1], and X = x x^T is relaxed as follows.

Consider the matrix [1, x^T ; x, X]. Requiring it to have rank 1 with X_ii = 1 recovers X = x x^T, but the rank constraint is non-convex. Retaining only X_ii = 1 and positive semidefiniteness gives a convex constraint.

Schur's Complement:

[A B ; B^T C] = [I 0 ; B^T A^{-1} I] [A 0 ; 0 C - B^T A^{-1} B] [I A^{-1} B ; 0 I]

so for A ≻ 0, [A B ; B^T C] ⪰ 0 if and only if C - B^T A^{-1} B ⪰ 0.

By Schur's complement,

[1 x^T ; x X] = [1 0 ; x I] [1 0 ; 0 X - x x^T] [1 x^T ; 0 I]

so X - x x^T ⪰ 0 if and only if [1 x^T ; x X] ⪰ 0.
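The equivalence can be checked numerically with eigenvalues (a sketch; the random vector and the 0.1·I padding are arbitrary choices, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=3)
X = np.outer(x, x) + 0.1 * np.eye(3)    # X - x x^T = 0.1 I, PSD by construction

# M = [[1, x^T], [x, X]] as in Schur's complement
M = np.block([[np.ones((1, 1)), x[None, :]],
              [x[:, None],      X]])

def psd(A):
    """PSD test via the smallest eigenvalue, with a small tolerance."""
    return bool(np.all(np.linalg.eigvalsh(A) >= -1e-9))

print(psd(X - np.outer(x, x)), psd(M))  # True True: the two conditions agree
```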

Semidefinite Programming Formulation (Lovász and Schrijver, SIAM Optimization, 1990). Relaxing X = x x^T to X_ii = 1 and X - x x^T ⪰ 0 gives:

x* = argmin (1/2) Σ_i u_i (1 + x_i) + (1/4) Σ_ij P_ij (1 + x_i + x_j + X_ij)

subject to Σ_{i ∈ V_a} x_i = 2 - |L|, Σ_{j ∈ V_b} X_ij = (2 - |L|) x_i, x_i ∈ [-1, 1], X_ii = 1, X - x x^T ⪰ 0.

Feasible regions (1-D view): IP: x ∈ {-1, 1}, X = x². Relaxation 1: x ∈ [-1, 1], X = x². Relaxation 2 (SDP): x ∈ [-1, 1], X ≥ x².

Formulated by Lovász and Schrijver, 1990. Finds a full X matrix. Applications: Max-cut - Goemans and Williamson, JACM 1995; Max-k-cut - de Klerk et al., 2000. Accurate, but not efficient because of Semidefinite Programming.

Previous Work - Overview

|            | LP                 | SDP       |
|------------|--------------------|-----------|
| Examples   | TRW-S, α-expansion | Max-k-Cut |
| Accuracy   | Low                | High      |
| Efficiency | High               | Low       |

Is there a middle path?


Second Order Cone Programming. A second order cone (SOC) is the set ||v|| ≤ t, or equivalently ||v||² ≤ st with s, t ≥ 0 (for example, x² + y² ≤ z²).

Second Order Cone Programming:

Minimize f^T x subject to ||A_i x + b_i|| ≤ c_i^T x + d_i, i = 1, ..., L.

Linear objective function; each constraint is an affine mapping of an SOC of dimension n_i; the feasible region is an intersection of conic regions.

LP ⊆ SOCP ⊆ SDP: by Schur's complement,

[tI v ; v^T t] ⪰ 0 if and only if t ≥ 0 and t² - v^T v ≥ 0, i.e. ||v|| ≤ t,

so every SOC constraint can be written as an SDP constraint.
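This containment can be sanity-checked numerically: the direct SOC test ||v|| ≤ t and the PSD test on the block matrix agree (a sketch with arbitrary random inputs):

```python
import numpy as np

def in_cone_direct(v, t):
    """SOC membership: ||v|| <= t."""
    return np.linalg.norm(v) <= t

def in_cone_sdp(v, t):
    """Equivalent PSD test on the block matrix [[t I, v], [v^T, t]]."""
    n = len(v)
    M = np.block([[t * np.eye(n), v[:, None]],
                  [v[None, :],    np.array([[t]])]])
    return bool(np.all(np.linalg.eigvalsh(M) >= -1e-9))

rng = np.random.default_rng(1)
agree = all(in_cone_direct(v, t) == in_cone_sdp(v, t)
            for v, t in ((rng.normal(size=3), rng.uniform(0, 3))
                         for _ in range(100)))
print(agree)   # True
```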


Matrix Dot Product: A · B = Σ_ij A_ij B_ij. For 2×2 matrices, [A11 A12 ; A21 A22] · [B11 B12 ; B21 B22] = A11 B11 + A12 B12 + A21 B21 + A22 B22.

SDP Relaxation. Starting from the SDP relaxation above, we derive an SOCP relaxation by further relaxing the constraint X - x x^T ⪰ 0.

1-D Example. Here X - x x^T ⪰ 0 reduces to X - x² ≥ 0. In general, the dot product of two semidefinite matrices is non-negative: if A ⪰ 0 then A · (X - x x^T) ≥ 0. In 1-D this gives x² ≤ X, an SOC of the form ||v||² ≤ st.

Feasible regions (1-D view): IP: x ∈ {-1, 1}, X = x². Relaxation 1: x ∈ [-1, 1], X = x². Relaxation 2: x ∈ [-1, 1], X ≥ x² - the same as the SDP formulation.

2-D Example. X = [X11 X12 ; X21 X22] = [1 X12 ; X12 1] (using X_ii = 1), and x x^T = [x1² x1 x2 ; x1 x2 x2²].

Take C_1 = [1 0 ; 0 0] ⪰ 0. Then (X - x x^T) · C_1 ≥ 0 gives 1 - x1² ≥ 0, i.e. -1 ≤ x1 ≤ 1.

Take C_2 = [0 0 ; 0 1] ⪰ 0. Then (X - x x^T) · C_2 ≥ 0 gives 1 - x2² ≥ 0, i.e. -1 ≤ x2 ≤ 1. Together, C_1 and C_2 recover the LP relaxation.

Take C_3 = [1 1 ; 1 1] ⪰ 0. Then (X - x x^T) · C_3 ≥ 0 gives (x1 + x2)² ≤ 2 + 2 X12, an SOC of the form ||v||² ≤ st.

Take C_4 = [1 -1 ; -1 1] ⪰ 0. Then (X - x x^T) · C_4 ≥ 0 gives (x1 - x2)² ≤ 2 - 2 X12, again an SOC of the form ||v||² ≤ st.

SOCP Relaxation (Kim and Kojima, 2000). For any C = U U^T ⪰ 0, the inequality (X - x x^T) · C ≥ 0 gives ||U^T x||² ≤ X · C, an SOC of the form ||v||² ≤ st. Continue for C_2, C_3, ..., C_n.
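The general construction can be checked numerically for a random C = U U^T (a sketch; U, x, and the 0.5·I padding are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(2)
U = rng.normal(size=(3, 2))
C = U @ U.T                              # C = U U^T is positive semidefinite
x = rng.uniform(-1, 1, size=3)
X = np.outer(x, x) + 0.5 * np.eye(3)     # X - x x^T is PSD by construction

lhs = np.linalg.norm(U.T @ x) ** 2       # ||U^T x||^2
rhs = np.sum(X * C)                      # Frobenius dot product X . C
print(lhs <= rhs)                        # True: the SOC constraint holds
```

Here rhs - lhs equals (X - x x^T) · C = 0.5·trace(C), which is non-negative because both factors are PSD.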

SOCP Relaxation. How many constraints are needed for SOCP = SDP? Infinitely many - one for every C ⪰ 0. We instead specify constraints similar to the 2-D example.

SOCP Relaxation (Muramatsu and Suzuki, 2001). The constraints hold for the four semidefinite matrices [1 0 ; 0 0], [0 0 ; 0 1], [1 1 ; 1 1], [1 -1 ; -1 1], and therefore for any linear combination a [1 0 ; 0 0] + b [0 0 ; 0 1] + c [1 1 ; 1 1] + d [1 -1 ; -1 1] with a, b, c, d ≥ 0. This combination equals [a + c + d, c - d ; c - d, b + c + d], which includes every semidefinite matrix whose diagonal elements dominate its off-diagonal elements.

SOCP Relaxation - A. Replace X_ii = 1 and X - x x^T ⪰ 0 by the SOC constraints

(x_i + x_j)² ≤ 2 + 2 X_ij and (x_i - x_j)² ≤ 2 - 2 X_ij,

specified only for pairs with P_ij ≠ 0:

x* = argmin (1/2) Σ_i u_i (1 + x_i) + (1/4) Σ_ij P_ij (1 + x_i + x_j + X_ij)

subject to Σ_{i ∈ V_a} x_i = 2 - |L|, Σ_{j ∈ V_b} X_ij = (2 - |L|) x_i, x_i ∈ [-1, 1], and the SOC constraints above.
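For integral labellings (x_i ∈ {-1, 1} with X_ij = x_i x_j) both SOC constraints hold with equality, so no integral solution is cut off - a quick check:

```python
import itertools

# At every integral point, (x_i +/- x_j)^2 equals 2 +/- 2 X_ij exactly.
for xi, xj in itertools.product([-1, 1], repeat=2):
    Xij = xi * xj                        # X = x x^T at an integral point
    assert (xi + xj) ** 2 == 2 + 2 * Xij
    assert (xi - xj) ** 2 == 2 - 2 * Xij
print("SOC constraints are tight at all integral points")
```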

Triangular Inequality. At least two of x_i, x_j, x_k have the same sign, so at least one of X_ij, X_jk, X_ik equals one. This gives:

X_ij + X_jk + X_ik ≥ -1
X_ij - X_jk - X_ik ≥ -1
-X_ij - X_jk + X_ik ≥ -1
-X_ij + X_jk - X_ik ≥ -1

SOCP-B = SOCP-A + triangular inequalities.
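A brute-force check over all sign patterns confirms that the four inequalities hold for every integral labelling (a quick sketch):

```python
import itertools

for xi, xj, xk in itertools.product([-1, 1], repeat=3):
    Xij, Xjk, Xik = xi * xj, xj * xk, xi * xk
    assert  Xij + Xjk + Xik >= -1
    assert  Xij - Xjk - Xik >= -1
    assert -Xij - Xjk + Xik >= -1
    assert -Xij + Xjk - Xik >= -1
print("triangular inequalities hold for all integral labellings")
```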


Robust Truncated Model. The pairwise cost of incompatible labels is truncated: Potts model, truncated linear model, truncated quadratic model. These models are robust to noise and widely used in computer vision (segmentation, stereo).

Robust Truncated Model. The pairwise cost matrix can be made sparse by reparameterization: for example, P = [0.5 0.5 0.3 0.3 0.5] becomes Q = [0 0 -0.2 -0.2 0] after subtracting the truncation value. A sparse Q matrix means fewer constraints.
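The reparameterization just subtracts the truncation value from every pairwise cost, so all truncated (incompatible) pairs get coefficient zero. A sketch, assuming the row shown comes from a truncated linear model with truncation value 0.5 (the label differences below are hypothetical):

```python
import numpy as np

def truncated_linear(d, w=1.0, trunc=0.5):
    """Truncated linear pairwise cost: min(w * |d|, trunc)."""
    return np.minimum(w * np.abs(d), trunc)

# Costs are capped at the truncation value for large label differences.
d = np.array([0.0, 0.2, 0.3, 0.7, 1.5])
print(truncated_linear(d))                    # last two entries capped at 0.5

# Reparameterization: subtract the truncation value from every pairwise cost.
P_row = np.array([0.5, 0.5, 0.3, 0.3, 0.5])   # row of P from the slide
Q_row = P_row - 0.5
print(np.count_nonzero(Q_row))                # 2 nonzeros -> fewer constraints
```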

Compatibility Constraint. Q(m_a, m_b) < 0 for the labels taken by variables V_a and V_b. Relaxation: Σ_{i ∈ V_a, j ∈ V_b} Q_ij (1 + x_i + x_j + X_ij) < 0. SOCP-C = SOCP-B + compatibility constraints.

SOCP Relaxation. More accurate than LP, more efficient than SDP. Time complexity O(|V|³ |L|³) - the same as LP - and approximate algorithms exist for the LP relaxation. We use |V| ≤ 10 and |L| ≤ 200.


Subgraph Matching (Torr, 2003; Schellewald et al., 2005). Match G1 into G2; the MRF has variables V1, V2, V3 with labels {A, B, C, D}. Unary costs are uniform; pairwise costs form a Potts model.

Subgraph Matching experiments: 1000 pairs of graphs G1 and G2; the number of vertices in G2 is between 20 and 30; the number of vertices in G1 is 0.25 times that of G2; 5% noise is added to the vertex positions. This is an NP-hard problem.

Subgraph Matching results:

| Method | Time (sec) | Accuracy (%) |
|--------|------------|--------------|
| LP     | 0.85       | 6.64         |
| SDP-A  | 35.0       | 93.11        |
| SOCP-A | 3.0        | 92.01        |
| SOCP-B | 4.5        | 94.79        |
| SOCP-C | 4.8        | 96.18        |


Pictorial Structures (Felzenszwalb et al., 2001). Parts P1, P2, P3 are matched to the image; each label is a part pose (x, y, orientation, scale). Unary costs are negative log likelihoods (outline and texture); pairwise costs form a Potts model. |V| = 10, |L| = 200.

Pictorial Structures results.

[Figure: matching results for LBP, GBP and SOCP on example images.]

LBP and GBP do not converge.

Pictorial Structures: ROC curves for 450 positive and 2400 negative images.

Conclusions

- We presented an SOCP relaxation for MAP estimation of MRFs
- More efficient than SDP
- More accurate than LP, LBP, GBP
- The number of variables can be reduced for the robust truncated model
- Provides excellent results for subgraph matching and pictorial structures

Future Work

- Quality of solution: additive bounds exist; multiplicative bounds for special cases?
- A message passing algorithm, similar to TRW-S or α-expansion, to handle image-sized MRFs?
