Bilinear Classifiers for Visual Recognition

1 Bilinear Classifiers for Visual Recognition
Hamed Pirsiavash, Deva Ramanan, Charless Fowlkes
Computational Vision Lab, University of California, Irvine, CA, USA

Main ideas:
-Treat image features and templates as matrices or tensors rather than simply "vectorizing" them.
-Learn a low-rank template by alternating minimization, using a standard linear SVM solver.
-Share a subset of parameters (a subspace) across different problems (transfer learning).

Feature representation:
-Vectorizing (traditional): a detection window of n_y x n_x cells with n_f features per cell is flattened into a feature vector x of length n_y*n_x*n_f.
-Our method: keep the window as a feature matrix X of size n_s x n_f, where n_s = n_y*n_x.

Linear SVM: for a given set of training pairs {X_n, y_n}:
min_W L(W) = 1/2 Tr(W^T W) + C sum_n max(0, 1 - y_n f(X_n))
(the first term is the regularizer; the second encodes the margin constraints.)

Detection function:
Was linear: f(X) = Tr(W^T X)
Now it is bilinear: f(X) = Tr(W_s^T X W_f)

Rank restriction: W = W_s W_f^T, where W_s (n_s x d) is a low-dimensional spatial template, W_f (n_f x d) is a feature subspace, and d <= min(n_s, n_f).

Learning: the objective is biconvex in (W_s, W_f), so we use coordinate descent with an off-the-shelf linear SVM solver in the loop, using PCA to initialize W_f.

Change of basis: with W_s fixed and A = W_s^T W_s, define W~_f = W_f A^{1/2} and X~_n = X_n^T W_s A^{-1/2}; then
min_{W~_f} L(W~_f; W_s) = 1/2 Tr(W~_f^T W~_f) + C sum_n max(0, 1 - y_n Tr(W~_f^T X~_n))
which is a standard linear SVM in W~_f. The update for W_s with W_f fixed is symmetric.

Joint minimization (transfer learning): M problems share a single feature subspace W_f, each with its own spatial template W_s^m, so that W^m = W_s^m W_f^T; all problems are trained jointly:
min_{W_f, W_s^1, ..., W_s^M} sum_{m=1}^M L(W_s^m; W_f)

Advantages:
-The rank constraint provides natural regularization.
-10x runtime speedup with no loss in performance (fewer convolutions at detection time).
-Shared subspaces can be learned across multiple classes or training sets, allowing transfer learning (e.g., sharing a subspace between a "human detector" and a "cat detector").

Experiment 1: Pedestrian detection on the INRIA motion dataset.
[Figure: a window on a sample image and the corresponding sample template]

Experiment 2: Human action recognition on the UCF sports dataset.
Results: Linear 0.518 (not always feasible), PCA on features 0.444, Bilinear 0.648.

Experiment 3: Sharing a subspace on the UCF sports dataset (using only two training samples for each class).
[Figure: per-class spatial templates W_s^1, ..., W_s^m trained on each set at coordinate descent iteration 1 and refined at iteration 2]

Average classification rate:
                                 Coordinate descent iteration 1   Coordinate descent iteration 2
Independent bilinear (C=0.01)               0.222                            0.289
Joint bilinear (C=0.1)                      0.269                            0.356

Related work:
-Tenenbaum & Freeman's bilinear model (Neural Computation '00)
-Wolf et al.'s low-rank SVM paper (CVPR '07)
-Ando and Zhang's transfer learning paper (JMLR '05)
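As a concrete illustration of the bilinear detection function above, the following NumPy sketch (dimensions and random data are purely illustrative, not taken from the poster) checks that the bilinear score Tr(W_s^T X W_f) equals the linear score Tr(W^T X) when the template is rank-restricted as W = W_s W_f^T.

import numpy as np

def linear_score(W, X):
    """Linear template score: f(X) = Tr(W^T X), i.e. <vec(W), vec(X)>."""
    return np.trace(W.T @ X)

def bilinear_score(Ws, Wf, X):
    """Bilinear score: f(X) = Tr(Ws^T X Wf), i.e. Tr(W^T X) with W = Ws Wf^T."""
    return np.trace(Ws.T @ X @ Wf)

# Toy check of the equivalence on random data (illustrative dimensions only).
ny, nx, nf, d = 8, 4, 16, 5           # spatial cells, feature dim, rank
ns = ny * nx
rng = np.random.default_rng(0)
Ws = rng.standard_normal((ns, d))     # low-dimensional spatial template
Wf = rng.standard_normal((nf, d))     # feature subspace
X  = rng.standard_normal((ns, nf))    # feature matrix of one detection window
W  = Ws @ Wf.T                        # rank-d template, d <= min(ns, nf)
assert np.isclose(linear_score(W, X), bilinear_score(Ws, Wf, X))

The speedup claimed on the poster comes from the same factorization: projecting the features through W_f first (X @ Wf) reduces each window to d channels before the spatial template is applied.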

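The coordinate-descent learning procedure can be sketched as follows. This is a simplified sketch, not the authors' code: it assumes scikit-learn's LinearSVC as the off-the-shelf linear SVM solver and scikit-learn's PCA for initialization, and it uses the plain SVM regularizer on each factor rather than the exact change-of-basis regularizer given above.

import numpy as np
from sklearn.svm import LinearSVC            # stand-in for an off-the-shelf linear SVM
from sklearn.decomposition import PCA

def train_bilinear_svm(Xs, y, d, n_iters=2, C=0.01):
    """Alternating-minimization sketch for a rank-d bilinear SVM.

    Xs : array of shape (N, ns, nf) -- one feature matrix per training window
    y  : labels in {-1, +1}
    Simplification: each step regularizes the free factor directly instead of
    applying the change of basis from the poster.
    """
    N, ns, nf = Xs.shape
    # Initialize the feature subspace Wf with PCA on the per-cell feature vectors.
    Wf = PCA(n_components=d).fit(Xs.reshape(N * ns, nf)).components_.T   # (nf, d)
    for _ in range(n_iters):
        # Fix Wf, learn Ws: each window becomes the (ns*d)-vector vec(X Wf),
        # since Tr(Ws^T X Wf) = <vec(Ws), vec(X Wf)>.
        Zs = (Xs @ Wf).reshape(N, ns * d)
        Ws = LinearSVC(C=C, fit_intercept=False).fit(Zs, y).coef_.reshape(ns, d)
        # Fix Ws, learn Wf: each window becomes the (nf*d)-vector vec(X^T Ws).
        Zf = (np.transpose(Xs, (0, 2, 1)) @ Ws).reshape(N, nf * d)
        Wf = LinearSVC(C=C, fit_intercept=False).fit(Zf, y).coef_.reshape(nf, d)
    return Ws, Wf    # final template is W = Ws @ Wf.T

Because each subproblem is a standard linear SVM, the whole loop stays within existing solvers; the biconvexity noted on the poster is what guarantees each step cannot increase the objective.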

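For the shared-subspace (joint minimization) setting, one coordinate-descent step for W_f can be sketched like this: with every per-class W_s^m held fixed, the M problems pool into a single linear SVM in vec(W_f). This is a hedged sketch under the same simplifications as above (LinearSVC, plain regularizer); the function and argument names are hypothetical.

import numpy as np
from sklearn.svm import LinearSVC

def update_shared_subspace(problems, Ws_list, d, C=0.1):
    """One coordinate-descent step for the shared feature subspace Wf.

    problems : list of (Xs, y) pairs, one binary problem per class;
               Xs has shape (N_m, ns, nf), y is in {-1, +1}.
    Ws_list  : current per-class spatial templates Ws^m, each of shape (ns, d).
    """
    feats, labels = [], []
    for (Xs, y), Ws in zip(problems, Ws_list):
        # Each window contributes the (nf*d)-vector vec(X^T Ws^m).
        Z = (np.transpose(Xs, (0, 2, 1)) @ Ws).reshape(len(Xs), -1)
        feats.append(Z)
        labels.append(y)
    svm = LinearSVC(C=C, fit_intercept=False).fit(np.vstack(feats), np.hstack(labels))
    nf = problems[0][0].shape[2]
    return svm.coef_.reshape(nf, d)   # shared subspace reused by every class

The per-class W_s^m updates in the other half of each iteration are independent linear SVMs, exactly as in the single-class sketch; only the W_f step couples the problems, which is what allows the transfer-learning gains reported in Experiment 3.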