
Machine Learning for Vision-Based Motion Analysis: Learning pullback metrics for linear models. Oxford Brookes Vision Group, Oxford Brookes University, 17/10/2008.


1 Machine Learning for Vision-Based Motion Analysis: Learning pullback metrics for linear models. Fabio Cuzzolin, Oxford Brookes Vision Group, Oxford Brookes University, 17/10/2008.

2 Learning pullback metrics for linear models (outline): distances between dynamical models; learning a metric from a training set; pullback metrics; spaces of linear systems and the Fisher metric; experiments on scalar AR(2) models.

3 Distances between dynamical models. Problem: motion classification. Approach: represent each movement as a linear dynamical model; for instance, each image sequence can be mapped to an ARMA or AR linear model. Classification then reduces to finding a suitable distance function in the space of dynamical models, which can be used in any distance-based classification scheme (k-NN, SVM, etc.); a sketch of this pipeline follows.
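To make the pipeline concrete, here is a minimal sketch (toy code, not from the talk): each sequence is assumed to have already been mapped to a model by some identification step, model_distance stands for any of the distances discussed on the next slide, and a k-NN classifier is run on a precomputed distance matrix.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def pairwise_distances(models, model_distance):
        # Distance matrix between already-identified dynamical models
        n = len(models)
        return np.array([[model_distance(models[i], models[j]) for j in range(n)]
                         for i in range(n)])

    def classify(train_models, train_labels, test_models, model_distance, k=1):
        # k-NN on a precomputed distance matrix; any model distance can be plugged in
        knn = KNeighborsClassifier(n_neighbors=k, metric="precomputed")
        knn.fit(pairwise_distances(train_models, model_distance), train_labels)
        D_test = np.array([[model_distance(t, m) for m in train_models]
                           for t in test_models])
        return knn.predict(D_test)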

4 Proposed distances: the Fisher information matrix [Amari] on a family of probability distributions; the Kullback-Leibler divergence; the gap metric [Zames, El-Sakkary], which compares the graphs associated with linear systems seen as input-output maps; the cepstrum norm [Martin]; subspace angles [De Cock]. All of them are task-specific! A subspace-angle example is sketched below.
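As an example of the subspace-angle family, here is a rough sketch (my own simplification, not the exact De Cock-De Moor construction): two scalar AR(2) models are compared through the principal angles between the column spaces of finite-horizon observability matrices of their companion-form realizations, combined into a Martin-style score.

    import numpy as np
    from scipy.linalg import subspace_angles

    def companion(a1, a2):
        # Companion-form state matrix of the AR(2) model y_t = a1*y_{t-1} + a2*y_{t-2} + e_t
        return np.array([[a1, a2], [1.0, 0.0]])

    def observability(A, C, horizon=10):
        # Stack C, CA, CA^2, ... up to the given horizon
        rows, Ak = [], np.eye(A.shape[0])
        for _ in range(horizon):
            rows.append(C @ Ak)
            Ak = A @ Ak
        return np.vstack(rows)

    def subspace_distance(p, q, horizon=10):
        # Principal angles between the observability subspaces of the two models
        C = np.array([[1.0, 0.0]])
        O1 = observability(companion(*p), C, horizon)
        O2 = observability(companion(*q), C, horizon)
        theta = subspace_angles(O1, O2)
        score = -np.log(np.clip(np.prod(np.cos(theta) ** 2), 1e-12, 1.0))
        return np.sqrt(score)

    print(subspace_distance((0.5, -0.3), (0.9, -0.5)))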

5 Learning pullback metrics for linear models (outline): distances between dynamical models; learning a metric from a training set; pullback metrics; spaces of linear systems and the Fisher metric; experiments on scalar AR(2) models.

6 Learning metrics from a training set. It makes no sense to choose a single distance for all possible classification problems, since labels can be assigned arbitrarily to dynamical systems, no matter what their structure is. When some a-priori information is available (a training set), we can learn in a supervised fashion the best metric for the classification problem. A mathematical tool for this task: volume minimization of pullback metrics.

7 Learning distances. Many algorithms take an input dataset and map it to an embedded space, implicitly learning a metric (LLE, etc.), but they fail to learn a full metric for the whole input space. Optimal Mahalanobis distance [Xing, Jordan]: maximizes classification performance over linear maps y = A^(1/2) x. Relevant component analysis [Shental et al.]: changes the feature space by a global linear transformation which assigns large weights to relevant dimensions. A minimal sketch of the Mahalanobis idea follows.
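For concreteness, a minimal RCA-style sketch (toy code of mine, not the cited implementations): the within-class covariance is estimated from the labeled data and its inverse plays the role of A in the Mahalanobis distance d(x, y)^2 = (x - y)^T A (x - y).

    import numpy as np

    def rca_metric(X, labels):
        # Whiten the within-class covariance: A = Cw^{-1} (RCA-style estimate)
        X, labels = np.asarray(X, float), np.asarray(labels)
        centered = [X[labels == c] - X[labels == c].mean(axis=0) for c in np.unique(labels)]
        Cw = np.cov(np.vstack(centered).T)
        return np.linalg.inv(Cw)

    def mahalanobis(x, y, A):
        d = np.asarray(x, float) - np.asarray(y, float)
        return float(np.sqrt(d @ A @ d))

    # toy usage on two elongated classes
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal([0, 0], [1.0, 0.1], (20, 2)),
                   rng.normal([3, 0], [1.0, 0.1], (20, 2))])
    y = np.array([0] * 20 + [1] * 20)
    print(mahalanobis(X[0], X[25], rca_metric(X, y)))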

8 Learning pullback metrics for linear models (outline): distances between dynamical models; learning a metric from a training set; pullback metrics; spaces of linear systems and the Fisher metric; experiments on scalar AR(2) models.

9 Learning pullback metrics. Consider a family of diffeomorphisms F between the original space M and a metric space N (which can be M itself). Each diffeomorphism F induces on M a pullback metric, and the pullback geodesics are liftings of the original ones.

10 Pullback metrics - detail. Given a diffeomorphism F on M, the push-forward map F_* : TM -> TM carries tangent vectors along F. Given a metric on M, g : TM x TM -> R, the pullback metric is defined by (F^* g)(u, v) = g(F_* u, F_* v). A numerical sketch follows.
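In coordinates the pullback metric is a "Jacobian sandwich": if J_F(x) is the Jacobian of F at x, then (F^* g)(x) = J_F(x)^T g(F(x)) J_F(x). A small numerical sketch (toy code, names mine):

    import numpy as np

    def numerical_jacobian(F, x, eps=1e-6):
        # Central-difference Jacobian of the map F at x
        x = np.asarray(x, float)
        m = np.asarray(F(x)).size
        J = np.zeros((m, x.size))
        for j in range(x.size):
            dx = np.zeros(x.size)
            dx[j] = eps
            J[:, j] = (np.asarray(F(x + dx)) - np.asarray(F(x - dx))) / (2 * eps)
        return J

    def pullback_metric(F, g, x):
        # (F^* g)(x) = J_F(x)^T  g(F(x))  J_F(x)
        J = numerical_jacobian(F, x)
        return J.T @ g(F(np.asarray(x, float))) @ J

    # toy usage: pull the Euclidean metric back through a nonlinear map of the plane
    F = lambda x: np.array([x[0] + 0.3 * x[1] ** 2, x[1]])
    g = lambda y: np.eye(2)
    print(pullback_metric(F, g, [0.5, -0.2]))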

11 Inverse volume maximization. The natural criterion would be to optimize the classification performance directly in a nonlinear setup, but this is hard to formulate and solve, so it is reasonable to choose a different but related objective function: maximizing the inverse volume of the pullback metric. This finds the manifold which best interpolates the data (geodesics have to pass through crowded regions). A sketch of such an objective is given below.
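The slide does not spell the objective out; assuming (my reading) that it is the total inverse volume element of the pullback metric over the training set, an optimization sketch reusing pullback_metric from the previous block could look like this. The two-parameter scaling family is a toy stand-in; a real family would consist of genuine diffeomorphisms such as the triangle-stretching maps of slide 14, with parameters kept away from degenerate values.

    import numpy as np
    from scipy.optimize import minimize

    def inverse_volume(params, make_F, g, training_points):
        # Assumed objective: sum over training points of 1 / sqrt(det(pullback metric))
        F = make_F(params)                       # diffeomorphism picked by the parameters
        total = 0.0
        for x in training_points:
            G = pullback_metric(F, g, x)         # from the previous sketch
            total += 1.0 / np.sqrt(max(np.linalg.det(G), 1e-12))
        return total

    # maximize the inverse volume = minimize its negative over the family parameters
    make_F = lambda lam: (lambda x: np.array([lam[0] * x[0], lam[1] * x[1]]))  # toy family
    g = lambda y: np.eye(2)
    pts = np.random.default_rng(1).normal(size=(30, 2))
    res = minimize(lambda p: -inverse_volume(p, make_F, g, pts),
                   x0=[1.0, 1.0], bounds=[(0.2, 5.0), (0.2, 5.0)])
    print(res.x)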

12 Learning pullback metrics for linear models (outline): distances between dynamical models; learning a metric from a training set; pullback metrics; spaces of linear systems and the Fisher metric; experiments on scalar AR(2) models.

13 Space of AR(2) models. Given an input sequence, we can identify the parameters of the linear model which best describes it; here we use autoregressive models of order 2, AR(2). The Fisher metric is placed on the space of AR(2) models, and we compute the geodesics of the pullback metric on M. An identification sketch follows.
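A least-squares identification sketch for a scalar AR(2) model (toy code; the talk's estimator may differ):

    import numpy as np

    def fit_ar2(y):
        # Least-squares fit of y_t = a1*y_{t-1} + a2*y_{t-2} + e_t
        y = np.asarray(y, float)
        Y = y[2:]                                  # targets y_t
        X = np.column_stack([y[1:-1], y[:-2]])     # regressors y_{t-1}, y_{t-2}
        (a1, a2), *_ = np.linalg.lstsq(X, Y, rcond=None)
        sigma2 = np.mean((Y - X @ np.array([a1, a2])) ** 2)   # innovation variance
        return a1, a2, sigma2

    # toy usage: recover the parameters of a simulated AR(2) sequence
    rng = np.random.default_rng(0)
    y = np.zeros(500)
    for t in range(2, 500):
        y[t] = 0.6 * y[t - 1] - 0.3 * y[t - 2] + rng.normal(scale=0.1)
    print(fit_ar2(y))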

14 A family of diffeomorphisms. The family stretches the triangle of stable AR(2) parameters towards the vertex with the largest lambda; one possible construction is sketched below.
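One simple way to realize such a family (an illustrative choice of mine, not necessarily the maps used in the talk) is to reweight the barycentric coordinates of a point with respect to the triangle's vertices by a positive vector lambda and renormalize; points then drift towards the vertex with the largest lambda, and for positive lambda the map is a smooth bijection of the open triangle onto itself.

    import numpy as np

    # Vertices of the triangle of stable AR(2) parameters (a1, a2)
    V = np.array([[2.0, -1.0], [-2.0, -1.0], [0.0, 1.0]])

    def barycentric(p, V=V):
        # Solve for barycentric coordinates b with sum(b) = 1 and b @ V = p
        T = np.vstack([V.T, np.ones(3)])
        return np.linalg.solve(T, np.append(p, 1.0))

    def stretch(p, lam, V=V):
        # Reweight the barycentric coordinates by lam and renormalize;
        # points move towards the vertex with the largest lam
        b = barycentric(p, V)
        w = lam * b
        return (w / w.sum()) @ V

    print(stretch(np.array([0.0, -0.3]), lam=np.array([3.0, 1.0, 1.0])))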

15 Effect of the optimal diffeomorphism on a training set of labeled dynamical models.

16 Learning pullback metrics for linear models (outline): distances between dynamical models; learning a metric from a training set; pullback metrics; spaces of linear systems and the Fisher metric; experiments on scalar AR(2) models.

17 Experiments on the Mobo database. Experiments on action and identity recognition on the Mobo database. A single feature (box width) is used; a nearest-neighbor classifier is applied to image sequences represented as AR(2) models, and the relative performance of the pullback metric and of the other distances is measured. An evaluation harness is sketched below.
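A minimal leave-one-out 1-NN harness (toy code) for measuring the relative performance of candidate distances on fitted AR(2) parameter pairs; either a classical distance or the learned pullback metric can be plugged in as dist.

    import numpy as np

    def loo_nn_accuracy(models, labels, dist):
        # Leave-one-out 1-NN accuracy under a given distance between AR(2) models
        n, correct = len(models), 0
        for i in range(n):
            d = [dist(models[i], models[j]) if j != i else np.inf for j in range(n)]
            correct += labels[int(np.argmin(d))] == labels[i]
        return correct / n

    # toy usage: models[k] holds the (a1, a2) pair fitted to the box-width
    # signal of sequence k (e.g. with fit_ar2 from the earlier sketch)
    euclidean = lambda p, q: float(np.linalg.norm(np.subtract(p, q)))
    models = [(0.50, -0.20), (0.52, -0.22), (0.90, -0.50), (0.88, -0.48)]
    labels = [0, 0, 1, 1]
    print(loo_nn_accuracy(models, labels, euclidean))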

18 Results – ID recognition. Recognizing the identity of 25 people from 6 different views (hard!). Pullback metrics based on two different diffeomorphisms are compared with other classical, applicable a-priori distances.

19 Results - action (plots): action recognition performance, all views considered, for the second-best distance function and for the pullback Fisher metric; action recognition on view 5 only, showing the difference between the classification rates of the pullback metric and the second best.

20 Conclusions. Representing motions as dynamical systems reduces classification to finding a distance between systems. Given a training set, we can learn the best such metric via the formalism of pullback metrics induced by the Fisher distance, by designing a suitable family of diffeomorphisms. The extension to multilinear systems is easy! Future work: a better objective function.

