
1
Olivier Duchenne, Armand Joulin, Jean Ponce (Willow Lab), ICCV 2011

2
Many applications:
1. Object recognition
2. Text categorization
3. Time-series prediction
4. Gene expression profile analysis
...

3
Given a set of data (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), the kernel method maps it into a potentially much higher-dimensional feature space F.

4
For a given learning problem, one now considers the same algorithm in F instead of R^N; that is, one works with the sample (φ(x_1), y_1), (φ(x_2), y_2), ..., (φ(x_n), y_n). The kernel method seeks a pattern among the data in the feature space.

5
Idea: a nonlinear problem in a lower-dimensional space can be solved by a linear method in a higher-dimensional space. Example:
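A minimal sketch of this idea (the data and the feature map here are made up for illustration, not taken from the slides): points inside and outside an interval are not linearly separable on the real line, but become linearly separable after the map φ(x) = (x, x²).

```python
# Hypothetical 1-D data: label +1 outside [-1, 1], label -1 inside.
# Not linearly separable in R, but separable in R^2 after phi(x) = (x, x^2).

def phi(x):
    """Map a scalar to a 2-D feature vector."""
    return (x, x * x)

data = [(-2.0, 1), (-0.5, -1), (0.3, -1), (1.7, 1)]  # (x, label)

def classify(x, w=(0.0, 1.0), b=-1.0):
    """Linear classifier in feature space: the hyperplane x2 = 1
    (w = (0, 1), b = -1) separates the two classes."""
    fx = phi(x)
    score = w[0] * fx[0] + w[1] * fx[1] + b
    return 1 if score > 0 else -1

assert all(classify(x) == y for x, y in data)
```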

7
[Kernel function] A kernel function is a function k that for all x, z ∈ X satisfies k(x, z) = ⟨φ(x), φ(z)⟩, where φ is a mapping from X to an (inner product) feature space F.

8
The computation of a scalar product between two feature-space vectors, ⟨φ(x), φ(z)⟩, can be readily reformulated in terms of a kernel function k.
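This equivalence can be checked numerically. For the homogeneous degree-2 polynomial kernel k(x, z) = (x · z)², the explicit feature map on 2-D inputs is φ(x) = (x₁², x₂², √2·x₁x₂) — a standard textbook example, not one from the slides:

```python
import math

def k_poly2(x, z):
    """Homogeneous polynomial kernel of degree 2: k(x, z) = (x . z)^2."""
    return (x[0] * z[0] + x[1] * z[1]) ** 2

def phi(x):
    """Explicit degree-2 feature map for 2-D inputs."""
    return (x[0] ** 2, x[1] ** 2, math.sqrt(2) * x[0] * x[1])

x, z = (1.0, 2.0), (3.0, -1.0)
lhs = k_poly2(x, z)                                   # kernel value
rhs = sum(a * b for a, b in zip(phi(x), phi(z)))      # <phi(x), phi(z)>
assert abs(lhs - rhs) < 1e-9
```

The kernel evaluation costs one 2-D dot product, while the explicit route builds 3-D feature vectors first; for degree-d kernels on high-dimensional inputs this gap grows combinatorially, which is the point of the reformulation.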

9
Is φ necessary? No.
What kind of k can be used? A symmetric positive semi-definite one (its kernel matrix is symmetric PSD).
Given a feature mapping φ, can we compute the inner product in feature space? Yes.
Given a kernel function k, does a corresponding feature mapping exist? Yes [Mercer's theorem].
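The symmetry and positive semi-definiteness conditions can be spot-checked on a concrete kernel matrix. The sketch below builds a Gaussian kernel matrix on a few made-up 1-D points, then verifies symmetry exactly and checks vᵀKv ≥ 0 on random vectors (a sampled necessary condition, not a proof of PSD-ness):

```python
import math
import random

def rbf(x, z, sigma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

pts = [0.0, 0.7, 1.5, 3.0]          # illustrative data points
n = len(pts)
K = [[rbf(a, b) for b in pts] for a in pts]

# Symmetry: K[i][j] == K[j][i]
assert all(abs(K[i][j] - K[j][i]) < 1e-12 for i in range(n) for j in range(n))

# Positive semi-definiteness (sampled check): v^T K v >= 0
random.seed(0)
for _ in range(100):
    v = [random.uniform(-1, 1) for _ in pts]
    q = sum(v[i] * K[i][j] * v[j] for i in range(n) for j in range(n))
    assert q >= -1e-9
```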

10
Linear kernel
Polynomial kernel
RBF (Gaussian) kernel
Inverse multiquadric kernel
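The four kernels listed above have standard forms, sketched below (the slide does not give its parameter choices, so the defaults c, d, and σ here are illustrative):

```python
import math

def dot(x, z):
    return sum(a * b for a, b in zip(x, z))

def linear(x, z):
    """k(x, z) = x . z"""
    return dot(x, z)

def polynomial(x, z, c=1.0, d=3):
    """k(x, z) = (x . z + c)^d  -- c and d are illustrative defaults."""
    return (dot(x, z) + c) ** d

def rbf(x, z, sigma=1.0):
    """k(x, z) = exp(-||x - z||^2 / (2 sigma^2))"""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq / (2 * sigma ** 2))

def inverse_multiquadric(x, z, c=1.0):
    """k(x, z) = 1 / sqrt(||x - z||^2 + c^2)"""
    sq = sum((a - b) ** 2 for a, b in zip(x, z))
    return 1.0 / math.sqrt(sq + c ** 2)

x, z = (1.0, 0.0), (0.0, 1.0)
# linear: 0.0; polynomial: 1.0; rbf: exp(-1); inverse multiquadric: 1/sqrt(3)
```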

11
Kernel matrix: consider the problem of finding a real-valued linear function g that best interpolates a given training set S = {(x_1, y_1), (x_2, y_2), ..., (x_l, y_l)} (least squares).

12
Dual form: the solution can be expressed as g(x) = Σ_{i=1}^{l} α_i k(x_i, x), where K is the kernel matrix and the coefficients α_i are obtained by solving a linear system in K.
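A sketch of this dual computation, using the regularized (ridge) variant (K + λI)α = y for numerical stability — the data, σ, and λ are made up for illustration, and the linear solver is a plain Gauss–Jordan elimination:

```python
import math

def rbf(x, z, sigma=1.0):
    return math.exp(-((x - z) ** 2) / (2 * sigma ** 2))

def solve(A, b):
    """Solve A a = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b_ for a, b_ in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

xs = [0.0, 1.0, 2.0, 3.0]       # illustrative training inputs
ys = [0.0, 0.8, 0.9, 0.1]       # illustrative training targets
lam = 1e-3                       # small ridge regularizer

K = [[rbf(a, b) for b in xs] for a in xs]
A = [[K[i][j] + (lam if i == j else 0.0) for j in range(len(xs))]
     for i in range(len(xs))]
alpha = solve(A, ys)             # dual coefficients

def g(x):
    """Dual-form predictor: g(x) = sum_i alpha_i k(x_i, x)."""
    return sum(a * rbf(xi, x) for a, xi in zip(alpha, xs))

# With small lambda, g nearly interpolates the training targets.
assert all(abs(g(x) - y) < 0.05 for x, y in zip(xs, ys))
```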

16
[Example images: cat, dinosaur, panda]

17
Feature correspondences can be used to construct an image comparison kernel that is appropriate for SVM-based classification, and often outperforms BOFs. Image representations that enforce some degree of spatial consistency usually perform better in image classification tasks than pure bags of features that discard all spatial information.

18
We need to design a good image similarity measure: [two example images] ≈ ?

19
Graph matching: sparse features, NN classifier, slow, uses pairwise information, lower performance.
Method in this paper: dense features, SVM classifier, fast enough, uses pairwise information, state-of-the-art performance.

20
An image I = a graph G = nodes + edges. A node n at position (x_n, y_n) represents a region of I; each region is described by an image feature vector F_n, e.g. SIFT.

21
Matching two images is realized by maximizing an energy function:
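The energy itself is not reproduced on the slide. The toy sketch below uses a generic matching energy of the usual form — unary appearance terms plus pairwise geometric-consistency terms — maximized by brute force over all 3! candidate matchings of two tiny graphs; all features, positions, and term definitions are made up for illustration:

```python
import itertools
import math

# Toy graphs: each node has a 1-D "appearance feature" and a 2-D position.
g1_feat = [0.1, 0.9, 0.5]
g1_pos = [(0, 0), (1, 0), (0, 1)]
g2_feat = [0.85, 0.15, 0.55]
g2_pos = [(1, 0), (0, 0), (0, 1)]

def unary(i, j):
    """Appearance similarity between node i of G1 and node j of G2."""
    return math.exp(-abs(g1_feat[i] - g2_feat[j]))

def pairwise(i, k, j, l):
    """Geometric consistency: matched edges should have similar lengths."""
    d1 = math.dist(g1_pos[i], g1_pos[k])
    d2 = math.dist(g2_pos[j], g2_pos[l])
    return math.exp(-abs(d1 - d2))

def energy(m):
    """Energy of matching m, where m[i] is the G2 node assigned to G1 node i."""
    e = sum(unary(i, m[i]) for i in range(3))
    e += sum(pairwise(i, k, m[i], m[k])
             for i in range(3) for k in range(i + 1, 3))
    return e

best = max(itertools.permutations(range(3)), key=energy)
# best matches nodes by appearance (0.1~0.15, 0.9~0.85, 0.5~0.55)
# while keeping edge lengths consistent, i.e. 0 -> 1, 1 -> 0, 2 -> 2.
```

Brute force is only viable for toy graphs; real graph-matching methods, including the one in this paper, rely on approximate optimization of such energies.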
