
1 Semi-supervised protein classification using cluster kernels Jason Weston, Christina Leslie, Eugene Ie, Dengyong Zhou, Andre Elisseeff and William Stafford Noble

2 Why Semi-supervised & How
Unlabeled data may contain information (structure/distribution information) that is helpful for improving classification performance.
Labeled instances are important hints for clustering.
Cluster assumption: two points that can be connected by a high-density path (i.e. in the same cluster) are likely to have the same label, and the decision boundary should lie in a low-density region.

3 Why Semi-supervised & How (Cont.)
Recent work in semi-supervised learning has focused on changing the representation given to a classifier by taking into account the structure described by the unlabeled data.
An example: profile hidden Markov models (HMMs) can be trained iteratively using both positively labeled and unlabeled examples by pulling in close homologs and adding them to the positive set.

4 Main Results of the Paper
Develop simple and scalable cluster kernel techniques for incorporating unlabeled data into the representation of protein sequences.
We show that the methods greatly improve classification performance.
We achieve performance equal or superior to previously presented cluster kernel methods, while achieving far greater computational efficiency.

5 Neighborhood Kernel & Bagged Kernel
The neighborhood kernel uses averaging over a neighborhood of sequences defined by a local sequence similarity measure.
The bagged kernel uses bagged clustering of the full sequence dataset to modify the base kernel.
The main idea is to change the distance metric so that the relative distance between two points is smaller if the points are in the same cluster.

6 Neighborhood Kernel & Bagged Kernel (Cont.)
Both kernels make use of two complementary (dis)similarity measures:
A base kernel representation that implicitly exploits features useful for discrimination between classes (the mismatch string kernel).
A distance measure that describes how close examples are to each other (BLAST or PSI-BLAST E-values).
We note that string kernels have proved to be powerful representations for SVM classification, but they do not give sensitive pairwise similarity scores like the BLAST family of methods.

7 Neighborhood Kernel (1)
Use a standard sequence similarity measure such as BLAST or PSI-BLAST to define a neighborhood for each input sequence.
The neighborhood Nbd(x) of sequence x is the set of sequences x' with similarity score to x below a fixed E-value threshold, together with x itself.
Given a fixed original feature representation, x is represented by the average of the feature vectors for members of its neighborhood.
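Written out explicitly (a reconstruction from the description above; Phi_orig and K_orig denote the base feature map and base kernel):

```latex
\Phi_{\mathrm{nbd}}(x) = \frac{1}{|\mathrm{Nbd}(x)|} \sum_{x' \in \mathrm{Nbd}(x)} \Phi_{\mathrm{orig}}(x')
\qquad
K_{\mathrm{nbd}}(x, y) = \frac{1}{|\mathrm{Nbd}(x)|\,|\mathrm{Nbd}(y)|}
  \sum_{x' \in \mathrm{Nbd}(x)} \sum_{y' \in \mathrm{Nbd}(y)} K_{\mathrm{orig}}(x', y')
```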

8 Neighborhood Kernel (2)
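A minimal sketch of how such a neighborhood kernel could be computed from a precomputed base kernel matrix. This is an illustrative reconstruction, not the authors' code; the neighborhoods are assumed to be supplied as index lists (e.g. PSI-BLAST hits below the E-value threshold).

```python
import numpy as np

def neighborhood_kernel(K, neighborhoods):
    """Average a precomputed base kernel over sequence neighborhoods.

    K             : (n, n) base kernel matrix (e.g. mismatch kernel values).
    neighborhoods : list of index lists; neighborhoods[i] holds i itself plus
                    all sequences whose E-value to sequence i falls below the
                    chosen threshold.
    """
    n = K.shape[0]
    K_nbd = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            # K_nbd(i, j) = mean of K[a, b] over a in Nbd(i), b in Nbd(j),
            # i.e. the inner product of the averaged feature vectors.
            K_nbd[i, j] = K[np.ix_(neighborhoods[i], neighborhoods[j])].mean()
    return K_nbd
```

In practice the resulting kernel would typically also be normalized (e.g. K(x, y) / sqrt(K(x, x) K(y, y))); that step is omitted from the sketch.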

9 Bagged Mismatch Kernel (using k-means)

10 Step (2) defines a valid kernel because it is an inner product in an nk-dimensional space of cluster-membership indicators Φ(x_i). Products of kernels, as in Step (3), are also valid kernels. The intuition behind the approach is that the original kernel is rescaled by the 'probability' that two points are in the same cluster, hence encoding the cluster assumption.
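A hedged sketch of the bagging idea just described: run k-means several times over the labeled plus unlabeled data, estimate for each pair of points the fraction of runs in which they land in the same cluster, and rescale the base kernel by that fraction. The function name, the vector representation X used for clustering, and the use of scikit-learn's KMeans are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def bagged_kernel(K, X, k=100, n_runs=10, seed=0):
    """Rescale a base kernel by a bagged k-means co-membership estimate.

    K : (n, n) base kernel matrix (e.g. mismatch kernel).
    X : (n, d) vector representation of the labeled + unlabeled sequences,
        used here only for the clustering step.
    """
    n = K.shape[0]
    same_cluster = np.zeros((n, n))
    for r in range(n_runs):
        # One bagged run of k-means over all (labeled + unlabeled) points.
        labels = KMeans(n_clusters=k, n_init=10, random_state=seed + r).fit_predict(X)
        # Count every pair assigned to the same cluster in this run.
        same_cluster += (labels[:, None] == labels[None, :]).astype(float)
    p_same = same_cluster / n_runs   # fraction of runs sharing a cluster
    return K * p_same                # rescale the base kernel by this 'probability'
```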

11 Revisiting the Key Idea Changing the representation given to a classifier by taking into account the structure described by the unlabeled data

12 Experiments
We use the same 54 target families and the same test and training set splits as in the remote homology experiments from Liao and Noble (2002).
Settings: semi-supervised, transductive, and large-scale experiments.

13 Semi-supervised setting A signed rank test shows that the neighborhood mismatch kernel yields significant improvement over adding homologs (P-value = 3.9 × 10^-5).

14 Transductive Setting A signed rank test (with adjusted P-value cutoff of 0.05) finds no significant difference between the neighborhood kernel, the bagged kernel (k = 100) and the random walk kernel in this transductive setting.

15 Transductive Setting (Cont.)

16 Large-scale experiments Using 101,602 Swiss-Prot protein sequences as additional unlabeled data.

17 Large-scale experiments (Cont.)

18 Although the cluster kernel method does not outperform the profile kernel on average, results of the two methods are similar (20 wins, 25 losses, 9 ties for the cluster kernel); a signed rank test with a P-value threshold of 0.05 finds no significant difference in performance between the two methods. In practice, in the large-scale experiments reported, the profile kernel takes less time to compute than the neighborhood kernel, since we do not limit the number of sequences represented in the neighborhoods and hence the neighborhood representation is much less compact than the profile kernel representation.

19 Large-scale experiments (Cont.) The profile kernel might perform best when the training profiles themselves are not too distant from the test sequences (as measured, e.g., by PSI-BLAST E-value), whereas the neighborhood kernel might perform better in the more difficult situation where the test sequences are poorly described by every positive training profile.

20 Conclusion We described two kernels, the neighborhood mismatch kernel and the bagged mismatch kernel, which combine both approaches and yield state-of-the-art performance in protein classification. Revisiting the key idea: changing the representation given to a classifier by taking into account the structure described by the unlabeled data.

21 Thanks

