Using Real-Valued Meta Classifiers to Integrate Binding Site Predictions Yi Sun, Mark Robinson, Rod Adams, Paul Kaye, Alistair G. Rust, Neil Davey University of Hertfordshire, 2005
Outline Problem Domain Description of the Datasets Experimental Techniques Experiments Summary
Problem Domain (1) One of the most exciting and active areas of current research in biology is understanding the regulation of gene expression. Many of the mechanisms of gene regulation are known to act directly at the transcriptional, or sequence, level.
Problem Domain (2) Transcription factors bind to a number of different but related sequences, thereby effecting changes in the expression of genes. Despite recent advances, the current state-of-the-art algorithms for transcription factor binding site prediction remain severely limited in accuracy.
Description of the Datasets (1) The original dataset has 68,910 possible binding sites, with a prediction result for each of 12 algorithms: –Single-sequence algorithms (7); –Coregulatory algorithms (3); –Comparative algorithm (1); –Evolutionary algorithm (1). It contains two classes, labelled as either binding sites or non-binding sites, with about 93% being non-binding sites.
Description of the Datasets (2) Fig. 1. Organisation of dataset, showing alignment of algorithmic predictions, known information and original DNA sequence data.
Description of the Datasets (3) Windowing Fig. 2. The window size is set to 7 in this study. The label of the middle of 7 consecutive prediction sites becomes the label of the new windowed input. The length of each windowed input is now 12 × 7 = 84.
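The windowing step can be sketched as follows (a minimal illustration assuming per-position prediction vectors of length 12; the function and variable names are ours, not from the study):

```python
def make_windows(preds, labels, window=7):
    """Build windowed inputs from per-position prediction vectors.

    preds: list of per-position prediction vectors (length 12 each);
    labels: per-position class labels.
    Returns (inputs, targets), where each input concatenates the 12
    predictions for a position and its neighbours (12 * 7 = 84 values),
    and the target is the label of the middle position.
    """
    half = window // 2
    inputs, targets = [], []
    for i in range(half, len(preds) - half):
        flat = [v for pos in preds[i - half:i + half + 1] for v in pos]
        inputs.append(flat)        # 84-dimensional windowed input
        targets.append(labels[i])  # label of the centre position
    return inputs, targets
```

Positions too close to either end of a sequence have no full window and are skipped in this sketch; how the study handled boundaries is not stated here.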
Sampling Techniques for Imbalanced Dataset Learning Imbalanced Data (93% being Non-binding Sites) For under-sampling, we randomly selected a subset of data points from the majority class. For over-sampling, we applied the synthetic minority over-sampling technique (SMOTE) proposed by N. V. Chawla et al.: –For each pattern in the minority class, we search for its K nearest neighbours in the minority class using Euclidean distance. –For continuous features, the difference of each feature between the pattern and its nearest neighbour is taken, multiplied by a random number between 0 and 1, and added to the corresponding feature of the pattern. –For binary features, majority voting over each element of the K nearest neighbours in the feature vector space is employed.
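The SMOTE generation step described above can be sketched as follows (an illustrative implementation of that description, not the authors' code; names and the tie-breaking choice for binary votes are our assumptions):

```python
import random

def smote_sample(pattern, neighbours, binary_mask):
    """Generate one synthetic minority-class example.

    pattern: feature vector of a minority-class pattern;
    neighbours: its K nearest minority-class neighbours;
    binary_mask[j] is True for binary features, False for continuous ones.
    """
    nn = random.choice(neighbours)  # one neighbour for interpolation
    synthetic = []
    for j, x in enumerate(pattern):
        if binary_mask[j]:
            # binary feature: majority vote over the K neighbours
            # (ties broken towards 1 here, an assumption)
            votes = sum(n[j] for n in neighbours)
            synthetic.append(1 if votes * 2 >= len(neighbours) else 0)
        else:
            # continuous feature: move a random fraction of the way
            # from the pattern towards the chosen neighbour
            synthetic.append(x + random.random() * (nn[j] - x))
    return synthetic
```

Each call produces one synthetic pattern; the procedure is repeated until the minority class reaches the desired size.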
Experimental Techniques The Classification Techniques: Majority Voting (MV); Weighted Majority Voting (WMV); Single-Layer Networks (SLN); Rule Sets (C4.5-Rules); Support Vector Machines (SVM).
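As an illustration of the simplest of these combiners, a sketch of weighted majority voting over the 12 algorithm outputs (the weighting scheme is an assumption here, e.g. per-algorithm validation accuracy; with equal weights this reduces to plain majority voting):

```python
def weighted_majority_vote(predictions, weights, threshold=0.5):
    """Combine binary predictions from the base algorithms.

    predictions: 0/1 outputs of the base algorithms for one site;
    weights: one non-negative weight per algorithm (assumed, e.g.
    validation accuracy). Returns 1 (binding) if the weighted vote
    share reaches the threshold, else 0.
    """
    score = sum(w * p for w, p in zip(weights, predictions))
    return 1 if score >= threshold * sum(weights) else 0
```

Plain MV is recovered by passing equal weights, e.g. `weights=[1] * 12`.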
Performance Metrics A confusion matrix:

                  Predicted negative   Predicted positive
Actual negative          TN                   FP
Actual positive          FN                   TP
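From the four confusion-matrix counts, the metrics reported in the following tables can be computed with the standard definitions (a straightforward sketch):

```python
def metrics(tp, fp, fn, tn):
    """Common performance metrics (%) from confusion-matrix counts."""
    recall = tp / (tp + fn)            # true positive rate
    precision = tp / (tp + fp)
    f_score = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    fp_rate = fp / (fp + tn)           # false positive rate
    return {name: round(100 * value, 2) for name, value in
            {"Recall": recall, "Precision": precision,
             "F-Score": f_score, "Accuracy": accuracy,
             "FP-Rate": fp_rate}.items()}
```

Note that with roughly 93% non-binding sites, accuracy alone is misleading (always predicting "non-binding" scores about 93%), which is why recall, precision, and F-score are reported as well.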
Experiments (1) Consistent Dataset

Table 1: Common performance metrics (%) tested on the same consistent possible binding sites with single and windowed inputs separately.

Inputs     Classifier   Recall   Precision   F-Score   Accuracy   FP-Rate
           Best Alg.    40.95    17.46       24.48     82.22      14.66
single     MV           43.10    13.14       20.14     75.95      21.57
single     WMV          41.19    17.35       24.42     82.05      14.86
single     SLN          28.81    22.16       25.05     87.86       7.66
single     SVM          32.14    24.46       27.78     88.23       7.52
single     C4.5-Rules   29.29    23.08       25.81     88.15       7.39
windowed   SLN          34.29    18.87       24.34     85.00      11.16
windowed   SVM          38.81    20.25       26.61     84.93      11.58
windowed   C4.5-Rules   23.57    18.64       20.82     87.38       7.79
Experiments (2) Full Dataset

Table 2: Common performance metrics (%) tested on the full test dataset with single and windowed inputs separately.

Inputs     Classifier   Recall   Precision   F-Score   Accuracy   FP-Rate
           Best Alg.    36.36    18.40       24.44     85.97      10.73
single     MV           35.73    15.12       21.25     83.48      13.35
single     WMV          34.75    20.04       25.42     87.28       9.23
single     SLN          25.19    25.09       25.14     90.64       5.01
single     SVM          27.91    26.97       27.43     90.79       5.03
single     C4.5-Rules   23.03    23.14       23.08     90.43       5.09
windowed   SLN          31.82    22.66       26.47     88.97       7.23
windowed   SVM          36.78    23.50       28.67     88.58       7.97
windowed   C4.5-Rules   22.26    19.70       20.90     89.49       6.04
Fig. 3. ROC graph: five classifiers applied to the consistent test set with single inputs. Experiments (3)
Fig. 4. ROC graph: three classifiers applied to the full test set with windowed inputs. Experiments (4)
Summary By integrating the 12 algorithms, we considerably improve binding site prediction, with the SVM giving the best results. Employing a window of consecutive results in the input vector contextualises each prediction with its neighbours, allowing the classifier to exploit the local distribution of the data and further improve binding site prediction.