ModFOLD v 1.1 (Server)

- Combines 6 QA scores using a neural network (4 scores in CASP7)
- Considers models individually
- Trained using TM-scores and fold recognition models
- Outputs a single score for each model (QMODE1)

Network architecture: Inputs -> Hidden Layer -> Output (TM-score)
Inputs: SS (new), SS-weighted (new), ModSSEA, MODCHECK, ProQ-MX, ProQ-LG
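As a rough illustration of the score-combination step, the sketch below feeds six per-model QA scores through a single-hidden-layer feedforward network to produce one quality estimate. The weights are placeholder values for illustration, not the trained ModFOLD network, and the input list merely mirrors the six scores named above.

```python
import math

def combine_scores_nn(scores, w_hidden, b_hidden, w_out, b_out):
    """Combine QA scores with one hidden layer of sigmoid units.

    scores   : list of 6 floats standing in for SS, SS-weighted,
               ModSSEA, MODCHECK, ProQ-MX, ProQ-LG (placeholder inputs)
    w_hidden : one weight vector per hidden unit
    """
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    hidden = [sigmoid(sum(w * s for w, s in zip(ws, scores)) + b)
              for ws, b in zip(w_hidden, b_hidden)]
    # A sigmoid output keeps the predicted quality in (0, 1),
    # matching a TM-score-like target range.
    return sigmoid(sum(w * h for w, h in zip(w_out, hidden)) + b_out)

# Toy network: 6 inputs, 2 hidden units, 1 output (weights are made up).
w_hidden = [[0.5] * 6, [-0.3] * 6]
b_hidden = [0.0, 0.1]
w_out, b_out = [1.0, -1.0], 0.0

quality = combine_scores_nn([0.6, 0.7, 0.5, 0.4, 0.8, 0.55],
                            w_hidden, b_hidden, w_out, b_out)
print(round(quality, 3))
```

The single output score is what QMODE1 reports for each model.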
ModFOLDclust (Server)

- Simple clustering method - unsupervised
- Compares all server models against one another
- Outputs an overall score plus per-residue accuracy (QMODE2)

1. Overall/global model quality - mean TM-score between models (similar to 3D-Jury):

   S = (1 / (N - 1)) * Σ_{m ∈ M} T_m

   where S = quality score for the model, N - 1 = number of pairwise structural alignments carried out for the model, M = set of alignments, and T_m = TM-score for an alignment of models.

2. Per-residue accuracy - mean S-score, rearranged to give a distance in Angstroms:

   S_i = 1 / (1 + (d_i / d_0)^2)

   S_r = d_0 * sqrt( (N - 1) / Σ_{a ∈ A} S_ia  -  1 )

   where S_i = S-score for residue i, d_i = distance between aligned residues according to the TM-score superposition, d_0 = distance threshold (3.9 Å), S_r = predicted residue accuracy for the model, N = number of models, A = set of alignments, and S_ia = S_i score for a residue in structural alignment a.
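To make the two formulas concrete, the sketch below (a minimal reading of the equations above, not the server code) converts superposition distances into S-scores with d_0 = 3.9 Å, averages them over the alignments, and rearranges the mean back into a predicted distance in Angstroms; the global score is simply the mean TM-score over the pairwise alignments.

```python
import math

D0 = 3.9  # distance threshold in Angstroms, as on the slide

def s_score(d_i, d0=D0):
    """S-score for residue i given its superposition distance d_i."""
    return 1.0 / (1.0 + (d_i / d0) ** 2)

def predicted_residue_accuracy(distances, d0=D0):
    """Mean S-score over the alignments, rearranged back to Angstroms.

    distances: one aligned-residue distance per pairwise alignment.
    """
    mean_s = sum(s_score(d) for d in distances) / len(distances)
    return d0 * math.sqrt(1.0 / mean_s - 1.0)

def global_quality(tm_scores):
    """Overall model quality: mean TM-score over the N-1 alignments."""
    return sum(tm_scores) / len(tm_scores)

# With a single alignment the rearrangement recovers the input distance.
print(round(predicted_residue_accuracy([2.0]), 3))   # -> 2.0
print(round(global_quality([0.7, 0.8, 0.75]), 3))    # -> 0.75
```

Averaging in S-score space rather than raw distance space down-weights large outlier distances, so one badly placed residue in a single alignment does not dominate the prediction.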
ModFOLD v 2.0 (Manual)

- Combines ModFOLD scores, ModFOLDclust score and initial server ranking using a NN
- Considers models individually (sort of)
- Compares each model against 30 nFOLD3 server models to get a ModFOLDclust score (server version)
- Per-residue accuracy from the ModFOLDclust method (server version)
[Figure: scatter plots of predicted quality versus observed quality (GDT-TS) - ModFOLDclust on all TS1 models; ModFOLD 2.0 on all TS1 models; ModFOLDclust on targets T0498 and T0499]
Results continued…

Correlation of output with GDT-TS:

Method         Kendall (tau)   Spearman (rho)   Pearson (r)
ModFOLDclust   0.76            0.91             0.92
ModFOLD 2.0    0.74            0.90             0.91
ModFOLD 1.1    0.52            0.71

Wilcoxon signed rank sum tests (H0: GDTx = GDTy, H1: GDTx > GDTy):

                 ModFOLDclust   Zhang-Server   ModFOLD 2.0   pro-sp3-TASSER
ModFOLDclust     1.000          0.181          0.147         0.000
Zhang-Server     0.820          1.000          0.162         0.000
ModFOLD 2.0      0.854          0.839          1.000         0.000
pro-sp3-TASSER                                               1.000

Conclusions

- ModFOLD 1.1: increase in average per-target correlation since CASP7? Decrease in global correlation? But different data sets.
- ModFOLD 2.0: fewer outliers, but no significant difference from ModFOLDclust. Benchmarking on the CASP7 set showed an increase in Kendall's tau (not significant - a training artefact?).
- ModFOLDclust: the simplest and most effective method, but CPU intensive. Still room for improvement - it doesn't consistently recognise the best model. Marginally better than Zhang-Server in terms of cumulative GDT-TS, but the difference is not significant.
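For reference, the three correlation measures in the table above can be computed as follows. This is a plain-Python sketch on toy predicted-vs-observed values (no tie handling in the rank-based measures), not the evaluation script actually used for these results.

```python
def pearson(x, y):
    """Pearson's r: linear correlation between paired observations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(v):
    """1-based rank positions; assumes no tied values."""
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the ranks."""
    return pearson(ranks(x), ranks(y))

def kendall(x, y):
    """Kendall's tau: (concordant - discordant) / total pairs."""
    n, s = len(x), 0
    for i in range(n):
        for j in range(i + 1, n):
            s += 1 if (x[i] - x[j]) * (y[i] - y[j]) > 0 else -1
    return s / (n * (n - 1) / 2)

# Toy predicted vs observed (GDT-TS-like) values, illustration only.
pred = [0.30, 0.55, 0.40, 0.80, 0.65]
obs  = [0.28, 0.45, 0.50, 0.90, 0.60]
print(round(kendall(pred, obs), 2), round(spearman(pred, obs), 2))  # -> 0.8 0.9
```

Note that tau is systematically smaller in magnitude than rho for the same data, which is consistent with the pattern in the table (e.g. 0.76 vs 0.91 for ModFOLDclust).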
The ModFOLD server

Method         Relative speed   Upload options        Output mode
ModFOLD 1.1    Fast             Single and multiple   QMODE1
ModFOLDclust   Slow             Multiple only         QMODE2
ModFOLD 2.0    Medium           Single and multiple   QMODE2

http://www.reading.ac.uk/bioinf/ModFOLD/
firstname.lastname@example.org

References:
McGuffin, L. J. (2008) The ModFOLD server for the quality assessment of protein structural models. Bioinformatics, 24, 586-587.
McGuffin, L. J. (2007) Benchmarking consensus model quality assessment for protein fold recognition. BMC Bioinformatics, 8, 345.