
Informed by Informatics? Dr John Mitchell Unilever Centre for Molecular Science Informatics Department of Chemistry University of Cambridge, U.K.




1 Informed by Informatics? Dr John Mitchell Unilever Centre for Molecular Science Informatics Department of Chemistry University of Cambridge, U.K.

2 Modelling in Chemistry A map of methods along two axes – PHYSICS-BASED vs EMPIRICAL and ATOMISTIC vs NON-ATOMISTIC: Density Functional Theory, ab initio, Car-Parrinello, AM1, PM3 etc., Molecular Dynamics, Monte Carlo, Docking, Fluid Dynamics, DPD, CoMFA, 2-D QSAR/QSPR, Machine Learning.

3 The same map of methods, now labelled by throughput (HIGH THROUGHPUT vs LOW THROUGHPUT): Density Functional Theory, ab initio, Car-Parrinello, AM1, PM3 etc., Molecular Dynamics, Monte Carlo, Docking, Fluid Dynamics, DPD, CoMFA, 2-D QSAR/QSPR, Machine Learning.

4 The same map, divided between INFORMATICS and THEORETICAL CHEMISTRY – with NO FIRM BOUNDARIES! Density Functional Theory, ab initio, Car-Parrinello, AM1, PM3 etc., Molecular Dynamics, Monte Carlo, Docking, Fluid Dynamics, DPD, CoMFA, 2-D QSAR/QSPR, Machine Learning.

5 The same map of methods, shown once more as a transition slide.

6 Informatics and Empirical Models In general, informatics methods represent phenomena mathematically, but not in a physics-based way. The input–output model is based on an empirically parameterised equation or a more elaborate mathematical model. Such methods do not attempt to simulate reality, and are usually high throughput.

7 QSPR Quantitative Structure → Property Relationship: a physical property is related to more than one other variable. Hansch et al. developed QSPR in the 1960s, building on Hammett (1930s); property–property relationships date from the 1860s. General form (for non-linear relationships): y = f (descriptors)

8 QSPR Y = f(X1, X2, ..., XN). Optimisation of Y = f(X1, X2, ..., XN) is called regression. The model is optimised on N “training molecules” and then tested on M “test” molecules.

9 QSPR The quality of the model is judged by three parameters: RMSE, r² and bias.

10 QSPR Different methods for carrying out regression: LINEAR - Multi-linear Regression (MLR), Partial Least Squares (PLS), Principal Component Regression (PCR), etc. NON-LINEAR - Random Forest, Support Vector Machines (SVM), Artificial Neural Networks (ANN), etc.

11 QSPR However, this does not guarantee a good predictive model….

12 QSPR Problems with experimental error: a QSPR is only as accurate as the data it is trained upon. Therefore, we need accurate experimental data.

13 QSPR Problems with “chemical space”: the “sample” molecules must be representative of the “population”. Prediction results will be most accurate for molecules similar to the training set. Global or local models?

14 1. Solubility Dave Palmer Pfizer Institute for Pharmaceutical Materials Science http://www.msm.cam.ac.uk/pfizer D.S. Palmer, et al., J. Chem. Inf. Model., 47, 150-158 (2007) D.S. Palmer, et al., Molecular Pharmaceutics, ASAP article (2008)

15 Solubility is an important issue in drug discovery and a major source of attrition. This is expensive for the industry. A good model for predicting the solubility of druglike molecules would be very valuable.

16 Datasets
Phase 1 – Literature Data
● Compiled from the Huuskonen dataset and the AquaSol database – pharmaceutically relevant molecules
● All molecules solid at room temperature
● n = 1000 molecules
Phase 2 – Our Own Experimental Data
● Measured by Toni Llinàs using the CheqSol machine
● Pharmaceutically relevant molecules
● n = 135 molecules
● Aqueous solubility – the thermodynamic solubility in unbuffered water (at 25 ºC)

17 Diversity-Conserving Partitioning
● MACCS Structural Key fingerprints
● Tanimoto coefficient
● MaxMin algorithm
Full dataset: n = 1000 molecules → Training: n = 670 molecules; Test: n = 330 molecules
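The MaxMin idea on this slide – grow the picked set by always adding the molecule whose nearest already-picked neighbour is most distant (distance = 1 − Tanimoto) – can be sketched as below. The fingerprints here are toy bit-sets, not real MACCS keys, and the greedy loop is a simplified stand-in for production pickers:

```python
# Sketch of diversity-conserving partitioning: Tanimoto similarity on
# fingerprint bit-sets plus a greedy MaxMin pick of a diverse subset.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 1.0

def maxmin_pick(fps, n_pick, seed_idx=0):
    """Greedy MaxMin: repeatedly add the molecule whose nearest picked
    neighbour is most distant (distance = 1 - Tanimoto)."""
    picked = [seed_idx]
    while len(picked) < n_pick:
        best, best_dist = None, -1.0
        for i in range(len(fps)):
            if i in picked:
                continue
            d = min(1.0 - tanimoto(fps[i], fps[j]) for j in picked)
            if d > best_dist:
                best, best_dist = i, d
        picked.append(best)
    return picked

fps = [{1, 2, 3}, {1, 2, 3, 4}, {7, 8, 9}, {1, 8, 9}]  # toy fingerprints
train = maxmin_pick(fps, 2)
test = [i for i in range(len(fps)) if i not in train]
print(train, test)
```

Molecule 2 shares no bits with molecule 0, so it is picked second; the split therefore conserves diversity in the training set.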

18 Diversity Analysis

19 Structures & Descriptors ● 3D structures from Concord ● Minimised with MMFF94 ● MOE descriptors (2D/3D) ● Separate analysis of 2D and 3D descriptors ● QuaSAR Contingency Module (MOE) ● 52 descriptors selected

20 Machine Learning Method Random Forest

21 Random Forest ● Introduced by Breiman and Cutler (2001) ● A development of Decision Trees (Recursive Partitioning): ● The dataset is partitioned into consecutively smaller subsets ● Each partition is based upon the value of one descriptor ● The descriptor used at each split is selected so as to optimise splitting ● A bootstrap sample of N objects is chosen from the N available objects, with replacement
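The bootstrap step in the last bullet is simple to show directly: sample N objects from the N available with replacement, and the objects never drawn become the "out-of-bag" set used later for validation. A minimal sketch:

```python
# Sketch of the bootstrap behind each tree: sample N objects from the N
# available, with replacement; objects never drawn form the "out-of-bag" set.

import random

def bootstrap_sample(n, rng):
    in_bag = [rng.randrange(n) for _ in range(n)]        # N draws, repeats allowed
    oob = sorted(set(range(n)) - set(in_bag))            # never-drawn objects
    return in_bag, oob

rng = random.Random(0)
in_bag, oob = bootstrap_sample(10, rng)
print(len(in_bag), len(oob))  # in-bag is always N; on average ~37% of objects end up OOB
```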

22 Random Forest ● Random Forest is a collection of decision or regression trees grown with the CART algorithm. ● Standard parameters: ● 500 decision trees ● No pruning back; minimum node size = 5 ● “mtry” descriptors (the square root of the total number of descriptors) tried at each split. Important features: ● Incorporates descriptor selection ● Incorporates “out-of-bag” validation – using those molecules not in the bootstrap samples

23 Random Forest for Solubility Prediction A forest of regression trees: the dataset is partitioned into consecutively smaller subsets (of similar solubility); each partition is based upon the value of one descriptor; the descriptor used at each split is selected so as to minimise the MSE. Leo Breiman, "Random Forests", Machine Learning 45, 5-32 (2001).
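The split rule on this slide – pick the cut that minimises the squared error of the two resulting subsets – is the core of each regression tree. A sketch for a single descriptor, with invented logS-like values:

```python
# Sketch of one regression-tree split: for a single descriptor, choose the
# threshold minimising SSE(left) + SSE(right). Data are illustrative only.

def sse(values):
    """Sum of squared deviations from the subset mean."""
    if not values:
        return 0.0
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values)

def best_split(xs, ys):
    """Return (threshold, error) minimising SSE(left) + SSE(right)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    best_t, best_err = None, float("inf")
    for k in range(1, len(order)):
        t = (xs[order[k - 1]] + xs[order[k]]) / 2   # midpoint between sorted values
        left = [ys[i] for i in order[:k]]
        right = [ys[i] for i in order[k:]]
        err = sse(left) + sse(right)
        if err < best_err:
            best_t, best_err = t, err
    return best_t, best_err

x = [0.1, 0.2, 0.8, 0.9]       # one descriptor
y = [-5.0, -5.2, -1.1, -0.9]   # property (e.g. logS), invented
print(best_split(x, y))
```

A full forest repeats this over bootstrap samples and random descriptor subsets ("mtry"), then averages the trees' predictions.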

24 Random Forest for Predicting Solubility A Forest of Regression Trees Each tree grown until terminal nodes contain specified number of molecules No need to prune back High predictive accuracy Includes method for descriptor selection No training problems – largely immune from overfitting.

25 Random Forest: Solubility Results
Test: RMSE = 0.69, r² = 0.89, bias = -0.04
Training: RMSE = 0.27, r² = 0.98, bias = 0.005
Out-of-bag: RMSE = 0.68, r² = 0.90, bias = 0.01
D.S. Palmer et al., J. Chem. Inf. Model., 47, 150-158 (2007)

26 Drug Discovery Today, 10(4), 289 (2005)

27 Can we use theoretical chemistry to calculate solubility via a thermodynamic cycle?

28 ΔGsub from lattice energy & an entropy term; ΔGhydr from a semi-empirical solvation model (i.e., different kinds of theoretical/computational methods)

29 … or this alternative cycle?

30 ΔGsub from lattice energy (DMAREL) plus entropy; ΔGsolv from SCRF using the B3LYP DFT functional; ΔGtr from ClogP (i.e., different kinds of theoretical/computational methods)

31 ● Nice idea, but it didn’t quite work – the errors were larger than for QSPR ● “Why not add a correction factor to account for the difference between the theoretical methods?” …

32 ● Within a week this had become a hybrid method, essentially a QSPR with the theoretical energies as descriptors

33 This regression equation gave r² = 0.77 and RMSE = 0.71

34 Solubility by TD Cycle: Conclusions ● We have a hybrid part-theoretical, part-empirical method. ● An interesting idea, but relatively low throughput as a crystal structure is needed. ● Slightly more accurate than pure QSPR for a druglike set. ● Instructive to compare with literature of theoretical solubility studies.

35 2. Bioactivity Florian Nigsch F. Nigsch, et al., J. Chem. Inf. Model., 48, 306-318 (2008)

36 Feature Space – Chemical Space A molecule is described by a feature vector m = (f1, f2, …, fn); feature spaces are of high dimensionality. (Figure: example protein targets COX2, DHFR, CDK1 and CDK2 in chemical space.)

37 Properties of Drugs
● High affinity to protein target
● Soluble
● Permeable
● Absorbable
● High bioavailability
● Specific rate of metabolism
● Renal/hepatic clearance?
● Volume of distribution?
● Low toxicity
● Plasma protein binding?
● Blood–brain-barrier penetration?
● Dosage (once/twice daily?)
● Synthetic accessibility
● Formulation (important in development)

38 Multiobjective Optimisation Bioactivity Synthetic accessibility Permeability Toxicity Metabolism Solubility Huge number of candidates …

39 Multiobjective Optimisation Bioactivity – synthetic accessibility – permeability – toxicity – metabolism – solubility: a huge number of candidates, most of which are useless!

40 Spam Unsolicited (commercial) email Approx. 90% of all email traffic is spam Where are the legitimate messages? Filtering

41 Analogy to Drug Discovery Huge number of possible candidates Virtual screening to help in selection process

42 Winnow Algorithm ● Invented in the late 1980s by Nick Littlestone to learn Boolean functions ● Name from the verb “to winnow” – high-dimensional input data ● Natural Language Processing (NLP), text classification, bioinformatics ● Different varieties (regularised, Sparse Network Of Winnow – SNOW, …) ● Error-driven, linear-threshold, online algorithm
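The "error-driven, linear threshold, online" description can be made concrete with a minimal Winnow sketch. The promotion factor alpha = 2 and threshold theta = n are the textbook defaults, not necessarily the parameters used in the talk, and the toy data are invented:

```python
# A minimal sketch of Winnow: an error-driven, linear-threshold, online
# learner over Boolean features. Weights are promoted/demoted
# multiplicatively on mistakes (alpha = 2, threshold = n: textbook defaults).

def winnow_train(examples, n_features, alpha=2.0):
    w = [1.0] * n_features
    theta = float(n_features)                 # linear threshold
    for features, label in examples:          # features: set of active indices
        score = sum(w[i] for i in features)
        pred = 1 if score >= theta else 0
        if pred == 0 and label == 1:          # false negative: promote
            for i in features:
                w[i] *= alpha
        elif pred == 1 and label == 0:        # false positive: demote
            for i in features:
                w[i] /= alpha
    return w, theta

# Toy target concept: feature 0 OR feature 1 (features 2..5 are noise)
data = [({0, 2}, 1), ({3, 4}, 0), ({1, 5}, 1), ({2, 3}, 0), ({0}, 1)] * 5
w, theta = winnow_train(data, 6)
score = lambda feats: sum(w[i] for i in feats)
print(score({0}) >= theta, score({3, 4}) >= theta)
```

Because updates are multiplicative, Winnow copes well with the high-dimensional, mostly-irrelevant feature sets mentioned above – only the few relevant weights grow.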

43 Machine Learning Methods Winnow (“Molecular Spam Filter”)

44 Training Example

45 Features of Molecules Based on circular fingerprints

46 Combinations of Features Combinations of molecular features to account for synergies.

47 Pairing Molecular Features Gives Bigrams

48 Orthogonal Sparse Bigrams Technique used in text classification/spam filtering. Sliding-window process. Sparse – not all combinations; orthogonal – non-redundancy.
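The sliding-window pairing can be sketched as follows. The exact encoding used in the talk is not shown on the slide; the (token, skip-distance, token) triples below are one common OSB variant and should be read as an assumption:

```python
# Sketch of orthogonal sparse bigrams over a sequence of molecular features:
# slide a window of width `window` and pair the newest feature with each
# earlier position, recording the gap so the pairs are non-redundant
# ("orthogonal") while skipping most combinations ("sparse").
# The (token, skip, token) encoding is an assumed variant, not the talk's.

def osb(tokens, window=3):
    bigrams = []
    for i in range(1, len(tokens)):
        for gap in range(1, window):
            if i - gap < 0:
                break
            bigrams.append((tokens[i - gap], gap - 1, tokens[i]))
    return bigrams

print(osb(["A", "B", "C", "D"], window=3))
```

With window 3, "A" pairs with "B" (gap 0) and with "C" (gap 1), but never with "D" – hence "sparse, not all combinations".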

49 Workflow

50 Protein Target Prediction Which proteins does a given molecule bind to? ● Virtual screening ● Multiple-endpoint drugs – polypharmacology ● New targets for existing drugs ● Prediction of adverse drug reactions (ADR) – computational toxicology

51 Predicted Protein Targets Selection of ~230 classes from the MDL Drug Data Report ~90,000 molecules 15 independent 50%/50% splits into training/test set

52 Predicted Protein Targets Cumulative probability of correct prediction within the three top-ranking predictions: 82.1% (±0.5%)

53 3. Melting Points Florian Nigsch F. Nigsch, et al., J. Chem. Inf. Model., 46, 2412-2422 (2006)

54 Interest in Modelling ● Interest in modelling properties of molecules – solubility, toxicity, drug–drug interactions, … ● Virtual screening has become a necessity in terms of time and cost efficiency ● ADMET (Absorption, Distribution, Metabolism, Excretion and Toxicity) integration into the drug development process – many fewer late-stage failures; continuous refinement with new experimental data

55 Melting Points are Important Melting point facilitates prediction of solubility - General Solubility Equation (Yalkowsky) Notorious challenge (unknown crystal packing, interactions in liquid state) Goal: Stepwise model for bioavailability, from solid compound to systemic availability

56 Can Current Models be Improved? Bergström: 277 total compounds, PLS, RMSE = 49.8 ºC, q² = 0.54; n_test:n_train = 185:92. Karthikeyan: 4173 total compounds, ANN, RMSE = 49.3 ºC, q² = 0.66; n_test:n_train = 1042:2089. Jain: 1215 total compounds, MLR, AAE = 33.2 ºC, no q² reported; n_test:n_train = 119:1094. RMSE slightly below 50 ºC! q² just above 0.50!

57 Machine Learning Method k-Nearest Neighbours

58 k-Nearest Neighbours ● Used for classification and regression ● Suitable for large and diverse datasets ● Local (tuneable via k) ● Non-linear ● Calculation of a distance matrix instead of a training step ● The distance matrix depends on the descriptors used

59 Molecules and Numbers
Datasets:
● 4119 organic molecules of diverse complexity, MPs from 14 to 392.5 ºC (dataset 1)
● 277 drug-like molecules, MPs from 40 to 345 ºC (dataset 2)
● Both datasets available in the recent literature
Descriptors:
● QuaSAR descriptors from MOE (Molecular Operating Environment): 146 2D and 57 3D descriptors
● Sybyl atom-type counts (53 different types, including information about hybridisation state)

60 Finding the Neighbours Distances: the Euclidean distance between point i and point j in r dimensions, d(i,j) = sqrt( Σ_l (x_il − x_jl)² ). Set of k nearest neighbours N = {T_i, d_i}; k = 1, 5, 10, 15.

61 Calculation of Predictions Predictions: T_pred = f(N). Unweighted: arithmetic (1) and geometric (2) averages. Weighted: inverse distance (3) and exponential decrease (4).
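The four prediction rules T_pred = f(N) can be sketched over a list of neighbours (T_i, d_i). The slide does not give the weight formulas explicitly, so the w_i = 1/d_i and w_i = e^(−d_i) forms below are plausible assumptions, and the neighbour values are invented:

```python
# Sketch of the four prediction rules over the k nearest neighbours (T_i, d_i):
# (1) arithmetic mean, (2) geometric mean, (3) inverse-distance weighting,
# (4) exponentially decreasing weights. The weight formulas 1/d and exp(-d)
# are assumed forms, not taken from the slide.

import math

def predict(neighbours, scheme):
    """neighbours: list of (T_i, d_i); T_i = neighbour's melting point,
    d_i = its distance in descriptor space."""
    ts = [t for t, _ in neighbours]
    if scheme == "arithmetic":
        return sum(ts) / len(ts)
    if scheme == "geometric":
        return math.prod(ts) ** (1 / len(ts))
    if scheme == "inverse":
        weights = [1 / (d + 1e-12) for _, d in neighbours]
    elif scheme == "exponential":
        weights = [math.exp(-d) for _, d in neighbours]
    else:
        raise ValueError(scheme)
    total = sum(weights)
    return sum(w * t for w, (t, _) in zip(weights, neighbours)) / total

nbrs = [(100.0, 0.5), (150.0, 1.0), (200.0, 2.0)]  # illustrative (T in ºC, d)
for s in ("arithmetic", "geometric", "inverse", "exponential"):
    print(s, round(predict(nbrs, s), 1))
```

Both weighted schemes pull the prediction towards the closest neighbour, which is exactly the "local, tuneable via k" behaviour described earlier.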

62 Assessment by Cross-validation Select m random molecules for the test set; the N − m remaining molecules form the training set. Organics: N = 4119, m = 1000, m/N = 24.3%. Drugs: N = 277, m = 80, m/N = 28.9%. Data → random split → training + test → predict the test set (from the training set) → RMSEP; repeat 25 times → cross-validated root-mean-square error.

63 Assessment by Cross-validation This is not “25-fold CV”, where each 1/25th is predicted based on the rest, but prediction of 25 random sets of 1000 molecules: Monte Carlo cross-validation.
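The Monte Carlo cross-validation procedure is easy to sketch: repeat a random m-out-of-N split 25 times and average the RMSEP. Here a trivial "predict the training-set mean" model and fake melting points stand in for the real kNN model and data:

```python
# Sketch of Monte Carlo cross-validation: 25 independent random m/N splits,
# predict the test set from the training set, average the RMSEP.
# A "predict the training mean" dummy model replaces the real kNN model.

import random
import statistics

def monte_carlo_cv(values, m, repeats=25, seed=42):
    rng = random.Random(seed)
    rmseps = []
    for _ in range(repeats):
        idx = list(range(len(values)))
        rng.shuffle(idx)                      # fresh random split each repeat
        test, train = idx[:m], idx[m:]
        mean_train = statistics.fmean(values[i] for i in train)
        rmsep = (sum((values[i] - mean_train) ** 2 for i in test) / m) ** 0.5
        rmseps.append(rmsep)
    return statistics.fmean(rmseps)

mps = [float(t) for t in range(14, 114)]      # 100 fake melting points (ºC)
print(round(monte_carlo_cv(mps, m=24), 1))
```

Unlike 25-fold CV, the 25 test sets here can overlap; each repeat is an independent random split, matching the slide's description.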

64 Results Using a combination of 2D and 3D descriptors (training set: 3119 organic molecules; test set: 1000 organic molecules). Weighting schemes: A = arithmetic, G = geometric, I = inverse distance, E = exponential.

k    A: RMSE / r² / q²     G: RMSE / r² / q²     I: RMSE / r² / q²     E: RMSE / r² / q²
1    57.6 / 0.36 / 0.21    –                     –                     –
5    48.9 / 0.44 / 0.43    50.0 / 0.43 / 0.40    48.2 / 0.45 / –       48.8 / 0.45 / 0.43
10   48.8 / 0.44 / 0.43    50.2 / 0.43 / 0.40    47.7 / 0.46 / –       47.6 / 0.47 / 0.46
15   49.3 / 0.43 / 0.42    50.9 / 0.42 / 0.38    48.1 / 0.46 / 0.45    47.3 / 0.48 / 0.47

65 Results: Organic Dataset (table as on the previous slide) Using 15 nearest neighbours and exponential weighting: 2D descriptors only: RMSE = 47.2 ºC, r² = 0.47, q² = 0.47; 3D descriptors only: RMSE = 50.7 ºC, r² = 0.39, q² = 0.39. Exponential weighting performs best!

66 Results: Drugs Dataset Weighting schemes: A = arithmetic, G = geometric, I = inverse distance, E = exponential.

k    A: RMSE / r² / q²     G: RMSE / r² / q²     I: RMSE / r² / q²     E: RMSE / r² / q²
1    57.1 / 0.19 / –       –                     –                     –
5    47.7 / 0.27 / 0.25    48.8 / 0.26 / 0.22    47.1 / 0.29 / 0.27    48.5 / 0.28 / 0.22
10   46.9 / 0.29 / 0.28    47.8 / 0.30 / 0.25    46.3 / 0.30 / 0.29    47.4 / 0.29 / 0.26
15   48.0 / 0.25 / 0.24    49.0 / 0.26 / 0.21    47.2 / 0.28 / 0.27    47.2 / 0.29 / 0.27

Using 10 nearest neighbours and inverse-distance weighting: 2D descriptors only: RMSE = 46.5 ºC, r² = 0.30, q² = 0.29; 3D descriptors only: RMSE = 48.4 ºC, r² = 0.24, q² = 0.23. Inverse-distance weighting performs best!

67 Summary of Results Organics: RMSE = 46.2 ºC, r² = 0.49, n_test:n_train = 1000:3119. Drugs: RMSE = 42.2 ºC, r² = 0.42, n_test:n_train = 80:197. Difficulties at the extremities of the temperature scale, due to distances in descriptor space and in temperature.

68 Summary of Results Nonetheless, comparison with other models in the literature shows that the presented model: – performs better in terms of RMSE under more stringent conditions (even unoptimised); – is essentially parameterless; even the optimised model has many fewer parameters than, for example, an ANN.

69 Conclusions ● Information content of descriptors ● Exponential weighting performs best ● A genetic algorithm is able to improve performance, especially for atom-type counts as descriptors ● Remaining error: interactions in the liquid and/or solid state are not captured (by single-molecule descriptors); limitations of the kNN method in this setting (datasets + descriptors)

70 Thanks Pfizer & PIPMS Unilever Dave Palmer Florian Nigsch

71 All slides after here are for information only

72 The Two Faces of Computational Chemistry Theory Empiricism

73 Informatics and Empirical Models In general, informatics methods represent phenomena mathematically, but not in a physics-based way. The input–output model is based on an empirically parameterised equation or a more elaborate mathematical model. Such methods do not attempt to simulate reality, and are usually high throughput.

74 Theoretical Chemistry Calculations and simulations based on real physics. Calculations are either quantum mechanical or use parameters derived from quantum mechanics. Attempt to model or simulate reality. Usually Low Throughput.

75 Optimisation with Genetic Algorithm Transformation of the data to yield a lower global RMSEP (global means: for the whole data set). Three operations transform the columns of the data matrix for optimal weighting. One bit per descriptor on the chromosomes: (op; param), consisting of an operation and a parameter. Operations: multiplication, with parameter range (a, b) = (0, 30); power, with (a, b) = (0, 3); log_e, with no parameter.
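The per-descriptor genes can be sketched as a small dispatcher that applies one of the three operations to a column of the data matrix. The function name and interface are illustrative, not from the talk:

```python
# Sketch of the per-descriptor transformations carried on the GA chromosome:
# each gene (op, param) rescales one column of the data matrix by
# multiplication, a power, or log_e (which takes no parameter).
# apply_gene is a hypothetical helper name, not from the original slides.

import math

def apply_gene(column, op, param=None):
    if op == "mult":      # param drawn from (0, 30)
        return [x * param for x in column]
    if op == "power":     # param drawn from (0, 3)
        return [x ** param for x in column]
    if op == "log":       # natural log, no parameter
        return [math.log(x) for x in column]
    raise ValueError(op)

col = [1.0, math.e, math.e ** 2]
print(apply_gene(col, "mult", 2.0))
print([round(x, 3) for x in apply_gene(col, "log")])
```

The GA then searches over one (op, param) gene per descriptor, keeping the chromosome whose transformed distance matrix gives the lowest global RMSEP.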

76 Iterative Genetic Optimisation Initial data + initial chromosomes → transformation of data → calculation of distance matrix → evaluation of models → survival of the fittest → genetic operations → (stop if criterion reached) → set of optimal transformations.

77 Genetic Algorithm Able to Improve Results Application of the optimal transformations to the initial data and reassessment of the models (averages over 25 runs!).

Data set   Descriptors   RMSE (ºC)   r²     q²
1          2D            46.2        0.49   –
2          2D            42.2        0.42   0.40
1          SAC           49.1        0.40   0.39
2          SAC           46.7        0.29   0.27

SAC: Sybyl Atom Counts

78 Errors at Low/High-Melting Ends… Signed-RMSE analysis (see the definition of signed RMSE). Details: in 30 ºC steps across the whole range; all 4119 molecules of dataset 1.

79 … Result From Large Distances Distance in descriptor space: the nearest neighbours of high-melting compounds are typically further away. Distance in temperature: T_test − T_NN (< 0: negative neighbour; > 0: positive neighbour).

80 Over- & Underpredictions at Extremities Distance in descriptor space: the nearest neighbours of high-melting compounds are typically further away. Distance in temperature: T_test − T_NN (< 0: negative neighbour; > 0: positive neighbour), leading to underprediction at one extremity of the temperature scale and overprediction at the other.


