1 Phylogenetic Tree Reconstruction. Tandy Warnow, The Program in Evolutionary Dynamics at Harvard University and The University of Texas at Austin

2 Phylogeny. [Figure: a phylogeny of Orangutan, Gorilla, Chimpanzee, and Human, from the Tree of Life website, University of Arizona.]

3 Evolution informs about everything in biology. Big genome sequencing projects just produce data – so what? Evolutionary history relates all organisms and genes, and helps us understand and predict:
– interactions between genes (genetic networks)
– drug design
– predicting functions of genes
– influenza vaccine development
– origins and spread of disease
– origins and migrations of humans

4 Reconstructing the “Tree” of Life Handling large datasets: millions of species

5 Cyber Infrastructure for Phylogenetic Research Purpose: to create a national infrastructure of hardware, algorithms, database technology, etc., necessary to infer the Tree of Life. Group: 40 biologists, computer scientists, and mathematicians from 13 institutions. Funding: $11.6 M (large ITR grant from NSF).

6 CIPRES Members
University of New Mexico: Bernard Moret, David Bader, Tiffani Williams
UCSD/SDSC: Fran Berman, Alex Borchers, David Stockwell, Phil Bourne, John Huelsenbeck, Mark Miller, Michael Alfaro, Tracy Zhao
University of Connecticut: Paul O. Lewis
University of Pennsylvania: Susan Davidson, Junhyong Kim, Sampath Kannan
UT Austin: Tandy Warnow, David M. Hillis, Warren Hunt, Robert Jansen, Randy Linder, Lauren Meyers, Daniel Miranker, Usman Roshan, Luay Nakhleh
University of Arizona: David R. Maddison
University of British Columbia: Wayne Maddison
North Carolina State University: Spencer Muse
American Museum of Natural History: Ward C. Wheeler
UC Berkeley: Satish Rao, Steve Evans, Richard M. Karp, Brent Mishler, Elchanan Mossel, Eugene W. Myers, Christos M. Papadimitriou, Stuart J. Russell
SUNY Buffalo: William Piel
Florida State University: David L. Swofford, Mark Holder
Yale University: Michael Donoghue, Paul Turner
sanofi-aventis: Lisa Vawter

7 Phylogeny Problem. [Figure: five aligned sequences (TAGCCCA, TAGACTT, TGCACAA, TGCGCTT, AGGGCAT) at leaves U, V, W, X, Y; the problem is to infer the tree relating them.]

8 Steps in a phylogenetic analysis:
1. Gather data.
2. Align sequences.
3. Reconstruct phylogeny on the multiple alignment, often obtaining a large number of trees.
4. Compute consensus (or otherwise estimate the reliable components of the evolutionary history).
5. Perform post-tree analyses.

9 CIPRES research in algorithms:
– Heuristics for NP-hard problems in phylogeny reconstruction
– Compact representation of sets of trees
– Reticulate evolution reconstruction
– Performance of phylogeny reconstruction methods under stochastic models of evolution
– Gene order phylogeny
– Genomic alignment
– Lower bounds for MP
– Distance-based reconstruction
– Gene family evolution
– High-throughput phylogenetic placement
– Multiple sequence alignment

10 DNA Sequence Evolution. [Figure: a model tree spanning 3 million years. The ancestral sequence AAGACTT evolves through intermediate sequences (AAGGCCT, TGGACTT, AGGGCAT, TAGCCCT, AGCACTT) into the present-day sequences AGGGCAT, TAGCCCA, TAGACTT, AGCGCTT, and AGCACAA.]

11 Markov models of site evolution. Simplest (Jukes-Cantor):
– The model tree is a pair (T, {p(e)}), where T is a rooted binary tree and p(e) is the probability of a substitution on the edge e.
– The state at the root is random.
– If a site changes on an edge, it changes with equal probability to each of the remaining states.
– The evolutionary process is Markovian.
More complex models (such as the General Markov model) are also considered, with little change to the theory.
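
As an illustration of the Jukes-Cantor process just described, here is a minimal simulation sketch in Python (not part of the talk; the tree shape, edge substitution probabilities, and sequence length are arbitrary choices for the example):

    import random

    BASES = "ACGT"

    def evolve(seq, p_sub):
        # One edge of Jukes-Cantor evolution: each site changes with probability
        # p_sub, and a changed site moves to one of the other 3 states uniformly.
        return "".join(
            random.choice([b for b in BASES if b != c]) if random.random() < p_sub else c
            for c in seq
        )

    def simulate(node, seq, leaves):
        # node is a leaf name (str) or a 4-tuple (left, p_left, right, p_right).
        # Evolve seq down the tree and record the sequences reaching the leaves.
        if isinstance(node, str):
            leaves[node] = seq
            return
        left, p_left, right, p_right = node
        simulate(left, evolve(seq, p_left), leaves)
        simulate(right, evolve(seq, p_right), leaves)

    # Model tree ((A,B),(C,D)) with arbitrary substitution probabilities per edge.
    random.seed(0)
    root_seq = "".join(random.choice(BASES) for _ in range(1000))  # random root state
    tree = (("A", 0.05, "B", 0.05), 0.10, ("C", 0.05, "D", 0.05), 0.10)
    leaf_seqs = {}
    simulate(tree, root_seq, leaf_seqs)
    print({name: s[:20] for name, s in leaf_seqs.items()})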

12 Phylogeny Reconstruction. [Figure: the same five sequences (TAGCCCA, TAGACTT, TGCACAA, TGCGCTT, AGGGCAT) at leaves U, V, W, X, Y, together with the tree to be reconstructed.]

13 Phylogenetic reconstruction methods:
1. Hill-climbing heuristics for hard optimization criteria (Maximum Parsimony and Maximum Likelihood).
2. Polynomial time distance-based methods: Neighbor Joining, etc.
[Figure: cost landscape over the space of phylogenetic trees, showing a global optimum and a local optimum.]

14 Performance criteria:
– Running time.
– Space.
– Statistical performance issues (e.g., statistical consistency) with respect to a Markov model of evolution.
– “Topological accuracy” with respect to the underlying true tree, typically studied in simulation.
– Accuracy with respect to a particular criterion (e.g., tree length or likelihood score) on real data.

15 Maximum Parsimony. Input: Set S of n aligned sequences of length k. Output: A phylogenetic tree T
– leaf-labeled by the sequences in S
– with additional sequences of length k labeling the internal nodes of T
such that the total number of changes (the sum, over all edges (u,v) of T, of the Hamming distance between the sequences labeling u and v) is minimized.

16 Theoretical results Neighbor Joining is polynomial time, and statistically consistent under typical models of evolution. Maximum Parsimony is NP-hard, and even exact solutions are not statistically consistent under typical models. Maximum Likelihood is of unknown computational complexity, but statistically consistent under typical models.
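
For reference, here is a compact from-scratch sketch of the Neighbor Joining algorithm mentioned above (an illustration only, not the implementation used in these studies; it assumes a symmetric distance matrix with zero diagonal and returns a Newick string):

    import numpy as np

    def neighbor_joining(dist, names):
        # Neighbor Joining on a symmetric distance matrix.
        D = np.asarray(dist, dtype=float)
        nodes = list(names)
        while len(nodes) > 2:
            n = len(nodes)
            r = D.sum(axis=1)
            Q = (n - 2) * D - r[:, None] - r[None, :]            # NJ selection criterion
            np.fill_diagonal(Q, np.inf)
            i, j = np.unravel_index(np.argmin(Q), Q.shape)       # pair to join
            li = 0.5 * D[i, j] + (r[i] - r[j]) / (2 * (n - 2))   # branch length to node i
            lj = D[i, j] - li                                    # branch length to node j
            d_new = 0.5 * (D[i, :] + D[j, :] - D[i, j])          # distances to the new node
            merged = "(%s:%.3f,%s:%.3f)" % (nodes[i], li, nodes[j], lj)
            keep = [k for k in range(n) if k not in (i, j)]
            D2 = np.zeros((len(keep) + 1, len(keep) + 1))
            D2[:-1, :-1] = D[np.ix_(keep, keep)]
            D2[-1, :-1] = D2[:-1, -1] = d_new[keep]
            D, nodes = D2, [nodes[k] for k in keep] + [merged]
        # Join the last two nodes; their separating length is split arbitrarily in half.
        return "(%s:%.3f,%s:%.3f);" % (nodes[0], D[0, 1] / 2, nodes[1], D[0, 1] / 2)

    # Additive toy matrix for the tree ((A:2,B:3):1,(C:4,D:5)); NJ recovers this tree.
    names = ["A", "B", "C", "D"]
    D = [[0, 5, 7, 8],
         [5, 0, 8, 9],
         [7, 8, 0, 9],
         [8, 9, 9, 0]]
    print(neighbor_joining(D, names))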

17 Problems with NJ Theory: The convergence rate is exponential: the number of sites needed to obtain an accurate reconstruction of the tree with high probability grows exponentially in the evolutionary diameter. Empirical: NJ has poor performance on datasets with some large leaf-to-leaf distances.

18 Neighbor joining has poor accuracy on large diameter model trees [Nakhleh et al. ISMB 2001]. Simulation study based upon fixed edge lengths, K2P model of evolution, sequence lengths fixed to 1000 nucleotides. Error rates reflect the proportion of incorrect edges in the inferred trees. [Figure: NJ error rate (0 to 0.8) versus number of taxa (0 to 1600); the error grows as the number of taxa increases.]

19 Solving NP-hard problems exactly is ... unlikely. The number of (unrooted) binary trees on n leaves is (2n-5)!!. If each tree on 1000 taxa could be analyzed in 0.001 seconds, we would find the best tree in 2890 millennia.
#leaves   #trees
4         3
5         15
6         105
7         945
8         10,395
9         135,135
10        2,027,025
20        2.2 x 10^20
100       4.5 x 10^190
1000      2.7 x 10^2900
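
The tree counts in the table are easy to reproduce; a minimal sketch of the (2n-5)!! formula in Python:

    def num_unrooted_binary_trees(n):
        # Number of unrooted binary trees on n labeled leaves: (2n-5)!! = 3*5*7*...*(2n-5).
        count = 1
        for k in range(3, 2 * n - 4, 2):
            count *= k
        return count

    for n in (4, 5, 6, 7, 8, 9, 10, 20):
        print(n, num_unrooted_binary_trees(n))             # matches the table above
    print(1000, len(str(num_unrooted_binary_trees(1000))), "digits")  # about 2.7 x 10^2900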

20 Approaches for “solving” MP/ML:
1. Hill-climbing heuristics (which can get stuck in local optima).
2. Randomized algorithms for getting out of local optima.
3. Approximation algorithms for MP (based upon Steiner Tree approximation algorithms).
[Figure: the cost landscape over phylogenetic trees again, with a global optimum and a local optimum.]
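
The hill-climbing and random-restart ideas in items 1-2 can be sketched generically as below; the helpers random_tree, random_neighbor, and parsimony_score are hypothetical placeholders (e.g., a TBR move generator and a parsimony scorer), not any particular package's API:

    def hill_climb(start_tree, parsimony_score, random_neighbor, max_stagnant=100):
        # Greedy local search: accept a neighboring tree whenever it improves the
        # score; stop after max_stagnant consecutive non-improving proposals.
        best, best_score = start_tree, parsimony_score(start_tree)
        stagnant = 0
        while stagnant < max_stagnant:
            cand = random_neighbor(best)            # e.g. one TBR / SPR / NNI move
            cand_score = parsimony_score(cand)
            if cand_score < best_score:             # parsimony: lower is better
                best, best_score, stagnant = cand, cand_score, 0
            else:
                stagnant += 1
        return best, best_score

    def random_restarts(n_restarts, random_tree, parsimony_score, random_neighbor):
        # Run hill-climbing from several random starting trees and keep the overall
        # best, a simple way of escaping some local optima.
        runs = [hill_climb(random_tree(), parsimony_score, random_neighbor)
                for _ in range(n_restarts)]
        return min(runs, key=lambda pair: pair[1])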

21 How good an MP analysis do we need? Our research shows that we need to get within 0.01% of optimal (or even better, on large datasets) to return reasonable estimates of the true tree's “topology”.
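
To make the 0.01% figure concrete (illustrative numbers only, not from the study): if the best known MP score on a dataset were 1,000,000 steps, then 0.01% above optimal would be only 100 extra steps, and the later slides indicate that analyses falling farther from optimal than this tend to show high topological error.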

22 Datasets (number of sequences and source):
1,322    lsu rRNA of all organisms
2,000    Eukaryotic rRNA
2,594    rbcL DNA
4,583    Actinobacteria 16s rRNA
6,590    ssu rRNA of all Eukaryotes
7,180    three-domain rRNA
7,322    Firmicutes bacteria 16s rRNA
8,506    three-domain+2org rRNA
11,361   ssu rRNA of all Bacteria
13,921   Proteobacteria 16s rRNA
Obtained from various researchers and online databases.

23 Problems with current techniques for MP. [Figure: average MP scores above optimal for the best methods at 24 hours, across the 10 datasets.] The best current techniques fail to reach within 0.01% of optimal at the end of 24 hours on the large datasets.

24 Problems with current techniques for MP. Shown here is the performance of the TNT heuristic search for maximum parsimony on a real dataset of almost 14,000 sequences. The required level of accuracy with respect to MP score is no more than 0.01% error (otherwise high topological error results). (“Optimal” here means the best score to date, using any method for any amount of time.) [Figure: performance of TNT over time.]

25 Empirical problems with existing methods Heuristics for Maximum Parsimony (MP) and Maximum Likelihood (ML) cannot handle large datasets (take too long!) – we need new heuristics for MP/ML that can analyze large datasets Polynomial time methods have poor topological accuracy on large datasets – we need better polynomial time methods

26 “Boosting” phylogeny reconstruction methods. DCMs “boost” the performance of phylogeny reconstruction methods. [Figure: a base method M is combined with a DCM to produce the boosted method DCM-M.]

27 DCMs: Divide-and-conquer for improving phylogeny reconstruction

28 DCMs (Disk-Covering Methods). DCMs for polynomial time methods improve topological accuracy (empirical observation) and have provable theoretical guarantees under Markov models of evolution. DCMs for hard optimization problems reduce the running time needed to achieve good levels of accuracy (empirical observation). Each DCM is designed by considering the kinds of datasets the base method will do well or poorly on, and these designs are then tested on real and simulated data.

29 DCM1 Decompositions. Input: Set S of sequences, distance matrix d, threshold value.
1. Compute the threshold graph.
2. If the graph is not triangulated, add additional edges to triangulate it.
The DCM1 decomposition is given by the maximal cliques of the resulting graph.
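
To illustrate the threshold-graph step, here is a small sketch using networkx with a made-up toy distance matrix (nx.is_chordal and nx.find_cliques are standard networkx functions; the triangulation step itself is not reproduced here):

    import networkx as nx

    def threshold_graph(names, dist, q):
        # One vertex per sequence; an edge joins two sequences whose distance
        # is at most the threshold q.
        G = nx.Graph()
        G.add_nodes_from(names)
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                if dist[a][b] <= q:
                    G.add_edge(a, b)
        return G

    # Toy distance matrix (hypothetical values) and threshold.
    names = ["u", "v", "w", "x", "y"]
    pair_dist = {("u", "v"): 2, ("u", "w"): 5, ("u", "x"): 9, ("u", "y"): 9,
                 ("v", "w"): 4, ("v", "x"): 8, ("v", "y"): 9,
                 ("w", "x"): 3, ("w", "y"): 6, ("x", "y"): 2}
    d = {a: {} for a in names}
    for (a, b), val in pair_dist.items():
        d[a][b] = d[b][a] = val

    G = threshold_graph(names, d, q=5)
    print("triangulated (chordal) already?", nx.is_chordal(G))
    # DCM1 triangulates the graph if necessary; its subproblems are the maximal cliques.
    print("maximal cliques:", list(nx.find_cliques(G)))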

30 DCM1-boosting distance-based methods [Nakhleh et al. ISMB 2001]. DCM1-boosting makes distance-based methods more accurate. There are theoretical guarantees that DCM1-NJ converges to the true tree from polynomial-length sequences. [Figure: error rate (0 to 0.8) versus number of taxa (0 to 1600) for NJ and DCM1-NJ; DCM1-NJ has the lower error.]

31 Major challenge: MP and ML Maximum Parsimony (MP) and Maximum Likelihood (ML) remain the methods of choice for most systematists The main challenge here is to make it possible to obtain good solutions to MP or ML in reasonable time periods on large datasets

32 Maximum Parsimony. Input: Set S of n aligned sequences of length k. Output: A phylogenetic tree T
– leaf-labeled by the sequences in S
– with additional sequences of length k labeling the internal nodes of T
such that the total number of changes (the sum, over all edges (u,v) of T, of the Hamming distance between the sequences labeling u and v) is minimized.

33 Maximum parsimony (example). Input: four sequences
– ACT
– ACA
– GTT
– GTA
Question: which of the three possible trees has the best MP score?

34 Maximum Parsimony. [Figure: the three possible unrooted tree topologies on the four sequences ACT, ACA, GTT, GTA.]

35 Maximum Parsimony. [Figure: the three trees with internal nodes labeled by inferred sequences and per-edge change counts; the depicted labelings give MP scores of 5, 7, and 4. The tree grouping ACT,ACA against GTT,GTA (internal labels ACA and GTA, edge costs 1+2+1) has MP score 4 and is the optimal MP tree.]

36 Maximum Parsimony: computational complexity. [Figure: the optimal tree from the previous slide, with MP score 4.] Finding the optimal MP tree is NP-hard. The optimal labeling of a fixed tree (and hence its MP score) can be computed in linear time, O(nk).
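
The linear-time labeling mentioned above is usually computed with Fitch's algorithm; here is a minimal sketch applied to the four-sequence example from the previous slides (the tree is written as a nested tuple and rooted arbitrarily, which does not change the parsimony score):

    def fitch_score(tree, seqs):
        # Parsimony score of a fixed (rooted, binary) tree via Fitch's algorithm.
        # tree: nested tuple of leaf names; seqs: dict name -> aligned sequence.
        k = len(next(iter(seqs.values())))

        def site_changes(node, site):
            # Return (state set, number of changes) for the subtree at this node.
            if isinstance(node, str):                  # leaf: its observed character
                return {seqs[node][site]}, 0
            (a, ca), (b, cb) = (site_changes(child, site) for child in node)
            if a & b:
                return a & b, ca + cb                  # children agree: no extra change
            return a | b, ca + cb + 1                  # conflict: one substitution

        return sum(site_changes(tree, s)[1] for s in range(k))

    seqs = {"1": "ACT", "2": "ACA", "3": "GTT", "4": "GTA"}
    print(fitch_score((("1", "2"), ("3", "4")), seqs))  # 4: the optimal tree on the slide
    print(fitch_score((("1", "3"), ("2", "4")), seqs))  # the other two topologies score
    print(fitch_score((("1", "4"), ("2", "3")), seqs))  # worse under optimal labelings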

37 Problems with current techniques for MP. Even the best of the current methods do not come within 0.01% of “optimal” on large datasets in 24 hours. (“Optimal” means the best score to date, using any method over any amount of time.) [Figure: performance of TNT over time.]

38 Observations The best MP heuristics cannot get acceptably good solutions within 24 hours on most of these large datasets. Datasets of these sizes may need months (or years) of further analysis to reach reasonable solutions. Apparent convergence can be misleading.

39 How can we improve upon existing techniques?

40 Using divide-and-conquer Conjecture: better (more accurate) solutions will be found in less time, if we analyze a small number of smaller subsets and then combine solutions

41 Our objective: speed up the best MP heuristics. [Schematic (labeled “fake study”, not real data): MP score of the best trees versus time, comparing the performance of a hill-climbing heuristic with the desired performance.]

42 DCM Decompositions. Input: Set S of sequences, distance matrix d, threshold value.
1. Compute the threshold graph.
2. Perform a minimum weight triangulation.
DCM1 decomposition: the maximal cliques of the triangulated graph. DCM2 decomposition: a clique separator plus the components it separates.

43 Divide-and-conquer technique for speeding up MP/ML searches

44 But: it didn’t work! A simple divide-and-conquer was insufficient for the best-performing MP heuristics: TNT by itself was as good as DCM(TNT).

45 Empirical observation. DCM1 is not as good as DCM2 for MP, but DCM2 decompositions are too large and too slow to compute. Neither improved the best MP heuristics.

46 How can we improve upon existing techniques?

47 Tree Bisection and Reconnection (TBR)

48 Delete an edge

49 Tree Bisection and Reconnection (TBR)

50 Reconnect the two trees with a new edge that subdivides an edge in each tree

51 A conjecture as to why current techniques are poor: Our studies suggest that trees with near-optimal scores tend to be topologically close (RF distance less than 15%) to the other near-optimal trees. The standard technique (TBR) for moving around tree space explores O(n^3) trees, most of which are topologically distant. So TBR may be useful initially (to reach near-optimality), but afterwards more “localized” searches are more productive.
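
For reference, a from-scratch sketch of the Robinson-Foulds (RF) distance used above, with trees written as nested tuples of leaf names (an illustration, not the tooling used in the study):

    def leaf_set(node):
        # Set of leaf names below a node of a nested-tuple tree.
        if isinstance(node, str):
            return frozenset([node])
        return frozenset().union(*(leaf_set(c) for c in node))

    def splits(tree):
        # Non-trivial bipartitions induced by the edges of the tree, each stored
        # canonically as the lexicographically smaller side.
        taxa = leaf_set(tree)
        found = set()

        def walk(node):
            if isinstance(node, str):
                return
            for child in node:
                side = leaf_set(child)
                if 1 < len(side) < len(taxa) - 1:
                    found.add(min(side, taxa - side, key=lambda s: tuple(sorted(s))))
                walk(child)

        walk(tree)
        return found

    def rf_distance(t1, t2):
        # RF distance: number of bipartitions appearing in exactly one of the trees.
        # To express it as a percentage, divide by the maximum 2(n-3) for binary trees.
        return len(splits(t1) ^ splits(t2))

    t_a = ((("A", "B"), "C"), ("D", ("E", "F")))
    t_b = ((("A", "C"), "B"), ("D", ("E", "F")))
    print(rf_distance(t_a, t_b))   # 2: the trees disagree on one internal edge
    print(rf_distance(t_a, t_a))   # 0: identical trees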

52 Using DCMs differently. Observation: DCMs make small local changes to the tree. New algorithmic strategy: use DCMs iteratively and/or recursively to improve heuristics on large datasets. However, the initial DCMs for MP
– produced large subproblems and
– took too long to compute.
We needed a decomposition strategy that produces small subproblems quickly.

55 New DCM3 decomposition. Input: Set S of sequences and a guide tree T.
1. Compute the short subtree graph G(S,T), based upon T.
2. Find a clique separator in the graph G(S,T) and form the subproblems.
DCM3 decompositions (1) can be obtained in O(n) time, (2) yield small subproblems, and (3) can be used iteratively.

56 Strict Consensus Merger (SCM)

57 Iterative-DCM3. [Figure: a cycle in which a guide tree T is decomposed with DCM3, the base method is applied to the subproblems, and the merged result T' becomes the guide tree for the next iteration.]

58 New DCMs. DCM3:
1. Compute subproblems using the DCM3 decomposition.
2. Apply the base method to each subproblem to yield subtrees.
3. Merge the subtrees using the Strict Consensus Merger technique.
4. Randomly refine to make the tree binary.
Recursive-DCM3. Iterative DCM3:
1. Compute a DCM3 tree.
2. Perform a local search and go to step 1.
Recursive-Iterative DCM3 (sketched below).
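
A high-level sketch of how these pieces fit together; every helper here (dcm3_decompose, strict_consensus_merge, random_refine, local_search, and the base method) is a hypothetical placeholder for the corresponding component named on this slide, not a real API:

    def rec_dcm3(sequences, guide_tree, base_method, max_subproblem_size,
                 dcm3_decompose, strict_consensus_merge, random_refine):
        # One (recursive) DCM3 pass: decompose, solve the subproblems (recursing on
        # those still too large), merge the subtrees, and refine to a binary tree.
        subtrees = []
        for sub in dcm3_decompose(sequences, guide_tree):
            if len(sub) > max_subproblem_size:          # Recursive-DCM3
                subtrees.append(rec_dcm3(sub, guide_tree, base_method, max_subproblem_size,
                                         dcm3_decompose, strict_consensus_merge, random_refine))
            else:
                subtrees.append(base_method(sub))       # e.g. a TNT search on the subproblem
        merged = strict_consensus_merge(subtrees)
        return random_refine(merged)                    # make the merged tree binary

    def rec_i_dcm3(sequences, start_tree, base_method, local_search, n_iterations, **helpers):
        # Iterative wrapper: each DCM3 tree is improved by local search and then used
        # as the guide tree for the next round (Recursive-Iterative DCM3).
        tree = start_tree
        for _ in range(n_iterations):
            tree = rec_dcm3(sequences, tree, base_method, **helpers)
            tree = local_search(tree)                   # e.g. TBR hill-climbing
        return tree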

59 Comparison of DCM decompositions (maximum subset size). [Figure: maximum subproblem size produced by each decomposition on each dataset.] DCM2 subproblems are almost as large as the full dataset on datasets 1 through 4. On datasets 5-10, DCM2 was too slow to compute a decomposition within 24 hours.

60 Comparison of DCMs (4,583 sequences). The base method is the TNT-ratchet. DCM2 takes almost 10 hours to produce a tree and is too slow to run on larger datasets. Rec-I-DCM3 is the best method at all times.

61 Comparison of DCMs (13,921 sequences). The base method is the TNT-ratchet. Note the improvement in the DCMs as we move from the default to recursion, to iteration, to recursion+iteration. On very large datasets Rec-I-DCM3 gives significant improvements over unboosted TNT.

62 Rec-I-DCM3 significantly improves performance. [Figure: comparison of TNT (the current best technique) to Rec-I-DCM3(TNT) (the DCM-boosted version) on one large dataset.]

63 Rec-I-DCM3(TNT) vs. TNT (Comparison of scores at 24 hours) The base method is the default TNT technique, the current best method for MP on large datasets. Rec-I-DCM3 significantly improves upon the unboosted TNT.

64 Observations Rec-I-DCM3 improves upon the best performing heuristics for MP. The improvement increases with the difficulty of the dataset. DCMs also boost the performance of ML heuristics (not shown).

65 Questions. Tree shape (including branch lengths) has an impact on phylogeny reconstruction, but what model of tree shape should we use? What is the sequence length requirement for Maximum Likelihood? (The result by Székely and Steel is worse than that for Neighbor Joining.) Why is MP not so bad?

66 General comments. There is interesting computer science research to be done in computational phylogenetics, with a tremendous potential for impact. Algorithm development must be tested on both real and simulated data. The interplay between data, stochastic models of evolution, optimization problems, and algorithms is important and instructive.

67 Reconstructing the “Tree” of Life Handling large datasets: millions of species The “Tree of Life” is not really a tree: reticulate evolution

68 Ringe-Warnow Phylogenetic Tree of Indo-European (dates not meant seriously)

69 Websites for more information. Please see the CIPRES web site at http://www.phylo.org for more about biological phylogenetics. The Computational Phylogenetics for Historical Linguistics project web site has papers, data, and additional material, as well as a workshop announcement for March 21, 2005 on Mathematical Modelling and Analysis of Language Diversification: http://www.cs.rice.edu/~nakhleh/CPHL

70 Acknowledgements NSF The David and Lucile Packard Foundation The Radcliffe Institute for Advanced Study, and the Program in Evolutionary Dynamics at Harvard The Institute for Cellular and Molecular Biology at UT-Austin Collaborators: Usman Roshan, Bernard Moret, and Tiffani Williams

