The Disk-Covering Method for Phylogenetic Tree Reconstruction


1 The Disk-Covering Method for Phylogenetic Tree Reconstruction
Tandy Warnow The Program in Evolutionary Dynamics at Harvard University The University of Texas at Austin

2 Phylogeny Orangutan Gorilla Chimpanzee Human
From the Tree of Life website, University of Arizona: Orangutan, Gorilla, Chimpanzee, Human. A phylogeny is a tree representation of the evolutionary history relating the species we are interested in; the figure shows an example of a 13-species phylogeny. Each leaf of the tree is a species, also called a taxon in phylogenetics (plural: taxa), and all of them are distinct. Each internal node corresponds to a speciation event in the past. When reconstructing a phylogeny we compare characteristics of the taxa, such as their appearance, physiological features, or the composition of their genetic material.

3 Evolution informs about everything in biology
Big genome sequencing projects just produce data, so what? Evolutionary history relates all organisms and genes, and helps us understand and predict:
- interactions between genes (genetic networks)
- drug design
- predicting functions of genes
- influenza vaccine development
- origins and spread of disease
- origins and migrations of humans

4 Reconstructing the “Tree” of Life
Handling large datasets: millions of species

5 Purpose: to create a national infrastructure of hardware,
Cyber Infrastructure for Phylogenetic Research Purpose: to create a national infrastructure of hardware, algorithms, database technology, etc., necessary to infer the Tree of Life. Group: 40 biologists, computer scientists, and mathematicians from 13 institutions. Funding: $11.6 M (large ITR grant from NSF).

6 CIPRes Members
University of New Mexico: Bernard Moret, David Bader, Tiffani Williams
UCSD/SDSC: Fran Berman, Alex Borchers, Phil Bourne, John Huelsenbeck, Terri Liebowitz, Mark Miller
University of Connecticut: Paul O. Lewis
University of Pennsylvania: Susan Davidson, Junhyong Kim, Sampath Kannan
UT Austin: Tandy Warnow, David M. Hillis, Warren Hunt, Robert Jansen, Randy Linder, Lauren Meyers, Daniel Miranker, Usman Roshan, Luay Nakhleh
University of Arizona: David R. Maddison
University of British Columbia: Wayne Maddison
North Carolina State University: Spencer Muse
American Museum of Natural History: Ward C. Wheeler
UC Berkeley: Satish Rao, Steve Evans, Richard M. Karp, Brent Mishler, Elchanan Mossel, Eugene W. Myers, Christos M. Papadimitriou, Stuart J. Russell
SUNY Buffalo: William Piel
Florida State University: David L. Swofford, Mark Holder
Yale: Michael Donoghue, Paul Turner

7 DNA Sequence Evolution
[Figure: evolution of the ancestral DNA sequence AAGACTT over 3 million years; substitutions accumulate along each branch (intermediates include TGGACTT, AAGGCCT, TAGCCCA, TAGACTT), producing the leaf sequences AGGGCAT, TAGCCCT, AGCACTT, AGCGCTT, AGCACAA.]

8 Steps in a phylogenetic analysis
1. Gather data
2. Align sequences
3. Reconstruct the phylogeny on the multiple alignment, often obtaining a large number of trees
4. Compute a consensus (or otherwise estimate the reliable components of the evolutionary history)
5. Perform post-tree analyses.

9 Phylogeny Problem
[Figure: given the sequences AGGGCAT, TAGCCCA, TAGACTT, TGCACAA, TGCGCTT at the leaves U, V, W, X, Y, reconstruct the tree relating them.]

10 CIPRES research in algorithms
- Heuristics for NP-hard problems in phylogeny reconstruction
- Compact representation of sets of trees
- Reticulate evolution reconstruction
- Performance of phylogeny reconstruction methods under stochastic models of evolution
- Gene order phylogeny
- Genomic alignment
- Lower bounds for MP
- Distance-based reconstruction
- Gene family evolution
- High-throughput phylogenetic placement
- Multiple sequence alignment

11 CIPRES Algorithms Group
Tandy Warnow (focus leader) David Bader, Steve Evans, John Huelsenbeck, Warren Hunt, Sampath Kannan, Dick Karp, Paul Lewis, Bernard Moret, Elchanan Mossel, Luay Nakhleh, Christos Papadimitriou, Satish Rao, Usman Roshan, Jijun Tang, Li-San Wang, Tiffani Williams

12 Phylogeny Reconstruction
[Figure: from the leaf sequences AGGGCAT, TAGCCCA, TAGACTT, TGCACAA, TGCGCTT (taxa U, V, W, X, Y), infer the tree.]

13 Phylogenetic reconstruction methods
Two broad classes of methods:
- Hill-climbing heuristics for hard optimization criteria (Maximum Parsimony and Maximum Likelihood). [Figure: cost landscape over phylogenetic trees, with a global optimum and local optima.]
- Polynomial-time distance-based methods: Neighbor Joining, FastME, Weighbor, etc.

14 Performance criteria
- Running time.
- Space.
- Statistical performance (e.g., statistical consistency) with respect to a Markov model of evolution.
- "Topological accuracy" with respect to the underlying true tree; typically studied in simulation.
- Accuracy with respect to a particular criterion (e.g., tree length or likelihood score) on real data.

15 Markov models of site evolution
Simplest (Jukes-Cantor):
- The model tree is a pair (T, {e, p(e)}), where T is a rooted binary tree and p(e) is the probability of a substitution on the edge e.
- The state at the root is random.
- If a site changes on an edge, it changes with equal probability to each of the remaining states.
- The evolutionary process is Markovian.
More complex models (such as the General Markov model) are also considered, with little change to the theory. Variation between different sites is either prohibited or minimized, in order to ensure identifiability of the model.
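The per-edge substitution process can be sketched in a few lines (a toy simulation; the sequence, edge probability, and seed are arbitrary choices, not from the talk):

```python
import random

def evolve_jc(seq, p_change):
    """Evolve a DNA sequence along one edge under the Jukes-Cantor model:
    each site changes with probability p_change; when it changes, it moves
    to one of the three other states uniformly at random."""
    result = []
    for base in seq:
        if random.random() < p_change:
            result.append(random.choice([b for b in "ACGT" if b != base]))
        else:
            result.append(base)
    return "".join(result)

random.seed(0)
parent = "AAGACTT" * 100            # 700 sites
child = evolve_jc(parent, p_change=0.1)
diffs = sum(a != b for a, b in zip(parent, child))
print(diffs / len(parent))          # close to 0.1
```

Because the process is Markovian, simulating down a whole model tree is just a repeated application of this one-edge step, with an independent draw for each edge.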

16 Distance-based Phylogenetic Methods
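A concrete ingredient of any distance-based method is the corrected distance itself. Under the Jukes-Cantor model, the standard estimate from two aligned sequences is d = -(3/4)·ln(1 - (4/3)·p), where p is the observed proportion of mismatched sites. A minimal sketch (toy sequences, not from the talk):

```python
import math

def jc_distance(s1, s2):
    """Jukes-Cantor corrected distance between two aligned DNA sequences:
    d = -(3/4) * ln(1 - (4/3) * p), where p is the observed proportion
    of mismatched sites."""
    assert len(s1) == len(s2)
    p = sum(a != b for a, b in zip(s1, s2)) / len(s1)
    if p >= 0.75:                    # saturation: the correction is undefined
        return float("inf")
    return -0.75 * math.log(1 - (4.0 / 3.0) * p)

print(jc_distance("AAGACTT", "AAGACTT"))             # 0.0
print(round(jc_distance("AAGACTT", "TAGACTT"), 4))   # ≈ 0.1585
```

A method such as Neighbor Joining takes the matrix of these pairwise estimates as its input; the correction matters because observed mismatch proportions underestimate the true number of substitutions on long branches.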

17 Maximum Parsimony Input: Set S of n aligned sequences of length k
Output: A phylogenetic tree T, leaf-labeled by the sequences in S, with additional sequences of length k labeling the internal nodes of T, such that the total number of changes, ∑_{(u,v)∈E(T)} H(s_u, s_v) (the sum over the edges of T of the Hamming distances between the endpoint sequences), is minimized.

18 Maximum Likelihood Input: Set S of n aligned sequences of length k, and a specified parametric model Output: A phylogenetic tree T leaf-labeled by the sequences in S, with additional model parameters (e.g., edge "lengths"), such that Pr[S | (T, params)] is maximized.

19 Approaches for “solving” MP/ML
- Hill-climbing heuristics (which can get stuck in local optima)
- Randomized algorithms for getting out of local optima
- Approximation algorithms for MP (based upon Steiner tree approximation algorithms)
[Figure: cost landscape over phylogenetic trees, with a global optimum and local optima.]

20 Theoretical results
- Neighbor Joining is polynomial time, and statistically consistent under typical models of evolution.
- Maximum Parsimony is NP-hard, and even exact solutions are not statistically consistent under typical models.
- Maximum Likelihood is of unknown computational complexity, but statistically consistent under typical models.

21 Quantifying Error
FN: false negative (a missing edge, i.e., an edge of the true tree absent from the estimated tree).
FP: false positive (an incorrect edge, i.e., an edge of the estimated tree absent from the true tree).
[Figure: a true tree and an estimated tree illustrating a 50% error rate.]
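With each tree encoded as its set of nontrivial bipartitions, FN and FP rates reduce to set differences. A toy sketch (splits are written here as frozensets holding one side of each bipartition; in general each split should be stored in a canonical form, e.g. the side containing a fixed reference taxon):

```python
def fn_fp_rates(true_splits, est_splits):
    """False-negative and false-positive rates between two trees, each
    given as a set of nontrivial bipartitions. FN = fraction of true-tree
    edges missing from the estimate; FP = fraction of estimated edges
    absent from the true tree."""
    fn = len(true_splits - est_splits) / len(true_splits)
    fp = len(est_splits - true_splits) / len(est_splits)
    return fn, fp

# toy 5-taxon example: the true tree has splits AB|CDE and ABC|DE;
# the estimated tree recovers only the first of them.
true_splits = {frozenset("AB"), frozenset("ABC")}
est_splits = {frozenset("AB"), frozenset("ABD")}
print(fn_fp_rates(true_splits, est_splits))  # (0.5, 0.5)
```

When both trees are binary, FN and FP coincide and equal the (normalized) Robinson-Foulds distance.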

22 Neighbor joining has poor performance on large diameter trees [Nakhleh et al. ISMB 2001]
Simulation study based upon fixed edge lengths, the K2P model of evolution, and sequence lengths fixed to 1000 nucleotides. Error rates reflect the proportion of incorrect edges in inferred trees. [Plot: NJ error rate vs. number of taxa (400 to 1600); the error rate rises toward 0.8 on the largest datasets.]

23 Theoretical convergence rates
Atteson: Let T be a model tree defining an additive matrix D. Then Neighbor Joining will reconstruct the true tree with high probability from sequences of length at least O(lg n · e^{max_ij D_ij}). Proof idea: show that NJ is accurate on any input matrix d such that max_ij |D_ij - d_ij| < f/2, where f is the minimum edge length.

24 Absolute fast convergence vs. exponential convergence

25 Other standard polynomial time methods don’t improve substantially on NJ (and have the same problem with large diameter datasets). What about trying to “solve” maximum parsimony or maximum likelihood?

26 Problems with NJ Theory: The convergence rate is exponential: the number of sites needed to obtain an accurate reconstruction of the tree with high probability grows exponentially in the evolutionary diameter. Empirical: NJ has poor performance on datasets with some large leaf-to-leaf distances.

27 Solving NP-hard problems exactly is … unlikely
The number of (unrooted) binary trees on n leaves is (2n-5)!!.

#leaves    #trees
4          3
5          15
6          105
7          945
8          10,395
9          135,135
10         2,027,025
20         2.2 x 10^20
100        4.5 x 10^190
1000       2.7 x 10^2860

If each tree on 1000 taxa could be analyzed in seconds, we would find the best tree in 2890 millennia.
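The small entries of the table follow directly from the (2n-5)!! formula; a quick check:

```python
def num_unrooted_trees(n):
    """Number of unrooted binary trees on n leaves: (2n-5)!!,
    i.e. the product of the odd numbers 3, 5, ..., 2n-5."""
    count = 1
    for k in range(3, 2 * n - 4, 2):
        count *= k
    return count

for n in range(4, 11):
    print(n, num_unrooted_trees(n))     # 3, 15, 105, ..., 2,027,025
print(f"{num_unrooted_trees(20):.1e}")  # 2.2e+20
```

The double-exponential growth is exactly why exhaustive search over tree space is hopeless beyond a handful of taxa.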

28 How good an MP analysis do we need?
Our research shows that we need to get within 0.01% of optimal (or even better on large datasets) to return reasonable estimates of the true tree's "topology".

29 Problems with current techniques for MP
[Chart: average MP scores above optimal for the best methods at 24 hours, across 10 datasets.] The best current techniques fail to reach 0.01% of optimal within 24 hours on large datasets.

30 Problems with current techniques for MP
Shown here is the performance of a heuristic maximum parsimony analysis on a real dataset of almost 14,000 sequences. ("Optimal" here means the best score found to date, using any method for any amount of time.) Acceptable error is below 0.01%. [Chart: performance of TNT over time.]

31 Empirical problems with existing methods
Heuristics for Maximum Parsimony (MP) and Maximum Likelihood (ML) cannot handle large datasets (they take too long!); we need new MP/ML heuristics that can analyze large datasets. Polynomial-time methods have poor topological accuracy on large-diameter datasets; we need better polynomial-time methods.

32 Using divide-and-conquer
Conjecture: better (more accurate) solutions will be found if we analyze a small number of smaller subsets and then combine solutions Note: different “base” methods will need potentially different decompositions. Alert: the subtree compatibility problem is NP-complete!

35 Strict Consensus Merger (SCM)

36 DCMs: Divide-and-conquer for improving phylogeny reconstruction

37 “Boosting” phylogeny reconstruction methods
DCMs “boost” the performance of phylogeny reconstruction methods. DCM Base method M DCM-M

38 DCMs (Disk-Covering Methods)
DCMs for polynomial-time methods improve topological accuracy (an empirical observation), and have provable theoretical guarantees under Markov models of evolution. DCMs for hard optimization problems reduce the running time needed to achieve good levels of accuracy (an empirical observation).

39 DCM1-boosting distance-based methods [Nakhleh et al. ISMB 2001]
DCM1-boosting makes distance-based methods more accurate. There are theoretical guarantees that DCM1-NJ converges to the true tree from polynomial-length sequences. [Plot: error rate vs. number of taxa (400 to 1600); DCM1-NJ stays well below NJ.]

40 DCM-Boosting [Warnow et al. 2001]
DCM+SQS is a two-phase procedure which reduces the sequence length requirement of methods. [Diagram: exponentially converging method → DCM → SQS → absolute fast converging method.] (Will present this on Monday in the Applied Math colloquium.)

41 DCM1 Decompositions
Input: Set S of sequences, distance matrix d, threshold value.
1. Compute the threshold graph.
2. Perform a minimum-weight triangulation.
DCM1 decomposition: compute the maximal cliques.
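The first step, and the final clique computation, can be sketched concretely (a toy distance matrix, not from the talk; the minimum-weight triangulation step is skipped because this example threshold graph is already chordal):

```python
from itertools import combinations

def threshold_graph(d, q):
    """Edges (i, j) with d[i][j] <= q: the first step of a DCM1 decomposition."""
    n = len(d)
    return {(i, j) for i, j in combinations(range(n), 2) if d[i][j] <= q}

def maximal_cliques(n, edges):
    """Bron-Kerbosch enumeration of maximal cliques (fine at toy scale).
    In DCM1, the maximal cliques of the triangulated threshold graph
    become the overlapping subproblems."""
    adj = {v: set() for v in range(n)}
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    out = []
    def bk(r, p, x):
        if not p and not x:
            out.append(sorted(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    bk(set(), set(range(n)), set())
    return out

# toy 4-taxon distance matrix; threshold 3 links {0,1,2} and {2,3}
d = [[0, 2, 3, 6],
     [2, 0, 3, 6],
     [3, 3, 0, 2],
     [6, 6, 2, 0]]
print(maximal_cliques(4, threshold_graph(d, 3)))  # [[0, 1, 2], [2, 3]]
```

The two cliques overlap in taxon 2, which is what later allows the subtrees to be merged back into one tree.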

42 Major challenge: MP and ML
Maximum Parsimony (MP) and Maximum Likelihood (ML) remain the methods of choice for most systematists. The main challenge here is to make it possible to obtain good solutions to MP or ML in reasonable time on large datasets.

43 Maximum Parsimony Input: Set S of n aligned sequences of length k
Output: A phylogenetic tree T, leaf-labeled by the sequences in S, with additional sequences of length k labeling the internal nodes of T, such that the total number of changes, ∑_{(u,v)∈E(T)} H(s_u, s_v) (the sum over the edges of T of the Hamming distances between the endpoint sequences), is minimized.

44 Maximum parsimony (example)
Input: Four sequences: ACT, ACA, GTT, GTA. Question: which of the three possible trees has the best MP score?

45 Maximum Parsimony
[Figure: the three possible unrooted trees on the four sequences ACT, ACA, GTT, GTA.]

46 Maximum Parsimony
[Figure: the three candidate trees with internal nodes labeled and the number of changes marked on each edge; their MP scores are 7, 5, and 4.]
The tree grouping ACT and ACA against GTT and GTA has MP score 4 and is the optimal MP tree.
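These scores can be checked with Fitch's algorithm, which computes the minimum number of changes on a fixed topology. A small sketch (trees written as nested tuples of leaf names; here the sequences double as their own names):

```python
def fitch_score(tree, seqs):
    """Minimum number of substitutions required on a fixed tree topology
    (Fitch's algorithm); `tree` is a nested tuple of leaf names.
    The choice of rooting does not change the parsimony score."""
    total = 0
    for site in range(len(next(iter(seqs.values())))):
        changes = 0
        def states(node):
            nonlocal changes
            if isinstance(node, str):              # leaf: its observed state
                return {seqs[node][site]}
            left, right = states(node[0]), states(node[1])
            if left & right:                       # agreement: no change needed
                return left & right
            changes += 1                           # disagreement forces a change
            return left | right
        states(tree)
        total += changes
    return total

seqs = {"ACT": "ACT", "ACA": "ACA", "GTT": "GTT", "GTA": "GTA"}
print(fitch_score((("ACT", "ACA"), ("GTT", "GTA")), seqs))  # 4
print(fitch_score((("ACT", "GTT"), ("ACA", "GTA")), seqs))  # 5
```

The grouping of ACT with ACA against GTT with GTA attains the minimum score of 4, matching the optimal MP tree on the slide.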

47 Maximum Parsimony: computational complexity
[Figure: the optimal tree on ACT, ACA, GTT, GTA, with MP score 4.]
Finding the optimal MP tree is NP-hard. For a fixed tree, an optimal labeling of the internal nodes can be computed in linear time, O(nk).

48 Problems with current techniques for MP
The best methods are a combination of simulated annealing, divide-and-conquer, and genetic algorithms, as implemented in the software package TNT. However, they do not reach 0.01% of optimal on large datasets within 24 hours. [Chart: performance of TNT over time.]

49 Observations The best MP heuristics cannot get acceptably good solutions within 24 hours on most of these large datasets. Datasets of these sizes may need months (or years) of further analysis to reach reasonable solutions. Apparent convergence can be misleading.

50 How can we improve upon existing techniques?

51 Our objective: speed up the best MP heuristics
[Illustrative plot (a "fake study"): MP score of the best trees vs. time, contrasting the performance of a hill-climbing heuristic with the desired performance.]

52 Divide-and-conquer technique for speeding up MP/ML searches

53 DCM Decompositions
Input: Set S of sequences, distance matrix d, threshold value.
1. Compute the threshold graph.
2. Perform a minimum-weight triangulation.
DCM1 decomposition: the maximal cliques.
DCM2 decomposition: a clique separator plus the components.

54 But: it didn't work! A simple divide-and-conquer was insufficient for the best-performing MP heuristics: TNT by itself was as good as DCM(TNT).

55 Empirical observation
- DCM1 was not as good as DCM2 for MP.
- DCM2 decompositions were too large and too slow to compute.
- Neither improved the best MP heuristics.

56 How can we improve upon existing techniques?

57 Tree Bisection and Reconnection (TBR)

58 Tree Bisection and Reconnection (TBR)
Delete an edge

59 Tree Bisection and Reconnection (TBR)

60 Tree Bisection and Reconnection (TBR)
Reconnect the two trees with a new edge that subdivides an edge in each tree

61 A conjecture as to why current techniques are poor:
Our studies suggest that trees with near-optimal scores tend to be topologically close (RF distance less than 15%) to the other near-optimal trees. The standard technique (TBR) for moving around tree space explores O(n^3) trees, most of which are topologically distant. So TBR may be useful initially (to reach near-optimality), but after that more "localized" searches are more productive.

62 Using DCMs differently
Observation: DCMs make small local changes to the tree New algorithmic strategy: use DCMs iteratively and/or recursively to improve heuristics on large datasets However, the initial DCMs for MP produced large subproblems and took too long to compute We needed a decomposition strategy that produces small subproblems quickly.

65 New DCM3 decomposition Input: Set S of sequences, and guide-tree T
1. Compute the short subtree graph G(S,T), based upon T.
2. Find a clique separator in the graph G(S,T) and form subproblems.
DCM3 decompositions (1) can be obtained in O(n) time, (2) yield small subproblems, and (3) can be used iteratively.

66 Iterative-DCM3
[Diagram: tree T → DCM3 → base method → new tree T′.]

67 New DCMs DCM3 Recursive-DCM3 Iterative DCM3 Recursive-Iterative DCM3
New DCMs: DCM3, Recursive-DCM3, Iterative-DCM3, Recursive-Iterative-DCM3.
DCM3:
1. Compute subproblems using the DCM3 decomposition.
2. Apply the base method to each subproblem to yield subtrees.
3. Merge the subtrees using the Strict Consensus Merger technique.
4. Randomly refine to make the tree binary.
Iterative-DCM3:
1. Compute a DCM3 tree.
2. Perform a local search and go to step 1.
Recursive-DCM3 and Recursive-Iterative-DCM3 apply the decomposition recursively within subproblems.
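The Iterative-DCM3 loop above can be summarized as a skeleton (every function argument here is a hypothetical placeholder, not the talk's actual code: stand-ins for the DCM3 decomposition, the base method, the Strict Consensus Merger with random refinement, and a local search such as TBR):

```python
def iterative_dcm3(seqs, base_method, decompose, merge_and_refine,
                   local_search, rounds=3):
    """Sketch of the Iterative-DCM3 control flow only; all four function
    arguments are placeholders for the real components. Each round
    decomposes with the current guide tree, solves the subproblems,
    merges the subtrees, and then improves the result by local search."""
    guide_tree = base_method(seqs)                   # initial guide tree
    for _ in range(rounds):
        subproblems = decompose(seqs, guide_tree)    # DCM3 uses the guide tree
        subtrees = [base_method(sub) for sub in subproblems]
        guide_tree = merge_and_refine(subtrees)      # SCM + random refinement
        guide_tree = local_search(guide_tree, seqs)  # e.g. TBR
    return guide_tree
```

The key design point is that the merged tree of one round becomes the guide tree of the next, so the decomposition keeps adapting as the tree improves.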

68 Datasets Obtained from various researchers and online databases
- 1322 lsu rRNA of all organisms
- 2000 Eukaryotic rRNA
- 2594 rbcL DNA
- 4583 Actinobacteria 16s rRNA
- 6590 ssu rRNA of all Eukaryotes
- 7180 three-domain rRNA
- 7322 Firmicutes bacteria 16s rRNA
- 8506 three-domain+2org rRNA
- 11361 ssu rRNA of all Bacteria
- 13921 Proteobacteria 16s rRNA

69 Comparison of DCM decompositions (Maximum subset size)
DCM2 subproblems are almost as large as the full dataset on datasets 1 through 4. On datasets 5 through 10, DCM2 was too slow to compute a decomposition within 24 hours.

70 Comparison of DCMs (4583 sequences)
The base method is the TNT-ratchet. The DCM2 tree takes almost 10 hours to produce, and DCM2 is too slow to run on larger datasets. Rec-I-DCM3 is the best method at all times.

71 Comparison of DCMs (13,921 sequences)
The base method is the TNT-ratchet. Note the improvement in the DCMs as we move from the default to recursion, to iteration, to recursion+iteration. On very large datasets Rec-I-DCM3 gives significant improvements over unboosted TNT.

72 Rec-I-DCM3 significantly improves performance
[Plot: comparison of TNT (the current best technique) to Rec-I-DCM3(TNT) (the DCM-boosted version) on one large dataset.]

73 Rec-I-DCM3(TNT) vs. TNT (Comparison of scores at 24 hours)
The base method is the default TNT technique, the current best method for MP. Rec-I-DCM3 significantly improves upon unboosted TNT, returning trees which are at most 0.01% above optimal on most datasets.

74 Observations Rec-I-DCM3 improves upon the best performing heuristics for MP. The improvement increases with the difficulty of the dataset.

75 Other DCMs DCM for NJ and other distance methods produces absolute fast converging (afc) methods, which improve upon NJ in simulation studies DCMs have been used to scale GRAPPA (software for whole genome phylogenetic analysis) from its maximum of about genomes to 1000 genomes. Current projects: DCM development for maximum likelihood and multiple sequence alignment.

76 Questions Tree shape (including branch lengths) has an impact on phylogeny reconstruction, but what model of tree shape should be used? What is the sequence length requirement for Maximum Likelihood? (The known result, by Székely and Steel, is worse than that for Neighbor Joining.) Why is MP not so bad?

77 General comments There is interesting computer science research to be done in computational phylogenetics, with a tremendous potential for impact. Algorithm development must be tested on both real and simulated data. The interplay between data, stochastic models of evolution, optimization problems, and algorithms, is important and instructive.

78 Acknowledgements NSF The David and Lucile Packard Foundation
The Program in Evolutionary Dynamics at Harvard The Institute for Cellular and Molecular Biology at UT-Austin Collaborators: Usman Roshan, Bernard Moret, and Tiffani Williams

79 Reconstructing the “Tree” of Life
Handling large datasets: millions of species The “Tree of Life” is not really a tree: reticulate evolution

