Bayesian Estimators of Time to Most Recent Common Ancestry

Bayesian Estimators of Time to Most Recent Common Ancestry. Bruce Walsh, Ecology and Evolutionary Biology. Adjunct appointments: Molecular and Cellular Biology, Plant Sciences, Epidemiology & Biostatistics, Animal Sciences.

Problem: Given some genetic marker information for a pair of individuals, what can we say about how recently they are related? Application in genealogy: how closely are (say) Bill Walsh (former football coach) and I related? Basic concept: the more closely related two individuals are, the more genetic markers they should share. For forensic applications, 11 highly polymorphic autosomal marker loci are sufficient for individualization. We measure relatedness by TMRCA, the time to the most recent common ancestor (MRCA).

Often we are very interested in TMRCAs that are modest (5-10 generations) or large (100s to 10,000s of generations). Unlinked autosomal markers simply don't work over these time scales. Reason: with autosomes, unlinked markers assort independently each generation, leaving only a small amount of IBD information on each marker, which we must then multiply together. IBD information decays on the order of 1/2 each generation. To extract the signal for 5-10 generations would require a very large number of autosomal markers (on the order of 100s).

Y Chromosomes. How then can we accurately estimate TMRCA over a modest to large number of generations? Answer: use a set of completely linked markers, such as those on the nonrecombining arm of the Y chromosome. With completely linked marker loci, information on IBD does not assort away via recombination. IBD information decays on the order of the mutation rate, roughly 1/500 per generation.

Infinite Alleles Model. The first step in estimating TMRCA requires us to assume a particular mutational model. Our (initial) assumption will be the infinite alleles model (IAM). The key assumption of this model (originally due to Kimura and Crow, 1964) is that each new mutation gives rise to a new allele.

Key: Under the infinite alleles model, two alleles that are identical in state and also IBD have not experienced any mutations since their MRCA. Let q(t) = probability that two alleles with a MRCA t generations ago are identical in state. If u = per-generation mutation rate, then, since Pr(no mutation from the MRCA to A) = Pr(no mutation from the MRCA to B) = (1-u)^t along each of the two lineages,

q(t) = (1-u)^{2t}

Building the Likelihood Function for n Loci. For any single marker locus, the probability of IBS given a TMRCA of t generations is q(t) = (1-u)^{2t} ≈ e^{-2ut} = e^{-τ}, where τ = 2ut. The probability that k of n marker loci are IBS is just a binomial distribution with success parameter q(t):

Pr(k | t, n) = C(n, k) q(t)^k [1 - q(t)]^{n-k}

which serves as the likelihood function for t given k of n matches.
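As a concrete sketch of these two formulas (Python; the parameter values in the example call are illustrative, not from the slides):

```python
from math import comb

def q_ibs(t, u):
    """Pr(two alleles are identical in state | TMRCA = t) under the IAM."""
    return (1 - u) ** (2 * t)

def likelihood(t, k, n, u):
    """Binomial likelihood of k IBS matches at n completely linked marker loci."""
    q = q_ibs(t, u)
    return comb(n, k) * q**k * (1 - q) ** (n - k)

# Example: 8 of 10 markers match, microsatellite-like rate u = 0.002
print(likelihood(100, k=8, n=10, u=0.002))
```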

ML Analysis of TMRCA. It would seem that we now have all the pieces in hand for a likelihood analysis of TMRCA given the marker data (k of n matches). The likelihood function (with τ = 2ut) is

L(t | k, n) = C(n, k) e^{-kτ} (1 - e^{-τ})^{n-k}

The MLE for t is the solution of ∂L/∂t = 0, which satisfies q(t̂) = p̂, where p̂ = k/n is the observed fraction of matches.

In particular, the MLE for t becomes

t̂ = ln(p̂) / [2 ln(1-u)] ≈ -ln(p̂) / (2u)

Likewise, the precision of this estimator follows from the (negative) inverse of the second derivative of the log-likelihood function evaluated at the MLE:

Var(t̂) = (1 - p̂) / (n p̂ [2 ln(1-u)]²) ≈ (1 - p̂) / (4 n u² p̂)
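A minimal sketch of the point estimate and its large-sample variance, assuming the reconstructed formulas above:

```python
from math import log

def mle_tmrca(k, n, u):
    """MLE of TMRCA: q(t_hat) = k/n gives t_hat = ln(k/n) / (2 ln(1-u))."""
    p_hat = k / n
    return log(p_hat) / (2 * log(1 - u))

def var_mle(k, n, u):
    """Large-sample variance from the inverse Fisher information at the MLE."""
    p_hat = k / n
    return (1 - p_hat) / (n * p_hat * (2 * log(1 - u)) ** 2)

t_hat = mle_tmrca(8, 10, 0.002)   # ~55.7 generations for 8 of 10 matches
```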

Trouble in Paradise. The ML machinery seems to have done its job, giving us an estimate, its approximate sampling error, approximate confidence intervals, and a scheme for hypothesis testing. Hence, all seems well. Problem: look at k = n (a complete match at all markers). Then MLE(TMRCA) = 0 (independent of n) and Var(MLE) = 0 (ouch!). The one-LOD support interval runs from t = 0 to t = (1/2n)[-ln(10)/ln(1-u)], i.e. (0, 575/n) for u = 0.002.

With k = n, the likelihood function reduces to L(t) = (1-u)^{2tn} ≈ e^{-2tun}, so MLE(t) = 0 for all values of n.

[Figure: plots of L(t) for u = 0.002. The one-LOD point, where the likelihood falls to 1/10 of its maximum value of 1, sits at t ≈ 115 for n = 5, t ≈ 58 for n = 10, and t ≈ 29 for n = 20.]
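The one-LOD bound for the k = n case is easy to check numerically; this sketch reproduces the ≈ 575/n rule and the figure's values:

```python
from math import log

def one_lod_bound(n, u):
    """Solve (1-u)^(2tn) = 0.1: the t at which L(t) drops to 1/10 of its maximum."""
    return -log(10) / (2 * n * log(1 - u))

for n in (5, 10, 20):
    print(n, round(one_lod_bound(n, u=0.002)))  # ~115, ~58, ~29
```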

The problem(s) with ML. The expressions developed for the sampling variance, approximate confidence intervals, and hypothesis testing are all large-sample approximations. Problem 1: here our sample size is the number of markers scored in the two individuals, which is not likely to be large (typically 10-30). Problem 2: these expressions are obtained by taking appropriate limits of the likelihood function. If the MLE is exactly at the boundary of the admissible space on the likelihood surface, this limit may not formally exist, and hence the above approximations are incorrect.

The Solution? Go Bayesian. Instead of computing a point estimate (e.g., the MLE), the goal is to estimate the entire distribution of the unknown parameter θ given the data x:

p(θ | x) = C · ℓ(x | θ) p(θ)

where p(θ | x) is the posterior distribution of θ given x, ℓ(x | θ) is the likelihood function for θ given the data x, p(θ) is the prior distribution for θ, and C is the constant that makes the posterior integrate to one. Why Bayesian? It is exact for any sample size, it yields marginal posteriors, and it makes efficient use of any prior information.

The Prior on TMRCA. The first step in any Bayesian analysis is the choice of an appropriate prior distribution p(t): our beliefs about the distribution of TMRCA in the absence of any of the marker data. The standard approach is to use a flat or uninformative prior, with p(t) = a constant over the admissible range of the parameter. This can cause problems if the resulting posterior is unbounded (integrates to infinity). In our case, population-genetic theory provides the prior: under very general settings, the time to MRCA for a pair of individuals follows a geometric distribution.

In particular, for a haploid gene, TMRCA follows a geometric distribution with success parameter 1/N_e. Hence, our prior is just

p(t) = λ(1-λ)^t ≈ λ e^{-λt}, where λ = 1/N_e

so we can use an exponential prior with hyperparameter (the parameter fully characterizing the distribution) λ = 1/N_e. The posterior thus becomes

p(t | k) ∝ e^{-2ukt} (1 - e^{-2ut})^{n-k} · e^{-λt}

the previous likelihood function (ignoring constants that cancel when we compute the normalizing factor C) times the prior, with prior hyperparameter λ = 1/N_e.

The Normalizing Constant. The constant C = 1/I, where

I = ∫₀^∞ e^{-2ukt} (1 - e^{-2ut})^{n-k} e^{-λt} dt

ensures that the posterior distribution integrates to one, and hence is formally a probability distribution.
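In practice the normalization can also be done numerically; a minimal sketch (NumPy; the k, n, and N_e values are illustrative assumptions) sums the unnormalized posterior over a discrete grid of generations:

```python
import numpy as np

def posterior_iam(t, k, n, u, lam):
    """Unnormalized IAM posterior: likelihood (binomial constant dropped) x exponential prior."""
    return np.exp(-2 * u * k * t) * (1 - np.exp(-2 * u * t)) ** (n - k) * np.exp(-lam * t)

t = np.arange(1, 20001)                        # grid of generations
post = posterior_iam(t, k=8, n=10, u=0.002, lam=1 / 5000)
post /= post.sum()                             # discrete analogue of the 1/I factor
print((t * post).sum())                        # posterior mean TMRCA
```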

What is the effect of the hyperparameter? If 2uk >> λ, then there is essentially no dependence on the actual value of λ chosen. Hence, if 2 N_e u k >> 1, there is essentially no dependence on the (hyperparameter) assumptions of the prior. For a typical microsatellite mutation rate of u = 0.002, this is just N_e k >> 250, which is a very weak assumption. For example, with k = 10 matches, we require only N_e >> 25. Even with only one match (k = 1), we just require N_e >> 250.

Closed-form Solutions for the Posterior Distribution. A complete analytic solution for the posterior can be obtained by using a series expansion of the (1 - e^{-2ut})^{n-k} term:

(1 - e^{-2ut})^{n-k} = Σ_{i=0}^{n-k} C(n-k, i) (-1)^i e^{-2uit}

Each term of the resulting posterior is then of the form a · e^{-bt}, which is easily integrated.

With the assumption of a flat prior, λ = 0, the normalizing integral reduces to

I = Σ_{i=0}^{n-k} C(n-k, i) (-1)^i / [2u(k+i)]

Hence, the complete analytic solution of the posterior is

p(t | k) = Σ_{i=0}^{n-k} C(n-k, i) (-1)^i e^{-[2u(k+i)+λ]t} / Σ_{i=0}^{n-k} C(n-k, i) (-1)^i [2u(k+i)+λ]^{-1}

Suppose k = n (no mismatches). In this case, the posterior is simply an exponential distribution with rate 2un + λ:

p(t | k = n) = (2un + λ) e^{-(2un+λ)t}
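A sketch of this closed form via the binomial series (it assumes k ≥ 1 or λ > 0, so every exponent 2u(k+i) + λ is positive; for k = n it collapses to the exponential posterior above):

```python
from math import comb, exp

def posterior_series(t, k, n, u, lam=0.0):
    """Analytic posterior density from the series expansion of (1 - e^{-2ut})^{n-k}."""
    m = n - k
    num = sum(comb(m, i) * (-1) ** i * exp(-(2 * u * (k + i) + lam) * t)
              for i in range(m + 1))
    norm = sum(comb(m, i) * (-1) ** i / (2 * u * (k + i) + lam)
               for i in range(m + 1))
    return num / norm
```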

Analysis of the n = k case. The mean TMRCA and its variance are

E[t | k = n] = 1/(2un + λ), Var(t | k = n) = 1/(2un + λ)²

The cumulative probability is Pr(t ≤ T) = 1 - e^{-(2un+λ)T}. In particular, the time T_α satisfying P(t < T_α) = α is

T_α = -ln(1 - α)/(2un + λ)
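As a quick check of the closed form (this reproduces the ≈ 749/n figure quoted on the next slide):

```python
from math import log

def t_alpha(alpha, n, u, lam=0.0):
    """Time T such that P(TMRCA < T) = alpha when all n markers match (k = n)."""
    return -log(1 - alpha) / (2 * u * n + lam)

print(t_alpha(0.95, n=10, u=0.002))  # ~74.9 generations, i.e. ~749/n
```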

For a flat prior (λ = 0), the 95% (one-sided) confidence interval is thus bounded by -ln(0.05)/(2nu) ≈ 1.50/(nu). Hence, under a Bayesian analysis with u = 0.002, the 95% upper bound is ≈ 749/n. Recall that the one-LOD support interval (approximate 95% CI) under an ML analysis is ≈ 575/n. The ML solution's asymptotic approximation significantly underestimates the true interval relative to the exact analysis under a full Bayesian model.

[Figure: sample posterior distributions p(t | k) plotted against time t to MRCA, with panels for n = 10, n = 20, and n = 100 markers.]

Key points. By using an appropriate number of markers we can get accurate estimates of TMRCA even over just a few generations; a modest number of markers will do. By using markers on a non-recombining chromosomal section, we can estimate TMRCA over much, much longer time scales than with unlinked autosomal markers. Hence, we have a fairly large window of resolution for TMRCA when using a modest number of completely linked markers.

Stepwise Mutation Model (SMM). The infinite alleles model (IAM) is not especially realistic for microsatellite data unless the fraction of matches is very high. Microsatellite allelic variants are scored by their number of repeat units, so two "matching" alleles can actually hide multiple mutations (and hence more time to the MRCA). [Figure: two lineages from a common ancestor, each carrying one mutation (Mutation 1, Mutation 2) whose repeat-number changes cancel. Under the IAM the pair would be scored as a match, implying no mutations, when in reality there are two.]

Formally, the SMM assumes the following transition probabilities: each mutation shifts an allele by exactly one repeat unit, up or down with equal probability, so an allele with i repeats moves to i+1 or i-1 with probability u/2 each, and remains at i with probability 1-u. Note that two alleles can match only if they have experienced an even number of mutations in total between them. In such cases, given 2j total mutations (each an independent ±1 step), the match probability is that of a symmetric random walk returning to its origin:

Pr(match | 2j mutations) = C(2j, j) (1/2)^{2j}

Averaging over the number of mutations separating the two alleles (Poisson with mean 2ut across both lineages) gives

Pr(match | t) = Σ_{j≥0} e^{-2ut} [(2ut)^{2j}/(2j)!] · C(2j, j) (1/2)^{2j} = e^{-2ut} I_0(2ut)

where I_0 is the zero-order modified Type I Bessel function. Hence, q(t) = e^{-2ut} I_0(2ut).
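This match probability is easy to evaluate with SciPy's modified Bessel function of the first kind (scipy.special.iv(0, z) computes I_0(z)); a minimal sketch:

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind, I_v(z)

def q_match_smm(t, u):
    """Pr(two alleles identical in state | TMRCA = t) under the SMM: e^{-2ut} I_0(2ut)."""
    return np.exp(-2 * u * t) * iv(0, 2 * u * t)

# Matches decay more slowly under the SMM than under the IAM's e^{-2ut}:
print(q_match_smm(500, u=0.002))   # ~0.31 versus e^{-2} ~ 0.14 under the IAM
```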

Under the SMM, the prior hyperparameter λ can now become important. This is the case when the number n of markers is small and/or k/n is not very close to one. Why? Under the prior, TMRCA is forced toward a geometric distribution with parameter 1/N_e. Under the IAM, for most values this is still much more time than the likelihood function predicts from the marker data, so the prior has little effect. Under the SMM, the likelihood alone predicts a much longer TMRCA, so the forcing effect of the geometric prior can come into play.

[Figure: Pr(TMRCA < t) versus time t for n = 5, k = 3, u = 0.02, comparing the IAM (flat prior and N_e = 5000 give nearly the same curve), SMM0 with N_e = 5000, SMM0 with a flat prior, and the prior alone with N_e = 5000.]

An Exact Treatment: SMME. With a little work we can show that the probability two sites differ by j steps is just

Pr(sites differ by j steps | t) = e^{-2ut} I_j(2ut)

where I_j is the jth-order modified Type I Bessel function. The resulting likelihood thus becomes

L(t) = Π_j [e^{-2ut} I_j(2ut)]^{n_j}

where n_j is the number of sites that differ by j (observed) steps.

With this likelihood, the resulting posterior becomes

p(t | data) ∝ e^{-(2un + λ)t} Π_j I_j(2ut)^{n_j}

This rearranges to give the general posterior under the exact SMM model (SMME), with the I_0 factors accounting for the exact matches (n_0 of them) and the I_j factors (j ≥ 1) for the sites differing by j steps.
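A minimal sketch of this unnormalized SMME posterior, under the reconstruction above (counts maps each step-difference j to the number of loci n_j showing it):

```python
import numpy as np
from scipy.special import iv  # modified Bessel function of the first kind

def posterior_smme(t, counts, u, lam):
    """Unnormalized SMME posterior: e^{-(2un + lam) t} x prod_j I_j(2ut)^{n_j}."""
    n = sum(counts.values())
    dens = np.exp(-(2 * u * n + lam) * t)
    for j, nj in counts.items():
        dens = dens * iv(j, 2 * u * t) ** nj
    return dens
```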

Example. Consider comparing haplotypes 1 and 3 from Thomas et al.'s (2000) study of the Lemba and Cohen Y-chromosome modal haplotypes. Here six markers were used: four match exactly, one differs by one repeat, and the other by two repeats. Hence, n = 6, k = 4 for the IAM and SMM0 models, and n_0 = 4, n_1 = 1, n_2 = 1 (n = 6) under the SMME model. Assume Hammer's value of N_e = 5000 for the prior.
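Continuing the SMME sketch above for this example (the mutation rate u = 0.002 is carried over from the earlier slides and is an assumption here):

```python
import numpy as np
# posterior_smme and iv are defined in the sketch on the previous slide

t = np.arange(1, 30001)                      # grid of generations
counts = {0: 4, 1: 1, 2: 1}                  # n0 = 4, n1 = 1, n2 = 1 (n = 6)
post = posterior_smme(t, counts, u=0.002, lam=1 / 5000)  # N_e = 5000 prior
post /= post.sum()                           # numeric normalization
print((t * post).sum())                      # posterior mean TMRCA
```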

[Figure: posterior P(t | markers) versus time to MRCA t under the IAM, SMM0, and SMME models.]

[Table: TMRCA for the Lemba and Cohen Y comparison; columns: Model used, Mean, Median, 2.5%, 97.5%; rows: IAM, SMM0, SMME.]

[Figure: cumulative probability Pr(TMRCA < t) versus time to MRCA t under the IAM, SMM0, and SMME models.]