Lecture 4: Likelihoods and Inference

Likelihood function for censored data

Likelihood Function
Start simple. Assumptions: all times are observed (i.e., NO censoring), the sample size is N, and each observation has a common pdf. What does the likelihood look like?
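A minimal sketch of the likelihood this slide presumably displayed, writing the sample size (the slide's N) as n and assuming a common pdf f(x; θ) (f and θ are my notation, not taken from the slide):

```latex
L(\theta) = \prod_{i=1}^{n} f(x_i;\theta),
\qquad
\ell(\theta) = \log L(\theta) = \sum_{i=1}^{n} \log f(x_i;\theta).
```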

Exponential…
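Presumably this slide worked the uncensored exponential case. A sketch assuming X_i ~ Exp(λ) with density f(x) = λ e^{-λx}:

```latex
L(\lambda) = \prod_{i=1}^{n} \lambda e^{-\lambda x_i} = \lambda^{n} e^{-\lambda \sum_i x_i},
\qquad
\ell(\lambda) = n \log\lambda \;-\; \lambda \sum_{i=1}^{n} x_i .
```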

That was Easy… So how do we handle censoring? What do we know if the actual time is not observed?
Right-censored data: some patients have observed event times and some have censored times. For a censored patient we only know that they haven't failed by time t, so we include that partial information in the likelihood.

First Some Notation…
Exact lifetimes, right-censored, left-censored, and interval-censored observations each enter the likelihood through a different contribution:
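A sketch of the standard contributions, with f the density and S(t) = Pr(X > t) the survival function (notation assumed here):

```latex
\text{exact lifetime } x:\;\; f(x), \qquad
\text{right-censored at } C_r:\;\; S(C_r), \qquad
\text{left-censored at } C_l:\;\; 1 - S(C_l), \qquad
\text{interval-censored in } (L, R]:\;\; S(L) - S(R).
```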

Likelihood for Right-Censored Data
From our previous slide: an exact lifetime contributes its density, while a right-censored time contributes its survival probability. The likelihood multiplies these contributions.
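Writing the observed time as t_i and the censoring indicator as δ_i (1 for an exact lifetime, 0 for a right-censored time), the usual form is:

```latex
L(\theta) = \prod_{i=1}^{n} f(t_i;\theta)^{\delta_i}\, S(t_i;\theta)^{1-\delta_i}.
```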

Other Censoring
Generalized form of the likelihood. What about truncation? Left and right truncation are handled by conditioning each subject's contribution on the event being observable.
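A sketch of the generalized likelihood, with D, R, L, and I denoting the index sets of exact, right-censored, left-censored, and interval-censored observations (the set names are my assumptions). Left truncation at Y_L divides a contribution by S(Y_L), and right truncation at Y_R divides by 1 - S(Y_R):

```latex
L(\theta) \;\propto\;
\prod_{i \in D} f(t_i)
\prod_{i \in R} S(C_{r,i})
\prod_{i \in L} \bigl[1 - S(C_{l,i})\bigr]
\prod_{i \in I} \bigl[S(L_i) - S(R_i)\bigr].
```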

Left-Truncated Right Censored Data
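For left truncation at an entry time a_i (a_i is assumed notation) combined with right censoring, each subject's contribution is conditioned on having survived to a_i:

```latex
L(\theta) = \prod_{i=1}^{n}
\left[\frac{f(t_i)}{S(a_i)}\right]^{\delta_i}
\left[\frac{S(t_i)}{S(a_i)}\right]^{1-\delta_i}.
```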

Type I Right-Censoring
Up to this point we have been working with event and censoring times X and Cr. However, when we sample from a population we observe either the event or the censoring time. What we actually observe is a random variable T and a censoring indicator, d, yielding the r.v. pair {T, d}. Thus within a dataset we have two possibilities…
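In symbols, writing the slides' indicator d as δ:

```latex
T = \min(X, C_r), \qquad \delta = I(X \le C_r).
```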

Type I Right Censoring Scenario 1: d = 0

Type I Right Censoring Scenario 2: d = 1
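Under Type I censoring the censoring time Cr is fixed in advance, so the two scenarios contribute (a sketch):

```latex
\delta = 0 \;(T = C_r):\quad \Pr(X > C_r) = S(C_r),
\qquad\qquad
\delta = 1 \;(T = X \le C_r):\quad f(t).
```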

Back to our Exponential Example With right-censoring
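For the right-censored exponential example, writing r = Σ δ_i for the number of observed events (r is my shorthand), the likelihood collapses to:

```latex
L(\lambda) = \prod_{i=1}^{n} \bigl(\lambda e^{-\lambda t_i}\bigr)^{\delta_i}
\bigl(e^{-\lambda t_i}\bigr)^{1-\delta_i}
= \lambda^{r}\, e^{-\lambda \sum_i t_i},
\qquad r = \sum_{i=1}^{n} \delta_i .
```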

What if X and Cr are random variables…
Assume we have a random censoring process, so now each person has a lifetime X and a censoring time Cr that are both random variables. How does this affect the likelihood? We still observe the r.v. pair {T, d}, and again we have two possible scenarios: we observe the subject's censoring time, or we observe the subject's event time.

X and Cr are random Scenario 1: d = 0

X and Cr are random Scenario 2: d = 1
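With independent random censoring, letting g and G denote the density and survival function of Cr (assumed notation), the two scenarios contribute:

```latex
\delta = 0:\;\; S_X(t)\, g(t),
\qquad\qquad
\delta = 1:\;\; f_X(t)\, G(t),
\qquad
G(t) = \Pr(C_r > t).
```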

X and Cr are random Likelihood:
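Multiplying those contributions over subjects gives the likelihood below. If censoring is noninformative, the g and G factors do not involve θ, so they can be dropped and we are back to the familiar form:

```latex
L(\theta) = \prod_{i=1}^{n}
\bigl[f(t_i)\, G(t_i)\bigr]^{\delta_i}
\bigl[S(t_i)\, g(t_i)\bigr]^{1-\delta_i}
\;\propto\;
\prod_{i=1}^{n} f(t_i;\theta)^{\delta_i}\, S(t_i;\theta)^{1-\delta_i}.
```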

What If X and Cr are Not Independent
The preceding likelihoods are no longer valid. Instead, assume there is some joint survival distribution S(X, Cr) that describes these event times. The resulting likelihood can give results that are very different from the independence-based likelihood.
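A hedged sketch of the dependent-censoring case: if S(x, c) denotes the joint survival function of (X, Cr), the contributions are the negative partial derivatives of S evaluated on the diagonal (this is the standard construction; it is not spelled out in the slide text):

```latex
L \;\propto\; \prod_{i=1}^{n}
\left[-\frac{\partial S(x, c)}{\partial x}\bigg|_{x = c = t_i}\right]^{\delta_i}
\left[-\frac{\partial S(x, c)}{\partial c}\bigg|_{x = c = t_i}\right]^{1-\delta_i}.
```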

MLEs
Recall that the MLE is found by maximizing the likelihood. Recall the likelihood setup under right censoring.

MLE Example
Consider our exponential example. What is the MLE for λ?

MLE Example
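Maximizing the right-censored exponential likelihood from above (a sketch of the algebra this slide presumably showed):

```latex
\ell(\lambda) = r \log\lambda - \lambda \sum_{i=1}^{n} t_i,
\qquad
\frac{\partial \ell}{\partial \lambda} = \frac{r}{\lambda} - \sum_{i} t_i = 0
\;\;\Longrightarrow\;\;
\hat\lambda = \frac{r}{\sum_{i=1}^{n} t_i}
= \frac{\text{number of events}}{\text{total time on study}}.
```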

More on MLEs? What else might we want to know? MLE variance? Confidence Intervals? Hypothesis testing?

MLE Variance
Recall that I(θ) denotes Fisher's information matrix, with elements given below. The MLE has large-sample properties:
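The standard statements the slide refers to (usual regularity conditions assumed):

```latex
I(\theta)_{jk} = -\,E\!\left[\frac{\partial^2 \log L(\theta)}{\partial \theta_j \,\partial \theta_k}\right],
\qquad
\hat\theta \;\overset{a}{\sim}\; N\!\bigl(\theta,\; I(\theta)^{-1}\bigr)
\;\text{ for large } n.
```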

Confidence Intervals for θ
The (1 − α)·100% CI for θ:
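For a scalar θ this is the familiar Wald-type interval:

```latex
\hat\theta \;\pm\; z_{1-\alpha/2}\, \sqrt{I(\hat\theta)^{-1}} .
```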

Examples
Data x1, x2, …, xn ~ Exp(λ) (iid)
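For the uncensored exponential example the pieces work out to (a sketch):

```latex
\hat\lambda = \frac{n}{\sum_i x_i},
\qquad
I(\lambda) = \frac{n}{\lambda^2},
\qquad
\hat\lambda \;\pm\; z_{1-\alpha/2}\, \frac{\hat\lambda}{\sqrt{n}} .
```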

Test Statistics
Testing a fixed null value θ0:
Wald statistic
Score statistic
LRT (Neyman-Pearson/Wilks)
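In their scalar forms (matrix versions replace the squares with quadratic forms in I(θ)), each is asymptotically χ² with 1 degree of freedom under H0 (sketch):

```latex
W = (\hat\theta - \theta_0)^2\, I(\hat\theta),
\qquad
S = \frac{U(\theta_0)^2}{I(\theta_0)}, \;\; U(\theta) = \frac{\partial \ell(\theta)}{\partial \theta},
\qquad
\Lambda = 2\bigl[\ell(\hat\theta) - \ell(\theta_0)\bigr].
```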

Examples: Weibull, no censoring
Data x1, x2, …, xn ~ Weibull(α, λ) (iid)
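Assuming the parameterization f(x) = α λ x^{α−1} e^{−λ x^α} (the slides may use a different scale convention), the log-likelihood is:

```latex
\ell(\alpha, \lambda)
= n\log\alpha + n\log\lambda + (\alpha - 1)\sum_{i=1}^{n}\log x_i - \lambda \sum_{i=1}^{n} x_i^{\alpha}.
```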

Fisher Information
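Differentiating that log-likelihood twice gives the observed information entries (under the same assumed parameterization):

```latex
-\frac{\partial^2 \ell}{\partial \lambda^2} = \frac{n}{\lambda^2},
\qquad
-\frac{\partial^2 \ell}{\partial \alpha\,\partial \lambda} = \sum_{i} x_i^{\alpha}\log x_i,
\qquad
-\frac{\partial^2 \ell}{\partial \alpha^2} = \frac{n}{\alpha^2} + \lambda \sum_{i} x_i^{\alpha}(\log x_i)^2 .
```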

Wald Test for Weibull From this we can construct the Wald Test:
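A small numerical sketch, not taken from the slides: it maximizes the Weibull log-likelihood above with scipy, uses the optimizer's approximate inverse Hessian as a variance estimate, and forms a Wald test of a hypothesized shape value α0 = 1. The simulated data and all names (negloglik, alpha0) are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)
x = 2.0 * rng.weibull(1.5, size=200)   # illustrative data only

def negloglik(par):
    # Weibull negative log-likelihood under f(x) = a * lam * x**(a-1) * exp(-lam * x**a),
    # parameterized on the log scale so the optimizer stays in the valid region
    log_a, log_lam = par
    a, lam = np.exp(log_a), np.exp(log_lam)
    return -(x.size * (np.log(a) + np.log(lam))
             + (a - 1.0) * np.log(x).sum()
             - lam * (x ** a).sum())

fit = minimize(negloglik, x0=[0.0, 0.0], method="BFGS")
a_hat = np.exp(fit.x[0])

# Wald test of H0: alpha = alpha0. BFGS returns an approximate inverse Hessian of the
# negative log-likelihood (on the log scale); the delta method converts it to an
# approximate variance for alpha itself.
alpha0 = 1.0
var_a = (a_hat ** 2) * fit.hess_inv[0, 0]
wald = (a_hat - alpha0) ** 2 / var_a
pval = chi2.sf(wald, df=1)
print(f"alpha_hat = {a_hat:.3f}, Wald = {wald:.2f}, p-value = {pval:.4f}")
```

In practice one would plug in the analytic information matrix from the previous slide rather than the BFGS approximation; the structure of the test is the same.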

Next Time
We begin discussing nonparametric methods. Homework 1 will be posted today.