
Slide 1: §❶ Review of Likelihood Inference
Robert J. Tempelman
Applied Bayesian Inference, KSU, April 29, 2012

Slide 2: Likelihood Inference

Necessary prerequisites to understanding Bayesian inference:
- Distribution theory
- Calculus
- Asymptotic theory (e.g., Taylor expansions)
- Numerical methods/optimization
- Simulation-based analyses
- Programming skills

SAS PROC ???? or R package ???? is only really a start to understanding data analysis. I don't think that SAS PROC MCMC (version 9.3) or WinBUGS is a fix for all of your potential Bayesian inference problems. Data analysts: don't throw away that math stats text just yet! Meaningful computing skills are a plus!

Slide 3: The "simplest" model

Basic mean model: $y_i = \mu + e_i$, $i = 1, \ldots, n$. Common distributional assumption: $e_i \sim \mathrm{NIID}(0, \sigma^2)$, i.e., $y_i \sim N(\mu, \sigma^2)$. What does this mean? Think pdf (probability density function)!
$$p(y_i \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(y_i-\mu)^2}{2\sigma^2}\right)$$
Under conditional independence, the joint pdf is the product of the independent pdfs:
$$p(\mathbf{y} \mid \mu, \sigma^2) = \prod_{i=1}^{n} p(y_i \mid \mu, \sigma^2)$$

Slide 4: Likelihood function

Simplify the joint pdf further:
$$p(\mathbf{y} \mid \mu, \sigma^2) = (2\pi\sigma^2)^{-n/2}\exp\left(-\frac{\sum_{i=1}^{n}(y_i-\mu)^2}{2\sigma^2}\right)$$
Regard the joint pdf as a function of the parameters ("proportional to"):
$$L(\mu, \sigma^2 \mid \mathbf{y}) \propto (\sigma^2)^{-n/2}\exp\left(-\frac{\sum_{i=1}^{n}(y_i-\mu)^2}{2\sigma^2}\right)$$

Slide 5: Maximum likelihood estimation

Maximize $L(\mu, \sigma^2 \mid \mathbf{y})$ with respect to the unknowns. Well, actually, we directly maximize the log likelihood
$$\ell(\mu, \sigma^2 \mid \mathbf{y}) = -\frac{n}{2}\log\sigma^2 - \frac{\sum_{i=1}^{n}(y_i-\mu)^2}{2\sigma^2} + \text{constant}.$$
One strategy, using first derivatives: determine $\partial\ell/\partial\mu$ and $\partial\ell/\partial\sigma^2$ and set them to 0. Result?
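Spelling out the first-derivative step (a standard derivation, consistent with the log likelihood above):
$$\frac{\partial\ell}{\partial\mu} = \frac{\sum_{i=1}^{n}(y_i-\mu)}{\sigma^2} = 0 \;\Rightarrow\; \hat\mu = \bar{y}$$
$$\frac{\partial\ell}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{\sum_{i=1}^{n}(y_i-\mu)^2}{2\sigma^4} = 0 \;\Rightarrow\; \hat\sigma^2 = \frac{\sum_{i=1}^{n}(y_i-\hat\mu)^2}{n}$$
Note the divisor n rather than n-1: the ML estimator of σ² is biased downward in small samples.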

Slide 6: Example: Data, Log Likelihood & Maximum Likelihood estimates
[Figure: data plot and log-likelihood surface; not recoverable from the transcript]

Slide 7: Likelihood inference for discrete data

Consider the binomial distribution:
$$p(y \mid \pi) = \binom{n}{y}\pi^{y}(1-\pi)^{n-y}, \qquad \ell(\pi \mid y) = y\log\pi + (n-y)\log(1-\pi) + \text{constant}.$$
Set $d\ell/d\pi = y/\pi - (n-y)/(1-\pi)$ to zero → $\hat\pi = y/n$.

Slide 8: Sometimes iterative solutions are required

First-derivative-based methods can be slow for some problems. Second-derivative methods, e.g. Newton-Raphson, are often desirable:
$$\theta^{[t+1]} = \theta^{[t]} - \left[\ell''(\theta^{[t]})\right]^{-1}\ell'(\theta^{[t]})$$
- Generally faster
- Provide asymptotic standard errors as a useful by-product
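To make the Newton-Raphson recipe concrete before the genetics example, here is a minimal SAS data step sketch (hypothetical counts, not from the slides) that maximizes a binomial log likelihood on the logit scale:

data nr_demo;
 /* hypothetical data: y successes out of n trials */
 y = 7; n = 20;
 theta = 0; /* logit(p); start at p = 0.5 */
 do iterate = 1 to 10;
   p = 1/(1 + exp(-theta));
   loglike = y*theta - n*log(1 + exp(theta)); /* log likelihood up to a constant */
   firstder = y - n*p;                        /* dl/dtheta */
   secndder = -n*p*(1 - p);                   /* d2l/dtheta2 */
   theta = theta + firstder/(-secndder);      /* Newton-Raphson update */
   output;
 end;
run;
proc print data=nr_demo; var iterate theta p loglike; run;

At convergence, p equals the closed-form MLE y/n = 0.35, which provides a check on the implementation.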

Slide 9: Plant Genetics Example (Rao, 1971)

y1, y2, y3, and y4 are the observed numbers of 4 different phenotypes involving genotypes differing at two loci, from the progeny of self-fertilized heterozygotes (AaBb). Genetic theory indicates that the distribution of the four phenotypes (with complete dominance at each locus) is multinomial.

Slide 10: Probabilities

Probability   Genotype   Data (counts)
(2 + θ)/4     A_B_       y1 = 1997
(1 − θ)/4     aaB_       y2 = 906
(1 − θ)/4     A_bb       y3 = 904
θ/4           aabb       y4 = 32

0 ≤ θ ≤ 1; θ → 0: close linkage in repulsion; θ → 1: close linkage in coupling.

Slide 11: Genetic Illustration of Coupling/Repulsion

[Figure] Coupling (θ = 1): parental gametes A B / a b. Repulsion (θ = 0): parental gametes A b / a B.

Slide 12: Likelihood function

Given the multinomial probabilities above:
$$L(\theta \mid \mathbf{y}) \propto (2+\theta)^{y_1}(1-\theta)^{y_2+y_3}\,\theta^{y_4}$$
$$\ell(\theta \mid \mathbf{y}) = y_1\log(2+\theta) + (y_2+y_3)\log(1-\theta) + y_4\log\theta + \text{constant}$$

Slide 13: First and second derivatives

First derivative:
$$\ell'(\theta) = \frac{y_1}{2+\theta} - \frac{y_2+y_3}{1-\theta} + \frac{y_4}{\theta}$$
Second derivative:
$$\ell''(\theta) = -\frac{y_1}{(2+\theta)^2} - \frac{y_2+y_3}{(1-\theta)^2} - \frac{y_4}{\theta^2}$$
Recall the Newton-Raphson algorithm:
$$\theta^{[t+1]} = \theta^{[t]} - \frac{\ell'(\theta^{[t]})}{\ell''(\theta^{[t]})}$$

Slide 14: Newton-Raphson: SAS data step and output

data newton;
 y1 = 1997; y2 = 906; y3 = 904; y4 = 32;
 theta = 0.01; /* try starting value of 0.50 too */
 do iterate = 1 to 5;
   loglike = y1*log(2+theta) + (y2+y3)*log(1-theta) + y4*log(theta);
   firstder = y1/(2+theta) - (y2+y3)/(1-theta) + y4/theta;
   secndder = (-y1/(2+theta)**2 - (y2+y3)/(1-theta)**2 - y4/theta**2);
   theta = theta + firstder/(-secndder);
   output;
 end;
 asyvar = 1/(-secndder); /* asymptotic variance of theta_hat at convergence */
 output;
run;
proc print data=newton; var iterate theta loglike; run;

iterate   theta      loglike
1         0.034039   1228.62
2         0.035608   1247.07
3         0.035706   1247.10
4         0.035712   1247.10
5         0.035712   1247.10

Slide 15: Asymptotic standard errors

Given the observed information $I(\hat\theta) = -\ell''(\hat\theta)$, asymptotically
$$\widehat{\mathrm{var}}(\hat\theta) = I(\hat\theta)^{-1}, \qquad \mathrm{SE}(\hat\theta) = \sqrt{I(\hat\theta)^{-1}}.$$

proc print data=newton; var asyvar; run;
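To report the standard error itself, a small follow-up step (not on the slide) just takes the square root of asyvar from the final record:

data se_theta;
 set newton end=last;
 if last then do;
   se = sqrt(asyvar); /* asymptotic standard error of theta_hat */
   put se=;
 end;
run;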

Slide 16: Alternative to Newton-Raphson

Fisher's scoring: substitute the expected information $E\!\left[-\ell''(\theta)\right]$ for the observed $-\ell''(\theta)$ in Newton-Raphson. Now, with $E[y_1] = n(2+\theta)/4$, $E[y_2+y_3] = 2n(1-\theta)/4$, and $E[y_4] = n\theta/4$,
$$E\!\left[\ell''(\theta)\right] = -\frac{n}{4}\left(\frac{1}{2+\theta} + \frac{2}{1-\theta} + \frac{1}{\theta}\right).$$
Then
$$\theta^{[t+1]} = \theta^{[t]} + \frac{\ell'(\theta^{[t]})}{E\!\left[-\ell''(\theta^{[t]})\right]}.$$

Slide 17: Fisher scoring: SAS data step and output

data newton;
 y1 = 1997; y2 = 906; y3 = 904; y4 = 32;
 n = y1 + y2 + y3 + y4; /* total count, needed for the expected information */
 theta = 0.01; /* try starting value of 0.50 too */
 do iterate = 1 to 5;
   loglike = y1*log(2+theta) + (y2+y3)*log(1-theta) + y4*log(theta);
   firstder = y1/(2+theta) - (y2+y3)/(1-theta) + y4/theta;
   secndder = (n/4)*(-1/(2+theta) - 2/(1-theta) - 1/theta);
   theta = theta + firstder/(-secndder);
   output;
 end;
 asyvar = 1/(-secndder); /* asymptotic variance of theta_hat at convergence */
 output;
run;
proc print data=newton; var iterate theta loglike; run;

iterate   theta      loglike
1         0.034039   1228.62
2         0.035608   1247.07
3         0.035706   1247.10
4         0.035712   1247.10
5         0.035712   1247.10

In some applications, Fisher's scoring is easier than Newton-Raphson... but the observed information is probably more reliable than the expected information (Efron and Hinkley, 1978).

Slide 18: Extensions to multivariate θ

Suppose that θ is a p × 1 vector. Newton-Raphson:
$$\boldsymbol\theta^{[t+1]} = \boldsymbol\theta^{[t]} - \left[\frac{\partial^2\ell(\boldsymbol\theta)}{\partial\boldsymbol\theta\,\partial\boldsymbol\theta'}\right]^{-1}_{\boldsymbol\theta=\boldsymbol\theta^{[t]}}\left[\frac{\partial\ell(\boldsymbol\theta)}{\partial\boldsymbol\theta}\right]_{\boldsymbol\theta=\boldsymbol\theta^{[t]}}$$
Fisher's scoring replaces the Hessian with minus the expected information $\mathbf{I}(\boldsymbol\theta) = E\!\left[-\partial^2\ell/\partial\boldsymbol\theta\,\partial\boldsymbol\theta'\right]$:
$$\boldsymbol\theta^{[t+1]} = \boldsymbol\theta^{[t]} + \mathbf{I}(\boldsymbol\theta^{[t]})^{-1}\left[\frac{\partial\ell(\boldsymbol\theta)}{\partial\boldsymbol\theta}\right]_{\boldsymbol\theta=\boldsymbol\theta^{[t]}}$$
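As a self-contained illustration of the multivariate update (a sketch with hypothetical data, not the slides' example), here is PROC IML code applying Fisher scoring to θ = (μ, log σ²) for an iid normal sample; the expected information is conveniently diagonal for this parameterization:

proc iml;
/* Fisher scoring for theta = (mu, log(sigma2)), iid normal sample */
y = {8.5, 9.2, 10.1, 11.3, 10.9};  /* hypothetical data */
n = nrow(y);
theta = {8, 1};                    /* crude starting values */
do iterate = 1 to 25;
  mu = theta[1]; s2 = exp(theta[2]);
  r = y - mu;
  score = ( sum(r)/s2 ) // ( -n/2 + ssq(r)/(2*s2) );  /* gradient */
  info  = ( n/s2 || 0 ) // ( 0 || n/2 );              /* expected information */
  theta = theta + inv(info)*score;
end;
mu_hat = theta[1]; sigma2_hat = exp(theta[2]);
print mu_hat sigma2_hat;  /* should equal ybar and the divisor-n variance */
quit;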

Slide 19: Generalized linear models

For multifactorial analysis of non-normal (binary, count) data, consider the probit link binary model:
- Implies the existence of normally distributed latent (underlying) variables, the liabilities $\ell_i$.
- Could do something similar for the logistic link binary model.
Consider a simple population mean model on the liabilities:
$$\ell_i = \mu + e_i; \quad e_i \sim N(0, \sigma_e^2)$$
Let μ = 10 and σ_e = 2.

Slide 20: The liability (latent variable) concept

With threshold τ = 12, μ = 10, and σ_e = 2, the observed binary response is Y = 1 ("success") if the liability exceeds the threshold and Y = 0 ("failure") otherwise:
$$\Pr(Y=1) = \Pr(\ell_i > \tau) = 1 - \Phi\!\left(\frac{12-10}{2}\right) = 1 - \Phi(1) = 0.1587,$$
i.e., probability of "success" = 15.87%.
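The 15.87% can be verified with one line of SAS (a trivial check, not on the slide):

data check_prob;
 /* P(liability > 12) when mu = 10 and sigma_e = 2 */
 p_success = 1 - probnorm((12 - 10)/2); /* = 1 - Phi(1) = 0.1587 */
 put p_success=;
run;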

Slide 21: Inferential utopia!

Suppose we were able to measure the liabilities directly. Also suppose a more general multi-population (treatment) model:
$$\boldsymbol\ell = \mathbf{X}\boldsymbol\beta + \mathbf{e}; \quad \mathbf{e} \sim N(\mathbf{0}, \mathbf{R}); \quad \text{typically } \mathbf{R} = \mathbf{I}\sigma_e^2$$
Then ML(β) = OLS(β):
$$\hat{\boldsymbol\beta} = (\mathbf{X}'\mathbf{X})^{-1}\mathbf{X}'\boldsymbol\ell$$
But (sigh...), we of course don't generally observe $\boldsymbol\ell$.

Slide 22: Suppose there are 3 subclasses

Mean liabilities for Herds 1, 2, and 3: μ1 = 9, μ2 = 10, μ3 = 11. Use the "corner parameterization" (intercept plus herd effects) in ℓ = Xβ + e.

Slide 23: Probability of success as a function of effects

We can't observe the liabilities, just the observed binary data. For each herd, the probability of success is the shaded area beyond the threshold:
$$\Pr(Y_i = 1) = 1 - \Phi\!\left(\frac{\tau - \mathbf{x}_i'\boldsymbol\beta}{\sigma_e}\right)$$

Slide 24: Reparameterize the model

Write the linear predictor as $\mathbf{x}_i'\boldsymbol\beta = \mu + \mathbf{x}_i'^{*}\boldsymbol\beta^{*}$. It cannot be estimated separately from $\sigma_e^2$; i.e., $\sigma_e^2$ is not identifiable, since only the standardized distance between the linear predictor and the threshold enters the probability.

Slide 25: Reparameterize the model again

Consider the remaining parameters as standardized ratios: τ* = τ/σ_e, μ* = μ/σ_e, and β* = β/σ_e, which is the same as constraining σ_e = 1. Notice that the new threshold is now 12/2 = 6, whereas the mean responses for the three herds are now 9/2 = 4.5, 10/2 = 5, and 11/2 = 5.5.

Slide 26: There is still another identifiability problem

Between τ and μ. One solution? "Zero out" τ. Notice that the new threshold is now 0, whereas the mean responses for the three herds are now −1.5, −1, and −0.5.

Slide 27: [Figure] Note that higher values of the mean liability translate into lower probabilities of disease.

Slide 28: Deriving the likelihood function

Given $\Pr(Y_i = 1 \mid \boldsymbol\beta) = \Phi(\mathbf{x}_i'\boldsymbol\beta)$, i.e., for $y_i = 0, 1$:
$$p(y_i \mid \boldsymbol\beta) = \left[\Phi(\mathbf{x}_i'\boldsymbol\beta)\right]^{y_i}\left[1-\Phi(\mathbf{x}_i'\boldsymbol\beta)\right]^{1-y_i}$$
Suppose you have a second animal (i′). If animals i and i′ are conditionally independent:
$$p(y_i, y_{i'} \mid \boldsymbol\beta) = p(y_i \mid \boldsymbol\beta)\,p(y_{i'} \mid \boldsymbol\beta)$$

Slide 29: Deriving the likelihood function (continued)

More general case, under conditional independence:
$$p(\mathbf{y} \mid \boldsymbol\beta) = \prod_{i=1}^{n} p(y_i \mid \boldsymbol\beta)$$
So the likelihood function for the probit model is
$$L(\boldsymbol\beta \mid \mathbf{y}) = \prod_{i=1}^{n}\left[\Phi(\mathbf{x}_i'\boldsymbol\beta)\right]^{y_i}\left[1-\Phi(\mathbf{x}_i'\boldsymbol\beta)\right]^{1-y_i}$$
Alternative, the logistic model:
$$\Pr(Y_i = 1 \mid \boldsymbol\beta) = \frac{\exp(\mathbf{x}_i'\boldsymbol\beta)}{1+\exp(\mathbf{x}_i'\boldsymbol\beta)}$$

Slide 30: Small probit regression example

Data:

X_i:  40  40  43  43  43  47  47  50
Y_i:   1   0   0   1   0   0   1   0

Link function = probit.

Slide 31: Log likelihood

$$\ell(\boldsymbol\beta \mid \mathbf{y}) = \sum_{i=1}^{n}\left\{y_i\log\Phi(\mathbf{x}_i'\boldsymbol\beta) + (1-y_i)\log\!\left[1-\Phi(\mathbf{x}_i'\boldsymbol\beta)\right]\right\}$$
Newton-Raphson equations can be written using the gradient and observed Hessian of $\ell(\boldsymbol\beta)$. Fisher's scoring replaces the Hessian with the expected information $\mathbf{X}'\mathbf{W}\mathbf{X}$, where $\mathbf{W}$ is diagonal with elements
$$w_i = \frac{\phi(\mathbf{x}_i'\boldsymbol\beta)^2}{\Phi(\mathbf{x}_i'\boldsymbol\beta)\left[1-\Phi(\mathbf{x}_i'\boldsymbol\beta)\right]}.$$
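To see the scoring equations in action, here is a minimal PROC IML sketch of Fisher scoring (iteratively reweighted least squares) for the small probit example; it is an illustration consistent with the weights above, not code from the slides:

proc iml;
/* Fisher scoring for the probit regression of Slide 30 */
x = {40, 40, 43, 43, 43, 47, 47, 50};
y = {1, 0, 0, 1, 0, 0, 1, 0};
X = j(nrow(x), 1, 1) || x;          /* design matrix: intercept + slope */
beta = {0, 0};                      /* starting values */
do iterate = 1 to 50;
  eta = X*beta;
  p = cdf('NORMAL', eta);           /* Phi(eta) */
  phi = pdf('NORMAL', eta);         /* standard normal density */
  w = (phi##2) / (p#(1-p));         /* expected-information weights */
  score = X` * ( phi # (y - p) / (p#(1-p)) );
  info = X` * diag(w) * X;          /* X'WX */
  beta = beta + inv(info)*score;
end;
se = sqrt(vecdiag(inv(info)));
print beta se;  /* should reproduce the PROC GENMOD estimates on Slide 33 */
quit;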

Slide 32: A SAS program

data example_binary;
 input x y;
 cards;
40 1
40 0
43 0
43 1
43 0
47 0
47 1
50 0
;
proc genmod data=example_binary descending;
 class y;
 model y = x / dist=bin link=probit;
 contrast 'slope' x 1;
run;

Slide 33: Key output

Criteria For Assessing Goodness Of Fit
Criterion        DF   Value     Value/DF
Log Likelihood        -6.2123

Analysis Of Maximum Likelihood Parameter Estimates
Parameter  DF  Estimate  Standard Error  Wald 95% Confidence Limits  Wald Chi-Square  Pr > ChiSq
Intercept  1   7.8233    5.2657          -2.4974   18.1439           2.21             0.1374
x          1   -0.1860   0.1194          -0.4199   0.0480            2.43             0.1192
Scale      0   1.0000    0.0000          1.0000    1.0000

Contrast Results
Contrast  DF  Chi-Square  Pr > ChiSq  Type
slope     1   2.85        0.0913      LR

Slide 34: Wald test

Asymptotic inference: $\hat{\boldsymbol\beta} \sim N\!\left(\boldsymbol\beta, \mathbf{I}(\hat{\boldsymbol\beta})^{-1}\right)$, approximately. Reported standard errors are square roots of the diagonals of $\mathbf{I}(\hat{\boldsymbol\beta})^{-1}$. Hypothesis test of $\mathbf{K}'\boldsymbol\beta = \mathbf{0}$:
$$W = (\mathbf{K}'\hat{\boldsymbol\beta})'\left[\mathbf{K}'\mathbf{I}(\hat{\boldsymbol\beta})^{-1}\mathbf{K}\right]^{-1}(\mathbf{K}'\hat{\boldsymbol\beta}) \sim \chi^2_{\mathrm{rank}(\mathbf{K})}$$
When is n "large enough" for this to be trustworthy?
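For the single slope, the Wald statistic reduces to the squared z-ratio; plugging in the Slide 33 output:
$$W = \left(\frac{\hat\beta_1}{\mathrm{SE}(\hat\beta_1)}\right)^2 = \left(\frac{-0.1860}{0.1194}\right)^2 \approx 2.43,$$
matching the reported Wald chi-square for x.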

Slide 35: Likelihood ratio test

Reduced model:

proc genmod data=example_binary descending;
 class y;
 model y = / dist=bin link=probit;
run;

Criteria For Assessing Goodness Of Fit
Criterion        DF   Value     Value/DF
Log Likelihood        -7.6382

−2(logL_reduced − logL_full) = −2(−7.6382 − (−6.2123)) = 2.85. The p-value for H0: β1 = 0 is Prob(χ²₁ > 2.85) = 0.09. Again... asymptotic.
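The tail probability can be confirmed with the PROBCHI function (a one-line check, not on the slide):

data _null_;
 p_value = 1 - probchi(2.85, 1); /* P(chi-square with 1 df > 2.85) */
 put p_value=;                   /* approximately 0.0913 */
run;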

Slide 36: A PROC GLIMMIX "fix" for uncertainty: use asymptotic F-tests rather than χ²-tests

proc glimmix data=example_binary;
 model y = x / dist=bin link=probit;
 contrast 'slope' x 1;
run;

Type III Tests of Fixed Effects
Effect  Num DF  Den DF  F Value  Pr > F
x       1       10      2.43     0.1503

Contrasts
Label   Num DF  Den DF  F Value  Pr > F
slope   1       10      2.43     0.1503

"Less asymptotic?"

Slide 37: Ordinal Categorical Data

How I learned this: "Sire evaluation for ordered categorical data with a threshold model" by Dan Gianola and Jean-Louis Foulley (1983), Genetics, Selection, Evolution 15:201-224 (GF83). See also Harville and Mee (1984), Biometrics (HM84).

Application: calving ease scores (0 = unassisted, 5 = Caesarean), determined by an underlying continuous liability relative to a set of thresholds.

Slide 38: Liabilities

Consider three different herds/subclasses, with σ_e = 2 and two thresholds (τ1 = 8, τ2 = 12) defining the three response categories.

Slide 39: [Figure] Underlying normal densities for each of three herds; category probabilities highlighted for Herd 2.

Slide 40: Constraints

It is not really possible to separately estimate σ_e from μ1, μ2, μ3, τ1, and τ2. Define, then, the standardized quantities ℓ* = ℓ/σ_e, τ1* = τ1/σ_e, τ2* = τ2/σ_e, μ1* = μ1/σ_e, μ2* = μ2/σ_e, and μ3* = μ3/σ_e.

Slide 41: Yet another constraint required

Suppose we use the corner parameterization, where the overall mean expressed as a ratio over σ_e is μ* = 5.5, with thresholds τ1* = 4.0 and τ2* = 6.0. Then τ1* and τ2* are not separately identifiable from μ*. One option: "zero out" μ*:

τ1** = τ1* − μ* = 4.0 − 5.5 = −1.5
τ2** = τ2* − μ* = 6.0 − 5.5 = +0.5

Slide 42: [Figure] Standardized liability scale after zeroing out μ*:

τ1** = τ1* − μ* = 4.0 − 5.5 = −1.5
τ2** = τ2* − μ* = 6.0 − 5.5 = +0.5

Slide 43: Alternative constraint

Estimate μ but "zero out" one of τ1 or τ2, say τ1. Start with μ* = 5.5, τ1* = 4.0, and τ2* = 6.0. Then:

μ** = μ* − τ1* = 5.5 − 4.0 = 1.5
τ2** = τ2* − τ1* = 6.0 − 4.0 = 2.0

Slide 44: One last constraint possibility

Set τ1 = 0 and τ2 to an arbitrary value > τ1, and infer upon σ_e. Say σ_e = 2, with τ1 fixed to 0 and τ2 fixed to 4.

Slide 45: Likelihood function for Ordinal Categorical Data

Based on the multinomial (m categories), where
$$\Pr(Y_i = j \mid \boldsymbol\beta, \boldsymbol\tau) = \Phi(\tau_j - \mathbf{x}_i'\boldsymbol\beta) - \Phi(\tau_{j-1} - \mathbf{x}_i'\boldsymbol\beta),$$
with $\tau_0 = -\infty$ and $\tau_m = +\infty$. Likelihood:
$$L(\boldsymbol\beta, \boldsymbol\tau \mid \mathbf{y}) = \prod_{i=1}^{n}\prod_{j=1}^{m}\left[\Pr(Y_i = j)\right]^{1(y_i = j)}$$
Log likelihood:
$$\ell(\boldsymbol\beta, \boldsymbol\tau \mid \mathbf{y}) = \sum_{i=1}^{n}\sum_{j=1}^{m} 1(y_i = j)\log\Pr(Y_i = j)$$

Slide 46: Hypothetical small example

Ordinal outcome having 3 possible categories; two subjects in the dataset. The first subject has a response of 1, whereas the second has a response of 3. Their contribution to the log likelihood:
$$\ell = \log\Phi(\tau_1 - \eta_1) + \log\left[1 - \Phi(\tau_2 - \eta_2)\right]$$
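A quick numerical version of this contribution, using hypothetical values for the thresholds and linear predictors (all four numbers below are made up for illustration):

data ll_demo;
 tau1 = -1.5; tau2 = 0.5;  /* hypothetical thresholds */
 eta1 = 0; eta2 = 0;       /* hypothetical linear predictors */
 /* log-likelihood contribution: response 1 for subject 1, response 3 for subject 2 */
 ll = log(probnorm(tau1 - eta1)) + log(1 - probnorm(tau2 - eta2));
 put ll=;
run;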

Slide 47: Solving for ML

Let's use Fisher's scoring. For a three-or-more-category problem, stack the unknowns as $\boldsymbol\theta = (\boldsymbol\tau', \boldsymbol\beta')'$ and iterate
$$\boldsymbol\theta^{[t+1]} = \boldsymbol\theta^{[t]} + \mathbf{I}(\boldsymbol\theta^{[t]})^{-1}\mathbf{S}(\boldsymbol\theta^{[t]}),$$
where $\mathbf{S}$ is the score (first-derivative) vector and $\mathbf{I}$ is the expected information matrix.

Slide 48: Setting up Fisher's scoring

Second derivatives: see GF83 or HM84 for details.

Slide 49: Setting up Fisher's scoring

First derivatives: see GF83 for details.

Slide 50: Fisher's scoring algorithm

Iterate the update above, combining the score vector and expected information matrix, until convergence.

Slide 51: Data from GF (1983)

H  A  G  S  Y        H  A  G  S  Y        H  A  G  S  Y
1  2  M  1  1        1  2  F  1  1        1  3  M  1  1
1  2  F  2  2        1  3  M  2  1        1  3  M  2  3
1  3  F  2  1        1  3  F  2  1        1  3  F  2  1
1  2  M  3  1        1  2  M  3  2        1  3  F  3  2
1  3  M  3  1        2  2  F  1  1        2  2  F  1  1
2  2  M  1  1        2  3  M  1  3        2  2  F  2  1
2  2  F  2  3        2  3  M  2  1        2  2  F  3  2
2  3  M  3  3        2  2  M  4  2        2  2  F  4  1
2  3  F  4  1        2  3  F  4  1        2  3  M  4  1
2  3  M  4  1

H: Herd (1 or 2)
A: Age of dam (2 = young heifer, 3 = older cow)
G: Gender or sex (M and F)
S: Sire of calf (1, 2, 3, or 4)
Y: Ordinal response (1, 2, or 3)

Slide 52: SAS code: let's just consider sex in the model

proc glimmix data=gf83;
 model y = sex / dist=mult link=cumprobit solutions;
 estimate 'Category 1 Female'   intercept 1 0 sex 1 / ilink;
 estimate 'Category 1 Male'     intercept 1 0 sex 0 / ilink;
 estimate 'Category <=2 Female' intercept 0 1 sex 1 / ilink;
 estimate 'Category <=2 Male'   intercept 0 1 sex 0 / ilink;
run;

Subtle difference in parameterization between Gianola & Foulley (1983) and PROC GLIMMIX: here sex = 1 if female, 0 if male.

Slide 53: Parameter Estimates

Effect     y  Estimate  Standard Error  DF  t Value  Pr > |t|
Intercept  1  0.3007    0.3373          25  0.89     0.3812
Intercept  2  0.9154    0.3656          25  2.50     0.0192
Sex           0.3290    0.4738          25  0.69     0.4938

Type III Tests of Fixed Effects
Effect  Num DF  Den DF  F Value  Pr > F
sex     1       25      0.48     0.4938

Slide 54: Estimated Cumulative Probabilities

Label                Estimate  Standard Error  DF  t Value  Pr > |t|  Mean    Standard Error Mean
Category 1 Female    0.6297    0.3478          25  1.81     0.0822    0.7355  0.1138
Category 1 Male      0.3007    0.3373          25  0.89     0.3812    0.6182  0.1286
Category <=2 Female  1.2444    0.3930          25  3.17     0.0040    0.8933  0.07228
Category <=2 Male    0.9154    0.3656          25  2.50     0.0192    0.8200  0.09594

Asymptotics?

Slide 55: PROC NLMIXED (fix μ = 0 and σ_e = 1; estimate β1, τ1, τ2)

proc nlmixed data=gf83;
 parms beta1=0 thresh1=-1.5 thresh2=0.5;
 eta = beta1*sex;
 if (y=1) then p = probnorm(thresh1-eta) - 0;
 else if (y=2) then p = probnorm(thresh2-eta) - probnorm(thresh1-eta);
 else if (y=3) then p = 1 - probnorm(thresh2-eta);
 if (p > 1e-8) then ll = log(p);
 else ll = -1e100;
 model y ~ general(ll);
 estimate 'Category 1 Female'   probnorm(thresh1-beta1);
 estimate 'Category 1 Male'     probnorm(thresh1-0);
 estimate 'Category <=2 Female' probnorm(thresh2-beta1);
 estimate 'Category <=2 Male'   probnorm(thresh2-0);
run;

Slide 56: Key output from PROC NLMIXED

Parameter  Estimate  Standard Error  DF  t Value  Pr > |t|
beta1      -0.3290   0.4738          28  -0.69    0.4931
thresh1    0.3007    0.3373          28  0.89     0.3803
thresh2    0.9154    0.3656          28  2.50     0.0184

Additional Estimates
Label                Estimate  Standard Error
Category 1 Female    0.7355    0.1138
Category 1 Male      0.6182    0.1286
Category <=2 Female  0.8933    0.07228
Category <=2 Male    0.8200    0.09594

Slide 57: Yet another alternative (fix τ1 and τ2; estimate β1, σ_e, μ)

proc nlmixed data=gf83;
 parms beta1=0 sigmae=1 mu=0;
 thresh1 = 0;
 thresh2 = 0.5;
 eta = mu + beta1*sex;
 if (y=1) then p = probnorm((thresh1-eta)/sigmae);
 else if (y=2) then p = probnorm((thresh2-eta)/sigmae) - probnorm((thresh1-eta)/sigmae);
 else if (y=3) then p = 1 - probnorm((thresh2-eta)/sigmae);
 if (p > 1e-8) then ll = log(p);
 else ll = -1e100;
 model y ~ general(ll);
 estimate 'Category 1 Female'   probnorm((thresh1-(mu+beta1))/sigmae);
 estimate 'Category 1 Male'     probnorm((thresh1-mu)/sigmae);
 estimate 'Category <=2 Female' probnorm((thresh2-(mu+beta1))/sigmae);
 estimate 'Category <=2 Male'   probnorm((thresh2-mu)/sigmae);
run;

Slide 58: Parameter Estimates

Parameter  Estimate  Standard Error
beta1      -0.2676   0.3946
sigmae     0.8134    0.3327
mu         -0.2446   0.3151

Additional Estimates
Label                Estimate  Standard Error
Category 1 Female    0.7356    0.1138
Category 1 Male      0.6182    0.1286
Category <=2 Female  0.8933    0.07228
Category <=2 Male    0.8200    0.09594

This is not inference on overdispersion!! It's merely a reparameterization: the estimated category probabilities match those from Slide 56.

Slide 59: What is overdispersion from an experimental design perspective?

No overdispersion is identifiable for binary data... so why is overdispersion possible for binomial data? It's merely a cluster (block) effect. Binomial responses consist of a y/n response: each "response" is actually a combined total for a cluster with n contributing binary responses, y of them being successes and n−y being failures. Similar arguments hold for overdispersion in Poisson data and in n = 1 vs. n > 1 multinomials. A small simulation sketch follows.
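This sketch (hypothetical parameter values, not from the slides) generates binary responses within clusters sharing a common probit-scale effect, so the cluster totals y/n are more variable than a single binomial would predict:

data overdisp_sim;
 call streaminit(2012);
 do cluster = 1 to 50;
   u = rand('normal', 0, 0.7);   /* shared cluster (block) effect */
   p = probnorm(-0.5 + u);       /* cluster-specific success probability */
   n = 10;
   y = rand('binomial', p, n);   /* binomial total for the cluster */
   output;
 end;
run;
/* Across clusters, var(y) exceeds what a common-p binomial implies: overdispersion */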

Slide 60: Hessian Fly Data Example (Gotway and Stroup, 1997)

[Table: first 10 observations; variables Obs, Y, n, block, entry, lat, lng, rep. For example, Obs 1 has Y = 2 out of n = 8 (block 1, entry 14, lat 1, lng 1); Obs 2 has Y = 1 out of n = 9 (block 1, entry 16, lat 1, lng 2).]

Available from the SAS PROC GLIMMIX documentation.

Slide 61: PROC GLIMMIX code

title "G side independence";
proc glimmix data=HessianFly;
 class block entry rep;
 model y/n = entry;
 random rep / subject=intercept;
run;

A much richer (e.g., spatial) analysis is provided by Gotway and Stroup (1997); see also Stroup's workshop (2011).

Slide 62: Key portions of output

Number of Observations Read  64
Number of Observations Used  64
Number of Events             396
Number of Trials             736

Covariance Parameter Estimates
Cov Parm  Subject    Estimate  Standard Error
rep       Intercept  0.6806    0.2612

Slide 63: Hessian Fly Data in "individual" binary form

Obs  entry  rep  z
1    14     1    1
2    14     1    1
3    14     1    0
4    14     1    0
5    14     1    0
6    14     1    0
7    14     1    0
8    14     1    0    (2/8)
9    16     1    1
10   16     1    0
11   16     1    0
12   16     1    0
13   16     1    0
14   16     1    0
15   16     1    0
16   16     1    0
17   16     1    0    (1/9)

Slide 64: PROC GLIMMIX code for "individual" data

title "G side independence";
proc glimmix data=HessianFlyindividual;
 class rep entry;
 model z = entry / dist=bin;
 random intercept / subject=rep;  /* equivalent: random rep; */
run;

Slide 65: Key portions of output

Number of Observations Read  736
Number of Observations Used  736

Covariance Parameter Estimates
Cov Parm   Subject  Estimate  Standard Error
Intercept  rep      0.6806    0.2612

Identical to the variance component estimate from the events/trials analysis on Slide 62.

