
1 Model Identification & Model Selection
With focus on Mark/Recapture Studies

2 Overview
Basic inference from an evidentialist perspective
Model selection tools for mark/recapture: AICc & SIC/BIC, overdispersed data, model set size
Multimodel inference

3 DATA
[MARK-style input listing of individual capture histories; only the record-number comments (/* 01 */ … /* 37 */) survive here, the encounter-history strings themselves are missing.]
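The histories below are invented purely for illustration (they are not the slide's data); a minimal Python sketch of reading MARK-style encounter-history records might look like this:

import re

# Hypothetical MARK-style records: a comment with the animal's number,
# an encounter history over five occasions, and a frequency count.
records = """
/* 01 */ 10110 1;
/* 02 */ 11000 1;
/* 03 */ 00101 2;
"""

def parse_inp(text):
    """Parse MARK-style lines into (history, frequency) pairs."""
    pairs = []
    for line in text.strip().splitlines():
        line = re.sub(r"/\*.*?\*/", "", line).strip().rstrip(";").strip()
        if line:
            history, freq = line.split()
            pairs.append((history, int(freq)))
    return pairs

print(parse_inp(records))   # [('10110', 1), ('11000', 1), ('00101', 2)]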

4 Models carry the meaning in science
Organized thought → Parameterized model → Organized thought connected to reality

5 Science is a cyclic process of model reconstruction and model reevaluation
Comparison of predictions with observations/data. Relative comparisons are evidence.

6 All models are false, but some are useful.
George Box

7 Statistical Inferences
Quantitative measures of the validity and utility of models. Social control on the behavior of scientists.

8 Scientific Model Selection Criteria
Illuminating, communicable, defensible, transferable

9 Common Information Criteria
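For reference, the standard definitions of the criteria discussed in the following slides, with ln(L) the maximized log-likelihood, k the number of estimated parameters, and n the sample size:

AIC = -2 ln(L) + 2k
AICc = -2 ln(L) + 2k + 2k(k+1)/(n - k - 1)
SIC/BIC = -2 ln(L) + k ln(n)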

10 Statistical Methods are Tools
All statistical methods exist in the mind only, but some are useful. Mark Taper

11 Classes of Inference
Frequentist Statistics – Bayesian Statistics
Error Statistics – Evidential Statistics – Bayesian Statistics

12 Two key frequencies in frequentist statistics
Frequency definition of probability. Frequency of error in a decision rule.

13 Null H tests with Fisherian P-values
Single model only. The P-value is the probability of a discrepancy at least as great as that observed, arising by chance. Not terribly useful for model selection.

14 Neyman-Pearson Tests (2 models)
Null model test along a maximally sensitive axis. Binary response: accept the null or reject the null. The size of the test (α) describes the frequency of rejecting the null in error; this is not about the data, it is about the test. You support your decision because you have made it with a reliable procedure. N-P tests tell you very little about relative support for alternative models.

15 Decisions vs. Conclusions
Decision-based inference is reasonable within a regulatory framework, but not so appropriate for science. John Tukey (1960) advocated seeking to reach conclusions, not making decisions: accumulate evidence until a conclusion is very strongly supported, treat it as true, and revise if new evidence contradicts it.

16 All are tools for aiding scientific thought
In a conclusions framework, multiple statistical metrics are not “incompatible”. All are tools for aiding scientific thought.

17 Statistical Evidence
A data-based estimate of the relative distance between two models and “truth”

18 Common Evidence Functions
Likelihood ratios. Differences in information criteria. Others available, e.g., log(jackknife prediction likelihood ratio).

19 Model Adequacy (Bruce Lindsay)
The discrepancy of a model from truth, with truth represented by an empirical distribution function. A model is “adequate” if the estimated discrepancy is less than some arbitrary but meaningful level.

20 Model Adequacy and Goodness of Fit
An estimation framework rather than a testing framework: confidence intervals rather than tests. Rejection of the “true model” formalism.

21 Model Adequacy, Goodness of Fit, and Evidence
Adequacy does not explicitly compare models, but the comparison is implicit: model adequacy is interpretable as a bound on the strength of evidence for any better model. This unifies model adequacy and evidence in a common framework.

22 Model adequacy interpreted as a bound on evidence for a possibly better model
[Diagram labels: Empirical distribution (“truth”); Model 1; Potentially better model; Model adequacy measure; Evidence measure.]

23 “Goodness of fit” is a misnomer
Badness-of-fit measures & goodness-of-fit tests: comparison of a model to a nonparametric estimate of the true distribution. G²-statistic, Hellinger distance, Pearson χ², Neyman χ².
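For reference, the usual forms of these measures, with O_i the observed and E_i the model-expected counts in cell i (and p_i, π_i the corresponding empirical and model proportions for the Hellinger distance):

Pearson χ² = Σ (O_i - E_i)² / E_i
Neyman χ² = Σ (O_i - E_i)² / O_i
G² = 2 Σ O_i ln(O_i / E_i)
Hellinger distance² = (1/2) Σ (√p_i - √π_i)²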

24 Points of interest Badness of fit is the scope for improvement
Evidence for one model relative to another is the difference in their badness of fit.

25 ΔIC estimates differences of Kullback-Leibler Discrepancies
ΔIC is equivalent to the log(likelihood ratio) when the numbers of parameters are equal. The complexity penalty is a bias correction to adjust for the increase in apparent precision with an increase in the number of parameters.
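Written out for AIC (the other criteria share the same structure):

ΔAIC = AIC_1 - AIC_2 = -2 ln(L_1 / L_2) + 2(k_1 - k_2)

When k_1 = k_2 this is just the log likelihood ratio up to the conventional factor of -2, and the penalty term only matters when the models differ in complexity.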

26 Evidence Scales

Strength     | L/R      | Log2    | ln      | Log10
Weak         | < 8      | < 3     | < 2     | < 1
Strong       | 8 - <32  | 3 - <5  | 2 - <7  | 1 - <2
Very Strong  | > 32     | > 5     | > 7     | > 2

Note: cutoffs are arbitrary and vary with scale.

27 Which Information Criterion?
AIC? AICc? SIC/BIC? Don’t use AIC. AICc versus SIC/BIC: 5.9 of one versus 6.1 of the other.

28 What is sample size for complexity penalty?
Mark/recapture models are based on multinomial likelihoods. An observation is a capture history, not a session.
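A toy numerical illustration (the numbers are invented, not from the talk) of why this choice matters when computing AICc:

def aicc(loglik, k, n):
    """AICc = AIC plus the small-sample correction 2k(k+1)/(n - k - 1)."""
    return -2.0 * loglik + 2.0 * k + 2.0 * k * (k + 1) / (n - k - 1)

loglik, k = -432.7, 8               # hypothetical fit of a CJS-type model
n_histories, n_sessions = 250, 10   # 250 capture histories, 10 sessions

# With n = number of capture histories the correction is tiny (~0.6);
# with n = number of sessions it balloons to 2*8*9/1 = 144.
print(aicc(loglik, k, n_histories))
print(aicc(loglik, k, n_sessions))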

29 To Q or not to Q?
IC-based model selection assumes a good model is in the set. Over-dispersion is common in mark/recapture data, usually due to lack of independence of observations, so we don’t have a good model in the set. Parameter estimate bias is generally not influenced, but the fit will appear too good! Model selection will then choose more highly parameterized models than appropriate.

30 Quasi-likelihood approach
χ² goodness-of-fit test for the most general model. If H0 is rejected, estimate the variance inflation factor ĉ = χ²/df. Correct the fit component of the IC and redo the selection.

31 QICs
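The quasi-likelihood criteria referred to here are usually written (following Burnham & Anderson's treatment), with ĉ the variance inflation factor from the previous slide:

QAIC = -2 ln(L)/ĉ + 2k
QAICc = -2 ln(L)/ĉ + 2k + 2k(k+1)/(n - k - 1)

In practice, when ĉ is estimated from the data it is often counted as one additional estimated parameter.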

32 Problems with the quasi-likelihood correction
ĉ is essentially a variance estimate, and variance estimates are unstable without a lot of data. ln(L)/ĉ is a ratio statistic, and ratio statistics are highly unstable if the uncertainty in the denominator is not trivial. Unlike AICc, the bias correction here is estimated, and estimating a bias correction inflates variance!

33 Fixes
Explicitly include the random component in the model, then redo model selection. Use a bootstrapped median ĉ. Or do model selection with a jackknifed prediction likelihood.

34 Large or small model sets?
Problem: model selection bias. When the number of models is large relative to the data size, some models will fit well just by chance.
Small: Burnham & Anderson strongly advocate small model sets representing well-thought-out science; large model sets = “data dredging”.
Large: the science may not be mature, and small model sets may risk missing important factors.

35 Model Selection from Many Candidates, Taper (2004)
SIC(x) = -2 ln(L) + (ln(n) + x) k

36 Performance of SIC(x) with a small data set
N = 50, true covariates = 10, spurious covariates = 30, all models of order ≤ 20: roughly 10^14 candidate models.

37 Chen & Chen (2009): M = subset size, P = # of possible terms
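This appears to refer to Chen & Chen's extended BIC (EBIC); assuming that is the criterion meant, it adds a penalty for the number of candidate models of each size, with M the subset size and P the number of possible terms:

EBIC_γ = -2 ln(L) + M ln(n) + 2γ ln C(P, M),  0 ≤ γ ≤ 1

where C(P, M) is the number of ways of choosing M terms from P, so large model spaces are penalized beyond the ordinary BIC.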

38 Explicit Tradeoff
Small model sets: allow exploration of fine structure and small effects; risk missing unanticipated large effects.
Large model sets: will catch unknown large effects; will miss fine structure.
Large or small model sets is a principled choice that data analysts should make based on their background knowledge and needs.

39 Akaike Weights & Model Averaging
Beware, there be dragons here!

40 Akaike Weights
“Relative likelihood of model i given the data and model set.” “Weight of evidence that model i is most appropriate given the data and model set.”
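For reference, over a set of R candidate models the weights are defined as

w_i = exp(-Δ_i / 2) / Σ_r exp(-Δ_r / 2),  with Δ_i = AIC_i - AIC_min.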

41 Model Averaging
“Conditional” variance: conditional on the selected model.
“Unconditional” variance: actually conditional on the entire model set.
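The model-averaged estimate and the form of “unconditional” variance usually cited in this context (following Buckland et al. as used by Burnham & Anderson) are

θ̄ = Σ_i w_i θ̂_i
var(θ̄) = [ Σ_i w_i √( var(θ̂_i | M_i) + (θ̂_i - θ̄)² ) ]²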

42 Good impulse with Huge Problems
I do not recommend Akaike weights, and I do not recommend model averaging in this fashion. The importance of good models is diminished by adding bad models, and the location of the average is influenced by adding redundant models.

43 Model Redundancy
Model space is not filled uniformly: models tend to be developed in highly redundant clusters. Some points in model space allow few models; some points allow many.

44 Redundant models do not add much information
[Figure: two panels with axes “model adequacy” vs. “model dimension”.]

45 A more reasonable approach
Bootstrap the data. Fit the model set and select the best model. Estimate the derived parameter θ from the best model. Accumulate θ. Repeat within time constraints. Report the mean or median θ with percentile confidence intervals.
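A minimal Python sketch of this procedure, assuming hypothetical fit_and_select and derived_parameter functions as stand-ins for whatever model-fitting machinery is actually used (e.g., calls into MARK):

import numpy as np

def bootstrap_theta(data, fit_and_select, derived_parameter,
                    n_boot=1000, seed=0):
    """Bootstrap the capture histories, reselect the best model each time,
    and return the median derived parameter with a percentile CI."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    n = len(data)
    thetas = []
    for _ in range(n_boot):
        resample = data[rng.integers(0, n, size=n)]   # bootstrap the data
        best_model = fit_and_select(resample)         # fit model set, pick best
        thetas.append(derived_parameter(best_model))  # accumulate theta
    lo, hi = np.percentile(thetas, [2.5, 97.5])       # percentile interval
    return float(np.median(thetas)), (float(lo), float(hi))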

