2 Statistical model building. Marian Scott, Dept of Statistics, University of Glasgow. Glasgow, Sept 2007

3 Outline of presentation
- Statistical models: what are the principles
  - describing variation
  - empiricism
- Fitting models: calibration
- Testing models: validation or verification
- Quantifying and apportioning variation in model and data
- Stochastic and deterministic models
- Model choice

4 "All models are wrong but some are useful" (and some are more useful than others; likewise, all data are useful, but some are more varied than others): a quote from the famous statistician George Box

5 Step 1
- why do you want to build a model: what is your objective?
- what data are available and how were they collected?
- is there a natural response or outcome, and other explanatory variables or covariates?

6 Modelling objectives
- explore relationships
- make predictions
- improve understanding
- test hypotheses

7 Conceptual system [diagram: Data and Model linked to Policy via inputs & parameters, model results and feedbacks]

8 Why model?
Purposes of modelling:
- Describe/summarise
- Predict: what if...
- Test hypotheses
- Manage
What is a good model?
- Simple, realistic, efficient, reliable, valid

9 Value judgements
Different criteria are of unequal importance. The key comparison is often against observational data, but such comparisons must include the model uncertainties and the uncertainties on the observational data (touched on later).

10 Questions we ask about models
- Is the model valid? Are the assumptions reasonable?
- Does the model make sense based on best scientific knowledge?
- Is the model credible? Do the model predictions match the observed data?
- How uncertain are the results?

11 Statistical models
- Always include an error term to describe random variation
- Empirical
- Descriptive and predictive
- Model building goal: the simplest model which is adequate
- Used for inference

12 Physical/process based models
- Use best scientific knowledge
- May not explicitly include an error term, or any random variation
- Descriptive and predictive
- Goal may not be the simplest model
- Not used for inference

13 Models
- Mathematical (deterministic/process-based) models tend to be complex, and to ignore important sources of uncertainty
- Statistical models tend to be empirical, and to ignore much of the biological/physical/chemical knowledge

14 Stages in modelling
- Design and conceptualisation:
  - Visualisation of structure
  - Identification of processes (variable selection)
  - Choice of parameterisation
- Fitting and assessment:
  - Parameter estimation (calibration)
  - Goodness of fit

15 a visual model: atmospheric flux of pollutants
- Atmospheric pollutants dispersed over Europe
- In the 1970s, considerable environmental damage caused by acid rain
- International action
- Development of the EMEP programme, models and measurements

16 The mathematical flux model: the Monin-Obukhov length, whose standard form (matching the variable list below) is
L = -(ρ c_p T u*³) / (k g H)
where
- L: Monin-Obukhov length
- u*: friction velocity of the wind
- c_p: specific heat constant (= 1.01)
- ρ: air density constant (= 1246 g m⁻³)
- T: air temperature (in Kelvin)
- k: von Kármán constant (= 0.41)
- g: gravitational acceleration (= 9.81 m/s²)
- H: the rate of heat transfer per unit area
- gasht: current height at which measurements are taken
- d: zero plane displacement
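A minimal numerical sketch (not from the presentation): computing L directly from the inputs above, assuming the standard form and SI-consistent units; the function name and example values are illustrative.

```python
# Sketch: Monin-Obukhov length, L = -(rho * c_p * T * u_star**3) / (k * g * H),
# assuming SI-consistent units. Names and example values are illustrative.

def monin_obukhov_length(u_star, T, H,
                         rho=1.246,    # air density, kg/m^3 (slide's 1246 g/m^3)
                         c_p=1010.0,   # specific heat of air, J/(kg K) (slide's 1.01 J/(g K))
                         k=0.41,       # von Karman constant
                         g=9.81):      # gravitational acceleration, m/s^2
    """Return the Monin-Obukhov length L in metres."""
    return -(rho * c_p * T * u_star ** 3) / (k * g * H)

# e.g. friction velocity 0.3 m/s, air temperature 285 K, heat flux 100 W/m^2
print(monin_obukhov_length(u_star=0.3, T=285.0, H=100.0))  # approx -24 m
```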

17 what would a statistician do if confronted with this problem?
- ask what the objective of modelling is
- look at the data, and data quality
- try to understand the measurement processes
- think about how the scientific knowledge, the conceptual model, relates to what we have measured
- think about uncertainty

18 Step 2: understand your data
- study your data
- learn its properties
- tools: graphical

19 measured atmospheric fluxes for 1997
The measured fluxes for 1997 are still noisy. Is there a statistical signal, and at what timescale?

20 The Monitoring Networks
- EMEP: 15 stations in the United Kingdom and 4 in the Republic of Ireland, from 1978 to 1998 (aggregated to monthly means). Sulphur and nitrogen species (SO2, SO4, NO2, NO3, NH4, HNO3+NO3, NH3+NH4).
- UK National Air Quality Information Archive: 8 stations in the United Kingdom corresponding to some of the EMEP stations, from 1983 to 2007 (hourly data, subsequently aggregated to monthly day and night means). Ozone (O3).

21 GB02 Eskdalemuir, GB03 Goonhilly, GB04 Stoke Ferry, GB05 Ludlow, GB06 Lough Navar, GB07 Barcombe Mills, GB13 Yarner Wood, GB14 High Muffles, GB15 Strath Vaich Dam, GB16 Glen Dye, GB36 Harwell, GB37 Ladybower, GB38 Lullington Heath, GB43 Narberth, GB45 Wicken Fen, IE01 Valentia Obs., IE02 Turlough Hill, IE03 The Burren, IE04 Ridge of Capard

22 THE DATA
What evidence is there of a trend in the atmospheric concentrations?
Outliers, missing values, discontinuities

25 Loch Leven [figure slide]

28 Key properties of any measurement
- Accuracy refers to the deviation of the measurement from the true value
- Precision refers to the variation in a series of replicate measurements (obtained under identical conditions)

29 Accuracy and precision [diagram: target plots contrasting accurate-but-imprecise with inaccurate-but-precise measurement series]

30 Evaluation of precision
- Analysis of the instrumentation method to make a single measurement, and the propagation of any errors
- Repeat measurements (true replicates): using homogeneous material, repeatedly subsampling, etc.
- Precision is linked to variance (standard deviation); see the sketch below
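A minimal sketch of the replicate-based route, with hypothetical replicate values:

```python
# Sketch: precision as the standard deviation of replicate measurements
# taken under identical conditions. The values below are hypothetical.
import statistics

replicates = [10.2, 10.4, 10.1, 10.3, 10.2]  # true replicates of one sample

mean = statistics.mean(replicates)
sd = statistics.stdev(replicates)  # sample standard deviation summarises precision

print(f"mean = {mean:.2f}, sd = {sd:.3f}")
```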

31 The nature of measurement
- All measurement is subject to uncertainty
- Analytical uncertainty reflects the fact that every time a measurement is made (under identical conditions), the result is different
- Sampling uncertainty represents the natural variation in the organism within the environment

32 The error and uncertainty in a measurement
- The error is a single value, which represents the difference between the measured value and the true value
- The uncertainty is a range of values, and describes the errors which might have been observed were the measurement repeated under IDENTICAL conditions
- Error (and uncertainty) includes a combination of variance and bias

33 Effect of uncertainties
Lack of observations contributes to:
- uncertainties in input data
- uncertainty in model parameter values
Conflicting evidence contributes to:
- uncertainty about model form
- uncertainty about validity of assumptions

34 Step 3: build the statistical model
- Outcomes or responses: sometimes referred to as dependent variables
- Causes or explanations: the conditions or environment within which the outcomes or responses have been observed; sometimes referred to as independent variables, but more commonly known as covariates

35 Statistical models
- In experiments, many of the covariates have been determined by the experimenter, but some may be aspects that the experimenter has no control over yet are relevant to the outcomes or responses
- In observational studies, the covariates are usually not under the control of the experimenter but are recorded as possible explanations of the outcomes or responses
- we may not know which covariates are important
- recognise that we may build several models before making the final choice

36 Specifying a statistical model
Models specify the way in which outcomes and causes link together, e.g.
Metabolite = Temperature
The = sign does not indicate equality in a mathematical sense, and there should be an additional item on the right-hand side, giving the formula:
Metabolite = Temperature + Error

37 Specifying a statistical model
Metabolite = Temperature + Error
In mathematical terms, there will be some unknown parameters to be estimated, and some assumptions will be made about the error distribution:
Metabolite = α + β Temperature + ε, with ε ~ N(0, σ²): appropriate perhaps?
σ, α, β are model parameters and are unknown

38 Model calibration
Statisticians tend to talk about model fitting; calibration means something else to them.
Methods: least squares or maximum likelihood.
- least squares: find the parameter estimates that minimise the sum of squares, SS = Σ(observed y - model fitted y)²
- maximum likelihood: find the parameter estimates that maximise the likelihood of the data
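A minimal sketch of least-squares calibration for the metabolite-temperature model of the previous slides, using the closed-form simple-linear-regression estimates; the data values are hypothetical, not from the presentation.

```python
# Sketch: least-squares fit of Metabolite = alpha + beta*Temperature + error.
# The data below are hypothetical.

temperature = [5.0, 8.0, 12.0, 15.0, 20.0, 25.0]
metabolite = [2.1, 2.9, 4.2, 4.8, 6.3, 7.9]

n = len(temperature)
x_bar = sum(temperature) / n
y_bar = sum(metabolite) / n

# beta_hat minimises SS = sum((observed y - model fitted y)^2)
sxy = sum((x - x_bar) * (y - y_bar) for x, y in zip(temperature, metabolite))
sxx = sum((x - x_bar) ** 2 for x in temperature)
beta_hat = sxy / sxx
alpha_hat = y_bar - beta_hat * x_bar

fitted = [alpha_hat + beta_hat * x for x in temperature]
ss = sum((y - f) ** 2 for y, f in zip(metabolite, fitted))
print(f"alpha = {alpha_hat:.3f}, beta = {beta_hat:.3f}, SS = {ss:.4f}")
```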

39 Calibration: using the data
A good idea, if possible, is to have a training set and a test set: split the data (e.g. 90%/10%).
Fit the model using the training set; evaluate the model using the test set.
Why? Because if we assess how well the model performs on the data that were used to fit it, then we are being over-optimistic.
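A minimal sketch of the 90%/10% split on simulated data, reusing a closed-form least-squares fit; all names and values are illustrative, not the presenter's.

```python
# Sketch: random 90%/10% train/test split. Fit on the training set only,
# then judge the model by its error on the held-out 10%. Data are simulated.
import random

random.seed(1)
data = [(x, 1.5 + 0.4 * x + random.gauss(0, 0.5)) for x in range(100)]
random.shuffle(data)

cut = int(0.9 * len(data))
train, test = data[:cut], data[cut:]

def fit_least_squares(pairs):
    """Closed-form least squares for y = a + b*x."""
    n = len(pairs)
    xb = sum(x for x, _ in pairs) / n
    yb = sum(y for _, y in pairs) / n
    b = sum((x - xb) * (y - yb) for x, y in pairs) / sum((x - xb) ** 2 for x, _ in pairs)
    return yb - b * xb, b

a, b = fit_least_squares(train)
test_mse = sum((y - (a + b * x)) ** 2 for x, y in test) / len(test)
print(f"test MSE = {test_mse:.3f}")  # honest estimate: test data never used in fitting
```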

40 How good is my statistical model?
What criteria do we use to judge the value of our model?
- may depend on what the model was built to do
Obvious ones:
- Closeness to the observed data
- Goodness of predictions at previously unobserved covariate values
- % variation in response explained (R²)
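For the R² criterion, a minimal sketch with hypothetical observed and fitted values:

```python
# Sketch: % variation explained, R^2 = 1 - RSS/TSS.
# Observed and fitted values below are hypothetical.

observed = [2.1, 2.9, 4.2, 4.8, 6.3, 7.9]
fitted = [2.2, 3.0, 4.1, 4.9, 6.4, 7.7]  # hypothetical model fits

y_bar = sum(observed) / len(observed)
rss = sum((y - f) ** 2 for y, f in zip(observed, fitted))  # residual SS
tss = sum((y - y_bar) ** 2 for y in observed)              # total SS

print(f"R^2 = {1 - rss / tss:.3f}")
```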

41 Model validation
What is validation? Fit the model using the training set; evaluate the model using the test set.
Why? Because if we assess how well the model performs on the data that were used to fit it, then we are being over-optimistic.
Assessment of goodness of fit?
- residual sums of squares, mean square error for prediction

42 Model validation
Splitting the data set: is it possible?
- cross-validation: leave-one-out, leave-k-out
- split at random, a small % kept aside for testing
- other methods: bootstrap and jack-knife
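A minimal sketch of leave-one-out cross-validation, again on simulated data (all values hypothetical):

```python
# Sketch: leave-one-out cross-validation for a straight-line model.
# Each point is held out in turn; the model is fitted to the rest and
# used to predict the held-out point. Data are simulated.
import random

random.seed(2)
data = [(x, 1.5 + 0.4 * x + random.gauss(0, 0.5)) for x in range(30)]

def fit_least_squares(pairs):
    n = len(pairs)
    xb = sum(x for x, _ in pairs) / n
    yb = sum(y for _, y in pairs) / n
    b = sum((x - xb) * (y - yb) for x, y in pairs) / sum((x - xb) ** 2 for x, _ in pairs)
    return yb - b * xb, b

squared_errors = []
for i, (x_out, y_out) in enumerate(data):
    a, b = fit_least_squares(data[:i] + data[i + 1:])  # fit without point i
    squared_errors.append((y_out - (a + b * x_out)) ** 2)

print(f"LOO mean squared prediction error = {sum(squared_errors) / len(squared_errors):.3f}")
```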

43 an aside: how well should models agree?
- 6 physical-deterministic ocean models (process based: transport, sedimentary processes, numerical solution scheme, grid size) used to predict the dispersal of a pollutant
- Results to be used to determine a remediation policy for an illegal dumping of radioactive waste
- The "what if" scenario investigation
- The models differ in their detail and also in their spatial scale

44 Predictions of levels of cobalt-60
- Different models, same input data
- Predictions vary by considerable margins
- Magnitude of variation a function of spatial distribution of sites

45 model ensembles
It is becoming increasingly common in climate and meteorology to make many model runs (different models, different starting conditions) and then to average the results. Why would we do this? See the sketch below.
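A minimal sketch of ensemble averaging, with hypothetical predictions; the usual motivation is that the average damps the idiosyncrasies of any single model run.

```python
# Sketch: a simple (unweighted) model ensemble, averaging predictions from
# several runs/models at each time point. All numbers are hypothetical.

runs = [
    [1.2, 1.5, 1.9, 2.4],  # model A
    [1.0, 1.6, 2.1, 2.2],  # model B, different starting conditions
    [1.3, 1.4, 2.0, 2.5],  # model C
]

# averaging tends to damp the idiosyncrasies of any single model
ensemble_mean = [sum(vals) / len(vals) for vals in zip(*runs)]
print(ensemble_mean)  # [1.1666..., 1.5, 2.0, 2.3666...]
```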

46 the statistical approach to model building and selection
In a regression situation, we may have many potential explanatory variables; how do we choose which to include in the final model?
- the answer may depend on purpose, and on how many explanatory variables there are
- identify variables that can be omitted on statistical grounds (no evidence of effect): is an effect statistically significant? (see the regression sessions with Adrian for testing and CI approaches)

47 the statistical approach to model building and selection
- automatic selection procedures can be useful but also potentially dangerous (e.g. stepwise regression, best-subset regression)
- they often identify the best model, under a defined criterion (such as smallest residual sum of squares), within a family of models
- but this "best" model could in an absolute sense be poor

48 the statistical approach to model building and selection
- other statistical criteria exist for model choice: AIC, DIC, BIC. Based on likelihood approaches, they can be used to compare non-nested models (i.e. the parameter set of one model is not contained within the parameter set of the larger model)
- need to be careful of dredging for significance
- remember: statistical significance is not always equal to practical importance

49 Information criterion
In the general case, the AIC is
AIC = 2k - 2 ln(L)
where k is the number of parameters and L is the likelihood function.
If we assume that the model errors are normally and independently distributed, let n be the number of observations and RSS the residual sum of squares; then the AIC becomes
AIC = 2k + n ln(RSS/n)

50 AIC
Increasing the number of free parameters to be estimated improves the goodness of fit. Hence the AIC not only rewards goodness of fit but also includes a penalty that is an increasing function of the number of estimated parameters. This penalty discourages overfitting. The preferred model is the one with the lowest AIC value. The AIC methodology attempts to find the model that best explains the data with a minimum of free parameters.
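A minimal sketch of the RSS form of the AIC used to compare two candidate models fitted to the same data; the k, n and RSS values are hypothetical.

```python
# Sketch: AIC under normal, independent errors, AIC = 2k + n*ln(RSS/n),
# used to compare two candidate models on the same data (numbers hypothetical).
import math

def aic(k, n, rss):
    """k = number of parameters, n = observations, rss = residual sum of squares."""
    return 2 * k + n * math.log(rss / n)

n = 50
print(aic(k=2, n=n, rss=12.0))  # simpler model
print(aic(k=5, n=n, rss=10.5))  # more parameters, somewhat better fit

# prefer the model with the lower AIC: extra parameters must buy
# enough improvement in fit to overcome the 2k penalty
```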

51 blending the statistical modelling approach to deterministic models
- a relatively new area (at least for statisticians)
- phrased in a Bayesian framework (see the later session on Bayesian methods)
- makes use of data (very important) and data modelling
- still at the research stage (probably most used in climatology)

52 in summary
- model building is iterative
- should combine statistical skills and scientific knowledge
- think about your objectives; think about the data
- model selection: many different approaches
- uncertainty is a factor at all stages and should be considered

