458 Fitting models to data – III (More on Maximum Likelihood Estimation) Fish 458, Lecture 10.

A Cod Example (model assumptions): The catch is taken in the middle of the year. The catch-at-age and the natural mortality M are known exactly. We can therefore compute all the numbers-at-age given those for the oldest age.

A Cod Example (data assumptions): We have survey data for ages 2-14 (the catch data start in 1959): a trawl survey index, with surveys conducted at the end of January and at the end of March, and a gillnet index, with surveys conducted at the start of the year. We need to account for when the surveys occur, because fishing mortality can be very high. We assume that the age-specific indices are log-normally distributed about the model predictions (indices can't be negative), and that σ differs between the two survey series but is the same for every age within a survey index.

Calculation details – the model: The "terminal" numbers-at-age, i.e. the oldest-age Ns and the Ns in the most recent year (year y_max), determine the whole N matrix.
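The backward recursion itself is not reproduced in this transcript; assuming the standard Pope's cohort-analysis form with a mid-year catch, N_a = N_{a+1} e^M + C_a e^{M/2}, a minimal sketch for a single cohort (with made-up catch numbers, not the cod data) is:

```python
import numpy as np

# Pope's cohort analysis for one cohort: given the catch-at-age C_a, natural
# mortality M, and the numbers at the oldest age, work backwards using
#   N_a = N_{a+1} * exp(M) + C_a * exp(M / 2),
# where the exp(M/2) term reflects the catch being removed mid-year.
def cohort_numbers(catch, M, N_oldest):
    """Backward recursion from the oldest age to the youngest."""
    n_ages = len(catch) + 1
    N = np.zeros(n_ages)
    N[-1] = N_oldest
    for a in range(n_ages - 2, -1, -1):
        N[a] = N[a + 1] * np.exp(M) + catch[a] * np.exp(M / 2)
    return N

# Illustrative (made-up) catch-at-age for one cohort, M = 0.2:
catch = np.array([150.0, 120.0, 80.0, 40.0])
N = cohort_numbers(catch, M=0.2, N_oldest=100.0)
print(N)  # numbers-at-age, youngest first
```

The same recursion, applied along every diagonal (cohort) of the catch matrix, fills in the whole N matrix from the terminal Ns.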

Calculation details – the likelihood: The indices are assumed log-normal, so the negative log-likelihood (ignoring constants) is −ln L = Σ_surveys Σ_{y,a} [ ln σ + (ln I_{y,a} − ln(q N_{y,a}))² / (2σ²) ].
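A sketch of that negative log-likelihood for one survey series, consistent with the log-normal data assumptions above; the index values and predicted numbers below are purely illustrative toy numbers, not the cod data:

```python
import numpy as np

# Negative log-likelihood for a survey index assumed log-normal about the
# model prediction: log I_{y,a} ~ Normal(log(q * N_{y,a}), sigma^2).
# Constant 0.5*log(2*pi) terms are dropped, as they do not affect the fit.
def lognormal_nll(I_obs, N_pred, q, sigma):
    resid = np.log(I_obs) - np.log(q * N_pred)
    return np.sum(np.log(sigma) + 0.5 * (resid / sigma) ** 2)

# Toy numbers (illustrative only):
I_obs = np.array([1.2, 0.9, 0.5])
N_pred = np.array([10.0, 8.0, 4.5])
print(lognormal_nll(I_obs, N_pred, q=0.11, sigma=0.2))
```

In the full model this sum is taken over both survey series, each with its own q and σ.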

Fitting this Model: We reduce the number of parameters included in the Solver search by using analytical solutions for the qs and the σs.

Analytical Solution for q – I: Being able to find analytical solutions for q and σ is a key skill when fitting fisheries population dynamics models.

Analytical Solution for q – II: Repeat this calculation for the second survey index.
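For a log-normal index the maximisation over q and σ can be done in closed form: the MLE of ln q is the mean of ln(I/N_pred), and the MLE of σ² is the mean squared residual about it. A sketch of this standard result, with illustrative numbers:

```python
import numpy as np

# Closed-form maximum likelihood estimates of q and sigma for a log-normal
# index, so the numerical search (e.g. Solver) need not include them:
#   ln(q_hat)   = mean of ln(I_obs / N_pred)
#   sigma_hat^2 = mean squared residual about ln(q_hat * N_pred)
def analytic_q_sigma(I_obs, N_pred):
    log_ratio = np.log(I_obs / N_pred)
    log_q = np.mean(log_ratio)
    sigma = np.sqrt(np.mean((log_ratio - log_q) ** 2))
    return np.exp(log_q), sigma

I_obs = np.array([1.2, 0.9, 0.5])    # illustrative index values
N_pred = np.array([10.0, 8.0, 4.5])  # model-predicted numbers
q_hat, sigma_hat = analytic_q_sigma(I_obs, N_pred)
print(q_hat, sigma_hat)
```

Substituting these back into the negative log-likelihood leaves only the terminal numbers-at-age for the numerical minimiser.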

The Binomial Distribution: The density function is P(Z) = N! / (Z! (N−Z)!) · p^Z (1−p)^(N−Z), where Z is the observed number of outcomes, N is the number of trials, and p is the probability of the event happening on a given trial. This density function is used when we have observed a number of events given a fixed number of trials (e.g. annual deaths in a population of known size). Note that the outcome, Z, is discrete (an integer between 0 and N).
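This density can be evaluated directly with the Python standard library:

```python
from math import comb

# Binomial density: P(Z events in N trials), for Z = 0, ..., N.
def binom_pmf(Z, N, p):
    return comb(N, Z) * p**Z * (1 - p)**(N - Z)

# e.g. the probability of exactly 3 deaths in a population of 10
# when the per-animal death probability is 0.2:
print(binom_pmf(3, 10, 0.2))  # → 0.2013 (approximately)
```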

The Multinomial Distribution: Here we extend the binomial distribution to consider K possible events: P(Z_1, …, Z_K) = N! / (Z_1! ⋯ Z_K!) · p_1^{Z_1} ⋯ p_K^{Z_K}. Note: We use this distribution when we age a sample of the population / catch (N is the sample size) and wish to compare the model prediction of the age distribution of the population / catch with the sample.
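The multinomial density can be evaluated the same way; the counts and predicted proportions below are illustrative:

```python
from math import factorial, prod

# Multinomial density: P(counts Z_1..Z_K) given cell probabilities
# p_1..p_K that sum to 1.
def multinom_pmf(Z, p):
    N = sum(Z)
    coef = factorial(N)
    for z in Z:
        coef //= factorial(z)
    return coef * prod(pi**zi for pi, zi in zip(p, Z))

# Ageing a sample of N = 10 fish against predicted age proportions:
print(multinom_pmf([5, 3, 2], [0.5, 0.3, 0.2]))  # → 0.08505 (approximately)
```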

An Example of The Binomial Distribution – I: 10 animals in each of 17 size-classes have been assessed for maturity. Fit a logistic function (proportion mature as a function of size) to these data.

An Example of The Binomial Distribution – II: We should assume a binomial distribution (because each animal is either mature or immature). The likelihood function is L = ∏_i C(N_i, Z_i) p_i^{Z_i} (1−p_i)^{N_i−Z_i}, where Z_i is the number mature in size-class i, N_i = 10 is the number assessed, and p_i is the predicted proportion mature. The negative log-likelihood (dropping the constant combinatorial term) is −ln L = −Σ_i [ Z_i ln p_i + (N_i − Z_i) ln(1 − p_i) ].
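A minimal sketch of the whole fit, minimising this binomial negative log-likelihood. The logistic parameterisation (midpoint L50 and slope δ) and the simulated data are assumptions standing in for the slide's data set:

```python
import numpy as np
from scipy.optimize import minimize

# Simulated data: 10 animals in each of 17 size-classes (illustrative).
sizes = np.arange(10.0, 95.0, 5.0)     # 17 size-class midpoints
N = np.full(sizes.shape, 10)
rng = np.random.default_rng(1)
true_p = 1.0 / (1.0 + np.exp(-(sizes - 50.0) / 5.0))
Z = rng.binomial(N, true_p)            # number mature per size-class

def neg_log_like(theta):
    L50, delta = theta
    p = 1.0 / (1.0 + np.exp(-(sizes - L50) / delta))
    p = np.clip(p, 1e-12, 1 - 1e-12)   # guard against log(0)
    # Binomial NLL, dropping the constant combinatorial term:
    return -np.sum(Z * np.log(p) + (N - Z) * np.log(1 - p))

fit = minimize(neg_log_like, x0=[40.0, 10.0], method="Nelder-Mead")
print(fit.x)  # estimated (L50, delta), close to the true (50, 5)
```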

An Example of The Binomial Distribution – III

An Example of The Binomial Distribution – III (continued): An alternative to the binomial distribution is the normal distribution. The negative log-likelihood function for this case (ignoring constants) is −ln L = Σ_i [ ln σ + (Z_i − N_i p_i)² / (2σ²) ]. Why is the normal distribution inappropriate for this problem?

The Beta Distribution: The density function is f(x) = Γ(α+β) / (Γ(α) Γ(β)) · x^{α−1} (1−x)^{β−1} for 0 ≤ x ≤ 1. The mean of this distribution is α / (α + β).
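A quick check of the density and its mean (α = 2, β = 3 chosen arbitrarily):

```python
from math import gamma

# Beta density on 0 < x < 1 with shape parameters a and b.
def beta_pdf(x, a, b):
    return gamma(a + b) / (gamma(a) * gamma(b)) * x**(a - 1) * (1 - x)**(b - 1)

# Numerically integrate x * f(x) over (0, 1) and compare with a / (a + b):
xs = [i / 10000 for i in range(1, 10000)]
mean = sum(x * beta_pdf(x, 2, 3) for x in xs) / 10000
print(mean)  # ≈ 0.4 = 2 / (2 + 3)
```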

The Shapes of the Beta Distribution

Recap Time. To apply Maximum Likelihood we:
1. Find a model for the underlying process.
2. Identify how the data relate to this model (i.e. which error / sampling distribution to use).
3. Write down the likelihood function.
4. Write down the negative log-likelihood.
5. Minimize the negative log-likelihood.
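The five steps can be sketched end-to-end on a toy problem, estimating the mean and σ of simulated normal data (the data and starting values are arbitrary):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.5, size=200)  # steps 1-2: model + data

def nll(theta):                                  # steps 3-4: negative log-likelihood
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                    # keeps sigma positive
    return np.sum(np.log(sigma) + 0.5 * ((data - mu) / sigma) ** 2)

fit = minimize(nll, x0=[0.0, 0.0])               # step 5: minimise
mu_hat, sigma_hat = fit.x[0], np.exp(fit.x[1])
print(mu_hat, sigma_hat)  # close to the sample mean and std deviation
```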