Lecture 2: Parameter Estimation and Evaluation of Support.


Parameter Estimation
"The problem of estimation is of more central importance (than hypothesis testing)… for in almost all situations we know that the effect whose significance we are measuring is perfectly real, however small; what is at issue is its magnitude." (Edwards, 1992, p. 2)
"An insignificant result, far from telling us that the effect is non-existent, merely warns us that the sample was not large enough to reveal it." (Edwards, 1992, p. 2)

Parameter Estimation
• Finding Maximum Likelihood Estimates (MLEs)
  - Local optimization (optim)
    » Gradient methods
    » Simplex (Nelder-Mead)
  - Global optimization
    » Simulated annealing (anneal)
    » Genetic algorithms (rgenoud)
• Evaluating the strength of evidence ("support") for different parameter estimates
  - Support intervals
    » Asymptotic support intervals
    » Simultaneous support intervals
  - The shape of likelihood surfaces around MLEs

Parameter estimation: finding peaks on likelihood "surfaces"
The variation in likelihood across the set of possible parameter values defines a likelihood "surface". The goal of parameter estimation is to find the peak of that surface (optimization).

Local vs. Global Optimization
• "Fast" local optimization methods
  - A large family of methods, widely used for nonlinear regression in commercial software packages
• "Brute force" global optimization methods
  - Grid search
  - Genetic algorithms
  - Simulated annealing
[Figure: a likelihood surface with both a local optimum and the global optimum]

Local Optimization – Gradient Methods
• Derivative-based (Newton-Raphson) methods
General approach: vary the parameter estimate systematically and search for a zero slope in the first derivative of the likelihood function (using numerical methods to estimate the derivative, and checking the second derivative to make sure it is a maximum, not a minimum).
[Figure: likelihood surface]
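As a concrete illustration (not from the original slides), here is a minimal R sketch using optim, the local optimizer named on the outline slide. The data, model, and parameter names are hypothetical; BFGS is a quasi-Newton gradient method, and since no gradient function is supplied, optim estimates the derivatives numerically.

# Hypothetical data: y depends linearly on x
set.seed(1)
x <- runif(50)
y <- 2 + 3 * x + rnorm(50, sd = 0.5)

# Negative log-likelihood of a normal linear model
# (optim minimizes by default, so we negate)
negll <- function(p) {
  mu <- p[1] + p[2] * x
  -sum(dnorm(y, mean = mu, sd = exp(p[3]), log = TRUE))  # exp() keeps sd > 0
}

# Gradient-based local optimization (quasi-Newton, numerical derivatives)
fit <- optim(par = c(0, 0, 0), fn = negll, method = "BFGS")
fit$par  # MLEs of the intercept, slope, and log(sd)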

Local Optimization – No Gradient
• The simplex (Nelder-Mead) method
  - Much simpler to program
  - Does not require calculation or estimation of a derivative
  - No general theoretical proof that it works (but lots of happy practitioners…)
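For comparison, the same hypothetical problem with the derivative-free simplex method, reusing negll from the sketch above; Nelder-Mead is optim's default method, so only the method argument changes.

# Simplex (Nelder-Mead) search: no derivatives needed
fit_nm <- optim(par = c(0, 0, 0), fn = negll, method = "Nelder-Mead")
fit_nm$par  # should land very close to the gradient-method MLEs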

Global Optimization
"Virtually nothing is known about finding global extrema in general."
"There are tantalizing hints that so-called 'annealing methods' may lead to important progress on global (optimization)…"
– Press et al. (1986), Numerical Recipes

Global Optimization – Grid Searches
• The simplest form of optimization (and rarely used in practice): systematically search parameter space at a grid of points
• Can be useful for visualizing the broad features of a likelihood surface (a sketch follows)
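A minimal grid-search sketch, reusing the hypothetical x and y from the earlier examples; sd is held fixed so the grid stays two-dimensional, and the grid bounds are arbitrary.

# Evaluate the log-likelihood on a coarse grid of the two linear parameters
grid <- expand.grid(a = seq(0, 4, length.out = 41),
                    b = seq(0, 6, length.out = 41))
grid$loglik <- apply(grid, 1, function(p)
  sum(dnorm(y, mean = p[1] + p[2] * x, sd = 0.5, log = TRUE)))
grid[which.max(grid$loglik), ]  # best grid point (crude estimate of the peak)
# contour(matrix(grid$loglik, 41, 41)) gives a quick picture of the surface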

Global Optimization – Genetic Algorithms
• Based on a fairly literal analogy with evolution (see the toy sketch below):
  - Start with a reasonably large "population" of parameter sets
  - Calculate the "fitness" (likelihood) of each individual set of parameters
  - Create the next generation of parameter sets based on the fitness of the "parents", and various rules for recombination of subsets of parameters (genes)
  - Let the population evolve until fitness reaches a maximum asymptote
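A toy genetic-algorithm sketch mirroring those steps (illustrative only, not the rgenoud implementation named on the outline slide). It reuses x and y from the earlier sketches, maximizes the log-likelihood over the intercept and slope with sd held fixed, and all bounds and tuning constants are hypothetical.

loglik <- function(p) sum(dnorm(y, mean = p[1] + p[2] * x, sd = 0.5, log = TRUE))

n <- 100                                           # population size
pop <- cbind(runif(n, -5, 10), runif(n, -5, 10))   # random initial "population"
for (gen in 1:50) {
  fitness <- apply(pop, 1, loglik)                 # "fitness" = log-likelihood
  w <- exp(fitness - max(fitness))                 # fitness-proportional selection weights
  parents <- pop[sample(n, n, replace = TRUE, prob = w), ]
  mates   <- pop[sample(n, n, replace = TRUE, prob = w), ]
  cross <- runif(n) < 0.5                          # recombination: swap the slope "gene"
  parents[cross, 2] <- mates[cross, 2]
  pop <- parents + matrix(rnorm(2 * n, sd = 0.1), n, 2)  # mutation
}
pop[which.max(apply(pop, 1, loglik)), ]            # best individual ~ MLE of (a, b)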

Global Optimization – Simulated Annealing
• Analogy with the physical process of annealing:
  - Start the process at a high "temperature"
  - Gradually reduce the temperature according to an annealing schedule
• Always accept uphill moves (i.e. an increase in likelihood)
• Accept downhill moves according to the Metropolis algorithm:

    p = exp(−|Δlh| / t)

  where p = probability of accepting the downhill move, Δlh = magnitude of the change in likelihood, and t = temperature
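The acceptance rule as a small R helper (an illustrative restatement of the formula above, with hypothetical names):

# delta_lh: change in log-likelihood of a proposed downhill move; t: temperature
accept_downhill <- function(delta_lh, t) {
  p <- exp(-abs(delta_lh) / t)  # acceptance probability shrinks as t cools
  runif(1) < p                  # TRUE = accept the downhill move anyway
}
# e.g. a 1-unit drop is accepted with p ~ 0.61 at t = 2, but only p ~ 4.5e-5 at t = 0.1
accept_downhill(-1, 2)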

Effect of temperature (t)
[Figure: how the temperature t changes the probability of accepting downhill moves]

Simulated Annealing in practice…
References:
- Goffe, W. L., G. D. Ferrier, and J. Rogers. 1994. Global optimization of statistical functions with simulated annealing. Journal of Econometrics 60:65–99.
- Corana, A., M. Marchesi, C. Martini, and S. Ridella. 1987. Minimizing multimodal functions of continuous variables with the "simulated annealing" algorithm. ACM Transactions on Mathematical Software 13:262–280.
A version with automatic adjustment of range…
[Figure: current value between the lower and upper bounds, with the search range (step size) adjusted automatically]

Constraints – setting limits for the search…
• Biological limits: values that make no sense biologically (be careful…)
• Algebraic limits: values for which the model is undefined (e.g. dividing by zero…)
Bottom line: global optimization methods let you cast your net widely, at the cost of computer time…

Simulated Annealing – Initialization
Set (typical values shown in parentheses):
• Annealing schedule
  » Initial temperature (t) (3.0)
  » Rate of reduction in temperature (rt) (0.95)
  » Interval between drops in temperature (nt) (100)
  » Interval between changes in range (ns) (20)
• Parameter values
  » Initial values (x)
  » Upper and lower bounds (lb, ub)
  » Initial range (vm)

Simulated Annealing – Step 1
Pick a new set of parameter values (by varying just one parameter). vm is the range, lb is the lower bound, ub is the upper bound.

begin {a single iteration}
  {copy the current parameter array (x) to a temporary holder (xp) for this iteration}
  xp := x;
  {choose a new value for the parameter in use (puse)}
  xp[puse] := x[puse] + ((random*2 - 1)*vm[puse]);
  {check if the new value is out of bounds}
  if xp[puse] < lb[puse] then
    xp[puse] := x[puse] - (random * (x[puse]-lb[puse]));
  if xp[puse] > ub[puse] then
    xp[puse] := x[puse] + (random * (ub[puse]-x[puse]));

Simulated Annealing – Step 2
Accept the step if it leads uphill…

  {call the likelihood function with the new set of parameter values}
  likeli(xp, fp); {fp = new likelihood}
  {accept the new values if likelihood increases or at least stays the same}
  if (fp >= f) then
  begin
    x := xp;
    f := fp;
    nacp[puse] := nacp[puse] + 1;
    if (fp > fopt) then
    begin {if this is a new maximum, update the maximum likelihood}
      xopt := xp;
      fopt := fp;
      opteval := eval;
      BestFit; {update display of the maximum}
    end;
  end

Simulated Annealing – Step 3
Use the Metropolis algorithm to decide whether to accept a downhill step…

  else {use Metropolis criteria to determine whether to accept a downhill move}
  begin
    try
      {fp < f, so the code below is a shortcut for exp(-1.0*(abs(f-fp)/t))}
      p := exp((fp-f)/t); {t = current temperature}
    except
      on EUnderflow do p := 0;
    end;
    pp := random;
    if pp < p then
    begin
      x := xp;
      f := fp;
      nacp[puse] := nacp[puse] + 1;
    end;

Simulated Annealing – Step 4
Periodically adjust the range (vm) within which new steps are chosen… ns is typically ~20. This part is strictly ad hoc…

  {after nused * ns cycles, adjust vm so that about half of the evaluations are accepted}
  if eval mod (nused*ns) = 0 then
  begin
    for i := 0 to npmax do
      if xvary[i] then
      begin
        ratio := nacp[i]/ns;
        {c controls the adjustment of vm (range) - references suggest setting it at 2.0}
        if ratio > 0.6 then
          vm[i] := vm[i]*(1.0 + c[i]*((ratio - 0.6)/0.4))
        else if ratio < 0.4 then
          vm[i] := vm[i]/(1.0 + c[i]*((0.4 - ratio)/0.4));
        if vm[i] > (ub[i]-lb[i]) then
          vm[i] := ub[i] - lb[i];
      end;
    {reset the acceptance counts; same index range as the adjustment loop above}
    for i := 0 to npmax do
      nacp[i] := 0;
  end;

Effect of c on adjusting the range…
[Figure: size of the range adjustment as a function of the acceptance ratio, for different values of c]

Simulated Annealing Code – Final Step
Reduce the "temperature" according to the annealing schedule. rt = fractional reduction in temperature at each drop in temperature; I typically set nt = 100 (a very slow annealing). NOTE: Goffe et al. restart the search at the previous MLE estimates each time the temperature drops… (I don't.)

  {after nused * ns * nt cycles, reduce the temperature t}
  if eval mod (nused*ns*nt) = 0 then
  begin
    t := rt * t;
    {store the current maximum likelihood in the history list}
    lhist[eval div (nused*ns*nt)].iter := eval;
    lhist[eval div (nused*ns*nt)].lhood := fopt;
  end;

How many iterations?
• Red maple leaf litterfall (6 parameters): 500,000 is way more than necessary!
• Logistic regression of windthrow susceptibility (188 parameters): 5 million is not enough!
What would constitute convergence?
[Figures: likelihood traces over iterations for the two examples]

Optimization – Summary
• No hard and fast rules for any optimization: be willing to explore alternate options.
• Be wary of the initial values used in local optimization when the model is at all complicated.
• How about a hybrid approach? Start with simulated annealing, then switch to a local optimization…

Evaluating the strength of evidence for the MLE
Now that you have an MLE, how should you evaluate it? (Hint: think about the shape of the likelihood function, not just the MLE.)

Strength of evidence for particular parameter estimates – "Support"
• Likelihood provides an objective measure of the strength of evidence for different parameter estimates…
• Log-likelihood = "support" (Edwards 1992)

Fisher's "Score" and "Information"
• "Score" (a function) = the first derivative (slope) of the log-likelihood function
  - So S(θ) = 0 at the maximum likelihood estimate of θ
• "Information" (a number) = −1 × the second derivative (curvature) of the log-likelihood function, evaluated at the MLE
  - A measure of how steeply the likelihood drops off as you move away from the MLE
  - Under general conditions, the inverse of the information approximates the variance of the parameter estimate…
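In symbols, for a single parameter θ with log-likelihood ℓ(θ) (a standard formulation, added here for reference):

  S(\theta) = \frac{d\,\ell(\theta)}{d\theta}, \qquad S(\hat\theta) = 0

  I(\hat\theta) = -\left.\frac{d^2 \ell(\theta)}{d\theta^2}\right|_{\theta=\hat\theta}, \qquad \mathrm{Var}(\hat\theta) \approx \frac{1}{I(\hat\theta)}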

Profile Likelihood
• Evaluate the support for a range of values of a given parameter by treating all other parameters as "nuisance" parameters and holding them at their MLEs…
[Figure: a likelihood surface over parameter 1 and parameter 2, with the profile for one parameter traced along it]

Asymptotic vs. Simultaneous M-Unit Support Limits
• Asymptotic support limits (based on the profile likelihood):
  - Hold all other parameters at their MLE values, and systematically vary the remaining parameter until the likelihood declines by a chosen amount (m)… (see the sketch below)
What should m be? (2 is a good number, and is roughly analogous to a 95% CI.)
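A sketch of that recipe in R for the slope of the hypothetical model, reusing negll and fit from the earlier optim example; the scan range and m = 2 are illustrative choices.

m <- 2
mle <- fit$par
maxll <- -fit$value                                   # maximized log-likelihood
slope_support <- function(b)                          # vary the slope only,
  -negll(c(mle[1], b, mle[3])) - maxll                # others held at their MLEs
b_grid <- seq(mle[2] - 1, mle[2] + 1, length.out = 401)
sup <- sapply(b_grid, slope_support)
range(b_grid[sup > -m])                               # approximate 2-unit support limits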

Asymptotic vs. Simultaneous M-Unit Support Limits
• Simultaneous (a resampling method): draw a very large number of random sets of parameters and calculate the log-likelihood of each. The m-unit simultaneous support limits for parameter x_i are the upper and lower limits of x_i among the parameter sets whose support is within m units of the maximum… (see the sketch below)
In practice, this can require an enormous number of iterations if there are more than a few parameters.
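A brute-force sketch for the same hypothetical model (reusing negll and maxll from above); the search bounds are arbitrary, and only a tiny fraction of the draws land within 2 units of the maximum, which illustrates why so many draws are needed.

n_draw <- 200000
draws <- cbind(runif(n_draw, 0, 4),      # intercept
               runif(n_draw, 1, 5),      # slope
               runif(n_draw, -2, 1))     # log(sd)
ll <- -apply(draws, 1, negll)
keep <- ll >= maxll - 2                  # parameter sets within 2 units of the maximum
range(draws[keep, 2])                    # simultaneous 2-unit limits for the slope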

Asymptotic vs. Simultaneous Support Limits
[Figure: a hypothetical likelihood surface for two parameters, marking a 2-unit drop in support and showing both the asymptotic and the simultaneous 2-unit support limits for parameter 1]

Other measures of the strength of evidence for different parameter estimates
• Edwards (1992, Chapter 5) describes various measures of the "shape" of the likelihood surface in the vicinity of the MLE… How pointed is the peak?

Bootstrap methods
• Bootstrap methods can be used to estimate the variances of parameter estimates. In simple terms (a sketch follows):
  » Generate many replicates of the dataset by sampling with replacement ("bootstraps")
  » Estimate the parameters for each replicate dataset
  » Use the variance of those parameter estimates as a bootstrap estimate of the variance
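A minimal bootstrap sketch following those steps, reusing the hypothetical x and y from the earlier examples (200 replicates for illustration; more would be used in practice).

boot_est <- replicate(200, {
  i <- sample(length(y), replace = TRUE)      # resample observations with replacement
  nll <- function(p)
    -sum(dnorm(y[i], mean = p[1] + p[2] * x[i], sd = exp(p[3]), log = TRUE))
  optim(c(0, 0, 0), nll)$par                  # re-estimate on the bootstrap sample
})
apply(boot_est, 1, var)                       # bootstrap variances of the three estimates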

Evaluating Support for Parameter Estimates: A Frequentist Approach
• Traditional confidence intervals and standard errors of the parameter estimates can be generated from the Hessian matrix
  - Hessian = the matrix of second partial derivatives of the likelihood function with respect to the parameters, evaluated at the maximum likelihood estimates
  - Also called the "Information Matrix" by Fisher
  - Provides a measure of the steepness of the likelihood surface in the region of the optimum
  - Can be generated in R using optim or fdHess (see the sketch below)
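A minimal sketch of asking optim for the Hessian, continuing the hypothetical negll example. Note one sign convention: because negll is a negative log-likelihood being minimized, its Hessian is inverted directly; with a maximized log-likelihood (as on the next slide) you invert −1 × the Hessian instead.

fit_h <- optim(c(0, 0, 0), negll, method = "BFGS", hessian = TRUE)
vc <- solve(fit_h$hessian)   # variance-covariance matrix of the estimates
sqrt(diag(vc))               # approximate standard errors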

Example from R
The negative of the Hessian matrix (when maximizing a log-likelihood) is a numerical approximation of Fisher's Information Matrix (the matrix of second partial derivatives of the likelihood function), evaluated at the maximum likelihood estimates. It is thus a measure of how steeply the likelihood surface drops off as you move away from the MLE.

> res$hessian
      a    b    sd
a     …    …    …
b     …    …    …
sd    …    …    …

(sample output, numeric values omitted, from an analysis that estimates two parameters and a variance term)

More from R
Now invert the negative of the Hessian matrix to get the matrix of parameter variances and covariances:

> solve(-1*res$hessian)
      a    b    sd
a     …    …    …
b     …    …    …
sd    …    …    …

The square roots of the diagonals of the inverted negative Hessian are the standard errors*:

> sqrt(diag(solve(-1*res$hessian)))
      a    b    sd
      …    …    …

(*and 1.96 × S.E. is a 95% C.I.…)