Geo479/579: Geostatistics Ch12. Ordinary Kriging (1)

Ordinary Kriging
- Objective of ordinary kriging (OK): an estimate that is
- Best: minimizes the variance of the estimation errors
- Linear: a weighted linear combination of the data
- Unbiased: the mean error equals zero
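
Stated compactly, in the notation used on the later slides (where the $w_i$ are the weights and the $V(x_i)$ are the random variables at the sample locations):

$$\hat{V}(x_0)=\sum_{i=1}^{n} w_i\,V(x_i)\ \text{(linear)},\qquad E\{\hat{V}(x_0)-V(x_0)\}=0\ \text{(unbiased)},\qquad \min_{w_1,\dots,w_n}\operatorname{Var}\{\hat{V}(x_0)-V(x_0)\}\ \text{(best)}$$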

Ordinary Kriging
- Since the actual error values are unknown, the random function model is used instead
- A model tells us the possible values of a random variable and the frequency of those values
- The model enables us to express the error, its mean, and its variance
- If the model is normal, we only need two parameters to define it: the mean $\mu$ and the variance $\sigma^2$

Unbiased Estimates
- In ordinary kriging, we use a probability model in which the bias and the error variance can be calculated
- We then choose weights for the nearby samples that ensure that the average error for our model is exactly 0 and that the modeled error variance is minimized
- The estimate is a weighted linear combination of the sample values: $\hat{v} = \sum_{j=1}^{n} w_j v_j$

The Random Function and Unbiasedness
- The estimate is a weighted linear combination of the nearby samples: $\hat{v} = \sum_{j=1}^{n} w_j v_j$
- Error of the $i$th estimate: $r_i = \hat{v}_i - v_i$
- We would like the average error over a set of estimates to be 0
- This is not directly useful because we do not know the actual values $v_i$ and therefore cannot compute the errors

The Random Function and Unbiasedness …
- The solution to the error problem involves conceptualizing the unknown value as the outcome of a random process and solving the problem for a conceptual model
- For every unknown value, a stationary random function model is used that consists of several random variables
- There is one random variable, $V(x_i)$, for the value at each sample location, and one, $V(x_0)$, for the unknown value at the point of interest

The Random Function and Unbiasedness …
- Each random variable has the same expected value, $E\{V\}$
- Each pair of random variables has a joint distribution that depends only on the separation between them, not on their locations
- The covariance between pairs of random variables separated by a distance $h$ is $\tilde{C}(h)$

The Random Function and Unbiasedness …
- Our estimate is also a random variable, since it is a weighted linear combination of the random variables at the sample locations: $\hat{V}(x_0) = \sum_{i=1}^{n} w_i V(x_i)$
- The estimation error is therefore also a random variable
- The error at $x_0$ is an outcome of the random variable $R(x_0) = \hat{V}(x_0) - V(x_0)$

The Random Function and Unbiasedness …
- For an unbiased estimate we need $E\{R(x_0)\} = E\{\hat{V}(x_0)\} - E\{V(x_0)\} = \sum_{i=1}^{n} w_i E\{V(x_i)\} - E\{V(x_0)\} = 0$
- If the random function is stationary, $E\{V(x_i)\} = E\{V\}$ at every location, so $E\{R(x_0)\} = E\{V\}\sum_{i=1}^{n} w_i - E\{V\}$

The Random Function and Unbiasedness …
- We set the expected error at $x_0$ to 0: $E\{V\}\sum_{i=1}^{n} w_i - E\{V\} = 0$, which gives the unbiasedness condition $\sum_{i=1}^{n} w_i = 1$

The Random Function Model and Error Variance
- The error variance is the variance of the errors $r_i = \hat{v}_i - v_i$
- We will not get very far with it directly, because we do not know the true values at the locations being estimated

Unbiased Estimates …
- The random function model (Ch9) allows us to express the variance of a weighted linear combination of random variables
- We then develop ordinary kriging by minimizing the error variance
- Refer to the “Example of the Use of a Probabilistic Model” in Chapter 9

The Random Function Model and Error Variance …
- We therefore turn to the random function model: $\hat{V}(x_0) = \sum_{i=1}^{n} w_i V(x_i)$

The Random Function Model and Error Variance …
- Ch9 gives a formula for the variance of a weighted linear combination of random variables (Eq 9.14, p. 216):

$$\operatorname{Var}\Bigl\{\sum_{i=1}^{n} w_i V(x_i)\Bigr\} = \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \operatorname{Cov}\{V(x_i)V(x_j)\} \qquad (12.6)$$
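
A quick numerical confirmation of (12.6), using hypothetical weights and a hypothetical covariance matrix (neither taken from the text): the double-sum formula should match a Monte Carlo estimate of the variance of the weighted combination.

```python
# Numerical check of Eq. (12.6): the variance of a weighted linear combination
# of random variables equals sum_i sum_j w_i w_j Cov_ij.
import numpy as np

rng = np.random.default_rng(0)

w = np.array([0.5, 0.3, 0.2])                 # hypothetical weights
C = np.array([[1.0, 0.6, 0.3],                # hypothetical covariance matrix
              [0.6, 1.0, 0.5],
              [0.3, 0.5, 1.0]])

# Right-hand side of (12.6): the double sum, written as a quadratic form
var_formula = w @ C @ w

# Monte Carlo estimate: simulate correlated variables and take the sample
# variance of the weighted combination.
samples = rng.multivariate_normal(mean=np.zeros(3), cov=C, size=200_000)
var_mc = np.var(samples @ w)

print(var_formula, var_mc)                    # the two values agree closely
```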

The Random Function Model and Error Variance …
- We now express the variance of the error as the variance of a weighted linear combination of other random variables: $R(x_0) = \hat{V}(x_0) - V(x_0) = \sum_{i=1}^{n} w_i V(x_i) - V(x_0)$, a combination of $n+1$ random variables with weights $w_1, \dots, w_n$ and $-1$
- Stationarity condition: the covariances $\operatorname{Cov}\{V(x_i)V(x_j)\} = \tilde{C}_{ij}$ depend only on the separation between $x_i$ and $x_j$, and every $V(x_i)$ has the same variance $\tilde{\sigma}^2$

The Random Function Model and Error Variance …
- Applying (12.6) to this combination:

$$\operatorname{Var}\{R(x_0)\} = \operatorname{Var}\{\hat{V}(x_0)\} + \operatorname{Var}\{V(x_0)\} - 2\operatorname{Cov}\{\hat{V}(x_0)V(x_0)\} = \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \tilde{C}_{ij} + \tilde{\sigma}^2 - 2\sum_{i=1}^{n} w_i \tilde{C}_{i0}$$

The Random Function Model and Error Variance
- If we have $\tilde{\sigma}^2$, $\tilde{C}_{ij}$, and $\tilde{C}_{i0}$, we can compute the error variance
- To solve for the weights, we minimize

$$\sigma_R^2 = \tilde{\sigma}^2 + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \tilde{C}_{ij} - 2\sum_{i=1}^{n} w_i \tilde{C}_{i0} \qquad (12.8)$$
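
Equation (12.8) translated line for line into code, with hypothetical weights and covariances (placeholders, not values from the text):

```python
# Error variance of Eq. (12.8):
# sigma_R^2 = sigma2 + sum_ij w_i w_j C_ij - 2 sum_i w_i C_i0
import numpy as np

sigma2 = 10.0                                 # hypothetical sigma~^2
w = np.array([0.4, 0.35, 0.25])               # hypothetical weights (sum to 1)
C_ij = np.array([[10.0,  4.0,  3.0],
                 [ 4.0, 10.0,  5.0],
                 [ 3.0,  5.0, 10.0]])         # hypothetical C~_ij
C_i0 = np.array([6.0, 5.0, 4.0])              # hypothetical C~_i0

sigma_R2 = sigma2 + w @ C_ij @ w - 2 * w @ C_i0
print(sigma_R2)
```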

The Random Function Model and Error Variance
- Minimizing the error variance requires setting the $n$ partial first derivatives to 0; this produces a system of $n$ simultaneous linear equations in $n$ unknowns
- In our case, however, we have $n$ unknown weights but $n+1$ equations; the one extra equation is the unbiasedness condition $\sum_{i=1}^{n} w_i = 1$

The Lagrange Parameter
- To avoid this awkward problem, we introduce another unknown into the equation, $\mu$, the Lagrange parameter, without affecting the equality:

$$\sigma_R^2 = \tilde{\sigma}^2 + \sum_{i=1}^{n}\sum_{j=1}^{n} w_i w_j \tilde{C}_{ij} - 2\sum_{i=1}^{n} w_i \tilde{C}_{i0} + 2\mu\Bigl(\sum_{i=1}^{n} w_i - 1\Bigr) \qquad (12.9)$$

- The added term is zero because of the unbiasedness condition $\sum_{i=1}^{n} w_i = 1$, so the equality is unaffected
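
The intermediate step between (12.9) and the kriging system on the next slide, sketched here for completeness (the slides show only the result): setting the $n+1$ partial first derivatives of (12.9) to zero gives

$$\frac{\partial \sigma_R^2}{\partial w_i} = 2\sum_{j=1}^{n} w_j \tilde{C}_{ij} - 2\tilde{C}_{i0} + 2\mu = 0 \quad (i=1,\dots,n), \qquad \frac{\partial \sigma_R^2}{\partial \mu} = 2\Bigl(\sum_{j=1}^{n} w_j - 1\Bigr) = 0,$$

which rearranges directly into the ordinary kriging system below.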

Minimization of the Error Variance
- The set of weights that minimizes the error variance under the unbiasedness condition satisfies the following $n+1$ equations, the ordinary kriging system:

$$\sum_{j=1}^{n} w_j \tilde{C}_{ij} + \mu = \tilde{C}_{i0}, \quad i = 1, \dots, n \qquad (12.11)$$

$$\sum_{j=1}^{n} w_j = 1 \qquad (12.12)$$

Minimization of the Error Variance
- The ordinary kriging system expressed in matrix notation:

$$C \cdot w = D \qquad (12.13)$$

$$w = C^{-1} \cdot D \qquad (12.14)$$

- Here $C$ is the $(n+1)\times(n+1)$ matrix of sample covariances $\tilde{C}_{ij}$ augmented with a final row and column of 1s (and a 0 in the corner), $w = (w_1, \dots, w_n, \mu)^T$, and $D = (\tilde{C}_{10}, \dots, \tilde{C}_{n0}, 1)^T$
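
As an illustration, a minimal numerical sketch of (12.13)–(12.14), assuming the covariances $\tilde{C}_{ij}$ and $\tilde{C}_{i0}$ have already been computed from a fitted model; the array values are hypothetical placeholders, not numbers from the text.

```python
# Solve the ordinary kriging system C . w = D for the weights and the
# Lagrange parameter mu, given precomputed covariances (placeholder values).
import numpy as np

# C~_ij: covariances among the n sample locations (hypothetical values)
C_samples = np.array([[10.0,  4.0,  3.0],
                      [ 4.0, 10.0,  5.0],
                      [ 3.0,  5.0, 10.0]])
# C~_i0: covariances between each sample and the point being estimated
C_to_target = np.array([6.0, 5.0, 4.0])

n = len(C_to_target)

# Augment with the unbiasedness condition: a final row/column of ones, 0 corner.
C = np.ones((n + 1, n + 1))
C[:n, :n] = C_samples
C[n, n] = 0.0

D = np.append(C_to_target, 1.0)

solution = np.linalg.solve(C, D)      # w = C^{-1} . D   (12.14)
weights, mu = solution[:n], solution[n]

print(weights, weights.sum(), mu)     # the weights sum to 1 by construction
```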

Ordinary Kriging Variance
- Calculate the minimized error variance by plugging the resulting weights $w_1, \dots, w_n$ and $\mu$ into equation (12.8)
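
A small check, reusing the placeholder covariances from the sketch above, that plugging the ordinary kriging solution back into (12.8) reproduces the simplified form $\tilde{\sigma}^2 - (\sum_i w_i \tilde{C}_{i0} + \mu)$ given later as (12.15):

```python
# Check that at the OK solution, Eq. (12.8) equals sigma2 - (sum_i w_i*C_i0 + mu).
# All numbers are hypothetical placeholders.
import numpy as np

sigma2 = 10.0                                  # hypothetical sigma~^2 (= C~(0))
C_samples = np.array([[10.0,  4.0,  3.0],
                      [ 4.0, 10.0,  5.0],
                      [ 3.0,  5.0, 10.0]])     # hypothetical C~_ij
C_to_target = np.array([6.0, 5.0, 4.0])        # hypothetical C~_i0
n = len(C_to_target)

# Build and solve the augmented system (12.13)-(12.14)
C = np.ones((n + 1, n + 1))
C[:n, :n] = C_samples
C[n, n] = 0.0
D = np.append(C_to_target, 1.0)
solution = np.linalg.solve(C, D)
w, mu = solution[:n], solution[n]

var_from_12_8 = sigma2 + w @ C_samples @ w - 2 * w @ C_to_target
var_from_12_15 = sigma2 - (w @ C_to_target + mu)
print(var_from_12_8, var_from_12_15)           # the two values agree
```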

Ordinary Kriging Using $\tilde{\gamma}$ or $\rho$
- The ordinary kriging system can also be written in terms of the variogram $\tilde{\gamma}$ or the correlogram $\rho$ instead of the covariance
- Refer to Ch9 for the relationships between them: $\tilde{C}(h) = \tilde{\sigma}^2 - \tilde{\gamma}(h)$ and $\rho(h) = \tilde{C}(h)/\tilde{\sigma}^2$ (12.20)

Ordinary Kriging Using $\tilde{\gamma}$ or $\rho$ …
- Substituting $\tilde{C}_{ij} = \tilde{\sigma}^2 - \tilde{\gamma}_{ij}$ into (12.11)–(12.12) gives the ordinary kriging system in terms of the variogram:

$$\sum_{j=1}^{n} w_j \tilde{\gamma}_{ij} - \mu = \tilde{\gamma}_{i0}, \quad i = 1, \dots, n, \qquad \sum_{j=1}^{n} w_j = 1 \qquad (12.22)$$
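
A sketch of that substitution step (not shown explicitly on the slides): starting from (12.11) and using $\sum_{j} w_j = 1$,

$$\sum_{j=1}^{n} w_j\bigl(\tilde{\sigma}^2 - \tilde{\gamma}_{ij}\bigr) + \mu = \tilde{\sigma}^2 - \tilde{\gamma}_{i0} \;\Longrightarrow\; \tilde{\sigma}^2 - \sum_{j=1}^{n} w_j\tilde{\gamma}_{ij} + \mu = \tilde{\sigma}^2 - \tilde{\gamma}_{i0} \;\Longrightarrow\; \sum_{j=1}^{n} w_j\tilde{\gamma}_{ij} - \mu = \tilde{\gamma}_{i0}.$$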

An Example of Ordinary Kriging

- We can compute $\tilde{C}_{ij}$ and $\tilde{C}_{i0}$ based on the data (through a covariance or variogram model fitted to the sample variogram) in order to solve the ordinary kriging system (12.11)–(12.12)

[Figure: variogram model showing the nugget effect, range, and sill]
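
For illustration, a sketch of one common variogram model (spherical) parameterized by exactly these three quantities, together with the corresponding covariance $\tilde{C}(h) = \text{sill} - \tilde{\gamma}(h)$; the model choice and the parameter values in the usage line are assumptions, not taken from the text.

```python
# Spherical variogram model gamma(h) defined by nugget, sill, and range,
# plus the corresponding covariance C(h) = sill - gamma(h).
import numpy as np

def spherical_variogram(h, nugget, sill, range_a):
    """gamma(h) for the spherical model; sill is the total sill (nugget + partial sill)."""
    h = np.asarray(h, dtype=float)
    psill = sill - nugget                        # partial sill
    inside = nugget + psill * (1.5 * h / range_a - 0.5 * (h / range_a) ** 3)
    gamma = np.where(h <= range_a, inside, sill) # flat at the sill beyond the range
    return np.where(h == 0, 0.0, gamma)          # gamma(0) = 0 by definition

def spherical_covariance(h, nugget, sill, range_a):
    return sill - spherical_variogram(h, nugget, sill, range_a)

# Placeholder parameters, for illustration only
print(spherical_variogram([0.0, 50.0, 200.0], nugget=2.0, sill=10.0, range_a=150.0))
```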

Estimation

Error Variance

$$\tilde{\sigma}_R^2 = \tilde{\sigma}^2 - \Bigl(\sum_{i=1}^{n} w_i \tilde{C}_{i0} + \mu\Bigr) \qquad (12.15)$$
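
To tie the steps of the chapter together, a self-contained sketch of the whole procedure on invented data; the coordinates, sample values, and variogram parameters below are placeholders for illustration only and are not the example worked in the text.

```python
# End-to-end sketch of the ordinary kriging steps in this chapter, run on
# small synthetic data (all numbers are invented for illustration).
import numpy as np

def spherical_covariance(h, nugget, sill, range_a):
    """C(h) = sill - gamma(h) for a spherical variogram model."""
    h = np.asarray(h, dtype=float)
    psill = sill - nugget
    gamma = np.where(
        h <= range_a,
        nugget + psill * (1.5 * h / range_a - 0.5 * (h / range_a) ** 3),
        sill,
    )
    gamma = np.where(h == 0, 0.0, gamma)
    return sill - gamma

# Synthetic sample data: (x, y) coordinates and measured values v_i
coords = np.array([[10.0, 20.0], [30.0, 280.0], [250.0, 130.0], [360.0, 120.0]])
values = np.array([400.0, 530.0, 350.0, 410.0])
target = np.array([200.0, 150.0])            # point x_0 to be estimated

nugget, sill, range_a = 0.0, 2500.0, 300.0   # placeholder variogram parameters
n = len(values)

# Distances among samples and from each sample to the target point
d_ss = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
d_s0 = np.linalg.norm(coords - target, axis=-1)

# Ordinary kriging system (12.13): augmented covariance matrix and right-hand side
C = np.ones((n + 1, n + 1))
C[:n, :n] = spherical_covariance(d_ss, nugget, sill, range_a)
C[n, n] = 0.0
D = np.append(spherical_covariance(d_s0, nugget, sill, range_a), 1.0)

solution = np.linalg.solve(C, D)             # (12.14)
w, mu = solution[:n], solution[n]

estimate = w @ values                        # v_hat = sum_i w_i v_i
ok_variance = sill - (w @ D[:n] + mu)        # (12.15), with sigma~^2 = C(0) = sill

print("weights:", np.round(w, 3), "sum =", round(w.sum(), 6))
print("estimate:", round(estimate, 2), " kriging variance:", round(ok_variance, 2))
```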