CLASS: B.Sc.II PAPER-I ELEMENTARY INFERENCE. TESTING OF HYPOTHESIS.

CLASS: B.Sc.II PAPER-I ELEMENTARY INFERENCE

TESTING OF HYPOTHESIS

THEORY OF ESTIMATION

DEFINITION

1. UNBIASEDNESS
2. CONSISTENCY
3. EFFICIENCY
4. SUFFICIENCY
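The slide lists these properties without stating them. For reference, a compact sketch of the standard textbook definitions (for an estimator T_n = T(X_1, ..., X_n) of θ; these statements are not reproduced from the slide itself):

% Standard definitions of the four properties (textbook statements, not from the slide)
\begin{itemize}
  \item Unbiasedness: $E_\theta(T_n) = \theta$ for every value of $\theta$.
  \item Consistency: $T_n \xrightarrow{P} \theta$ as $n \to \infty$, i.e.\ $P(|T_n - \theta| > \varepsilon) \to 0$ for every $\varepsilon > 0$.
  \item Efficiency: among unbiased estimators, $T_n$ has the smallest possible variance.
  \item Sufficiency (factorization criterion): $f(x_1,\dots,x_n;\theta) = g\bigl(T(x_1,\dots,x_n);\theta\bigr)\, h(x_1,\dots,x_n)$.
\end{itemize}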

METHOD OF ESTIMATION

MAXIMUM LIKELIHOOD ESTIMATION

Definition. Let X_1, X_2, ..., X_n have joint p.m.f. or p.d.f. f(x_1, x_2, ..., x_n; θ_1, θ_2, ..., θ_m), where the parameters θ_1, θ_2, ..., θ_m have unknown values. When x_1, x_2, ..., x_n are the observed sample values and f is regarded as a function of θ_1, θ_2, ..., θ_m, it is called the likelihood function. For independent observations it is the product of the individual p.m.f.'s or p.d.f.'s:

L(θ) = p_1(x_1) p_2(x_2) ... p_n(x_n)

Maximum Likelihood Estimation. The maximum likelihood estimates (m.l.e.'s) θ̂_1, θ̂_2, ..., θ̂_m are those values of the θ_i's which maximize the likelihood function, so that

f(x_1, x_2, ..., x_n; θ̂_1, θ̂_2, ..., θ̂_m) ≥ f(x_1, x_2, ..., x_n; θ_1, θ_2, ..., θ_m) for all θ_1, θ_2, ..., θ_m.

When the X_i's are substituted in place of the x_i's, the maximum likelihood estimators result.

Example 1: Illustrating the MLE Method Using the Exponential Distribution. Suppose that X_1, ..., X_n is a random sample from an exponential distribution with parameter λ. Because of independence, the likelihood function is a product of the individual p.d.f.'s:

L(λ) = (λ e^(-λ x_1))(λ e^(-λ x_2)) ... (λ e^(-λ x_n)) = λ^n e^(-λ Σ x_i)

Example 1 (cont'd). The ln(likelihood) is

ln L(λ) = n ln(λ) - λ Σ x_i.

Setting the derivative d/dλ [ln L(λ)] = n/λ - Σ x_i equal to zero, the MLE is

λ̂ = n / Σ X_i = 1 / X̄.

(This is identical to the method of moments estimator, but it is not an unbiased estimator, since E(λ̂) = nλ/(n-1) ≠ λ.)
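A minimal numerical check of this result (not part of the original slides; a sketch assuming NumPy, simulated data, and the rate parametrization used above):

# Sketch: compare the closed-form exponential MLE with a grid maximization of ln L(lambda)
import numpy as np

rng = np.random.default_rng(0)
true_lambda = 2.0
x = rng.exponential(scale=1.0 / true_lambda, size=500)   # rate lambda corresponds to scale 1/lambda

def log_likelihood(lam, data):
    # ln L(lambda) = n*ln(lambda) - lambda * sum(x_i)
    return len(data) * np.log(lam) - lam * data.sum()

closed_form = 1.0 / x.mean()                  # lambda_hat = n / sum(X_i) = 1 / X_bar
grid = np.linspace(0.01, 10.0, 10_000)        # crude grid search over candidate rates
numeric = grid[np.argmax(log_likelihood(grid, x))]

print(closed_form, numeric)                   # the two values agree closely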

Notes on λ. Note that the value of λ̂ is only an estimate: if we obtained another sample from the same population and re-estimated λ, the new value would differ from the one previously calculated. In plain language, λ̂ is an estimate of the true value of λ. How close is our estimate to the true value? To answer this question, one must first determine the distribution of the estimator, in this case of λ̂. This leads to the idea of a confidence interval: a range around the estimate that contains the true value with a specified confidence level. The treatment of confidence intervals is integral to simulation, reliability engineering and to all of statistics (e.g., coefficients in regression models).
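As an illustration of this idea (not developed on the slides), a rough large-sample 95% confidence interval for λ can be built from the asymptotic normality of the MLE, using the approximation Var(λ̂) ≈ λ²/n; a sketch with simulated data:

# Sketch: approximate 95% confidence interval for lambda from the asymptotic variance lambda^2 / n
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=0.5, size=200)     # simulated sample whose true rate is lambda = 2

lam_hat = 1.0 / x.mean()                     # MLE of the rate
se = lam_hat / np.sqrt(len(x))               # estimated standard error of lambda_hat
ci = (lam_hat - 1.96 * se, lam_hat + 1.96 * se)

print(lam_hat, ci)                           # the interval should usually cover the true rate 2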

Example 2: The Binomial Distribution. A sample of 10 new CD-ROMs pressed by a manufacturer: the 1st, 3rd and 10th are warped, the rest are OK. Let p = P(a CD-ROM is warped) and define X_1, ..., X_10 by

X_i = 1 if the i-th CD-ROM is warped, X_i = 0 if the i-th CD-ROM is not warped.

Then the observed X_i's are 1, 0, 1, 0, 0, 0, 0, 0, 0, 1, so the joint p.m.f. of the sample is

f(x_1, x_2, ..., x_10; p) = p(1-p)p(1-p) ... p = p^3 (1-p)^7

Example 2 (cont'd). Q: For what value of p is the observed sample most likely to have occurred? That is, we wish to find the value of p which maximizes f(x_1, x_2, ..., x_10; p) = p^3 (1-p)^7, or equivalently maximizes ln f = 3 ln(p) + 7 ln(1-p). Setting the derivative 3/p - 7/(1-p) equal to zero gives

p̂ = 3/10 = x/n,

where x is the observed number of successes (warped CD-ROMs). The estimate p̂ = 3/10 is the MLE because, for fixed x_1, x_2, ..., x_10, it is the parameter value which maximizes the likelihood (joint p.m.f.) of the observed sample.

Example 2 (cont'd). Note: if we were told only that among the 10 CD-ROMs there were 3 which were warped, we could write the likelihood as the binomial p.m.f. C(10,3) p^3 (1-p)^7, which is also maximized at p̂ = 3/10, since the constant factor C(10,3) does not affect where the maximum occurs.
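A short numerical confirmation of p̂ = x/n for these data (not in the original slides; a sketch assuming NumPy):

# Sketch: confirm the binomial MLE p_hat = x/n for the CD-ROM data by grid search
import numpy as np

x_obs = np.array([1, 0, 1, 0, 0, 0, 0, 0, 0, 1])           # warped (1) / not warped (0)
successes, n = x_obs.sum(), len(x_obs)

p_grid = np.linspace(0.001, 0.999, 9_999)
log_lik = successes * np.log(p_grid) + (n - successes) * np.log(1 - p_grid)

print(p_grid[np.argmax(log_lik)])                           # approximately 0.3 = 3/10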

Desirable Properties
1. For most of the common distributions, the MLE is unique; that is, L(θ̂) is strictly greater than L(θ) for any other value of θ.
2. Although MLEs need not be unbiased, in general the asymptotic distribution (as n → ∞) of θ̂ has mean equal to θ (see property 4 below).
3. MLEs are invariant; that is, if Φ = h(θ) for some function h, then the MLE of Φ is h(θ̂). (Unbiasedness is not invariant.) For instance, the variance of an expo(β) random variable is β², so the MLE of this variance is β̂².

Desirable Properties (cont'd)
4. MLEs are asymptotically normally distributed; that is, √n(θ̂_n - θ) →_D N(0, 1/I(θ)), where I(θ) = -E[∂² ln f(X_i; θ)/∂θ²] (the expectation is with respect to X_i, assuming that X_i has the hypothesized distribution) and →_D denotes convergence in distribution. Further, if θ̃_n is any other estimator such that √n(θ̃_n - θ) →_D N(0, σ²), then σ² ≥ 1/I(θ). (Thus MLEs are called best asymptotically normal.)
5. MLEs are strongly consistent.
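An empirical illustration of property 4 for Example 1 (not from the slides; it assumes the Fisher information of the exponential rate is I(λ) = 1/λ², so the MLE's standard deviation is approximately λ/√n):

# Sketch: simulate many replications and compare the spread of lambda_hat with lambda / sqrt(n)
import numpy as np

rng = np.random.default_rng(2)
true_lambda, n, reps = 2.0, 400, 5_000

samples = rng.exponential(scale=1.0 / true_lambda, size=(reps, n))
lam_hats = 1.0 / samples.mean(axis=1)      # MLE of the rate in each replication

print(lam_hats.mean())                     # close to true_lambda = 2 (asymptotically unbiased)
print(lam_hats.std())                      # close to true_lambda / sqrt(n) = 0.1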

METHOD OF MOMENTS
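The slide's working for this method is not reproduced in the transcript. As a brief illustration (the standard procedure, not copied from the slide): equate the sample moments to the corresponding population moments and solve for the parameters. For the exponential distribution of Example 1 this gives the same estimator as the MLE:

\[
E(X) = \frac{1}{\lambda}
\quad\Longrightarrow\quad
\bar{X} = \frac{1}{\tilde{\lambda}}
\quad\Longrightarrow\quad
\tilde{\lambda} = \frac{1}{\bar{X}} .
\]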

ASSIGNMENT
1. EXPLAIN: i) NULL HYPOTHESIS ii) ALTERNATIVE HYPOTHESIS iii) CRITICAL REGION iv) LEVEL OF SIGNIFICANCE
2. EXPLAIN THE METHOD OF MAXIMUM LIKELIHOOD
3. WRITE IN DETAIL, WITH PROCEDURE, ABOUT THE METHOD OF MOMENTS

TEST
1. EXPLAIN: i) NULL HYPOTHESIS ii) ALTERNATIVE HYPOTHESIS iii) WHAT ARE TYPE I AND TYPE II ERRORS?
2. EXPLAIN THE METHOD OF MAXIMUM LIKELIHOOD