Sampling distributions BPS chapter 11 © 2006 W. H. Freeman and Company.


Sampling distributions BPS chapter 11 © 2006 W. H. Freeman and Company

Objectives (BPS chapter 11) Sampling distributions • Parameter versus statistic • The law of large numbers • What is a sampling distribution? • The sampling distribution of xBar • The central limit theorem

Reminder: Parameter versus statistic • Sample: the part of the population we actually examine and for which we do have data. • A statistic is a number describing a characteristic of a sample. We often use a statistic to estimate an unknown population parameter. • Population: the entire group of individuals in which we are interested but can’t usually assess directly. • A parameter is a number describing a characteristic of the population. Parameters are usually unknown.

Statistics are Random Variables • Recall: A random variable is a variable whose value is a numerical outcome of a random phenomenon. • Therefore when we compute a statistic such as xBar or s from a random sample, the numerical result is a random variable. • It’s random because it depends on the sample, which is random. • If we took a new random sample, xBar and s would be different. • Even though statistics are random, at least we can know what they are: just take a sample, and compute! • Parameters are not random – they are fixed numbers. • But parameters are generally unknown in practical problems. Why? • Because in order to find the value of a parameter, you need to know the whole population. This is usually not feasible. Statistics are random, but knowable. Parameters are fixed, but unknown. We’d like to use the statistics to estimate the unknown parameters.

Example  Suppose we want to know the true percentage of adult Americans who support a national system of health insurance.  We can’t survey all adult Americans. So we take an SRS of (say) 1000 adult Americans, and ask these 1000 whether they support a national system of health insurance.  What is the parameter? What is the statistic?  Population: all adult Americans  Sample: the n = 1000 adult Americans surveyed (assuming that all respond)  Parameter: p = true percentage of all adult Americans who support national health insurance.  Statistic: p Hat = percentage of the 1000 people sampled who support national health insurance We expect the statistic ( p Hat in this case) to be a reasonable estimate of the parameter ( p, the true percentage), although probably not exactly equal to it. How good is the estimate? If we wanted a better estimate, what could we do?
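As a purely illustrative sketch of this example (not from the text), the simulation below assumes a hypothetical true proportion p = 0.45 and draws an SRS of n = 1000; the value of p and the random seed are assumptions made only so the code runs.

```python
# A minimal sketch (illustrative): simulate the statistic pHat for the
# health-insurance survey, assuming a hypothetical true proportion p = 0.45.
import numpy as np

rng = np.random.default_rng(0)
p = 0.45        # parameter: the true (usually unknown) population proportion
n = 1000        # sample size

# One SRS of 1000 adults: each respondent supports (1) or does not support (0).
sample = rng.binomial(1, p, size=n)
p_hat = sample.mean()   # statistic: the sample proportion
print(f"pHat from one sample of {n}: {p_hat:.3f} (true p = {p})")

# Repeating the survey gives a different pHat each time: pHat is a random variable.
print([round(rng.binomial(1, p, size=n).mean(), 3) for _ in range(5)])
```

Each repetition produces a slightly different pHat, which is the point of the previous slide: the statistic is a random variable, while the parameter p stays fixed.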

Sampling distribution of xBar (the sample mean) We take many random samples of a given fixed size n from a population with mean μ and standard deviation σ. Some sample means will be above the population mean μ and some will be below, making up the sampling distribution. (Figure: histogram of some sample averages, i.e., the sampling distribution of “x bar”.)

What is a sampling distribution? (page 276) • The sampling distribution of a statistic is the distribution of all possible values taken by the statistic when all possible samples of a fixed size n are taken from the population. The Big Ideas: • Averages (xBar) are less variable than individual observations. • The Law of Large Numbers says that as the sample size n gets larger and larger, it becomes highly likely that xBar is close to the population mean μ. • Averages are more normal than individual observations. • The Central Limit Theorem says that as the sample size n gets larger and larger, the distribution of xBar becomes more and more normal. Note: When sampling randomly from a given population, • The sampling distribution describes what happens when we take all possible random samples of a fixed size n. • The Law of Large Numbers and the Central Limit Theorem describe what happens when the sample size n is gradually increased.

Example: 11.6, page 277. Population: 10 student scores. Distribution: Mean = 71.4, Std. Dev = …, Median = 72.5, IQR = 15. Are these parameters or statistics? (Figure: histogram of the 10 individual scores.)

Example: 11.6, page 277. Choose 10 samples of size n = 4 from the 10 scores, and calculate xBar for each. (Table: sample number, the four scores in each SRS, and the resulting sample mean.)

Example: 11.6, page 277 (continued). Look at the sampling distribution: frequency table and histogram of the 10 sample means. For the 10 sample means: Mean = …, Std. Dev = …, Median = …, IQR = 8.5. Note: Population mean μ = 71.4, and the average of the 10 sample means is close to it: on average, the sample means are close to the true mean. Note: the distribution of the sample means has a smaller spread than the population.

Sampling distribution of xBar: Mean and Std. Dev. of xBar. The sampling distribution of xBar has mean μ(xBar) = μ and standard deviation σ(xBar) = σ/√n.

In English: 1. The mean of the sample means is the population mean. 2. The standard deviation of the sample means is the population standard deviation divided by the square root of the sample size. What do 1. and 2. say about the sampling distribution of xBar?

More Discussion: • Mean of the sampling distribution of xBar: The equation μ(xBar) = μ says that the sampling distribution of xBar is centered on the population mean μ. Thus, on average, we expect xBar to be equal to the population mean μ. Not that we expect xBar to equal μ in individual instances: sometimes it will be larger, sometimes smaller. But since the average value of xBar is μ, we say that xBar is an unbiased estimate of the population mean μ; it will be “correct on average” in many samples. • Standard deviation of the sampling distribution of xBar: The standard deviation of the sampling distribution measures how much the sample statistic xBar varies from sample to sample. It is smaller than the standard deviation of the population by a factor of √n. This means that averages are less variable than individual observations.

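These two facts can be checked by simulation. The sketch below is illustrative only: the population values μ = 71.4 and σ = 10 and the normal population shape are assumptions, not from the text. It draws many samples of size n = 4 and compares the mean and standard deviation of the resulting xBar values with μ and σ/√n.

```python
# A simulation sketch of facts 1 and 2: draw many samples of size n from a
# population with known mu and sigma, then compare the mean and standard
# deviation of the xBar values with mu and sigma/sqrt(n).
import numpy as np

rng = np.random.default_rng(1)
mu, sigma, n, reps = 71.4, 10.0, 4, 100_000   # sigma and the normal shape are illustrative

xbars = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)  # one xBar per row of n draws

print("mean of xBar:", round(xbars.mean(), 3), " vs  mu =", mu)
print("sd of xBar:  ", round(xbars.std(ddof=1), 3), " vs  sigma/sqrt(n) =", sigma / np.sqrt(n))
```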
The law of large numbers (page 273) Law of large numbers: As the number of randomly drawn observations (n) in a sample increases: • the mean of the sample (xBar) gets closer and closer to the population mean μ (for quantitative data); • the sample proportion (pHat) gets closer and closer to the population proportion p (for categorical data).
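A minimal sketch of the law of large numbers using a fair die (population mean μ = 3.5; this is an illustrative example, not one from the text): the running mean of the rolls settles down near μ as n grows.

```python
# Law of large numbers sketch: running mean of die rolls approaches mu = 3.5.
import numpy as np

rng = np.random.default_rng(2)
rolls = rng.integers(1, 7, size=100_000)                  # fair six-sided die
running_mean = np.cumsum(rolls) / np.arange(1, rolls.size + 1)

for n in (10, 100, 1_000, 10_000, 100_000):
    print(f"n = {n:>6}:  xBar = {running_mean[n - 1]:.4f}   (mu = 3.5)")
```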

For normally distributed populations: When a variable in a population is normally distributed, the sampling distribution of xBar for all possible samples of size n is also normally distributed. If the population is N(μ, σ), then the distribution of the sample means is N(μ, σ/√n). Suppose X = “odor threshold” is normally distributed in some population. (Figure: the population distribution and the narrower sampling distribution of the sample means.) Amazingly, the sampling distribution of xBar is approximately normal regardless of whether the population is normal or not. This remarkable fact is known as the Central Limit Theorem:

The central limit theorem (page 281) Central Limit Theorem: When randomly sampling from any population with mean μ and standard deviation σ, when n is large enough, the sampling distribution of xBar is approximately normal: N(μ, σ/√n). (Figure 11.5, page 283: a population with a strongly skewed distribution, and the sampling distributions of xBar for n = 2, 10, and 25 observations.) Averages are more normal than individual measurements!
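The same phenomenon can be seen by simulation. The sketch below is illustrative (an exponential population is used as a stand-in for the book's skewed distribution): sample from a strongly right-skewed population and watch the distribution of xBar become more symmetric, hence closer to normal, as n grows.

```python
# CLT sketch: sample means from a right-skewed (exponential) population become
# more symmetric as n grows; their skewness shrinks toward 0.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
reps = 50_000
for n in (2, 10, 25):
    xbars = rng.exponential(scale=1.0, size=(reps, n)).mean(axis=1)
    print(f"n = {n:>2}: skewness of xBar = {stats.skew(xbars):.3f}")
# Individual exponential observations have skewness 2; the averages are far
# more symmetric, and their histogram looks increasingly normal.
```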

How large is “large enough” for the CLT? It depends on the population distribution. More observations are required if the population distribution is far from normal. • A sample size of 25 is generally enough to obtain a normal sampling distribution despite strong skewness or even mild outliers. • A sample size of 40 will typically be good enough to overcome extreme skewness and outliers. In many cases, n = 25 isn’t a huge sample. Thus, even for strange population distributions we can assume a normal sampling distribution of the mean, and work with it to solve problems.

IQ scores: population vs. sample In a large population of adults, IQ scores have mean 112 with standard deviation 20. Suppose 200 adults are randomly selected for a market research campaign. The distribution of the sample mean IQ is A) exactly normal, mean 112, standard deviation 20. B) approximately normal, mean 112, standard deviation 20. C) approximately normal, mean 112, standard deviation 1.414. D) approximately normal, mean 112, standard deviation 0.1. Answer: C) approximately normal, mean 112, standard deviation 1.414. Population distribution: N(μ = 112; σ = 20). Sampling distribution for n = 200 is N(μ = 112; σ/√n = 20/√200 ≈ 1.414). What if IQ scores are normally distributed in the population?

Application Hypokalemia is diagnosed when blood potassium levels are low, below 3.5 mEq/dl. Let’s assume that we know a patient whose measured potassium levels vary daily according to a normal distribution N(μ = 3.8, σ = 0.2). If only one measurement is made, what’s the probability that this patient will be misdiagnosed as hypokalemic? If instead measurements are taken on four separate days and then averaged, what is the probability of such a misdiagnosis?
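A worked sketch of the two probabilities using the normal model (scipy is used here only for the normal CDF; a normal table gives the same answers): one measurement is X ~ N(3.8, 0.2), while the average of four measurements is xBar ~ N(3.8, 0.2/√4) = N(3.8, 0.1).

```python
# Hypokalemia misdiagnosis probabilities via the normal CDF.
from math import sqrt
from scipy.stats import norm

mu, sigma, cutoff = 3.8, 0.2, 3.5

# One measurement: X ~ N(3.8, 0.2), so z = (3.5 - 3.8)/0.2 = -1.5
p_one = norm.cdf(cutoff, loc=mu, scale=sigma)

# Average of four daily measurements: xBar ~ N(3.8, 0.2/sqrt(4)) = N(3.8, 0.1), z = -3
p_four = norm.cdf(cutoff, loc=mu, scale=sigma / sqrt(4))

print(f"P(misdiagnosis, 1 measurement)  = {p_one:.4f}")   # about 0.0668
print(f"P(misdiagnosis, mean of 4 days) = {p_four:.4f}")  # about 0.0013
```

Averaging the four measurements cuts the standard deviation in half twice over (by √4), so the misdiagnosis probability drops from roughly 6.7% to roughly 0.1%.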

Let’s Work Some Problems!  Problem 11.9, page 280  Problem 11.11, page 285  Problem 11.13, page 286

Practical note • Large samples are not always attainable. • Sometimes the cost, difficulty, or preciousness of what is studied drastically limits any possible sample size. • Blood samples/biopsies: no more than a handful of repetitions is acceptable. Often we even make do with just one. • Opinion polls have a limited sample size due to the time and cost of operation. During election times, though, sample sizes are increased for better accuracy. • Not all variables are normally distributed. • Income, for example, is typically strongly skewed. • Is xBar still a good estimator of μ then?

Income distribution Let’s consider the very large database of individual incomes from the Bureau of Labor Statistics as our population. It is strongly right-skewed.  We take 1000 SRSs of 100 incomes, calculate the sample mean for each, and make a histogram of these 1000 means.  We also take 1000 SRSs of 25 incomes, calculate the sample mean for each, and make a histogram of these 1000 means. Which histogram corresponds to the samples of size 100? 25?
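Since the BLS database itself is not available here, the sketch below uses a lognormal population as a stand-in for strongly right-skewed incomes (all numerical values are illustrative assumptions) and compares the spread of 1000 sample means for n = 25 versus n = 100.

```python
# Income-distribution sketch: spread of sample means shrinks as n grows.
import numpy as np

rng = np.random.default_rng(4)
# Stand-in for a strongly right-skewed income population (illustrative parameters).
population = rng.lognormal(mean=10.5, sigma=0.8, size=1_000_000)

for n in (25, 100):
    # 1000 samples of size n (with replacement, which approximates an SRS
    # from such a large population), one xBar per sample.
    means = rng.choice(population, size=(1000, n)).mean(axis=1)
    print(f"n = {n:>3}: sd of the 1000 sample means = {means.std(ddof=1):,.0f}")
```

The histogram with the smaller spread, roughly half as wide since √(100/25) = 2, is the one built from samples of size 100.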

Further properties The Central Limit Theorem is valid as long as we are sampling many small random events, even if the events have different distributions (as long as no one random event has an overwhelming influence). Why is this important? It explains why so many variables are normally distributed. Example: Height seems to be determined by a large number of genetic and environmental factors, like nutrition. So height is very much like our sample mean: the “individuals” are genes and environmental factors, and your height is a mean. Now we have a better idea of why the density curve for height has this shape.

Statistical process control Industrial processes tend to have normally distributed variability, in part as a consequence of the central limit theorem applying to the sum of many small influential factors. Random samples taken over time can thus be used to easily verify that a given process is not getting out of “control.” What is statistical control? A variable that continues to be described by the same distribution when observed over time is said to be in statistical control, or simply in control.

Process-monitoring What are the required conditions? We measure a quantitative variable x that has a normal distribution. The process has been operating in control for a long period, so that we know the process mean µ and the process standard deviation σ that describe the distribution of x as long as the process remains in control. An xBar control chart displays the averages of samples of size n taken at regular intervals from such a process. It is a way to monitor the process and alert us when it has been disturbed so that it is now out of control. This is a signal to find and correct the cause of the disturbance.

xBar control charts For a process with known mean µ and standard deviation σ, we calculate the mean xBar of samples of constant size n taken at regular intervals. • Plot xBar (vertical axis) against time (horizontal axis). • Draw a horizontal center line at µ. • Draw two horizontal control limits at µ ± 3σ/√n (UCL and LCL).
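A minimal sketch of the control-limit computation (the numerical values of µ, σ, and n below are illustrative, not from the text):

```python
# Compute the center line and 3-sigma control limits for an xBar chart.
from math import sqrt

def control_limits(mu, sigma, n):
    """Return (LCL, center line, UCL) for means of samples of size n."""
    margin = 3 * sigma / sqrt(n)
    return mu - margin, mu, mu + margin

lcl, center, ucl = control_limits(mu=0.0, sigma=0.002, n=4)
print(f"LCL = {lcl:.4f}, center = {center}, UCL = {ucl:.4f}")
# Any plotted sample mean outside [LCL, UCL] is a signal that the process
# may be out of control.
```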

An xBar value that does not fall within the two control limits is evidence that the process is out of control.

A machine tool cuts circular pieces. A sample of four pieces is taken hourly, giving average measurements xBar (in inches from the specified diameter). Because measurements are made from the specified diameter, we have a given target µ = 0 for the process mean; the process standard deviation σ is given. What is going on? (Table of the hourly sample means and the corresponding xBar chart.) For the xBar chart, the center line is 0 and the control limits are ±3σ/√4. The process mean has drifted: maybe the cutting blade is getting dull, or a screw got a bit loose.

Summary: the Big Ideas from Ch. 11 Random sampling takes us from the Population to the Sample. The numerical description of the population is a parameter; the numerical description of the sample is a statistic. Parameters are fixed, but unknown (usually); statistics are random, but known. We want to know the parameters; we use statistics to estimate the parameters.

Summary: the Big Ideas from Ch. 11 • Chapter 11 focuses on the following problem: how well does the sample mean xBar estimate the unknown population mean μ? • Three Big Ideas give the answer: • Law of Large Numbers: as the sample size n gets larger and larger, the sample mean gets closer and closer to the population mean. • 2-number summary for the sample mean: Let X be the basic measurement, with a given (population) mean μ(X) and standard deviation σ(X). Then the 2-number summary for the sample mean (with sample size n) is mean μ(X) and standard deviation σ(X)/√n. • Central Limit Theorem: as the sample size n gets larger and larger, the distribution of the sample mean becomes approximately normal: N(μ(X), σ(X)/√n).