Psyc 235: Introduction to Statistics

Presentation transcript:

1 Psyc 235: Introduction to Statistics
DON'T FORGET TO SIGN IN FOR CREDIT! Open the demo before class.

2 About the Graded Assessment…
Number one predictor of performance on the assessment: how much of the content you've covered.
Importance of time on ALEKS:
Helps provide a measure to pace yourself.
Keeps you on track for the option of the extra-credit final.
However! Your grade is based on how much of the content you've learned. You need to keep up with the content goals!
(Events can have outcomes in common, but they don't affect each other.)

3 Trouble meeting content goals?
All content goals are listed on the syllabus (available on the course webpage).
Please attend office hours and lab. We are here to help!
Special invited lectures:
Mandatory for invited students.
Will cover topics that are giving folks trouble.
Expect notices in the next couple of weeks.

4 Concerned about assessment grade?
Catch up on content as soon as possible.
Remember the final extra-credit option.
Feel free to contact us for more specific advice.

5 Moving Forward: Mid-course evaluation forms soon.
Suggestions for course, lecture, lab format.

6 Data World vs. Theory World
Theory World: an idealization of reality (an idealization of what you might expect from a simple experiment).
POPULATION
Parameter: a number that describes the population; fixed but usually unknown.
Data World: the data that result from an actual simple experiment.
SAMPLE
Statistic: a number that describes the sample (e.g., mean, standard deviation, sum, ...).

7 Last Week… Binomial:
n: # of independent trials
p: probability of "success"
q: probability of failure (1 − p)
X = # of the n trials that are "successes"
μ_X = np
σ_X = √(np(1 − p))

8 Binomial Probability Formula
P(X = k) = C(n, k) · p^k · (1 − p)^(n − k)
Here X is the binomial random variable, k is the specific # of successes you could get, n − k is the specific # of failures, p is the probability of success, and 1 − p is the probability of failure. C(n, k) is the combination, called the binomial coefficient.
Worked example: n = 10, k = 7, p = .5 gives P(X = k) ≈ .117.
Note: for P(X ≥ k), sum P(X = k) for each k in the range.
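The slides don't include code, but one quick way to check these numbers is scipy's binomial distribution. This is a sketch of that check, not part of the original lecture; the function names are standard scipy.stats calls.

from scipy.stats import binom

n, p = 10, 0.5
print(binom.pmf(7, n, p))                  # P(X = 7) ~ 0.117, the slide's worked example
print(binom.sf(6, n, p))                   # P(X >= 7) = 1 - P(X <= 6) ~ 0.172
print(binom.mean(n, p), binom.std(n, p))   # mu = np = 5, sigma = sqrt(np(1-p)) ~ 1.58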

9 Jason’s Coin Toss Demo:
Population: outcomes of all possible coin tosses (for a fair coin).
Bernoulli trial: one coin toss; success = heads, p = .5.
Sample: 10 tosses, n = 10 (sample size); X = ....
Sampling distribution

10 Jason’s Coin Toss Demo:
Population: outcomes of all possible coin tosses (for a fair coin).
And, we can use the formulas we've learned to calculate the population parameters for the sampling distribution:
μ_X = np = 10 × .5 = 5
σ_X = √(np(1 − p)) = √(10 × .5 × .5) ≈ 1.58
Sample: X = ....
Sampling distribution
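A rough simulation along the lines of the coin-toss demo (this sketch is an assumption, not the actual classroom demo): draw many samples of X = # of heads in 10 tosses and compare the resulting mean and spread with μ = 5 and σ ≈ 1.58.

import numpy as np

rng = np.random.default_rng(0)
samples = rng.binomial(n=10, p=0.5, size=100_000)  # many repeated samples of X

print(samples.mean())  # ~5, matches mu = np
print(samples.std())   # ~1.58, matches sigma = sqrt(np(1-p))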

11 With different sample sizes, you all discovered something interesting…
With large n, the binomial distribution starts to look like a normal distribution!

12 What is a Normal Distribution?
A class of distributions with the same overall shape.
A continuous probability distribution defined by two parameters: mean μ and stdev σ.
Special case: the standard normal distribution.

13 Standard Normal Distribution
A distribution of z-scores (standardized scores), derived by z = (x − μ) / σ.
Note: μ = 0, σ = 1.
Allows comparisons of scores from different normal distributions.
Note the link between area and p(x): area = probability.
Note also: +1 unit equals +1 σ.
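As a small illustration (not from the slides; the raw-score mean and sd here are hypothetical), standardizing scores with z = (x − μ)/σ leaves a distribution with mean about 0 and standard deviation about 1:

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(loc=150, scale=125, size=50_000)  # hypothetical raw scores
z = (x - x.mean()) / x.std()                     # z = (x - mu) / sigma

print(round(z.mean(), 3), round(z.std(), 3))     # ~0.0, ~1.0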

14 Probability & Standardizing Scores
The standard normal distribution allows us to easily calculate probabilities for any normal distribution.
Example: Say that we know that the checking account balance for a UIUC student is normally distributed with an average balance of $150 and a standard deviation of $125. What is the probability of a randomly selected student having a balance of...
More than $250? z = (250 − 150)/125 = 0.8; p(z > 0.8) ≈ .21. (Note: the ALEKS button only does <, so you must compute 1 − p.)
Less than $0? z = (0 − 150)/125 = −1.2; p(z < −1.2) ≈ .115.
Between $100 and $200? z = (100 − 150)/125 = −0.4 and z = (200 − 150)/125 = 0.4; p(−0.4 < z < 0.4) = p(z < 0.4) − p(z < −0.4) ≈ .31.
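The same three probabilities can be verified directly with scipy's normal distribution; this sketch is only a cross-check, not part of the ALEKS workflow described on the slide.

from scipy.stats import norm

mu, sigma = 150, 125
print(norm.sf(250, mu, sigma))                              # P(X > 250)       ~ 0.21
print(norm.cdf(0, mu, sigma))                               # P(X < 0)         ~ 0.115
print(norm.cdf(200, mu, sigma) - norm.cdf(100, mu, sigma))  # P(100 < X < 200) ~ 0.31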

15 Why do we care so much about Normal Distributions?
What happened to the binomial distribution as n increased?
Central Limit Theorem: as the sample size n increases, the distribution of the sample average approaches a normal distribution with mean µ and variance σ²/n, irrespective of the shape of the original distribution.

16 Wait. What? Example: Rolling one die, multiple dice…
So, just like flipping the coin, the distribution of the sum of n observations across many samples approaches the normal. Since the mean of a sample is the sum of all observations divided by n (where n is constant for all samples), this same principle applies to the sample mean.
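A quick sketch of the dice version of this idea (an assumption on my part, not the original demo): as n grows, the mean of n die rolls stays centered at 3.5 while its spread shrinks like σ/√n and the shape looks increasingly normal.

import numpy as np

rng = np.random.default_rng(2)
for n in (1, 2, 10, 30):
    means = rng.integers(1, 7, size=(100_000, n)).mean(axis=1)  # sample means of n rolls
    print(n, round(means.mean(), 2), round(means.std(), 3))     # mean ~3.5, sd ~1.71/sqrt(n)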

17 Hmm. Ok… But, does the underlying distribution really not matter?
Note that the size of n slightly changed the shape of the normal distribution. Also, note that the central limit theorem stated the mean was µ and the variance was σ²/n (so stdev = σ/√n). The variance is a little different than before, isn't it?

18 T distributions
Because the normal distribution only becomes a good approximation for a sampling distribution as n increases, we have the t distribution to adjust for smaller samples. The t distribution varies depending on the number of degrees of freedom (n − 1). With lower n, the t distribution is more spread out, which means that getting more extreme values is more probable with low n.
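A brief numerical sketch of that "more spread out" claim (not from the slides): the tail area beyond the same cutoff is larger for t distributions with few degrees of freedom than for the standard normal, and it shrinks toward the normal value as df grows.

from scipy.stats import norm, t

for df in (2, 5, 30):
    print(df, round(t.sf(2.0, df), 4))   # P(T > 2) with df degrees of freedom
print("normal", round(norm.sf(2.0), 4))  # P(Z > 2) ~ 0.0228 for comparison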

19 So what good does that do us, anyway?
Because we can assume that a sampling distribution will be approximately normal with a large n, we can use this distribution to estimate the probability of obtaining a given sample.

20 Example: (aka excuse to show pictures of my dog)
A large dog shelter in Chicago wants to increase awareness of the adorable pups they have for adoption by bringing some dogs to a local festival. They have 50 people who have volunteered to walk the dogs around the festival. In the shelter there are several hundred dogs. The shelter knows that on average their dogs have a 14-point adoptability score (a combination of things like behavior, training, breeding, cuteness, etc.), and the scores tend to vary by about 3. The shelter would prefer to show dogs that have an average adoptability score of at least 16. Should they go through all the dogs and select 50 by hand, or are they likely to get a group with this average by chance? Notice that we don't know what the underlying distribution of adoptability scores looks like at this shelter, but because of the CLT we can still come up with an answer.

21 Example: (aka excuse to show pictures of my dog)
A large dog shelter in Chicago wants to increase awareness of the adorable pups they have for adoption by bringing some dogs to a local festival. They have 50 people who have volunteered to walk the dogs around the festival. In the shelter there are several hundred dogs. The shelter knows that on average their dogs have a 14-point adoptability score (a combination of things like behavior, training, breeding, cuteness, etc.), and the scores tend to vary by about 3. The shelter would prefer to show dogs that have an average adoptability score of at least 16. Should they go through all the dogs and select 50 by hand, or are they likely to get a group with this average by chance?
What information is important here? µ = 14, σ = 3, X̄ = 16, n = 50.
t = (16 − 14) / (3/√50) ≈ 4.71; p(t > 4.71) ≈ .00001, so they had better hand-pick the dogs.
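The same calculation in code, as a sketch of the slide's arithmetic rather than its actual worked solution (the use of scipy and df = n − 1 are my assumptions):

from math import sqrt
from scipy.stats import t

mu, sigma, xbar, n = 14, 3, 16, 50
t_stat = (xbar - mu) / (sigma / sqrt(n))   # ~4.71
print(t_stat, t.sf(t_stat, df=n - 1))      # tail probability ~.00001, per the slide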

22 A couple more distributions
There are two more distributions that we will need later. ALEKS is familiarizing you with them now so that you know how to use the calculators, etc., when it comes up. Generally, you should know:
The shape of the distribution.
How to use the distribution practically (at this point this means using the ALEKS calculator to find the probability of a given value in a distribution), so don't worry.
A vague concept of what the distribution means.

23 Chi-Square (χ²) Distribution
The distribution of the sum of two or more squared standard normal variables: if Z₁, ..., Z_k are standard normal, then Z₁² + ... + Z_k² ~ χ²(k), where k is the number of groups (terms) summed. This is useful because later, when we're comparing multiple distributions, we will want to determine whether two distributions are the same thing added together or are actually two separate distributions.
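A small sketch (not from the slides) of that definition: simulated sums of k squared standard normals have the mean and tail area that scipy's chi-square distribution with k degrees of freedom predicts.

import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(3)
k = 3
q = (rng.normal(size=(100_000, k)) ** 2).sum(axis=1)  # sums of k squared standard normals

print(round(q.mean(), 2), chi2.mean(k))                         # both ~3 (mean of chi-square(k) is k)
print(round(np.mean(q > 7.81), 3), round(chi2.sf(7.81, k), 3))  # tail areas agree (~0.05)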

24 F distribution
The distribution of the variance of one sample from a normally distributed population divided by the variance of another. This will be useful later when we want to test whether there is more variance within groups than across groups (ANOVA)… if there is greater within-group variance, then it's unlikely that the groupings are meaningful. d1 is the degrees of freedom of the top (numerator) variance; d2 is the degrees of freedom of the bottom (denominator) variance.
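A rough sketch of that definition (my own illustration, with arbitrary sample sizes, not part of the slide): ratios of sample variances from the same normal population follow an F distribution with (d1, d2) = (n1 − 1, n2 − 1) degrees of freedom.

import numpy as np
from scipy.stats import f

rng = np.random.default_rng(4)
n1, n2 = 10, 15
a = rng.normal(size=(50_000, n1))
b = rng.normal(size=(50_000, n2))
ratio = a.var(axis=1, ddof=1) / b.var(axis=1, ddof=1)   # variance ratios

d1, d2 = n1 - 1, n2 - 1
print(round(np.mean(ratio > 2.65), 3), round(f.sf(2.65, d1, d2), 3))  # tail areas agree (~0.05)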

25 Next Week
Keep up with the content goals.
Watch for an email about course evaluations/suggestions.
Please let us know if you want or need help.
If you've fallen behind, expect to be contacted by email.
Have a good week everyone!

