Chapter 9 Mathematical Preliminaries

Stirling’s Approximation

Fig. 9.2-1 [figure]: by the trapezoid rule; take antilogs.
Fig. 9.2-2 [figure]: by the midpoint formula; take antilogs. (Section 9.2)
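Not part of the original slides: a minimal Python check of the formula these slides build toward, n! ≈ √(2πn)(n/e)^n; the ratio of the two sides tends to 1 as n grows.

```python
import math

# Compare n! with Stirling's approximation sqrt(2*pi*n) * (n/e)^n.
for n in (1, 2, 5, 10, 20, 50, 100):
    exact = math.factorial(n)
    stirling = math.sqrt(2 * math.pi * n) * (n / math.e) ** n
    print(f"n={n:3d}  ratio exact/Stirling = {exact / stirling:.6f}")
```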

To calculate the constant C exactly, use a clever trick: let k ≥ 0 and integrate by parts. (Section 9.2)
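The equations on this slide were images and are not in the transcript. As a hedged reconstruction, one standard version of the trick (which may or may not be the one intended here) determines C through the Wallis integrals; a LaTeX sketch:

```latex
% A standard derivation of the Stirling constant via Wallis's integrals.
\[
W_k = \int_0^{\pi/2} \sin^k x \, dx , \qquad k \ge 0 .
\]
Integration by parts gives the recursion
\[
W_k = \frac{k-1}{k}\, W_{k-2}, \qquad W_0 = \frac{\pi}{2}, \quad W_1 = 1 ,
\]
from which Wallis's product follows:
\[
\frac{\pi}{2} = \lim_{m \to \infty} \frac{2^{4m}\,(m!)^4}{[(2m)!]^2 \,(2m+1)} .
\]
Substituting the form $n! \approx C \sqrt{n}\,(n/e)^n$ makes the right-hand side tend to $C^2/4$,
so $C^2/4 = \pi/2$ and $C = \sqrt{2\pi}$.
```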

Substituting the value found for C gives Stirling’s formula, n! ≈ √(2πn) (n/e)^n. (Section 9.2)

Binomial Bounds

Show that the volume of a sphere of radius λn in the n-dimensional unit hypercube, i.e. the number of points within Hamming distance λn, Σ_{k=0}^{λn} C(n, k), is bounded above. Assuming 0 ≤ λ ≤ ½ (since the terms are reflected about n/2), the terms grow monotonically, so the sum is dominated by its last term; bounding that term by Stirling’s formula gives a bound of the form 2^(nH(λ)) up to a polynomial factor, where H(λ) = −λ log₂ λ − (1 − λ) log₂(1 − λ) is the binary entropy function. (Section 9.3)
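As an addition to the slides, a short Python check of the standard clean form of this bound, Σ_{k≤λn} C(n, k) ≤ 2^(nH(λ)) for 0 < λ ≤ ½ (the function and parameter names below are my own):

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), the binary entropy in bits."""
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def hamming_sphere_volume(n, radius):
    """Number of binary strings of length n within Hamming distance `radius` of a point."""
    return sum(math.comb(n, k) for k in range(radius + 1))

for n in (20, 100, 500):
    for lam in (0.1, 0.25, 0.5):
        vol = hamming_sphere_volume(n, int(lam * n))
        bound = 2 ** (n * binary_entropy(lam))
        print(f"n={n:4d}, lambda={lam:.2f}: volume={vol:.3e}, bound 2^(nH)={bound:.3e}")
```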


The Gamma Function

Idea: extend the factorial to non-integral arguments. Define Γ(n) = ∫₀^∞ x^(n−1) e^(−x) dx; by convention Γ(1) = 1, so 0! = 1. For n > 1, integrate by parts with f = x^(n−1) and dg = e^(−x) dx to get Γ(n) = (n − 1) Γ(n − 1), and hence Γ(n) = (n − 1)! for positive integers n. (Section 9.4)
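A small sketch, not from the slides, comparing a direct numerical evaluation of the defining integral with math.gamma and the factorial (the helper name and integration limits are arbitrary choices):

```python
import math

def gamma_numeric(n, upper=60.0, steps=200_000):
    """Approximate Gamma(n) = integral_0^inf x^(n-1) e^(-x) dx with the midpoint rule."""
    h = upper / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * h
        total += x ** (n - 1) * math.exp(-x)
    return total * h

for n in (1, 2, 2.5, 5, 8):
    print(f"Gamma({n}) ~ {gamma_numeric(n):.6f}   math.gamma: {math.gamma(n):.6f}")

print("Gamma(5) =", math.gamma(5), "= 4! =", math.factorial(4))  # Gamma(n) = (n-1)!
```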

[Figures: the Cartesian area element, dx dy, and the polar area element, r dr dθ. (Section 9.4)]
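The figures presumably accompany the classic polar-coordinate evaluation of Γ(1/2); assuming so, here is that standard computation in LaTeX:

```latex
% Gamma(1/2) via the change from dx dy to r dr d(theta).
\[
\Gamma\!\left(\tfrac12\right)
  = \int_0^\infty x^{-1/2} e^{-x}\,dx
  = 2\int_0^\infty e^{-t^2}\,dt
  \qquad (x = t^2,\; dx = 2t\,dt).
\]
Squaring and switching from the area element $dx\,dy$ to $r\,dr\,d\theta$:
\[
\left( 2\int_0^\infty e^{-t^2}\,dt \right)^{\!2}
  = 4\int_0^\infty\!\!\int_0^\infty e^{-(x^2+y^2)}\,dx\,dy
  = 4\int_0^{\pi/2}\!\!\int_0^\infty e^{-r^2}\,r\,dr\,d\theta
  = 4\cdot\frac{\pi}{2}\cdot\frac{1}{2}
  = \pi ,
\]
so $\Gamma(1/2) = \sqrt{\pi}$.
```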

N-Dimensional Euclidean Space

Use the Pythagorean distance, r² = x₁² + x₂² + ⋯ + x_n², to define spheres. Consequently, their volume is proportional to r^n: V_n(r) = C_n r^n. Converting to polar coordinates yields the coefficients C_n. (Section 9.5)

Just verify by substituting r² = t. (Section 9.5)

From the table on page 177 of the book:

n      C_n
2      π
3      4π/3
4      π²/2
5      8π²/15
6      π³/6
7      16π³/105
8      π⁴/24
2k     π^k / k!   → 0 as k → ∞
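A quick check, not in the original, that the standard closed form C_n = π^(n/2) / Γ(n/2 + 1) reproduces the table and tends to 0:

```python
import math

# Volume coefficient of the unit n-ball: V_n(r) = C_n * r^n with C_n = pi^(n/2) / Gamma(n/2 + 1).
def unit_ball_coefficient(n):
    return math.pi ** (n / 2) / math.gamma(n / 2 + 1)

for n in (2, 3, 4, 5, 6, 7, 8, 20, 50, 100):
    print(f"n={n:3d}  C_n = {unit_ball_coefficient(n):.6g}")

# Spot-check a couple of table entries.
assert math.isclose(unit_ball_coefficient(3), 4 * math.pi / 3)
assert math.isclose(unit_ball_coefficient(6), math.pi ** 3 / 6)
```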

Interesting Facts about N-Dimensional Euclidean Space

C_n → 0 as n → ∞, so V_n(r) → 0 as n → ∞ for a fixed r: the volume approaches 0 as the dimension increases!
Almost all of the volume is near the surface (as n → ∞). (End of Section 9.5)
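To make the surface-concentration claim concrete, a small sketch (my own illustration) of the fraction of an n-ball's volume within relative distance ε of the surface, which is 1 − (1 − ε)^n since V_n(r) ∝ r^n:

```python
# Fraction of an n-ball's volume lying in the thin shell between radius (1 - eps)*r and r:
# since V_n(r) is proportional to r^n, that fraction is 1 - (1 - eps)^n.
eps = 0.01
for n in (1, 10, 100, 1000, 10000):
    shell_fraction = 1 - (1 - eps) ** n
    print(f"n={n:6d}: fraction of volume within {eps:.0%} of the surface = {shell_fraction:.4f}")
```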

Angle between the vector (1, 1, …, 1) and each coordinate axis: by definition, cos θ = (length of the projection along the axis) / (length of the entire vector) = 1/√n. As n → ∞, cos θ → 0, so θ → π/2: for large n, the diagonal line is almost perpendicular to each axis!

What about the angle between random vectors x and y of the form (±1, ±1, …, ±1)? Here cos θ = (x·y)/n, an average of n independent ±1 terms, which tends to 0 for large n. Hence, for large n, there are almost 2^n random diagonal lines which are almost perpendicular to each other! (End of Section 9.8)
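A simulation sketch, not from the slides, drawing random ±1 vectors and confirming that their pairwise angles concentrate near 90° as n grows (sample counts are arbitrary):

```python
import math
import random

random.seed(0)

def random_pm1(n):
    """A random vector of the form (±1, ±1, ..., ±1)."""
    return [random.choice((-1, 1)) for _ in range(n)]

for n in (10, 100, 1000, 10000):
    angles = []
    for _ in range(200):
        x, y = random_pm1(n), random_pm1(n)
        cos_theta = sum(a * b for a, b in zip(x, y)) / n  # |x| = |y| = sqrt(n)
        angles.append(math.degrees(math.acos(cos_theta)))
    mean = sum(angles) / len(angles)
    spread = max(abs(a - 90) for a in angles)
    print(f"n={n:6d}: mean angle = {mean:6.2f} deg, max deviation from 90 deg = {spread:5.2f}")
```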

Chebyshev’s Inequality

Let X be a discrete or continuous random variable with p(x_i) = the probability that X = x_i. The mean square is E{X²} = Σ_i x_i² p(x_i) ≥ 0 (an integral in the continuous case). Since every term with |x_i| ≥ ε contributes at least ε² p(x_i), we have E{X²} ≥ ε² P{|X| ≥ ε}, which gives Chebyshev’s inequality: P{|X| ≥ ε} ≤ E{X²} / ε². (Section 9.7)
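An empirical sanity check, added here, of Chebyshev's inequality in its familiar variance form P{|X − μ| ≥ kσ} ≤ 1/k²; the exponential distribution is just an arbitrary test case:

```python
import random

random.seed(1)

# Sample an arbitrary non-normal distribution (exponential with mean 1, variance 1).
samples = [random.expovariate(1.0) for _ in range(200_000)]
mu = sum(samples) / len(samples)
var = sum((x - mu) ** 2 for x in samples) / len(samples)
sigma = var ** 0.5

for k in (1.5, 2, 3, 4):
    tail = sum(1 for x in samples if abs(x - mu) >= k * sigma) / len(samples)
    print(f"k={k}: P(|X - mu| >= k*sigma) ~ {tail:.4f}  <=  1/k^2 = {1 / k**2:.4f}")
```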

Variance

The variance of X is the mean square about the mean value of X: V{X} = E{(X − E{X})²}. So the variance, via linearity of E, is V{X} = E{X²} − (E{X})². (Section 9.7)

Note: V{1} = 0; more generally V{c} = 0 for any constant c, and V{cX} = c² V{X}.

The Law of Large Numbers

Suppose X and Y are independent random variables, with E{X} = a, E{Y} = b, V{X} = σ², V{Y} = τ². Then, because of independence,
E{(X − a)(Y − b)} = E{X − a} · E{Y − b} = 0 · 0 = 0,
and therefore
V{X + Y} = E{(X − a + Y − b)²} = E{(X − a)²} + 2E{(X − a)(Y − b)} + E{(Y − b)²} = V{X} + V{Y} = σ² + τ². (Section 9.8)

Consider n independent trials of X, called X₁, …, X_n. The expectation of their average A = (X₁ + ⋯ + X_n)/n is (as expected!) E{A} = a.
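A small simulation, not in the original, confirming V{X + Y} = V{X} + V{Y} for independent X and Y (the two distributions are arbitrary choices):

```python
import random

random.seed(2)

def var(xs):
    """Sample variance (population form) of a list of numbers."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

N = 200_000
xs = [random.uniform(0, 1) for _ in range(N)]     # V{X} = 1/12
ys = [random.expovariate(2.0) for _ in range(N)]  # V{Y} = 1/4
sums = [x + y for x, y in zip(xs, ys)]

print(f"V(X) ~ {var(xs):.4f}, V(Y) ~ {var(ys):.4f}, "
      f"V(X)+V(Y) ~ {var(xs) + var(ys):.4f}, V(X+Y) ~ {var(sums):.4f}")
```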

The variance of their average is (remember independence) V{A} = V{(X₁ + ⋯ + X_n)/n} = (1/n²)(V{X₁} + ⋯ + V{X_n}) = σ²/n.

So, what is the probability that their average A is not close to the mean E{X} = a? Use Chebyshev’s inequality:
P{|A − a| ≥ ε} ≤ V{A}/ε² = σ²/(n ε²) → 0 as n → ∞.

Weak Law of Large Numbers: the average of a large enough number of independent trials comes arbitrarily close to the mean with arbitrarily high probability. (Section 9.8)
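Finally, a simulation sketch (my addition) of the weak law for coin flips: the probability that the average of n trials misses the mean by ε or more shrinks with n and stays below the Chebyshev bound σ²/(nε²):

```python
import random

random.seed(3)

# X is a fair coin flip: Bernoulli(1/2), so a = E{X} = 0.5 and sigma^2 = V{X} = 0.25.
a, sigma2, eps, trials = 0.5, 0.25, 0.05, 1000

for n in (10, 100, 1000, 10000):
    misses = 0
    for _ in range(trials):
        avg = sum(random.random() < 0.5 for _ in range(n)) / n
        if abs(avg - a) >= eps:
            misses += 1
    bound = sigma2 / (n * eps * eps)
    print(f"n={n:6d}: P(|A - a| >= {eps}) ~ {misses / trials:.4f}   Chebyshev bound = {bound:.4f}")
```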