Presentation on theme: "1.1 Random walk on Z; 1.1.1 Simple random walk" — Presentation transcript:

1 1.1 Random walk on Z. 1.1.1 Simple random walk. Let X_1, X_2, … be independent identically distributed (iid) random variables with only two possible values {1, -1}, such that P(X_i = 1) = p and P(X_i = -1) = q = 1 - p. Define the simple random walk process (S_n) by S_0 = 0 and S_n = X_1 + … + X_n. Define T_1 = min{n ≥ 1 : S_n = 1}, the waiting time of the first visit to state 1.
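As a quick illustrative sketch (not from the slides; the function name and parameters are my own), the walk can be simulated directly:

```python
import random

def simple_random_walk(n_steps, p=0.5, seed=0):
    """Path S_0 = 0, S_n = X_1 + ... + X_n with P(X_i=1)=p, P(X_i=-1)=1-p."""
    rng = random.Random(seed)
    s, path = 0, [0]
    for _ in range(n_steps):
        s += 1 if rng.random() < p else -1
        path.append(s)
    return path

path = simple_random_walk(10, p=0.5, seed=42)
```

Every step changes the position by exactly ±1, which is all the recurrence arguments below rely on.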

2 A state i is called recurrent if the chain returns to i with probability 1 in a finite number of steps; otherwise the state is transient. If we define the waiting time variable T_i = min{n ≥ 1 : S_n = i}, given S_0 = i, then state i is recurrent if P(T_i < ∞) = 1. That is, the returns to the state i are sure events. The state i is transient if P(T_i < ∞) < 1; in this case there is positive probability of never returning to state i. A recurrent state i is positive recurrent if E[T_i] < ∞; hence if E[T_i] = ∞ then the state i is null recurrent.

3 We say that i leads to j, and write i → j, if P(S_n = j | S_0 = i) > 0 for some n ≥ 0. We say i communicates with j, and write i ↔ j, if both i → j and j → i. A Markov chain with state space S is said to be irreducible if i ↔ j for all i, j ∈ S. If x is recurrent and x → y, then y is recurrent.

4 If the Markov chain is irreducible and one of the states is recurrent, then all the states are recurrent. The simple random walk on Z is recurrent iff p = q = 1/2: 1) If p > q, then S_n → +∞ almost surely. 2) If p < q, then S_n → -∞ almost surely.

5 We define the state or position of the random walk at time zero as S_0 = 0, and the position at time n as S_n = X_1 + … + X_n. Then by the strong law of large numbers, S_n/n → E[X_1] almost surely, because X_1, X_2, … are independent identically distributed, and from the definition E[X_1] = p - q. Hence if p ≠ q then S_n/n → p - q ≠ 0, and consequently |S_n| → ∞ almost surely. This means that the number of visits to zero is finite almost surely, which means that with probability 1 the number of visits is finite, so the state 0 is transient.

6 Hence, for p ≠ q, zero is a transient state. Now if p = q = 1/2, we claim recurrence. It is enough to show that 0 is recurrent (from Theorem 1.2.4): the state 0 is recurrent iff Σ_n P(S_n = 0 | S_0 = 0) = ∞. Define u_{2n} = P(S_{2n} = 0 | S_0 = 0), where u_{2n} = C(2n, n)(1/2)^{2n}, since a return at time 2n requires exactly n up-steps and n down-steps.

7 Now, using Stirling's formula n! ~ √(2πn)(n/e)^n, we get u_{2n} ~ 1/√(πn). Hence Σ_n u_{2n} = ∞, so 0 is recurrent, and consequently the simple symmetric random walk on Z is recurrent. Now we will show that the symmetric simple random walk is null recurrent, which means that E[T_0] = ∞, where T_0 = min{n ≥ 1 : S_n = 0}, given S_0 = 0.
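A small numeric check of the Stirling step (my own sketch, using exact binomial coefficients): the ratio u_{2n}·√(πn) approaches 1, and the partial sums of u_{2n} keep growing.

```python
import math

def u2n(n):
    """Exact return probability u_{2n} = C(2n, n) / 4^n."""
    return math.comb(2 * n, n) / 4 ** n

# Stirling gives u_{2n} ~ 1 / sqrt(pi * n), so this ratio should approach 1.
ratios = [u2n(n) * math.sqrt(math.pi * n) for n in (10, 100, 1000)]
# The series sum_n u_{2n} diverges (it behaves like sum 1/sqrt(n)).
partial_sum = sum(u2n(n) for n in range(1, 2000))
```

The divergence of the partial sums is exactly the recurrence criterion used on this slide.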

8 The probability generating function of T_0 is defined as F(s) = E[s^{T_0}] = Σ_n P(T_0 = n)s^n. Then let U(s) = Σ_n u_n s^n be the generating function of the return probabilities, so that U(s) = 1/(1 - F(s)). Also, for the symmetric walk, U(s) = 1/√(1 - s²). Hence F(s) = 1 - √(1 - s²), and E[T_0] = lim_{s→1⁻} F′(s).

9 Put F′(s) = s/√(1 - s²). Hence E[T_0] = lim_{s→1⁻} s/√(1 - s²) = ∞, and P(T_0 < ∞) = F(1) = 1. Hence zero is null recurrent, and the simple symmetric random walk on Z is null recurrent. We proved that the simple random walk on Z is recurrent iff p = 1/2.

10 (Borel–Cantelli lemma.) If (A_n) is an infinite sequence of events, then: 1. If Σ_n P(A_n) < ∞, then P(A_n occurs infinitely often) = 0. 2. If Σ_n P(A_n) = ∞, then P(A_n occurs infinitely often) = 1, provided that the A_n are independent events.

11 It is well known that if X, Y are independent random variables, then E[XY] = E[X]E[Y], but this may fail in the case of an infinite product. To show this we introduce the following counterexample.

12 * Consider X_1, X_2, … to be independent random variables such that P(X_n = 0) = P(X_n = 2) = 1/2, so E[X_n] = 1. That is, we have an infinite sequence of independent identically distributed random variables. We define a new sequence (Y_n) as follows: Y_n = X_1 X_2 ⋯ X_n; then E[Y_n] = E[X_1]⋯E[X_n] = 1. Now define the waiting time T = min{n : X_n = 0}, that is, the first time the sequence equals zero. Then P(T = n) = (1/2)^n.

13 On the other hand, Σ_n P(T = n) = Σ_n (1/2)^n = 1; hence P(T < ∞) = 1 and T is finite almost surely. Therefore the infinite product Π_n X_n = 0 almost surely, and E[Π_n X_n] = 0 ≠ 1 = Π_n E[X_n].
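A Monte Carlo sketch of this counterexample (the sample size and seed are my own choices): although E[X_k] = 1 for every factor, the running product is 0 in essentially every simulated path.

```python
import random

rng = random.Random(1)
n_factors, trials = 50, 10_000
zero_products = 0
for _ in range(trials):
    prod = 1
    for _ in range(n_factors):
        prod *= rng.choice([0, 2])   # P(X=0) = P(X=2) = 1/2, so E[X] = 1
        if prod == 0:
            break
    zero_products += (prod == 0)

frac_zero = zero_products / trials   # ~ 1 - 2**(-50), i.e. essentially 1
```

The empirical mean of the product is therefore (almost always) 0 even though the product of the expectations is 1: the huge value 2^50 that balances the expectation is far too rare ever to be sampled.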

14 i. X is a finite random variable iff P(|X| < ∞) = 1. ii. X is bounded iff there exists M < ∞ such that P(|X| ≤ M) = 1. Now we introduce the notion of Lebesgue measure and the Borel σ-field.

15 · A σ-field F is a collection of subsets of Ω such that: 1. Ω ∈ F. 2. If A ∈ F then A^c ∈ F. 3. If A_1, A_2, … ∈ F then ∪_n A_n ∈ F. The elements of F are called measurable sets or events. * The intersection of all σ-fields that contain all open intervals is called the Borel σ-field and denoted by B. It is known that Lebesgue measure λ on (R, B) is the only measure that assigns to every interval (a, b] the measure λ((a, b]) = b - a. Also E[X + Y] = E[X] + E[Y], provided that EX and EY are finite.

16 (Wald's identity.) Note: if X_1, X_2, … are independent identically distributed random variables, and N is a positive integer-valued random variable independent of (X_n), and E[N] is finite, then E[X_1 + … + X_N] = E[N]·E[X_1]. Hence E[S_N] is finite whenever E[N] and E[X_1] are finite.
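A Monte Carlo sketch of Wald's identity (the concrete choices p = 0.7 and N uniform on {1, …, 10} are mine): the sample mean of S_N should be close to E[N]·E[X_1] = 5.5 × 0.4 = 2.2.

```python
import random

rng = random.Random(0)
trials = 100_000
total = 0
for _ in range(trials):
    n = rng.randint(1, 10)   # N, drawn independently of the steps
    total += sum(1 if rng.random() < 0.7 else -1 for _ in range(n))

estimate = total / trials    # should be near E[N] * E[X1] = 5.5 * 0.4 = 2.2
```

Note that N is drawn before, and independently of, the steps it counts; the counterexample two slides below breaks exactly this independence/finiteness requirement.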

17 If E[N] < ∞ and E|X_1| < ∞, then E[S_N] = E[N]·E[X_1]. Proof sketch: write S_N = Σ_{i≥1} X_i·1{N ≥ i}. We define a new sequence Y_i = X_i·1{N ≥ i}; since N is independent of the X_i, E[Y_i] = E[X_1]·P(N ≥ i). Now E[S_N] = Σ_i E[X_1]·P(N ≥ i) = E[X_1]·E[N].

18 · But is this theorem true if E[N] = ∞? The following example shows that this theorem may fail in that case. Define X_1, X_2, … to be independent identically distributed random variables such that P(X_n = 1) = P(X_n = -1) = 1/2, and define S_n = X_1 + … + X_n and T = min{n : S_n = 1}.

19 Then S_T = 1, so E[S_T] = 1. On the other hand, E[X_1] = 0. Because we have the symmetric simple random walk, recurrence implies that P(T < ∞) = 1, and then E[T]·E[X_1] = 0 ≠ 1 = E[S_T]; in fact E[T] = ∞. Define also the events {T = n}.

20 The event {T = n} does or does not occur depending only on X_1, …, X_n, which means that {T = n} ∈ σ(X_1, …, X_n). Hence 1{T ≥ n} and X_n are independent random variables, so E[X_n·1{T ≥ n}] = E[X_n]·P(T ≥ n) = 0. Hence from (2) and (3) we conclude that the Wald computation gives 0 while E[S_T] = 1: the identity fails because E[T] = ∞. The σ-field generated by a random variable X is σ(X) = {X^{-1}(B) : B ∈ B}, where B is the Borel σ-field.

21 A common error is repeated in some books about linear correlation, namely: ρ(X, Y) = 1 iff Y = aX + b for some a > 0. (1) But this relation must be written as: ρ(X, Y) = 1 iff P(Y = aX + b) = 1 for some a > 0. (2) The following example shows that formula (1) may not be true. * Consider the triple (Ω, F, P) where Ω = (0, 1), F is the Borel σ-field restricted to (0, 1), and P = Lebesgue measure on (0, 1). Define the random variable X(ω) = ω.

22 Now define Y(ω) = ω for ω ≠ 1/2 and Y(1/2) = 0, so that P(Y = X) = 1 and ρ(X, Y) = 1. Suppose for the contrary that Y = aX + b everywhere, as (1) asserts; from the points ω ≠ 1/2 we get a = 1, b = 0, while at ω = 1/2 we get 0 = 1/2. This is a contradiction, so assumption (1) is not true; only the almost-sure statement (2) holds.

23 It is already known that if the moment generating function exists, then all the moments exist; but it can happen that all the moments exist while the moment generating function does not exist. The following counterexample explains this. We know that if the moment generating function M(t) = E[e^{tX}] is finite for t in a neighborhood of 0, then all moments are finite. Let X ~ N(0, 1).

24 And define Y = e^X; that is, Y follows the lognormal distribution. Hence all the moments of Y exist: E[Y^k] = E[e^{kX}] = e^{k²/2} < ∞ for every k. Whereas the moment generating function of Y does not exist, as we show now: for any t > 0, E[e^{tY}] = E[e^{t e^X}] = ∞, since e^{t e^x} grows faster than the Gaussian density decays. Then the moment generating function does not exist.

25 The sequence of moments does not determine uniquely the distribution; the following example shows this. We have the following two distributions with identical sequences of moments: the lognormal density f(x) and the perturbed density g(x) = f(x)[1 + sin(2π ln x)] for x > 0.

26 Now, for every n ≥ 0, ∫₀^∞ x^n f(x) sin(2π ln x) dx = 0: substituting y = ln x and then shifting by the integer n leaves sin(2πy) unchanged and makes the Gaussian factor even, so the integral vanishes by symmetry. Hence both densities have the same n-th moment e^{n²/2}. Obviously f ≠ g. Also g ≥ 0 and integrates to 1, so it is a genuine density.

27 It is known from the literature that the sequence of moments (m_n) of a random variable uniquely determines the distribution if it satisfies one of several equivalent conditions, for example Carleman's condition: Σ_n m_{2n}^{-1/(2n)} = ∞.

28 The joint distribution of random variables determines uniquely the marginal distributions, but the converse is not necessarily true. The following is a joint probability function for X, Y ∈ {0, 1}, for any a ∈ [0, 1/2]: P(X=0, Y=0) = P(X=1, Y=1) = a, and P(X=0, Y=1) = P(X=1, Y=0) = 1/2 - a.

29 These two marginal functions correspond to the joint function for any a ∈ [0, 1/2]. The marginal distribution of X is P(X=0) = P(X=1) = 1/2, and the marginal distribution of Y is P(Y=0) = P(Y=1) = 1/2. Conclusion: the marginal distributions don't determine the joint distribution.
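The family above can be checked mechanically (a sketch of mine, in exact arithmetic): two different choices of a give different joints with identical marginals.

```python
from fractions import Fraction as F

def joint(a):
    """Joint pmf on {0,1}^2: a on the diagonal, 1/2 - a off it."""
    return {(0, 0): a, (1, 1): a, (0, 1): F(1, 2) - a, (1, 0): F(1, 2) - a}

def marginal(p, axis):
    out = {0: F(0), 1: F(0)}
    for (x, y), v in p.items():
        out[(x, y)[axis]] += v
    return out

p1, p2 = joint(F(1, 4)), joint(F(1, 8))   # a = 1/4 is the independent case
```

Both joints sum to 1 and have uniform marginals in each coordinate, yet they assign different probabilities to the diagonal.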

30 "Positively correlated" is not a transitive property. A matrix A is said to be positive definite if it satisfies one of the following equivalent conditions: i. x^T A x > 0 for every x ≠ 0. ii. All the eigenvalues are positive. iii. The determinants of all leading (upper-left) sub-matrices are positive.

31 Any positive definite matrix can be the variance–covariance matrix of some random vector. · Now consider, for instance, the matrix Σ with unit diagonal and off-diagonal entries Σ_{12} = Σ_{23} = 0.4, Σ_{13} = -0.4. We can verify that Σ is positive definite, so it is the variance–covariance matrix of some random vector (X, Y, Z). Then Cov(X, Y) > 0 and Cov(Y, Z) > 0, so X and Y are positively correlated, and Y and Z as well, but Cov(X, Z) < 0, so X and Z are negatively correlated.
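The matrix used here is one concrete choice (the value 0.4 is mine; any sufficiently small off-diagonal magnitude works). A sketch that checks positive definiteness via the leading-minor criterion from the previous slide:

```python
S = [[1.0, 0.4, -0.4],
     [0.4, 1.0, 0.4],
     [-0.4, 0.4, 1.0]]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

# Leading minors: 1, 0.84, 0.392 -- all positive, so S is positive definite.
minors = [S[0][0], det2([row[:2] for row in S[:2]]), det3(S)]
```

Since Cov(X,Y) = Cov(Y,Z) = 0.4 > 0 while Cov(X,Z) = -0.4 < 0, positive correlation fails to be transitive for this vector.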

32 We say that (X_n) converges weakly to X if F_n(t) → F(t) at all continuity points t of F. This is denoted by X_n ⇒ X or X_n →d X. The following example shows that a sequence of continuous random variables does not necessarily converge weakly to a continuous random variable.

33 Let X ≡ 0. Then X is a degenerate random variable with cumulative distribution function F(t) = 0 for t < 0 and F(t) = 1 for t ≥ 0. Consider a sequence X_n ~ Uniform(0, 1/n), with cumulative distribution functions F_n(t) = min(max(nt, 0), 1). Then F_n(t) → F(t) at every t ≠ 0, that is, at all continuity points of F. Obviously each F_n is the cumulative distribution function of a continuous random variable, whereas F is the cumulative distribution function of a discrete random variable X. Nevertheless, X_n ⇒ X.

34 The median is any x such that P(X ≤ x) ≥ 1/2 and P(X ≥ x) ≥ 1/2. If X is a continuous random variable with strictly increasing F, then the median is unique.

35 Consider a random variable X taking the values 0, 1, 2 with probabilities P(X=0) = 1/2, P(X=1) = 1/4, P(X=2) = 1/4. The cumulative distribution function of X satisfies F(0) = 1/2; thus the median is any x ∈ [a, b], which means that a = 0 and b = 1. That is, the median is not unique. This is a disadvantage of the median as a measure of location, compared with the mean.

36 Let X ~ Exp(1), so F(x) = 1 - e^{-x} for x ≥ 0. The median does not satisfy the relation Med(X + Y) = Med(X) + Med(Y). Assume that Y is an independent copy of X, so we can find Med(X) and Med(Y) as follows: solving 1 - e^{-m} = 1/2 gives m = log 2.

37 This implies that Med(X) = Med(Y) = log 2. Now, to calculate Med(Z) for Z = X + Y, suppose for the contrary that Med(X + Y) = Med(X) + Med(Y). This means that Med(Z) = 2 log 2. But Z ~ Gamma(2, 1), with probability density function f_Z(z) = z e^{-z} for z > 0.

38 If the median of Z were 2 log 2, then P(Z ≤ 2 log 2) = 1/2. However, P(Z ≤ 2 log 2) = 1 - e^{-2 log 2}(1 + 2 log 2) = 1 - (1 + 2 log 2)/4 ≈ 0.4034 ≠ 1/2. Hence Med(Z) ≠ Med(X) + Med(Y).
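The probability above can be checked with the closed-form Gamma(2, 1) distribution function (a sketch; the helper name is mine):

```python
import math

def gamma2_cdf(z):
    """P(Z <= z) for Z ~ Gamma(2, 1), i.e. density z * exp(-z)."""
    return 1 - math.exp(-z) * (1 + z)

value = gamma2_cdf(2 * math.log(2))   # = 1 - (1 + 2 log 2) / 4 ~ 0.4034
```

Since this value is not 1/2, the point 2 log 2 cannot be the median of Z.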

39 The mode is the value of X that makes the density (or mass) function maximum. The mode is not unique: for instance, a random variable X with distribution P(X=0) = 2/5, P(X=1) = 2/5, P(X=2) = 1/5 has two modes, 0 and 1.

40 We show in this example that the mode is not a linear operator; this is a disadvantage of the mode. Suppose that X and Y both have the distribution P(X=0) = 3/5, P(X=1) = 2/5, and that they are independent. Let Z = X + Y, so Z takes the values 0, 1, 2 with probabilities 9/25, 12/25, 4/25.
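A sketch with one concrete mass function (the numbers 0.6/0.4 are my choice; any pmf whose strict mode at 0 is lost under convolution works): convolving the pmf of X with itself gives the pmf of Z = X + Y, whose mode differs from Mode(X) + Mode(Y).

```python
px = {0: 0.6, 1: 0.4}        # Mode(X) = 0

pz = {}                      # pmf of Z = X + Y by discrete convolution
for x, p in px.items():
    for y, q in px.items():
        pz[x + y] = pz.get(x + y, 0.0) + p * q

mode_x = max(px, key=px.get)   # 0
mode_z = max(pz, key=pz.get)   # 1, since pz = {0: 0.36, 1: 0.48, 2: 0.16}
```

So Mode(Z) = 1 while Mode(X) + Mode(Y) = 0.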

41 The same failure occurs in the continuous case. Let X ~ Exp(1) and let Y be an independent copy of X. Since f(0) = 1 is the maximum value of f(x) = e^{-x}, we have Mode(X) = Mode(Y) = 0. (1) Now Z = X + Y has probability density function f_Z(z) = z e^{-z}, which is maximized at z = 1, so Mode(Z) = 1. (2) From (1) and (2) we conclude that the mode is not a linear operator.

42 We defined the simple random walk in Chapter 1, and we know that this Markov chain is recurrent iff p = q = 1/2. This walk is called the simple symmetric random walk (SSRW).

43 Consider a state i and a state j. Since the walk moves one unit at a time with positive probability in each direction, i → i+1 → … → j, so i → j (transitive property). By the same way, j → i, so i ↔ j. Hence the simple symmetric random walk (SSRW) on Z is irreducible.

44 A sequence of random variables (X_n) is a martingale if E|X_n| < ∞ and E[X_{n+1} | X_1, …, X_n] = X_n; a sub-martingale if E[X_{n+1} | X_1, …, X_n] ≥ X_n; and a super-martingale if E[X_{n+1} | X_1, …, X_n] ≤ X_n.

45 Consider independent random variables X_1, X_2, … such that E[X_n] = 0 for every n. We claim that S_n = X_1 + … + X_n is a martingale, for E[S_{n+1} | S_1, …, S_n] = S_n + E[X_{n+1}] = S_n.

46 Consider independent random variables X_1, X_2, … with E[X_n] = 1 for every n. Define a sequence M_n = X_1 X_2 ⋯ X_n. We show that (M_n) is a martingale: E[M_{n+1} | M_1, …, M_n] = M_n E[X_{n+1}] = M_n.

47 Consider a family tree where Z_0 = 1; let Z_n be the number of individuals at generation n, and let X_{n,k} be the number of children of the k-th individual of the n-th generation. Then Z_{n+1} = Σ_{k=1}^{Z_n} X_{n,k}, so (X_{n,k}) is a doubly indexed family of independent random variables; for fixed n they are independent identically distributed random variables. We assume that E[X_{n,k}] = μ < ∞.

48 Consider W_n = Z_n / μ^n; we show that (W_n) is a martingale. We use the fact that if X_1, X_2, … are independent identically distributed random variables and N is an integer-valued random variable independent of (X_n), then E[X_1 + … + X_N] = E[N]·E[X_1].

49 Since the X_{n,k} are independent random variables, for fixed n identically distributed, and independent of Z_n, then: E[Z_{n+1} | Z_n] = E[Σ_{k=1}^{Z_n} X_{n,k} | Z_n] = Z_n E[X_{n,1}] = μ Z_n. Hence E[W_{n+1} | W_1, …, W_n] = E[Z_{n+1} | Z_n] / μ^{n+1} = μ Z_n / μ^{n+1} = Z_n / μ^n = W_n, so (W_n) is a martingale.

50 (Not every martingale has an almost sure limit.) In a symmetric simple random walk on Z we have E[S_{n+1} | S_1, …, S_n] = S_n + E[X_{n+1}] = S_n, so (S_n) is a martingale. But lim_n S_n does not exist, because this symmetric simple random walk (SSRW) is recurrent; this means that it keeps oscillating between all the integers.

51 (Doob's maximal inequality.) If (X_n) is a sub-martingale such that X_n ≥ 0, then for every λ > 0, P(max_{k ≤ n} X_k ≥ λ) ≤ E[X_n]/λ.

52 We define A_k = {X_1 < λ, …, X_{k-1} < λ, X_k ≥ λ}, which means that A_k is the event that the process is greater than or equal to λ for the first time at time k, and A = ∪_{k≤n} A_k, the event that the process is greater than or equal to λ by time n. We want to show that λ P(A) ≤ E[X_n]. Since X_k ≥ λ on A_k, λ P(A_k) ≤ E[X_k 1_{A_k}] ≤ E[X_n 1_{A_k}], by the sub-martingale property. But since the A_k are disjoint, summing over k ≤ n gives λ P(A) ≤ E[X_n 1_A] ≤ E[X_n].

53 Then the inequality follows. In the following example we can see that if the sequence is not a sub-martingale, then the last theorem may fail.

54 Consider independent identically distributed random variables X_1, X_2, … such that P(X_n = 1) = P(X_n = -1) = 1/2. It is clear that the sequence (X_n) itself is not a sub-martingale, since obviously E[X_{n+1} | X_1, …, X_n] = 0, which is less than X_n on the event {X_n = 1}. Hence P(max_{k ≤ n} X_k ≥ 1) = 1 - (1/2)^n → 1, (1) whereas Doob's inequality with λ = 1 would give the bound E[X_n^+]/1 = 1/2, so (1) is not true for large n. This failure occurs because the sequence is not a sub-martingale.
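A sketch confirming the failure numerically (seed and trial count are my own choices): P(max_{k≤n} X_k ≥ 1) = 1 - 2^{-n} quickly exceeds the would-be Doob bound E[X_n^+]/1 = 1/2.

```python
import random

n = 20
exact = 1 - 0.5 ** n        # P(max over k <= n of X_k >= 1)
doob_bound = 0.5            # E[X_n^+] / lambda with lambda = 1

rng = random.Random(7)
trials = 20_000
hits = sum(
    any(rng.choice([-1, 1]) == 1 for _ in range(n))
    for _ in range(trials)
)
empirical = hits / trials   # should be close to `exact`, i.e. near 1
```

The empirical frequency sits near 1, far above 1/2, so no inequality of Doob's form can hold for this non-sub-martingale sequence.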

55 In this chapter we describe convergence of sequences of random variables: almost sure convergence, because of its relationship to pointwise convergence; convergence in distribution, because it is the easiest to establish; convergence in probability, which is significant for weak laws of large numbers and, in statistics, for certain forms of consistency; and mean convergence, which is used to establish convergence of moments.

56 Let X, X_1, X_2, … be random variables on (Ω, F, P). We have these modes of convergence: 1. Almost sure convergence. 2. Convergence in probability. 3. Convergence in distribution. 4. Convergence in mean. Almost sure convergence, also known as convergence with probability one, is the probabilistic version of pointwise convergence. The sequence (X_n) converges to X almost surely, denoted by X_n → X a.s., if P(lim_{n→∞} X_n = X) = 1.

57 The sequence (X_n) converges to X in probability, denoted by X_n →p X, if for every ε > 0, lim_n P(|X_n - X| > ε) = 0. The sequence (X_n) converges to X in mean, denoted by X_n →1 X, if lim_n E|X_n - X| = 0.

58 The sequence (X_n) converges to X in distribution, denoted by X_n →d X, if F_n(t) → F(t) at all continuity points t of F. This convergence is sometimes called weak convergence. We summarize these modes in the following table.
Table 5.1. Definition of convergence for random variables
Mode — Defining condition
Almost sure — P(lim_n X_n = X) = 1
In probability — P(|X_n - X| > ε) → 0 for every ε > 0
In mean — E|X_n - X| → 0
In distribution — F_n(t) → F(t) at continuity points t of F

59 Figure 5.2. Implications among forms of convergence: almost sure convergence ⇒ convergence in probability; convergence in mean ⇒ convergence in probability; convergence in probability ⇒ convergence in distribution.

60 The last figure depicts the implications that are always valid; none of the other implications holds in general. Convergence in probability follows from almost sure convergence: suppose that X_n → X almost surely, and for each n let A_n = ∪_{m ≥ n} {|X_m - X| > ε}. Then the A_n decrease and P(∩_n A_n) = 0, so P(|X_n - X| > ε) ≤ P(A_n) → 0.

61 If X_n →1 X, then X_n →p X: by Chebyshev's inequality, for each ε > 0, P(|X_n - X| > ε) ≤ E|X_n - X|/ε. Therefore E|X_n - X| → 0 implies that P(|X_n - X| > ε) → 0. If X_n →p X, then X_n →d X; the reader can see the proof in the book of Alan F. Karr, p. 141.

62 Convergence in probability does not always lead to almost sure convergence, and this counterexample shows it. Consider a sequence of independent random variables (X_n) such that P(X_n = 1) = 1/n and P(X_n = 0) = 1 - 1/n. We can see that for every ε ∈ (0, 1), P(|X_n| > ε) = 1/n → 0, hence X_n →p 0. Now we claim that (X_n) does not converge to zero almost surely: Σ_n P(X_n = 1) = Σ_n 1/n = ∞, and since the X_n are independent, from the Borel–Cantelli lemma P(X_n = 1 infinitely often) = 1. This means that (X_n) does not converge to zero almost surely.
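The "infinitely often" claim can also be seen quantitatively (a sketch of mine; the product is exact by telescoping): the probability that no X_m equals 1 for m between n and N is Π_{m=n}^{N} (1 - 1/m) = (n-1)/N, which goes to 0 as N grows.

```python
def prob_all_zero(n, N):
    """P(X_m = 0 for all m in [n, N]) = product of (1 - 1/m), telescoping to (n-1)/N."""
    p = 1.0
    for m in range(n, N + 1):
        p *= 1 - 1 / m
    return p

tail = [prob_all_zero(10, N) for N in (100, 1000, 10_000)]
```

So after any fixed time there is, with probability arbitrarily close to 1, still another index with X_m = 1; hence no almost sure convergence.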

63 If (X_n) is a sequence of independent random variables and the partial sums S_n = X_1 + … + X_n converge in probability, then they converge almost surely. Since centered partial sums are martingales, this theorem might suggest that convergence in probability is equivalent to almost sure convergence in the case of martingales. But this is false in general, as we show in the following example.

64 Consider two sequences of independent random variables: (B_n) with P(B_n = 1) = 1/n and P(B_n = 0) = 1 - 1/n, and signs (ε_n) with P(ε_n = 1) = P(ε_n = -1) = 1/2, all independent. Set X_0 = 0 and define a new sequence (X_n) as follows: X_n = n B_n (X_{n-1} + ε_n).

65 We claim that (X_n) is a martingale, which means that E[X_{n+1} | X_1, …, X_n] = X_n. Now, since the variables used at step n+1 are independent of X_1, …, X_n, with E[B_{n+1}] = 1/(n+1) and E[ε_{n+1}] = 0, then E[X_{n+1} | X_1, …, X_n] = (n+1)·(1/(n+1))·(X_n + 0) = X_n.

66 Notice that for every ε > 0, P(|X_n| > ε) ≤ P(B_n = 1) = 1/n → 0, so X_n →p 0. But Σ_n P(B_n = 1) = ∞, so by the Borel–Cantelli lemma B_n = 1 infinitely often, and whenever B_n = 1 with X_{n-1} = 0 we get |X_n| = n. This means that P(lim_n X_n = 0) ≠ 1, which is equivalent to saying (X_n) does not converge to zero almost surely. That is, convergence in probability does not imply almost sure convergence even for martingales.

67 This example shows that convergence in distribution does not imply convergence in probability. Let X be such that each of 0 and 1 has mass 1/2, and define X_n = 1 - X for every n. Hence the cumulative distribution function of X_n is the same as that of X, so X_n →d X. On the other hand, |X_n - X| = 1 on every outcome, so P(|X_n - X| > ε) = 1 does not converge to 0 for ε < 1. Hence X_n does not converge to X in probability.
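A sketch of this example in exact arithmetic: the law of 1 - X equals the law of X, yet the two variables are never close.

```python
from fractions import Fraction as F

pmf_x = {0: F(1, 2), 1: F(1, 2)}
pmf_xn = {1 - v: p for v, p in pmf_x.items()}   # law of X_n = 1 - X

same_law = (pmf_xn == pmf_x)                    # True: X_n ->d X trivially
gaps = {abs((1 - v) - v) for v in pmf_x}        # |X_n - X| on each outcome
```

Equality of laws gives convergence in distribution for free, while the outcome-by-outcome gap is always 1.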

68 This example shows that convergence in mean does not imply almost sure convergence. Define (X_n) as in Example 13: independent, with P(X_n = 1) = 1/n and P(X_n = 0) = 1 - 1/n. We can see that E|X_n - 0| = E[X_n] = 1/n → 0, hence X_n →1 0. And in Example 13 we showed that (X_n) does not converge to 0 almost surely.

69 This example shows that the converse implication of the last example is not true in general: almost sure convergence does not imply convergence in mean. Define on the probability space (Ω, F, P) a sequence of random variables as follows: X_n = n·1_{(0,1/n)}, where Ω = (0, 1), F is the Borel σ-field on (0, 1), and P = Lebesgue measure. It is clear that X_n(ω) → 0 for every ω ∈ (0, 1), so X_n → 0 almost surely. On the other hand, E|X_n| = n·(1/n) = 1 for every n. Hence (X_n) does not converge to 0 in mean.
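A sketch of this example in exact arithmetic (helper names are mine): for X_n = n·1_{(0,1/n)} the expectation stays at 1 while the pointwise values vanish.

```python
from fractions import Fraction as F

def expectation(n):
    """E|X_n| = n * Lebesgue-length of (0, 1/n)."""
    return n * F(1, n)

def x_n(w, n):
    """X_n(w) = n on (0, 1/n), 0 elsewhere."""
    return n if 0 < w < F(1, n) else 0

means = [expectation(n) for n in (1, 10, 100, 10_000)]
# For any fixed w, X_n(w) = 0 once 1/n < w:
values_at_w = [x_n(F(1, 3), n) for n in (1, 2, 3, 4, 100)]
```

The shrinking interval carries all the mass of the expectation, which is exactly why almost sure convergence cannot control the mean here.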

70 (X_n) converges to X in the p-th mean if lim_n E|X_n - X|^p = 0. This example shows that convergence in probability does not imply convergence in the p-th mean. Define X_n = e^n·1_{(0,1/n)} on the same probability space ((0, 1), B, Lebesgue measure). For every ε > 0, P(|X_n| > ε) = 1/n → 0.

71 This means X_n →p 0. And E|X_n|^p = e^{np}/n → ∞. Hence, for every p > 0, (X_n) does not converge to 0 in the p-th mean.

72 References: 1. Adventures in Stochastic Processes, by Sidney I. Resnick. 2. Introduction to Probability Theory, by Paul G. Hoel, Sidney C. Port, and Charles J. Stone. 3. Markov Chains, by J. R. Norris. 4. Probability, by Alan F. Karr. 5. Random Walks and Electric Networks, by Peter G. Doyle and J. Laurie Snell. 6. The Integrals of Lebesgue, Denjoy, Perron, and Henstock, by Russell A. Gordon.

