
Renewal processes

Interarrival times: $\{T_1, T_2, \dots\}$ is an i.i.d. sequence with a common distribution function $F$ and density $f = \frac{d}{dx}F$. The partial sums $S_i = \sum_{j=1}^{i} T_j$ (with $S_0 = 0$) form a nondecreasing, positive sequence of renewal times (points). The distribution of $S_i$ is $F^{(i)}$, with $F^{(1)} = F$ and $F^{(i)} = f^{(i-1)} * F = F * f^{(i-1)}$, where $f^{(i)}$ denotes the $i$-fold convolution of $f$ with itself.

The counting process: $N(t) = \max\{i : S_i \le t\}$ counts the number of renewal points up to time $t$. $M(t) = E[N(t)]$ is the expected number of renewals up to $t$: $M(t) = \sum_n n\,P(N(t)=n)$, where $P(N(t)=n) = P(S_n \le t \text{ and } S_{n+1} > t) = P(S_{n+1} > t) - P(S_n > t \text{ and } S_{n+1} > t)$.
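These definitions are easy to check by simulation. Below is a minimal sketch (Python with NumPy; the function names are ours, and exponential interarrivals are just a convenient test case, since then $M(t) = \lambda t$ exactly):

```python
import numpy as np

rng = np.random.default_rng(0)

def renewal_times(draw, n):
    """S_1..S_n: cumulative sums of n i.i.d. interarrival times T_j."""
    return np.cumsum(draw(n))

def N_of_t(S, t):
    """N(t) = number of renewal times S_i <= t (S is sorted)."""
    return np.searchsorted(S, t, side="right")

# Exponential interarrivals with rate lam => Poisson process, M(t) = lam*t.
lam, t, runs = 2.0, 5.0, 10_000
draw = lambda n: rng.exponential(1 / lam, n)
M_hat = np.mean([N_of_t(renewal_times(draw, 100), t) for _ in range(runs)])
print(M_hat, lam * t)  # Monte Carlo estimate of M(t) vs. the exact value
```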

The counting process: $P(N(t)=n) = P(S_n \le t \text{ and } S_{n+1} > t) = P(S_{n+1} > t) - P(S_n > t \text{ and } S_{n+1} > t)$. Now $S_n > t \Rightarrow S_{n+1} > t$, so $P(S_n > t \text{ and } S_{n+1} > t) = P(S_n > t)$. Altogether,
$P(S_n \le t \text{ and } S_{n+1} > t) = P(S_{n+1} > t) - P(S_n > t) = P(S_n \le t) - P(S_{n+1} \le t) = F^{(n)}(t) - F^{(n+1)}(t)$.

The counting process:
$M(t) = \sum_n n\,P(N(t)=n) = \sum_n n\,\big(F^{(n)}(t) - F^{(n+1)}(t)\big) = F^{(1)}(t) + \sum_{n=2}^{\infty}\big[n F^{(n)}(t) - (n-1) F^{(n)}(t)\big] = F^{(1)}(t) + \sum_{n=2}^{\infty} F^{(n)}(t)$
$= F(t) + \sum_{n=2}^{\infty} (f^{(n-1)} * F)(t) = F(t) + \Big(f * \sum_{n=1}^{\infty} F^{(n)}\Big)(t) = F(t) + (f * M)(t)$.

The renewal density: $m(t) = \frac{d}{dt}M(t)$ is called the renewal density. $M(t+h) - M(t)$ is the expected number of renewals in $[t, t+h]$. When $h$ is small, $P(N(t+h) - N(t) > 1) = O(h^2)$ while $P(N(t+h) - N(t) = 1) = O(h)$. Thus $M(t+h) - M(t) \approx P(N(t+h) - N(t) = 1) \approx h \cdot m(t)$: $h \cdot m(t)$ approximates the probability of a renewal within $[t, t+h]$.

The renewal density: with $m(t) = \frac{d}{dt}M(t)$ and $M(t) = F(t) + (f * M)(t)$, differentiating (and using $M(0) = 0$) gives
$m(t) = f(t) + \frac{d}{dt}\int_0^t M(t-u) f(u)\,du = f(t) + \int_0^t m(t-u) f(u)\,du = f(t) + (f * m)(t)$.
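This Volterra-type equation can be solved numerically by stepping forward in time and approximating the convolution with a Riemann sum; a sketch (Python/NumPy; the discretization is our own, tested on the exponential density where $m(t) = \lambda$ is known, see the Poisson example below):

```python
import numpy as np

def renewal_density(f, T, h):
    """Solve m(t) = f(t) + (f*m)(t) on [0, T] by forward time stepping,
    approximating (f*m)(t_k) with a left-endpoint Riemann sum of step h."""
    n = int(T / h)
    t = np.arange(n) * h
    fv = f(t)
    m = np.zeros(n)
    for k in range(n):
        # (f*m)(t_k) ~ h * sum_{j<k} f(t_k - t_j) m(t_j)
        conv = h * np.dot(fv[k:0:-1], m[:k]) if k > 0 else 0.0
        m[k] = fv[k] + conv
    return t, m

lam = 2.0
t, m = renewal_density(lambda u: lam * np.exp(-lam * u), T=5.0, h=0.001)
print(m[-1])  # approaches lam = 2.0 (Poisson case: m(t) = lam for all t)
```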

Recurrence times: backward recurrence time (age) $A(t) = t - S_{N(t)}$; forward recurrence time (excess) $Y(t) = S_{N(t)+1} - t$. Their distribution functions are $F_{A,t}(a) = P(A(t) \le a)$ and $F_{Y,t}(y) = P(Y(t) \le y)$.
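Both recurrence times can be read directly off a simulated sequence of renewal times; a small sketch (Python/NumPy, helper name ours):

```python
import numpy as np

rng = np.random.default_rng(1)

def age_and_excess(S, t):
    """A(t) = t - S_N(t) and Y(t) = S_{N(t)+1} - t, with S_0 = 0."""
    S = np.concatenate(([0.0], S))                 # prepend S_0 = 0
    n = np.searchsorted(S, t, side="right") - 1    # index N(t)
    return t - S[n], S[n + 1] - t

S = np.cumsum(rng.exponential(0.5, size=1_000))    # renewal times
print(age_and_excess(S, t=100.0))                  # (age, excess) at t = 100
```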

Distribution of age: $F_{A,t}(a) = P(A(t) \le a) = P(t - S_{N(t)} \le a)$. We condition on the first renewal, i.e.
$P(A(t) \le a) = \int_0^\infty P(A(t) \le a \mid S_1 = s) f(s)\,ds = \int_0^t P(A(t) \le a \mid S_1 = s) f(s)\,ds + \int_t^\infty P(A(t) \le a \mid S_1 = s) f(s)\,ds$.
On $\{S_1 = s \le t\}$ the process restarts at $s$, so the first term equals $\int_0^t P(A(t-s) \le a) f(s)\,ds = \int_0^t F_{A,t-s}(a) f(s)\,ds$. On $\{S_1 = s > t\}$ no renewal has occurred by $t$, so $A(t) = t$ and the second term equals $\int_t^\infty I(t \le a) f(s)\,ds = I(t \le a)\,R(t)$, where $R(t) = 1 - F(t)$. Altogether,
$F_{A,t}(a) = (F_{A,\cdot}(a) * f)(t) + I(t \le a)\,R(t)$.

Distribution of excess: $F_{Y,t}(y) = P(Y(t) \le y)$. We condition on the first renewal, i.e.
$P(Y(t) \le y) = \int_0^\infty P(Y(t) \le y \mid S_1 = s) f(s)\,ds = \int_0^t P(Y(t-s) \le y) f(s)\,ds + \int_t^\infty P(Y(t) \le y \mid S_1 = s) f(s)\,ds$.
For $s > t$ the first renewal after $t$ is $S_1$ itself, so $Y(t) = s - t$ and the second term equals $\int_t^\infty I(s - t \le y) f(s)\,ds = \int_t^\infty I(s \le t + y) f(s)\,ds = F(t+y) - F(t)$. Altogether,
$F_{Y,t}(y) = (F_{Y,\cdot}(y) * f)(t) + F(t+y) - F(t)$.

General solutions: all the equations above have the form $Z = Q + Z * f$. Writing $Z(s)$, $Q(s)$, $f(s)$ for the Laplace transforms, $Z(s) = Q(s) + Z(s) f(s)$, so $Z(s)(1 - f(s)) = Q(s)$ and $Z(s) = Q(s)/(1 - f(s))$.

Alternative solution: from $m(t) = f(t) + (f * m)(t)$, the Laplace transform gives $m(s) = f(s) + f(s) m(s)$, i.e. $m(s)(1 - f(s)) = f(s)$, so $1 - f(s) = f(s)/m(s)$. Substituting into $Z(s)(1 - f(s)) = Q(s)$ yields $Z(s) f(s) = Q(s) m(s)$, hence $Z(s) = Q(s) + Z(s) f(s) = Q(s) + Q(s) m(s)$, and inverting, $Z(t) = Q(t) + (Q * m)(t)$.

Example (Poisson): for the Poisson process, $F(t) = 1 - e^{-\lambda t}$, $f(t) = \lambda e^{-\lambda t}$, $R(t) = e^{-\lambda t}$. From $m = f + m * f$ we get $m(s) = f(s)/(1 - f(s))$, with
$f(s) = \lambda \int_0^\infty e^{-st} e^{-\lambda t}\,dt = \frac{\lambda}{s + \lambda}$, so $m(s) = \frac{\lambda/(s+\lambda)}{1 - \lambda/(s+\lambda)} = \frac{\lambda}{s + \lambda - \lambda} = \frac{\lambda}{s}$, i.e. $m(t) = \lambda$ !!!
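The transform algebra can be verified symbolically; a sketch using SymPy (assuming it is installed):

```python
from sympy import (symbols, exp, laplace_transform,
                   inverse_laplace_transform, simplify)

t, s = symbols("t s", positive=True)
lam = symbols("lambda", positive=True)

f_hat = laplace_transform(lam * exp(-lam * t), t, s, noconds=True)
m_hat = simplify(f_hat / (1 - f_hat))          # m(s) = f(s)/(1 - f(s))
print(f_hat)                                   # lambda/(lambda + s)
print(m_hat)                                   # lambda/s
print(inverse_laplace_transform(m_hat, s, t))  # lambda*Heaviside(t), i.e. m(t) = lambda
```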

Limiting renewal density in general: $m(t) = f(t) + (f * m)(t)$ gives $m(s) = f(s) + f(s) m(s)$, i.e. $m(s) = f(s)/(1 - f(s))$. Then
$\lim_{t \to \infty} m(t) = \lim_{s \to 0} s\,m(s) = \lim_{s \to 0} \frac{s f(s)}{1 - f(s)} = \text{(l'Hôpital)} \; \frac{\lim_{s \to 0} \frac{d}{ds}(s f(s))}{\lim_{s \to 0} \frac{d}{ds}(1 - f(s))} = \frac{f(0)}{\left(-\frac{d}{ds} f(s)\right)\big|_{s=0}} = \frac{1}{E(T_i)}$ !!!
(since the transform of a density satisfies $f(0) = 1$ and $-f'(0) = E(T_i)$).

Example (Poisson): $m(t) = \lambda$. Recall $F_{A,t}(a) = (F_{A,\cdot}(a) * f)(t) + I(t \le a) R(t)$ and $F_{Y,t}(y) = (F_{Y,\cdot}(y) * f)(t) + F(t+y) - F(t)$, and that $Z = Q + Z * f$ is solved by $Z(s) = Q(s)/(1 - f(s))$, or $Z(t) = Q(t) + (Q * m)(t)$. Then
$F_{A,t}(a) = I(t \le a) R(t) + \lambda \int_0^t I(s \le a) R(s)\,ds = I(t \le a) R(t) + \lambda \int_0^{\min(t,a)} R(s)\,ds = I(t \le a) e^{-\lambda t} + \big(1 - e^{-\lambda \min(t,a)}\big)$,
$F_{Y,t}(y) = F(t+y) - F(t) + \lambda \int_0^t [F(s+y) - F(s)]\,ds$ (remember the factor $\lambda$) $= e^{-\lambda t} - e^{-\lambda(t+y)} + (1 - e^{-\lambda t})(1 - e^{-\lambda y}) = 1 - e^{-\lambda y} = F(y)$ !!!
The excess of a Poisson process is again exponential: the memoryless property.

Alternating renewal process: used to model random on/off processes, e.g. network traffic or power consumption. Each renewal period splits into an ON part followed by an OFF part: $T_n = Z_n + Y_n$, where $Z_n$ is the ON duration and $Y_n$ the OFF duration, and period $n$ runs from $S_{n-1}$ to $S_n$.

Alternating renewal process: let $I(t) = I\big(S_{N(t)} < t \le S_{N(t)} + Z_{N(t)+1}\big)$, so $I(t)$ indicates whether $t$ belongs to an ON period, and write $O(t) = P(\text{ON at } t) = P(I(t) = 1)$. We condition on the first renewal:
$O(t) = \int_0^\infty P(I(t)=1 \mid S_1 = s) f(s)\,ds = \int_0^t P(I(t)=1 \mid S_1 = s) f(s)\,ds + \int_t^\infty P(I(t)=1 \mid S_1 = s) f(s)\,ds$
$= \int_0^t P(I(t-s)=1) f(s)\,ds + \int_t^\infty P(t \le Z_1 \mid S_1 = s) f(s)\,ds = (O * f)(t) + \int_t^\infty P(Z_1 \ge t \mid S_1 = s) f(s)\,ds$.

Alternating renewal process: since $Z_1 \le T_1 = S_1$, we have $P(Z_1 \ge t \mid S_1 = s) = 0$ for $s < t$, so the integral may be extended to all $s$:
$O(t) = (O * f)(t) + \int_0^\infty P(Z_1 \ge t \mid S_1 = s) f(s)\,ds = (O * f)(t) + P(Z_1 \ge t) = (O * f)(t) + 1 - F_Z(t)$.
In transforms: $O(s) = (1 - F_Z)(s) + O(s) f(s)$, where $(1 - F_Z)(s)$ denotes the transform of $1 - F_Z(t)$.

Example (2-state Markov): two exponential distributions, $F_Z(t) = 1 - e^{-\lambda t}$ (ON) and $F_Y(t) = 1 - e^{-\mu t}$ (OFF), so $f(t) = \lambda\mu \int_0^t e^{-\lambda(t-s)} e^{-\mu s}\,ds$ and $E(T) = E(Y) + E(Z) = 1/\mu + 1/\lambda$, giving $\lim_{t\to\infty} m(t) = 1/E(T) = 1/(1/\mu + 1/\lambda)$. From $O(s) = (1-F_Z)(s) + O(s) f(s)$ we get $O(s) = (1-F_Z)(s)/(1 - f(s))$, or $O(s) = (1-F_Z)(s) + (1-F_Z)(s)\,m(s)$. Then
$\lim_{t\to\infty} O(t) = \lim_{s\to 0} s\,O(s) = \lim_{s\to 0}\big[s(1-F_Z)(s) + s(1-F_Z)(s)\,m(s)\big] = \lim_{s\to 0}(1-F_Z)(s) \cdot \lim_{s\to 0} s\,m(s) = \int_0^\infty R_Z(t)\,dt \,\big/\, E(T)$.
Integrating by parts, $\int_0^\infty R_Z(t)\,dt = \int_0^\infty 1 \cdot R_Z(t)\,dt = [t\,R_Z(t)]_0^\infty + \int_0^\infty t\,f_Z(t)\,dt = E(Z)$, so
$\lim_{t\to\infty} O(t) = E(Z)/E(T)$ !!!
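By ergodicity the long-run fraction of time spent ON converges to the same limit, which gives a one-line simulation check (Python/NumPy, parameters arbitrary):

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, n = 2.0, 0.5, 200_000         # ON ~ Exp(lam), OFF ~ Exp(mu)
Z = rng.exponential(1 / lam, n)        # ON durations
Y = rng.exponential(1 / mu, n)         # OFF durations
print(Z.sum() / (Z + Y).sum())         # empirical long-run ON fraction
print((1 / lam) / (1 / lam + 1 / mu))  # E(Z)/E(T)
```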

Autocorrelation: $C_{II}(s) = E\big[(I_t - E(I))(I_{t+s} - E(I))\big] = E(I_t I_{t+s}) - E^2(I)$, where $E(I) = \lim_{t\to\infty} O(t) = E(Z)/E(T)$ and $E(I_t I_{t+s}) = P(I_t \text{ and } I_{t+s})$. Recall $T_n = Z_n + Y_n$, $S_{N(t)} = S_{N(t)-1} + T_{N(t)}$ and $A(t) = t - S_{N(t)}$.

Autocorrelation: $C_{II}(s) = E(I_t I_{t+s}) - E^2(I)$, with $E(I) = E(Z)/E(T)$ and $E(I_t I_{t+s}) = P(I_t \text{ and } I_{t+s})$. Take $t$ in the first renewal period and condition on the first renewal $S_1 = x$ (either $t+s$ still lies in the first ON period, or $t$ does and the process has renewed by $t+s$):
$P(I_t \text{ and } I_{t+s}) = \int \big[P(t+s \le Z_1 \mid S_1 = x) + P(t \le Z_1 \text{ and } t+s \ge S_1 \mid S_1 = x)\,O(t+s-x)\big] f(x)\,dx$.

Autocorrelation:
$P(I_t \text{ and } I_{t+s}) = \int \big[P(t+s \le Z_1 \mid S_1 = x) + I(t+s \ge x)\,P(t \le Z_1 \mid S_1 = x)\,O(t+s-x)\big] f(x)\,dx$
$= \iint I(t+s \le z)\,f_{Z,S}(z,x)\,dz\,dx + \iint I(t+s \ge x)\,I(t \le z)\,O(t+s-x)\,f_{Z,S}(z,x)\,dz\,dx$
$= \iint I(t+s \le z)\,f_Y(x-z)\,f_Z(z)\,dz\,dx + \iint I(t+s \ge x)\,I(t \le z)\,O(t+s-x)\,f_Y(x-z)\,f_Z(z)\,dz\,dx$,
since $S_1 = Z_1 + Y_1$ gives $f_{Z,S}(z,x) = f_{S|Z}(x \mid z)\,f_Z(z) = f_Y(x-z)\,f_Z(z)$.

Autocorrelation:
$P(I_t \text{ and } I_{t+s}) = \int_{t+s}^\infty \Big[\int_z^\infty f_Y(x-z)\,dx\Big] f_Z(z)\,dz + \int_t^{t+s} O(t+s-x) \int_t^x f_Y(x-z)\,f_Z(z)\,dz\,dx$
$= \int_{t+s}^\infty f_Z(z)\,dz + \int_t^{t+s} O(t+s-x) \int_t^x f_Y(x-z)\,f_Z(z)\,dz\,dx$
$= R_Z(t+s) + \int_t^{t+s} O(t+s-x) \int_t^x f_Y(x-z)\,f_Z(z)\,dz\,dx$
$\le R_Z(t+s) + \iint_{z \ge t} f_Y(x-z)\,f_Z(z)\,dz\,dx = R_Z(t+s) + P(Z_1 \ge t) = R_Z(t+s) + R_Z(t) \le 2 R_Z(2s)$
(for $t \ge 2s$, since $R_Z$ is decreasing). Hence $C_{II}(s) = E(I_t I_{t+s}) - E^2(I) \le E(I_t I_{t+s}) \le 2 R_Z(2s)$.

Example - Pareto distributions (power/heavy tails): let $f_Z(z) = K z^{-\alpha} I(z \ge z_0)$, $\alpha > 1$. Then $F_Z(z) = \frac{K}{\alpha-1}\big(z_0^{1-\alpha} - z^{1-\alpha}\big) I(z \ge z_0)$, and normalization gives $K = (\alpha-1)/z_0^{1-\alpha}$, so $F_Z(z) = \big(1 - (z/z_0)^{1-\alpha}\big) I(z \ge z_0)$ and
$R_Z(z) = 1 - F_Z(z) = (z/z_0)^{1-\alpha} I(z \ge z_0) + I(z < z_0) \le (z/z_0)^{1-\alpha}$.
Hence $C_{II}(s) \approx 2 R_Z(2s) = K' (2s)^{2(H-1)}$ ($H$ the Hurst parameter). $H > 1/2$: Long Range Dependence (LRD). Matching exponents, $1 - \alpha = 2(H-1)$, i.e. $H = (1-\alpha)/2 + 1 = 3/2 - \alpha/2$, or $\alpha = 3 - 2H$; so LRD $\iff \alpha < 2$.
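Pareto variates are convenient to generate by inverting $F_Z$; a sketch (Python/NumPy) that also checks the tail empirically:

```python
import numpy as np

rng = np.random.default_rng(3)

def pareto(alpha, z0, size):
    """Inverse-CDF sampling: F(z) = 1 - (z/z0)**(1-alpha) for z >= z0,
    so z = z0 * U**(-1/(alpha-1)) with U uniform on (0,1)."""
    return z0 * rng.uniform(size=size) ** (-1.0 / (alpha - 1.0))

alpha, z0 = 1.5, 1.0   # alpha < 2: infinite variance, LRD regime (H = 0.75)
Z = pareto(alpha, z0, 100_000)
for z in (10.0, 100.0, 1000.0):
    # empirical tail P(Z > z) vs. the theoretical (z/z0)**(1-alpha)
    print(z, (Z > z).mean(), (z / z0) ** (1 - alpha))
```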

Sample means: $E_T = \frac{1}{T}\int_0^T I(t)\,dt$, where $I(t)$ indicates the on-state. With $I$ centered at its mean,
$\mathrm{Var}(E_T) = E(E_T^2) = \frac{1}{T^2}\,E\Big[\Big(\int_0^T I(t)\,dt\Big)^2\Big] = \frac{1}{T^2}\,E\Big[\int_0^T\!\!\int_0^T I(t) I(s)\,ds\,dt\Big] = \frac{1}{T^2}\int_0^T\!\!\int_0^T E(I(t) I(s))\,ds\,dt$
$= \frac{1}{T^2}\int_0^T\!\!\int_0^T C_{II}(t-s)\,ds\,dt \approx \frac{1}{T^2}\int_0^T\!\!\int_0^T 2 R_Z(2|t-s|)\,ds\,dt = \frac{4}{T^2}\int_0^T\!\!\int_0^t R_Z(2(t-s))\,ds\,dt$.

Sample means for the 2-state Markov process: $\mathrm{Var}(E_T) = \frac{4}{T^2}\int_0^T\!\!\int_0^t R_Z(2(t-s))\,ds\,dt$ with $R_Z(t) = 1 - F_Z(t) = e^{-\lambda t}$, so
$\mathrm{Var}(E_T) \le \frac{4}{T^2}\int_0^T\!\!\int_0^t e^{-2\lambda(t-s)}\,ds\,dt = \frac{4}{T^2}\int_0^T e^{-2\lambda t}\int_0^t e^{2\lambda s}\,ds\,dt = \frac{2}{\lambda T^2}\int_0^T \big(1 - e^{-2\lambda t}\big)\,dt = \frac{2}{\lambda T^2}\Big(T - \frac{1 - e^{-2\lambda T}}{2\lambda}\Big) \approx \frac{2}{\lambda T}$.
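The $1/T$ decay is easy to see empirically; a sketch (Python/NumPy) that averages the ON indicator over $[0, T]$ for many independent paths and compares the variance across paths with the bound above:

```python
import numpy as np

rng = np.random.default_rng(4)
lam = mu = 1.0                            # ON ~ Exp(lam), OFF ~ Exp(mu)

def on_time_average(T):
    """Fraction of [0, T] spent ON for one simulated ON/OFF path."""
    t = on = 0.0
    while t < T:
        z = rng.exponential(1 / lam)
        on += min(z, T - t)               # ON period, clipped at T
        t += z + rng.exponential(1 / mu)  # advance past ON and OFF periods
    return on / T

for T in (10, 100, 1000):
    est = np.array([on_time_average(T) for _ in range(2_000)])
    print(T, est.var(), 2 / (lam * T))    # empirical variance vs. upper bound
```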

Sample means for white noise: let $w$ be white noise and $B(t) = \int_0^t w(s)\,ds$; $B(t)$ is Brownian motion (a Wiener process) with $\mathrm{Var}(B(t)) = \eta t$ (by definition). Then $E_T = \frac{1}{T}\int_0^T w(t)\,dt = \frac{1}{T} B(T)$ and $\mathrm{Var}(E_T) = \frac{1}{T^2}\mathrm{Var}(B(T)) = \frac{\eta T}{T^2} = \eta/T$. So the 2-state Markov process behaves like white noise.

Sample means for Brownian motion: for $s \ge t$, $B(s) = B(t) + \int_t^s w(x)\,dx = B(t) + b$, where $b$ and $B(t)$ are independent. Hence $C_{BB}(t,s) = E(B(t)B(s)) = E(B(t)(B(t)+b)) = E(B^2(t)) = \eta t = \eta \min\{t,s\}$ !!! With $E_T = \frac{1}{T}\int_0^T B(t)\,dt$,
$\mathrm{Var}(E_T) = \frac{1}{T^2}\int_0^T\!\!\int_0^T C_{BB}(t,s)\,ds\,dt = \frac{1}{T^2}\int_0^T\!\!\int_0^T \eta \min\{t,s\}\,ds\,dt = \frac{2}{T^2}\int_0^T\!\!\int_0^t \eta s\,ds\,dt = \frac{1}{T^2}\int_0^T \eta t^2\,dt = \frac{\eta T^3}{3 T^2} = \frac{1}{3}\eta T$.

Sample means for renewal with Pareto distributions: $\mathrm{Var}(E_T) = \frac{4}{T^2}\int_0^T\!\!\int_0^t R_Z(t-s)\,ds\,dt$ with $R_Z(z) = C z^{1-\alpha}$, so
$\mathrm{Var}(E_T) = \frac{4C}{T^2}\int_0^T\!\!\int_0^t (t-s)^{1-\alpha}\,ds\,dt = \frac{4C}{T^2}\int_0^T\!\!\int_0^t x^{1-\alpha}\,dx\,dt = \frac{4C}{(2-\alpha)T^2}\int_0^T t^{2-\alpha}\,dt = \frac{4C}{(2-\alpha)(3-\alpha)}\,T^{1-\alpha}$.
For $\alpha \approx 1$ this sits right between white noise ($s^0$) and Brownian motion ($s^{-1}$): fractional Brownian motion ($s^{-1/2}$), $B_H(t) = \int_0^t (t-s)^{H-1/2}\,w(s)\,ds$.
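That integral representation suggests a crude way to synthesize fractional Brownian motion by discretizing the kernel; a sketch (Python/NumPy; a Riemann-sum approximation, fine for illustration but not a substitute for exact fBm synthesis methods such as Davies-Harte):

```python
import numpy as np

rng = np.random.default_rng(5)

def fbm_riemann(H, n, T=1.0):
    """Crude B_H(t) = int_0^t (t-s)^(H-1/2) w(s) ds on an n-point grid,
    replacing w(s) ds by independent N(0, h) increments dW."""
    h = T / n
    s = np.arange(n) * h                        # left endpoints
    dW = rng.normal(0.0, np.sqrt(h), n)
    t = s + h
    B = np.array([np.sum((t[k] - s[:k + 1]) ** (H - 0.5) * dW[:k + 1])
                  for k in range(n)])
    return t, B

t, B = fbm_riemann(H=0.75, n=2_000)  # H = 0.75 corresponds to alpha = 1.5
print(B[-5:])
```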

Self-similarity: a process $X$ is self-similar with Hurst parameter $H$ iff $a^{-H} X(at)$ is equivalent to $X(t)$ (up to finite joint distributions). Writing $C_{XX}(t,s) = E(X(t)X(s))$:
$C_{XX}(0,s) = E(X(0)X(s)) = (1/s)^{-2H} E(X(0/s)X(s/s)) = s^{2H} C_{XX}(0,1)$;
$C_{XX}(t,s) = E(X(t)X(s)) = s^{2H} C_{XX}(t/s, 1) \to s^{2H} C_{XX}(0,1)$ for $t/s \to 0$;
$C_{XX}(t, t+s) = E(X(t)X(t+s)) = (t+s)^{2H} C_{XX}\big(t/(t+s), 1\big) \to (t+s)^{2H} C_{XX}(1,1)$ for $t \to \infty$.

Self-similarity: let $Y(n) = X(n) - X(n-1)$ be the increment process. Then
$C_{YY}(1, m) = E\big[(X(1)-X(0))(X(1+m)-X(m))\big] = E(X(1)X(1+m)) + E(X(0)X(m)) - E(X(1)X(m)) - E(X(0)X(1+m))$
$= m^{2H}\big[C_{XX}(1/m,\,1/m+1) + C_{XX}(0,1) - C_{XX}(1/m,\,1) - C_{XX}(0,\,1/m+1)\big]$.
Taylor-expanding $C_{XX}$ around $(0,1)$ with first derivatives $D_1, D_2$ and second derivatives $D_{ij}$:
$\approx m^{2H}\Big[C_{XX}(0,1) + \tfrac{D_1}{m} + \tfrac{D_2}{m} + \tfrac{D_{12} + D_{21} + D_{11} + D_{22}}{2m^2} - \big(C_{XX}(0,1) + \tfrac{D_2}{m} + \tfrac{D_{22}}{2m^2}\big) + C_{XX}(0,1) - \big(C_{XX}(0,1) + \tfrac{D_1}{m} + \tfrac{D_{11}}{2m^2}\big)\Big]$
$= m^{2H}\,\frac{D_{12} + D_{21}}{2m^2} = m^{2H-2}\,\frac{D_{12} + D_{21}}{2}$.

Frequency domain: $R_Z(z) = C z^{1-\alpha}$ gives $\log R_Z(z) = \log C + (1-\alpha)\log z$; likewise $C_{YY}(1,m) = K m^{2H-2}$ corresponds to a power spectrum $S_{YY}(\omega) = C \omega^{1-2H}$, i.e. $\log S_{YY}(\omega) = \log C + (1-2H)\log\omega$.
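The last relation gives a periodogram-based estimator of $H$: fit a line to $\log S_{YY}$ against $\log\omega$ at low frequencies; the slope is $1-2H$. A sketch (Python/NumPy, the frequency cutoff is our own choice):

```python
import numpy as np

def hurst_periodogram(Y, frac=0.1):
    """Estimate H from the low-frequency periodogram slope:
    log S(w) ~ log C + (1 - 2H) log w for small w."""
    n = len(Y)
    I = np.abs(np.fft.rfft(Y - Y.mean())) ** 2 / n   # periodogram
    w = np.arange(1, len(I)) * 2 * np.pi / n         # positive frequencies
    k = max(int(frac * len(w)), 10)                  # lowest ~10% of frequencies
    slope, _ = np.polyfit(np.log(w[:k]), np.log(I[1:k + 1]), 1)
    return (1 - slope) / 2

rng = np.random.default_rng(6)
print(hurst_periodogram(rng.normal(size=2**14)))     # white noise: H ~ 0.5
```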

Distribution of files sizes

Time averages (aggregated)

Time averages (cont’d)

Aggregated statistics

Estimating the Hurst parameter
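One standard estimator such plots illustrate is the aggregated-variance method: block means of size $m$ of the increment process have variance $\sim m^{2H-2}$, so a log-log fit gives $H$. A sketch (Python/NumPy):

```python
import numpy as np

def hurst_aggvar(Y, scales=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Aggregated-variance estimator: Var(block means of size m) ~ m^(2H-2),
    so a log-log fit of variance against m has slope 2H - 2."""
    v = [Y[: (len(Y) // m) * m].reshape(-1, m).mean(axis=1).var()
         for m in scales]
    slope, _ = np.polyfit(np.log(scales), np.log(v), 1)
    return 1 + slope / 2

rng = np.random.default_rng(7)
print(hurst_aggvar(rng.normal(size=2**15)))   # white noise: H ~ 0.5
```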

Miniproject: Compile statistics on the file sizes in your file system and check for power-tailed behaviour. Simulate an M/G/1 queue with power-tailed service times and compare with results for an M/M/1 queue with the same load $\rho$ = mean service time / mean interarrival time. Simulate an alternating renewal process with a power-tailed "ON" distribution; compute an autocorrelation estimate, compute estimates of the 1-step increments of sample means, and compute a power spectrum estimate.
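As a starting point for the queueing part, waiting times can be simulated with Lindley's recursion $W_{k+1} = \max(0, W_k + S_k - A_k)$; a sketch (Python/NumPy; the parameters and the mean-1 scaling of the Pareto service times are our choices). Note that for $\alpha \le 3$ the second moment of the service time is infinite and the mean wait diverges, so estimates will not stabilize; $\alpha = 3.5$ below keeps the comparison finite.

```python
import numpy as np

rng = np.random.default_rng(8)

def mean_wait(service, lam, n=500_000):
    """Mean waiting time via Lindley's recursion for an M/G/1 queue."""
    A = rng.exponential(1 / lam, n)   # interarrival times (Poisson arrivals)
    S = service(n)                    # service times
    W = np.zeros(n)
    for k in range(n - 1):
        W[k + 1] = max(0.0, W[k] + S[k] - A[k])
    return W.mean()

lam, alpha = 0.5, 3.5                 # E(S) = 1 below, so rho = 0.5
z0 = (alpha - 2) / (alpha - 1)        # Pareto scale giving mean 1
exp_service = lambda n: rng.exponential(1.0, n)
par_service = lambda n: z0 * rng.uniform(size=n) ** (-1 / (alpha - 1))
print(mean_wait(exp_service, lam))    # M/M/1: E(W) = 1.0 here
print(mean_wait(par_service, lam))    # M/G/1 with Pareto service times
```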

Summary, LRD: let $f_Z(z) = K z^{-\alpha} I(z \ge z_0)$, $\alpha > 1$. Then $R_Z(z) \approx (z/z_0)^{1-\alpha}$ and $C_{II}(s) \approx 2 R_Z(2s) \le (2s)^{2(H-1)}$ ($H$ the Hurst parameter; $I$ indicates an ON period). $H > 1/2$: Long Range Dependence (LRD); LRD $\iff \alpha < 2$. On a log-log plot, $\log R_Z(z) = \log C + (1-\alpha)\log z$.

Summary, M/G/1: for M/M/1, $Q = \rho/(1-\rho)$; for M/G/1 (Pollaczek-Khinchine), $Q = \rho + \frac{\rho^2 + \lambda^2\,\mathrm{Var}(S)}{2(1-\rho)}$. With power-tailed service times $f_S(s) = K s^{-\alpha} I(s \ge s_0)$, $\alpha > 1$:
$E(S^2) = K \int_{s_0}^\infty s^2 s^{-\alpha}\,ds = K \int_{s_0}^\infty s^{2-\alpha}\,ds = K\Big[\frac{s^{3-\alpha}}{3-\alpha}\Big]_{s_0}^\infty$,
which is infinite for $\alpha \le 3$, so the M/G/1 queue length blows up.

Summary, self-similarity: a process $X$ is self-similar with Hurst parameter $H$ iff $a^{-H} X(at)$ is equivalent to $X(t)$ (up to finite joint distributions). For the increments $Y(n) = X(n) - X(n-1)$: $C_{YY}(1,m) \approx m^{2H-2}(D_{12} + D_{21})/2$, i.e. $C_{YY}(1,m) = K m^{2H-2}$, with power spectrum $S_{YY}(\omega) = C\omega^{1-2H}$ and $\log S_{YY}(\omega) = \log C + (1-2H)\log\omega$.

Summary (sample means): $E_T = \frac{1}{T}\int_0^T I(t)\,dt$, with $I(t)$ indicating the on-state.
$\mathrm{Var}(E_T) = \frac{4}{T^2}\int_0^T\!\!\int_0^t R_Z(2(t-s))\,ds\,dt \approx 2/(\lambda T)$ (2-state Markov)
$\mathrm{Var}(E_T) = \eta/T$ (white noise)
$\mathrm{Var}(E_T) = \frac{1}{3}\eta T$ (Brownian motion)
$\mathrm{Var}(E_T) = \frac{4C}{(2-\alpha)(3-\alpha)}\,T^{1-\alpha}$ (power tail)