Chapter 5 Statistical Models in Simulation

1 Chapter 5 Statistical Models in Simulation
Banks, Carson, Nelson & Nicol, Discrete-Event System Simulation

2 Purpose & Overview
The world the model-builder sees is probabilistic rather than deterministic:
- Some statistical models might well describe the variations.
An appropriate model can be developed by sampling the phenomenon of interest:
- Select a known distribution through educated guesses
- Make estimates of the parameter(s)
- Test for goodness of fit
In this chapter:
- Review several important probability distributions
- Present some typical applications of these models

3 5.1 Review of Terminology and Concepts
In this section, we will review the following concepts:
- Discrete random variables
- Continuous random variables
- Cumulative distribution function
- Expectation

4 Discrete Random Variables [Probability Review]
X is a discrete random variable if the number of possible values of X is finite or countably infinite. The possible values of X may be listed as x1, x2, x3, ...; in the finite case the list terminates, and in the countably infinite case it continues indefinitely.
Example 5.1: Consider jobs arriving at a job shop. Let X be the number of jobs arriving each week. The possible values of X are given by the range space of X, denoted Rx:
Rx = possible values of X (range space of X) = {0, 1, 2, ...}

5 Let X be a discrete random variable. With each possible outcome xi in Rx, associate a number
p(xi) = P(X = xi),
the probability that the random variable X equals the value xi. The numbers p(xi), i = 1, 2, ..., must satisfy the following two conditions:
1. p(xi) >= 0, for all i
2. Σi p(xi) = 1
The collection of pairs [xi, p(xi)], i = 1, 2, ..., is called the probability distribution of X, and p(xi) is called the probability mass function (pmf) of X.

6 Example 5.2: Discrete Random Variables
Consider the experiment of tossing a single loaded die. Define X as the number of dots on the upturned face; then Rx = {1, 2, 3, 4, 5, 6}. Assume the probability that a given face lands up is proportional to the number of spots showing. The discrete probability distribution for this random experiment is given by the table below; the conditions p(xi) >= 0 and Σ p(xi) = 1 are satisfied.

xi       1        2        3        4        5        6
p(xi)    1/21     2/21     3/21     4/21     5/21     6/21
         0.0476   0.0952   0.1429   0.1905   0.2381   0.2857

7 Continuous Random Variables
X is a continuous random variable if its range space Rx is an interval or a collection of intervals. For a continuous random variable, the probability that X lies in the interval [a, b] is given by:
P(a <= X <= b) = ∫ from a to b of f(x) dx
f(x) is called the probability density function (pdf) of X and satisfies the following conditions:
1. f(x) >= 0 for all x in Rx
2. ∫ over Rx of f(x) dx = 1
3. f(x) = 0 if x is not in Rx

8 For any specified value x0, P(X = x0) = 0, because
P(X = x0) = ∫ from x0 to x0 of f(x) dx = 0
And because P(X = x0) = 0, the following equations hold:
P(a <= X <= b) = P(a < X <= b) = P(a <= X < b) = P(a < X < b)

9 Continuous Random Variables: Example
Problem (Example 5.3): The life of a device used to inspect cracks in aircraft wings is given by X, a continuous random variable assuming all values in the range x >= 0. X has an exponential distribution with mean 2 years. Calculate the probability that the life is between 2 and 3 years.
The pdf of X is:
f(x) = (1/2) e^(-x/2) for x >= 0, and 0 otherwise
The probability that the device's life is between 2 and 3 years is:
P(2 <= X <= 3) = (1/2) ∫ from 2 to 3 of e^(-x/2) dx = e^(-1) - e^(-3/2) = 0.145
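A quick numerical check of this result, sketched in Python (assuming scipy is available; scipy's expon takes the mean as its scale parameter):

    from scipy.stats import expon

    # Device life from Example 5.3: exponential with mean 2 years,
    # so scale = mean = 2 (scipy parameterizes by scale = 1/lambda).
    X = expon(scale=2)
    prob = X.cdf(3) - X.cdf(2)   # P(2 <= X <= 3) = F(3) - F(2)
    print(round(prob, 3))        # ~0.145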

10 Cumulative Distribution Function
The cumulative distribution function (cdf), denoted F(x), measures the probability that the random variable X assumes a value less than or equal to x, i.e., F(x) = P(X <= x).
If X is discrete, then F(x) = Σ p(xi) over all xi <= x
If X is continuous, then F(x) = ∫ from -∞ to x of f(t) dt
Some properties of the cdf are:
- F is nondecreasing: if a < b, then F(a) <= F(b)
- lim of F(x) as x -> ∞ equals 1
- lim of F(x) as x -> -∞ equals 0
All probability questions about X can be answered in terms of the cdf, e.g.:
P(a < X <= b) = F(b) - F(a) for all a < b

11 Cumulative Distribution Function: Example
Consider the device of Example 5.3 (used to inspect cracks in aircraft wings). Its cdf is F(x) = 1 - e^(-x/2) for x >= 0, so, for instance, P(2 <= X <= 3) = F(3) - F(2) = e^(-1) - e^(-3/2) = 0.145.

12 Expectation
The expectation of a random variable is an important concept. If X is a random variable, the expected value of X, denoted E(X), is given by:
- If X is discrete: E(X) = Σi xi p(xi)
- If X is continuous: E(X) = ∫ from -∞ to ∞ of x f(x) dx
The expected value E(X) of a random variable X is also referred to as the mean or the first moment of X. E(X^n) is called the nth moment of X.

13 Expectation
The variance of X is denoted by V(X), var(X), or σ2.
Definition: V(X) = E[(X - E[X])^2]
Equivalently, V(X) = E(X^2) - [E(X)]^2
Variance is a measure of the spread or variation of the possible values of X around the mean E(X). The standard deviation σ is the square root of the variance σ2.

14 Expectation: Example
The mean and variance of the die-tossing experiment (Example 5.2) are given by:
E(X) = 1(1/21) + 2(2/21) + ... + 6(6/21) = 91/21 = 4.33
E(X^2) = 1^2(1/21) + 2^2(2/21) + ... + 6^2(6/21) = 441/21 = 21
Thus V(X) = 21 - (91/21)^2 = 21 - 18.78 = 2.22
σ = sqrt(V(X)) = sqrt(2.22) = 1.49
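These numbers are easy to verify directly from the pmf of Example 5.2; a minimal sketch in plain Python:

    # Die of Example 5.2: p(x) proportional to the number of spots x.
    pmf = {x: x / 21 for x in range(1, 7)}

    mean = sum(x * p for x, p in pmf.items())        # E(X) = 91/21 ~ 4.33
    second = sum(x**2 * p for x, p in pmf.items())   # E(X^2) = 441/21 = 21
    var = second - mean**2                           # V(X) ~ 2.22
    print(mean, var, var**0.5)                       # 4.33, 2.22, 1.49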

15 Expectation: Example (continued)

16 Useful Statistical Models
In this section, statistical models appropriate to some application areas are presented. The areas include:
- Queueing systems
- Inventory and supply-chain systems
- Reliability and maintainability
- Limited data

17 Queueing Systems [Useful Models]
In a queueing system, inter-arrival and service-time patterns can be probabilistic (for more queueing examples, see Chapter 2). Sample statistical models for inter-arrival or service-time distributions:
- Exponential distribution: if service times are completely random
- Normal distribution: if service times are fairly constant but with some random variability, positive or negative
- Truncated normal distribution: similar to the normal distribution, but with the random variable restricted to be greater or less than a certain value
- Gamma and Weibull distributions: more general than the exponential

18 Inventory and Supply Chain [Useful Models]
In realistic inventory and supply-chain systems, there are at least three random variables:
- The number of units demanded per order or per time period
- The time between demands
- The lead time
Sample statistical model for the lead-time distribution:
- Gamma
Sample statistical models for the demand distribution:
- Poisson
- Negative binomial distribution
- Geometric (a special case of the negative binomial, with its mode at unity, given that at least one demand has occurred)
These provide a range of distribution shapes that satisfy a variety of demand patterns.

19 Reliability and Maintainability [Useful Models]
Time to failure (TTF) can be modeled as:
- Exponential: when failures are random
- Gamma: for standby redundancy, where each component has an exponential TTF
- Weibull: when there are a number of components in a system and failure is due to the most serious of a large number of defects
- Normal: when failures are due to wear
- Lognormal: for describing the time to failure of some types of components

20 Limited Data
In many cases, simulations begin even before data collection has been completed. Three distributions are useful then:
- Uniform: when the inter-arrival or service time is known to be random, but no information is immediately available about the distribution. It is called "the distribution of maximum ignorance", because it is not necessary to specify more than the continuous interval in which the random variable may occur.
- Triangular: when assumptions are made about the minimum, maximum, and modal values of the random variable.
- Beta: provides a variety of distributional forms on the unit interval which, with some modification, can be shifted to any desired interval.
Other distributions: Bernoulli, binomial, and hyperexponential.

21 Discrete Distributions
Discrete random variables are used to describe random phenomena in which only integer values can occur. In this section, we will learn about:
- Bernoulli trials and the Bernoulli distribution
- Binomial distribution
- Geometric and negative binomial distributions
- Poisson distribution

22 Bernoulli Trial
In the theory of probability and statistics, a Bernoulli trial is an experiment whose outcome is random and can be either of two possible outcomes, "success" and "failure". For example, these events can be phrased as "yes or no" questions: Did the coin land heads? Was the newborn child a girl? Were a person's eyes green? Did a mosquito die after the area was sprayed with insecticide?

23 Bernoulli Process
A Bernoulli process consists of repeatedly performing independent but identical Bernoulli trials. Formally, it is a finite or infinite sequence of independent random variables X1, X2, X3, ..., such that:
- For each i, the value of Xi is either 0 or 1;
- For all values of i, the probability that Xi = 1 is the same number p.
Repeated coin flipping is an example. Every variable Xi in the sequence is associated with a Bernoulli trial; in other words, a Bernoulli process is a sequence of independent, identically distributed Bernoulli trials. Independence of the trials implies that the process is memoryless: given that the probability p is known, past outcomes provide no information about future outcomes.
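A sketch of one realization of such a process in Python (numpy assumed; p = 0.5 and 20 trials are arbitrary illustrative choices):

    import numpy as np

    # 20 independent Bernoulli trials, all with the same success probability p.
    rng = np.random.default_rng(seed=42)
    p = 0.5
    trials = (rng.random(20) < p).astype(int)
    print(trials)   # e.g. [0 1 1 0 ...] -- one realization of the process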

24 Bernoulli Trials and Bernoulli Distribution [Discrete Dist'n]
For a single trial Xj, the pmf is p(1) = p and p(0) = 1 - p = q; this is the Bernoulli distribution. Its mean is E(Xj) = p and its variance is V(Xj) = p(1 - p) = pq.

25 Binomial Distribution [Discrete Dist'n]
The random variable X that denotes the number of successes in n Bernoulli trials has a binomial distribution, given by:
p(x) = nCx p^x q^(n-x), x = 0, 1, 2, ..., n
An elementary example: roll a standard die ten times and count the number of sixes; this count has a binomial distribution with n = 10 and p = 1/6. As another example, flip a coin three times and count the number of heads; that count has a binomial distribution with n = 3 and p = 1/2.

26 Binomial Distribution [Discrete Dist'n]
The equation for p(x) is motivated by computing the probability of a particular outcome with all the successes, each denoted by S, occurring in the first x trials, followed by n - x failures, each denoted by F:
P(SS...S FF...F) = p^x q^(n-x), where q = 1 - p
There are nCx outcomes having the required number of S's and F's, where
nCx = n! / (x! (n-x)!)

27 Binomial Distribution [Discrete Dist'n]
To calculate the mean and variance, consider X as the sum of n independent Bernoulli variables, each with mean p and variance p(1 - p) = pq:
X = X1 + X2 + X3 + ... + Xn
The mean: E(X) = p + p + ... + p = np
The variance: V(X) = pq + pq + ... + pq = npq
Exercise problems (page 172): 3, 4, 5

28 Binomial Distribution [Discrete Dist'n]
Example 5.10: n = 50 Bernoulli trials, each with p = 0.02 and q = 1 - p = 0.98, so
p(X = x) = 50Cx (0.02)^x (0.98)^(50-x)
P(process will be stopped) = P(X > 2); P(process will not be stopped) = P(X <= 2). Therefore,
P(X > 2) = 1 - P(X <= 2)
P(X <= 2) = p(0) + p(1) + p(2)
          = 50C0 (0.02)^0 (0.98)^50 + 50C1 (0.02)^1 (0.98)^49 + 50C2 (0.02)^2 (0.98)^48
          = 0.364 + 0.372 + 0.186 = 0.92
(where nCx = n! / (x! (n-x)!))
Thus the probability that the sampling process will be stopped is
P(X > 2) = 1 - 0.92 = 0.08
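The same computation, sketched with scipy as a sanity check on the arithmetic above:

    from scipy.stats import binom

    X = binom(n=50, p=0.02)        # number of defectives in 50 trials
    p_continue = X.cdf(2)          # P(X <= 2) ~ 0.92
    p_stop = 1 - p_continue        # P(X > 2)  ~ 0.08
    print(round(p_continue, 2), round(p_stop, 2))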

29 Geometric Distribution [Discrete Dist'n]
The geometric distribution is related to a sequence of Bernoulli trials. The random variable X is defined as the number of trials needed to achieve the first success. The distribution of X is given by:
p(x) = q^(x-1) p, x = 1, 2, ...
The event {X = x} occurs when there are x - 1 failures followed by a success. Each of the failures has probability q = 1 - p and the success has probability p, thus
P(FF...FS) = q^(x-1) p
The mean is E(X) = 1/p, and the variance is V(X) = q/p^2.

30 Negative Binomial Distribution
The negative binomial distribution is the distribution of the number of trials until the kth success, for k = 1, 2, .... If Y is negative-binomially distributed with parameters p and k, then:
p(y) = (y-1)C(k-1) q^(y-k) p^k, y = k, k+1, k+2, ...
The mean is E(Y) = k/p and the variance is V(Y) = kq/p^2.

31 Geometric and Negative Binomial Distribution
Example 5.11 (geometric part): the first printer accepted is the 3rd one inspected, i.e., the sequence is FFS.
P(accepting a printer) = 0.6 = p; P(rejecting a printer) = 0.4 = 1 - p = q
P(the 1st acceptable printer is the 3rd one inspected) = P(X = 3) = q^(3-1) p = (0.4)^2 (0.6) = (0.16)(0.6) = 0.096

32 Geometric and Negative Binomial Distribution
Example 5.11 (negative binomial part): the 3rd printer inspected is the second acceptable one, i.e., the sequence is FSS or SFS. Here the kth success is the 2nd, so k = 2, and Y = 3 (the number of printers inspected):
p(3) = (3-1)C(2-1) (0.4)^(3-2) (0.6)^2 = 2C1 (0.4)(0.36) = 0.288
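Both parts of Example 5.11, checked with scipy (note that scipy's nbinom counts failures before the kth success, so a total of y trials corresponds to y - k failures):

    from scipy.stats import geom, nbinom

    p = 0.6
    # Geometric: first acceptable printer is the 3rd inspected.
    print(geom(p).pmf(3))            # (0.4)^2 (0.6) = 0.096

    # Negative binomial: 2nd acceptable printer is the 3rd inspected.
    k, y = 2, 3
    print(nbinom(k, p).pmf(y - k))   # 2C1 (0.4)(0.6)^2 = 0.288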

33 Poisson Distribution
The Poisson distribution describes many random processes quite well and is mathematically quite simple. It is used when the number of possible occurrences is countably infinite. For example, the Poisson distribution describes:
- the number of cars arriving at a service station
- the number of calls to a call center
- the number of accidents on a road
- the number of child births per day, etc.
It was introduced by S. D. Poisson in 1837.

34 Poisson Distribution
The probability mass function is given by:
p(x) = e^(-α) α^x / x!, x = 0, 1, 2, ..., where α > 0
The cdf is given by:
F(x) = Σ from i = 0 to x of e^(-α) α^i / i!
The mean and variance are both equal to α, i.e., E(X) = V(X) = α.
The Poisson distribution is a limiting case of the binomial distribution, obtained as n -> ∞ and the probability of success p -> 0, with np = α held constant.

35 Poisson Distribution
Example 5.12: α = 2 beeps per hour.
Probability that there are 3 beeps in the next hour (x = 3):
p(3) = e^(-2) 2^3 / 3! = (0.135)(8)/6 = 0.18
Probability that there are 2 or more beeps in the next hour:
p(2 or more) = 1 - [p(0) + p(1)] = 1 - (0.135 + 0.271) = 0.594
(where p(0) = e^(-2) 2^0 / 0! = 0.135 and p(1) = e^(-2) 2^1 / 1! = 0.271)
Exercise problem (page 173): 9
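A quick check of both probabilities with scipy:

    from scipy.stats import poisson

    X = poisson(mu=2)                # alpha = 2 beeps per hour
    print(round(X.pmf(3), 3))        # e^-2 2^3 / 3! ~ 0.180
    print(round(1 - X.cdf(1), 3))    # P(2 or more) = 1 - p(0) - p(1) ~ 0.594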

36 Continuous Distributions
Continuous random variables can be used to describe random phenomena in which the variable can take on any value in some interval. In this section, the distributions studied are: uniform, exponential, gamma, Erlang, normal, Weibull, lognormal, triangular, and beta.

37 Uniform Distribution
A random variable X is uniformly distributed on the interval (a, b), written U(a, b), if its pdf and cdf are:
f(x) = 1/(b-a) for a <= x <= b, and 0 otherwise
F(x) = 0 for x < a; (x-a)/(b-a) for a <= x < b; 1 for x >= b
P(x1 < X < x2) = F(x2) - F(x1) = (x2-x1)/(b-a) is proportional to the length of the interval, for all x1 and x2 satisfying a <= x1 < x2 <= b.
The mean is E(X) = (a+b)/2.
The variance is V(X) = (b-a)^2/12.
U(0,1) provides the means to generate random numbers, from which random variates can be generated.

38 Uniform Distribution
The pdf and cdf are shown in the figures.

39 Uniform Distribution [Continuous Dist'n]
Example 5.16: A bus arrives at the stop every 20 minutes, at 6:40, 7:00, 7:20, 7:40, .... A passenger arrives at the stop at a random time uniformly distributed between 7:00 and 7:30 am; let X ~ U(0, 30) be the arrival time in minutes after 7:00. The passenger waits more than 5 minutes if he arrives between 7:00 and 7:15 (waiting for the 7:20 bus) or between 7:20 and 7:30 (waiting for the 7:40 bus):
P(wait more than 5 min) = P(0 < X < 15) + P(20 < X < 30)
= [F(15) - F(0)] + [F(30) - F(20)]
Substituting F(x) = (x - a)/(b - a) with a = 0 and b = 30 (so F(0) = 0):
= 15/30 + 10/30 = 5/6
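The same answer via scipy's uniform distribution (which takes the left endpoint as loc and the interval width as scale):

    from scipy.stats import uniform

    X = uniform(loc=0, scale=30)    # minutes after 7:00, U(0, 30)
    prob = (X.cdf(15) - X.cdf(0)) + (X.cdf(30) - X.cdf(20))
    print(prob)                     # 25/30 = 5/6 ~ 0.833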

40 Exponential Distribution [Continuous Dist'n]
A random variable X is exponentially distributed with parameter λ > 0 if its pdf and cdf are:
f(x) = λ e^(-λx) for x >= 0, and 0 otherwise
F(x) = 1 - e^(-λx) for x >= 0, and 0 otherwise
It is used to model inter-arrival times when arrivals are completely random, and to model service times that are highly variable. It is also used to model the lifetime of a component that fails instantaneously (catastrophically), such as a light bulb; then λ is the failure rate.
E(X) = 1/λ and V(X) = 1/λ^2

41 Exponential Distribution [Continuous Dist'n]
Example: The failure rate of an industrial lamp is λ = 1/3 per thousand hours, so the mean life is 1/λ = 3 thousand hours.
The probability that the lamp will last longer than its mean life of 3000 hours is:
P(X > 3) = 1 - P(X <= 3) = 1 - F(3)    (the cdf is F(x) = 1 - e^(-λx))
= 1 - (1 - e^(-3/3)) = e^(-1) = 0.368
The probability that the lamp will last between 2000 and 3000 hours is:
P(2 <= X <= 3) = F(3) - F(2) = (1 - e^(-3/3)) - (1 - e^(-2/3)) = e^(-2/3) - e^(-1) = 0.513 - 0.368 = 0.145

42 An important property of the exponential distribution is that it is memoryless:
P(X > s+t | X > s) = P(X > t)
Let X represent the life of a battery or a bulb, and assume X is exponentially distributed. The equation above states that the probability that the component lives for at least s+t hours, given that it has survived s hours, is the same as the initial probability that it lives for at least t hours. In other words, the component does not "remember" that it has already been in use for a time s; it is as good as a new component. The proof:
P(X > s+t | X > s) = P(X > s+t)/P(X > s) = e^(-λ(s+t))/e^(-λs) = e^(-λt)

43 Example 5.18: The probability that the industrial lamp will last for another 1000 hours, given that it has been operating for 2500 hours, is (with s = 2.5 thousand hours and t = 1):
P(X > s+t | X > s) = P(X > 3.5 | X > 2.5) = e^(-λt) = e^(-(1/3)(1)) = e^(-1/3) = 0.717
A used component that follows an exponential distribution is as good as a new component: the probability that a new component will have a life greater than 1000 hours is also 0.717.
Exercise problem (page 173): 14
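A numerical sketch of the memoryless property for Example 5.18 (plain Python, no extra libraries):

    import math

    lam, s, t = 1/3, 2.5, 1.0     # rate per thousand hours, as in Example 5.18

    def survives(x):
        # P(X > x) for an exponential with rate lam
        return math.exp(-lam * x)

    conditional = survives(s + t) / survives(s)   # P(X > s+t | X > s)
    print(round(conditional, 3), round(survives(t), 3))   # both ~0.717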

44 Gamma Distribution
The gamma distribution is a two-parameter family of continuous probability distributions, with a shape parameter β (sometimes written b or k) and a parameter θ such that the mean is 1/θ. If β is an integer, the distribution represents the sum of β independent exponentially distributed random variables, each with mean 1/(βθ).
The gamma distribution is frequently a probability model for waiting times; for instance:
- In life testing, the waiting time until death is a random variable that is frequently modeled with a gamma distribution.
- Gamma distributions have been fitted to rainfall amounts from different storms, with differences between seeded and unseeded storms reflected in differences in the estimated β and θ parameters.

45 Gamma Distribution
Gamma densities for several values of β and θ are shown in the figure. The mean is E(X) = 1/θ and the variance is V(X) = 1/(βθ^2).

46 Erlang Distribution
The Erlang distribution is a continuous probability distribution with wide applicability, primarily due to its relation to the exponential and gamma distributions. It was developed by A. K. Erlang to examine the number of telephone calls that might be made at the same time to the operators of switching stations. This work on telephone traffic engineering has since been expanded to consider waiting times in queueing systems in general.
Consider a series of k stations that must be passed through in order to complete the servicing of a customer. An additional customer cannot enter the 1st station until the customer in process has negotiated all k stations.

47 Erlang Distribution
Each station has an exponential distribution of service time with parameter kθ. The mean (1/θ) and variance (1/(βθ^2)) of the gamma distribution are valid regardless of the value of β; when β equals an integer k, the gamma distribution is called the Erlang distribution of order k.
The expected value of a sum of random variables is the sum of their expected values:
E(X) = E(X1) + E(X2) + ... + E(Xk)
The expected value of each of the exponentially distributed Xj is 1/(kθ), so
E(X) = 1/(kθ) + 1/(kθ) + ... + 1/(kθ) = 1/θ
If the random variables are independent, the variance of their sum is the sum of the variances:
V(X) = 1/(kθ)^2 + 1/(kθ)^2 + ... + 1/(kθ)^2 = 1/(kθ^2)
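A simulation sketch of this view of the Erlang: sum k independent exponentials, each with mean 1/(kθ), and check that the sample mean and variance come out near 1/θ and 1/(kθ^2). Assumes numpy; k = 4 and θ = 0.5 are illustrative choices:

    import numpy as np

    rng = np.random.default_rng(0)
    k, theta = 4, 0.5                           # 4 stages, overall mean 1/theta = 2
    # Each stage is exponential with rate k*theta, i.e. mean 1/(k*theta).
    stages = rng.exponential(scale=1/(k*theta), size=(100_000, k))
    X = stages.sum(axis=1)                      # Erlang(k, theta) samples
    print(X.mean(), X.var())                    # ~1/theta = 2, ~1/(k*theta^2) = 1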

48 Normal Distribution
The normal distribution gives a good description of data that cluster around the mean. The graph of the associated probability density function is bell-shaped, with a peak at the mean, and is known as the Gaussian function or bell curve. For example, the heights of adult males in the United States are roughly normally distributed, with a mean of about 70 inches (1.8 m). Most men have a height close to the mean, though a small number of outliers have heights significantly above or below it. A histogram of male heights will appear similar to a bell curve, with the correspondence becoming closer as more data are used.

49 Normal Distribution
A normally distributed random variable X has
Mean: -∞ < μ < ∞ and Variance: σ2 > 0
This is denoted X ~ N(μ, σ2): X is normally distributed with mean μ and variance σ2. The pdf is:
f(x) = (1/(σ sqrt(2π))) e^(-(x-μ)^2 / (2σ^2)), -∞ < x < ∞
Special properties:
- f(μ-x) = f(μ+x): the pdf is symmetric about μ.
- The maximum value of the pdf occurs at x = μ: the mean and mode are equal.

50 Normal Distribution
A normal curve has two characteristics: the mean (μ) and the standard deviation (σ).
Standardized variable: a variable is said to be standardized if it has been adjusted (transformed) so that its mean equals 0 and its standard deviation equals 1. Standardization is accomplished using the z-score formula:
Z = (X - μ)/σ
The z-score represents the number of standard deviations a data value lies away from the mean.

51 Normal Distribution
The cdf is given by:
F(x) = P(X <= x) = ∫ from -∞ to x of (1/(σ sqrt(2π))) e^(-(t-μ)^2 / (2σ^2)) dt
Evaluating the distribution: it is not possible to evaluate F(x) in closed form (i.e., analytically in terms of a finite number of "well-known" functions). Numerical methods could be used, but it would be necessary to evaluate the integral for each pair (μ, σ2). Instead, the evaluation is made independent of μ and σ by using the standard normal distribution Z ~ N(0, 1): the transformation of variables Z = (X - μ)/σ allows any normal probability to be computed from the standard normal cdf.
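A sketch of the transformation in scipy: any normal probability reduces to a standard normal cdf lookup. The mean and standard deviation below are illustrative (the height example from an earlier slide):

    from scipy.stats import norm

    mu, sigma = 70, 3          # illustrative: male heights in inches
    x = 73
    z = (x - mu) / sigma       # standardize: Z ~ N(0, 1)
    print(norm.cdf(z))                         # P(X <= 73) via the standard normal
    print(norm(loc=mu, scale=sigma).cdf(x))    # same probability, computed directly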

52 Weibull Distribution
Originally used to describe the size distribution of particles, the Weibull distribution is widely used for lifetimes. A random variable X has a Weibull distribution if its pdf has the form:
f(x) = (β/α) ((x-ν)/α)^(β-1) e^(-((x-ν)/α)^β) for x >= ν, and 0 otherwise
Its 3 parameters are:
- Location parameter: ν, (-∞ < ν < ∞)
- Shape parameter: β, (β > 0)
- Scale parameter: α, (α > 0)
The graph shows Weibull densities when ν = 0 and α = 1.

53 Weibull Distribution
The value of the shape parameter β (written k in some texts) can be interpreted directly as follows:
- β < 1 indicates that the failure rate decreases over time. This happens when there is significant "infant mortality": defective items fail early, and the failure rate decreases over time as the defective items are weeded out of the population.
- β = 1 indicates that the failure rate is constant over time (the exponential case). This might suggest that random external events are causing mortality or failure.
- β > 1 indicates that the failure rate increases with time. This happens when there is an "ageing" process, with parts more likely to fail as time goes on.
The mean and variance are given by:
E(X) = ν + α Γ(1/β + 1)
V(X) = α^2 [Γ(2/β + 1) - (Γ(1/β + 1))^2]

54 Applications of the Weibull Distribution
The Weibull distribution is used:
- In survival analysis
- In reliability engineering and failure analysis
- In industrial engineering, to represent manufacturing and delivery times
- In extreme value theory
- In weather forecasting, to describe wind speed distributions, as the natural distribution often matches the Weibull shape
- In communications systems engineering: in radar systems, to model the dispersion of the received signal level produced by some types of clutter, and to model fading channels in wireless communications, as the Weibull fading model seems to exhibit good fit to experimental fading-channel measurements
- In general insurance, to model the size of reinsurance claims and the cumulative development of asbestosis losses

55 Triangular Distribution
The triangular distribution is a continuous probability distribution with lower limit a, mode b, and upper limit c (a <= b <= c).
The mean is E(X) = (a + b + c)/3.
The mode occurs at x = b = 3E(X) - (a + c).
The variance is V(X) = (a^2 + b^2 + c^2 - ab - ac - bc)/18.

56 Triangular Distribution
The pdf is given by:
f(x) = 2(x-a) / ((b-a)(c-a)) for a <= x <= b
     = 2(c-x) / ((c-b)(c-a)) for b < x <= c
     = 0 otherwise
The cdf is given by:
F(x) = 0 for x <= a
     = (x-a)^2 / ((b-a)(c-a)) for a < x <= b
     = 1 - (c-x)^2 / ((c-b)(c-a)) for b < x <= c
     = 1 for x > c

57 Lognormal Distribution
A log-normal distribution is the probability distribution of a random variable whose logarithm is normally distributed: if Y is a random variable with a normal distribution, then X = exp(Y) has a log-normal distribution. Likewise, if X is log-normally distributed, then log(X) is normally distributed. (The base of the logarithm does not matter.)

58 Lognormal Distribution
A random variable X has a lognormal distribution if its pdf has the form:
f(x) = (1/(x σ sqrt(2π))) e^(-(ln x - μ)^2 / (2σ^2)) for x > 0, and 0 otherwise, where σ2 > 0
Notice that μ and σ2 are not the mean and variance of the lognormal; they are the parameters of the underlying normal. When Y has a N(μ, σ2) distribution, X = e^Y has a lognormal distribution with parameters μ and σ2; its mean and variance are E(X) = e^(μ + σ2/2) and V(X) = e^(2μ + σ2) (e^(σ2) - 1).

59 Lognormal Distribution
If the mean and variance of the lognormal are known to be μL and σ2L respectively, then the parameters μ and σ2 of the underlying normal are given by:
μ = ln [μL^2 / sqrt(μL^2 + σ2L)]
σ2 = ln [(μL^2 + σ2L) / μL^2]
The figure shows three lognormal pdfs, all having mean 1 but variances 1/2, 1, and 2.
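These conversions, sketched in plain Python for the three pdfs in the figure (μL = 1 with variances 1/2, 1, and 2):

    import math

    def lognormal_params(mean_L, var_L):
        """Parameters (mu, sigma^2) of the underlying normal, computed
        from the lognormal's own mean and variance."""
        mu = math.log(mean_L**2 / math.sqrt(mean_L**2 + var_L))
        sigma2 = math.log((mean_L**2 + var_L) / mean_L**2)
        return mu, sigma2

    for var_L in (0.5, 1.0, 2.0):
        print(lognormal_params(1.0, var_L))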

60 Poisson Process
Recall that the Poisson distribution describes, e.g., the number of cars arriving at a service station, calls to a call center, accidents on a road, or child births per day. Such events may be described by a counting function N(t), defined for all t >= 0, which represents the number of events that occurred in the interval [0, t]. Observation begins at time 0, regardless of whether an arrival occurred at that instant.

61 Poisson Process
A counting process {N(t), t >= 0} is a Poisson process with mean rate λ if the following assumptions are fulfilled:
1. Arrivals occur one at a time.
2. {N(t), t >= 0} has stationary increments: the distribution of the number of arrivals between t and t+s depends only on the length s of the interval, not on the starting point t. That is, arrivals are completely random, without rush or slack periods.
3. {N(t), t >= 0} has independent increments: a large or small number of arrivals in one time interval has no effect on the number of arrivals in subsequent time intervals.

62 If arrivals occur according to a Poisson process meeting the three assumptions, then
P(N(t) = n) = e^(-λt) (λt)^n / n!, for n = 0, 1, 2, ...
i.e., N(t) has the Poisson distribution with parameter α = λt, so its mean and variance are equal:
E[N(t)] = V[N(t)] = λt
For any times s and t such that s < t, the assumption of stationary increments implies that the random variable N(t) - N(s), representing the number of arrivals in the interval from s to t, is also Poisson distributed, with mean λ(t - s). Substituting (t - s) for t in the equation above gives P(N(t) - N(s) = n) for n = 0, 1, 2, ..., and
E[N(t) - N(s)] = V[N(t) - N(s)] = λ(t - s)

63 Consider the inter-arrival times of a Poisson process (A1, A2, ...), where Ai is the elapsed time between arrival i and arrival i+1. The 1st arrival occurs after time t iff there are no arrivals in the interval [0, t], hence:
P{A1 > t} = P{N(t) = 0} = e^(-λt)
The probability that the first arrival occurs in the interval [0, t] is therefore:
P{A1 <= t} = 1 - e^(-λt)   [the cdf of exp(λ)]
The inter-arrival times A1, A2, ... are exponentially distributed and independent, with mean 1/λ.

64 An alternative definition of a Poisson process: if the inter-arrival times are distributed exponentially and independently, then the number of arrivals by time t, N(t), meets the three assumptions above, i.e., it is a Poisson process.
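This alternative definition is also how a Poisson process is commonly simulated: generate exponential inter-arrival times and accumulate them. A sketch, assuming numpy, with an illustrative rate and horizon:

    import numpy as np

    rng = np.random.default_rng(1)
    lam, horizon = 2.0, 10.0
    # Exponential inter-arrival times with mean 1/lam, accumulated into arrival times.
    gaps = rng.exponential(scale=1/lam, size=50)
    arrivals = np.cumsum(gaps)
    arrivals = arrivals[arrivals <= horizon]
    print(len(arrivals))           # N(horizon); E[N(horizon)] = lam * horizon = 20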

65 Properties of Poisson Process
Splitting: Suppose each event of a Poisson process can be classified as Type I, with probability p, or Type II, with probability 1-p. Let N1(t) and N2(t) be the random variables denoting the number of Type I and Type II events in [0, t]. Then N(t) = N1(t) + N2(t), where N1(t) and N2(t) are both Poisson processes, with rates λp and λ(1-p) respectively.

66 Properties of Poisson Process
Pooling: Suppose two independent Poisson processes, with rates λ1 and λ2, are pooled together. Then N1(t) + N2(t) = N(t), where N(t) is a Poisson process with rate λ1 + λ2.

67 Nonstationary Poisson Process (NSPP)
An NSPP is a Poisson process without the stationary increments: we keep Poisson assumptions 1 and 3 but drop 2. It is characterized by λ(t), the arrival rate at time t. The NSPP is useful in situations in which the arrival rate varies during the period of interest, e.g., meal times at restaurants, phone calls during business hours, and traffic at peak hours. The expected number of arrivals by time t is:
Λ(t) = ∫ from 0 to t of λ(s) ds

68 Nonstationary Poisson Process (NSPP)
To be useful as an arrival-rate function, λ(t) must be nonnegative and integrable. Let the arrival times of a stationary Poisson process with rate λ = 1 be t1, t2, ..., and the arrival times of an NSPP with rate λ(t) be T1, T2, .... The fundamental relationship for working with NSPPs is:
ti = Λ(Ti)
Ti = Λ^(-1)(ti)
That is, an NSPP with rate λ(t) can be transformed into a stationary Poisson process with arrival rate 1, and vice versa.
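A sketch of this transformation for an assumed rate function λ(t) = 2t, chosen because its Λ(t) = t^2 inverts in closed form as Λ^(-1)(u) = sqrt(u). Assumes numpy:

    import numpy as np

    rng = np.random.default_rng(2)
    # Rate-1 stationary arrivals t_i on [0, 25]: unit-mean exponential gaps.
    t = np.cumsum(rng.exponential(scale=1.0, size=60))
    t = t[t <= 25.0]

    # Assumed rate lambda(t) = 2t => Lambda(t) = t^2, Lambda^{-1}(u) = sqrt(u);
    # T_i = Lambda^{-1}(t_i) are the NSPP arrival times.
    T = np.sqrt(t)
    print(T[:5], T[-1])   # arrivals grow denser as lambda(t) increases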

