Presentation on theme: "Network Simulation Motivation: r learn fundamentals of evaluating network performance via simulation Overview: r fundamentals of discrete event simulation."— Presentation transcript:

1 Network Simulation
Motivation:
- learn fundamentals of evaluating network performance via simulation
Overview:
- fundamentals of discrete event simulation
- analyzing simulation outputs
- ns-2 simulation

2 The evaluation spectrum
numerical models → simulation → emulation → prototype → operational system

3 What is simulation?
"Real" life:
- system under study (has deterministic rules governing its behavior)
- exogenous inputs to system (the environment)
- system boundary, observer
"Simulated" life:
- computer program simulates the deterministic rules governing behavior
- pseudo-random inputs to system (model the environment)
- program boundary, observer

4 Why Simulation?
- goal: study system performance, operation
- real system not available, or is complex/costly or dangerous (e.g., space simulations, flight simulations)
- quickly evaluate design alternatives (e.g., different system configurations)
- evaluate complex functions for which closed-form formulas or numerical techniques are not available

5 Simulation: advantages/drawbacks
- advantages:
- drawbacks/dangers:

6 Programming a simulation
What's in a simulation program?
- simulated time: internal (to the simulation program) variable that keeps track of simulated time
- system "state": variables maintained by the simulation program define the system "state"
  - e.g., may track the number (possibly order) of packets in a queue, current value of a retransmission timer
- events: points in time when the system changes state
  - each event has an associated event time, e.g., arrival of a packet to a queue, departure from the queue
  - it is precisely at these points in time that the simulation must take action (change state, and possibly cause new future events)
  - model for time between events (probabilistic) caused by the external environment

7 Discrete Event Simulation
- simulation program maintains and updates a list of future events: the event list
- simulator structure:
  1. initialize event list
  2. get next (nearest-future) event from event list; if done, stop
  3. set time = event time
  4. process event (change state values, add/delete future events from event list)
  5. update statistics; go to step 2
- need:
  - a well-defined set of events
  - for each event: the simulated system action and the updating of the event list
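The loop above can be sketched in a few lines. This is an illustrative skeleton (the names `run`, `tick`, and the handler signature are ours, not from any simulator): the event list is a min-heap ordered by event time, and handlers may push new future events.

```python
import heapq

def run(event_list, handlers, end_time):
    """Pop nearest-future events until the list is empty or time runs out."""
    now = 0.0
    processed = 0
    while event_list:
        time, kind, data = heapq.heappop(event_list)   # nearest-future event
        if time > end_time:
            break
        now = time                       # advance simulated time to the event
        handlers[kind](now, data, event_list)   # may schedule new events
        processed += 1
    return now, processed

# Toy usage: a single "tick" event that reschedules itself once per time unit.
def tick(now, data, ev):
    if now + 1.0 <= 5.0:
        heapq.heappush(ev, (now + 1.0, "tick", None))

events = [(0.0, "tick", None)]
final_time, n = run(events, {"tick": tick}, end_time=5.0)
```

The heap keeps "get next event" at O(log n) per operation, which is why most discrete-event simulators organize the event list this way.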

8 Simulation: example
- packets arrive (avg. interarrival time 1/λ) to a router (avg. service time 1/μ) with two outgoing links; an arriving packet joins link i with probability p_i
- state of system: size of each queue
- system events:
  - packet arrivals
  - service-time completions
- define the performance measures to be gathered

9 Simulation: example
Simulator actions on an arrival event:
- choose a link
  - if link idle: place pkt in service, determine service time (random number drawn from the service-time distribution), add a future event onto the event list for pkt transfer completion, set number of pkts in queue to 1
  - if buffer full: increment # dropped packets, ignore arrival
  - else: increment number in queue where pkt is queued
- create event for next arrival (generate interarrival time), place event on event list

10 Simulation: example
Simulator actions on a departure event:
- remove event, update simulation time, update performance statistics
- decrement counter of number of pkts in queue
- if (number of pkts in queue > 0): put next pkt into service, schedule its completion event (generate a service time for this pkt)
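A single-link version of the arrival/departure logic above (one queue, infinite buffer, exponential interarrival and service times) fits in one function. The name `mm1` and the rate values are our illustrative choices; the slides' two-link router would add the link-choice step.

```python
import heapq
import random

def mm1(lam, mu, num_arrivals, seed=1):
    """Simulate a single FIFO queue; return the per-packet delays."""
    rng = random.Random(seed)
    ev = [(rng.expovariate(lam), "arr")]   # event list: (time, kind)
    in_queue = 0
    arrived = 0
    pending = []                 # arrival timestamps of packets still in system
    delays = []
    while ev:
        t, kind = heapq.heappop(ev)
        if kind == "arr":
            arrived += 1
            pending.append(t)
            in_queue += 1
            if in_queue == 1:    # link was idle: place pkt in service
                heapq.heappush(ev, (t + rng.expovariate(mu), "dep"))
            if arrived < num_arrivals:   # schedule the next arrival
                heapq.heappush(ev, (t + rng.expovariate(lam), "arr"))
        else:                    # departure: record delay, serve next pkt
            delays.append(t - pending.pop(0))
            in_queue -= 1
            if in_queue > 0:
                heapq.heappush(ev, (t + rng.expovariate(mu), "dep"))
    return delays

delays = mm1(lam=1.0, mu=2.0, num_arrivals=1000)
```

With λ = 1 and μ = 2 the theoretical mean time in system for an M/M/1 queue is 1/(μ − λ) = 1, so the sample mean of `delays` should land in that neighborhood, run to run.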

11 Simulation: example
Simulator initialization:
- event list:
- state variables:

12 Gathering Performance Statistics
- avg delay at queue i: record D_ij, the delay of customer j at queue i; with N_i the # of customers passing through queue i, avg delay = (1/N_i) Σ_j D_ij
- average queue length at i: time-weighted number in queue divided by total simulated time
- throughput at queue i: γ_i = N_i / total simulated time
- Little's Law ties these together: avg queue length = γ_i × avg delay
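These estimators can be computed directly from a trace of per-packet (arrival, departure) times at one queue. The function name and the hand-made trace below are illustrative; `T` is the total simulated time.

```python
def queue_stats(intervals, T):
    """intervals: list of (arrival, departure) pairs at one queue."""
    N = len(intervals)
    avg_delay = sum(d - a for a, d in intervals) / N      # (1/N) * sum D_j
    throughput = N / T                                    # gamma = N / T
    # time-average number in queue: total occupied time / T
    area = sum(min(d, T) - min(a, T) for a, d in intervals)
    avg_len = area / T
    return avg_delay, throughput, avg_len

# Little's Law check on a tiny hand-made trace: L = gamma * W
trace = [(0.0, 2.0), (1.0, 3.0), (4.0, 5.0)]
W, gamma, L = queue_stats(trace, T=5.0)
```

For this trace W = 5/3, γ = 0.6, and L = 1.0 = γ·W, which is exactly the Little's Law identity the slide points to.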

13 Analyzing Output Results
Each time we run a simulation (using a different random number stream), we get different output results!
- input to each run: distribution of the random numbers used during simulation (interarrival times, service times)
- random number sequence 1 → simulation → output results 1
- random number sequence 2 → simulation → output results 2
- …
- random number sequence M → simulation → output results M

14 Analyzing Output Results
- W_{2,n}: delay of the n-th departing customer from queue 2

15 Analyzing Output Results
- each run shows variation in customer delay
- one run differs from the next
- a statistical characterization of delay must be made:
  - expected delay of the n-th customer
  - behavior as n approaches infinity
  - average delay over n customers

16 Transient Behavior
- simulation outputs that depend on the initial conditions (i.e., the output value changes when the initial conditions change) are called transient characteristics
  - seen in the "early" part of the simulation
  - the later part of the simulation is less dependent on initial conditions

17 Effect of initial conditions
- histogram of the delay of the 20th customer, given an initially empty queue (1000 runs)
- histogram of the delay of the 20th customer, given non-empty initial conditions

18 Steady state behavior
- output results may converge to a limiting "steady state" value if the simulation is run "long enough"
- discard statistics gathered during the transient phase, e.g., ignore the first n_0 measurements of delay at queue 2
- example statistic: avg delay of packets [n, n+10], averaged over 5 simulations
- pick n_0 so the statistic is "approximately the same" for different random number streams and remains the same as n increases
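Discarding the transient is mechanically simple: drop the first n_0 samples, then average the rest. A minimal sketch (the function name and the warm-up data are illustrative; choosing n_0 is the hard part, per the slide):

```python
def steady_state_mean(samples, n0):
    """Average the samples after discarding the first n0 (the transient)."""
    kept = samples[n0:]
    return sum(kept) / len(kept)

# A warm-up ramp followed by a flat steady state around 10:
data = [1, 3, 6, 9] + [10] * 20
m = steady_state_mean(data, n0=4)
```

With n_0 = 4 the ramp is discarded entirely and the estimate is exactly the steady-state value; with n_0 = 0 the ramp would bias the mean downward.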

19 (figure)

20 (figure)

21 Confidence Intervals
- run simulation: get X_1 as an estimate of the performance metric of interest
- repeat the simulation M times (each with a new set of random numbers): get X_2, …, X_M (all different!)
- which of X_1, …, X_M is "right"?
- intuitively, the average of the M samples should be "better" than choosing any one of the M samples
- how "confident" are we in this average?

22 Confidence Intervals
- cannot get a perfect estimate of the true mean, μ, with a finite # of samples
- look for bounds: find c1 and c2 such that Probability(c1 < μ < c2) = 1 − α
  - [c1, c2]: confidence interval
  - 100(1 − α): confidence level
- one approach for finding c1, c2 (suppose α = 0.1):
  - take k samples (e.g., k independent simulation runs)
  - sort them
  - largest value in the smallest 5% → c1
  - smallest value in the largest 5% → c2
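The sort-based recipe from this slide takes a few lines. A sketch under the slide's assumptions (α = 0.1, so each tail holds 5% of the samples; the function name is ours):

```python
def percentile_interval(samples, alpha=0.1):
    """c1 = largest value in the lowest alpha/2 tail,
       c2 = smallest value in the highest alpha/2 tail."""
    s = sorted(samples)
    k = len(s)
    tail = max(1, int(k * alpha / 2))   # number of samples in each tail
    c1 = s[tail - 1]                    # largest of the smallest 5%
    c2 = s[-tail]                       # smallest of the largest 5%
    return c1, c2

# 100 samples 1..100: the bottom 5% is {1..5}, the top 5% is {96..100}
c1, c2 = percentile_interval(list(range(1, 101)), alpha=0.1)
```

For the 1..100 example this yields (5, 96), i.e., 90% of the samples fall strictly between the tails, matching the 1 − α = 0.9 target.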

23 Confidence Intervals: Central Limit Thm
- Central Limit Theorem: if the samples X_1, …, X_M are independent and from the same population, with population mean μ and standard deviation σ, then the sample mean X̄ = (1/M) Σ_i X_i is approximately normally distributed with mean μ and standard deviation σ/√M

24 Confidence Intervals.. more
- we still don't know the population standard deviation σ, so we estimate it using the sample (observed) standard deviation: s = sqrt( (1/(M−1)) Σ_i (X_i − X̄)² )
- given s, we can now find the upper and lower tails of the normal distribution containing 100(1 − α)% of the mass

25 Interpretation of Confidence Interval
- if we compute confidence intervals as in the recipe, then 95% of the confidence intervals thus computed will contain the true (unknown) population mean

26 Confidence Intervals.. the recipe
- given samples X_1, …, X_M (e.g., having repeated the simulation M times), compute the 95% confidence interval: X̄ ± 1.96 · s/√M

27 Generating confidence intervals for steady state measures
- independent replications with deletions
- method of batch means
- autoregressive method
- spectrum analysis
- regenerative method

28 Independent replications
1. generate n independent replications with m samples each; remove the first l_0 samples from each to obtain one mean per replication
2. calculate the sample mean and variance from these per-replication means
3. use the t-distribution to compute confidence intervals
4. can combine with a sequential stopping rule to obtain a confidence interval of specified width
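Steps 1–3 can be sketched as follows. This is an illustrative implementation (the function name and data are ours); the t critical value is passed in rather than computed, and 2.776 below is t_{0.975, 4} for n = 5 replications.

```python
from math import sqrt

def replication_interval(runs, l0, t_crit):
    """Independent replications with deletions: drop the first l0 samples
    of each run, form per-run means, then a t-based confidence interval."""
    means = [sum(r[l0:]) / len(r[l0:]) for r in runs]    # step 1
    n = len(means)
    grand = sum(means) / n                               # step 2: mean
    var = sum((x - grand) ** 2 for x in means) / (n - 1) # step 2: variance
    half = t_crit * sqrt(var / n)                        # step 3
    return grand - half, grand + half

# Each run starts with one large transient sample that deletion removes:
runs = [[50, 9, 10, 11], [60, 10, 10, 10], [40, 11, 10, 9],
        [55, 10, 9, 11], [45, 9, 11, 10]]
lo, hi = replication_interval(runs, l0=1, t_crit=2.776)
```

With l_0 = 1 the inflated first sample of each run is discarded, and every per-run mean comes out as 10, so the interval collapses onto the steady-state value; without deletion the transient samples would drag the estimate far upward.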

29 Other methods
- batch means: take a single run, delete the first l_0 observations, divide the remainder into n groups and obtain X_i for the i-th group, i = 1, …, n
  - then follow the procedure for independent replications
  - complication: the X_i's are not independent
  - potential efficiency gain: only l_0 observations are deleted
- autoregressive method, spectrum analysis: based on studying the correlation of observations

30
- regenerative method: applicable to systems with regeneration points
  - regeneration point: the future is independent of the past
  - can construct observations for the intervals between regeneration points that will be i.i.d.
  - use of the CLT then provides confidence intervals

31 Comparing two different systems
Example: we want to compare the mean response times of two queues where the arrival process remains unchanged but the speeds of the servers differ.
- run each system n times to get {X_{1,j}} and {X_{2,j}}, and take Z_j = X_{1,j} − X_{2,j} as the observations used to determine confidence intervals for the difference in means
- note that no assumption is made about whether {X_{1,j}} and {X_{2,j}} are independent of each other
  - using the same random number streams to generate rvs for the j-th runs of both systems usually results in a smaller sample variance of {Z_j}
  - this is referred to as the method of common random numbers
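The paired comparison can be sketched as follows. The numbers are illustrative, chosen to mimic the common-random-numbers effect: both systems see the same per-run "traffic noise," so the differences Z_j are far steadier than either sequence alone.

```python
def paired_differences(x1, x2):
    """Mean and sample variance of the paired differences Z_j = X1_j - X2_j."""
    z = [a - b for a, b in zip(x1, x2)]
    n = len(z)
    mean = sum(z) / n
    var = sum((v - mean) ** 2 for v in z) / (n - 1)
    return mean, var

# Per-run response times; the run-to-run swings are shared between systems,
# as they would be when both runs use the same random number stream.
x1 = [12.0, 15.0, 11.0, 14.0]     # slower server
x2 = [10.1, 13.0, 9.2, 12.1]      # faster server, same streams
mean_z, var_z = paired_differences(x1, x2)
```

Here the individual sequences swing by several time units run to run, but the differences stay near 1.9 with tiny variance, which is exactly why common random numbers tighten the confidence interval on the difference.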

32 ns-2, the network simulator
- discrete event simulator
- models network protocols:
  - wired, wireless, satellite
  - TCP, UDP, multicast, unicast
  - web, telnet, ftp
  - ad hoc, sensor nets
  - infrastructure: stats, tracing, error models, etc.
- use prepackaged protocols and modules, or create your own
Our goal:
- a flavor of ns: a simple example, modification, execution, and trace analysis

33 "ns" components
- ns, the simulator itself (this is all we'll have time for)
- nam, the Network AniMator
  - visualize ns (or other) output
  - GUI to input simple ns scenarios
- pre-processing:
  - traffic and topology generators
- post-processing:
  - simple trace analysis, often in Awk, Perl, or Tcl
- tutorial: http://www.isi.edu/nsnam/ns/tutorial/index.html
- ns by example: http://nile.wpi.edu/NS/

