1 Traffic Measurements (modified from slides by Carey Williamson)

2 Network Traffic Measurement
A focus of networking research for 20+ years: collect data (packet traces) showing packet activity on the network for different applications, then study, analyze, and characterize the Internet traffic.
Goals:
– Understand the basic methodologies used
– Understand the key measurement results to date

3 Why Network Traffic Measurement?
– Understand the traffic on existing networks
– Develop models of traffic for future networks
– Useful for simulations and capacity planning studies

4 Measurement Environments
– Local Area Networks (LANs), e.g., Ethernet LANs
– Wide Area Networks (WANs), e.g., the Internet
– Wireless LANs
– …

5 Requirements
– Network measurement requires hardware or software measurement facilities that attach directly to the network
– Allows you to observe all packet traffic on the network, or to filter it to collect only the traffic of interest
– Assumes a broadcast-based network technology and superuser permission

6 Measurement Tools (1 of 3)
Tools can be classified as hardware or software measurement tools.
– Hardware: specialized equipment. Examples: HP 4972 LAN Analyzer, DataGeneral Network Sniffer, others...
– Software: special software tools. Examples: tcpdump, xtr, SNMP, others...

7 Measurement Tools (2 of 3)
Measurement tools can also be classified as active or passive.
– Active: the monitoring tool generates traffic of its own during data collection (e.g., ping, pchar)
– Passive: the monitoring tool only observes and records traffic information, generating none of its own (e.g., tcpdump)

8 Measurement Tools (3 of 3)
Measurement tools can also be classified as real-time or non-real-time.
– Real-time: collects traffic data as it happens, and may even display traffic information as it happens, for real-time traffic management
– Non-real-time: the collected traffic data may be only a subset (sample) of the total traffic, analyzed off-line (later) for detailed analysis

9 Potential Uses of Tools (1 of 4)
Protocol debugging
– Network debugging and troubleshooting
– Changing network configuration
– Designing and testing new protocols
– Designing and testing new applications
– Detecting network weirdness: broadcast storms, routing loops, etc.

10 Potential Uses of Tools (2 of 4)
Performance evaluation of protocols and applications
– How the protocol/application is being used
– How well it works
– How to design it better

11 Potential Uses of Tools (3 of 4)
Workload characterization
– What traffic is generated
– Packet size distribution
– Packet arrival process
– Burstiness
– Important in the design of networks, applications, interconnection devices, congestion control algorithms, etc.

12 Potential Uses of Tools (4 of 4)
Workload modeling
– Construct synthetic workload models that concisely capture the salient characteristics of actual network traffic
– Use them as representative, reproducible, flexible, controllable workload models for simulations, capacity planning studies, etc.

14 Traffic Measurement Time Scales
Performance analysis: representative models of throughput, packet loss, packet delay
– Timescales: microseconds to minutes
Network engineering: network configuration, capacity planning, demand forecasting, traffic engineering
– Timescales: minutes to years
Different timescales call for different measurement methods.

15 Properties
The most basic view of traffic is as a collection of packets passing through routers and links.
Packets and bytes:
– One can capture/observe packets at some location
– Packet arrivals / interarrival times give a count of traffic at some timescale T, capturing the workload generated by the traffic on a per-packet basis
– Packet sizes give a time series of byte counts, capturing the amount of bandwidth consumed; the packet size distribution also matters for router design, etc.
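
The following is a minimal Python sketch (not from the original slides) of the per-packet view described above: it bins a packet trace, assumed to be a list of (timestamp, size) pairs, into packet and byte counts at a chosen timescale T.

```python
# Minimal sketch (assumption, not from the slides): turning a packet trace into
# per-interval packet and byte counts at timescale T.
# "packets" is assumed to be a list of (timestamp_seconds, size_bytes) tuples.
from collections import defaultdict

def bin_trace(packets, T=1.0):
    """Aggregate a packet trace into packet and byte counts per bin of width T."""
    pkt_counts = defaultdict(int)
    byte_counts = defaultdict(int)
    for ts, size in packets:
        b = int(ts // T)            # index of the bin this packet falls into
        pkt_counts[b] += 1
        byte_counts[b] += size
    bins = range(min(pkt_counts), max(pkt_counts) + 1) if pkt_counts else []
    return ([pkt_counts[b] for b in bins],
            [byte_counts[b] for b in bins])

# Example: three packets, two bins of T = 1 second
pkts, byts = bin_trace([(0.1, 1500), (0.4, 40), (1.2, 1500)], T=1.0)
print(pkts)   # [2, 1]
print(byts)   # [1540, 1500]
```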

16 Higher-level Structure
Transport protocols and applications impose higher-level structure on the traffic:
– ON/OFF process: a bursty, packet-level workload
– Packet train: a burst of packets delimited by an interarrival-time threshold
– Session: a single execution of an application, typically human generated
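
As a rough illustration of the packet-train idea (a sketch under simplifying assumptions, not the authors' code), the snippet below splits a sorted list of packet timestamps into trains whenever the interarrival time exceeds a threshold.

```python
# Minimal sketch (assumption): grouping a sorted sequence of packet timestamps
# into "packet trains" using an interarrival-time threshold.
def packet_trains(timestamps, gap=0.5):
    """A new train starts whenever the interarrival time exceeds `gap` seconds."""
    trains = []
    current = []
    prev = None
    for ts in timestamps:
        if prev is not None and ts - prev > gap:
            trains.append(current)
            current = []
        current.append(ts)
        prev = ts
    if current:
        trains.append(current)
    return trains

print(packet_trains([0.0, 0.1, 0.15, 2.0, 2.05, 5.0], gap=0.5))
# [[0.0, 0.1, 0.15], [2.0, 2.05], [5.0]]
```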

17 Flows
A flow is the set of packets passing an observation point during a time interval, with all packets sharing a set of common properties (header field contents, packet characteristics, etc.).
IP flows are defined by:
– source/destination addresses
– IP or transport header fields
– address prefix
Network-defined flows describe the network's workload in terms of:
– ingress and egress points
– the traffic matrix and path matrix
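
A small illustrative sketch (hypothetical data, not from the slides) of flow aggregation: packets are grouped by the common 5-tuple of source/destination address, protocol, and ports, and per-flow packet and byte counts are accumulated.

```python
# Minimal sketch (assumption): aggregating packets into IP flows keyed by the
# classic 5-tuple (src IP, dst IP, protocol, src port, dst port).
from collections import defaultdict

def aggregate_flows(packets):
    """packets: iterable of dicts with keys 'src', 'dst', 'proto', 'sport', 'dport', 'size'."""
    flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
    for p in packets:
        key = (p["src"], p["dst"], p["proto"], p["sport"], p["dport"])
        flows[key]["packets"] += 1
        flows[key]["bytes"] += p["size"]
    return dict(flows)

trace = [
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 4321, "dport": 80, "size": 1500},
    {"src": "10.0.0.1", "dst": "10.0.0.2", "proto": "tcp", "sport": 4321, "dport": 80, "size": 40},
]
print(aggregate_flows(trace))   # one flow with 2 packets, 1540 bytes
```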

18 Semantically Distinct Traffic Types
Control traffic (control plane):
– Routing protocols: BGP, OSPF, IS-IS
– Measurement and management: SNMP
– General control packets: ICMP
Data plane traffic
Malicious traffic

20 Challenges
Practical issues:
– Observability: the simplicity of the core (flows vs. packets), distributed internetworking, the IP hourglass
– Data volume
– Data sharing

21 Challenges
Statistical difficulties:
– Long tails and high variability: instability of metrics, modeling difficulty, confounded intuition
– Stationarity and stability: stationarity means the joint probability distribution does not change when shifted in time; stability means consistency of properties over time
– Autocorrelation and memory in system behavior
– High dimensionality

22 Tools
Packet capture:
– General-purpose systems: libpcap, tcpdump, ethereal, scriptroute, …
– Special-purpose systems
– Control plane traffic: GNU Zebra, Routeviews

23 Data Management
– Full packet capture and storage is challenging
– Limitations of commodity PCs
– Data stream management
– Big Data platforms: Hadoop, etc.

24 Data Reduction
Lossy compression:
– Counters: the SNMP Management Information Base (MIB)
– Flow capture: packet trains, packet flows

25 Data Reduction
Sampling:
– Basic packet sampling: random (keep each packet with a fixed probability), deterministic (periodic samples), stratified (multi-step sampling)
– Trajectory sampling: the same randomly sampled packets are observed at all measurement locations
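
The sketch below (an illustration under simplifying assumptions, not a production sampler) shows the three basic packet-sampling strategies on an in-memory list of packet records, plus a hash-based stand-in for the trajectory-sampling idea.

```python
# Minimal sketch (assumption): basic packet-sampling strategies from the slide.
import random

def random_sample(packets, p=0.01):
    """Keep each packet independently with probability p."""
    return [pkt for pkt in packets if random.random() < p]

def deterministic_sample(packets, n=100):
    """Keep every n-th packet (periodic sampling)."""
    return packets[::n]

def stratified_sample(packets, stratum=100, per_stratum=1):
    """Split the trace into consecutive strata of `stratum` packets and draw
    `per_stratum` random packets from each stratum (multi-step sampling)."""
    out = []
    for i in range(0, len(packets), stratum):
        block = packets[i:i + stratum]
        out.extend(random.sample(block, min(per_stratum, len(block))))
    return out

def trajectory_sample(packets, modulus=100):
    """Hash-based selection: observation points applying the same hash to
    invariant packet content pick the same packets (simplified illustration)."""
    return [pkt for pkt in packets if hash(pkt) % modulus == 0]

trace = list(range(1000))                        # stand-in for 1000 packet records
print(len(deterministic_sample(trace, n=100)))   # 10
print(len(trajectory_sample(trace, 100)))        # 10 (same packets at every point)
```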

26 Data Reduction
Summarization:
– Bloom filters
– Sketches: dimension-reducing random projections
– Probabilistic counting
– Landmark / sliding-window models

27 Review: Bloom Filters
Given a set S = {x1, x2, …, xn} drawn from a universe U, we want to answer queries of the form: is y in S?
A Bloom filter provides an answer in
– "constant" time (the time to hash),
– a small amount of space,
– but with some probability of being wrong.
An alternative to hashing, with interesting tradeoffs.

28 Review: Bloom Filters
Start with an m-bit array B, filled with 0s. Hash each item xj in S k times; if Hi(xj) = a, set B[a] = 1.
(figure: B goes from 0000000000000000 to 0100101001110110 as items are inserted)
To check whether y is in S, check B at Hi(y) for each of the k hash functions; all k values must be 1.
A false positive is possible: all k bits may be 1 even though y is not in S.
(parameters: n items, m = cn bits, k hash functions)
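
A compact Python sketch of the construction just described: an m-bit array with k hash functions derived from SHA-256 (the choice of hash is an assumption made here for illustration); membership tests can yield false positives but never false negatives.

```python
# Minimal sketch (assumption): a tiny Bloom filter matching the slide's
# description -- an m-bit array and k hash functions.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _hashes(self, item):
        # Derive k hash values from SHA-256 of (seed, item); a stand-in for
        # k independent hash functions H_1 .. H_k.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for a in self._hashes(item):
            self.bits[a] = 1

    def __contains__(self, item):
        return all(self.bits[a] == 1 for a in self._hashes(item))

bf = BloomFilter(m=1024, k=4)
bf.add("10.0.0.1")
print("10.0.0.1" in bf)   # True (no false negatives)
print("10.0.0.2" in bf)   # False with high probability; false positives possible
```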

29 Review: Bloom Filters – Tradeoffs
Three parameters:
– Size m/n: bits per item
– Time k: number of hash functions
– Error f: false positive probability

30 Review: Bloom Filters – False Positive Probability
Pr(a specific bit of the filter is 0) is (1 - 1/m)^(kn) ≈ e^(-kn/m).
If r is the fraction of 0 bits in the filter, then the false positive probability is (1 - r)^k ≈ (1 - e^(-kn/m))^k.
The approximations are valid because r is concentrated around E[r] (a martingale argument suffices).
Find the optimum at k = (ln 2) m/n by calculus; the optimal false positive probability is then about (0.6185)^(m/n).
(parameters: n items, m = cn bits, k hash functions)
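
A quick numeric check of these formulas (added here for illustration, not from the slides): with m/n = 8 bits per item, the optimal k is about 5.5 and the resulting false positive probability is roughly 2%, matching (0.6185)^8.

```python
# Sanity-check the Bloom filter formulas for m/n = 8 bits per item.
import math

bits_per_item = 8                        # m/n
k_opt = math.log(2) * bits_per_item      # optimal number of hash functions
fpp = (1 - math.exp(-k_opt / bits_per_item)) ** k_opt

print(round(k_opt, 2))                   # ~5.55 (round to 5 or 6 in practice)
print(round(fpp, 4))                     # ~0.0214
print(round(0.6185 ** bits_per_item, 4)) # ~0.0214, the same value
```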

31 Data Reduction
Dimensionality reduction:
– Clustering
– Principal Component Analysis
Probabilistic models:
– Distribution models
– Dependence structure
Inference:
– Traffic matrix estimation

32 Curse of Dimensionality
A major problem is the curse of dimensionality: if the data x lies in a high-dimensional space, an enormous amount of data is required to learn distributions or decision rules.
Example: 50 dimensions, each with 20 levels, gives a total of 20^50 cells. The number of data samples will be far less, so there will not be enough samples to learn from.

33 Dimension Reduction
One way to avoid the curse of dimensionality is to project the data onto a lower-dimensional space.
Techniques for dimension reduction:
– Principal Component Analysis (PCA)
– Fisher's Linear Discriminant
– Multi-dimensional Scaling
– Independent Component Analysis
– …

34 Principal Component Analysis
PCA is the most commonly used dimension reduction technique (also called the Karhunen-Loeve transform).
Given data samples x1, …, xn:
– Compute the mean: μ = (1/n) Σ_i xi
– Compute the covariance: Σ = (1/n) Σ_i (xi − μ)(xi − μ)^T

35 Principal Component Analysis
Compute the eigenvalues and eigenvectors of the covariance matrix Σ: solve Σ ei = λi ei.
Order them by magnitude: λ1 ≥ λ2 ≥ … ≥ λn.
PCA reduces the dimension by keeping only the leading directions e1, …, ep, chosen so that the retained eigenvalues account for most of the total variance.

36 Principal Component Analysis
For many datasets, most of the eigenvalues λi are negligible and can be discarded: the eigenvalue λi measures the variation in the direction of its eigenvector ei.

37 Why Principal Component Analysis?
Motivation:
– Find bases that have high variance in the data
– Encode the data with a small number of bases at low mean squared error (MSE)

38 Dimensionality Reduction
We can ignore the components of lesser significance; some information is lost, but if the discarded eigenvalues are small, not much.
– n dimensions in the original data
– calculate n eigenvectors and eigenvalues
– choose only the first p eigenvectors, based on their eigenvalues
– the final data set has only p dimensions
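
A minimal numpy sketch of these steps (illustrative only, using synthetic data): center the data, take the covariance eigendecomposition, keep the top p eigenvectors, and report the fraction of variance retained.

```python
# Minimal PCA sketch (assumption, not from the slides): reduce n-dimensional
# data to p dimensions by projecting onto the top-p covariance eigenvectors.
import numpy as np

def pca(X, p):
    """X: (num_samples, n) data matrix. Returns the projected data and the
    fraction of total variance retained by the first p components."""
    mu = X.mean(axis=0)
    Xc = X - mu                               # center the data
    cov = np.cov(Xc, rowvar=False)            # n x n covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # ascending eigenvalues
    order = np.argsort(eigvals)[::-1]         # sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    retained = eigvals[:p].sum() / eigvals.sum()
    return Xc @ eigvecs[:, :p], retained

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.3]])  # elongated cloud
Y, frac = pca(X, p=1)
print(Y.shape, round(float(frac), 3))   # (200, 1) and most of the variance
```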

39 Dimensionality Reduction
(figure: variance vs. dimensionality)

40 PCA and Discrimination
PCA may not find the best directions for discriminating between two classes.
Example: suppose the two classes have 2D Gaussian densities shaped as ellipsoids. The 1st eigenvector is best for representing the probabilities; the 2nd eigenvector is best for discrimination.

41 Linear Methods
Principal Component Analysis (PCA)
(figure: data on a one-dimensional manifold)

42 Nonlinear Manifolds
(figure: a point A on a curled-up manifold; unroll the manifold)
PCA and MDS see the Euclidean distance; what is important is the geodesic distance.

43 To preserve structure, preserve the geodesic distance, not the Euclidean distance.

44 Two Methods
Tenenbaum et al.'s Isomap algorithm:
– Global approach: in the low-dimensional embedding, nearby points should stay nearby and faraway points should stay faraway
Roweis and Saul's Locally Linear Embedding (LLE) algorithm:
– Local approach: only nearby points need to stay nearby

45 Isomap
Estimate the geodesic distance between faraway points:
– For neighboring points, the Euclidean distance is a good approximation to the geodesic distance.
– For faraway points, estimate the distance by a series of short hops between neighboring points, i.e., find shortest paths in a graph whose edges connect neighboring data points.
Once we have all pairwise geodesic distances, use classical metric MDS.

46 Isomap - Algorithm
Determine the neighbors:
– all points within a fixed radius, or
– the K nearest neighbors
Construct a neighborhood graph:
– each point is connected to another point if it is one of its K nearest neighbors
– edge length equals the Euclidean distance
Compute the shortest paths between all pairs of nodes:
– Floyd's algorithm
– Dijkstra's algorithm
Construct a lower-dimensional embedding:
– classical MDS
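
For reference, this pipeline is available off the shelf; the sketch below (assuming scikit-learn is acceptable here, not part of the original slides) runs Isomap on the classic Swiss-roll dataset.

```python
# Minimal sketch (assumption): the Isomap steps above via scikit-learn.
# Isomap builds the K-nearest-neighbor graph, computes graph shortest paths,
# then embeds the resulting geodesic distances.
from sklearn.datasets import make_swiss_roll
from sklearn.manifold import Isomap

X, _ = make_swiss_roll(n_samples=1000, random_state=0)   # 3D points on a rolled 2D sheet
embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
print(embedding.shape)   # (1000, 2): the unrolled 2D coordinates
```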

47 Isomap

48 Observations

49 Overview of Traffic Analysis

50 Traffic Samples from Internet2

51 Packet Trains and Autocorrelation

52 Observation #1
The traffic model that you use is extremely important in the performance evaluation of routing, flow control, and congestion control strategies.
– Have to consider application-dependent, protocol-dependent, and network-dependent characteristics
– The more realistic, the better

53 Observation #2
Characterizing aggregate network traffic is hard.
– Lots of (diverse) applications
– Just a snapshot: traffic mix, protocols, applications, network configuration, technology, and users change with time

54 Observation #3
The packet arrival process is not Poisson.
– Packets travel in trains
– Packets travel in tandems
– Packets get clumped together (ACK compression)
– Interarrival times are not exponential
– Interarrival times are not independent

55 Observation #4
Packet traffic is bursty.
– Average utilization may be very low
– Peak utilization can be very high
– Measured burstiness depends on what interval you use!
– Traffic may be self-similar: bursts exist across a wide range of time scales
– Defining burstiness (precisely) is difficult
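
A small worked example (synthetic numbers, not measurement data) of why burstiness depends on the measurement interval: the same traffic looks like 1 Gbps at a 1 ms timescale but only 1 Mbps when averaged over a second.

```python
# Minimal sketch (assumption): how measured peak utilization depends on the
# averaging interval, given a byte-count time series such as the one produced
# by the earlier bin_trace sketch.
def peak_and_average_rate(byte_counts, T):
    """byte_counts: bytes observed in consecutive bins of width T seconds.
    Returns (peak_rate, average_rate) in bits per second."""
    rates = [8 * b / T for b in byte_counts]
    return max(rates), sum(rates) / len(rates)

# The same bursty traffic at two timescales: one busy millisecond
# inside an otherwise idle second.
per_ms = [125000 if i == 0 else 0 for i in range(1000)]   # bytes per 1 ms bin
per_s = [sum(per_ms)]                                     # bytes per 1 s bin

print(peak_and_average_rate(per_ms, 0.001))   # peak 1e9 bps, average 1e6 bps
print(peak_and_average_rate(per_s, 1.0))      # peak == average == 1e6 bps
```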

56 Observation #5
Traffic is non-uniformly distributed amongst the hosts on the network.
– Example: 10% of the hosts account for 90% of the traffic (or 20/80)
– Why? Clients versus servers, geographic reasons, popular FTP sites, web sites, etc.

57 Observation #6
Network traffic exhibits "locality" effects.
– The pattern is far from random
– Temporal locality
– Spatial locality
– Persistence and concentration
– True at the host level, the gateway level, and the application level

58 Observation #7
Well over 90% of the byte and packet traffic on most networks is TCP/IP.
– By far the most prevalent
– Often as high as 95-99%
– Most studies focus only on TCP/IP for this reason

59 Observation #8
Most conversations are short.
– Example: 90% of bulk data transfers send less than 10 kilobytes of data
– Example: 50% of interactive connections last less than 90 seconds
– Distributions may be "heavy tailed", i.e., extreme values may skew the mean and/or the distribution
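
A tiny arithmetic illustration (made-up sizes, not measured data) of the heavy-tail point: with 95 small transfers and 5 very large ones, the median stays at the typical small size while the mean is dragged three orders of magnitude higher.

```python
# Minimal sketch (assumption): how a heavy tail skews the mean.
import statistics

sizes_kb = [5] * 95 + [100000] * 5       # 95 small transfers, 5 elephant flows

print(statistics.median(sizes_kb))        # 5 KB: the "typical" transfer
print(statistics.mean(sizes_kb))          # 5004.75 KB: skewed by the tail
print(sum(sorted(sizes_kb)[-5:]) / sum(sizes_kb))  # top 5% carry ~99.9% of bytes
```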

60 Observation #9
Traffic is bidirectional.
– Data usually flows both ways
– Not just ACKs in the reverse direction
– Usually asymmetric bandwidth, though
– Pretty much what you would expect from TCP/IP traffic for most applications

61 Observation #10
The packet size distribution is bimodal.
– Lots of small packets for interactive traffic and acknowledgements
– Lots of large packets for bulk data transfer applications
– Very few in-between sizes

62 Network Security Monitoring and Analysis Based on Big Data Technologies (Bingdong Li)

63 Objectives
A network security monitoring and analysis system based on Big Data technologies that:
– measures the network
– provides real-time, continuous monitoring and interactive visualization
– performs intelligent network object classification and identification, using role behavior as context

64 Objectives
(figure: Big Data, Machine Learning, and Network Security)

66 System Design: Data Collection

67 System Design: Online Real-Time Process

68 System Design: NoSQL Storage

69 System Design: User Interfaces

71 Monitoring and Visualization
– Real-time: response within a time constraint
– Interactive: involves user interaction
– Continuous: "continue to be effective over time in light of the inevitable changes that occur" (NIST)

72 Network Status

73 Top N

74 Hadoop (Dhruba Borthakur)

75 Hadoop, Why?
– Need to process multi-petabyte datasets
– Expensive to build reliability into each application
– Nodes fail every day: failure is expected rather than exceptional, and the number of nodes in a cluster is not constant
– Need common infrastructure: efficient, reliable, open source (Apache License)
– The above goals are the same as Condor's, but the workloads are I/O bound, not CPU bound

76 Commodity Hardware
Typically a 2-level architecture:
– Nodes are commodity PCs
– 30-40 nodes per rack
– Uplink from the rack is 3-4 gigabit
– Rack-internal links are 1 gigabit

77 Goals of HDFS
Very large distributed file system:
– 10K nodes, 100 million files, 10 PB
Assumes commodity hardware:
– Files are replicated to handle hardware failure
– Detects failures and recovers from them
Optimized for batch processing:
– Data locations are exposed so that computations can move to where the data resides
– Provides very high aggregate bandwidth
Runs in user space on heterogeneous operating systems

78 HDFS Architecture
(figure: a Client sends a filename to the NameNode (1), receives block IDs and DataNode locations (2), and reads the data directly from the DataNodes (3); the Secondary NameNode and cluster membership are shown alongside)
– NameNode: maps a file to a file ID and a list of DataNodes
– DataNode: maps a block ID to a physical location on disk
– SecondaryNameNode: periodic merge of the transaction log

79 Distributed File System
Single namespace for the entire cluster
Data coherency:
– Write-once-read-many access model
– Clients can only append to existing files
Files are broken up into blocks:
– Typically 128 MB block size
– Each block is replicated on multiple DataNodes
Intelligent client:
– The client can find the location of blocks
– The client accesses data directly from the DataNode

81 NameNode Metadata
Metadata in memory:
– The entire metadata is kept in main memory
– No demand paging of metadata
Types of metadata:
– List of files
– List of blocks for each file
– List of DataNodes for each block
– File attributes, e.g., creation time, replication factor
A transaction log:
– Records file creations, file deletions, etc.

82 DataNode
A block server:
– Stores data in the local file system (e.g., ext3)
– Stores metadata of a block (e.g., CRC)
– Serves data and metadata to clients
Block report:
– Periodically sends a report of all existing blocks to the NameNode
Facilitates pipelining of data:
– Forwards data to other specified DataNodes

83 Data Flow
(figure: data flow among Web Servers, Scribe Servers, Network Storage, the Hadoop Cluster, Oracle RAC, and MySQL)

