
1 AMCS/CS 340: Data Mining
Mining Data Streams
Xiangliang Zhang, King Abdullah University of Science and Technology

2 Outline
Introduction of Data Streams
Synopsis/sketch maintenance
 - Sampling
 - Sliding window
 - Counting Distinct Elements
 - Frequent pattern mining
Stream Clustering
Stream Classification
Change and novelty detection

3 Motivations
A large number of applications generate data streams:
 – Telecommunication (call records)
 – System management (network events)
 – Surveillance (sensor network, audio/video)
 – Financial market (stock exchange)
 – Day to day business (credit card, ATM transactions, etc.)
Tasks: real-time query answering, statistics maintenance, and pattern discovery on data streams

4 Characteristics of Data Streams
 - High volume (possibly infinite) of continuous data
 - Data arrive at a rapid rate
 - Data distribution changes on the fly
 - The system cannot store the entire stream (only a summary of the data seen so far)
 - Computations over the stream must be done in a limited amount of (secondary) memory
In short, data streams are continuous, ordered, changing, fast, and huge in volume.

5 Example: Network Management Application
IP session data (collected using Cisco NetFlow). AT&T collects 100 GBs of NetFlow data each day!
[Figure: network measurements flow into a Network Operations Center, which raises alarms; a sample table of flow records has columns Source, Destination, Duration, Bytes, Protocol (http, ftp).]

6 Example: Network Management Application
Data stream processing in network management:
- Monitor link bandwidth usage, estimate traffic demands
  - How many bytes were sent between a pair of IP addresses?
  - What fraction of network IP addresses are active?
  - List the top 100 IP addresses in terms of traffic
- Quickly detect faults and congestion, and isolate root causes
  - List all sessions that transmitted more than 1000 bytes
  - Identify all sessions whose duration was more than twice the normal
  - List all IP addresses that have witnessed a sudden spike in traffic
- Load balancing, improving utilization of network resources

7 More Applications
Mining query streams
 - Google wants to know what queries are more frequent today than yesterday.
Mining click streams
 - Yahoo wants to know which of its pages are getting an unusual number of hits in the past hour.
Mining social network news feeds
 - Look for trending topics on Twitter and Facebook.
Mining call records
 - Summarize telephone call records into customer bills.

8 Stream Processing Requirements
[Figure: several streams (e.g., numbers, characters, bits) enter a processor with limited storage; over time it answers queries and produces statistics, classification, and clustering output.]
- Single pass: each record is examined at most once
- Bounded storage: limited memory (M) for storing a synopsis
- Real-time: per-record processing time (to maintain the synopsis) must be low

9 Outline
Introduction of Data Streams
Synopsis/sketch maintenance
 - Sampling
 - Sliding window
 - Counting Distinct Elements
 - Frequent pattern mining
Stream Clustering
Stream Classification
Change and novelty detection

10 Sampling from a Data Stream
We cannot store the entire stream, so store a sample instead.
Two different problems:
- Sample a fixed proportion of elements in the stream (say 1 in 10)
- Maintain a random sample of fixed size over a potentially infinite stream
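
The second problem is commonly solved with reservoir sampling. A minimal Python sketch, assuming a generic iterable as the stream (the function name and the usage values are illustrative, not from the slides):

import random

def reservoir_sample(stream, k):
    """Maintain a uniform random sample of k elements from a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir with the first k items
        else:
            j = random.randint(0, i)    # pick a slot in [0, i]
            if j < k:
                reservoir[j] = item     # keep the new item with probability k / (i + 1)
    return reservoir

# Usage: a uniform sample of 5 elements from a finite stand-in for a stream
print(reservoir_sample(range(10000), 5))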

11 Sliding Windows
A useful model of stream processing is that queries are about a window of length N: the N most recent elements received.
[Figure: a stream of characters with the most recent N elements forming the window; older elements lie in the past, new elements arrive from the future.]
Maintaining statistics over the window:
 – Count/sum of non-zero elements
 – Variance
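
A minimal sketch of exact count/sum maintenance over the last N elements, using O(N) memory; for very large N the references at the end of these slides (e.g., Datar et al., SODA 2002) cover approximate sketches that use far less space. The class name and the usage numbers are illustrative:

from collections import deque

class SlidingWindowStats:
    """Exact count and sum of non-zero elements over the last N stream items."""
    def __init__(self, n):
        self.window = deque(maxlen=n)
        self.nonzero_count = 0
        self.total = 0

    def add(self, x):
        if len(self.window) == self.window.maxlen:   # the oldest element falls out
            old = self.window[0]
            self.total -= old
            self.nonzero_count -= (old != 0)
        self.window.append(x)
        self.total += x
        self.nonzero_count += (x != 0)

# Usage
stats = SlidingWindowStats(n=4)
for x in [1, 0, 2, 0, 3, 5]:
    stats.add(x)
print(stats.nonzero_count, stats.total)   # statistics over the last 4 elements: 3 10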

12 Counting Distinct Elements
Problem: a data stream consists of elements chosen from a set of size n. Maintain a count of the number of distinct elements seen so far.
Obvious approach: maintain the set of elements seen.
Applications:
- How many different Web pages does each customer request in a week?
- How many different words are found among the Web pages being crawled at a site? Unusually low or high numbers could indicate artificial pages (spam made to influence search engine rankings).

13 Using Small Storage
Real problem: what if we do not have space to store the complete set?
- Estimate the count in an unbiased way.
- Accept that the count may be in error, but limit the probability that the error is large.
Flajolet-Martin approach [FM85]
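
A simplified single-hash sketch of the Flajolet-Martin idea: hash each element, track the largest number of trailing zero bits R seen, and estimate the number of distinct elements as roughly 2^R. Practical implementations combine many independent hash functions (e.g., averaging groups and taking a median); the helper names below are illustrative:

import hashlib

def trailing_zeros(x):
    """Number of trailing zero bits of x (0 if x is 0, to keep the sketch simple)."""
    count = 0
    while x > 0 and x & 1 == 0:
        x >>= 1
        count += 1
    return count

def fm_estimate(stream, salt=""):
    """Single-hash Flajolet-Martin style estimate of the distinct-element count."""
    max_r = 0
    for item in stream:
        h = int(hashlib.md5((salt + str(item)).encode()).hexdigest(), 16)
        max_r = max(max_r, trailing_zeros(h))
    return 2 ** max_r

# Usage: a stream with 1000 distinct values, each repeated 5 times
stream = [i % 1000 for i in range(5000)]
print(fm_estimate(stream))   # a rough power-of-two estimate near 1000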

14 Frequent Pattern Mining
Frequent patterns: patterns (sets of items, sequences, etc.) that occur frequently in a database [AIS93]
Frequent pattern mining: finding regularities in data
 – What products were often purchased together?
 – What are the subsequent purchases after buying a PC?
 – What kinds of DNA are sensitive to this new drug?
 – Can we classify web documents based on key-word combinations?
Apriori algorithm

15 Frequent Patterns in Data Streams – Challenges
Maintaining exact counts for all (frequent) itemsets requires multiple scans of the stream
 – so maintain approximate counts instead
Finding the exact set of frequent itemsets from a data stream cannot be done online
 – it would require scanning the stream multiple times and incurs a large space overhead
 – so find an approximation of the set of frequent itemsets

16 Mining Approximate Frequent Patterns
Approximate answers are often sufficient (e.g., trend/pattern analysis)
Example: a router is interested in all flows
- whose frequency is at least σ (e.g., 10%) of the entire traffic stream seen so far,
- and can tolerate an error of 1/10 of σ (ε = 0.1σ)
How to mine frequent patterns with a good approximation?
- Lossy Counting algorithm (Manku & Motwani, VLDB'02)
- Main idea: do not track an item until it could be frequent
- Advantage: guaranteed error bound
- Disadvantage: keeps a large set of tracked entries

17 Lossy Counting – Ideas
Divide the stream into buckets and maintain a global count of buckets seen so far.
For any item, if its count is less than the global count of buckets, its count does not need to be maintained.
 – How to divide buckets so that the possible errors are bounded?
 – How to guarantee that the number of entries to be recorded is also bounded?

18 Lossy Counting for Frequent Items
Divide the stream into buckets (bucket size is 1/ε = 10).
[Figure: the stream split into Bucket 1, Bucket 2, Bucket 3, ...]

19 First Bucket of the Stream
[Figure: starting from an empty summary, add the item counts of the first bucket; at the end of the bucket, decrease all counters by 1.]

20 Next Bucket of the Stream
[Figure: add the item counts of the next bucket to the summary; at the end of the bucket, again decrease all counters by 1.]

21 Approximation Guarantee
Given: (1) support threshold σ, (2) error threshold ε, and (3) stream length N
Output: items with frequency counts exceeding (σ – ε)N
How much do we undercount? If the stream length seen so far is N and the bucket size is 1/ε, then the frequency count error ≤ #buckets = εN.
Approximation guarantee:
- No false negatives (no frequent item goes unreported)
- False positives have a true frequency count of at least (σ – ε)N
- Frequency counts are underestimated by at most εN
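
A minimal Python sketch of Lossy Counting for single items, following the bucket scheme above; the dictionary layout and the usage data are illustrative, and the full algorithm for itemsets (rather than single items) needs more bookkeeping:

import math

def lossy_counting(stream, epsilon):
    """Lossy Counting for single items: process the stream in buckets of width
    1/epsilon; at each bucket boundary, drop entries whose count plus maximum
    error no longer exceeds the current bucket id."""
    bucket_width = math.ceil(1 / epsilon)
    counts = {}   # item -> (count, max_error), max_error = bucket id at insertion - 1
    n = 0
    for item in stream:
        n += 1
        current_bucket = math.ceil(n / bucket_width)
        if item in counts:
            c, delta = counts[item]
            counts[item] = (c + 1, delta)
        else:
            counts[item] = (1, current_bucket - 1)
        if n % bucket_width == 0:   # end of a bucket: prune small entries
            counts = {k: (c, d) for k, (c, d) in counts.items()
                      if c + d > current_bucket}
    return counts, n

def frequent_items(counts, n, sigma, epsilon):
    """Report items whose maintained count exceeds (sigma - epsilon) * n."""
    return [k for k, (c, d) in counts.items() if c >= (sigma - epsilon) * n]

# Usage: sigma = 10% support, epsilon = 1% error
stream = ['a'] * 500 + ['b'] * 300 + list('cdefghij') * 25
counts, n = lossy_counting(stream, epsilon=0.01)
print(frequent_items(counts, n, sigma=0.10, epsilon=0.01))   # expect ['a', 'b']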

22 Outline
Introduction of Data Streams
Synopsis/sketch maintenance
 - Sampling
 - Sliding window
 - Counting Distinct Elements
 - Frequent pattern mining
Stream Clustering
Stream Classification
Change and novelty detection

23 Clustering Data Streams

24 CluStream: Clustering On-line Streams

25 CluStream: Clustering On-line Streams
Online micro-cluster maintenance
 - Initial creation of q micro-clusters (q is usually significantly larger than the number of natural clusters)
 - Online incremental update of micro-clusters:
   - If a new point is within the max-boundary of a micro-cluster, insert it into that micro-cluster
   - Otherwise, create a new micro-cluster
   - May delete an obsolete micro-cluster or merge the two closest ones
Offline query-based macro-clustering
 - Based on a user-specified time horizon h and the number of macro-clusters K, compute macro-clusters using a clustering algorithm, e.g., k-means or DBSCAN
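
A minimal Python sketch of the online micro-cluster update, keeping only the basic cluster-feature statistics (count, linear sum, squared sum). The class layout, the fixed min_radius fallback for singleton clusters, and the boundary factor are simplifying assumptions; full CluStream also maintains time statistics and the deletion/merging of micro-clusters:

import numpy as np

class MicroCluster:
    """Simplified cluster feature: count n, linear sum ls, squared sum ss."""
    def __init__(self, point):
        self.n = 1
        self.ls = np.array(point, dtype=float)
        self.ss = self.ls ** 2

    def centroid(self):
        return self.ls / self.n

    def radius(self):
        # RMS deviation of the points from the centroid
        var = self.ss / self.n - self.centroid() ** 2
        return float(np.sqrt(max(var.sum(), 0.0)))

    def absorb(self, point):
        p = np.array(point, dtype=float)
        self.n += 1
        self.ls += p
        self.ss += p ** 2

def update(micro_clusters, point, boundary_factor=2.0, min_radius=0.5):
    """Insert the point into the nearest micro-cluster if it lies within its
    max-boundary (a multiple of the cluster radius); otherwise start a new one.
    Singleton clusters have radius 0, so fall back to min_radius (CluStream
    instead uses the distance to the closest other micro-cluster)."""
    if micro_clusters:
        nearest = min(micro_clusters,
                      key=lambda mc: np.linalg.norm(mc.centroid() - point))
        dist = np.linalg.norm(nearest.centroid() - point)
        if dist <= boundary_factor * max(nearest.radius(), min_radius):
            nearest.absorb(point)
            return
    micro_clusters.append(MicroCluster(point))

# Usage
mcs = []
for p in [[0, 0], [0.1, 0.2], [5, 5], [5.1, 4.9], [0.05, 0.1]]:
    update(mcs, np.array(p))
print(len(mcs), [mc.n for mc in mcs])   # two micro-clusters: 2 [3, 2]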

26–32 Clustering Streams, Model + Reservoir
[Figure/animation slides illustrating the model + reservoir approach to stream clustering; no text content beyond the title.]

33 Summary – Clustering
Clustering a data stream with one scan and limited main memory
 – Clustering in a sliding window
 – Clustering the whole stream (online)
How to handle evolving data?
 – Online summarization and offline analysis
 – Change detection
Applications and extensions
 – Outlier detection, nearest neighbor search, reverse nearest neighbor queries, ...

34 Outline
Introduction of Data Streams
Synopsis/sketch maintenance
 - Sampling
 - Sliding window
 - Counting Distinct Elements
 - Frequent pattern mining
Stream Clustering
Stream Classification
Change and novelty detection

35 Classification for Dynamic Data Streams
Decision tree induction for stream data classification
 - VFDT (Very Fast Decision Tree) / CVFDT (Domingos, Hulten, Spencer, KDD'00/KDD'01)
 - Is a decision tree good for modeling fast-changing data, e.g., stock market analysis?
Other stream classification methods
 - Instead of decision trees, consider other models:
   - Naïve Bayesian
   - Ensembles (Wang, Fan, Yu, Han, KDD'03)
   - k-nearest neighbors (Aggarwal, Han, Wang, Yu, KDD'04)
 - Incremental updating, dynamic maintenance, and model construction

36 What are the Challenges?
Data volume
 – impossible to mine the entire data at one time
 – can only afford constant memory per data sample
Concept drift
 – previously learned models are invalid
Cost of learning
 – model updates can be costly
 – can only afford constant time per data sample

37 The Decision Tree Classifier
Learning (training):
 – Input: a data set of (X, y), where X is a feature vector and y a class label
 – Output: a model (decision tree)
Testing:
 – Input: a test sample (x, ?)
 – Output: a class label prediction for x

38 The Decision Tree Classifier
A divide-and-conquer approach
 – Simple algorithm, intuitive model
Compute information gain for the data in each node
 – Super-linear complexity
Typically a decision tree grows one level for each scan of the data
 – Multiple scans are required
The data structure is not 'stable'
 – Subtle changes of data can cause global changes in the data structure

39 Challenge for Streams
Task:
 – Given enough samples, can we build a tree in constant time that is nearly identical to the tree a batch learner (C4.5, SPRINT, etc.) would build?
Intuition:
 – With an increasing number of samples, the number of possible decision trees becomes smaller
Forget about concept drift for now.

40 Decision-Tree Induction with Data Streams
[Figure: a decision tree grown over a data stream, with splits such as "Packets > 10" and "Bytes > 60K" and leaves labeled "Protocol = http" / "Protocol = ftp". Ack.: from Gehrke's SIGMOD tutorial slides.]
- At each node, we accumulate enough samples (n) before we make a split
- Problem: how many examples are necessary? n = ?

41 Hoeffding Bound
Given
 – r: a real-valued random variable
 – n: the number of independent observations of r
 – R: the range of r
The difference between the true mean μ_r and the observed average r_avg is bounded by ε with probability 1 - δ:
P(|μ_r - r_avg| ≥ ε) ≤ δ, where ε = √( R² ln(1/δ) / (2n) )

42 Hoeffding Bound
P(|μ_r - r_avg| ≥ ε) ≤ δ, with ε = √( R² ln(1/δ) / (2n) )
Properties:
 – The Hoeffding bound is independent of the data distribution
 – The error ε decreases as n (the number of samples) increases
Hoeffding Tree: based on the Hoeffding bound principle
 - At each node, accumulate enough samples (n) before making a split
 - When n is large enough, the error ε decreases to a small value
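
A short helper as a sanity check on the bound; the choice R = 1 (e.g., a binary-class information gain measured in bits) and the confidence level are illustrative:

import math

def hoeffding_epsilon(value_range, delta, n):
    """Hoeffding bound: with probability 1 - delta, the average of n independent
    observations of a variable with range value_range is within epsilon of its true mean."""
    return math.sqrt((value_range ** 2) * math.log(1.0 / delta) / (2.0 * n))

# Usage: R = 1, 99% confidence (delta = 0.01)
for n in (100, 1000, 10000):
    print(n, round(hoeffding_epsilon(1.0, 0.01, n), 4))
# epsilon shrinks as n grows: 0.1517, 0.048, 0.0152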

43 Hoeffding Tree Algorithm
Input
 - S: sequence of examples
 - X: attributes
 - G( ): evaluation function, e.g., Gini gain
Algorithm
 - for each example in S:
     retrieve G(X_a) and G(X_b)      // the two attributes with the highest G(X_i)
     if G(X_a) – G(X_b) > ε          // ε is computed using the Hoeffding bound
         split on X_a
         recursively go to the next node
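
A minimal Python sketch of just the leaf split test (not the full VFDT algorithm): compute an evaluation function for each candidate attribute from the class counts accumulated at the leaf, and split once the best attribute beats the second best by more than the Hoeffding ε. The statistics layout and the example numbers are made up for illustration:

import math

def entropy(class_counts):
    """Entropy (in bits) of a dict {class_label: count}."""
    total = sum(class_counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in class_counts.values() if c > 0)

def info_gain(parent_counts, counts_by_value):
    """Information gain of splitting on one attribute.
    counts_by_value: {attribute_value: {class_label: count}}."""
    total = sum(parent_counts.values())
    remainder = sum(sum(cc.values()) / total * entropy(cc)
                    for cc in counts_by_value.values())
    return entropy(parent_counts) - remainder

def should_split(gains, n, delta=1e-7, value_range=1.0):
    """Hoeffding test at a leaf: split once the best gain beats the second best
    by more than epsilon, computed from the n examples seen at this leaf."""
    best, second = sorted(gains.values(), reverse=True)[:2]
    epsilon = math.sqrt(value_range ** 2 * math.log(1 / delta) / (2 * n))
    return (best - second) > epsilon

# Usage with made-up leaf statistics for two candidate attributes
parent = {'http': 60, 'ftp': 40}
stats = {
    'packets>10': {'yes': {'http': 55, 'ftp': 5}, 'no': {'http': 5, 'ftp': 35}},
    'bytes>60K':  {'yes': {'http': 35, 'ftp': 25}, 'no': {'http': 25, 'ftp': 15}},
}
gains = {a: info_gain(parent, v) for a, v in stats.items()}
print(gains, should_split(gains, n=100))   # the clear winner triggers a split: True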

44 Hoeffding Tree: Pros and Cons
Pros: scales better than traditional decision-tree algorithms
 – Incremental
 – Sub-linear with sampling
 – Small memory requirement
Cons:
 – Only considers the top 2 attributes
 – Tie breaking takes time
 – Growing a deep tree takes time

45 Outline
Introduction of Data Streams
Synopsis/sketch maintenance
 - Sampling
 - Sliding window
 - Counting Distinct Elements
 - Frequent pattern mining
Stream Clustering
Stream Classification
Change and novelty detection

46 Change Detection
General idea: compare a reference distribution with the distribution of a current window of events.
The Kullback-Leibler distance can be used to measure the difference between two given distributions [Dasu et al., 2006].
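
A minimal Python sketch of this idea (not the bootstrap procedure of Dasu et al.): histogram the reference data and the current window over shared bins and compute the KL divergence of the window from the reference. The bin count, smoothing, and usage data are illustrative:

import numpy as np

def kl_divergence(p_counts, q_counts, eps=1e-9):
    """KL(P || Q) for two discrete distributions given as count vectors."""
    p = p_counts / p_counts.sum()
    q = q_counts / q_counts.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def change_score(reference, window, bins=20):
    """Histogram both samples over bins fitted to the reference and measure how
    far the current window's distribution drifts from the reference."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_hist, _ = np.histogram(reference, bins=edges)
    win_hist, _ = np.histogram(window, bins=edges)
    # add-one smoothing so empty bins do not blow up the divergence
    return kl_divergence(win_hist + 1.0, ref_hist + 1.0)

# Usage: the reference is N(0, 1); the second window has drifted to N(2, 1)
rng = np.random.default_rng(0)
reference = rng.normal(0, 1, 5000)
print(change_score(reference, rng.normal(0, 1, 1000)))   # small score: no change
print(change_score(reference, rng.normal(2, 1, 1000)))   # much larger score: change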

47 Change Detection
General idea: compare a reference distribution with the distribution of a current window of events.
A density test can be used to check whether the newly observed data points S0 are sampled from the underlying distribution that produced the baseline data set S [Song et al., 2007].

48 Issue of the Reference Window in Change Detection
General idea: compare a reference distribution with the distribution of a current window of events.
This assumes stationary reference data. What if the underlying distribution is not stationary? E.g., in network intrusion detection by monitoring network traffic, the distribution of the reference data (usually normal data) evolves over time.

49 Summary: Stream Data Mining
Stream data mining: a rich and ongoing research field
- Research in the database community:
  - DSMS system architecture, continuous query processing, supporting mechanisms
- Stream data mining:
  - Powerful tools for finding general and unusual patterns
  - Effectiveness, efficiency and scalability: lots of open problems

50 References on Stream Data Mining (1)
C. Aggarwal, J. Han, J. Wang, P. S. Yu. A Framework for Clustering Data Streams. VLDB'03
C. C. Aggarwal, J. Han, J. Wang, P. S. Yu. On-Demand Classification of Evolving Data Streams. KDD'04
C. Aggarwal, J. Han, J. Wang, P. S. Yu. A Framework for Projected Clustering of High Dimensional Data Streams. VLDB'04
S. Babu, J. Widom. Continuous Queries over Data Streams. SIGMOD Record, 2001
B. Babcock, S. Babu, M. Datar, R. Motwani, J. Widom. Models and Issues in Data Stream Systems. PODS'02 (conference tutorial)
Y. Chen, G. Dong, J. Han, B. W. Wah, J. Wang. Multi-Dimensional Regression Analysis of Time-Series Data Streams. VLDB'02
P. Domingos, G. Hulten. Mining High-Speed Data Streams. KDD'00
A. Dobra, M. N. Garofalakis, J. Gehrke, R. Rastogi. Processing Complex Aggregate Queries over Data Streams. SIGMOD'02
J. Gehrke, F. Korn, D. Srivastava. On Computing Correlated Aggregates over Continuous Data Streams. SIGMOD'01
C. Giannella, J. Han, J. Pei, X. Yan, P. S. Yu. Mining Frequent Patterns in Data Streams at Multiple Time Granularities. In Kargupta et al. (eds.), Next Generation Data Mining, 2002
M. Datar, A. Gionis, P. Indyk, R. Motwani. Maintaining Stream Statistics over Sliding Windows. SODA 2002

51 References on Stream Data Mining (2)
S. Guha, N. Mishra, R. Motwani, L. O'Callaghan. Clustering Data Streams. FOCS'00
G. Hulten, L. Spencer, P. Domingos. Mining Time-Changing Data Streams. KDD'01
S. Madden, M. Shah, J. Hellerstein, V. Raman. Continuously Adaptive Continuous Queries over Streams. SIGMOD'02
G. Manku, R. Motwani. Approximate Frequency Counts over Data Streams. VLDB'02
A. Metwally, D. Agrawal, A. El Abbadi. Efficient Computation of Frequent and Top-k Elements in Data Streams. ICDT'05
S. Muthukrishnan. Data Streams: Algorithms and Applications. SODA 2003
R. Motwani, P. Raghavan. Randomized Algorithms. Cambridge Univ. Press, 1995
S. Viglas, J. Naughton. Rate-Based Query Optimization for Streaming Information Sources. SIGMOD'02
Y. Zhu, D. Shasha. StatStream: Statistical Monitoring of Thousands of Data Streams in Real Time. VLDB'02
H. Wang, W. Fan, P. S. Yu, J. Han. Mining Concept-Drifting Data Streams Using Ensemble Classifiers. KDD'03

52 Acknowledgements
Some of the material is borrowed from lectures of:
 - Jiawei Han and Micheline Kamber
 - Minos Garofalakis, Johannes Gehrke and Rajeev Rastogi
 - Jure Leskovec and Anand Rajaraman
 - Haixun Wang, Jian Pei, and Philip S. Yu

