1 Fast Calculations of Simple Primitives in Time Series
Dennis Shasha, Department of Computer Science, Courant Institute of Mathematical Sciences, New York University
Joint work with Richard Cole, Xiaojian Zhao (correlation), Zhihua Wang (humming), Yunyue Zhu (both), and Tyler Neylon (SVDs, trajectories)

2 Roadmap
Section 1: Motivation
Section 2: StatStream: A Fast Sliding-Window-Based Correlation Detector
  Problem Statement
  Cooperative and Uncooperative Time Series
  Algorithmic Framework
  DFT-Based Scheme and Random Projection
  Combinatorial Design and Bootstrapping
  Empirical Study
Section 3: Elastic Burst Detection
  Problem Statement
  Challenge
  Shifted Binary Tree
  Astrophysical Application

3 Overall Motivation
Financial time series streams are watched closely by millions of traders. What exactly do they look for, and how can we help them do it faster? Typical query: which pairs of stocks had highly correlated returns over the last three hours?
Physicists study the time series emerging from their sensors. Typical query: do there exist bursts of gamma rays in windows of any size from 8 milliseconds to 4 hours?
Musicians produce time series. Typical query: even though I can't hum well, please find this song. I want the CD.

4 Why Speed Is Important
As processors speed up, algorithmic efficiency no longer matters... one might think. True if problem sizes stay the same, but they don't. As processors speed up, sensors improve: satellites spew out a terabyte a day, magnetic resonance imagers give higher-resolution images, etc. And there is a desire for real-time response to queries.

5 Surprise, Surprise
More data, real-time response, and the increasing importance of correlation IMPLY that efficient algorithms and data management are more important than ever!

6 Section 2: StatStream: A Fast Sliding-Window-Based Correlation Detector

7 Scenario
Stock price streams: the New York Stock Exchange (NYSE) has 50,000 securities (streams) and 100,000 ticks (trade and quote).
Pairs trading, a.k.a. correlation trading. Query: which pairs of stocks were correlated with a value of over 0.9 for the last three hours?
"XYZ and ABC have been correlated with a correlation of 0.95 for the last three hours. Now XYZ and ABC become less correlated as XYZ goes up and ABC goes down. They should converge back later. I will sell XYZ and buy ABC..."

8 Motivation: Online Detection of High Correlation (figure: two price streams becoming highly correlated)

9 Problem Statement
Synchronous time series window correlation: given Ns streams, a start time t_start, and a window size w, find, for each time window W of size w, all pairs of streams S1 and S2 such that S1 during time window W is highly correlated with S2 during the same time window. (Possible time windows are [t_start, t_start + w - 1], [t_start + 1, t_start + w], ..., where t_start is some start time.)
Asynchronous correlation: allow shifts in time. That is, given Ns streams and a window size w, find all time windows W1 and W2 with |W1| = |W2| = w and all pairs of streams S1 and S2 such that S1 during W1 is highly correlated with S2 during W2.

10 Cooperative and Uncooperative Time Series
Cooperative time series exhibit a fundamental degree of regularity, at least over the short term. They allow long time series to be compressed to a few coefficients with little loss of information using data-reduction techniques such as Fourier transforms and wavelet transforms. Example: stock price time series.
Uncooperative time series lack such regularities; they resemble noise. Example: stock return time series (difference in price / average price).

11 Algorithmic Framework
Basic definitions:
Timepoint: the smallest unit of time over which the system collects data, e.g., a second.
Basic window: a consecutive subsequence of timepoints over which the system maintains a digest (i.e., a compressed representation) and returns results, e.g., two minutes.
Sliding window: a user-defined consecutive subsequence of basic windows over which the user wants statistics, e.g., an hour. The user might ask, "Which pairs of streams were correlated with a value of over 0.9 for the last hour?" and then ask again two minutes later.

12 Definitions: Sliding Window and Basic Window (figure: stocks 1..n along a time axis; sliding window size = 8, basic window size = 2; each sliding window is a sequence of basic windows, each basic window a sequence of timepoints)

13 Algorithmic Strategy (cooperative case) (figure: dimensionality reduction (DFT, DWT, SVD) maps time series 1..n to digests 1..n; the digests go into a grid structure, which yields the correlated pairs)

14 GEMINI Framework (Faloutsos et al.)
The transformation ideally has the lower-bounding property.

15 DFT-Based Scheme* (figure: each basic window is digested by the sums of its DFT coefficients; basic-window digests combine into the sliding-window digest)
*D. Shasha and Y. Zhu. High Performance Discovery in Time Series: Techniques and Case Studies. Springer, 2004.

16 Incremental Processing
Compute the DFT one basic window at a time, then add (with angular shifts) to get a DFT for the whole sliding window. The time is just the DFT time for a basic window plus time proportional to the number of DFT components we need.
Using the first few DFT coefficients for the whole sliding window, represent the sliding window by a point in a grid structure. We end up having to compare very few time windows, so a potentially quadratic comparison problem becomes linear in practice.
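
The slide combines per-basic-window DFTs with angular (phase) shifts; the closely related per-point "sliding DFT" below illustrates the same idea: shifting the window in time multiplies each DFT coefficient by a phase factor, so the digest can be updated incrementally instead of recomputed. A minimal sketch; the window length (16), series length (64), and number of retained coefficients (4) are illustrative assumptions, not values from the talk.

```python
import numpy as np

def sliding_dft_update(X, x_old, x_new, n):
    """Update the first len(X) DFT coefficients of a length-n window when
    the oldest point x_old leaves and the newest point x_new arrives."""
    k = np.arange(len(X))
    return (X - x_old + x_new) * np.exp(2j * np.pi * k / n)

rng = np.random.default_rng(0)
series = rng.standard_normal(64)
n, m = 16, 4                        # window length, coefficients kept
X = np.fft.fft(series[:n])[:m]      # digest of the first window
for t in range(n, 32):              # slide the window one point at a time
    X = sliding_dft_update(X, series[t - n], series[t], n)
direct = np.fft.fft(series[16:32])[:m]
print(np.max(np.abs(X - direct)))   # tiny: incremental matches direct DFT
```

The cost per step is proportional to the number of retained coefficients, not the window length, which is the point of the incremental scheme.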

17 Grid Structure

18 Problem: Doesn't Always Work
DFT approximates the price-like data type very well. However, it is poor for stock returns: (today's price - yesterday's price) / yesterday's price. Return is more like white noise, which contains all frequency components. DFT uses the first n (e.g., 10) coefficients in approximating data, which is insufficient in the case of white noise.

19 DFT on random walk (works well) and white noise (works badly)
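
The contrast in the figure can be reproduced numerically: for a random walk (price-like), the first few DFT coefficients capture most of the spectral energy, while for white noise (return-like) they capture almost none. A small sketch under assumed parameters (series length 512, first 10 coefficients):

```python
import numpy as np

def energy_in_first_coeffs(x, m=10):
    """Fraction of spectral energy captured by the first m DFT
    coefficients (DC removed, positive frequencies only)."""
    X = np.fft.fft(x - x.mean())
    energy = np.abs(X) ** 2
    half = energy[1:len(x) // 2 + 1]      # positive frequencies
    return half[:m].sum() / half.sum()

rng = np.random.default_rng(1)
noise = rng.standard_normal(512)          # white-noise "return" series
walk = np.cumsum(noise)                   # random-walk "price" series

print(energy_in_first_coeffs(walk))       # close to 1: low frequencies dominate
print(energy_in_first_coeffs(noise))      # small: energy spread over all frequencies
```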

20 Random Projection: Intuition You are walking in a sparse forest and you are lost. You have an outdated cell phone without a GPS. You want to know if you are close to your friend. You identify yourself as 100 meters from the pointy rock and 200 meters from the giant oak etc. If your friend is at similar distances from several of these landmarks, you might be close to one another. Random projections are analogous to these distances to landmarks.

21 How to Compute a Random Projection*
Random vector pool: a list of random vectors drawn from a stable distribution (like the landmarks).
Project the time series into the space spanned by these random vectors.
The Euclidean distance (correlation) between time series is approximated by the distance between their sketches, with a probabilistic guarantee.
Note: sketches do not provide approximations of individual time series windows, but they help make comparisons.
*W. B. Johnson and J. Lindenstrauss. Extensions of Lipschitz mappings into a Hilbert space. Contemp. Math., 26, 1984.
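
A minimal sketch of the sketch computation: project each window onto k random Gaussian vectors (the Gaussian is a 2-stable distribution) and compare the resulting low-dimensional vectors. The dimensions here (window length d = 256, sketch size k = 60) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
d, k = 256, 60                    # window length, sketch size
R = rng.standard_normal((d, k))   # random vector pool ("the landmarks")

def sketch(x):
    """Inner products with the random vectors, scaled so that
    Euclidean distances are preserved in expectation."""
    return x @ R / np.sqrt(k)

x = rng.standard_normal(d)
y = x + 0.3 * rng.standard_normal(d)   # a nearby (correlated) window

true_dist = np.linalg.norm(x - y)
sketch_dist = np.linalg.norm(sketch(x) - sketch(y))
print(true_dist, sketch_dist)          # the two distances are close
```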

22 Random Projection (figure: raw time series times random vectors gives inner products, i.e., sketches; analogous to positions X and Y being described by their relative distances to landmarks such as rocks and buildings)

23 Sketch Guarantees
Johnson-Lindenstrauss Lemma: for any 0 < ε < 1 and any integer n, let k be a positive integer such that k ≥ 4 (ε²/2 - ε³/3)⁻¹ ln n. Then for any set V of n points in R^d, there is a map f : R^d → R^k such that, for all u, v in V,
(1 - ε) ‖u - v‖² ≤ ‖f(u) - f(v)‖² ≤ (1 + ε) ‖u - v‖².
Further, this map can be found in randomized polynomial time.

24 Empirical Study: sketch distance / real distance (figure: ratio distributions for sketch sizes 30, 80, and 1000)

25 Empirical Comparison: DFT, DWT, and Sketch

26 Algorithm Overview Using Random Projections/Sketches
Partition each sketch vector s of size N into groups of some size g.
The i-th group of each sketch vector s is placed in the i-th grid structure (of dimension g).
If two sketch vectors s1 and s2 are within distance c·d, where d is the target distance, in more than a fraction f of the groups, then the corresponding windows are candidate highly correlated windows and should be checked exactly.
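
The grouping test can be sketched as follows; the function and its parameters (g, c, d, f) mirror the slide, but the implementation details are an assumption:

```python
import numpy as np

def is_candidate(s1, s2, g, c, d, f):
    """Flag a pair as a candidate if, in more than a fraction f of the
    size-g groups, the group sub-sketches are within distance c * d."""
    groups1 = s1.reshape(-1, g)            # each row goes into one grid
    groups2 = s2.reshape(-1, g)
    dists = np.linalg.norm(groups1 - groups2, axis=1)
    return np.mean(dists <= c * d) > f

s1 = np.zeros(30)
s2 = np.zeros(30)
s2[:3] = 5.0                               # differ only in the first group
print(is_candidate(s1, s2, g=3, c=0.5, d=1.0, f=0.8))   # True: 9 of 10 groups close
```

Only the candidates that pass this filter are then compared exactly, which is what keeps the overall work near linear.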

27 Optimization in Parameter Space
Next, how to choose the parameters g, c, f, N?
Size of sketch (N): 30, 36, 48, 60
Group size (g): 1, 2, 3, 4
Distance multiplier (c): 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1, 1.1, 1.2, 1.3
Fraction (f): 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1

28 Optimization in Parameter Space
Essentially, we prepare several groups of good parameter candidates and choose the best one to apply to the given data. But how do we select the good candidates? Combinatorial design (CD) and bootstrapping.

29 Combinatorial Design
The pairwise combinations of all the parameters. Informally: each value of parameter X will be combined with each value of parameter Y in at least one experiment, for all X, Y.
Example: if there are four parameters having respectively 4, 4, 13, and 10 values, exhaustive search requires 2,080 experiments vs. 130 for pairwise combinatorial design.

30 Exploring the Neighborhood around the Best Values
Because combinatorial design is NOT exhaustive, we may not find the optimal combination of parameters at first.
Solution: when good parameter values are found, their local neighbors are searched further for better solutions.

31 How Bootstrapping Is Used
Goal: test the robustness of a conclusion on a sample data set by creating new samples from the initial sample with replacement.
Procedure: start with a sample set of 1,000,000 pairs of time series windows. From it, choose 20,000 sample points with replacement; compute the recall and precision each time; repeat many times (e.g., 100 or more).

32 Testing for Stability
Bootstrap 100 times; compute the mean and standard deviation of the recalls and precisions.
What we want from good parameters:
mean(recall) - std(recall) > threshold(recall)
mean(precision) - std(precision) > threshold(precision)
If there are no such parameters, enlarge the replacement sample size.
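
The bootstrap procedure of the last two slides might look like this in outline; the 97% hit rate of the simulated outcomes is a made-up illustration, not a reported result:

```python
import numpy as np

def bootstrap_stats(outcomes, n_boot=100, sample_size=20000, seed=3):
    """Resample the per-pair outcomes with replacement n_boot times and
    collect the statistic (here, the mean = recall) for each resample."""
    rng = np.random.default_rng(seed)
    stats = [rng.choice(outcomes, size=sample_size, replace=True).mean()
             for _ in range(n_boot)]
    return np.mean(stats), np.std(stats)

# Hypothetical per-pair outcomes: 1 = truly correlated pair recovered.
rng = np.random.default_rng(4)
outcomes = (rng.random(1_000_000) < 0.97).astype(float)

mean_recall, std_recall = bootstrap_stats(outcomes)
threshold = 0.95
print(mean_recall - std_recall > threshold)   # parameters accepted if True
```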

(figure: inner products of time series X, Y, Z with random vectors r1, ..., r6 give their sketches)

Grid structure

36 Experiments

37 Section 3: Elastic Burst Detection

38 Elastic Burst Detection: Problem Statement
Problem: given a time series of positive numbers x1, x2, ..., xn, and a threshold function f(w), w = 1, 2, ..., n, find all subsequences of any size whose sums are above the threshold for their size: all intervals [i, j], 1 ≤ i ≤ j ≤ n, such that x_i + x_{i+1} + ... + x_j ≥ f(j - i + 1).

39 Burst Detection: Challenge
This is a single-stream problem. What makes it hard is that we are looking at multiple window sizes at the same time. The naive approach is to do this one window size at a time.
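
The naive window-at-a-time baseline is easy to state with prefix sums: one pass per window size, so the cost grows linearly with the number of window sizes monitored. A sketch with hypothetical thresholds:

```python
import numpy as np

def naive_bursts(x, thresholds):
    """Naive elastic burst detection: for each window size w with a
    threshold thresholds[w], slide a window of that size across the
    series.  O(n * number_of_window_sizes) using prefix sums."""
    prefix = np.concatenate(([0.0], np.cumsum(x)))
    bursts = []
    for w, thresh in thresholds.items():
        sums = prefix[w:] - prefix[:-w]          # all sums of size-w windows
        for start in np.nonzero(sums >= thresh)[0]:
            bursts.append((int(start), w))
    return bursts

x = [1, 1, 9, 1, 1, 1]
print(naive_bursts(x, {1: 5, 2: 10}))   # [(2, 1), (1, 2), (2, 2)]
```

With thousands of window sizes, repeating this scan per size is exactly the cost the Shifted Binary Tree is designed to avoid.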

40 Astrophysical Application
Motivation: in astrophysics, the sky is constantly observed for high-energy particles. When a particular astrophysical event happens, a shower of high-energy particles arrives in addition to the background noise. An unusual burst may signal an event interesting to physicists.
Technical overview:
1. The sky is partitioned into 1800 x 900 buckets.
2. Sliding window lengths from 0.1 s to 39.81 s are monitored.
3. The original code implements the naive window-at-a-time algorithm; it can't handle more windows.

41 Bursts across Different Window Sizes in Gamma Rays
Challenge: to discover not only the time of the burst, but also the duration of the burst.

42 Shifted Binary Tree (SBT)
Define the threshold for a node of size 2^k to be the threshold for a window of size 1 + 2^(k-1).

43 Burst Detection Using the SBT
Any window of size w, 2^(i-1) + 2 ≤ w ≤ 2^i + 1, is included in one of the windows at level i + 1.
For a non-negative data stream and a monotonic aggregation function, if a node at level i + 1 doesn't exceed the threshold for window size 2^(i-1) + 2, none of the windows of sizes between 2^(i-1) + 2 and 2^i + 1 will contain a burst; otherwise a detailed search is needed to test for real bursts.
This filters many windows, reducing the CPU time dramatically.
Shortcoming: a fixed structure. It can do badly if bursts are very unlikely or relatively likely.
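
A compact sketch of SBT-style filtering with sum aggregates, following the containment property above (window sizes 3 and up; sizes 1 and 2 would be checked directly and are omitted for brevity). The code layout is our own, not the original implementation:

```python
import numpy as np

def sbt_filter_bursts(x, f, max_level):
    """Shifted-Binary-Tree-style filtering.  Level i+1 holds sums of
    windows of length 2**(i+1), shifted by 2**i.  If such a node's sum
    is below f(2**(i-1) + 2), no window of size 2**(i-1)+2 .. 2**i + 1
    inside it can be a burst; otherwise those sizes are searched in
    detail inside the node."""
    prefix = np.concatenate(([0.0], np.cumsum(x)))
    wsum = lambda s, w: prefix[s + w] - prefix[s]
    bursts = set()
    for i in range(1, max_level):
        L, shift = 2 ** (i + 1), 2 ** i
        lo, hi = 2 ** (i - 1) + 2, 2 ** i + 1     # sizes this level filters
        for s in range(0, len(x) - L + 1, shift):
            if wsum(s, L) >= f(lo):               # node may hide a burst
                for w in range(lo, hi + 1):       # detailed search
                    for t in range(s, s + L - w + 1):
                        if wsum(t, w) >= f(w):
                            bursts.add((t, w))
    return bursts

# A spike at position 20 is reported at every covered window size.
x = np.ones(64)
x[20] = 100.0
found = sbt_filter_bursts(x, f=lambda w: 50 + w, max_level=4)
print(sorted(found)[:3])
```

When the data is quiet, almost no node exceeds its level threshold, so the inner detailed search rarely runs; that is the source of the speedup.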

44 Shifted Aggregation Tree
A hierarchical tree structure: each node is an aggregate. It differs from the SBT in two ways:
Parent-child structure: defines the topological relationship between a node and its children.
Shifting pattern: defines how many time points apart two neighboring nodes at the same level are.

45 Aggregation Pyramid (AP)
An N-level isosceles-triangular-shaped data structure built on a sliding window of length N.
Level 0 has a one-to-one correspondence to the input time series.
Level h stores the aggregates for h + 1 consecutive elements, i.e., a sliding window of length h + 1.
The AP stores every aggregate for every window size starting at every time point.

46 Aggregation Pyramid Properties
45° diagonal: same starting time. 135° diagonal: same ending time.
Shadow of cell(t, h): a sliding window starting at time t and ending at t + h - 1.
Coverage of cell(t, h): all the cells in the sub-pyramid rooted at cell(t, h).
Overlap of cell(t1, h1) and cell(t2, h2): the cell at the intersection of the 135° diagonal touching cell(t1, h1) and the 45° diagonal touching cell(t2, h2).

47 Embed Shifted Binary Tree in Aggregation Pyramid

48 Aggregation Pyramid as a Host Data Structure
Many structures besides the Shifted Binary Tree fit in an Aggregation Pyramid.
The update-filter-search framework guarantees detection of all the bursts as long as the structure includes the level-0 cells and the top-level cell.
What kinds of structures are good for burst detection?

49 Which Shifted Aggregation Tree Should Be Used?
Many Shifted Aggregation Trees are available, and all of them guarantee detection of all the bursts. Which structure should be used?
Intuitively, the denser a structure, the more updating time and the less detailed-search time, and vice versa.
We want the structure minimizing the total CPU running time, given the input.

50 State-Space Algorithm
View a Shifted Aggregation Tree (SAT) as a state.
View the growth from one SAT to another as a transformation between states.

51 State-Space Algorithm
Initial state: a Shifted Aggregation Tree (SAT) containing only level 0.
Transformation rule: if adding one level to the top of SAT B yields SAT A, state B is transformed to state A.
Final state: a SAT which can cover the maximum window size of interest.
Traversing strategy: best-first search. Associate each state with a cost; prune to explore more reasonable structures.

52 Results
The Shifted Aggregation Tree outperforms the Shifted Binary Tree: a factor of 35 speedup in some experiments.
The Shifted Aggregation Tree can adjust its structure to adapt to different inputs.

53 Greedy Dynamic Burst Detection
Real-world data keeps changing. A static structure performs poorly if the training data differs significantly from the data to be detected: 10%-250% more detection time in the experiments.

54 Ideas
Basic idea: change the structure if a change helps to reduce the overall cost.
Greedy when making a structure denser: if the saved detailed-search cost is greater than the updating/filtering cost, add the level.
Delayed greedy when making a structure sparser: alarms likely occur in clusters, across multiple sizes and multiple neighboring windows, and a lower level may support a higher level; so don't remove a level if an alarm occurred recently.

55 Algorithm Sketch
Start with a structure trained using the state-space algorithm.
If an alarm is raised at level k:
  If adding a level between level k and level k - 1 can save some cost, add this level.
  If it can't be added because some lower level needed to support it is not in the structure, add that lower level.
  If it can't be added because the shift doesn't satisfy the property of the Shifted Aggregation Tree, legally narrow the shifts.
Else if the aggregate at level k doesn't exceed the minimum threshold for level k - 1:
  If no alarm occurred recently:
    If it is legal to remove level k - 1, remove it; else legally double the shift.

56 Results
Different training sets: the dynamic algorithm overcomes the discrepancy resulting from biased training data.
Different sets of window sizes: when the number of window sizes is small, the dynamic algorithm performs slightly less well than the static algorithm.

57 Volume Spike Detection in Stock Trading
Trading volume indicates buying/selling interest, the underlying force for price movements.
Volume spike: a burst of trading activity, a signal in program trading.
High rate: more than 100 updates per second per stock. (Figure from marketvolume.com)

58 Volume Spike Detection in Stock Trading
Setup: Jan-May 2004 tick-by-tick trading activity of the IBM stock; 23 million time points; exponential distribution.
Results: real-time response, 0.01 ms per time point on average; 3-5 times speedup over the Shifted Binary Tree; the dynamic algorithm performs slightly better than the static algorithm.

59 Click Fraud Detection in Website Traffic Monitoring
Setup: 2003 Sloan Digital Sky Survey web traffic log (the same type of data as click data); number of requests within each second; 31 million time points; Poisson distribution.
Results: 2-5 times speedup over the Shifted Binary Tree; the dynamic algorithm performs better than the static algorithm.

60 Fast and Accurate Time Series Matching with Time-Warping

61 Outline
Problem statement
Related work review
Case study: Query by Humming
Future work

62 Motivation
Many applications naturally generate time series data. The most convenient way to investigate the data is to use existing examples as queries to find similar data. In most cases, queries may have inaccurate timing.

63 Goal of This Work
Goal: build fast and accurate similarity-search algorithms for large-scale time series systems that allow complex time shifting in the query.
Two major challenges: query ambiguity and the large size of the database.

64 Related Work Review
GEMINI framework: introduced by C. Faloutsos, M. Ranganathan, and Y. Manolopoulos to avoid linear-scan comparison.
Dynamic Time Warping (DTW): introduced by D. Berndt and J. Clifford to allow time shifting in time series comparison.
Envelope and envelope transforms: introduced by E. Keogh to index the DTW distance; generalized into the GEMINI framework by our group.

65 Dynamic Time Warping
The DTW distance between two time series x (length m) and y (length n) is equal to an optimal path-finding problem: each path (1,1) → (m,n) is an alignment, where (i,j) represents aligning x(i) with y(j) and cost(i,j) equals |x(i) - y(j)|. The optimal path has minimum total cost.
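
The optimal-path formulation corresponds to the classic dynamic program; D[i][j] below is the cheapest cost of aligning the first i points of x with the first j points of y:

```python
import numpy as np

def dtw_distance(x, y):
    """Classic DTW by dynamic programming.  Each step pays |x(i) - y(j)|
    and moves right, down, or diagonally through the alignment grid."""
    m, n = len(x), len(y)
    D = np.full((m + 1, n + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j],       # y(j) repeated
                                 D[i, j - 1],       # x(i) repeated
                                 D[i - 1, j - 1])   # advance both
    return D[m, n]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))   # 0.0: warping absorbs the repeat
print(dtw_distance([1, 2, 3], [1, 2, 4]))      # 1.0
```

The quadratic cost of this table is exactly why the envelope-based lower-bounding filters of the next slides matter at database scale.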

66 Problem with the DTW Distance
The DTW distance does not obey the triangle inequality, which most standard indexing methods require.

67 Envelope and Envelope Transform (figure: envelope filter and transformed envelope filter in feature space)
Filter out bad candidates at lower computing cost while guaranteeing no false negatives.

68 Example of Envelope Transform: Piecewise Aggregate Approximation (PAA)
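
For reference, PAA itself is a one-liner: split the series into equal-length segments and keep each segment's mean (shown here assuming the segment count divides the length evenly):

```python
import numpy as np

def paa(x, segments):
    """Piecewise Aggregate Approximation: replace each of `segments`
    equal-length pieces of x by its mean."""
    return np.asarray(x, dtype=float).reshape(segments, -1).mean(axis=1)

print(paa([1, 3, 2, 4, 10, 10, 0, 2], segments=4))   # means of the 4 pairs: 2, 3, 10, 1
```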

69 Case Study: Query by Humming
Application-specific challenges; related work; proposed framework; experiment.
(figure: humming becomes a query against the music database system, which returns similar music segments)

70 Challenges
People may hum out of tune: a different base key, inaccurate relative pitch, unstable pitch.
Different tempo; varying tempo.
Hard to segment notes in the humming.

71 Flourishing Literature
String-represented note-sequence matching [A. Uitdenbogerd et al. Matching techniques for large music databases. ACM Multimedia 99]
Data-reduction transformations (STFT) [N. Kosugi et al. A practical query-by-humming system for a large music database. ACM Multimedia 2000]
Melody slope matching [Yongwei Zhu et al. Pitch tracking and melody slope matching for song retrieval. Advances in Multimedia Information Processing, PCM 2001]
Dynamic Time Warping (DTW) on pitch contour [Y. Zhu and D. Shasha. Warping indexes with envelope transforms for query by humming. ACM SIGMOD 2003]
String editing on note sequence combined with rhythm alignment [W. A. Arentz. Methods for retrieving musical information based on rhythm and pitch correlation. CSGSC 2003]

72 Problems with Related Work
Performance comparison is difficult: there is no standard for evaluation, data sets and test sets differ, and the definitions of accuracy are not reliable.
General conclusions are nevertheless possible: warped distance measures are more robust, and warped distance matching needs to scale up.

73 Experiment on Scaling Up
Test set: 41 hummed tunes from Beatles songs, collected from both amateurs and professionals, recognizable by more than three persons.
Two data sets:
A. 53 Beatles songs (included in B)
B. 1,032 songs, including 123 Beatles songs, 908 American rock-and-pop songs, and one Chinese game music track
Top-K hit rate = (# recognized in the top-K list) / (# recognized by humans)
*Both systems are optimized using another test set, which includes 17 hummings.

74 Framework Proposal
(figure: humming with "ta" is segmented into notes, giving a note/duration sequence; a keyword filter and a boosted feature filter, built from database-statistics-based features via boosting, prune the database; a nearest-N search on DTW distance with a transformed envelope filter runs on melody (notes); an alignment verifier on rhythm (durations) and melody (notes) produces the top-N match)

75 Important Heuristic: "ta"-Based Humming*
Solves the problem of note segmentation in most cases. Compare humming with "la" and with "ta".
*Idea from N. Kosugi et al. A practical query-by-humming system for a large music database. ACM Multimedia 2000.

76 Benefits of "ta"-Style Humming
Decreases the size of the time series by orders of magnitude, thus reducing the computation of the DTW distance.

77 Important Heuristic: Statistics-Based Filters*
Low-dimensional statistical features have lower computation cost than the DTW distance and quickly filter out true negatives.
Example: filter out candidates whose note length is much larger/smaller than the query's note length.
More features: standard deviation of note values; zero-crossing rate of note values; number of local minima/maxima of note values; histogram of note values.
*Intuition from Erling Wold et al. Content-based classification, search and retrieval of audio. IEEE Multimedia 1996.
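
A few of the listed statistics, sketched as code; the feature set and the length-ratio cutoff of 2 are illustrative assumptions, not the system's tuned values:

```python
import numpy as np

def note_features(notes):
    """Cheap statistics usable as pre-DTW filters: length, spread,
    zero-crossing rate (around the mean), and local-extrema count."""
    notes = np.asarray(notes, dtype=float)
    centered = notes - notes.mean()
    zero_crossings = np.sum(np.diff(np.sign(centered)) != 0)
    d = np.diff(notes)
    local_extrema = np.sum(d[:-1] * d[1:] < 0)   # sign change in the slope
    return {
        "length": len(notes),
        "std": notes.std(),
        "zero_crossing_rate": zero_crossings / len(notes),
        "local_extrema": int(local_extrema),
    }

def passes_filter(query, candidate, length_ratio=2.0):
    """Reject candidates whose note length differs too much from the query's."""
    q, c = note_features(query), note_features(candidate)
    return 1 / length_ratio <= c["length"] / q["length"] <= length_ratio

print(passes_filter([60, 62, 64, 62], [60, 62, 64, 62, 60, 59]))  # True
print(passes_filter([60, 62, 64, 62], [60] * 20))                 # False
```

Each feature alone is a weak test, which is exactly what makes the set a natural input to the boosting step on the next slide.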

78 Important Heuristic: Boosting
Characteristics of statistics-based filters: quick, but they do not guarantee no false negatives, and they become weak classifiers under bad parameter settings; hence they are ideal candidates for boosting.
Boosting*: an algorithm for constructing a strong classifier using only a training set and a set of weak classification algorithms. A particular linear combination of these weak classifiers is used as the final classifier, which has a much smaller probability of misclassification.
*Cynthia Rudin et al. On the Dynamics of Boosting. In Advances in Neural Information Processing Systems 2004.

79 Important Heuristic: Alignment Verification
Nearest-N search uses only melody information, which does not guarantee that the rhythm will match.
W. A. Arentz et al. suggest combining rhythm and melody information in the similarity comparison; results are generally better than using only melody information, but the approach is not appropriate when the sum of several note durations in the query corresponds to the duration of one note in the candidate.
Our method:
1. Use melody information for the DTW distance computation.
2. Reject candidates that have bad local note alignment.
3. Merge durations appropriately based on the note alignment.
4. Reject candidates that have bad duration alignment.

80 Experiments
Data set: 1,032 songs were divided and organized into 73,051 melodic segments.
Computing environment: Pentium IV 2.4 GHz, 768 MB memory (all the data can be loaded); K, an array-processing language.
Query criteria: return a top-15 list of candidates with DTW distance less than 0.5; allow 0.05 x N local time shifting for a query with N notes.

81 Experiment One: Human Humming
Query set: 109 "ta"-style hummings: the previous 41 hummings for Beatles songs, 65 hummings for American rock-and-pop songs, and 3 hummings for the Chinese game music. Recognizable by at least two persons; the number of notes varies from 6 to 24.

82 Future Work
Model and estimate the error more accurately; analyze the relationship between the algorithm's performance and observed humming errors.
Build a standard benchmark to evaluate and compare different QbH systems.
Investigate more lower-bounding filters at lower levels; investigate more classifiers to boost.
Build an intelligent system: self-improving by adjusting to a particular user's humming patterns.

83 Themes and Approaches Our approach: take simple problems and make them fast. Correlation and related primitives (e.g. matching pursuit) are simple, but we want to do them for many thousands of time series incrementally and in near linear time. Burst detection over many window sizes in near linear time. Query by humming: large scale and accurate. Coming to a store near you?
