
1 Indexing Time Series. Based on original slides by Prof. Dimitrios Gunopulos and Prof. Christos Faloutsos, with some slides from tutorials by Prof. Eamonn Keogh and Dr. Michalis Vlachos. Excellent tutorials about time series (and not only), as well as a nice tutorial on Matlab and time series, are available online.

2 Time Series Databases
A time series is a sequence of real numbers, representing the measurements of a real variable at equal time intervals:
stock prices
volume of sales over time
daily temperature readings
ECG data
A time series database is a large collection of time series

3 Time Series Data
A time series is a collection of observations made sequentially in time.
[Figure: an example time series of about 500 points, plotted with a time axis and a value axis.]

4 Time Series Problems (from a database perspective)
The Similarity Problem: given X = x1, x2, …, xn and Y = y1, y2, …, yn, define and compute Sim(X, Y). E.g., do stocks X and Y have similar movements?
Retrieve similar time series efficiently (indexing for similarity queries)

5 Types of queries
whole match vs. sub-pattern match
range query vs. nearest-neighbor query
all-pairs query

6 Examples
Find companies with similar stock prices over a time interval
Find products with similar sell cycles
Cluster users with similar credit card utilization
Find similar subsequences in DNA sequences
Find scenes in video streams

7 [Figure: two stock-price series, $price vs. day, over days 1–365.]
Distance function: chosen by a domain expert (e.g., Euclidean distance)

8 Problems
Define the similarity (or distance) function
Find an efficient algorithm to retrieve similar time series from a database (faster than sequential scan)
The similarity function depends on the application

9 Metric Distances
What properties should a similarity distance have to allow (easy) indexing?
D(A,B) = D(B,A) (symmetry)
D(A,A) = 0 (constancy of self-similarity)
D(A,B) >= 0 (positivity)
D(A,B) <= D(A,C) + D(B,C) (triangle inequality)
Sometimes the distance function that best fits an application is not a metric… then indexing becomes interesting…

10 Euclidean Similarity Measure
View each sequence as a point in n-dimensional Euclidean space (n = length of each sequence).
Define the (dis-)similarity between sequences X and Y as the L_p distance
D_p(X, Y) = ( sum_{i=1..n} |x_i - y_i|^p )^(1/p)
where p = 1 gives the Manhattan distance and p = 2 the Euclidean distance.
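To make the definition concrete, here is a minimal NumPy sketch of this family of distances:

```python
import numpy as np

def lp_distance(x, y, p=2):
    """L_p distance between equal-length series: p=1 Manhattan, p=2 Euclidean."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.sum(np.abs(x - y) ** p) ** (1.0 / p))
```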

11 Euclidean model
[Figure: a query Q matched against a database of sequences of n datapoints each; distances 0.98, 0.07, 0.21, 0.43 yield ranks 4, 1, 2, 3.]
Euclidean distance between two time series Q = {q1, q2, …, qn} and S = {s1, s2, …, sn}:
D(Q, S) = sqrt( sum_{i=1..n} (q_i - s_i)^2 )

12 Advantages
Easy to compute: O(n)
Allows scalable solutions to other problems, such as indexing, clustering, etc.

13 Dynamic Time Warping [Berndt, Clifford, 1994]
Allows acceleration-deceleration of signals along the time dimension.
Basic idea: consider X = x1, x2, …, xn and Y = y1, y2, …, yn. We are allowed to extend each sequence by repeating elements; the Euclidean distance is then calculated between the extended sequences X' and Y'.
Matrix M, where m_ij = d(x_i, y_j)

14 Example Euclidean distance vs DTW

15 Dynamic Time Warping [Berndt, Clifford, 1994]
[Figure: a warping path through the X–Y grid, constrained to the band between j = i - w and j = i + w.]

16 Restrictions on Warping Paths
Monotonicity: the path should not go down or to the left
Continuity: no elements may be skipped in a sequence
Warping window: |i - j| <= w

17 Example
[Figure: matrix of the pairwise distances d(s_i, q_j) for elements s_1…s_9 and q_1…q_9.]

18 Example
[Figure: the same matrix, computed with dynamic programming based on the recurrence
dist(i, j) = d(s_i, q_j) + min{ dist(i-1, j-1), dist(i, j-1), dist(i-1, j) }.]

19 Formulation
Let D(i, j) refer to the dynamic time warping distance between the subsequences x1, x2, …, xi and y1, y2, …, yj. Then
D(i, j) = |x_i - y_j| + min{ D(i-1, j), D(i-1, j-1), D(i, j-1) }

20 Solution by Dynamic Programming
Basic implementation: O(n^2), where n is the length of the sequences, since we must solve a subproblem for each (i, j) pair.
If a warping window w is specified, then O(nw): only solve for the (i, j) pairs where |i - j| <= w.
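A straightforward dynamic-programming sketch of this solution (the window handling is illustrative):

```python
import numpy as np

def dtw(x, y, w=None):
    """DTW distance between x and y; if w is given, restrict to |i - j| <= w."""
    n, m = len(x), len(y)
    w = max(n, m) if w is None else max(w, abs(n - m))  # band must reach the corner
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(max(1, i - w), min(m, i + w) + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i - 1, j - 1], D[i, j - 1])
    return D[n, m]
```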

21 Longest Common Subsequence Measures (Allowing for Gaps in Sequences)
[Figure: two sequences aligned, with a gap skipped in one of them.]

22 Longest Common Subsequence (LCSS)
LCSS is more resilient to noise than DTW.
Disadvantages of DTW: all points are matched; outliers can distort the distance; one-to-many mappings.
Advantages of LCSS: outlying values are not matched (the majority of the noise is ignored); the distance/similarity is distorted less; constraints in time and space.

23 Longest Common Subsequence
Similar dynamic-programming solution as DTW, but now we measure similarity, not distance. It can also be expressed as a distance.
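A sketch of the LCSS recurrence; the value threshold eps and the time window delta are application-chosen parameters, not fixed by the slides:

```python
import numpy as np

def lcss(x, y, eps, delta):
    """LCSS similarity in [0, 1]; 1 - lcss(...) can serve as a distance."""
    n, m = len(x), len(y)
    L = np.zeros((n + 1, m + 1), dtype=int)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if abs(x[i - 1] - y[j - 1]) <= eps and abs(i - j) <= delta:
                L[i, j] = L[i - 1, j - 1] + 1            # points match: extend
            else:
                L[i, j] = max(L[i - 1, j], L[i, j - 1])  # skip an element (gap)
    return L[n, m] / min(n, m)
```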

24 Similarity Retrieval
Range query: find all time series S where D(Q, S) <= e.
Nearest-neighbor query: find the k time series most similar to Q.
A method to answer the above queries: linear scan … very slow. A better approach: GEMINI.

25 GEMINI
Solution: a 'quick-and-dirty' filter:
extract m features (numbers, e.g., average, etc.)
map each sequence into a point in m-d feature space
organize the points with an off-the-shelf spatial access method ('SAM')
retrieve the answer using a NN query
discard false alarms

26 GEMINI Range Queries
Build an index for the database in a feature space using an R-tree.
Algorithm RangeQuery(Q, e):
Project the query Q into a point q in the feature space
Find all candidate objects in the index within e
Retrieve from disk the actual sequences
Compute the actual distances and discard false alarms
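A sketch of this filter-and-refine logic; `feature` and `index.range_search` are hypothetical stand-ins for the chosen feature extraction and the spatial access method's range search:

```python
import numpy as np

def gemini_range_query(query, eps, feature, index, raw_db):
    """Filter with the feature-space index, then refine on the raw sequences.

    feature : maps a full series to its low-dimensional feature point
    index   : hypothetical SAM with a range_search(point, eps) -> ids method
    raw_db  : maps id -> full sequence on disk
    """
    q = feature(query)
    candidates = index.range_search(q, eps)  # superset: no false dismissals
    return [cid for cid in candidates        # discard false alarms
            if np.linalg.norm(np.asarray(query) - np.asarray(raw_db[cid])) <= eps]
```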

27 GEMINI NN Query
Algorithm K_NNQuery(Q, K):
Project the query Q into the same feature space
Find the candidate K nearest neighbors in the index
Retrieve from disk the actual sequences pointed to by the candidates
Compute the actual distances and record the maximum, emax
Issue a RangeQuery(Q, emax)
Compute the actual distances and return the best K

28 GEMINI
GEMINI works when: D_feature(F(x), F(y)) <= D(x, y).
Note that the closer the feature distance is to the actual one, the better.

29 Generic Search using Lower Bounding
[Figure: the query is simplified and run against the simplified DB, yielding an answer superset; the superset is verified against the original DB to produce the final answer set.]

30 Problem
How to extract the features? How to define the feature space?
Fourier transform
Wavelet transform
Averages of segments (histograms or APCA)
Chebyshev polynomials
… your favorite curve approximation …

31 Fourier transform
DFT (Discrete Fourier Transform): transform the data from the time domain to the frequency domain; highlights the periodicities. SO?

32 DFT
A: several real sequences are periodic. Q: Such as? A:
sales patterns follow seasons
the economy follows a 50-year cycle (or 10?)
temperature follows daily and yearly cycles
Many real signals follow (multiple) cycles

33 How does it work?
Decomposes the signal into a sum of sine and cosine waves.
Q: How to assess the 'similarity' of x = {x0, x1, …, xn-1} with a (discrete) wave s = {s0, s1, …, sn-1}?
[Figure: x and s plotted over time 0 … n-1.]

34 How does it work?
A: consider the waves with frequency 0, 1, …; use the inner product (~cosine similarity).
[Figure: the waves at freq. f = 0 and f = 1 (sin(t · 2π/n)), plotted over time 0 … n-1; freq = 1/period.]

35 How does it work?
A: consider the waves with frequency 0, 1, …; use the inner product (~cosine similarity).
[Figure: the wave at freq. f = 2.]

36 How does it work?
[Figure: the 'basis' functions: cosine and sine at f = 1, cosine and sine at f = 2, …, each plotted over time 0 … n-1.]

37 How does it work?
Basis functions are actually n-dimensional vectors, orthogonal to each other. The 'similarity' of x with each of them is an inner product. The DFT is ~ all the similarities of x with the basis functions.

38 How does it work?
Since e^(jφ) = cos(φ) + j·sin(φ), where j = sqrt(-1), we finally have:

39 DFT: definition
Discrete Fourier Transform (n-point):
X_f = (1/sqrt(n)) · sum_{t=0..n-1} x_t · e^(-j·2π·t·f/n),  f = 0, 1, …, n-1
inverse DFT:
x_t = (1/sqrt(n)) · sum_{f=0..n-1} X_f · e^(+j·2π·t·f/n),  t = 0, 1, …, n-1
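As a quick sanity check of these definitions, a minimal NumPy sketch (NumPy's norm="ortho" option applies the 1/sqrt(n) scaling used above):

```python
import numpy as np

n = 128
t = np.arange(n)
x = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.cos(2 * np.pi * 7 * t / n)  # toy signal

X = np.fft.fft(x, norm="ortho")        # forward n-point DFT, 1/sqrt(n) scaling
x_back = np.fft.ifft(X, norm="ortho")  # inverse DFT recovers the signal
assert np.allclose(x, x_back.real)
```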

40 DFT: properties
Observation — SYMMETRY property: X_f = (X_{n-f})*
('*' denotes the complex conjugate: (a + b·j)* = a - b·j)
Thus we use only the first half of the coefficients.
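Continuing the sketch above, the symmetry is one line to verify for the real-valued x:

```python
# For real x, X_f = (X_{n-f})*: the reversed tail equals the conjugated tail.
assert np.allclose(X[1:], np.conj(X[1:][::-1]))
```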

41 DFT: Amplitude spectrum
Intuition: A_f gives the strength of frequency f.
[Figure: a time series of counts over time, and its amplitude spectrum A_f over frequency f, with a peak near f = 12.]

42 DFT: Amplitude spectrum
An excellent approximation, with only 2 frequencies! So what?

43 Raw Data
The graphic shows a time series C with n = 128 points. The raw data used to produce the graphic is also reproduced as a column of numbers (just the first few points shown): 0.4995, 0.5264, 0.5523, 0.5761, 0.5973, 0.6153, 0.6301, 0.6420, …

44 Fourier Coefficients
We can decompose the data into 64 pure sine waves using the Discrete Fourier Transform (just the first few sine waves are shown). The Fourier coefficients are reproduced as a column of numbers (just the first few shown): 1.5698, 1.0485, 0.7160, 0.8406, 0.3709, 0.4670, 0.2667, …

45 Truncated Fourier Coefficients
C' keeps only the first N = 8 Fourier coefficients of C: 1.5698, 1.0485, 0.7160, 0.8406, 0.3709, 0.4670, 0.2667, 0.1928. With n = 128, the compression ratio is Cratio = 1/16, i.e., we have discarded 15/16 of the data.

46 Sorted Truncated Fourier Coefficients
Instead of taking the first few coefficients, we could take the best (largest-magnitude) coefficients: C' = 1.5698, 1.0485, 0.7160, 0.8406, 0.2667, 0.1928, 0.1438, 0.1416.
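A minimal sketch of both truncation strategies (first-N versus largest-magnitude coefficients); mirroring the conjugate half is an implementation detail needed to keep the reconstruction real:

```python
import numpy as np

def dft_first_n(x, N):
    """Keep the first N DFT coefficients (and their conjugate mirrors)."""
    X = np.fft.fft(x)
    Xt = np.zeros_like(X)
    Xt[:N] = X[:N]
    if N > 1:
        Xt[-(N - 1):] = X[-(N - 1):]   # conjugate-symmetric mirror keeps x real
    return np.fft.ifft(Xt).real

def dft_best_n(x, N):
    """Keep the N largest-magnitude coefficients instead of the first N."""
    X = np.fft.fft(x)
    keep = np.argsort(np.abs(X))[-N:]  # indices of the 'best' coefficients
    Xt = np.zeros_like(X)
    Xt[keep] = X[keep]
    return np.fft.ifft(Xt).real        # .real drops any leftover asymmetry
```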

47 DFT: Parseval's theorem
sum_t( x_t^2 ) = sum_f( |X_f|^2 )
I.e., the DFT preserves the 'energy'; alternatively, it performs an axis rotation.
[Figure: the point x = {x0, x1} in the x0–x1 plane.]
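The theorem is easy to check numerically with the orthonormal DFT:

```python
import numpy as np

x = np.random.randn(128)
X = np.fft.fft(x, norm="ortho")
assert np.isclose(np.sum(x ** 2), np.sum(np.abs(X) ** 2))  # energy preserved
```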

48 Lower Bounding lemma
Using Parseval's theorem we can prove the lower bounding property! So, apply the DFT to each time series, keep the first 3–10 coefficients as a vector, and use an R-tree to index the vectors. The R-tree works with Euclidean distance, so this is fine.
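A small demonstration of why keeping the first coefficients can never overestimate the true distance, so the index returns a candidate superset with no false dismissals:

```python
import numpy as np

q, s = np.random.randn(128), np.random.randn(128)
Q, S = np.fft.fft(q, norm="ortho"), np.fft.fft(s, norm="ortho")

d_true = np.linalg.norm(q - s)          # Euclidean distance in the time domain
d_feat = np.linalg.norm(Q[:8] - S[:8])  # distance on the first 8 coefficients
assert d_feat <= d_true + 1e-9          # lower bound: no false dismissals
```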

49 Time series collections
Fourier and wavelets are the most prevalent and successful “descriptions” of time series. Next, we will consider collections of M time series, each of length N. What is the series that is “most similar” to all series in the collection? What is the second “most similar”, and so on…

50 Time series collections
Some notation: x_t denotes the values at time t; x(i) denotes the i-th series.

51 Principal Component Analysis Example
Exchange rates (vs. USD) for AUD, BEF, CAD, FRF, DEM, JPY, NLG, NZL, ESP, SEK, CHF, and GBP over roughly 2500 trading days.
[Figure: the series and the principal components u1–u4.]
u1 captures 48% of the variance; u2 adds 33% (81% total); u3 adds 11% (92%); u4 adds 4% (96%).
'Best' basis: { u1, u2, u3, u4 }. Each time series is described by its coefficients w.r.t. this basis, e.g., x(2) = 49.1·u1 + …

52 Principal component analysis
[Figure: each currency (CAD, ESP, SEK, GBP, AUD, FRF, BEF, NZL, CHF, NLG, DEM, JPY) plotted by its first two principal-component coefficients (c_{i,1}, c_{i,2}).]

53 Principal Component Analysis
Matrix notation — Singular Value Decomposition (SVD): X = U·Λ·V^T
[Figure: X holds the time series x(1), …, x(M); U holds the basis u1, …, uk for the time series; Λ·V^T gives the coefficients of each series w.r.t. the basis in U.]

54 Principal Component Analysis
Matrix notation — Singular Value Decomposition (SVD): X = U·Λ·V^T
[Figure: the same decomposition, now annotating V^T: its rows v'1, …, v'k form a basis for the measurements (rows of length N).]

55 Principal Component Analysis
Matrix notation — Singular Value Decomposition (SVD): X = U·Λ·V^T
[Figure: the same decomposition with Λ = diag(λ1, …, λk) shown separately: the λi are scaling factors, U is the basis for the time series, and V^T (rows v1, …, vk) the basis for the measurements.]

56 PCA gives another lower-dimensional transformation
It is easy to show that the lower bounding lemma holds, but PCA needs a collection of time series and is expensive to compute exactly.

57 Feature Spaces
[Figure: a series X and its approximation X' in three feature spaces: DFT (Agrawal, Faloutsos & Swami 1993), DWT with Haar wavelets 0–7 (Chan & Fu 1999), and SVD with eigenwaves 0–7 (Korn, Jagadish & Faloutsos 1997).]

58 Piecewise Aggregate Approximation (PAA)
[Figure: an original time series (n-dimensional vector) S = {s1, s2, …, sn} and its n'-segment PAA representation (n'-d vector) S' = {sv1, sv2, …, svn'}.]
The PAA representation satisfies the lower bounding lemma (Keogh, Chakrabarti, Mehrotra & Pazzani, 2000; Yi & Faloutsos, 2000).
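A minimal PAA sketch (it assumes n is divisible by the number of segments):

```python
import numpy as np

def paa(s, n_segments):
    """Piecewise Aggregate Approximation: the mean of each equal-length segment."""
    s = np.asarray(s, dtype=float)
    assert len(s) % n_segments == 0, "sketch assumes n divisible by n_segments"
    return s.reshape(n_segments, -1).mean(axis=1)
```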

59 Can we improve upon PAA?
n'-segment PAA representation (n'-d vector): S' = {sv1, sv2, …, svn'}
Adaptive Piecewise Constant Approximation (APCA): segments of varying length, each described by its value and its right endpoint, giving an n'-d vector S' = {sv1, sr1, sv2, sr2, …, svM, srM}, where M = n'/2 is the number of segments.

60 Distance Measure
[Figure: the exact (Euclidean) distance D(Q, S) between query Q and series S, and the lower bounding distance D_LB(Q', S) computed from the approximation Q' of Q.]

61 Lower Bounding the Dynamic Time Warping
Recent approaches use the Minimum Bounding Envelope (MBE) for bounding the constrained DTW:
Create a δ-envelope of the query Q (upper curve U, lower curve L), where δ is the warping constraint
Calculate the distance between the MBE of Q and any sequence A
One can show that: D(MBE(Q)_δ, A) <= DTW(Q, A)
[Figure: query Q, its envelope curves U and L, and a sequence A.]

62 Lower Bounding the Dynamic Time Warping
LB by Keogh: approximate the MBE and the sequence using MBRs; LB = 13.84 in the pictured example.
LB by Zhu and Shasha: approximate the MBE and the sequence using PAA; LB = 25.41 in the pictured example.

63 Computing the LB distance
Use PAA to approximate each time series A in the database, and the curves U and L of the query envelope, using k segments. Then LB_PAA can be computed as follows:
LB_PAA(Q, A) = sqrt( (n/k) · sum_{i=1..k} c_i ), with
c_i = (A'_i - U'_i)^2 if A'_i > U'_i;  (L'_i - A'_i)^2 if A'_i < L'_i;  0 otherwise

64 Here A'_i is the average of the i-th segment of the time series A, i.e.
A'_i = (k/n) · sum_{t=(n/k)(i-1)+1 .. (n/k)·i} a_t,
and similarly we compute U'_i and L'_i from U and L.
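A sketch in the spirit of these envelope bounds, applied pointwise rather than over PAA segments (i.e., closer to LB_Keogh); w is the warping constraint:

```python
import numpy as np

def lb_keogh(q, a, w):
    """Envelope lower bound on constrained DTW(q, a) with warping window w."""
    q, a = np.asarray(q, dtype=float), np.asarray(a, dtype=float)
    n = len(q)
    total = 0.0
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        U, L = q[lo:hi].max(), q[lo:hi].min()  # envelope of q at position i
        if a[i] > U:
            total += (a[i] - U) ** 2           # only out-of-envelope points count
        elif a[i] < L:
            total += (L - a[i]) ** 2
    return np.sqrt(total)
```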

