
1
Learning Trajectory Patterns by Clustering: Comparative Evaluation
Group D

2
Problem Description & Definition

3
Problem Description & Definition
Preprocessing: grid quantization
Distance/similarity: modified Euclidean distance, dynamic time warping (DTW), and longest common subsequence (LCSS)
Clustering: divisive (repeated bisection), agglomerative, and min-cut graph-based, with the number of clusters predefined
Clustering validation: ground-truth based; the Hungarian algorithm matches the generated clusters to the ground-truth clusters

4
Preprocessing: Grid Quantization
Grid quantization with cell size s = 2, followed by normalization.
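The two preprocessing steps on this slide can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are ours, and min-max normalization is an assumption about what "normalization" means here.

```python
import numpy as np

def grid_quantize(trajectory, s=2.0):
    """Snap each (x, y) point to the index of the grid cell of size s
    that contains it, reducing noise and the number of distinct
    positions that later distance computations must compare."""
    traj = np.asarray(trajectory, dtype=float)
    return np.floor(traj / s).astype(int)

def normalize(trajectory):
    """Min-max normalization: scale a trajectory into the unit square
    (an assumed interpretation of the slide's 'normalization')."""
    traj = np.asarray(trajectory, dtype=float)
    lo, hi = traj.min(axis=0), traj.max(axis=0)
    span = np.where(hi - lo == 0, 1.0, hi - lo)  # avoid division by zero
    return (traj - lo) / span

# With s = 2, nearby points collapse into the same cell:
print(grid_quantize([[0.5, 1.9], [1.2, 1.1], [4.8, 3.0]], s=2.0))
```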

5
Preprocessing: Computational Complexity Reduction
Entry and exit zones (Locations 1-4) are detected by clustering the starting and ending points of each trajectory (k-means clustering, k = 4).
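A sketch of the entry/exit detection step, assuming scikit-learn's k-means; the function name and the synthetic example data are ours.

```python
import numpy as np
from sklearn.cluster import KMeans

def detect_entry_exit_zones(trajectories, k=4, seed=0):
    """Cluster the start and end points of all trajectories with k-means.

    The fitted models define k entry zones and k exit zones, so later
    pairwise distance computations can be restricted to trajectories
    sharing the same zones (the complexity reduction on the slide)."""
    starts = np.array([t[0] for t in trajectories], dtype=float)
    ends = np.array([t[-1] for t in trajectories], dtype=float)
    entry = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(starts)
    exit_ = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(ends)
    return entry, exit_

# Eight synthetic trajectories entering from four corners of the scene:
trajs = [[(0, 0), (5, 5)], [(0, 1), (5, 6)], [(10, 0), (5, 5)], [(10, 1), (6, 5)],
         [(0, 10), (5, 4)], [(1, 10), (4, 5)], [(10, 10), (5, 5)], [(9, 10), (5, 6)]]
entry, exit_ = detect_entry_exit_zones(trajs, k=4)
print(entry.labels_)  # one entry zone per corner
```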

6
Distance Metrics
Modified Euclidean distance: for trajectories of unequal lengths (m > n).
Dynamic time warping: DTW compares unequal-length signals by finding a time warping that minimizes the total distance between matched points.
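Both metrics can be sketched with a cumulative-cost table (DTW) and an index-resampled point-wise comparison (modified Euclidean). The function names are ours, and the index-resampling variant of the modified Euclidean distance is an assumption about how the slides handle m > n.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic time warping distance between two point sequences.

    cost[i, j] holds the cheapest warping path aligning a[:i] with
    b[:j]; unequal lengths are handled naturally because a point may
    be matched to several points of the other sequence."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return cost[n, m]

def modified_euclidean(a, b):
    """Mean point-wise Euclidean distance after index-matching the
    longer trajectory onto the shorter one (one common way to handle
    m > n; the exact resampling used in the slides is an assumption)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    if len(a) < len(b):
        a, b = b, a  # ensure a is the longer trajectory
    idx = np.round(np.linspace(0, len(a) - 1, len(b))).astype(int)
    return float(np.mean(np.linalg.norm(a[idx] - b, axis=1)))
```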

7
Distance Metrics
Longest common subsequence (LCSS): s1 = {a, b, c, d, e, f}; s2 = {b, d, e, f, m, n}; LCSS(s1, s2) = {b, d, e, f}.
δ is a constant that controls how far back in time we can look, and ε is a constant that controls the size of the proximity region in which we look for matches.
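The LCSS recurrence with the slide's (δ, ε) parameters can be sketched as a dynamic program; the function name and the pluggable point-distance argument are ours.

```python
def lcss_length(s1, s2, delta, epsilon, dist=None):
    """Length of the longest common subsequence under LCSS matching.

    Two points match when they are within epsilon of each other (the
    proximity constraint) AND their indices differ by at most delta
    (how far back in time we may look)."""
    if dist is None:
        dist = lambda p, q: abs(p - q)  # default: scalar sequences
    n, m = len(s1), len(s2)
    L = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if dist(s1[i - 1], s2[j - 1]) <= epsilon and abs(i - j) <= delta:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    return L[n][m]

# The slide's symbolic example: exact matching (epsilon = 0, 0/1 distance)
# and an unconstrained look-back window.
print(lcss_length("abcdef", "bdefmn", delta=10, epsilon=0,
                  dist=lambda p, q: 0 if p == q else 1))
```

A common way to turn this into a distance is 1 − LCSS(s1, s2)/min(|s1|, |s2|), so identical sequences get distance 0.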

8
Distance-to-Similarity Conversion: Gaussian Kernel Function
A similarity matrix S = {s_ij}, which represents a fully connected graph, is constructed from the trajectory distances using a Gaussian kernel function, where D represents one of the distance measures defined previously and the parameter σ describes the trajectory neighborhood. Large values of σ cause trajectories that are further apart to have a higher similarity score, while small values lead to a sparser similarity matrix (more entries will be very small).
[Figure: DTW similarity matrices for σ = 0.1, 0.9, 2.1, 4.1, 7.1]
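The kernel formula itself was shown as an image on the slide; the standard form s_ij = exp(−D_ij² / (2σ²)) used below is an assumption consistent with the surrounding text, and the function name is ours.

```python
import numpy as np

def similarity_matrix(D, sigma):
    """Convert a pairwise distance matrix D into a similarity matrix
    via a Gaussian kernel: s_ij = exp(-D_ij**2 / (2 * sigma**2)).

    Larger sigma makes distant trajectories more similar; smaller
    sigma drives most off-diagonal entries toward zero (sparser S)."""
    D = np.asarray(D, float)
    return np.exp(-(D ** 2) / (2.0 * sigma ** 2))
```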

9
Clustering Methods (CLUTO)
Divisive: top-down clustering where the entire trajectory training set starts as a single cluster. The K clusters are obtained by performing K − 1 repeated bisections, where each bisecting split produces an optimal 2-way division of the similarity matrix. In addition to ensuring local optimality, a global optimization step refines the solution across all bisections.
Agglomerative: a bottom-up strategy that initially treats each trajectory as an individual cluster and merges similar clusters hierarchically in a tree-like structure, stopping when only K clusters remain.
Graph (min-cut): like the divisive method, graph methods divide the full dataset into individual clusters. Instead of operating directly on the similarity matrix, a nearest-neighbor graph is constructed in which each trajectory is a vertex connected by weighted edges to its most similar trajectories. The K clusters are found with a min-cut partitioning algorithm that divides the graph with minimal loss of edge weight.
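The deck uses CLUTO's scluster for all three methods; as a stand-in, the agglomerative strategy can be illustrated with SciPy's hierarchical clustering on a precomputed distance matrix. The function name and the choice of average linkage are ours, not CLUTO's configuration.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def agglomerative_clusters(D, k):
    """Bottom-up clustering on a precomputed distance matrix D.

    Each trajectory starts as its own cluster; the closest pair of
    clusters is merged repeatedly (average linkage here) until only
    k clusters remain."""
    condensed = squareform(np.asarray(D, float), checks=False)
    Z = linkage(condensed, method="average")  # full merge tree
    return fcluster(Z, t=k, criterion="maxclust")

# Four trajectories, two obvious groups ({0,1} and {2,3}):
D = [[0, 1, 10, 10],
     [1, 0, 10, 10],
     [10, 10, 0, 1],
     [10, 10, 1, 0]]
print(agglomerative_clusters(D, k=2))
```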

10
Clustering Validation
Ground-truth clusters (c1, c2, c3) are compared against the clusters to be evaluated. The Hungarian algorithm finds the matching between the two labelings that maximizes the number of matched trajectories.
Accuracy = n_matched / n_total
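This validation step can be sketched with SciPy's implementation of the Hungarian algorithm (`linear_sum_assignment`); the function name and contingency-table construction are ours.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(true_labels, pred_labels):
    """Accuracy after optimally matching predicted clusters to ground
    truth with the Hungarian algorithm: n_matched / n_total."""
    true_labels = np.asarray(true_labels)
    pred_labels = np.asarray(pred_labels)
    t_ids, p_ids = np.unique(true_labels), np.unique(pred_labels)
    # Contingency table: overlap of every (true, predicted) cluster pair.
    overlap = np.array([[np.sum((true_labels == t) & (pred_labels == p))
                         for p in p_ids] for t in t_ids])
    # Negate because linear_sum_assignment minimizes total cost.
    rows, cols = linear_sum_assignment(-overlap)
    return overlap[rows, cols].sum() / len(true_labels)

# Perfect clustering up to a relabeling still scores 1.0:
print(clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0]))
```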

11
Evaluation Dataset
CLUTO: a software package for clustering low- and high-dimensional datasets and for analyzing the characteristics of the resulting clusters. The standalone program scluster is used for clustering trajectories.
Lankershim Dataset: 1032 trajectories, 18 clusters.

12
Evaluation - Distance Metrics
How the size of the Gaussian kernel function influences the conversion from distance matrix to similarity matrix: σ should be large enough.
[Figure: accuracy vs. σ for DTW + agglomerative]

13
Evaluation - Distance Metrics
How the size of the Gaussian kernel function influences the conversion from distance matrix to similarity matrix: σ should be large enough.
[Figure: accuracy vs. σ for DTW + divisive]

14
Evaluation - Distance Metrics
How the size of the Gaussian kernel function influences the conversion from distance matrix to similarity matrix: σ should be large enough.
[Figure: accuracy vs. σ for modified Euclidean + divisive]

15
Evaluation - Distance Metrics
How the (δ, ε) parameters of LCSS influence the clustering results.
[Figure: accuracy vs. δ for LCSS + graph]

16
Evaluation - Clustering
How the (δ, ε) parameters of LCSS influence the clustering results.
[Figure: accuracy vs. ε for LCSS + graph]

17
Evaluation - Clustering
Parameters: modified Euclidean and DTW with σ = 7.1; LCSS with δ = 3, ε = 8.
Key: d1 = modified Euclidean, d2 = DTW, d3 = LCSS; c1 = divisive, c2 = agglomerative, c3 = graph.

Distance metric            |        d1         |        d2         |        d3
Clustering                 |  c1    c2    c3   |  c1    c2    c3   |  c1    c2    c3
Accuracy                   | 0.83  0.57  0.822 | 0.977 0.83  0.917 | 0.956 0.91  0.959
Distance computation (s)   |      0.0015       |       0.15        |       0.02
Clustering computation (s) | 2.859 0.359 0.297 | 2.782 0.375 0.305 | 3.031 0.328 0.532

18
Conclusion
[Table: distance metric vs. computation complexity; only the label d1 was recovered]

19
Demo

20
Thanks
