Visualizing and Clustering Life Science Applications in Parallel. HiCOMB 2015: 14th IEEE International Workshop on High Performance Computational Biology.

Presentation transcript:

Visualizing and Clustering Life Science Applications in Parallel. HiCOMB 2015, 14th IEEE International Workshop on High Performance Computational Biology at IPDPS 2015, Hyderabad, India, May 2015. Geoffrey Fox, School of Informatics and Computing, Digital Science Center, Indiana University Bloomington. 5/25/2015

Deterministic Annealing Algorithms. 5/25/2015

Some Motivation. Big Data requires high performance – achieve it with parallel computing. Big Data sometimes requires robust algorithms, as there is more opportunity to make mistakes. Deterministic annealing (DA) is one of the better approaches to robust optimization and is broadly applicable – it started as the "Elastic Net" by Durbin for the Travelling Salesman Problem (TSP); it tends to remove local optima; it addresses overfitting; it is much faster than simulated annealing. Physics systems find the true lowest energy state if you anneal, i.e. you equilibrate at each temperature as you cool. DA uses the mean field approximation, which is also used in "Variational Bayes" and "Variational Inference". 5/25/2015

(Deterministic) Annealing. Find the minimum at high temperature, where the problem is trivial. Make small changes, avoiding local minima, as the temperature is lowered. Typically gets better answers than standard libraries such as R and Mahout, and can be parallelized and put on GPUs etc. 5/25/2015

General Features of DA. In many problems, decreasing temperature is classic multiscale – finer resolution (√T is "just" a distance scale). In clustering, √T is a distance in the space of points (and centroids); for MDS it is a scale in the mapped Euclidean space. At T = ∞ all points are in the same place – the center of the universe: for MDS all Euclidean points are at the center and distances are zero; for clustering there is one cluster. As the temperature is lowered there are phase transitions in clustering cases where clusters split – the algorithm determines whether a split is needed as the second derivative matrix becomes singular. Note DA has similar features to hierarchical methods, and you do not have to specify a number of clusters; you need to specify a final distance scale. 5/25/2015

Math of Deterministic Annealing. H(Θ) is the objective function to be minimized as a function of the parameters Θ (as in the Stress formula given earlier for MDS). Gibbs distribution at temperature T: P(Θ) = exp(-H(Θ)/T) / ∫ dΘ exp(-H(Θ)/T), or P(Θ) = exp(-H(Θ)/T + F/T). Use the Free Energy, combining objective function and entropy: F = ∫ dΘ {P(Θ) H(Θ) + T P(Θ) ln P(Θ)}. Simulated annealing performs these integrals by Monte Carlo; deterministic annealing corresponds to doing the integrals analytically (by the mean field approximation) and is much, much faster. For some cases one needs to introduce a modified Hamiltonian so that the integrals are tractable: introduce extra parameters to be varied so that the modified Hamiltonian matches the original. In each case the temperature is lowered slowly – say by a factor of about 0.95 at each iteration. 5/25/2015
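The same relations in display form (a restatement of the slide's formulas; Θ denotes the full parameter set, and the -T ln Z form of F is the standard identity implied by the exponential form of P(Θ)):

```latex
P(\Theta) \;=\; \frac{e^{-H(\Theta)/T}}{\int d\Theta\, e^{-H(\Theta)/T}}
          \;=\; e^{-H(\Theta)/T \,+\, F/T},
\qquad
F \;=\; -\,T \ln\!\int d\Theta\, e^{-H(\Theta)/T}
  \;=\; \int d\Theta \,\bigl\{ P(\Theta)\,H(\Theta) \;+\; T\,P(\Theta)\ln P(\Theta) \bigr\}.
```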

Some Uses of Deterministic Annealing. DA Clustering improves K-means – Vectors: Rose (Gurewitz and Fox 1990; 486 citations encouraged me to revisit); clusters with fixed sizes and no tails (Proteomics team at Broad); no vectors: Hofmann and Buhmann (just use pairwise distances). Dimension reduction for visualization and analysis – Vectors: GTM, Generative Topographic Mapping; no vectors: SMACOF (Multidimensional Scaling, MDS; just use pairwise distances). Can apply to HMMs and general mixture models (less studied) – Gaussian Mixture Models; Probabilistic Latent Semantic Analysis with Deterministic Annealing (DA-PLSA) as an alternative to Latent Dirichlet Allocation for finding "hidden factors". Have scalable parallel versions of much of the above – mainly Java. Many clustering methods exist – it is not clear which is best, although DA is pretty good and improves K-means at increased computing cost, which is not always useful. DA clearly improves MDS, which is perhaps the only "reliable" dimension reduction? 5/25/2015

Clusters v. Regions. In Lymphocytes (4D) the clusters are distinct; DA is useful. In Pathology (54D), clusters divide space into regions and sophisticated methods like deterministic annealing are probably unnecessary. [Figures: Lymphocytes 4D; Pathology 54D] 5/25/2015

Some Problems we are working on. Analysis of Mass Spectrometry data to find peptides by clustering peaks (Broad Institute/Hyderabad) – ~0.5 million points in 2 dimensions (one collection), ~50,000 clusters summed over charges. Metagenomics – 0.5 million (increasing rapidly) points NOT in a vector space, hundreds of clusters per sample; apply MDS to phylogenetic trees. Pathology images in >50 dimensions. Social image analysis in a highish-dimension vector space – on the order of a million images, 1000 features per image, on the order of a million clusters. Finding communities from network graphs coming from social media contacts etc. 5/25/2015

MDS and Clustering on ~60K EMR

Colored by Sample – not clustered. Distances from vectors in word space: 128 4-mers for ~200K sequences. 5/25/2015

Examples of current DA algorithms: LC-MS. 5/25/2015

Background on LC-MS. Remarks of collaborators – Broad Institute/Hyderabad. Abundance of peaks in "label-free" LC-MS enables large-scale comparison of peptides among groups of samples. In fact, when a group of samples in a cohort is analyzed together, not only is it possible to "align" robustly or cluster the corresponding peaks across samples, but it is also possible to search for patterns or fingerprints of disease states which may not be detectable in individual samples. This property of the data lends itself naturally to big data analytics for biomarker discovery and is especially useful for population-level studies with large cohorts, as in the case of infectious diseases and epidemics. With increasingly large-scale studies, the need for fast yet precise cohort-wide clustering of large numbers of peaks assumes technical importance. In particular, a scalable parallel implementation of a cohort-wide peak clustering algorithm for LC-MS-based proteomic data can prove to be a critically important tool in clinical pipelines for responding to global epidemics of infectious diseases like tuberculosis, influenza, etc. 5/25/2015

Proteomics 2D DA Clustering with 60 clusters (will be 30,000 at T = 0.025). 5/25/2015

Fragment of 30,000 Clusters of Points. The brownish triangles are "sponge" peaks (soaking up trash) outside any cluster. The colored hexagons are peaks inside clusters, with the white hexagons being the determined cluster centers. 5/25/2015

Continuous Clustering. This is a very useful subtlety introduced by Ken Rose but not widely known, although it greatly improves the algorithm. Take a cluster k to be split into 2 with centers Y(k)_A and Y(k)_B, with initial values Y(k)_A = Y(k)_B at the original center Y(k). Then typically, if you make this change and perturb Y(k)_A and Y(k)_B, they will return to the starting position, as F is at a stable minimum (positive eigenvalue). But an instability (a negative eigenvalue) can develop, and one finds a genuine split. Implement by adding an arbitrary number p(k) of centers for each cluster: Z_i = Σ_{k=1}^{K} p(k) exp(-(X_i - Y(k))²/T), and the M step gives p(k) = C(k)/N. Halve p(k) at splits; one can't split easily in the standard case p(k) = 1. The weighting in sums like Z_i is now equipoint rather than equicluster, as p(k) is proportional to the number of points C(k) in the cluster. [Figures: Free Energy F vs Y(k)_A + Y(k)_B and vs Y(k)_A - Y(k)_B] 5/25/2015
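A minimal sketch of how the p(k) weighting enters the E and M steps, assuming 1-D points and the squared-distance energy (X_i - Y(k))²/T; data, names, and the split-free loop are illustrative only, not the authors' code:

```java
public class ContinuousClusteringSketch {
    public static void main(String[] args) {
        double[] x = {0.1, 0.2, 0.15, 5.0, 5.1, 4.9};   // toy 1-D data
        double[] y = {2.4, 2.6};                        // two centres just after a split of one cluster
        double[] p = {0.5, 0.5};                        // p(k) halved at the split
        double T = 10.0;

        for (int iter = 0; iter < 100; iter++) {
            int K = y.length, N = x.length;
            double[] C = new double[K];                 // expected number of points C(k)
            double[] num = new double[K];               // numerators for the centre update
            for (double xi : x) {
                // Z_i = sum_k p(k) exp(-(x_i - Y(k))^2 / T): "equipoint" weighting
                double[] w = new double[K];
                double Zi = 0.0;
                for (int k = 0; k < K; k++) {
                    w[k] = p[k] * Math.exp(-(xi - y[k]) * (xi - y[k]) / T);
                    Zi += w[k];
                }
                for (int k = 0; k < K; k++) {
                    double r = w[k] / Zi;               // responsibility of centre k for this point
                    C[k] += r;
                    num[k] += r * xi;
                }
            }
            for (int k = 0; k < K; k++) {
                y[k] = num[k] / C[k];                   // M step: new centre position
                p[k] = C[k] / N;                        // M step: p(k) = C(k)/N
            }
            T *= 0.95;                                  // lower the temperature slowly
        }
        System.out.println(java.util.Arrays.toString(y) + " " + java.util.Arrays.toString(p));
    }
}
```

As T falls, the two coincident centres either fall back together (stable minimum) or separate onto the two data groups, with p(k) tracking the fraction of points each captures.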

Trimmed Clustering. Clustering with position-specific constraints on variance: applying redescending M-estimators to label-free LC-MS data analysis (Rudolf Frühwirth, D. R. Mani and Saumyadipta Pyne), BMC Bioinformatics 2011, 12:358. H_TCC = Σ_{k=0}^{K} Σ_{i=1}^{N} M_i(k) f(i,k), with f(i,k) = (X(i) - Y(k))²/2σ(k)² for k > 0 and f(i,0) = c²/2 for k = 0. The 0'th cluster captures (at zero temperature) all points outside clusters (background). Clusters are trimmed: (X(i) - Y(k))²/2σ(k)² < c²/2. Relevant when there are well-defined errors. [Figure: distributions vs distance from cluster center at T ~ 0, T = 1 and T = 5] 5/25/2015
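A hedged sketch of the zero-temperature assignment implied by this cost: f(i,k) = (x_i - y_k)²/(2σ_k²) for k > 0 competes against the constant sponge cost f(i,0) = c²/2, so any point further than c·σ_k from every centre is trimmed into the background. The 1-D setting and all names are illustrative, not the paper's code:

```java
public class TrimmedClusteringSketch {
    // Returns 0 for the k = 0 "sponge" cluster, otherwise the 1-based index of the cheapest cluster.
    static int assignAtZeroTemperature(double xi, double[] y, double[] sigma, double c) {
        int best = 0;                                   // start with the sponge
        double bestCost = c * c / 2.0;                  // f(i,0) = c^2 / 2
        for (int k = 0; k < y.length; k++) {
            double d = xi - y[k];
            double cost = d * d / (2.0 * sigma[k] * sigma[k]);   // f(i,k) for k > 0
            if (cost < bestCost) { best = k + 1; bestCost = cost; }
        }
        return best;
    }

    public static void main(String[] args) {
        double[] y = {0.0, 5.0};
        double[] sigma = {1.0, 1.0};
        System.out.println(assignAtZeroTemperature(0.3, y, sigma, 2.0));  // -> 1 (inside cluster 1)
        System.out.println(assignAtZeroTemperature(2.5, y, sigma, 2.0));  // -> 0 (trimmed to sponge)
    }
}
```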

Cluster Count v. Temperature for 2 Runs. All start with one cluster at the far left. T = 1 is special, as measurement errors are divided out. DA2D counts clusters with one member as clusters; DAVS(2) does not. 5/25/2015

Simple Parallelism as in k-means. Decompose the points i over processors. The equations are either pleasingly parallel "maps" over i, or "All-Reductions" summing over i for each cluster. Parallel algorithm: each process holds all clusters and calculates the contributions to the clusters from the points in its node – e.g. Y(k) = Σ_i X_i / C(k), summed over the points contributing to cluster k. Runs well in MPI or MapReduce – see all the MapReduce k-means papers. (A minimal sketch of this pattern follows.) 5/25/2015
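A minimal sketch of the map/all-reduce pattern for one k-means-style centre update, using Java parallel streams in place of MPI or MapReduce: each point produces a small contribution array, the element-wise sum plays the role of the All-Reduce (or MapReduce combine), and each worker is assumed to hold all K centres. Hard nearest-centre assignment and 1-D data keep the example small; this is an illustration, not the SPIDAL code:

```java
import java.util.Arrays;
import java.util.stream.IntStream;

public class ParallelKMeansStep {
    public static void main(String[] args) {
        double[] x = {0.1, 0.2, 0.15, 5.0, 5.1, 4.9, 5.2};
        double[] y = {1.0, 4.0};                      // current centres Y(k), replicated on every worker
        int K = y.length;

        // "Map" over points i, then element-wise sum ("All-Reduce") of the per-point contributions.
        double[] acc = IntStream.range(0, x.length).parallel()
                .mapToObj(i -> {
                    double[] a = new double[2 * K];   // [sums per cluster..., counts per cluster...]
                    int k = nearest(x[i], y);
                    a[k] = x[i];
                    a[K + k] = 1.0;
                    return a;
                })
                .reduce(new double[2 * K], ParallelKMeansStep::add);

        // Centre update Y(k) = (sum of points in cluster k) / C(k)
        for (int k = 0; k < K; k++) {
            y[k] = acc[k] / acc[K + k];
        }
        System.out.println(Arrays.toString(y));       // roughly [0.15, 5.05]
    }

    static int nearest(double xi, double[] y) {
        int best = 0;
        for (int k = 1; k < y.length; k++)
            if (Math.abs(xi - y[k]) < Math.abs(xi - y[best])) best = k;
        return best;
    }

    static double[] add(double[] a, double[] b) {     // associative combiner used by the reduction
        double[] c = new double[a.length];
        for (int j = 0; j < a.length; j++) c[j] = a[j] + b[j];
        return c;
    }
}
```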

Better Parallelism. The previous model is correct at the start, but each point does not really contribute to each cluster, as contributions are damped exponentially by exp(-(X_i - Y(k))²/T). For the Proteomics problem, on average only 6.45 clusters are needed per point if we require (X_i - Y(k))²/T ≤ ~40 (as exp(-40) is tiny). So we only need to keep the nearby clusters for each point. As the average number of clusters is ~20,000, this gives a factor of ~3000 improvement. Further, communication is no longer all global; it has nearest-neighbor components and is calculated by parallelism over clusters, which can be done in parallel if they are separated. 5/25/2015
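A small illustrative sketch of the pruning step described above: for each point, keep only the clusters whose exponential weight is non-negligible, i.e. (X_i - Y(k))²/T ≤ ~40. The 1-D data and names are assumptions for illustration:

```java
import java.util.ArrayList;
import java.util.List;

public class NearbyClusters {
    // Clusters worth keeping for point xi: those with (xi - Y(k))^2 / T <= cutoff,
    // so that exp(-(xi - Y(k))^2 / T) is still significant.
    static List<Integer> nearbyClusters(double xi, double[] y, double T, double cutoff) {
        List<Integer> keep = new ArrayList<>();
        for (int k = 0; k < y.length; k++) {
            double d2 = (xi - y[k]) * (xi - y[k]);
            if (d2 / T <= cutoff) keep.add(k);
        }
        return keep;
    }

    public static void main(String[] args) {
        double[] y = {0.0, 1.0, 2.0, 50.0, 100.0};
        System.out.println(nearbyClusters(1.2, y, 0.5, 40.0));   // -> [0, 1, 2]
    }
}
```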

Speedups for several runs on Tempest, from 8-way through 384-way MPI parallelism with one thread per process. We look at different choices for MPI processes, which are either inside nodes or on separate nodes. 5/25/2015

Parallelism within a Single Node of the Madrid Cluster. A set of runs on peak data on a single node with 16 cores, with either threads or MPI giving parallelism. [Chart x-axis: parallelism (#threads or #processes)] 5/25/2015

METAGENOMICS – SEQUENCE CLUSTERING. Non-metric Spaces, O(N²) Algorithms – Illustrate Phase Transitions. 5/25/2015

Start at T = ∞ with 1 cluster. Decrease T; clusters emerge at instabilities. 5/25/2015



446K sequences, ~100 clusters. 5/25/2015

METAGENOMICS – SEQUENCE CLUSTERING. Non-metric Spaces, O(N²) Algorithms – Compare Other Methods. 5/25/2015

"Divergent" Data Sample: 23 True Clusters. Comparison of DA-PWC with UClust (cuts 0.65 to 0.95) and CDhit on the divergent data set. Table rows: total # of clusters; total # of clusters uniquely identified (i.e. one original cluster goes to one UClust cluster); total # of shared clusters with significant sharing (one UClust cluster goes to >1 real cluster); total # of UClust clusters that are just part of a real cluster – entries such as (11) and 72(62), where numbers in brackets only have one member; total # of real clusters that are one UClust cluster but the UClust cluster is spread over multiple real clusters; total # of real clusters that have a significant contribution from >1 UClust cluster. 5/25/2015

PROTEOMICS. No clear clusters. 5/25/2015

Protein Universe Browser for COG Sequences, with a few illustrative biologically identified clusters. 5/25/2015

Heatmap of biology distance (Needleman-Wunsch) vs 3D Euclidean distances. If d is a distance, so is f(d) for any monotonic f; optimize the choice of f. 5/25/2015

O(N²) ALGORITHMS? 5/25/2015

Algorithm Challenges. See the NRC Massive Data Analysis report. O(N) algorithms for O(N²) problems. Parallelizing Stochastic Gradient Descent. Streaming data algorithms – balance and interplay between batch methods (the most time-consuming) and interpolative streaming methods. Graph algorithms – need shared memory? The Machine Learning community uses parameter servers; Parallel Computing (MPI) would not recommend this? Is the classic distributed model for a "parameter service" better? Apply the best of parallel computing – communication and load balancing – to Giraph/Hadoop/Spark. Are data analytics sparse? Many cases are full matrices. BTW, need Java Grande – some C++, but Java is most popular in ABDS, with Python, Erlang, Go, Scala (compiles to the JVM), etc. 5/25/2015

O(N²) interactions between the green and purple clusters should be representable by centroids, as in Barnes-Hut. Hard, as there is no Gauss theorem and no multipole expansion, and the points are really in a 1000-dimensional space, as they were clustered before the 3D projection. O(N²) green-green and purple-purple interactions have value, but green-purple interactions are "wasted". "Clean" sample of 446K. 5/25/2015

Use a Barnes-Hut OctTree, originally developed to make O(N²) astrophysics O(N log N), to give similar speedups in machine learning. 5/25/2015

OctTree for a 100K sample of Fungi. We use the OctTree for logarithmic interpolation (streaming data). 5/25/2015

Fungi Analysis. 5/25/2015

Fungi Analysis. Multiple species from multiple places. Several sources of sequences, starting with 446K and eventually boiled down to ~10K curated sequences with 61 species. Original sample – clustering and MDS. Final sample – MDS and other clustering methods. Note MSA and SWG give similar results. Some species are clearly split. Some species are diffuse; others are compact, making a fixed distance cut unreliable – easy for humans! MDS is very clear on structure and clustering artifacts. Why not do "high-value" clustering as an interactive iteration driven by MDS? 5/25/2015

Fungi – 4 Classic Clustering Methods. 5/25/2015



Same Species. 5/25/2015

Same Species, Different Locations. 5/25/2015

Parallel Data Mining. 5/25/2015

Parallel Data Analytics. Streaming algorithms have interesting differences, but "batch" data analytics is "just classic parallel computing", with the usual features such as SPMD and BSP. Expect systematics similar to simulations, where static regular problems are straightforward but dynamic irregular problems are technically hard and high-level approaches fail (see High Performance Fortran, HPF) – regular meshes worked well, but adaptive dynamic meshes did not, although "real people with MPI" could parallelize them. However, using libraries is successful at either the lowest (communication) level or the higher "core analytics" level. Data analytics does not yet have "good regular parallel libraries" – graph analytics has the most attention. 5/25/2015

Remarks on Parallelism I. Maximum Likelihood or χ² both lead to objective functions like: minimize Σ_{items i=1}^{N} (positive nonlinear function of the unknown parameters for item i). Typically decompose the items i and parallelize over both i and the parameters to be determined. Solve iteratively with a (clever) first- or second-order approximation to the shift in the objective function – sometimes the steepest descent direction, sometimes Newton – with the classic Expectation Maximization structure; the steepest descent shift is a sum over the shifts calculated from each point. Classic method – take all (millions) of items in the data set and move the full distance. Stochastic Gradient Descent (SGD) – take randomly a few hundred items from the data set, calculate the shifts over these, and move a tiny distance. SGD cannot parallelize over items. (A toy contrast of the two update styles is sketched below.) 5/25/2015
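A toy illustration of the two update styles, using the trivial objective Σ_i (x_i - θ)²; the data, learning rate, and batch size are illustrative assumptions, not anything from the slides:

```java
import java.util.Random;

public class GradientStyles {
    public static void main(String[] args) {
        Random rnd = new Random(1);
        double[] x = new double[100_000];
        for (int i = 0; i < x.length; i++) x[i] = 3.0 + rnd.nextGaussian();

        // Classic (batch) step: gradient summed over ALL items, one full-size move.
        double theta = 0.0;
        double grad = 0.0;
        for (double xi : x) grad += 2 * (theta - xi);
        theta -= grad / (2 * x.length);               // for this quadratic, one step hits the minimiser
        System.out.println("batch step:      " + theta);

        // Stochastic Gradient Descent: a few hundred random items, a tiny step, many repeats.
        double thetaSgd = 0.0, lr = 0.05;
        int batch = 200;
        for (int step = 0; step < 500; step++) {
            double g = 0.0;
            for (int b = 0; b < batch; b++) {
                double xi = x[rnd.nextInt(x.length)];
                g += 2 * (thetaSgd - xi);
            }
            thetaSgd -= lr * g / batch;               // tiny move per mini-batch
        }
        System.out.println("after SGD steps: " + thetaSgd);
    }
}
```

The batch step parallelizes naturally over items (the gradient sum is an all-reduce), whereas each tiny SGD step touches only a handful of items, which is why the slide notes SGD cannot parallelize over items.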

Remarks on Parallelism II. Need to cover non-vector semimetric spaces and vector spaces for clustering and dimension reduction (N points in a space). Semimetric spaces just have pairwise distances δ(i, j) defined between points in the space. MDS minimizes the Stress, which illustrates this: σ(X) = Σ_{i<j=1}^{N} weight(i,j) (δ(i,j) - d(X_i, X_j))². Vector spaces have Euclidean distances and scalar products – algorithms can be O(N), and these are best for clustering; but for MDS, O(N) methods may not be best, as the obvious objective function is O(N²). Important new algorithms are needed to define O(N) versions of current O(N²) methods – they "must" work intuitively and be shown in principle. Note matrix solvers often use conjugate gradient, which converges in far fewer iterations – a big gain for a matrix with a million rows; this removes a factor of N in the time complexity. The ratio of #clusters to #points is important; new clustering ideas are needed if this ratio is not small. 5/25/2015

Problem Structure. Note learning networks have a huge number of parameters (11 billion in the Stanford work), so it is inconceivable to look at the second derivative. Clustering and MDS have lots of parameters, but it can be practical to look at the second derivative and use Newton's method to minimize. Parameters are determined in a distributed fashion but are typically needed globally – MPI uses broadcast and "AllCollectives", implying Map-Collective is a useful programming model; the AI community uses a parameter server and accesses it as needed. Non-optimal? 5/25/2015

MDS in more detail. 5/25/2015

WDA-SMACOF: "Best" MDS. Semimetric spaces just have pairwise distances δ(i, j) defined between points in the space. MDS minimizes the Stress with pairwise distances δ(i, j): σ(X) = Σ_{i<j=1}^{N} weight(i,j) (δ(i,j) - d(X_i, X_j))². SMACOF is a clever Expectation Maximization method that chooses a good steepest descent. It is improved by Deterministic Annealing reducing the distance scale; DA does not impact compute time much and gives DA-SMACOF. Classic SMACOF is O(N²) for uniform weights and O(N³) for non-trivial weights, but non-uniform weights arise from the preferred Sammon method weight(i,j) = 1/δ(i, j), or from missing distances put in with weight(i,j) = 0. Use conjugate gradient, which converges in far fewer iterations – a big gain for a matrix with a million rows; this removes a factor of N in time complexity and gives WDA-SMACOF. 5/25/2015
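A small sketch of the weighted stress being minimized, with the two weight choices the slide mentions: Sammon weighting w(i,j) = 1/δ(i,j) and w(i,j) = 0 for a missing observed distance (here marked with NaN). This is an illustration of the objective only, not the SPIDAL WDA-SMACOF implementation:

```java
public class WeightedStress {
    // delta holds observed dissimilarities (NaN = missing), x holds the mapped coordinates.
    static double stress(double[][] delta, double[][] x, boolean sammon) {
        double s = 0.0;
        int n = delta.length;
        for (int i = 0; i < n; i++) {
            for (int j = i + 1; j < n; j++) {
                if (Double.isNaN(delta[i][j])) continue;          // missing distance: weight 0
                double w = sammon ? 1.0 / delta[i][j] : 1.0;      // Sammon or uniform weight
                double d = euclid(x[i], x[j]);
                s += w * (delta[i][j] - d) * (delta[i][j] - d);
            }
        }
        return s;
    }

    static double euclid(double[] a, double[] b) {
        double s = 0;
        for (int k = 0; k < a.length; k++) s += (a[k] - b[k]) * (a[k] - b[k]);
        return Math.sqrt(s);
    }

    public static void main(String[] args) {
        double[][] delta = {{0, 1, Double.NaN}, {1, 0, 2}, {Double.NaN, 2, 0}};
        double[][] x = {{0, 0}, {1, 0}, {2.5, 0}};
        System.out.println(stress(delta, x, true));   // 0.125 for this toy configuration
    }
}
```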

Timing of WDA-SMACOF: 20k to 100k AM Fungal sequences on 600 cores. 5/25/2015

WDA-SMACOF Timing. Input data: 100k to 400k AM Fungal sequences. Environment: 32 nodes (1024 cores) to 128 nodes (4096 cores) on BigRed2, using the Harp plug-in for Hadoop (MPI performance). 5/25/2015

Spherical Phylogram. Take a set of sequences mapped to nD with MDS (WDA-SMACOF), with n = 3 or ~20 – does n = 20 capture ~all features of the dataset? Consider a phylogenetic tree and use neighbor-joining formulae to calculate the distances of nodes to sequences (or later to other nodes), starting at the bottom of the tree. Do a new MDS, fixing the mapping of the sequences, noting that sequences + nodes have defined distances. Use RAxML or Neighbor Joining (n = 20?) to find the tree. Random note: you do not need Multiple Sequence Alignment; pairwise tools are easier to use and give reliably good results. 5/25/2015

Spherical Phylograms (MSA and SWG). RAxML result visualized in FigTree; Spherical Phylogram visualized in PlotViz for MSA or SWG distances. 5/25/2015

Quality of the 3D Phylogenetic Tree. EM-SMACOF is basic SMACOF; LMA was the previous best method, using a Levenberg-Marquardt nonlinear χ² solver; WDA-SMACOF finds the best result. 3 different distance measures. Metric: sum of branch lengths of the Spherical Phylogram generated in 3D space, on two datasets. 5/25/2015

Summary. Always run MDS: it gives insight into the data and the performance of machine learning – it leads to a data browser, as GIS does for spatial data; 3D is better than 2D; ~20D better than MSA? Clustering observations: do you care about quality, or are you just cutting up space into parts? Deterministic annealing always makes clustering more robust; continuous clustering enables hierarchy; trimmed clustering cuts off tails; there are distinct O(N) and O(N²) algorithms. Use Conjugate Gradient. 5/25/2015

Java Grande. 5/25/2015

Java Grande. We once tried to encourage the use of Java in HPC with the Java Grande Forum, but Fortran, C and C++ remain the central HPC languages – not helped by the .com crash and the collapse of Sun. The pure Java CartaBlanca, a 2005 R&D100 award-winning project, was an early successful example of HPC use of Java in a simulation tool for non-linear physics on unstructured grids. Of course Java is a major language in ABDS, and as data analysis and simulation are naturally linked, we should consider broader use of Java. Using Habanero Java (from Rice University) for threads and mpiJava or FastMPJ for MPI, we are gathering a collection of high-performance parallel Java analytics – converted from C#, and the sequential Java is faster than the sequential C#. So we will have either Hadoop+Harp or classic Threads/MPI versions in a Java Grande version of Mahout. 5/25/2015

Performance of MPI Kernel Operations. Pure Java, as in FastMPJ, is slower than Java interfacing to the C version of MPI. 5/25/2015

Java Grande and C# on 40K point DAPWC Clustering. Very sensitive to threads v MPI. [Chart: total times for C# and Java at 64-way, 128-way and 256-way parallelism on TXP nodes; the C# hardware has ~0.7 the performance of the Java hardware] 5/25/2015

Java and C# on 12.6K point DAPWC Clustering. [Chart: time in hours vs #Threads x #Processes per node (1x1, 2x2, 1x2, 1x4, 2x1, 1x8, 4x1, 2x4, 4x2, 8x1) for different numbers of nodes and total parallelism; the C# hardware has ~0.7 the performance of the Java hardware] 5/25/2015

Data Analytics in SPIDAL. 5/25/2015

Analytics and the DIKW Pipeline. Data goes through a pipeline: Raw data → Data → Information → Knowledge → Wisdom → Decisions. Each link is enabled by a filter, which is "business logic" or "analytics". We are interested in filters that involve "sophisticated analytics" requiring non-trivial parallel algorithms – improving the state of the art in both algorithm quality and (parallel) performance. Design and build SPIDAL (Scalable Parallel Interoperable Data Analytics Library). [Figure: Data → (Analytics) → Information → (More Analytics) → Knowledge] 5/25/2015

Strategy to Build SPIDAL. Analyze Big Data applications to identify the analytics needed and generate benchmark applications. Analyze existing analytics libraries (in practice limited to some application domains) – catalog the library members available and their performance: Mahout has low performance, R is largely sequential and missing key algorithms, MLlib is just starting. Identify big data computer architectures. Identify a software model to allow interoperability and performance. Design or identify new or existing algorithms, including parallel implementations. Collaborate with application scientists and the computer systems and statistics/algorithms communities. 5/25/2015

Machine Learning in Network Science, Imaging in Computer Vision, Pathology, Polar Science, Biomolecular Simulations.
Columns: Algorithm | Applications | Features | Status | Parallelism
Graph Analytics:
Community detection | Social networks, webgraph | Graph | P-DM | GML-GrC
Subgraph/motif finding | Webgraph, biological/social networks | | P-DM | GML-GrB
Finding diameter | Social networks, webgraph | | P-DM | GML-GrB
Clustering coefficient | Social networks | | P-DM | GML-GrC
Page rank | Webgraph | | P-DM | GML-GrC
Maximal cliques | Social networks, webgraph | | P-DM | GML-GrB
Connected component | Social networks, webgraph | | P-DM | GML-GrB
Betweenness centrality | Social networks | Graph, non-metric, static | P-Shm | GML-GrA
Shortest path | Social networks, webgraph | | P-Shm |
Spatial Queries and Analytics:
Spatial relationship based queries | GIS/social networks/pathology informatics | Geometric | P-DM | PP
Distance based queries | | | P-DM | PP
Spatial clustering | | | Seq | GML
Spatial modeling | | | Seq | PP
Key: GML = Global (parallel) ML; GrA = Static; GrB = Runtime partitioning.
5/25/2015

Some specialized data analytics in SPIDAL.
Columns: Algorithm | Applications | Features | Status | Parallelism
Core Image Processing:
Image preprocessing | Computer vision/pathology informatics | Metric space point sets, neighborhood sets & image features | P-DM | PP
Object detection & segmentation | | | P-DM | PP
Image/object feature computation | | | P-DM | PP
3D image registration | | | Seq | PP
Object matching | | Geometric | Todo | PP
3D feature extraction | | | Todo | PP
Deep Learning:
Learning network, Stochastic Gradient Descent | Image understanding, language translation, voice recognition, car driving | Connections in artificial neural net | P-DM | GML
Key: PP = Pleasingly Parallel (Local ML); Seq = Sequential available; GrA = Good distributed algorithm needed; Todo = No prototype available; P-DM = Distributed memory available; P-Shm = Shared memory available.
5/25/2015

Some Core Machine Learning Building Blocks.
Columns: Algorithm | Applications | Features | Status | Parallelism
DA Vector Clustering | Accurate clusters | Vectors | P-DM | GML
DA Non-metric Clustering | Accurate clusters, biology, web | Non-metric, O(N²) | P-DM | GML
K-means: basic, fuzzy and Elkan | Fast clustering | Vectors | P-DM | GML
Levenberg-Marquardt Optimization | Non-linear Gauss-Newton, use in MDS | Least squares | P-DM | GML
SMACOF Dimension Reduction | DA-MDS with general weights | Least squares, O(N²) | P-DM | GML
Vector Dimension Reduction | DA-GTM and others | Vectors | P-DM | GML
TFIDF Search | Find nearest neighbors in document corpus | Bag of "words" (image features) | P-DM | PP
All-pairs similarity search | Find pairs of documents with TFIDF distance below a threshold | | Todo | GML
Support Vector Machine (SVM) | Learn and classify | Vectors | Seq | GML
Random Forest | Learn and classify | Vectors | P-DM | PP
Gibbs sampling (MCMC) | Solve global inference problems | Graph | Todo | GML
Latent Dirichlet Allocation (LDA) with Gibbs sampling or Var. Bayes | Topic models (latent factors) | Bag of "words" | P-DM | GML
Singular Value Decomposition (SVD) | Dimension reduction and PCA | Vectors | Seq | GML
Hidden Markov Models (HMM) | Global inference on sequence models | Vectors | Seq | PP & GML
5/25/2015

Some Futures. Always run MDS: it gives insight into the data and leads to a data browser, as GIS does for spatial data. The claim is that algorithm change gave as much performance increase as hardware change in simulations; will this happen in analytics? Today is like parallel computing 30 years ago with regular meshes; we will learn how to adapt methods automatically to give "multigrid"- and "fast multipole"-like algorithms. Need to start developing the libraries that support Big Data: understand architecture issues, have coupled batch and streaming versions, and develop much better algorithms. Please join the SPIDAL (Scalable Parallel Interoperable Data Analytics Library) community. 5/25/2015