Presentation on theme: "Big Data and Simulations: HPC and Clouds"— Presentation transcript:

1 Big Data and Simulations: HPC and Clouds
Data-Intensive Science and Technologies Workshop, RAL, UK
Geoffrey Fox, September 14, 2016
Department of Intelligent Systems Engineering
School of Informatics and Computing, Digital Science Center
Indiana University Bloomington

2 Abstract I We review several questions at the intersection of Big Data, Big Simulations, Clouds and HPC. We base our discussion on an analysis of many big data and simulation problems and a set of properties -- the Big Data Ogres -- characterizing them. We consider broad topics: What are the application and user requirements? e.g. is the data streaming, and how similar are commercial and scientific requirements? What is the execution structure of problems? e.g. is it dataflow or more like MPI? Should we use threads or processes? Is execution pleasingly parallel? 11/15/2018

3 Abstract II What about the many choices for infrastructure and middleware? Should we use a classic HPC cluster, Docker or OpenStack? Where are Big Data (Apache) approaches superior or inferior to those familiar from Grid and HPC work? The choice of language -- C++, Java, Scala, Python, R -- highlights performance v. productivity trade-offs. What is the actual performance of Big Data implementations and what are good benchmarks? Is software sustainability important, and is the Apache model a good approach to it? How does the exascale initiative fit in?

4 Why Connect (“Converge”) Big Data and HPC
Two major trends in computing systems are:
- Growth in high performance computing (HPC), with an international exascale initiative (China in the lead)
- The big data phenomenon, with an accompanying cloud infrastructure of well-publicized, dramatic and increasing size and sophistication
Note "Big Data" is largely an industry initiative, although the software used is often open source. The HPC label overlaps with "research": the USA HPC community is largely responsible for Astronomy and Accelerator (LHC, Belle, Light Source ...) data analysis.
Merge HPC and Big Data to get:
- More efficient sharing of large-scale resources running simulations and data analytics
- Higher performance Big Data algorithms
- A richer software environment for the research community, building on many big data tools
- An easier sustainability model for HPC -- HPC does not have the resources to build and maintain a full software stack

5 Convergence Points (Nexus) for HPC-Cloud-Big Data-Simulation
Nexus 1: Applications -- Divide use cases into Data and Model and compare characteristics separately in these two components with 64 Convergence Diamonds (features).
Nexus 2: Software -- High Performance Computing (HPC) Enhanced Big Data Stack, HPC-ABDS: 21 layers adding a high performance runtime to Apache systems (Hadoop is fast!). Establish principles to get good performance from the Java or C programming languages.
Nexus 3: Hardware -- Use Infrastructure as a Service (IaaS) and DevOps to automate deployment of software-defined systems on hardware designed for functionality and performance, e.g. appropriate disks, interconnect, memory. (Not covered in this talk.)

6 Application Nexus Use-case Data and Model NIST Collection
Big Data Ogres; Convergence Diamonds

7 Data and Model in Big Data and Simulations I
Need to discuss Data and Model, as problems have both intermingled, but we can get insight by separating them, which allows better understanding of Big Data - Big Simulation "convergence" (or differences!).
The Model is a user construction: it has a "concept" and parameters, and gives results determined by the computation. We use the term "model" in a general fashion to cover all of these.
Big Data problems can be broken up into Data and Model:
- For clustering, the model parameters are the cluster centers, while the data is the set of points to be clustered
- For queries, the model is the structure of the database and the results of the query, while the data is the whole database queried plus the SQL query
- For deep learning with ImageNet, the model is the chosen network with the network link weights as model parameters; the data is the set of images used for training or classification
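The clustering example above can be made concrete with a tiny k-means sketch (our own illustrative code, not from the talk): the data (the points) stays fixed across iterations, while the model (the cluster centers) is what the computation updates.

```python
import random

def kmeans(data, k, iters=10):
    """Toy 1D k-means separating Data (points) from Model (centers)."""
    model = random.sample(data, k)  # model parameters: cluster centers
    for _ in range(iters):
        # assignment: map each data point to its nearest center
        clusters = [[] for _ in range(k)]
        for x in data:
            j = min(range(k), key=lambda c: (x - model[c]) ** 2)
            clusters[j].append(x)
        # update: recompute the model from the data
        model = [sum(c) / len(c) if c else model[i]
                 for i, c in enumerate(clusters)]
    return model

random.seed(0)  # deterministic initialization for this demo
centers = sorted(kmeans([0.9, 1.1, 1.0, 9.0, 9.2, 8.8], k=2))
```

On this well-separated 1D data the centers converge to roughly 1.0 and 9.0, while the point list is never modified -- the Data/Model split in miniature.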

8 Data and Model in Big Data and Simulations II
Simulations can also be considered as Data plus Model:
- The Model can be a formulation with particle dynamics or partial differential equations, defined by parameters such as particle positions and discretized velocity, pressure and density values
- The Data can be small, when it is just boundary conditions
- The Data can be large, with data assimilation (weather forecasting) or when data visualizations are produced by the simulation
Big Data implies the Data is large, but the Model varies in size:
- e.g. LDA with many topics, or deep learning, has a large model
- Clustering or dimension reduction can be quite small in model size
Data is often static between iterations (unless streaming); Model parameters vary between iterations

9 Online Use Case Form

10 51 Detailed Use Cases: Contributed July-September 2013; cover goals, data features such as the 3 V's, software, hardware
- Government Operation (4): National Archives and Records Administration, Census Bureau
- Commercial (8): Finance in Cloud, Cloud Backup, Mendeley (Citations), Netflix, Web Search, Digital Materials, Cargo shipping (as in UPS)
- Defense (3): Sensors, Image surveillance, Situation Assessment
- Healthcare and Life Sciences (10): Medical records, Graph and Probabilistic analysis, Pathology, Bioimaging, Genomics, Epidemiology, People Activity models, Biodiversity
- Deep Learning and Social Media (6): Driving Car, Geolocate images/cameras, Twitter, Crowd Sourcing, Network Science, NIST benchmark datasets
- The Ecosystem for Research (4): Metadata, Collaboration, Language Translation, Light source experiments
- Astronomy and Physics (5): Sky Surveys including comparison to simulation, Large Hadron Collider at CERN, Belle Accelerator II in Japan
- Earth, Environmental and Polar Science (10): Radar Scattering in Atmosphere, Earthquake, Ocean, Earth Observation, Ice sheet Radar scattering, Earth radar mapping, Climate simulation datasets, Atmospheric turbulence identification, Subsurface Biogeochemistry (microbes to watersheds), AmeriFlux and FLUXNET gas sensors
- Energy (1): Smart grid
Published by NIST with a common set of 26 features recorded for each use case; "Version 2" is being prepared. Biased to science.

11 Sample Features of 51 Use Cases I
PP (26): "All" -- Pleasingly Parallel or Map Only
MR (18): Classic MapReduce (add MRStat below for the full count)
MRStat (7): Simple version of MR where key computations are simple reductions, as found in statistics such as histograms and averages
MRIter (23): Iterative MapReduce or MPI (Flink, Spark, Twister)
Graph (9): Complex graph data structure needed in analysis
Fusion (11): Integrate diverse data to aid discovery/decision making; could involve sophisticated algorithms or could just be a portal
Streaming (41): Some data comes in incrementally and is processed this way
Classify (30): Classification -- divide data into categories
S/Q (12): Index, Search and Query
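The MRStat pattern can be illustrated in a few lines of plain Python (our own sketch, not tied to any framework): the map phase emits a bin key per value, and the reduce phase is a simple count, yielding a histogram.

```python
from collections import Counter

def mrstat_histogram(values, bin_width=1.0):
    """MRStat-style job: map values to bin keys, reduce by counting."""
    keyed = (int(v // bin_width) for v in values)  # map: value -> bin key
    return Counter(keyed)                          # reduce: simple sum per key

hist = mrstat_histogram([0.1, 0.4, 1.2, 1.9, 2.5])
```

Because the reduction is associative and tiny compared to the data, MRStat jobs parallelize almost as easily as Map Only ones.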

12 Sample Features of 51 Use Cases II
CF (4): Collaborative Filtering for recommender engines
LML (36): Local Machine Learning (independent for each parallel entity) -- an application could have GML as well
GML (23): Global Machine Learning -- Deep Learning, Clustering, LDA, PLSI, MDS, large-scale optimizations as in Variational Bayes, MCMC, Lifted Belief Propagation, Stochastic Gradient Descent, L-BFGS, Levenberg-Marquardt. This can be called EGO, Exascale Global Optimization, with scalable parallel algorithms
Workflow (51): Universal
GIS (16): Geotagged data, often displayed in ESRI, Microsoft Virtual Earth, Google Earth, GeoServer etc.
HPC (5): Classic large-scale simulation of cosmos, materials, etc., generating (visualization) data
Agent (2): Simulations of models of data-defined macroscopic entities represented as agents

13 7 Computational Giants of NRC Massive Data Analysis Report
Big Data Models?
G1: Basic Statistics, e.g. MRStat
G2: Generalized N-Body Problems
G3: Graph-Theoretic Computations
G4: Linear Algebraic Computations
G5: Optimizations, e.g. Linear Programming
G6: Integration, e.g. LDA and other GML
G7: Alignment Problems, e.g. BLAST

14 HPC (Simulation) Benchmark Classics
Linpack or HPL: Parallel LU factorization for solution of linear equations; HPCG
NPB version 1: Mainly classic HPC solver kernels
- MG: Multigrid
- CG: Conjugate Gradient
- FT: Fast Fourier Transform
- IS: Integer sort
- EP: Embarrassingly Parallel
- BT: Block Tridiagonal
- SP: Scalar Pentadiagonal
- LU: Lower-Upper symmetric Gauss-Seidel
Simulation Models

15 13 Berkeley Dwarfs Largely Models for Data or Simulation
Dense Linear Algebra; Sparse Linear Algebra; Spectral Methods; N-Body Methods; Structured Grids; Unstructured Grids; MapReduce; Combinational Logic; Graph Traversal; Dynamic Programming; Backtrack and Branch-and-Bound; Graphical Models; Finite State Machines
The first 6 of these correspond to Colella's original list (classic simulations); Monte Carlo was dropped, and N-body methods are a subset of Particle in Colella. Note it is a little inconsistent that MapReduce is a programming model while spectral method is a numerical method. Need multiple facets to classify use cases!

16 Classifying Use Cases

17 Classifying Use Cases The Big Data Ogres were built on a collection of 51 big data use cases gathered by the NIST Public Working Group, where 26 properties were recorded for each application. This information was combined with other studies, including the Berkeley dwarfs, the NAS parallel benchmarks and the Computational Giants of the NRC Massive Data Analysis Report. The Ogre analysis led to a set of 50 features, divided into four views, that can be used to categorize and distinguish between applications. The four views are Problem Architecture (macro patterns); Execution Features (micro patterns); Data Source and Style; and finally the Processing View, or runtime features. We generalized this approach to integrate Big Data and Simulation applications into a single classification, looking separately at Data and Model, with the total number of facets growing to 64 -- called Convergence Diamonds -- split between the same 4 views. A mapping of facets onto the work of the SPIDAL project has been given.

18

19 64 Features in 4 views for Unified Classification of Big Data and Simulation Applications
(Figure: the four views, with facets labeled Simulations / Analytics (Model for Big Data) / Both (All Model), and as Nearly all Data+Model / Nearly all Data / Mix of Data and Model.)

20 Examples in Problem Architecture View PA
The facets in the Problem Architecture view include 5 very common ones describing the synchronization structure of a parallel job:
- MapOnly or Pleasingly Parallel (PA1): the processing of a collection of independent events
- MapReduce (PA2): independent calculations (maps) followed by a final consolidation via MapReduce
- MapCollective (PA3): parallel machine learning dominated by scatter, gather, reduce and broadcast
- MapPoint-to-Point (PA4): simulations or graph processing with many local linkages between points (nodes) of the studied system
- MapStreaming (PA5): the fifth important problem architecture, seen in recent approaches to processing real-time data
We do not focus on pure shared memory architectures (PA6), but look at hybrid architectures with clusters of multicore nodes and find important performance issues dependent on the node programming model. Most of our codes are SPMD (PA7) and BSP (PA8).
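As a rough single-process illustration (our own toy code, not a reference implementation of the taxonomy), the first three architectures differ in whether and how the map results are combined:

```python
from functools import reduce

data = [1, 2, 3, 4]

# PA1 MapOnly / Pleasingly Parallel: independent maps, nothing combined
map_only = [x * x for x in data]

# PA2 MapReduce: independent maps followed by one final consolidation
map_reduce = reduce(lambda a, b: a + b, (x * x for x in data))

# PA3 MapCollective (iterative): each iteration maps with the current
# model, then a collective ("allreduce" mean) rebuilds the shared model
model = 0.0
for _ in range(3):
    partials = [model + x for x in data]   # map step using the model
    model = sum(partials) / len(partials)  # collective step
```

The structural point is that PA3 repeats the map + collective cycle with the combined model fed back in, which is exactly what distinguishes iterative machine learning from a one-shot MapReduce query.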

21 6 Forms of MapReduce
Describes the architecture of:
- Problem (Model reflecting data)
- Machine
- Software
2 important (software) variants of Iterative MapReduce and Map-Streaming:
a) "In-place" HPC
b) Flow, for model and data

22 Examples in Execution View EV
The Execution view is a mix of facets describing either data or model; PA was largely the overall Data+Model. EV-M14 is the complexity of the model (O(N²) for N points), seen in the non-metric space models EV-M13 such as one gets with DNA sequences. EV-M11 describes iterative structure, distinguishing Spark, Flink, and Harp from the original Hadoop. The facet EV-M8 describes the communication structure, which is a focus of our research, as much data analytics relies on collective communication; this is in principle understood, but we find significant new work is needed compared to basic HPC work, which tends to address point-to-point communication. The model size EV-M4 and data volume EV-D4 are important in describing algorithm performance: just as in simulation problems, the grain size (the number of model parameters held in the unit -- thread or process -- of parallel computing) is a critical measure of performance.

23 Examples in Data View DV
We can highlight DV-5, streaming, where there is a lot of recent progress; DV-9 categorizes our biomolecular simulation application, with data produced by an HPC simulation. DV-10 is Geospatial Information Systems, covered by our spatial algorithms. DV-7, provenance, is an example of an important feature that we are not covering. The data storage and access facets DV-3 and DV-4 are covered in our pilot data work. The Internet of Things, DV-8, is not a focus of our project, although our recent streaming work relates to it, and our addition of HPC to Apache Heron and Storm is an example of the value of HPC-ABDS to IoT.

24 Examples in Processing View PV
The Processing view PV characterizes algorithms and is only Model (no Data features), but covers both Big Data and Simulation use cases. Graph (PV-M13) and Visualization (PV-M14) are covered in SPIDAL. PV-M15 directly describes SPIDAL, which is a library of core and other analytics. This project covers many aspects of PV-M4 to PV-M11, as these characterize the SPIDAL algorithms (such as optimization, learning, classification). We are of course NOT addressing PV-M16 to PV-M22, which are simulation algorithm characteristics not applicable to data analytics. Our work largely addresses Global Machine Learning (PV-M3), although some of our image analytics are Local Machine Learning (PV-M2), with parallelism over images and not over the analytics. Many of our SPIDAL algorithms have linear algebra (PV-M12) at their core; one nice example is multi-dimensional scaling (MDS), which is based on matrix-matrix multiplication and conjugate gradient.

25 Comparison of Data Analytics with Simulation I
Simulations (models) produce big data as visualization of results -- they are a data source -- or consume often smallish data to define a simulation problem; HPC simulation with (weather) data assimilation is data + model.
- Pleasingly parallel is often important in both
- Both are often SPMD and BSP
- Non-iterative MapReduce is a major big data paradigm but not a common simulation paradigm, except where "Reduce" summarizes pleasingly parallel execution, as in some Monte Carlos
- Big Data often has large collective communication; classic simulation has a lot of smallish point-to-point messages; this motivates the MapCollective model
- Simulations are often characterized by difference or differential operators, leading to nearest-neighbor sparsity
- Some important data analytics can be sparse, as in PageRank and "bag of words" algorithms, but many involve full matrix algorithms

26 “Force Diagrams” for macromolecules and Facebook

27 Comparison of Data Analytics with Simulation II
There are similarities between some graph problems and particle simulations with a particular cutoff force: both have the MapPoint-to-Point problem architecture.
Note many big data problems are "long range force" (as in gravitational simulations), as all points are linked. These are the easiest to parallelize and often use full matrix algorithms; e.g. in DNA sequence studies, the distance (i, j), defined by BLAST, Smith-Waterman, etc., is computed between all sequences i, j. There is an opportunity for "fast multipole" ideas in big data; see the NRC report.
Current Ogres/Diamonds do not have facets to designate the underlying hardware -- GPU v. many-core (Xeon Phi) v. multi-core -- as these define how maps are processed; they keep the map-X structure fixed. Maybe this should change, as the ability to exploit vector or SIMD parallelism could be a model facet.

28 Comparison of Data Analytics with Simulation III
In image-based deep learning, neural network weights are block sparse (corresponding to links to pixel blocks) but can be formulated as full matrix operations on GPUs, and MPI in blocks.
In HPC benchmarking, Linpack is being challenged by a new sparse conjugate gradient benchmark, HPCG, while I am diligently using non-sparse conjugate gradient solvers in clustering and multi-dimensional scaling.
Simulations tend to need high precision and very accurate results, partly because of differential operators. Big Data problems often don't need high accuracy, as seen in the trend to low precision (16 or 32 bit) deep learning networks: there are no derivatives, and the data has inevitable errors.
Note parallel machine learning (GML, not LML) can benefit from HPC-style interconnects and architectures, as seen in GPU-based deep learning -- so commodity clouds are not necessarily best.
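The precision point can be checked in a couple of standard-library lines (a sketch of ours; `struct`'s 'e' format is IEEE 754 half precision): a 16-bit float silently drops digits that a simulation's differential operators would need.

```python
import struct

x = 1.0001
# round-trip through a 16-bit float, as deep learning networks often use
half = struct.unpack('e', struct.pack('e', x))[0]
# float16 spacing near 1.0 is 2**-10 ~ 0.00098, so the 0.0001 is lost
err = abs(half - x)
```

The small perturbation vanishes entirely at 16 bits -- harmless for noisy training data, but fatal for a finite-difference derivative of that magnitude.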

29 Software Nexus
Application Layer on:
- Big Data Software Components for Programming and Data Processing
- HPC for runtime
- IaaS and DevOps Hardware and Systems
HPC-ABDS; Java Grande

30 HPC-ABDS

31 Functionality of 21 HPC-ABDS Layers
Message Protocols; Distributed Coordination; Security & Privacy; Monitoring; IaaS Management from HPC to hypervisors; DevOps; Interoperability; File systems; Cluster Resource Management; Data Transport; A) File management, B) NoSQL, C) SQL; In-memory databases & caches / Object-relational mapping / Extraction tools; Inter-process communication: collectives, point-to-point, publish-subscribe, MPI; A) Basic programming model and runtime, SPMD, MapReduce, B) Streaming; A) High level programming, B) Frameworks; Application and Analytics; Workflow-Orchestration
Lesson of the large number (~350) of packages: this is a rich software environment that HPC cannot "compete" with. Need to use it, not regenerate it. Note level 13, inter-process communication, was added.

32 Some key ABDS Software
- Workflow: Apache Beam (Google Cloud Dataflow), supporting streaming and batch
- Analytics: TensorFlow, SPIDAL, R, Matlab
- Programming: Apache Flink, Spark, Hadoop
- Streaming: Apache Heron (supersedes Storm)
- Low-level Runtime: take from HPC, such as MPI
- Data Systems: Redis, HBase, MongoDB, SQL
- Cluster Management: Yarn, Mesos, Slurm
- DevOps: Ansible, Cloudmesh mapping to HPC, Docker, Amazon, Azure, OpenStack
- Language: Python, Java (with Grande principles), C, C++ ...

33 Java Grande Revisited on 3 data analytics codes: Clustering, Multidimensional Scaling, Latent Dirichlet Allocation -- all sophisticated algorithms

34 Some large scale analytics
(Figures: 100,000 fungi sequences, eventually 120 clusters, shown as a 3D phylogenetic tree; LCMS mass spectrometer peak clustering, a sample of 25 million points with 700 clusters; daily stock time series in 3D, January to December 2015.)

35 Java MPI performs better than FJ Threads: 128 24-core Haswell nodes on the SPIDAL 200K DA-MDS code
(Plot series: best FJ Threads intra-node with MPI inter-node; best MPI, inter- and intra-node; MPI inter/intra-node with Java not optimized. Speedup compared to 1 process per node on 48 nodes.)

36 HPC-ABDS Parallel Computing
Both simulations and data analytics use similar parallel computing ideas:
- Both decompose both model and data
- Both tend to use SPMD, and often BSP (Bulk Synchronous Processing)
- Both have computing phases (called maps in big data terminology) and communication/reduction (more generally collective) phases
Big data thinks of problems as multiple linked queries, even when the queries are small, and uses a dataflow model. Simulation uses dataflow for multiple linked applications, but small steps such as iterations are done in place.
- Reduction in HPC (MPI Reduce) is done as an optimized tree or pipelined communication between the same processes that did the computing
- Reduction in Hadoop or Flink is done by separate map and reduce processes using dataflow
This leads to the 2 forms (In-Place and Flow) of Map-X mentioned earlier. There are interesting fault tolerance issues highlighted by Hadoop-MPI comparisons -- not discussed here!
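The two reduction styles can be contrasted in a toy sketch (our own illustration; real MPI and Hadoop runtimes differ in many details): the in-place version combines values pairwise in a log2(P)-step tree among the "processes" that computed them, while the flow version ships every record to a separate reducer.

```python
def inplace_tree_reduce(partials):
    """In-place (MPI-style) sum: neighbors pair up, log2(P) rounds."""
    vals = list(partials)
    while len(vals) > 1:
        # each pair of neighboring "processes" combines its values
        vals = [vals[i] + vals[i + 1] if i + 1 < len(vals) else vals[i]
                for i in range(0, len(vals), 2)]
    return vals[0]

def dataflow_reduce(partials):
    """Flow (Hadoop/Flink-style) sum: a distinct reducer consumes a stream."""
    stream = iter(list(partials))  # records shipped from map tasks
    total = 0
    for record in stream:
        total += record
    return total

parts = [1, 2, 3, 4, 5]
```

Both yield the same sum; the difference is where the combining work lives, which is what drives the In-Place vs Flow performance trade-off discussed above.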

37 K-means Clustering, Flink and MPI: one million 2D points fixed; varying numbers of centers; 24 cores on 16 nodes

38 Breaking Programs into Parts
Fine grain parallel computing: data/model parameter decomposition. Coarse grain dataflow: HPC or ABDS.

39 HPC-ABDS Parallel Computing
For a given application, one needs to understand the ratio of the amount of computing to the amount of communication, and the hardware compute/communication ratio. It is inefficient to use the same runtime mechanism independent of these characteristics:
- Use In-Place implementations for parallel computing with high overhead, and Flow for flexible low-overhead cases
- Classic dataflow is the approach of Spark and Flink, so we need to add parallel in-place computing, as done by Harp for Hadoop
The HPC-ABDS plan is to keep current user interfaces (say to Spark, Flink, Hadoop, Storm, Heron) and transparently use HPC to improve performance, exploiting the added level 13 in HPC-ABDS. We have done this for Hadoop (next slide), Spark, Storm and Heron, and are working on further HPC integration with ABDS.

40 Harp (Hadoop Plugin) brings HPC to ABDS
Basic Harp: iterative HPC communication; scientific data abstractions. Careful support of distributed data AND a distributed model: Harp avoids the parameter server approach but distributes the model over worker nodes and supports collective communication to bring the global model to each node. Applied first to Latent Dirichlet Allocation (LDA) with a large model and data.
(Diagram: MapReduce model with Map, Shuffle, Reduce vs. MapCollective model with Map and Collective Communication; MapReduce applications run on YARN via MapReduce V2, MapCollective applications via Harp.)

41 Streaming Applications and Technology

42 Adding HPC to Storm & Heron for Streaming
Robotics applications:
- Time series data visualization in real time
- Simultaneous Localization and Mapping
- N-Body collision avoidance
A robot with a laser range finder needs to avoid collisions when it moves; a map is built from the robot data. Map high-dimensional data to a 3D visualizer; apply to stock market data, tracking 6000 stocks.

43 Hosted on HPC and OpenStack cloud
Data pipeline: a gateway sends to pub-sub message brokers (RabbitMQ, Kafka), feeding streaming workflows (Apache Heron and Storm) and persistent storage. A stream application has some tasks running in parallel, and there can be multiple streaming workflows. End-to-end delays without any processing are less than 10 ms. Storm does not support "real parallel processing" within bolts -- we add optimized inter-bolt communication.

44 Improvement of Storm (Heron) using HPC communication algorithms
Latency of binary tree, flat tree and bi-directional ring implementations compared to a serial implementation. Different lines show varying numbers of parallel tasks, with either TCP communication or shared memory communication (SHM). (Plot series: original time; speedup for the ring, tree and binary implementations.)
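As a back-of-envelope check on why the tree variants win (our own count of message rounds, not the measured latencies in the plot), a serial flat-tree broadcast needs a number of rounds linear in the worker count, while a binary tree needs only logarithmically many:

```python
import math

def flat_tree_steps(n_workers):
    """Flat tree: the root sends to every worker serially."""
    return n_workers - 1

def binary_tree_steps(n_workers):
    """Binary tree: each round doubles the workers holding the message."""
    return math.ceil(math.log2(n_workers))

# message rounds for 2, 8 and 32 parallel tasks
steps = {n: (flat_tree_steps(n), binary_tree_steps(n)) for n in (2, 8, 32)}
```

The gap widens quickly with task count, which is the qualitative shape the latency comparison above reports.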

45 Summary of Big Data - Big Simulation Convergence?
HPC-Clouds convergence? (Easier than converging higher levels in the stack.) Can HPC continue to do it alone? Convergence Diamonds; HPC-ABDS software on differently optimized hardware infrastructure.

46 General Aspects of Big Data HPC Convergence
Applications, Benchmarks and Libraries:
- 51 NIST Big Data Use Cases, 7 Computational Giants of the NRC Massive Data Analysis Report, 13 Berkeley dwarfs, 7 NAS parallel benchmarks
- Unified discussion by separately treating data and model for each application; 64 facets -- Convergence Diamonds -- characterize applications
- Characterization identifies hardware and software features for each application across big data and simulation; a "complete" set of benchmarks (NIST)
Software Architecture and its implementation:
- HPC-ABDS: Cloud-HPC interoperable software with the performance of HPC (High Performance Computing) and the rich functionality of the Apache Big Data Stack
- Added HPC to Hadoop, Storm, Heron, Spark; could add to Beam and Flink; could work in the Apache model, contributing code
- Run the same HPC-ABDS across all platforms, but "data management" nodes have a different balance of I/O, network and compute from "model" nodes
- Optimize for data and model functions as specified by the Convergence Diamonds; do not optimize for simulation and big data separately
Convergence Language: make C++, Java, Scala, Python (R) ... perform well
Training: students prefer to learn Big Data rather than HPC
Sustainability: research/HPC communities cannot afford to develop everything (hardware and software) from scratch

