SALSA Group’s Collaborations with Microsoft
SALSA Group
Principal Investigator: Geoffrey Fox
Project Lead: Judy Qiu
Team: Scott Beason, Jaliya Ekanayake, Thilina Gunarathne, Jong Youl Choi, Seung-Hee Bae, Yang Ruan, Hui Li, Bingjing Zhang, Saliya Ekanayake, Stephen Wu
Community Grids Laboratory, Digital Science Center, Pervasive Technology Institute, Indiana University

Our Objectives
- Explore the applicability of Microsoft technologies to real-world scientific domains, with a focus on data-intensive applications
  o Expect the data deluge will demand multicore-enabled data analysis/mining
  o Detailed objectives modified based on input from Microsoft, such as interest in CCR, Dryad, and TPL
- Evaluate and apply these technologies in demonstration systems
  o Threading: CCR, TPL
  o Service model and workflow: DSS and Robotics toolkit
  o MapReduce: Dryad/DryadLINQ compared to Hadoop and Azure
  o Classical parallelism: Windows HPCS and MPI.NET
  o XNA Graphics based visualization
- Work performed using C#
- Provide feedback to Microsoft
- Broader impact
  o Papers, presentations, tutorials, classes, workshops, and conferences
  o Provide our research work as services to collaborators and the general science community

Approach
- Use interesting applications (working with domain experts) as benchmarks, including emerging areas like life sciences and classical applications such as particle physics
  o Bioinformatics: CAP3, Alu, Metagenomics, PhyloD
  o Cheminformatics: PubChem
  o Particle physics: LHC Monte Carlo
  o Data mining kernels: K-means, Deterministic Annealing Clustering, MDS, GTM, Smith-Waterman Gotoh
- Evaluation criteria for usability and developer productivity
  o Initial learning curve
  o Effectiveness of continuing development
  o Comparison with other technologies
- Performance on both single systems and clusters

Overview of Multicore SALSA Project at IU
- The term SALSA, or Service Aggregated Linked Sequential Activities, describes our approach to multicore computing, in which services are used as modules to capture key functionalities implemented with multicore threading.
  o This will be expanded into a proposed approach to parallel computing in which one produces libraries of parallelized components and combines them with a generalized service-integration (workflow) model.
- We have adopted a multi-paradigm runtime (MPR) approach to support key parallel models, with a focus on MapReduce, MPI collective messaging, asynchronous threading, and coarse-grain functional parallelism or workflow.
- We have developed innovative data mining algorithms emphasizing the robustness essential for data-intensive applications. Parallel algorithms have been developed for shared-memory threading, tightly coupled clusters, and distributed environments. These have been demonstrated in kernels and in real applications.

Major Achievements
- Analysis of CCR and DSS within the SALSA paradigm, with very detailed performance work on CCR
- Detailed analysis of Dryad and comparison with Hadoop and MPI; initial comparison with Azure
- Comparison of TPL and CCR approaches to parallel threading
- Applications to several areas, including particle physics and especially life sciences
- Demonstration that Windows HPC clusters can efficiently run large-scale data-intensive applications
- Development of high-performance Windows 3D visualization of points produced by dimension reduction of high-dimensional datasets to 3D; these are used as cheminformatics and bioinformatics dataset browsers
- Proposed extensions of MapReduce to perform data mining efficiently
- Identification of data mining as an important application area, with new parallel algorithms for Multidimensional Scaling (MDS), Generative Topographic Mapping (GTM), and clustering, for cases where vectors are defined or where only pairwise dissimilarities between dataset points are known
- Extension of robust, fast deterministic annealing to clustering (vector and pairwise), MDS, and GTM

Broader Impact
- Major reports delivered to Microsoft on
  o CCR/DSS
  o Dryad
  o TPL comparison with CCR (short)
- Strong publication record (book chapters, journal papers, conference papers, presentations, technical reports) on TPL/CCR, Dryad, and Windows HPC
- Promoted engagement of undergraduate students with new programming models using Dryad and TPL/CCR through classes, REU, and MSI programs
- Provided training on MapReduce (Dryad and Hadoop) for Big Data for Science to graduate students at 24 institutions worldwide through the NCSA virtual summer school
- Organized the Multicore workshop at CCGrid 2010, the Computational Life Sciences workshop at HPDC 2010, and the International Cloud Computing Conference 2010

Typical CCR Comparison with TPL
- Hybrid internal threading/MPI as the intra-node model works well on a Windows HPC cluster
- Within a single node, TPL or CCR outperforms MPI for computation-intensive applications such as clustering of Alu sequences (the "all pairs" problem)
- TPL outperforms CCR in major applications
- Efficiency = 1 / (1 + Overhead)

Clustering by Deterministic Annealing
- Parallel Overhead = [P*T(P) - T(1)] / T(1), where T is the run time and P the number of parallel units
- [Chart: parallel overhead versus parallel pattern (Threads x Processes x Nodes), comparing threading and MPI within a node; MPI is always used between nodes]
- MPI is best at low levels of parallelism; threading is best at the highest levels of parallelism (64-way break-even)
- Uses MPI.NET as a wrapper over MS-MPI
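
A minimal sketch (Python, illustrative only) of how the overhead and efficiency figures on these slides relate to measured run times; the timing numbers in the example are hypothetical, not the Alu clustering measurements.

def parallel_overhead(t1: float, tp: float, p: int) -> float:
    """Overhead of running on P parallel units: [P*T(P) - T(1)] / T(1)."""
    return (p * tp - t1) / t1

def efficiency(t1: float, tp: float, p: int) -> float:
    """Efficiency = 1 / (1 + Overhead) = T(1) / (P * T(P))."""
    return 1.0 / (1.0 + parallel_overhead(t1, tp, p))

if __name__ == "__main__":
    # Hypothetical timings: 1000 s sequential, 18 s at 64-way parallelism.
    t1, tp, p = 1000.0, 18.0, 64
    print(f"overhead   = {parallel_overhead(t1, tp, p):.3f}")
    print(f"efficiency = {efficiency(t1, tp, p):.3f}")

The break-even comparison between threading and MPI quoted above is simply a comparison of these overhead values for the same parallel pattern P.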

Typical CCR Performance Measurement: MPI Exchange Latency in µs (with 20-30 µs of computation between messages)

Machine | OS | Runtime | Grains | Parallelism | MPI Exchange Latency (µs)
Intel8 (8-core Intel Xeon E5345, 2.33 GHz, 8 MB cache, 8 GB memory, in 2 chips) | Redhat | MPJE (Java) | Process | 8 | 181
  (same) | Redhat | MPICH2 (C) | Process | 8 | 40.0
  (same) | Redhat | MPICH2: Fast | Process | 8 | 39.3
  (same) | Redhat | Nemesis | Process | 8 | 4.21
Intel8 (8-core Intel Xeon E5345, 2.33 GHz, 8 MB cache, 8 GB memory) | Fedora | MPJE | Process | 8 | 157
  (same) | Fedora | mpiJava | Process | 8 | 111
  (same) | Fedora | MPICH2 | Process | 8 | 64.2
Intel8 (8-core Intel Xeon X5355, 2.66 GHz, 8 MB cache, 4 GB memory) | Vista | MPJE | Process | 8 | 170
  (same) | Fedora | MPJE | Process | 8 | 142
  (same) | Fedora | mpiJava | Process | 8 | 100
  (same) | Vista | CCR (C#) | Thread | 8 | 20.2
AMD4 (4-core AMD Opteron 275, 2.19 GHz, 4 MB cache, 4 GB memory) | XP | MPJE | Process | 4 | 185
  (same) | Redhat | MPJE | Process | 4 | 152
  (same) | Redhat | mpiJava | Process | 4 | 99.4
  (same) | Redhat | MPICH2 | Process | 4 | 39.3
  (same) | XP | CCR | Thread | 4 | 16.3
Intel4 (4-core Intel Xeon, 2.80 GHz, 4 MB cache, 4 GB memory) | XP | CCR | Thread | 4 | 25.8

CCR outperforms the Java runtimes in every case, and standard C (MPICH2) as well, except for the optimized Nemesis channel. This summarizes the performance of CCR versus MPI for MPI Exchange communication.
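
A rough sketch of the kind of nearest-neighbour "exchange" microbenchmark these latency numbers come from, written here with mpi4py purely for illustration; the measured runtimes were CCR (C#), MPJ Express, mpiJava, and MPICH2, not this script.

# Run with e.g.: mpiexec -n 8 python exchange_latency.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()
left, right = (rank - 1) % size, (rank + 1) % size

send = np.zeros(1, dtype='d')   # tiny message, so the time is latency-dominated
recv = np.empty(1, dtype='d')
iters = 10000

comm.Barrier()
start = MPI.Wtime()
for _ in range(iters):
    # Exchange with both ring neighbours each step, as in an "MPI Exchange" pattern.
    comm.Sendrecv(send, dest=right, recvbuf=recv, source=left)
    comm.Sendrecv(send, dest=left, recvbuf=recv, source=right)
comm.Barrier()
elapsed = MPI.Wtime() - start

if rank == 0:
    print(f"mean exchange step: {elapsed / iters * 1e6:.1f} microseconds")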

Dimension Reduction Algorithms
Multidimensional Scaling (MDS) [1]
o Given proximity information among points
o An optimization problem: find a mapping of the given data in the target dimension that minimizes an objective function based on the pairwise proximity information
o Objective functions: STRESS (1) or SSTRESS (2)
o Needs only the pairwise distances δij between original points (typically not Euclidean)
o dij(X) is the Euclidean distance between the mapped (3D) points
Generative Topographic Mapping (GTM) [2]
o Finds an optimal K-representation of the given data (in 3D), known as the K-cluster problem (NP-hard)
o The original algorithm uses the EM method for optimization
o A deterministic annealing algorithm can be used to find a global solution
o The objective is to maximize the log-likelihood
[1] I. Borg and P. J. Groenen. Modern Multidimensional Scaling: Theory and Applications. Springer, New York, NY, U.S.A.
[2] C. Bishop, M. Svensén, and C. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, 1998.
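
The STRESS and SSTRESS objective functions referred to as equations (1) and (2) did not survive extraction as text; in the standard notation of [1] they are commonly written as follows, where δij are the given dissimilarities, dij(X) the Euclidean distances between mapped points, and wij optional weights:

% STRESS (1) and SSTRESS (2), standard forms following [1]
\sigma(X)     = \sum_{i<j \le N} w_{ij}\,\bigl(d_{ij}(X) - \delta_{ij}\bigr)^{2}
\sigma^{2}(X) = \sum_{i<j \le N} w_{ij}\,\bigl(d_{ij}^{2}(X) - \delta_{ij}^{2}\bigr)^{2}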

Biology MDS and Clustering Results
Alu Families: This visualizes results for Alu repeats from the chimpanzee and human genomes. Young families (green, yellow) are seen as tight clusters. This is a projection by MDS dimension reduction to 3D of the repeats, each with about 400 base pairs.
Metagenomics: This visualizes results of dimension reduction to 3D of gene sequences from an environmental sample. The many different genes are classified by a clustering algorithm and visualized by MDS dimension reduction.

High Performance Data Visualization
- Developed parallel MDS and GTM algorithms to visualize large, high-dimensional data
- Processed 0.1 million PubChem data points having 166 dimensions
- Parallel interpolation can process up to 2M PubChem points
MDS for 100k PubChem data: 100k PubChem data points with 166 dimensions are visualized in 3D space. Colors represent 2 clusters separated by their structural proximity.
GTM for 930k genes and diseases: Genes (green) and diseases (other colors) are plotted in 3D space, aiming at finding cause-and-effect relationships.
GTM with interpolation for 2M PubChem data: 2M PubChem data points are plotted in 3D with the GTM interpolation approach. Red points are 100k sampled data and blue points are 4M interpolated points.
[3] PubChem project,
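
The interpolation mentioned above places out-of-sample points using only their distances to an already-mapped sample. The sketch below is a deliberately simplified nearest-neighbour, gradient-descent placement in Python; it is not the iterative-majorization interpolation the group published, just an illustration that each new point can be mapped independently (and therefore in parallel) against a fixed sample.

import numpy as np

def interpolate_point(mapped_samples, deltas, k=10, iters=200, lr=0.05):
    """Place one new point in 3D from its original-space distances (deltas)
    to sample points whose 3D coordinates (mapped_samples) are already known."""
    idx = np.argsort(deltas)[:k]          # k nearest mapped samples
    Y, d = mapped_samples[idx], deltas[idx]
    x = Y.mean(axis=0)                    # start at the neighbour centroid
    for _ in range(iters):
        diff = x - Y                      # shape (k, 3)
        dist = np.linalg.norm(diff, axis=1) + 1e-12
        grad = (2.0 * (dist - d) / dist)[:, None] * diff   # gradient of the STRESS term
        x -= lr * grad.mean(axis=0)
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mapped = rng.normal(size=(1000, 3))                    # stand-in for MDS output
    target = rng.normal(size=3)
    deltas = np.linalg.norm(mapped - target, axis=1)       # toy "original" distances
    print(interpolate_point(mapped, deltas))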

Applications using Dryad & DryadLINQ (1)
CAP3 [4]: Expressed Sequence Tag assembly to reconstruct full-length mRNA
- Performed using DryadLINQ and Apache Hadoop implementations
- A single "Select" operation in DryadLINQ; a "map only" operation in Hadoop
- Pipeline: input files (FASTA) -> CAP3 -> output files
[4] X. Huang, A. Madan, "CAP3: A DNA Sequence Assembly Program," Genome Research, vol. 9, no. 9, 1999.
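
Because each FASTA file is assembled independently, CAP3 is a pure "map only" computation: one task per input file, with no communication between tasks. Below is a minimal sketch of that pattern in Python; the directory layout and the use of multiprocessing are assumptions for illustration, whereas the actual runs used a single DryadLINQ Select and a map-only Hadoop job to invoke the cap3 executable.

import glob
import subprocess
from multiprocessing import Pool

def assemble(fasta_path: str) -> str:
    # CAP3 writes its contig/singlet outputs next to the input file.
    subprocess.run(["cap3", fasta_path], check=True)
    return fasta_path + ".cap.contigs"

if __name__ == "__main__":
    inputs = glob.glob("data/*.fsa")           # hypothetical input directory
    with Pool() as pool:
        outputs = pool.map(assemble, inputs)   # the pleasingly parallel "map"
    print(f"assembled {len(outputs)} files")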

Applications using Dryad & DryadLINQ (2)
PhyloD [5] project from Microsoft Research
- Derives associations between HLA alleles and HIV codons, and between the codons themselves
- Scalability of the DryadLINQ PhyloD application
- The output of PhyloD shows the associations
[5] Microsoft Computational Biology Web Tools,

All-Pairs [5] Using DryadLINQ
- Calculate pairwise distances (Smith-Waterman Gotoh) for a collection of genes (used for clustering and MDS)
- 125 million distances computed in 4 hours and 46 minutes
- Fine-grained tasks in MPI; coarse-grained tasks in DryadLINQ
- Performed on 768 cores (Tempest cluster)
[5] Moretti, C., Bui, H., Hollingsworth, K., Rich, B., Flynn, P., & Thain, D. (2009). All-Pairs: An Abstraction for Data Intensive Computing on Campus Grids. IEEE Transactions on Parallel and Distributed Systems, 21.
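
The coarse-grained decomposition means the N x N distance matrix is cut into blocks, each block becomes one task, and only blocks on or above the diagonal are computed (the rest follow by symmetry). A small Python sketch of that blocking is below; toy_distance is a placeholder, not the Smith-Waterman Gotoh kernel actually used.

from itertools import combinations_with_replacement
import numpy as np

def toy_distance(a: str, b: str) -> float:
    # Placeholder for the Smith-Waterman Gotoh dissimilarity.
    return float(abs(len(a) - len(b)))

def all_pairs(seqs, block_size=1000):
    n = len(seqs)
    D = np.zeros((n, n))
    n_blocks = (n + block_size - 1) // block_size
    # Upper-triangular block indices; each (bi, bj) pair is one coarse task.
    for bi, bj in combinations_with_replacement(range(n_blocks), 2):
        rows = range(bi * block_size, min((bi + 1) * block_size, n))
        cols = range(bj * block_size, min((bj + 1) * block_size, n))
        for i in rows:
            for j in cols:
                if j >= i:                     # the matrix is symmetric
                    D[i, j] = D[j, i] = toy_distance(seqs[i], seqs[j])
    return D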

Matrix Multiplication & K-Means Clustering Using Cloud Technologies
- K-means clustering on 2D vector data
- Matrix multiplication in the MapReduce model
- DryadLINQ and Hadoop show higher overheads
- The Twister (MapReduce++) implementation performs close to MPI
[Charts: parallel overhead for K-means clustering; average time for matrix multiplication]
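
K-means maps naturally onto this model: each map task assigns its chunk of points to the nearest current centre and emits partial sums, and the reduce step merges the partial sums into new centres. The sketch below uses plain numpy to show that structure; it is not the Twister, Hadoop, or DryadLINQ implementation being compared above.

import numpy as np

def kmeans_map(points, centres):
    """One map task: assign a chunk of points and emit (sum, count) per centre."""
    dists = ((points[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
    assign = np.argmin(dists, axis=1)
    k, d = centres.shape
    sums, counts = np.zeros((k, d)), np.zeros(k)
    for c in range(k):
        mask = assign == c
        sums[c] = points[mask].sum(axis=0)
        counts[c] = mask.sum()
    return sums, counts

def kmeans_reduce(partials, old_centres):
    """Reduce: merge the partial sums into the next set of centres."""
    sums = sum(s for s, _ in partials)
    counts = sum(c for _, c in partials)
    new = old_centres.copy()
    nz = counts > 0
    new[nz] = sums[nz] / counts[nz][:, None]
    return new

def kmeans(chunks, centres, iters=20):
    for _ in range(iters):                     # the iteration MapReduce++ adds
        partials = [kmeans_map(chunk, centres) for chunk in chunks]
        centres = kmeans_reduce(partials, centres)
    return centres

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = rng.normal(size=(10000, 2))
    chunks = np.array_split(data, 8)           # 8 "map" partitions
    print(kmeans(chunks, data[:3].copy()))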

Dryad & DryadLINQ
- Higher jumpstart cost
  o The user needs to be familiar with LINQ constructs
- Higher continuing development efficiency
  o Minimal parallel thinking required
  o Easy querying of structured data (e.g., Select, Join, etc.)
- Many scientific applications implemented using DryadLINQ, including a high-energy physics data analysis
- Comparable performance with Apache Hadoop
  o Smith-Waterman Gotoh: 250 million sequence alignments, performed comparably to or better than Hadoop and MPI
- Applications with complex communication topologies are harder to implement

Application Classes (old classification of parallel software/hardware in terms of 5, becoming 6, "application architecture" structures)

Class | Description | Platform
1. Synchronous | Lockstep operation as in SIMD architectures | MPP
2. Loosely Synchronous | Iterative compute-communication stages with independent compute (map) operations for each CPU; the heart of most MPI jobs | MPP
3. Asynchronous | Compute Chess; combinatorial search, often supported by dynamic threads | MPP
4. Pleasingly Parallel | Each component independent; in 1988, Fox estimated these at 20% of the total number of applications | Grids
5. Metaproblems | Coarse-grain (asynchronous) combinations of classes 1-4; the preserve of workflow | Grids
6. MapReduce++ | File (database) to file (database) operations, with subcategories: (1) pleasingly parallel map only; (2) map followed by reductions; (3) iterative "map followed by reductions", an extension of current technologies that supports much linear algebra and data mining | Clouds (Hadoop/Dryad, Twister)

Twister (MapReduce++)
- Streaming-based communication: intermediate results are transferred directly from the map tasks to the reduce tasks, eliminating local files
- Cacheable map/reduce tasks: static data remains in memory across iterations
- Combine phase to combine reductions
- The user program is the composer of MapReduce computations
- Extends the MapReduce model to iterative computations
[Architecture diagram: the user program and MapReduce driver communicate with worker nodes over a pub/sub broker network; each worker node runs map (M) and reduce (R) workers and an MRDaemon, reading/writing data splits through the file system. The iteration loop is Configure() -> Map(key, value) -> Reduce(key, list) -> Combine(key, list) -> updated variable data (δ flow), with static data cached between iterations, then Close(). Different synchronization and intercommunication mechanisms are used by the parallel runtimes.]
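
A schematic of the iterative control flow described above, written as a plain Python driver loop: static data is configured once and stays cached with the map tasks, while only the small variable data (the δ flow, for example the current cluster centres) is passed around each iteration. This is a shape-of-the-model sketch, not the Twister API; the K-means map/reduce sketch earlier in this deck is the kind of computation such a loop would drive.

def iterative_mapreduce(static_partitions, variable_data,
                        map_fn, reduce_fn, combine_fn, converged):
    # Configure(): load static data once; it remains in memory across iterations.
    cached = list(static_partitions)
    while True:
        # Map: each cached (static) partition plus the current variable data.
        map_outputs = [map_fn(part, variable_data) for part in cached]
        # Reduce: merge the map outputs (streamed between tasks in Twister).
        reduced = reduce_fn(map_outputs)
        # Combine: collapse the reductions into the next iteration's variable data.
        new_variable = combine_fn(reduced)
        if converged(variable_data, new_variable):
            return new_variable                # Close()
        variable_data = new_variable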

Dynamic Virtual Clusters
- Switchable clusters on the same hardware (~5 minutes to switch between different OS setups such as Linux+Xen and Windows+HPCS)
- Support for virtual clusters
- SW-G (Smith-Waterman Gotoh) dissimilarity computation as a pleasingly parallel problem suitable for MapReduce-style applications
[Architecture diagram: a monitoring and control infrastructure (pub/sub broker network, summarizer, switcher, monitoring interface) over iDataplex bare-metal nodes (32 nodes) managed through the XCAT infrastructure; the virtual/physical clusters run Linux bare-system, Linux on Xen, or Windows Server 2008 bare-system, with SW-G running on Hadoop or DryadLINQ]

SALSA HPC Dynamic Virtual Clusters Demo
- At the top, three clusters switch applications on a fixed environment; this takes about 30 seconds.
- At the bottom, one cluster switches between environments (Linux; Linux+Xen; Windows+HPCS); this takes about 7 minutes.
- The demo illustrates the concept of Science on Clouds using a FutureGrid cluster.