Presentation transcript:

Large Scale DNA Sequence Analysis and Biomedical Computing using MapReduce, MPI and Threading
Workshop on Enabling Data-Intensive Computing: from Systems to Applications, July 30-31, 2009, Pittsburgh
Judy Qiu
Community Grids Laboratory, Digital Science Center, Indiana University

Collaboration in the SALSA Project
Indiana University SALSA Team: Geoffrey Fox, Xiaohong Qiu, Scott Beason, Jaliya Ekanayake, Thilina Gunarathne, Jong Youl Choi, Yang Ruan, Seung-Hee Bae
Microsoft Research Technology Collaboration: Dryad (Roger Barga, Christophe Poulain), CCR threading (George Chrysanthakopoulos), DSS (Henrik Frystyk Nielsen), others
Application Collaboration: Bioinformatics, CGB (Haixu Tang, Mina Rho, Peter Cherbas, Qunfeng Dong); IU Medical School (Gilbert Liu); Demographics, GIS (Neil Devadasan); Cheminformatics (Rajarshi Guha (NIH), David Wild); Physics (CMS group at Caltech, Julian Bunn)
Community Grids Lab and UITS RT – PTI

Data Intensive (Science) Applications
1) Data starts on some disk/sensor/instrument. It needs to be partitioned; the partitioning is often natural from the source of the data
2) One runs a filter of some sort, extracting the data of interest and (re)formatting it
– Pleasingly parallel, often with "millions" of jobs
– Communication latencies can be many milliseconds and can involve disks
3) Using the same decomposition (or mapping to a new one), one runs a parallel application that could require iterative steps between communicating processes or could be pleasingly parallel
– Communication latencies may be at most some microseconds and involve shared memory or high-speed networks
Workflow links 1), 2), 3), with multiple instances of 2) and 3): a pipeline or more complex graphs
Filters are "Maps" or "Reductions" in MapReduce language

"File/Data Repository" Parallelism
[Diagram: instruments and disks feed portals/users and computers/disks running Map 1, Map 2, Map 3, ... followed by a Reduce, with communication via messages/files]
Map = (data parallel) computation reading and writing data
Reduce = collective/consolidation phase, e.g. forming multiple global sums as in a histogram
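
As a concrete illustration of this map/reduce pattern, here is a minimal sketch (in Python, not the C# used by the SALSA codes) in which each "map" reads one hypothetical data partition and builds a partial histogram, and the "reduce" merges the partial histograms; the file names and bin width are made up for the example.

```python
from collections import Counter
from multiprocessing import Pool

def map_partition(path):
    """Map: read one data partition and build a partial histogram (bin -> count)."""
    counts = Counter()
    with open(path) as f:
        for line in f:
            value = float(line)
            counts[int(value // 10)] += 1   # illustrative bin width of 10
    return counts

def reduce_histograms(partials):
    """Reduce: merge the partial histograms into one global histogram."""
    total = Counter()
    for partial in partials:
        total.update(partial)
    return total

if __name__ == "__main__":
    partitions = ["part0.dat", "part1.dat", "part2.dat"]   # hypothetical partition files
    with Pool() as pool:
        partial_histograms = pool.map(map_partition, partitions)
    print(dict(reduce_histograms(partial_histograms)))
```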

Data Analysis Examples
LHC particle physics analysis: file parallel over events
– Filter 1: Process raw event data into "events with physics parameters"
– Filter 2: Process physics into histograms using ROOT or equivalent
– Reduce 2: Add together separate histogram counts
– Filter 3: Visualize
Bioinformatics, gene families: data parallel over sequences
– Filter 1: Calculate similarities (distances) between sequences
– Filter 2: Align sequences (if needed)
– Filter 3: Cluster to find families and/or apply other statistical tools
– Filter 4: Apply dimension reduction to 3D
– Filter 5: Visualize

Particle Physics (LHC) Data Analysis
MapReduce for LHC data analysis: ROOT running in a distributed fashion, allowing the analysis to access distributed data
[Figure: LHC data analysis execution time vs. the volume of data, with fixed compute resources]

Reduce Phase of Particle Physics: "Find the Higgs" using Dryad
Combine histograms produced by separate ROOT "Maps" (of event data to partial histograms) into a single histogram delivered to the client

Notes on Performance
Speedup S = T(1)/T(P) = ε P with P processors, where ε is the parallel efficiency
Overhead f = P T(P)/T(1) - 1 = 1/ε - 1 is linear in the overheads and is usually the best way to record results when the overhead is small
For MPI communication, f is proportional to the ratio of data communicated to calculation complexity, which is n^(-0.5) for matrix multiplication, where the grain size n is the number of matrix elements per node
MPI communication overheads decrease as the problem size n increases (edge-over-area rule)
Dataflow communicates all the data, so its overhead does not decrease
Scaled speedup: keep the grain size n fixed as P increases
Conventional speedup: keep the problem size fixed, so n scales as 1/P
VMs and Windows threads have runtime fluctuation and synchronization overheads
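
A small sketch of how these quantities relate, using the definitions above; the timing numbers at the bottom are illustrative, not measurements from the talk.

```python
def speedup(t1, tp):
    """S(P) = T(1) / T(P)."""
    return t1 / tp

def efficiency(t1, tp, p):
    """epsilon = S(P) / P."""
    return speedup(t1, tp) / p

def parallel_overhead(t1, tp, p):
    """f = P*T(P)/T(1) - 1 = 1/epsilon - 1; approximately linear in the overheads when small."""
    return p * tp / t1 - 1

# Illustrative timings (seconds), not measured values from the talk.
t1, tp, p = 1000.0, 40.0, 32
print(speedup(t1, tp), efficiency(t1, tp, p), parallel_overhead(t1, tp, p))
```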

Gene Sequencing Application
This is the first filter in the Alu gene sequence study: find Smith-Waterman dissimilarities between sequences
Essentially embarrassingly parallel; note that MPI is faster than threading here
All 35,229 sequences require 624,404,791 pairwise distances, computed in 2.5 hours with some optimization (this includes the calculation and the I/O needed to redistribute the data)
Parallel overhead = (number of processes / speedup) - 1
[Figure: parallel overhead for two data set sizes]
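
The structure of this step can be sketched as a pleasingly parallel all-pairs computation; the dissimilarity function below is a trivial placeholder rather than a Smith-Waterman implementation, and the worker count and chunk size are illustrative.

```python
from itertools import combinations
from multiprocessing import Pool

def dissimilarity(pair):
    """Placeholder for a Smith-Waterman-based dissimilarity between two sequences."""
    a, b = pair
    return (a, b, abs(len(a) - len(b)))   # stand-in score; a real kernel would align the sequences

def all_pairs(sequences, workers=8):
    """Compute the upper triangle of the pairwise distance matrix in parallel."""
    pairs = combinations(sequences, 2)
    with Pool(workers) as pool:
        return pool.map(dissimilarity, pairs, chunksize=1024)

if __name__ == "__main__":
    seqs = ["ACGT", "ACGGT", "TTGCA"]    # toy sequences
    for a, b, d in all_pairs(seqs, workers=2):
        print(a, b, d)
```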

Some Other File Parallel Examples from the Indiana University Biology Department
EST (Expressed Sequence Tag) assembly: 2 million mRNA sequences generate files that take 15 hours on 400 TeraGrid nodes (the CAP3 run dominates)
MultiParanoid/InParanoid gene sequence clustering: 476 core-years just for the prokaryotes
Population genomics (Lynch): looking at all pairs separated by up to 1000 nucleotides
Sequence-based transcriptome profiling (Cherbas, Innes): MAQ, SOAP
Systems microbiology (Brun): BLAST, InterProScan
Metagenomics (Fortenberry, Nelson): pairwise alignment of the sequence data took 12 hours on TeraGrid
All can use Dryad

CAP3 Results
Results were obtained using two clusters running at IU and Microsoft. Each cluster has 32 nodes and each node has 8 cores, for a total of 256 cores.
CAP3 is a sequence assembly program that operates on a collection of gene sequence files and produces several output files. In the parallel implementations, the input files are processed concurrently and the outputs are saved in a predefined location.
As a comparison, we have implemented this application using Hadoop, CGL-MapReduce (enhanced Hadoop), and Dryad.
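
A sketch of the pleasingly parallel pattern described above, dispatching one CAP3 invocation per input file from a local directory; the `cap3 <file>` command line, the input directory, and the pool size are assumptions for illustration, and this is not the Hadoop/CGL-MapReduce/Dryad code compared in the talk.

```python
import subprocess
from multiprocessing import Pool
from pathlib import Path

def run_cap3(fasta_path):
    """Run the CAP3 assembler on one input file; outputs land next to the input."""
    # Assumed invocation: `cap3 <file>`; adjust to the local CAP3 installation.
    subprocess.run(["cap3", str(fasta_path)], check=True)
    return fasta_path.name

if __name__ == "__main__":
    inputs = sorted(Path("cap3_inputs").glob("*.fsa"))   # hypothetical input directory
    with Pool(processes=8) as pool:                      # e.g. one worker per core on an 8-core node
        for name in pool.imap_unordered(run_cap3, inputs):
            print("finished", name)
```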

CAP3 Results
[Figure: CAP3 performance vs. number of CAP3 files]

Data Intensive Architecture
[Diagram: instruments and user data produce files; initial processing feeds higher-level processing such as R, PCA, clustering, and correlations (maybe MPI), followed by MDS and preparation for visualization; users reach the results through a visualization user portal for knowledge discovery]

Why the Gather/Scatter Operation Is Important
There is a famous factor of 2 in many O(N^2) parallel algorithms
We initially calculate in parallel Distance(i,j) between points (sequences) i and j, done in parallel over all processor nodes for, say, i < j
However, later parallel algorithms may want specific Distance(i,j) on specific machines
Our MDS and PWClustering algorithms require that each of N processes has 1/N of the sequences and, for this subset {i}, Distance({i},j) for ALL j, i.e. they want both Distance(i,j) and Distance(j,i) stored (in different processors/disks)
Redistributing Distance(i,j) across processes is called a scatter or gather operation in MPI. This time is included in the previous Smith-Waterman timings and is about half the total time
– We did NOT get good performance here from either MPI (it should be a few seconds on a Petabit/sec Infiniband switch) or Dryad
– We will make the needed primitives precise and greatly improve performance here
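
A minimal mpi4py sketch of the scatter step discussed above: the root holds the full distance matrix and scatters contiguous blocks of rows so that each rank ends up with Distance({i}, j) for its subset {i} and all j. The matrix size, random data, and even block split are assumptions for the example; an uneven split would use Scatterv instead.

```python
# Run with: mpiexec -n 4 python scatter_distances.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N = 1024                       # illustrative number of points; assumed divisible by size here
rows_per_rank = N // size

if rank == 0:
    # Full NxN distance matrix lives on the root in this sketch.
    full = np.random.rand(N, N).astype(np.float64)
    send = full.reshape(size, rows_per_rank, N)
else:
    send = None

# Each rank receives the rows Distance({i}, j) for its subset {i} and all j.
local_rows = np.empty((rows_per_rank, N), dtype=np.float64)
comm.Scatter(send, local_rows, root=0)
print(f"rank {rank} holds rows {rank * rows_per_rank}..{(rank + 1) * rows_per_rank - 1}")
```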

High Performance Robust Algorithms
We suggest that the data deluge will demand more robust algorithms in many areas, and these algorithms will be highly I/O- and compute-intensive
Clustering N = 200,000 sequences using deterministic annealing will require around 750 cores, and this need scales like N^2
NSF Track 1 (Blue Waters in 2011) could be saturated by a 5,000,000-point clustering

High-End Multidimensional Scaling (MDS)
Given dissimilarities D(i,j), find the best set of vectors x_i in d dimensions (any d) minimizing
Σ_{i,j} weight(i,j) (D(i,j) - |x_i - x_j|^n)^2    (*)
The weight is chosen to reflect the importance of a point, or perhaps a desire (Sammon's method) to fit smaller distances more closely than larger ones
n is typically 1 (Euclidean distance), but 2 is also useful
The normal approach is Expectation Maximization, and we are exploring adding deterministic annealing to improve robustness
Note that (*) is "just" a χ² objective, so one can use very reliable nonlinear optimizers; we have good results with a Levenberg-Marquardt approach to the χ² solution (adding a suitable multiple of the unit matrix to the nonlinear second-derivative matrix). However, EM also works well
We have some novel features:
– Fully parallel over the unknowns x_i
– Allow "incremental use": fixing the MDS from a subset of data and adding new points
– Allow general d, n and weight(i,j)
– Can optimally align different versions of MDS (e.g. different choices of weight(i,j)) to allow precise comparisons
Feeds directly into a powerful point visualizer
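
As an illustration of the objective (*), the sketch below evaluates the weighted stress and minimizes it with a generic SciPy optimizer; this is not the group's parallel Levenberg-Marquardt/EM implementation, and the function names, toy data, and the choice of L-BFGS-B are assumptions for the example only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist, squareform

def stress(x_flat, D, W, d, n=1):
    """Weighted MDS stress: sum_ij W[i,j] * (D[i,j] - |x_i - x_j|**n)**2."""
    X = x_flat.reshape(-1, d)
    delta = squareform(pdist(X)) ** n
    return np.sum(W * (D - delta) ** 2)

def mds(D, d=3, W=None, seed=0):
    """Embed N points into d dimensions by direct minimization of the stress."""
    N = D.shape[0]
    W = np.ones_like(D) if W is None else W
    x0 = np.random.default_rng(seed).normal(size=N * d)
    result = minimize(stress, x0, args=(D, W, d), method="L-BFGS-B")
    return result.x.reshape(N, d)

if __name__ == "__main__":
    points = np.random.default_rng(1).normal(size=(50, 10))  # toy high-dimensional data
    D = squareform(pdist(points))                             # dissimilarities
    print(mds(D, d=3).shape)
```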

Deterministic Annealing Clustering
Clustering methods like K-means are very sensitive to false minima, but some 20 years ago an EM (Expectation Maximization) method using annealing (deterministic, NOT Monte Carlo) was developed by Ken Rose (UCSB), Fox, and others
The annealing is in distance resolution: temperature T looks at distance scales of order T^0.5, and the method automatically splits clusters where instability is detected
Highly efficient parallel algorithm
Points are assigned probabilities of belonging to a particular cluster
The original work was based in a vector space, e.g. a cluster has a vector as its center
A major advance 10 years ago in Germany showed how one could use a vector-free approach, using just the distances D(i,j), at a cost of O(N^2) complexity. We have extended this and implemented it with threading and/or MPI
We will release this as a service later this year, followed by the vector version; gene sequence applications naturally fit the vector-free approach
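
Below is a minimal sketch, assuming the vector-space formulation, of the soft (probabilistic) assignments at temperature T with a simple geometric cooling schedule; the automatic cluster splitting described above is omitted, and the parameter values (T0, cooling rate, k) are illustrative rather than taken from the talk.

```python
import numpy as np

def da_cluster(X, k=4, T0=10.0, T_min=0.01, cooling=0.9, iters=20, seed=0):
    """Deterministic annealing clustering (vector-space form, no split detection)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    T = T0
    while T > T_min:
        for _ in range(iters):
            # E-step: probabilistic assignments p(k|i) ~ exp(-|x_i - c_k|^2 / T)
            d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            logits = -d2 / T
            logits -= logits.max(axis=1, keepdims=True)   # numerical stability
            P = np.exp(logits)
            P /= P.sum(axis=1, keepdims=True)
            # M-step: centers are probability-weighted means
            centers = (P.T @ X) / P.sum(axis=0)[:, None]
        T *= cooling                                       # anneal: lower the temperature
    return centers, P

if __name__ == "__main__":
    X = np.random.default_rng(2).normal(size=(500, 2))    # toy 2D data
    centers, P = da_cluster(X, k=3)
    print(centers)
```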

Key Features of Our Approach
Initially we will make key capabilities available as services that will eventually be implemented on virtual clusters (clouds) to address very large problems:
– Basic pairwise dissimilarity calculations
– R (done already by us and others)
– MDS in various forms
– Vector and pairwise deterministic annealing clustering
Point viewer (PlotViz) available either as a download (to Windows!) or as a Web service
Note all our code is written in C# (high-performance managed code) and runs on Microsoft HPCS 2008 (with Dryad extensions)

Various Alu Sequence Results Showing Clustering and MDS
[Figure panels: 4500 points, pairwise aligned; 4500 points, Clustal MSA with distances mapped to a 4D sphere before MDS; 3000 points, Clustal MSA with Kimura2 distance]

Pairwise Clustering of Sequences
Initial clustering of sequences, showing the first four clusters identified with different colors
The pairwise clustering uses MDS on the same sample to display the results; it used all 768 cores of the Tempest Windows cluster
Further work will improve the clustering, investigate sensitivity to the alignment (Smith-Waterman), and give performance details

PWDA: Parallel Pairwise Data Clustering by Deterministic Annealing, run on a 24-core computer
[Figure: parallel overhead vs. parallel pattern (thread x process x node), comparing threading, intra-node MPI, and inter-node MPI]

Parallel Pairwise Clustering PWDA Speedup Tests on eight 16-core systems (6 clusters, 10,000 patient records)
Threading with short-lived CCR threads
[Figure: speedup vs. parallel patterns (# threads/process) x (# MPI processes/node) x (# nodes)]


MDS of 635 Census Blocks with 97 Environmental Properties
Shows the expected correlation with the principal component: the color varies from greenish to reddish as the projection onto the leading eigenvector changes value; ten color bins are used
[Figure: MDS and primary PCA vector]

Canonical Correlation
Choose vectors a and b such that the random variables U = a^T X and V = b^T Y maximize the correlation ρ = cor(a^T X, b^T Y)
X: environmental data; Y: patient data
Using R to calculate gives ρ = 0.76
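
The slide reports the canonical correlation computed in R; as an illustration, the sketch below computes the same quantity in Python with scikit-learn's CCA on synthetic stand-ins for the environmental (X) and patient (Y) data, so the 0.76 value above is not reproduced here.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
X = rng.normal(size=(635, 97))                    # stand-in for 97 environmental properties
Y = 0.5 * X[:, :5] + rng.normal(size=(635, 5))    # stand-in patient variables, partly correlated with X

cca = CCA(n_components=1)
U, V = cca.fit_transform(X, Y)                    # U = a^T X, V = b^T Y (first canonical variates)
rho = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
print(f"first canonical correlation: {rho:.2f}")
```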

Projection of the First Canonical Coefficient between Environment and Patient Data onto the Environmental MDS
Keep the smallest 30% (green-blue) and the top 30% (red-orchid) in numerical value; remove small values (< 5% of the mean in absolute value)
[Figure: MDS and canonical correlation]

References
K. Rose, "Deterministic Annealing for Clustering, Compression, Classification, Regression, and Related Optimization Problems," Proceedings of the IEEE, vol. 86, November 1998.
T. Hofmann and J. M. Buhmann, "Pairwise Data Clustering by Deterministic Annealing," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19.
Hansjörg Klock and Joachim M. Buhmann, "Data Visualization by Multidimensional Scaling: A Deterministic Annealing Approach," Pattern Recognition, vol. 33, issue 4, April 2000.
R. A. Granat, "Regularized Deterministic Annealing EM for Hidden Markov Models," Ph.D. Thesis, University of California, Los Angeles. We use this for earthquake prediction.
Geoffrey Fox, Seung-Hee Bae, Jaliya Ekanayake, Xiaohong Qiu, and Huapeng Yuan, "Parallel Data Mining from Multicore to Cloudy Grids," Proceedings of the HPC 2008 High Performance Computing and Grids Workshop, Cetraro, Italy, July 2008.
Project website: