BioDCV: a grid-enabled complete validation setup for functional profiling
Trieste, February 2006
Silvano Paoli, Davide Albanese, Giuseppe Jurman, Annalisa Barla, Stefano Merler, Roberto Flor, Stefano Cozzini, James Reid, Cesare Furlanello

Predictive Profiling
QUESTIONS for a discriminating molecular signature:
- predict disease state
- identify patterns regarding subclasses of patients
[Figure: gene expression heatmap (Affymetrix array), samples split into Group A and Group B, genes over-expressed in group A or group B and under-expressed in group B]
A PANEL OF DISCRIMINATING GENES?

Predictive classification and functional profiling
Algorithms and software systems for:
1. Predictive classification, feature selection, discovery
Our BioDCV system: a set-up based on the E-RFE algorithm for Support Vector Machines (SVM)
- Control of selection bias, a serious experimental design issue in the use of prognostic molecular signatures
- Subtype identification for studies of disease evolution and response to treatment

Selection bias
"In conclusion, the list of genes included in a molecular signature (based on one training set and the proportion of misclassifications seen in one validation set) depends greatly on the selection of the patients in training sets."
"Five of the seven largest published studies addressing cancer prognosis did not classify patients better than chance. This result suggests that these publications were overoptimistic."
John P. A. Ioannidis, February 5, 2005

The BioDCV setup (E-RFE SVM)
To avoid selection bias (p >> n): a COMPLETE VALIDATION SCHEME*
- externally, a stratified random partitioning
- internally, a model selection based on a K-fold cross-validation
→ 3 × 10^5 SVM models (+ random labels → 2 × 10^6)**
** Binary classification, on a genes × 45 cDNA array, 400 loops
* Ambroise & McLachlan 2002, Simon et al. 2003, Furlanello et al. 2003
OFS-M: Model tuning and Feature ranking; ONF: Optimal gene panel estimator; ATE: Average Test Error
(A skeleton of the nested loop is sketched below.)
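For illustration only, the nested structure of the complete validation can be written as a C skeleton; the K value, the stub functions and the constants below are assumptions, not the actual BioDCV parameters or code.

/* Skeleton of the complete validation loop (illustrative only).
   Compile: gcc -std=c99 validation_sketch.c */
#include <stdio.h>

#define N_LOOPS 400   /* outer stratified random partitions */
#define K_FOLD  5     /* illustrative inner cross-validation */

/* Hypothetical stubs standing in for E-RFE SVM tuning and evaluation. */
static void   tune_and_rank(int loop, int fold) { (void)loop; (void)fold; }
static double test_error(int loop)              { (void)loop; return 0.0; }

int main(void) {
    double ate = 0.0;                          /* Average Test Error (ATE) */
    for (int loop = 0; loop < N_LOOPS; loop++) {
        /* externally: stratified random split into training and test sets */
        for (int fold = 0; fold < K_FOLD; fold++)
            tune_and_rank(loop, fold);         /* internally: model tuning and
                                                  E-RFE feature ranking (OFS-M) */
        /* estimate the optimal panel size (ONF), then score the held-out set */
        ate += test_error(loop);
    }
    printf("ATE = %.4f\n", ate / N_LOOPS);
    return 0;
}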

Roadmap for a new grid application
Starting from a suite of C modules and Perl/shell scripts running on a local HPC resource…
1. Optimize modules and scripts: database management of data, of model structures, of system outputs; scripts for OpenMosix Linux clusters (Sept–Dec 2004)
2. Wrap BioDCV into a grid application: learn about grid computing; port the serial version on a computational grid testbed; analyze/verify results and identify needs/problems (Nov 2004 – January 2005)
3. Wrap with C MPI scripts: build the MPI mechanism; experiment on the testbed; submit on the production grid; test scalability (February 2005 – March 2005: up and running!)
4. Production (Sept 2005: 1500 jobs, 500+ computing days on the production grid)
5. EGEE Biomed/NA4 (Feb 2006: EGEE)

1. Optimize modules and scripts
Rewrite shell/Perl scripts in C language:
- control I/O costs
- a process granularity optimal for temporary data allocation without tmp files
- convenient for migrations
SQLite interface (database engine library):
- SQLite is small, self-contained, embeddable
- it provides relational access to model and data structures (inputs, outputs, diagnostics)
- it supports transactions and multiple connections, databases up to 2 terabytes in size
Local copy (db file): model definitions + a copy of the data + indexes defining the partition of the replicate sample(s) (a minimal sketch follows below)
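A minimal sketch of how a per-run local database could be created through the SQLite C API; the table and column names are invented for illustration and are not the actual BioDCV schema.

/* Sketch: per-run local SQLite db holding model definitions, a copy of the
   data and the split indexes (illustrative schema only).
   Compile: gcc sketch_db.c -lsqlite3 */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = NULL;
    if (sqlite3_open("run_local.db", &db) != SQLITE_OK) {
        fprintf(stderr, "cannot open db: %s\n", sqlite3_errmsg(db));
        return 1;
    }
    /* a single transaction keeps I/O cost down and avoids temporary files */
    const char *schema =
        "BEGIN;"
        "CREATE TABLE IF NOT EXISTS model(run INT, fold INT, cost REAL);"
        "CREATE TABLE IF NOT EXISTS split(run INT, sample INT, in_train INT);"
        "CREATE TABLE IF NOT EXISTS ranking(run INT, gene INT, gene_rank INT);"
        "COMMIT;";
    if (sqlite3_exec(db, schema, NULL, NULL, &err) != SQLITE_OK) {
        fprintf(stderr, "SQL error: %s\n", err);
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}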

BioDCV workflow
(1) exp: experiment design through configuration of the setup database
(2) scheduler: script submitting jobs (run) on each available processor; platform dependent
(3) run: performs fractions of the complete validation procedure on several data splits; a local db is created
(4) unify: the local databases are merged with the setup database after completing the validation tasks; a complete dataset collecting all the relevant parameters is created (a merge sketch follows below)
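The unify step (4) can be pictured as attaching each local db and copying its rows into the setup database; the file names, the table name and the number of local dbs below are placeholders, not the real merge logic.

/* Sketch of step (4), unify: merge per-run local dbs into the setup db. */
#include <stdio.h>
#include <sqlite3.h>

int main(void) {
    sqlite3 *db;
    char *err = NULL;
    if (sqlite3_open("setup.db", &db) != SQLITE_OK) return 1;
    for (int i = 0; i < 4; i++) {               /* one local db per processor */
        char sql[256];
        snprintf(sql, sizeof sql,
                 "ATTACH DATABASE 'local_%d.db' AS src;"
                 "INSERT INTO ranking SELECT * FROM src.ranking;"
                 "DETACH DATABASE src;", i);
        if (sqlite3_exec(db, sql, NULL, NULL, &err) != SQLITE_OK) {
            fprintf(stderr, "merge of local_%d.db failed: %s\n", i, err);
            sqlite3_free(err);
            err = NULL;
        }
    }
    sqlite3_close(db);
    return 0;
}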

2. Wrapping into a grid application
Why port to the grid? Because we do not have "enough" computational resources…
How to port BioDCV to the grid?
PRELIMINARY
- Identify a collaborator with experience in grid computing (e.g. the Egrid Project hosted at ICTP)
- Train human resources (SP → Trieste)
- Join the Egrid testbed (installing a grid site in Trento)
HANDS-ON
- Port the serial application to the testbed
- Patch code as needed: code portability is mandatory to make life easier
- Identify requirements/problems

A few EDG/LCG definitions
- Storage Element (SE): stores the user data in the grid and makes it available for subsequent elaboration
- Computing Element (CE): where the grid user programs are delivered for elaboration; usually a front-end to several elementary Worker Node machines
- Worker Node (WN): machines where the user programs are actually executed, possibly with multiple CPUs
- User Interface (UI): machine to access the GRID
[Diagram: a site with a CE, an SE (m TByte) and WNs (N CPUs)]

The ICTP Egrid project infrastructures
The local testbed in Trieste (now gridats):
- small computational grid based on EDG middleware + Egrid add-ons
- designed for testing/training/porting of applications
- full compatibility with Grid.it middleware
The production infrastructure:
- a Virtual Organization within Grid.it, with its own services
- star topology with central node in Padova (only in version 1)
[Diagram: central node in Padova (CE, SE 2.8 TByte, WNs with 100 CPUs) connected to CE+SE+WN sites in Trento, Roma, Trieste, Firenze, Palermo]

Hands on
Porting the serial application:
- easy task thanks to portability (no actual work needed)
- no software/library dependencies
Testing/evaluation — problems identified:
- job submission overhead due to EDG mechanisms
- managing multiple (~hundreds/thousands) jobs is difficult and cumbersome
Answer: parallelize jobs on the GRID via MPI — single submission, multiple executions

3. Wrap with C MPI scripts
How can we use C MPI? Prepare two wrappers and a unifier:
- one shell script to submit jobs (BioDCV.sh)
- one C MPI program (Mpba-mpi)
- one shell script to integrate results (BioDCV-union.sh)
BioDCV.sh in action:
- copies files from and to the Storage Element (SE) and distributes the microarray dataset to all WNs
- it then starts the C MPI wrapper, which spawns several runs of the BioDCV program (optimized for the available resources)
- when all BioDCV runs are completed, the wrapper copies all the results (SQLite files) from the WNs to the starting SE
Mpba-mpi executes the BioDCV runs in parallel (a reduced sketch follows below)
BioDCV-union.sh collates results in one SQLite file (→ R)
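A reduced sketch of the idea behind Mpba-mpi: each MPI rank launches a disjoint share of the validation runs, and a barrier marks the point where BioDCV.sh can copy results back. The command line and the run assignment are assumptions, not the actual wrapper code.

/* Reduced sketch of the Mpba-mpi idea. Compile: mpicc mpba_sketch.c */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank, size;
    const int total_runs = 400;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* round-robin assignment of validation runs to the Worker Nodes */
    for (int run = rank; run < total_runs; run += size) {
        char cmd[128];
        snprintf(cmd, sizeof cmd, "./run sarcoma.db %d", run);  /* hypothetical CLI */
        if (system(cmd) != 0)
            fprintf(stderr, "rank %d: run %d failed\n", rank, run);
    }

    MPI_Barrier(MPI_COMM_WORLD);  /* all runs done: BioDCV.sh can now copy the
                                     local SQLite files back to the SE */
    MPI_Finalize();
    return 0;
}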

Using BioDCV in Egrid
[Diagram: from a UI running the Egrid Live CD*, "edg-job-submit bioDCV.jdl" goes to the Resource Broker (PD-TN), which dispatches the job to a site (CE, SE 2.8 TByte, WNs with 100 CPUs) among Padova, Trieste, Palermo, Trento, …]
* a bootable Linux live-CD distribution with a complete suite of GRID tools by Egrid (ICTP Trieste)

BioDCV.jdl — a Job Description
[
  Type = "Job";
  JobType = "MPICH";
  NodeNumber = 64;
  Executable = "BioDCV.sh";
  Arguments = "Mpba-mpi 64 lfn:/utenti/spaoli/sarcoma.db 400";
  StdOutput = "test.out";
  StdError = "test.err";
  InputSandbox = {"BioDCV.sh", "Mpba-mpi", "run", "run.sh"};
  OutputSandbox = {"test.err", "test.out", "executable.out"};
  Requirements = other.GlueCEInfoLRMSType == "PBS" || other.GlueCEInfoLRMSType == "LSF";
]

Using BioDCV in Egrid (II)
First step: BioDCV.sh runs on WN 1 and copies data from the SE to the WN (request of file sarcoma.db)
Second step: Mpba-mpi and sarcoma.db are distributed to all the involved WNs (WN 1, WN 2, WN 3, …, WN n)

Using BioDCV in Egrid (III)
Third step: BioDCV is executed on all involved WNs by MPI (Mpba-mpi runs on WN 1, WN 2, WN 3, …, WN n)
Fourth step: BioDCV.sh copies all results (SQLite files) from the WNs to the starting SE — job completed

SCALING UP TESTS — RUNNING ON THE TESTBED (EGRID.IT), MARCH 2005
[Table: CPU no. | Computing (sec) | Copying files (sec) | Total time (sec); CPUs: Intel 2.80 GHz]
a. INT-IFOM Sarcoma dataset: 7143 genes, 35 samples
b. Colon cancer dataset: 2000 genes, 62 samples

Results of Phase 1
The pros:
- MPI execution on the GRID in a few days
- the tests showed scalable behavior of our grid application for increasing numbers of CPUs
- grid computing significantly reduces production times and allows us to tackle larger problems (see next slide)
The cons:
- data movements limit the scalability for a large number of CPUs
- note: this is a GRID.it limitation: there is no shared filesystem between the WNs, so each file needs to be copied everywhere!
Ideas to hide the latency:
- smart data distribution from the MWN to the WNs: reduce the amount of data to be moved; proportion BioDCV subtasks to the local cache
- data transferred via MPI communication (requires MPI coding and some MPI adaptation of the code; a sketch follows below)
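One possible reading of "data transferred via MPI communication", sketched under the assumption that the master WN already holds the dataset: rank 0 reads the file once and broadcasts the bytes, so no per-node grid copy is needed. The file name and the buffer handling are illustrative.

/* Sketch: broadcast the dataset from rank 0 to every WN with MPI instead of
   copying it node by node (files larger than 2 GB would need chunked
   broadcasts). Compile: mpicc bcast_sketch.c */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    int rank;
    long size = 0;
    char *buf = NULL;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {                       /* master WN already has the file */
        FILE *f = fopen("sarcoma.db", "rb");
        fseek(f, 0, SEEK_END);
        size = ftell(f);
        rewind(f);
        buf = malloc(size);
        fread(buf, 1, (size_t)size, f);
        fclose(f);
    }
    MPI_Bcast(&size, 1, MPI_LONG, 0, MPI_COMM_WORLD);
    if (rank != 0) buf = malloc(size);
    MPI_Bcast(buf, (int)size, MPI_BYTE, 0, MPI_COMM_WORLD);

    if (rank != 0) {                       /* write a local copy on each WN */
        FILE *f = fopen("sarcoma.db", "wb");
        fwrite(buf, 1, (size_t)size, f);
        fclose(f);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}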

Phase 2: Improving the system
Reduce the amount of data to be moved:
1. Redesign "per run": SVM models (about 200), results, evaluation, and variables for semisupervised analysis are all managed within one data structure
2. A large part of the sample-tracking semisupervised analysis (about 2000 files, 300 MB) is now managed within BioDCV, i.e. stored through SQLite
3. Randomization of labels is fully automated
4. The SVM library is now an external library: modular use of machine learning methods; now being added: a PDA module (PDA: Penalized Discriminant Analysis)
5. BioDCV is now under GPL (code curation…)
6. Distributed with a SubVersion server since September 2005

CLUSTER AND GRID ISSUES
1. At work on several clusters:
- MPBA-old: 50 P3 CPUs, 1 GHz
- MPBA-new: 6 Xeon CPUs, 2.8 GHz
- ECT* (BEN): up to 32 (of 100) Xeon CPUs, 2.8 GHz
- ICTP cluster: up to 8 (of 60) P4, 2 GHz, Myrinet
2. GRID experiences
A. Egrid "production grid" (INFN Padua): up to 64 (of 100) Xeon CPUs, 2-3 GHz; microarray data: Sarcoma, HS random, Morishita, Wang, …
B. LESSONS LEARNED:
i. the latest version reduces latencies (system times) due to file copying and management → CPU saturation
ii. life quality (and more studies): huge reduction of file installing and retrieving from facilities and WITHIN facilities
iii. forgetting the severe limitations of the file system (AFS, …)
iv. now installing LCG2 2.6 (CERN release, September 2005)

NOVEMBER/DECEMBER 2005 – TESTS ON CLUSTER AND GRID
In this section we present two experiments designed to measure the performance of the BioDCV parallel application in two different available computing environments: a standard Linux cluster and a computational grid.
SPEED-UP: in Benchmark 1, we study the scalability of our application as a function of the number of CPUs (from 1 to 64).
FOOTPRINT: in Benchmark 2, we characterize the BioDCV application with respect to different datasets, i.e. for different numbers of features (d) and numbers of samples (N) in the complete validation experiment (for a fixed number of 32 CPUs).
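For reference, the two measured quantities under the standard definitions assumed here (the slides do not state the formulas explicitly):

\[
  S(p) \;=\; \frac{T(1)}{T(p)} \quad \text{(speed-up on } p \text{ CPUs)},
  \qquad
  \mathrm{footprint} \;=\; d \cdot N \quad (d = \#\text{genes},\ N = \#\text{samples}).
\]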

Tasks for BioDCV (E-RFE SVM) – DECEMBER 2005
- Liver cancer: 213 samples (107 liver cancer tumors + 106 non-tumoral/normal), 1993 genes, ATAC-PCR (Sese et al., 2000)
- Breast cancer:
  - Wang et al. 2005: 238 samples (286 lymph-node-negative), Affymetrix, genes
  - Chang et al. 2005: 295 samples (151 lymph-node-negative, 144 positive), cDNA, genes
- IFOM: 62 BRCA (4 subclasses)
- Pediatric Leukemia: 327 samples, genes (7 classes, binary), Yeoh et al.
- Sarcoma: 37 samples, 7143 genes (4 classes)
Benchmark 1, Benchmark 2

[Figure: DECEMBER 2005 speed-up curves, cluster and GRID]

[Figure: FOOTPRINT on the GRID (December 2005) – time (s) vs. dN × 10^-7, with dN = #genes × #samples; datasets: BRCA, Chang, Morishita, PL, Sarcoma, Wang]

NOVEMBER/DECEMBER 2005 – DISCUSSION
- Two experiments for 139 CPU days on the Egrid infrastructure
- In Benchmark 1, we obtain a speed-up curve very close to linear
- In Benchmark 2, effective execution time increases linearly with the dataset footprint, i.e. the product of the number of genes and the number of samples
- A performance penalty is paid for data transfer from the WNs and for the (best effort) queuing policy, with respect to a local Linux cluster
- We will investigate substituting the MPI approach with a model that submits N different single-CPU jobs
- The BioDCV system on the LCG/EGEE computational grid can be used in practical large-scale experiments
- Next is porting our system under EGEE's Biomed VO

Challenges (AIRC-BICG)
- BASIC CLASSIFICATION: models, lists, additional tools
- Tools for researchers: subtype discovery, outlier detection
- Connection to data (DB–MIAME)
- HPC interaction: access to GRID/HPC through a web portal (Apache)

Acknowledgments
ITC-irst, Trento: Alessandro Soraruf
ICTP E-GRID Project, Trieste: Angelo Leto, Cristian Zoicas, Riccardo Murri, Ezio Corso, Alessio Terpin
IFOM-FIRC and INT, Milano: Manuela Gariboldi, Marco A. Pierotti
Grants: BICG (AIRC), Democritos
Data: IFOM-FIRC, Cardiogenomics PGA