Presentation transcript:

PRAGMA9 – Demo: Bioinformatics applications inside Gfarm using a meta-scheduler (CSF) and local schedulers (LSF/SGE/etc.)
Dr. Xiaohui Wei, JLU, China
Dr. Wilfred W. Li, UCSD, USA
Dr. Osamu Tatebe, AIST, Japan

2 Motivations
To integrate CSF4 and local job schedulers with Gfarm
To provide a grid test bed for data-intensive applications such as iGAP
To support a mixed GT4 and GT2 grid environment
– The WSRF-based GT4 was released in April 2005
– GT2 is still popular; most clusters at SDSC run GT2 and SGE 5.3
To integrate CSF4 with different local job schedulers – LSF, SGE, PBS, Condor, etc.
– CSF4 is a WSRF-based meta-scheduler
– Different sites use different local schedulers (LSF/SGE/PBS, etc.)

3 Community Scheduler Framework 4.0 – CSF4
What is CSF?
– Full name: Community Scheduler Framework
– CSF is a meta-scheduler working at the grid level
– The first version of CSF, CSF3, was developed on GT3/OGSI and contributed by Platform Computing (the Canadian software company behind LSF)
– CSF4 is the GT4/WSRF-compliant version of CSF, and is a contribution component of GT4
– CSF is an open source project; the CVS mainline code is csf4
– The CSF4 development team is from Jilin University

4 CSF4 – Architecture

5 Meta-scheduler vs. Local Resource Manager
Meta-scheduler
– In a grid computing environment, users commonly need to query, negotiate access to, and manage resources that live in different administrative domains. The meta-scheduler is designed to carry out such grid-wide policies.
Local scheduler
– Local resource management (RM) software such as LSF, PBS, and Sun Grid Engine is typically responsible for load balancing and resource-sharing policies within a single administrative domain.
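To make the division of labor concrete, here is a sketch of how a job reaches each layer. qsub and bsub are the standard SGE and LSF submission commands; the csf-job-* commands and the sge52 cluster name are taken from the demos later in this talk, and wublast.sh is an illustrative script name.

$ qsub wublast.sh                                   # SGE: job stays inside one administrative domain
$ bsub < wublast.sh                                 # LSF equivalent, same domain
$ csf-job-create -rsl test_gfarm.rsl -name job1     # CSF4: define a grid-level job
$ csf-job-start job1 -c sge52                       # CSF4 dispatches it to the cluster named sge52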

6 The Grid Test Bed Overview
(Testbed topology diagram.) CSF4 reaches the clusters through two GRAM flavors:
– Local cluster rocks-110.sdsc.edu: GT4 WS-GRAM (WSRF) with SGE 6, LSF 6, and Fork job managers
– Remote clusters rocks-32.sdsc.edu and rocks-52.sdsc.edu: GT2 Pre-WS GRAM (Gatekeeper) with SGE 5.3, LSF, and Fork job managers; the diagram also shows gabd

7 Demo Environment Setup
Set up GT4/CSF4 at rocks-110 (frontend)
Set up SGE 6.0u4 on the rocks-110 cluster, plus the SGE 6.0 adapter for GT4 developed by the London e-Science Centre
Set up GT2 on the rocks-32 cluster
Set up SGE 5.3 on the rocks-32 cluster and the SGE adapter for GT2
Set up Gfarm 1.2 on both the rocks-32 and rocks-110 clusters
Set up LSF 6.0 on both the rocks-32 and rocks-110 clusters
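With the adapters in place, each GRAM path can be smoke-tested before involving CSF4. A minimal sketch, assuming the stock GT4 ManagedJobFactoryService endpoint on the default port 8443; globusrun-ws and globus-job-run are the standard GT4 and GT2 client tools:

$ globusrun-ws -submit -F https://rocks-110.sdsc.edu:8443/wsrf/services/ManagedJobFactoryService -Ft SGE -c /bin/hostname
$ globus-job-run rocks-32.sdsc.edu/jobmanager-sge /bin/hostname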

8 To Support Full Delegation for the GT2 Gatekeeper
The Java CoG Kit supports full delegation
– The class org.globus.gram.GramJob is provided to represent a simple GRAM job
– This class supports full delegation
So CSF4 uses the Java CoG Kit, instead of the Gatekeeper client library, to contact the GT2 Gatekeeper service
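The idea is sketched below; this is not CSF4's actual code. With the Java CoG Kit, a GramJob can be submitted with the limited-delegation flag turned off, so the credential delegated to the remote job is a full proxy. The RSL string and contact are illustrative, and the exact request(...) signature is an assumption about the CoG Kit API.

import org.globus.gram.GramJob;
import org.gridforum.jgss.ExtendedGSSManager;
import org.ietf.jgss.GSSCredential;
import org.ietf.jgss.GSSManager;

public class FullDelegationSubmit {
    public static void main(String[] args) throws Exception {
        // Load the user's default proxy credential (standard jglobus pattern)
        GSSManager manager = ExtendedGSSManager.getInstance();
        GSSCredential cred =
            manager.createCredential(GSSCredential.INITIATE_AND_ACCEPT);

        // Illustrative RSL; in the demos, test_gfarm.rsl plays this role
        GramJob job = new GramJob("&(executable=/bin/hostname)");
        job.setCredentials(cred);

        // limitedDelegation = false requests FULL delegation, so the job
        // can itself authenticate onward (e.g. to Gfarm or another GRAM)
        job.request("rocks-32.sdsc.edu/jobmanager-sge", false);
    }
}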

9 CSF4 Configuration Examples
Configuration for the CSF Resource Manager (resourcemanager-config.xml):

<cluster>
    <name>gatekeeper32</name>
    <type>GRAM</type>
    <contact>rocks-32.sdsc.edu/jobmanager-fork</contact>
    <version>2.4</version>
</cluster>
<cluster>
    <name>sge32</name>
    <type>GRAM</type>
    <contact>rocks-32.sdsc.edu/jobmanager-sge</contact>
    <version>2.4</version>
</cluster>

10 CSF4 Configuration Examples (cont.)

<cluster>
    <name>sge52</name>
    <type>GRAM</type>
    <contact>rocks-52.sdsc.edu/jobmanager-sge</contact>
    <version>2.4</version>
</cluster>
<cluster>
    <name>demo_lsf_cluster1</name>
    <type>LSF</type>
    <contact>rocks-32.sdsc.edu</contact>
    <version>6.0</version>
</cluster>
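Note that the name of each cluster entry is the handle the demos pass to csf-job-start with -c: for example, -c sge52 selects the rocks-52 SGE cluster defined above.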

11 Demos
Demo 1 – Query CSF for the available clusters
Demo 2 – Run a Gfarm job that accesses the Gfarm file system under remote SGE 5.3

12 CSF4 – Demo 1
Get information on all available clusters:
$ csf-job-RmInfo

13 CSF4 – Demo 2 (PRAGMA)
Run a Gfarm job under remote SGE 5.3 (rocks-52):
$ csf-job-create -rsl test_gfarm.rsl -name job6
$ csf-job-start job6 -Rh r/ResourceManagerFactoryService -c sge52
$ gfls wublast.out
$ cat /gfarm/zding/wublast.out
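test_gfarm.rsl itself is not reproduced on the slides. As a purely hypothetical sketch, a GT2-style RSL for a job of this shape (WU-BLAST writing its output into the user's Gfarm directory) could look like the following; the script and file names are illustrative:

& (executable = /gfarm/zding/wublast.sh)
  (stdout = /gfarm/zding/wublast.out)
  (count = 1)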

14 CSF4 – Demo 2+
Run a Gfarm job under remote SGE 5.3 (rocks-32):
$ csf-job-create -rsl test_gfarm.rsl -name job5
$ csf-job-start job5 -Rh r/ResourceManagerFactoryService -c sge32
$ gfls wublast.out
$ cat /gfarm/zding/wublast.out
$ cat test.out | grep "At:"
At: rocks-34.sdsc.edu
At: rocks-36.sdsc.edu
At: rocks-34.sdsc.edu