PRAGMA 25 Working Group Updates — Resources Working Group. Yoshio Tanaka (AIST), Phil Papadopoulos (UCSD). Most slides courtesy of Peter, Nadya, Luca, and Pongsakorn.


Past, Current, and Future of Resources WG
– PRAGMA 16: GT & VOMS
– PRAGMA 17: panel on Grid to Cloud
– Phase 1: succeeded in manual port (sharing) of VM images
– PRAGMA 18: experiments on Rocks + Xen roll
– PRAGMA 19: extending VM sites
– Phase 2: automation of the deployment; use Gfarm for sharing VM images
– Phase 3: expanding sites; prototyped VC deployments
– Phase 4: VC sharing automation w/ apps (Lifemapper, docking simulation); SDN (OpenFlow, ViNE)
– CI for Scientists; routine use of network overlay; start using GitHub
– Sustained infrastructure; network overlay (ViNE, IPOP)

Resources in PRAGMA 25 at a glance

Summary of the 2nd day
PRAGMA 24 scenario: three different labs want to share computing and data.
– What are the resources (data, computing)?
– How do they author their software structure (VM/VC) to do what they need (e.g., Lifemapper compute nodes)?
– How do they provide network privacy?
– How do they control their distributed infrastructure?
Main contributors: Nadya (UCSD), Aimee (KU), Quan (IU). See the demo today!

PRAGMA experiment with Virtual Clusters and metadata capture
Sites: University of Kansas (KU), Indiana University (IU), UC San Diego (UCSD)
1. Submit species modeling and distribution experiment
2. Overflow jobs are sent to Virtual Cluster
3a. Lifemapper job results sent to KU
3b. Provenance data captured on VC, sent to Karma
4. View experiment provenance

Summary of VC Sharing
PRAGMA 24: demonstrated booting VC images on two cloud hosting environments.
– Rocks/KVM by SDSC
– OpenNebula/KVM by AIST
Next steps
– Review the design of the scripts and re-implement them in Python. Do this in San Diego in July (SDSC, AIST, NCHC); detailed schedule TBD. Anybody interested is welcome to join.
– SDN integration with VC (NAIST/Osaka), done in Sep. and Oct.
Main contributors: Luca (UCSD) & Built (KU -> NAIST). See demo tomorrow!
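The slide does not say what shape the Python re-implementation would take; one plausible sketch is a small backend-agnostic interface so the same deployment driver can target both hosting environments named above. All class and method names here are invented for illustration:

```python
# Hypothetical sketch: one driver interface, two hosting backends,
# matching the Rocks/KVM and OpenNebula/KVM environments above.
from abc import ABC, abstractmethod


class CloudBackend(ABC):
    @abstractmethod
    def boot_virtual_cluster(self, frontend_image, compute_image, n_nodes):
        """Boot a virtual cluster and return an identifier for it."""


class RocksKVMBackend(CloudBackend):
    def boot_virtual_cluster(self, frontend_image, compute_image, n_nodes):
        # A real driver would invoke Rocks commands here.
        return f"rocks-vc({frontend_image}, {n_nodes} nodes)"


class OpenNebulaKVMBackend(CloudBackend):
    def boot_virtual_cluster(self, frontend_image, compute_image, n_nodes):
        # A real driver would call the OpenNebula CLI or XML-RPC API here.
        return f"one-vc({frontend_image}, {n_nodes} nodes)"
```

In a real driver each backend would shell out to site-specific tooling; here the methods only return placeholder identifiers to show the interface.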

Rocks Implementation
Input: Virtual Cluster image (Frontend Image.gz, Compute Image.gz) plus vc-in.xml.
1. Format conversion (e.g., Xen -> KVM, or raw/qcow)
2. Allocate local resources for the guest cluster (network, hosts, disks, etc.)
3. Inject vc-out.xml
4. Boot: turn on the Virtual Cluster (one frontend image, multiple compute images)
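The allocation step above (turn a resource request into a description of the instantiated cluster) can be sketched as a small function that reads a vc-in.xml and emits a vc-out.xml. The element and attribute names below are invented for illustration; the real PRAGMA vc-in/vc-out schemas differ:

```python
# Hypothetical sketch of the "allocate local resources" step:
# read a (made-up) vc-in.xml, assign addresses from a local pool,
# and return a (made-up) vc-out.xml describing the guest cluster.
import xml.etree.ElementTree as ET


def allocate_resources(vc_in_xml, free_ips):
    """Assign local network resources to a requested virtual cluster."""
    req = ET.fromstring(vc_in_xml)
    n_compute = int(req.findtext("compute-nodes"))

    out = ET.Element("virtual-cluster")
    frontend = ET.SubElement(out, "frontend")
    frontend.set("ip", free_ips.pop(0))        # address for the frontend
    for i in range(n_compute):
        node = ET.SubElement(out, "compute")
        node.set("name", f"vm-compute-{i}")
        node.set("ip", free_ips.pop(0))        # address for each compute node
    return ET.tostring(out, encoding="unicode")
```

A real implementation would also reserve hosts and disks and handle pool exhaustion; this only illustrates the request-in, description-out shape of the step.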

OpenNebula Implementation
Input: Virtual Cluster image (Frontend Image.gz, Compute Image.gz) plus vc-in.xml.
1. Inject the vc-out.xml generation script
2. Translate configurations to OpenNebula format
3. Instantiate the Virtual Cluster (one frontend image, multiple compute images)
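Step 2, translating per-node settings into OpenNebula's key = value template format, might look like the sketch below. The attribute names follow general OpenNebula template syntax, but the mapping and function itself are invented for illustration:

```python
# Hedged sketch of "translate configurations to OpenNebula format":
# emit an OpenNebula-style VM template for one node of the cluster.
def to_opennebula_template(name, image, memory_mb=1024, vcpu=1, ip=None):
    """Render one VM's settings as OpenNebula key = value template text."""
    lines = [
        f'NAME = "{name}"',
        f'MEMORY = {memory_mb}',
        f'VCPU = {vcpu}',
        f'DISK = [ IMAGE = "{image}" ]',
    ]
    # "vc-private" is an assumed virtual network name, not a PRAGMA value.
    nic = 'NIC = [ NETWORK = "vc-private"'
    if ip:
        nic += f', IP = "{ip}"'
    nic += ' ]'
    lines.append(nic)
    return "\n".join(lines)
```

The resulting text is what would be handed to OpenNebula to instantiate each frontend and compute VM in step 3.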

Agenda of breakouts
Thursday 14:00-16:15 – Discussion: VC sharing
– Detailed demo by Luca and Built
– Discussion of improvements and next steps based on insights gained through the demonstration
Friday 11:10-12:30 – Discussion: CI for Scientists
– Detailed demo by Nadya, Aimee, and Quan
– Overlay network demo by Renato
– Discussion of improvements and next steps based on insights gained through the demonstration
Friday 14:00-15:30 – Discussion: What services should PRAGMA provide?
– How do users use the PRAGMA Cloud?
– What persistent infrastructure should PRAGMA provide / use? (GitHub, Gfarm, Marketplace)
– What kinds of services (data, computing, etc.) are available?