PlanetLab & Clusters
Marc Fiuczynski, Princeton University
Marco Yuen, University of Victoria

Benefits of Clusters to PlanetLab
- CYCLES: more compute resources for the public PlanetLab
  - It is easier to convince sites to commit machines on a short-term basis than to dedicate them to PlanetLab permanently
- FEDERATION: there are more clusters than private PlanetLabs to peer with
  - A better way to explore the federation problem space, e.g., exposing a cluster as a Management Authority (MA) or as a combined MA and Slice Authority (SA)
- SIMULATION: a small-scale testbed for PlanetLab services
  - An alternative to running "PlanetLab in Emulab"

Benefits of PlanetLab to Clusters
- VIRTUALIZATION/SLICES: eases cluster management
  - Empowers both the cluster user and the cluster administrator
  - Easier to run cluster software stacks side by side in slices
  - Better resource isolation on a single node
- FEDERATION: eases sharing of cluster resources among sites
  - Slices are a proven model for addressing and sharing compute resources hosted at different institutions
[Diagram: a PlanetLab NODE running a Virtual Machine Monitor (VMM) that hosts the Node Manager, an Owner VM, and slice VMs such as Globus, Sun GE, Intel, MPI x, and MPI y]

Many cluster packages out there… we started with Rocks?!
- Rocks:
  - Thousands of clusters worldwide
  - Used by scientists, Wall Street, …
  - Interesting "software distribution" model
  - Commercially supported and sold (platform.com/HP)
  - High visibility within the cluster community
- Open source, similar to PlanetLab
  - Active community, well supported and funded
  - Active development to enhance the base architecture

MyPLC & Rocks Integration

Bridge Design
[Diagram: the Rocks/MyPLC bridge. Components include Rocks, an Extended Rocks API, an Extended MyPLC API, a Node Registrar, a Network Topology Manager, MyPLC, and its web GUI; the MyPLC calls involved are AdmAddNode, AdmGetNodes, AdmDeleteNode, and AdmUpdateNodeNetwork]
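To make the bridge concrete, here is a minimal sketch (not the authors' implementation) of how a Rocks-side tool might drive the MyPLC node-registration calls named on the slide over an XML-RPC interface. The server URL, authentication fields, and argument structures are assumptions for illustration.

    # Hypothetical sketch of the Rocks -> MyPLC bridge calls.
    # Assumptions: MyPLC exposes its API over XML-RPC at /PLCAPI/, accepts a
    # password-based auth struct, and the Adm* methods take the argument
    # shapes shown here. None of this is taken from the actual bridge code.
    import xmlrpc.client

    PLC_URL = "https://myplc.example.org/PLCAPI/"   # assumed endpoint
    auth = {
        "AuthMethod": "password",                    # assumed auth struct
        "Username": "rocks-bridge@example.org",
        "AuthString": "secret",
    }

    plc = xmlrpc.client.ServerProxy(PLC_URL, allow_none=True)

    def register_rocks_node(hostname, ip, mac):
        """Register a freshly installed Rocks compute node with MyPLC."""
        # AdmAddNode / AdmUpdateNodeNetwork are the calls named on the slide;
        # their exact signatures here are guesses for illustration only.
        node_id = plc.AdmAddNode(auth, hostname)
        plc.AdmUpdateNodeNetwork(auth, node_id,
                                 {"ip": ip, "mac": mac, "method": "static"})
        return node_id

    def remove_rocks_node(hostname):
        """Remove a node that has been retired from the cluster."""
        for node in plc.AdmGetNodes(auth, [hostname]):
            plc.AdmDeleteNode(auth, node["node_id"])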

Usage Scenarios
- Manage a single cluster
  - Single site; single owner
  - Status: works today
- Manage multiple clusters
  - Multiple sites; single owner
  - Status: in testing
- Federated clusters
  - Multiple sites; multiple owners
  - Status: awaiting the federation API spec

Summary: Combining PL & Clusters
- An interesting exercise in extending MyPLC
  - Customization by overriding the MyPLC API
  - Implemented as a MyPLC API plug-in or proxy (see the sketch below)
- Release scheduled for early/mid summer
  - Earlier for close collaborators
  - 20-node cluster at Princeton in June
  - AUP = planet-lab.org
  - Hosting requirement = planet-lab.org
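The "plug-in or proxy" customization mentioned above might look roughly like the following sketch, which wraps a MyPLC-style API object and intercepts selected node-management calls so that cluster-specific logic (a placeholder hook here) runs before forwarding. The class and method list are illustrative, not the real MyPLC plug-in interface.

    # Illustrative proxy around a MyPLC-style API object. The idea (intercept
    # selected API methods and add cluster-specific behaviour before
    # delegating) is from the slide; every name below is made up.
    class ClusterAwarePLCProxy:
        # Methods the cluster bridge wants to customize (hypothetical list).
        INTERCEPTED = {"AdmAddNode", "AdmDeleteNode"}

        def __init__(self, plc_api, cluster_hook):
            self._plc = plc_api        # underlying MyPLC API object/proxy
            self._hook = cluster_hook  # callable(method, args) on the cluster side

        def __getattr__(self, method):
            target = getattr(self._plc, method)
            if method not in self.INTERCEPTED:
                return target          # pass through untouched
            def wrapped(*args, **kwargs):
                # Let the cluster side react first (e.g., update Rocks'
                # database), then forward the call to MyPLC unchanged.
                self._hook(method, args)
                return target(*args, **kwargs)
            return wrapped

    # Usage sketch:
    # plc = xmlrpc.client.ServerProxy("https://myplc.example.org/PLCAPI/")
    # proxy = ClusterAwarePLCProxy(plc, lambda m, a: print("cluster hook:", m))
    # proxy.AdmAddNode(auth, "compute-0-0.example.org")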

Rocks 'n' Roll
- A Roll is a customized distribution
  - A convenient way to deploy new software packages
- Rolls: Globus, PBS, SGE, Condor, etc.
- Commercial companies provide custom rolls and technical support.

Security Issues
- Clusters are usually behind a firewall
  - Sometimes on a private LAN
  - Sometimes with full access to a site's internal resources
  - Sites do not like to let outsiders in
- Need to protect sites from PlanetLab
  - Is the existing scheme, PlanetFlow, sufficient?
- Approach: leverage VLANs to isolate PlanetLab (see the sketch below)
  - Use a smart switch to place PL nodes inside or outside the firewall
  - PlanetLab nodes on a separate VLAN
  - Each slice on a unique VLAN
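As a rough illustration of the "each slice on a unique VLAN" idea, the sketch below hands each slice a distinct 802.1Q VLAN ID from a configurable pool. The slice names, ID range, and data structure are assumptions for the example, not part of the PlanetLab design.

    # Toy allocator for per-slice VLAN IDs. 802.1Q allows IDs 1-4094; the pool
    # below reserves a sub-range for PlanetLab slices (an assumption).
    class SliceVlanAllocator:
        def __init__(self, first_id=100, last_id=199):
            self._free = list(range(first_id, last_id + 1))
            self._by_slice = {}        # slice name -> VLAN ID

        def assign(self, slice_name):
            """Give the slice a unique VLAN ID, reusing any prior assignment."""
            if slice_name in self._by_slice:
                return self._by_slice[slice_name]
            if not self._free:
                raise RuntimeError("VLAN pool exhausted")
            vlan_id = self._free.pop(0)
            self._by_slice[slice_name] = vlan_id
            return vlan_id

        def release(self, slice_name):
            """Return a slice's VLAN ID to the pool when the slice is deleted."""
            vlan_id = self._by_slice.pop(slice_name)
            self._free.append(vlan_id)

    # Example (hypothetical slice names):
    # alloc = SliceVlanAllocator()
    # print(alloc.assign("princeton_codeen"))   # e.g. 100
    # print(alloc.assign("uvic_rocks"))         # e.g. 101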