PlanetLab Architecture
Larry Peterson, Princeton University

Roadmap
Yesterday
–Code + Design Principles
Today
–Defined Architecture + Standardization Process
Tomorrow
–Clusters
–Federation
–ISP (overlays on layer-2 networks)

Meta-Issue
Reference Model
–describes PlanetLab-like systems
Architecture
–narrow waist (universal agreement)
–by convention
Implementation
–what we happen to run today
–alternatives possible tomorrow

Principals
[Diagram: node owners (Owner 1 … Owner N) contribute PlanetLab nodes. The Management Authority pushes software updates to nodes and collects auditing data. The Slice Authority creates slices, returns new slice IDs to service providers, and identifies slice users to resolve abuse. Service providers request slices, learn about nodes, and access their slices on behalf of users.]
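The principal interactions in the diagram can be sketched as a toy model. All class and method names below are illustrative stand-ins, not PlanetLab's real interfaces:

```python
# Toy model of the principals: Slice Authority, Management Authority,
# node owners, and service providers. Names are hypothetical.

class SliceAuthority:
    """Creates slices for service providers and records their users."""
    def __init__(self):
        self._next_id = 1
        self.slices = {}          # slice_id -> set of user names

    def create_slice(self, provider, users):
        slice_id = f"{provider}_slice{self._next_id}"
        self._next_id += 1
        self.slices[slice_id] = set(users)
        return slice_id           # the "new slice ID" returned to the provider

    def identify_users(self, slice_id):
        # Used to resolve abuse: map a slice back to its users.
        return self.slices[slice_id]

class ManagementAuthority:
    """Pushes software updates to nodes and collects auditing data."""
    def __init__(self):
        self.nodes = []

    def register_node(self, node):
        self.nodes.append(node)

    def push_update(self, version):
        for node in self.nodes:
            node.software = version

class Node:
    """A node contributed by an owner; hosts one VM per slice."""
    def __init__(self, owner):
        self.owner = owner
        self.software = None
        self.vms = set()

    def create_vm(self, slice_id):
        self.vms.add(slice_id)

# A provider gets a slice from the SA, then instantiates it on MA-managed nodes.
sa = SliceAuthority()
ma = ManagementAuthority()
node = Node(owner="site-1")
ma.register_node(node)
ma.push_update("planetlab-3.0")

sid = sa.create_slice("codeen", users=["alice", "bob"])
node.create_vm(sid)
print(sid, sorted(sa.identify_users(sid)), node.software)
```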

Architectural Elements
[Diagram: each node runs a VMM plus Node Manager (NM), an owner VM, a slice-creation service (SCS), and per-slice VMs. The Management Authority (MA) maintains a node database; the Slice Authority (SA) maintains a slice database on behalf of service providers.]

Architecture vs. Implementation
Linux: implementation
–well-defined VM types (default on all nodes?)
–VM template (keys, bootscript)
Node Manager: narrow waist
–VMM-specific implementation of common interface (rspec)
–stacked vs. flat?
pl_conf: architecture by convention
–supports remote interface/protocol (ticket)
–depends on name space for SAs (narrow waist)
PLC-as-MA: implementation
–independent MAs real soon now
–share fate for foreseeable future
PLC-as-SA: implementation
–advantage of common slice authority
–decouple naming from slice creation
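The "narrow waist" point above, a common node manager interface with VMM-specific implementations behind it, can be sketched as follows. These are hypothetical names, not the real NM API:

```python
# Sketch of the narrow-waist idea: one node manager interface,
# many VMM-specific implementations. Names are illustrative.

from abc import ABC, abstractmethod

class NodeManager(ABC):
    """Common interface every VMM-specific node manager implements."""
    @abstractmethod
    def create_vm(self, slice_id, rspec):
        """Instantiate a VM for slice_id with the resources in rspec."""

class VserverNM(NodeManager):
    """Linux-VServer backend (the default on PlanetLab nodes)."""
    def create_vm(self, slice_id, rspec):
        return f"vserver:{slice_id}:cpu={rspec['cpu_share']}"

class XenNM(NodeManager):
    """A hypothetical Xen backend behind the same interface."""
    def create_vm(self, slice_id, rspec):
        return f"xen-domU:{slice_id}:mem={rspec['memory_mb']}MB"

# pl_conf would call through the same interface regardless of VMM.
rspec = {"cpu_share": 10, "memory_mb": 256}
for nm in (VserverNM(), XenNM()):
    print(nm.create_vm("princeton_codeen", rspec))
```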

PlanetLab Compliant
Node
–support node manager interface
–pl_conf (accepts PLC-as-SA tickets)
–at least one known VM type (base type?)
–owner VM to make root allocation decision (ops on NM)
–audit service
Management Authority
–secure boot
–audit service
–responsive support team
Slice Authority
–creates slices and/or returns tickets
–auditing capability
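The node checklist above could be expressed as an executable check. The feature labels here are illustrative, not real configuration keys:

```python
# Compliance check sketch: a node is "PlanetLab compliant" only if it
# provides every required feature from the checklist. Labels are made up.

REQUIRED_NODE_FEATURES = {
    "node_manager_interface",
    "pl_conf",
    "known_vm_type",
    "owner_vm",
    "audit_service",
}

def is_compliant_node(features):
    """True iff the node offers every required feature."""
    return REQUIRED_NODE_FEATURES <= set(features)

print(is_compliant_node(["pl_conf", "audit_service"]))  # missing features
```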

Breakout Session Questions
Group C's notes

Questions
1. What challenges do we face in extending PlanetLab to support:
–clusters
–autonomous regions (e.g., EU, Japan)
–private PlanetLabs
2. What is the solution space for these problems?
3. How do these solutions affect the PlanetLab architecture?
4. What roadmap gets us to where we want to be without breaking anything?

Challenges (1)
Requires IP address per node (in the DB)
–Have to NAT on shared machines
–Mobility
Node Owner: what's the interface?
–Keep resources private
–Fine-grain control for exceptional cases
–Enable select services the owner wants to run
–Enable "side agreements"
Opportunistically exploit available capacity
–Desktops
–Unused cluster nodes
–Perhaps largely motivated by private PLs (Condor)
Value is providing a consistent base level (the VM)
–Users may want to name sites, not nodes
Provide incentives to make more nodes available to the public PL
–Dedicated machines
–Opportunistic nodes (from a cluster)

Challenges (2)
When private and public PL meet…
–Private VMs come and go on short notice
–Policy/usage problems (PL currently in DMZ)
–Clusters easier than desktops
»Sometimes public and sometimes private
»Runs slices from both local and public SA
»Owner has to be able to specify how much to commit to each
–Need incentives to provide resources to public PL
Private vs. Regional
–One public PL and many private PLs, all federating
–Who manages a site's public nodes?
»Public MA if the site dedicates nodes (e.g., PLC continues to manage)
–Owner retains right to make root resource-allocation decision
–Sites may be happy to let PLC manage their nodes (business model!)
»Private MA if exploiting a dynamic setting (e.g., cluster)
–In the limit, PLC manages no nodes (just a public research SA)
–PLC needs to learn the set of available nodes (MA has an interface to export)
»Some ISP-like entity manages the nodes on a set of sites' behalf
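The point that an owner must specify how much capacity to commit to each slice authority can be illustrated with a trivial split function. The numbers and names are made up for the example:

```python
# Sketch of an owner's root resource-allocation decision: dividing a
# node's capacity between the public SA and a local/private SA.

def split_capacity(total_cpu_shares, public_fraction):
    """Owner policy: commit public_fraction of capacity to the public SA,
    keep the rest for the local slice authority."""
    public = int(total_cpu_shares * public_fraction)
    local = total_cpu_shares - public
    return {"public_sa": public, "local_sa": local}

# An owner committing a quarter of 100 CPU shares to the public PL.
print(split_capacity(100, 0.25))
```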

Challenges (3)
Incentives
–Markets
–Policies (short-term fixes)
»Change node-to-slice ratio
»Measure aggregate CPU/net usage (coarse-grained)
»Account for benefit provided to the local site (true cost)
»Account for free bandwidth (e.g., Internet2)
»Contribute additional nodes and additional bandwidth
–Risk: reining in heavy users causes light users to contribute less
–May need exchange rate between bw and cpu… markets
–Mandate use of admission control at crunch time
»Could be an exception to the
Allow site-specific "side" agreements
–Public PL is just one "side" agreement
–PLC can't mediate all side agreements (implementation limit)
–A new SA could mediate side agreements between a set of sites
–Other consortiums form (virtual organizations)
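The "exchange rate between bw and cpu" idea can be illustrated with a toy scoring function that converts a site's contributions into one currency, so usage can be weighed against contribution. The rate is invented for the example:

```python
# Toy contribution accounting: express bandwidth and CPU contributions
# in a single unit via a hypothetical exchange rate.

CPU_PER_MBPS = 2.0   # invented rate: 1 Mbps contributed ~ 2 CPU shares

def contribution_score(cpu_shares, bandwidth_mbps):
    """Combine CPU and bandwidth contributions into one score."""
    return cpu_shares + CPU_PER_MBPS * bandwidth_mbps

# A site contributing 2 nodes (50 shares each) plus 10 Mbps.
print(contribution_score(100, 10))
```

A real incentive mechanism (or market) would discover the rate rather than fix it, which is exactly the "markets" point in the slide.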

Challenges (4)
Wireless
–Multiple interfaces (not unique to wireless)
–Schedule in time, since not virtualizable at the MAC level
–Incentives to provide access to unique capability (e.g., WiMax)
»Side agreements with other wireless sites
»Fold into the incentive mechanism
–Flea market for virtual organizations
Management as the system scales
–Help sites better manage their nodes (be more responsive)
–Generally, need better management tools (at PLC too)