PlanetLab: Present and Future
Steve Muir
3rd August, 2005
(slides taken from Larry Peterson)

PlanetLab Today
Global distributed systems infrastructure
– platform for long-running services
– testbed for network experiments
583 nodes around the world
– 30 countries
– 250+ institutions (universities, research labs, gov't)
Standard PC servers
– 150–200 users per server
– 30–40 active per hour, 5–10 at any given time
– memory and CPU both heavily over-utilised

Node Software
Linux Fedora Core 2
– kernel being upgraded to FC4
– always up to date with security-related patches
VServer patches provide security
– each user gets own VM ('slice')
– limited root capabilities
CKRM/VServer patches provide resource management
– proportional-share CPU scheduling
– hierarchical token bucket controls network Tx bandwidth
– physical memory limits
– disk quotas
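
To make the bandwidth knob concrete, here is a minimal sketch of how a per-slice transmit cap could be applied with Linux's hierarchical token bucket. The limit values mirror the rspec shown later in the deck; the attribute names, device, and class IDs are illustrative, not PlanetLab's actual mechanism.

```python
import subprocess

# Hypothetical per-slice limits; names and values are illustrative.
slice_limits = {
    "cpu_share": 32,          # proportional CPU share (CKRM)
    "mem_limit_mb": 128,      # physical memory cap
    "disk_quota_gb": 5,       # per-slice disk quota
    "base_rate": "1kbit",     # HTB guaranteed Tx rate
    "burst_rate": "100mbit",  # HTB Tx ceiling
}

def apply_tx_limit(dev: str, classid: str, limits: dict) -> None:
    """Attach an HTB class capping a slice's transmit bandwidth."""
    subprocess.run(
        ["tc", "class", "add", "dev", dev, "parent", "1:",
         "classid", classid,
         "htb", "rate", limits["base_rate"], "ceil", limits["burst_rate"]],
        check=True,
    )

# Assumes an HTB root qdisc already exists on the device, e.g.
#   tc qdisc add dev eth0 root handle 1: htb default 10
apply_tx_limit("eth0", "1:20", slice_limits)
```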

Issues
Multiple VM types
– Linux vservers, Xen domains
Federation
– EU, Japan, China
Resource allocation
– policy, markets
Infrastructure services
– delegation
Need to define the PlanetLab Architecture

Key Architectural Ideas
Distributed virtualization
– slice = set of virtual machines
Unbundled management
– infrastructure services run in their own slice
Chain of responsibility
– account for behavior of third-party software
– manage trust relationships
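
As a toy model of "slice = set of virtual machines" (all names and IDs below are invented for illustration):

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VirtualMachine:
    node: str     # hostname of the node hosting this VM
    ctx_id: int   # vserver context ID on that node

@dataclass
class Slice:
    """A slice is a network-wide name bound to a set of VMs,
    one per node where the slice is instantiated (toy model)."""
    name: str
    vms: set = field(default_factory=set)

s = Slice("princeton_codeen")
s.vms.add(VirtualMachine("node1.cs.princeton.edu", 510))
```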

N x N Trust Relationships
Sites: Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, ETH, Harvard, HP Labs, Intel, NEC Labs, Purdue, UCSD, SICS, Cambridge, Cornell, …
Slices: princeton_codeen, nyu_d, cornell_beehive, att_mcash, cmu_esm, harvard_ice, hplabs_donutlab, idsl_psepr, irb_phi, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, umd_scriptroute, …
Rather than each site establishing trust with every slice directly (N x N relationships), PLC acts as a trusted intermediary between the two sets.

Principals
Node Owners
– host one or more nodes (retain ultimate control)
– select an MA and approve one or more SAs
Service Providers (Developers)
– implement and deploy network services
– responsible for the service's behavior
Management Authority (MA)
– installs and maintains software on nodes
– creates VMs and monitors their behavior
Slice Authority (SA)
– registers service providers
– creates slices and binds them to the responsible provider

Trust Relationships
(1) Owner trusts MA to map network activity to the responsible slice
(2) Owner trusts SA to map each slice to its responsible providers
(3) Provider trusts SA to create VMs on its behalf
(4) Provider trusts MA to provide working VMs and not falsely accuse it
(5) SA trusts provider to deploy responsible services
(6) MA trusts owner to keep nodes physically secure

Architectural Elements
(diagram) MA ↔ node database; the owner's node runs NM + VMM hosting VMs, one of which runs the SCS (slice creation service); SA ↔ slice database; service providers run in the remaining VMs.

Narrow Waist
Name space for slices
Node Manager Interface
rspec = < vm_type = linux_vserver,
          cpu_share = 32,
          mem_limit = 128MB,
          disk_quota = 5GB,
          base_rate = 1Kbps,
          burst_rate = 100Mbps,
          sustained_rate = 1.5Mbps >
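
The same rspec rendered as a typed structure; the field names and default values come from the slide, but the class itself is only a sketch of what crosses the Node Manager interface:

```python
from dataclasses import dataclass

@dataclass
class RSpec:
    """Resource specification handed to the Node Manager (sketch)."""
    vm_type: str = "linux_vserver"
    cpu_share: int = 32              # proportional CPU share
    mem_limit: str = "128MB"         # physical memory cap
    disk_quota: str = "5GB"
    base_rate: str = "1Kbps"         # guaranteed Tx rate
    burst_rate: str = "100Mbps"      # short-term Tx ceiling
    sustained_rate: str = "1.5Mbps"  # long-term Tx average
```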

Node Boot/Install Process (Node ↔ PLC Boot Server)
1. Node boots from BootCD (Linux loaded)
2. Hardware initialized
3. Network config read from floppy
4. Node contacts PLC (MA)
5. PLC sends boot manager
6. Node executes boot manager
7. Node key read into memory from floppy
8. Boot manager invokes Boot API
9. PLC verifies node key, sends current node state
10. If state = "install", run installer
11. Node updates its state via Boot API
12. PLC verifies node key, changes state to "boot"
13. Node chain-boots (no restart)
14. Node booted
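
A highly simplified sketch of the boot manager's decision logic (steps 8 to 13 above); the endpoint, method names, and helpers are invented for illustration, not the real Boot API:

```python
import xmlrpc.client

# Hypothetical endpoint and method names.
BOOT_API = "https://boot.example.org/API/"

def run_installer() -> None:
    """Placeholder: partition the disk and install the node image."""

def chain_boot() -> None:
    """Placeholder: kexec into the installed kernel without restarting."""

def boot_manager(node_key: str) -> None:
    api = xmlrpc.client.ServerProxy(BOOT_API)
    # Steps 8-9: authenticate with the node key, learn our current state.
    state = api.GetNodeState(node_key)
    if state == "install":
        run_installer()                      # step 10
        api.SetNodeState(node_key, "boot")   # steps 11-12
    chain_boot()                             # step 13: no hardware restart
```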

PlanetFlow
Logs every outbound IP flow on every node
– accesses ulogd via Proper
– retrieves packet headers, timestamps, context IDs (batched)
– used to audit traffic
Aggregated and archived at PLC
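
To make the auditing idea concrete, here is a toy aggregation over flow records; the record layout and the context-to-slice mapping are invented, not PlanetFlow's actual schema:

```python
from collections import defaultdict

# Invented record layout: (context_id, src, dst, dst_port, bytes)
flow_log = [
    (510, "10.0.0.1", "203.0.113.7", 80, 12000),
    (510, "10.0.0.1", "198.51.100.2", 443, 3400),
    (622, "10.0.0.1", "203.0.113.9", 53, 180),
]

# Map vserver context IDs back to slice names (illustrative values).
context_to_slice = {510: "princeton_codeen", 622: "arizona_stork"}

bytes_per_slice: dict = defaultdict(int)
for ctx, src, dst, port, nbytes in flow_log:
    bytes_per_slice[context_to_slice[ctx]] += nbytes

# A complaint about traffic to some destination can now be traced
# to the responsible slice, and from there to its users and PI.
print(dict(bytes_per_slice))
```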

Chain of Responsibility
1. Join request: PI submits consortium paperwork and requests to join
2. PI activated: PLC verifies PI, activates account, enables site (logged)
3. Users activated: users create accounts with keys, PI activates accounts (logged)
4. Slice created: PI creates slice and assigns users to it (logged)
5. Nodes added to slices: users add nodes to their slice (logged)
6. Slice traffic logged: experiments run on nodes and generate traffic (logged by Netflow)
7. Traffic logs centrally stored: PLC periodically pulls traffic logs from nodes
Network activity → slice → responsible users & PI

Slice Creation
1. PI → PLC (SA): SliceCreate( ), SliceUsersAdd( )
2. User → PLC (SA): SliceNodesAdd( ), SliceAttributeSet( ), SliceInstantiate( )
3. NM on each node → PLC: SliceGetAll( ), downloading slices.xml
4. NM directs the VMM to create the slice's VM on each node
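
The central path, written out as client code; the method names come from the slide, but the endpoint and signatures are invented for illustration:

```python
import xmlrpc.client

# Hypothetical PLC endpoint; signatures are guesses.
plc = xmlrpc.client.ServerProxy("https://plc.example.org/API/")

slice_name = "princeton_demo"   # hypothetical slice
plc.SliceCreate(slice_name)
plc.SliceUsersAdd(slice_name, ["alice@princeton.edu"])
plc.SliceNodesAdd(slice_name, ["node1.cs.princeton.edu"])
plc.SliceAttributeSet(slice_name, "cpu_share", 32)
plc.SliceInstantiate(slice_name)

# Meanwhile, each node's NM periodically syncs its local view:
#   slices = plc.SliceGetAll()   # the slices.xml download in the diagram
```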

Slice Creation (delegated)
1. PI → PLC (SA): SliceCreate( ), SliceUsersAdd( )
2. User → PLC (SA): SliceAttributeSet( ), SliceGetTicket( )
3. User distributes the ticket to a slice creation service on each node
4. Slice creation service → NM: SliverCreate(ticket), which creates the VM

Brokerage Service
1. PI → PLC (SA): SliceCreate( ), SliceUsersAdd( )
2. Broker → PLC (SA): SliceAttributeSet( ), SliceGetTicket( )
3. Ticket is distributed to the brokerage service on each node
4. Brokerage service → NM: rcap = PoolCreate(ticket)

Brokerage Service (cont)
1. User → Broker: BuyResources( )
2. Broker contacts the relevant nodes
3. Broker → NM on each node: PoolSplit(rcap, slice, rspec)
4. The NM binds the purchased share of the pool to the user's VM
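
The ticket-and-pool calls, stitched together as a sketch; the call names come from the slides, while the stub class, signatures, and values are invented:

```python
class NodeManagerStub:
    """Stand-in for the per-node Node Manager interface (illustrative)."""

    def PoolCreate(self, ticket: str) -> str:
        # Validate the SA-signed ticket, reserve the resources it
        # describes, and return a resource capability (rcap).
        return "rcap-1234"

    def PoolSplit(self, rcap: str, slice_name: str, rspec: dict) -> None:
        # Carve a share out of the broker's pool and bind it to
        # the named slice's VM on this node.
        pass

def broker_flow(nm: NodeManagerStub, ticket: str) -> None:
    # The broker first redeems its ticket for a resource pool...
    rcap = nm.PoolCreate(ticket)
    # ...then, when a user calls BuyResources( ), peels off a share:
    rspec = {"cpu_share": 4, "sustained_rate": "1.5Mbps"}
    nm.PoolSplit(rcap, "ucsd_share", rspec)

broker_flow(NodeManagerStub(), "ticket-signed-by-SA")
```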