PlanetLab: A Distributed Test Lab for Planetary-Scale Network Services


Opportunities
Emerging "killer apps":
–CDNs and P2P networks are the first examples
–The application spreads itself over the Internet
Vibrant research community:
–Distributed hash tables: Chord, CAN, Tapestry, Pastry
–Distributed storage: OceanStore, Mnemosyne, PAST
–But no viable testbed for these ideas

People and Organization
–Project owner: Timothy Roscoe (acting); Hans Mulder (sponsor)
–David Culler, Larry Peterson, Tom Anderson, Milan Milenkovic, Earl Hines
–Seed research & design community: MIT, Utah, Duke, UCSD, Rice, Berkeley, CMU Pittsburgh
–Distributed design team: Washington, Princeton
–Core engineering & design team: IR DSL
–Larger academic, industry, government, and non-profit community

The Hard Problems
–"Slice-ability": multiple experimental services sharing many nodes
–Security and integrity
–Management
–Building blocks and primitives
–Instrumentation

Overall Timeline
–Phase 0 (Q2 '02 – 2H '02): Phase 0 design spec; SILK (safe raw sockets); Mission Control System rev 0; centralized management service; 100 nodes online; PlanetLab Summit Aug 20
–Phase 1 (1H '03 – 2H '03): thinix API in Linux; native thinix management services; 300 nodes online
–Phase 2 (1H '04 – 2H '04): secure thinix; continued development of self-serviced infrastructure; broader consortium to take it to 1000 nodes; transition ops management to the consortium

Target Phase 0 Sites
Intel Berkeley, ICIR, MIT, Princeton, Cornell, Duke, UT, Columbia, UCSB, UCB, UCSD, UCLA, UW, Intel Seattle, KY, Melbourne, Cambridge, Harvard, GIT, Uppsala, Copenhagen, CMU, UPenn, WI, Chicago, Utah, Intel OR, UBC, WashU, ISI, Intel, Rice, Beijing, Tokyo, Barcelona, Amsterdam, Karlsruhe, St. Louis
–33 sites, ~3 nodes each
–Located with friends and family
–Running first design code by 9/01/2002
–10 of 100 nodes in managed colo centers in Phase 0 (40 of 300 in Phase 1)
–Disciplined roll-out

Synopsis
–Open, planetary-scale research and experimental facility
–Dual role: research testbed AND deployment platform
–>1000 viewpoints (compute nodes) on the Internet; resource-rich sites at network crossroads
–Analysis of infrastructure enhancements; experimentation with new applications and services
–Typical use involves a slice across a substantial subset of nodes

New Ideas / Opportunities
Service-centric virtualization:
–Re-emergence of virtual machine technology (VMware, ...)
–Sandboxing to provide virtual servers (Ensim, UML, Vservers)
–Network services require fundamentally simpler virtual machines, making them more scalable (more VMs per physical machine) and focused on service requirements
–Instrumentation and management become further virtualized "slices"
Restricted API => simple machine monitor:
–A very simple monitor pushes complexity up to where it can be managed
–Ultimately, only a very tiny machine monitor can be made truly secure
–The SILK effort (Princeton) captures the most valuable part of the ANets nodeOS in Linux kernel modules
–The API should self-virtualize: deploy the next PlanetLab within the current one
–Host V.1 planetSILK within a V.2 thinix VM
Planned obsolescence of building-block services:
–Community-driven service definition and development
–Service components on a node run in just another VM
–The team develops bootstrap "global" services

Project Strategy
–V.0: seed; 100 nodes; minimal VM / auth. requirements; applications mostly self-sufficient; core team manages the platform
–V.1: get the API & interfaces right; grow to 300 nodes; Linux variant with a constrained API; bootstrap services hosted in VMs; outsource operational support
–V.2: get the underlying architecture and implementation right; grow to more nodes; thin, event-driven monitor; host new services directly; host Phase 1 as a virtual OS; replace the bootstrap services

What Will PlanetLab Enable?
–The open infrastructure that enables the next generation of planetary-scale services to be invented
–Post-cluster, post-Yahoo, post-CDN, post-P2P, ...
–Potentially, the foundation on which the next Internet can emerge
–A different kind of testbed
–Focus and mobilize the network/systems research community
–Position Intel to lead in the emerging Internet
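The "slice-ability" problem named above (many experimental services sharing many nodes, each getting an isolated share) can be sketched as a toy admission-control model. The class and function names and the fixed-share arithmetic are illustrative assumptions for this sketch, not PlanetLab's actual resource-management interface:

```python
# Toy model of slice-ability: several slices share several nodes,
# each slice holding a resource share on every node it spans.
# Node, Slice shares, and admit() are illustrative assumptions.

class Node:
    def __init__(self, name, capacity=100):
        self.name = name
        self.capacity = capacity      # abstract resource units (e.g. CPU share)
        self.allocations = {}         # slice name -> units granted on this node

    def free(self):
        return self.capacity - sum(self.allocations.values())

def admit(slice_name, nodes, units_per_node):
    """Grant the slice a share on every node, or on none (all-or-nothing)."""
    if any(n.free() < units_per_node for n in nodes):
        return False
    for n in nodes:
        n.allocations[slice_name] = units_per_node
    return True

nodes = [Node(f"planetlab{i}.example.org") for i in range(3)]
assert admit("chord", nodes, 40)       # first service fits on all nodes
assert admit("oceanstore", nodes, 40)  # second service shares the same nodes
assert not admit("pastry", nodes, 40)  # would oversubscribe -> rejected whole
```

The all-or-nothing grant mirrors the deck's notion that a slice is a single logical allocation across a substantial subset of nodes, not a per-node bargain.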
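The "safe raw sockets" idea behind SILK, mentioned in the timeline, can be illustrated with a toy port-ownership filter: a slice gets raw packet access, but only for traffic on ports it has bound. The packet representation and the binding rules below are assumptions made for illustration, not SILK's real kernel interface:

```python
# Toy illustration of safe raw sockets: each slice may only send and
# receive raw packets on ports bound to it. bind_port/may_send/deliver
# are illustrative names, not SILK's actual API.

bindings = {}                          # local port -> owning slice name

def bind_port(slice_name, port):
    """A slice claims a local port; the first claim wins."""
    if port in bindings:
        return False
    bindings[port] = slice_name
    return True

def may_send(slice_name, src_port):
    """Outgoing raw packets must carry a source port the slice owns."""
    return bindings.get(src_port) == slice_name

def deliver(dst_port):
    """Incoming raw packets go only to the slice bound to the port."""
    return bindings.get(dst_port)

assert bind_port("chord", 4001)
assert not bind_port("pastry", 4001)   # port already owned by another slice
assert may_send("chord", 4001)
assert not may_send("pastry", 4001)    # cannot spoof another slice's port
assert deliver(9999) is None           # unbound port: packet is dropped
```

Restricting raw access by port ownership is what lets complexity move up out of the monitor, as the "restricted API" slide argues: the kernel enforces one simple rule, and everything else lives in the slices.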