1 PlanetLab: An Overlay Testbed for Broad-Coverage Services Bavier, Bowman, Chun, Culler, Peterson, Roscoe, Wawrzoniak Presented by Jason Waddle

2 Overview
1. What is PlanetLab?
2. Architecture
   1. Local: Nodes
   2. Global: Network
3. Details
   1. Virtual Machines
   2. Maintenance

3 What Is PlanetLab?
Geographically distributed overlay network
Testbed for broad-coverage network services

4 PlanetLab Goal …to support seamless migration of an application from an early prototype, through multiple design iterations, to a popular service that continues to evolve.

5 Priorities
Diversity of Network
– Geographic
– Links: edge sites, co-location and routing centers, homes (DSL, cable modem)
Flexibility
– Allow experimenters maximal control over PlanetLab nodes
– Securely and fairly

6 PlanetLab Architecture
Node-level
– Several virtual machines on each node, each running a different service
  – Resources distributed fairly
  – Services are isolated from each other
Network-level
– Node managers, agents, brokers, and service managers provide the interface and maintain PlanetLab

7-10 Services Run in Slices (diagram built up across four slides): each PlanetLab node hosts several virtual machines; a service runs in a slice, a set of virtual machines spread across the nodes (Service / Slice A, B, and C in the figure).

11 Node Architecture Goals
Provide a virtual machine for each service running on a node
Isolate virtual machines
Allow maximal control over virtual machines
Fair allocation of resources
– Network, CPU, memory, disk

12 One Extreme: Software Runtime (e.g., Java Virtual Machine)
High-level API
Depends on the OS to provide protection and resource allocation
Not flexible

13 Other Extreme: Complete Virtual Machine (e.g., VMware)
Low-level API (hardware)
– Maximum flexibility
Excellent protection
High CPU/memory overhead
– Cannot share common resources (OS, common filesystem) among virtual machines

14 Mainstream Operating System
API and protection at the same level (system calls)
Simple implementation (e.g., slice = process group)
Efficient use of resources (shared memory, common OS)
Bad protection and isolation
Maximum control and security?

15 PlanetLab Virtualization: VServers
Kernel patch to a mainstream OS (Linux)
Gives the appearance of a separate kernel for each virtual machine
– Root privileges restricted to activities that do not affect other vservers
Some modifications: resource control (e.g., file handles, port numbers) and protection facilities added

16 PlanetLab Network Architecture
Node manager (one per node)
– Creates slices for service managers when they present valid tickets
– Allocates resources for vservers
Resource monitor (one per node)
– Tracks the node's available resources
– Tells agents about the available resources
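As a purely illustrative sketch of the resource monitor's role, the snippet below models a monitor that measures what is free on its node and reports it to an agent; the field names, numbers, and Python representation are assumptions, not PlanetLab's actual interface.

```python
# Sketch: a per-node resource monitor advertising free resources to an agent.
from dataclasses import dataclass

@dataclass
class FreeResources:
    node: str
    cpu_share: float          # fraction of CPU currently unreserved
    memory_mb: int
    out_bandwidth_kbps: int
    disk_mb: int

def measure(node_name: str) -> FreeResources:
    # A real monitor would read these from the kernel; placeholders here.
    return FreeResources(node_name, cpu_share=0.4, memory_mb=256,
                         out_bandwidth_kbps=500, disk_mb=2048)

# The agent collects one such report per node and uses them to decide
# which tickets it can safely issue.
print(measure("planetlab1.example.org"))
```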

17 PlanetLab Network Architecture
Agents (centralized)
– Track nodes' free resources
– Advertise resources to resource brokers
– Issue tickets to resource brokers
  Tickets may be redeemed with node managers to obtain the resource

18 PlanetLab Network Architecture
Resource brokers (per service)
– Obtain tickets from agents on behalf of service managers
Service managers (per service)
– Obtain tickets from the broker
– Redeem tickets with node managers to acquire resources
– If the resources can be acquired, start the service
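The three slides above describe a ticket-based handshake. The sketch below models it end to end: an agent signs a ticket for resources on a node, the broker and service manager carry it, and the node manager verifies it before creating the slice. The data fields, the HMAC signing, and all names are illustrative assumptions, not the actual PlanetLab protocol.

```python
# Sketch of the agent -> broker -> service manager -> node manager ticket flow.
import hashlib, hmac, json
from dataclasses import dataclass

AGENT_KEY = b"agent-signing-key"   # hypothetical key the node manager trusts

@dataclass
class Ticket:
    node: str
    slice_name: str
    resources: dict                # e.g. {"cpu_share": 0.1, "out_kbps": 1000}
    signature: bytes = b""

def _payload(t: Ticket) -> bytes:
    return json.dumps([t.node, t.slice_name, t.resources], sort_keys=True).encode()

def agent_issue(node: str, slice_name: str, resources: dict) -> Ticket:
    t = Ticket(node, slice_name, resources)
    t.signature = hmac.new(AGENT_KEY, _payload(t), hashlib.sha256).digest()
    return t

def node_manager_redeem(t: Ticket) -> bool:
    """Verify the ticket; if valid, the node manager would create the slice's vserver."""
    expected = hmac.new(AGENT_KEY, _payload(t), hashlib.sha256).digest()
    ok = hmac.compare_digest(expected, t.signature)
    if ok:
        print(f"creating slice {t.slice_name} on {t.node} with {t.resources}")
    return ok

# The broker and service manager simply pass the ticket along:
ticket = agent_issue("planetlab1.example.org", "sliceA", {"cpu_share": 0.1, "out_kbps": 1000})
assert node_manager_redeem(ticket)
```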

19-31 Obtaining a Slice (diagram built up across these slides): the resource monitor reports its node's free resources to the agent; the agent issues a ticket to the broker; the service manager obtains the ticket from the broker and redeems it with the node manager, which then creates the slice.

32 PlanetLab Virtual Machines: VServers
Extend the idea of chroot(2)
– New vserver created by a system call
– Descendant processes inherit the vserver
– Unique filesystem, SysV IPC, and UID/GID space
– Limited root privilege: can't control the host node
– Irreversible

33 Scalability
Reduce disk footprint using copy-on-write
– Immutable flag provides file-level CoW
– Vservers share a 508 MB basic filesystem; each additional vserver takes 29 MB
Increase limits on kernel resources (e.g., file descriptors)
– Is the kernel designed to handle this? (inefficient data structures?)
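A quick back-of-the-envelope check of the disk numbers above (the 508 MB and 29 MB figures come from the slide; the rest is plain arithmetic):

```python
# Disk footprint with and without the shared copy-on-write base filesystem.
BASE_MB = 508          # shared base filesystem (from the slide)
PER_VSERVER_MB = 29    # private delta per additional vserver (from the slide)

def footprint_with_cow(n: int) -> int:
    """One shared base plus a small private delta per vserver."""
    return BASE_MB + n * PER_VSERVER_MB

def footprint_without_cow(n: int) -> int:
    """Every vserver carries its own full copy of the base filesystem."""
    return n * (BASE_MB + PER_VSERVER_MB)

for n in (10, 100, 1000):
    print(n, footprint_with_cow(n), footprint_without_cow(n))
# 100 vservers: ~3.4 GB with CoW sharing versus ~54 GB without it.
```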

34 Protected Raw Sockets
Services may need low-level network access
– Cannot allow them access to other services' packets
Provide protected raw sockets
– TCP/UDP sockets bound to a local port
– Incoming packets delivered only to the service that registered the corresponding port
– Outgoing packets scanned to prevent spoofing
ICMP also supported
– 16-bit identifier placed in the ICMP header
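A minimal model of the demultiplexing and anti-spoofing policy just described, written as plain Python rather than actual raw-socket code; the port table, slice names, and function names are illustrative assumptions.

```python
# Protected raw sockets, as policy: incoming packets go only to the slice
# that registered the destination port, and outgoing packets must use a
# source port the sending slice actually owns.

# slice name -> set of locally bound (protocol, port) pairs
registrations: dict[str, set[tuple[str, int]]] = {
    "sliceA": {("udp", 5000)},
    "sliceB": {("tcp", 8080)},
}

def deliver_incoming(proto: str, dst_port: int) -> str | None:
    """Return the slice that registered (proto, dst_port), or None to drop."""
    for slice_name, ports in registrations.items():
        if (proto, dst_port) in ports:
            return slice_name
    return None

def allow_outgoing(slice_name: str, proto: str, src_port: int) -> bool:
    """Reject packets whose source port the slice does not own (anti-spoofing)."""
    return (proto, src_port) in registrations.get(slice_name, set())

assert deliver_incoming("udp", 5000) == "sliceA"
assert not allow_outgoing("sliceB", "udp", 5000)   # sliceB cannot spoof sliceA's port
```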

35 Resource Limits
Node-wide cap on outgoing network bandwidth
– Protect the world from PlanetLab services
Isolation between vservers: two approaches
– Fairness: each of N vservers gets 1/N of the resources during contention
– Guarantees: each slice reserves a certain amount of resources (e.g., 1 Mbps bandwidth, 10 Mcps CPU); left-over resources are distributed fairly
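One way to read the "guarantees plus fair left-over" rule is sketched below; the node capacity and reservations are made-up numbers used only to show the arithmetic.

```python
# Split a node's outgoing-bandwidth cap among slices: each slice first gets
# its reservation, then the unreserved remainder is divided equally.

def allocate_bandwidth(cap_mbps: float, reservations: dict[str, float]) -> dict[str, float]:
    reserved = sum(reservations.values())
    assert reserved <= cap_mbps, "node is over-committed"
    leftover_share = (cap_mbps - reserved) / len(reservations)
    return {s: r + leftover_share for s, r in reservations.items()}

# Hypothetical example: 10 Mbps node cap, three slices each reserving 1 Mbps.
print(allocate_bandwidth(10.0, {"sliceA": 1.0, "sliceB": 1.0, "sliceC": 1.0}))
# -> each slice gets 1.0 + 7/3 ≈ 3.33 Mbps
```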

36 Linux and CPU Resource Management
The Linux scheduler provides fairness per process, not per vserver
– A vserver with many processes hogs the CPU
No current way for the scheduler to provide guaranteed slices of CPU time
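A toy illustration of why per-process fairness is the wrong granularity (the process counts are invented): a vserver's CPU share ends up proportional to how many processes it runs, not to 1/N per vserver.

```python
# Per-process fair scheduling: every runnable process gets an equal share,
# so a vserver's aggregate share grows with its process count.

def per_vserver_share(process_counts: dict[str, int]) -> dict[str, float]:
    total = sum(process_counts.values())
    return {v: n / total for v, n in process_counts.items()}

# Hypothetical node with three vservers; one of them spawns 48 workers.
print(per_vserver_share({"vserverA": 48, "vserverB": 1, "vserverC": 1}))
# -> vserverA gets 96% of the CPU instead of the 1/3 a per-vserver policy would give
```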

37 PlanetLab Network Management
1. PlanetLab nodes boot a small Linux OS from CD and run from a RAM disk
2. The node contacts a boot server
3. The boot server sends a (signed) startup script, which can
   – Boot normally, or
   – Write a new filesystem, or
   – Start sshd for remote PlanetLab admin login
Nodes can be remotely power-cycled
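The key property of step 3 is that the node runs only scripts the boot server has signed. The sketch below shows that idea using an HMAC shared secret for brevity; PlanetLab's actual boot manager uses its own signing and key-distribution scheme, and the key and script handling here are hypothetical.

```python
# Sketch: execute a startup script only if its signature verifies.
import hashlib, hmac, subprocess

BOOT_KEY = b"secret-baked-into-the-boot-cd"   # hypothetical

def run_if_authentic(script: bytes, signature: bytes) -> None:
    expected = hmac.new(BOOT_KEY, script, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        raise RuntimeError("startup script failed verification; refusing to run")
    subprocess.run(["/bin/sh", "-c", script.decode()], check=True)
```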

38 Dynamic Slice Creation
1. Node manager verifies tickets from the service manager
2. Creates a new vserver
3. Creates an account on the node and on the vserver

39 User Logs in to a PlanetLab Node
/bin/vsh immediately:
1. Switches to the account's associated vserver
2. chroot()s to the associated root directory
3. Relinquishes true root privileges
4. Switches UID/GID to the account on the vserver
– The transition to the vserver is transparent: it appears the user logged into the PlanetLab node directly
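Steps 2-4 map onto standard POSIX calls, sketched below from Python; step 1 (entering the vserver context) goes through the vserver kernel interface and is omitted. The paths, UID, and GID are hypothetical.

```python
# Sketch of the tail end of a vsh-style login: confine the process to the
# vserver's root directory, then irreversibly drop root to the account's IDs.
import os

def enter_account(vserver_root: str, uid: int, gid: int) -> None:
    os.chroot(vserver_root)            # step 2: filesystem now rooted at the vserver image
    os.chdir("/")
    os.setgid(gid)                     # drop group privileges while still root...
    os.setuid(uid)                     # steps 3-4: ...then give up root for good
    os.execv("/bin/bash", ["-bash"])   # hand over to the user's login shell

# e.g. enter_account("/vservers/slice_a", uid=1042, gid=1042)   # hypothetical values
```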

40 PlanetLab Today
More than 220 nodes
Over 100 sites
More than 200 research projects have used PlanetLab
Goal: over 1000 geographically diverse nodes

41 PlanetLab Today www.planet-lab.org

