OneLab - PlanetLab Europe Presentation for Rescom 2007 (presentation transcript)

1 OneLab - PlanetLab Europe Presentation for Rescom 2007
Serge Fdida, Thomas Bourgeau
Laboratoire LIP6, CNRS, Université Pierre et Marie Curie (Paris 6)

2 The OneLab Project www.one-lab.org
A Europe-wide project:
- A STREP (Specific Targeted Research Project) funded by the European Commission under the FP6 IST programme
- Start: September 2006; duration: 2 years
- Funding: 1.9 M€; total budget: 2.86 M€
- Aim: extend, deepen, and federate PlanetLab

3 The OneLab Consortium
Project leader: Université Pierre et Marie Curie (France)
Technical direction: INRIA (France)
Partners:
- Universidad Carlos III de Madrid (Spain)
- Université Catholique de Louvain (Belgium)
- Università di Napoli (Italy)
- Università di Pisa (Italy)
- Alcatel Italia (Italy)
- Quantavis (Italy)
- Telekomunikacja Polska (Poland)
- France Telecom (France)

4 OneLab Goals OneLab is a concrete path towards Experimental Facilities
Based on PlanetLab. OneLab will help us better understand federation, which will be key to Experimental Facility success. OneLab will also make considerable progress in:
- Extending: extend PlanetLab into new environments, beyond the traditional wired internet.
- Deepening: deepen PlanetLab's monitoring capabilities.
- Federating: provide a European administration for PlanetLab nodes in Europe.

5 Outline PlanetLab OneLab PlanetLab Europe Practice

6 PlanetLab
An open platform for:
- testing overlays,
- deploying experimental services,
- deploying commercial services,
- developing the next generation of internet technologies.
A set of virtual machines:
- distributed virtualization
- each of 350+ network services runs in its own slice

7 PlanetLab nodes
- Single PLC, located at Princeton
- 784 machines spanning 382 sites and 35+ countries
- Nodes within a LAN-hop of 2M+ users
- Administration at Princeton University: Prof. Larry Peterson, six full-time systems administrators

8 Usage Stats
- Slices: 350 - 425
- AS peers: 6000
- Users: 1028
- Bytes per day: on the order of terabytes; Coral CDN represents about half of this
- IP flows per day: 190M
- Unique IP addresses per day: 1M
Experiments on PlanetLab figure in many papers at major networking conferences.

9 Slices

10 Slices

11 Slices

12 Slices

13 User Opt-in (figure: client and server connecting through a NAT)

14 Virtual Machine Monitor (VMM)
Per-Node View (figure: Node Manager and Local Admin VMs alongside VM1…VMn on top of the VMM)
VMM: currently Linux with vserver extensions; could eventually be Xen

15 Virtual Machine Monitor (VMM)
Per-Node View (figure: Node Manager, Local Admin, and VM1…VMn on top of the VMM, kernel, and hardware)

16 Virtual Machine Monitor (VMM)
Per-Node Mechanisms (figure: PlanetFlow, SliceStat, pl_scs, pl_mom, SliverMgr, Proper, Node Mgr, Owner VM, and VM1…VMn on top of the VMM)
VMM = Linux kernel (Fedora Core)
+ Vservers (namespace isolation)
+ Schedulers (performance isolation)
+ VNET (network virtualization)

17 PlanetLab

18 PlanetLab (speaker note: add an image of ad hoc, WiMAX, …)

19 Architecture (1)
Node Operating System:
- isolate slices
- audit behavior
PlanetLab Central (PLC):
- remotely manage nodes
- bootstrap service to instantiate and control slices
Third-party Infrastructure Services:
- monitor slice/node health
- discover available resources
- create and configure a slice
- resource allocation

20 Architecture (2)
(figure: owners 1…N host PlanetLab nodes; the Slice Authority and Management handle software updates, auditing data, and slice creation; users and service developers request a slice, receive a new slice ID, access the slice, identify slice users to resolve abuse, and learn about nodes)

21 Architecture (3)
(figure: node owner, service developer, MA with its node database, SA with its slice database, SCS, and a node running NM + VMM with VMs)
- MA = management authority
- NM = node manager: creates virtual machines (VMs), controls resources allocated to VMs
- VMM = virtual machine monitor: execution environment for VMs
- SA = slice authority
- SCS = slice creation service
- VM = virtual machine: currently linux-vserver, could be xen-domain, or other

22 Requirements
1) Global platform that supports both short-term experiments and long-running services.
- services must be isolated from each other (performance isolation, name space isolation)
- multiple services must run concurrently
Distributed Virtualization: each service runs in its own slice, a set of VMs.

23 Requirements
2) Must convince sites to host nodes running code written by unknown researchers: protect the Internet from PlanetLab.
Chain of Responsibility:
- explicit notion of responsibility
- trace network activity to the responsible party

24 Requirements
3) Federation
- universal agreement on a minimal core (narrow waist)
- allow independent pieces to evolve independently
- identify principals and the trust relationships among them

25 Trust Relationships
(figure: without a trusted intermediary, sites such as Princeton, Berkeley, Washington, MIT, Brown, CMU, NYU, ETH, Harvard, HP Labs, Intel, and NEC Labs and slices such as princeton_codeen, harvard_ice, hplabs_donutlab, paris6_landmarks, mit_dht, mcgill_card, huji_ender, arizona_stork, ucb_bamboo, ucsd_share, and umd_scriptroute would need N x N pairwise relationships; PLC acts as the trusted intermediary)

26 Trust Relationships (cont)
(figure: trust relationships 1-4 among the service developer (user), the node owner, and PLC)
1) PLC expresses trust in a user by issuing it credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure

27 Trust Relationships (cont)
(figure: trust relationships 1-6 among the node owner, management authority, service developer (user), and slice authority)
1) PLC expresses trust in a user by issuing credentials to access a slice
2) Users trust PLC to create slices on their behalf and inspect credentials
3) Owner trusts PLC to vet users and map network activity to the right user
4) PLC trusts owner to keep nodes physically secure
5) MA trusts SA to reliably map slices to users
6) SA trusts MA to provide working VMs

28 VMM
Linux Vserver:
- significant mind-share
- scales to hundreds of VMs per node (12MB each)
Scheduling:
- CPU: fair share per slice (guarantees possible)
- link bandwidth: fair share per slice; average rate limit 1.5Mbps (24-hour bucket size); peak rate limit set by each site (100Mbps default)
- disk: 5GB quota per slice (limits run-away log files)
- memory: no limit
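As a back-of-envelope check, the 1.5 Mbps average rate limit with a 24-hour bucket implies a daily per-slice byte budget; the constants come from the slide, the arithmetic below is ours:

```shell
# Daily byte budget implied by a 1.5 Mbps average rate limit
# enforced over a 24-hour token bucket.
BITS_PER_SEC=1500000
SECONDS_PER_DAY=86400
BYTES_PER_DAY=$(( BITS_PER_SEC * SECONDS_PER_DAY / 8 ))
echo "daily per-slice budget: ${BYTES_PER_DAY} bytes"
```

That works out to 16,200,000,000 bytes, roughly 16 GB per slice per day.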

29 VMM (cont)
VNET:
- socket programs “just work”, including raw sockets
- slices should be able to send only well-formed IP packets to non-blacklisted hosts
- slices should be able to receive only packets related to connections that they initiated (e.g., replies) and packets destined for bound ports (e.g., server requests)
- essentially a switching firewall for sockets; leverages Linux's built-in connection tracking modules
- also supports virtual devices: standard PF_PACKET behavior, used to connect to a “virtual ISP”
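The receive policy described above is essentially a stateful firewall. As a rough analogy only (VNET itself is kernel code, not an iptables ruleset, and the port number is a made-up example), the same policy could be expressed with Linux connection tracking:

```shell
# Accept packets belonging to connections the slice itself initiated
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
# Accept packets destined for a port the slice has bound (example port)
iptables -A INPUT -p tcp --dport 8080 -j ACCEPT
# Drop everything else
iptables -A INPUT -j DROP
```

These rules require root and are shown purely to illustrate the policy, not VNET's implementation.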

30 Long-Running Services
Content Distribution:
- CoDeeN: Princeton (serving > 1 TB of data per day)
- Coral CDN: NYU
- Cobweb: Cornell
Internet Measurement:
- ScriptRoute: Washington, Maryland
Anomaly Detection & Fault Diagnosis:
- PIER: Berkeley, Intel
- PlanetSeer: Princeton
DHT:
- Bamboo (OpenDHT): Berkeley, Intel
- Chord (DHash): MIT

31 Services (cont)
Routing:
- i3: Berkeley
- Virtual ISP: Princeton
DNS:
- CoDNS: Princeton
- CoDoNs: Cornell
Storage & Large File Transfer:
- LOCI: Tennessee
- CoBlitz: Princeton
- Shark: NYU
Multicast:
- End System Multicast: CMU
- Tmesh: Michigan

32 Node Manager
SliverMgr:
- creates VMs and sets resource allocations
- interacts with the bootstrap slice creation service (pl_scs) and with third-party slice creation & brokerage services (using tickets)
Proper (PRivileged OPERations):
- grants unprivileged slices access to privileged info
- effectively “pokes holes” in the namespace isolation
- examples: files (open, get/set flags), directories (mount/unmount), sockets (create/bind), processes (fork/wait/kill)

33 Auditing & Monitoring
PlanetFlow:
- logs every outbound IP flow on every node
- accesses ulogd via Proper
- retrieves packet headers, timestamps, context ids (batched)
- used to audit traffic
- aggregated and archived at PLC
SliceStat:
- has access to kernel-level / system-wide information
- accesses /proc via Proper
- used by global monitoring services
- used to performance-debug services
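To give a flavor of the per-slice aggregation PLC performs on these logs, here is a toy example; the record layout (slice, source, destination, bytes) and the slice names are invented for illustration and are not the real PlanetFlow format:

```shell
# Fabricated flow records: slice, src, dst, bytes
cat > /tmp/flows.txt <<'EOF'
upmc_test 10.0.0.1 192.0.2.7 1500
inria_dht 10.0.0.2 192.0.2.9 700
upmc_test 10.0.0.1 198.51.100.3 2500
EOF
# Sum outbound bytes per slice, as an auditor might
awk '{ bytes[$1] += $4 } END { for (s in bytes) print s, bytes[s] }' /tmp/flows.txt | sort
```

This prints one total per slice (here, 4000 bytes for upmc_test and 700 for inria_dht).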

34 Infrastructure Services
Brokerage Services:
- Sirius: Georgia
- Bellagio: UCSD, Harvard, Intel
- Tycoon: HP
Environment Services:
- Stork: Arizona
- Application Manager: MIT
Monitoring/Discovery Services:
- CoMon: Princeton
- PsEPR: Intel
- SWORD: Berkeley
- IrisLog: Intel

35 Outline PlanetLab OneLab PlanetLab Europe Practice

36 OneLab Goals Extend Deepen Federate
- Extend: extend PlanetLab into new environments, beyond the traditional wired internet.
- Deepen: deepen PlanetLab's monitoring capabilities.
- Federate: provide a European administration for PlanetLab nodes in Europe.

37 OneLab Workpackages
- WP0 Management (UPMC)
- WP1 Operations (UPMC, with INRIA, FT, ALA, TP)
- WP2 Integration (INRIA, with UPMC)
- WP3 Monitoring (IRC lead)
  - WP3A Passive monitoring (IRC)
  - WP3B Topology monitoring (UPMC)
- WP4 New Environments (FT lead)
  - WP4A WiMAX component (UCL)
  - WP4B UMTS component (UniNa, with ALA)
  - WP4C Multihomed component (UC3M, with IRC)
  - WP4D Wireless ad hoc component (FT, with TP)
  - WP4E Emulation component (UniPi, with UPMC, INRIA)
- WP5 Validation (UPMC, with all partners)
- WP6 Dissemination (UniNa, with all partners save TP)

38 PlanetLab Today
- A set of end-hosts
- A limited view of the underlying network
- Built on the wired internet

39 OneLab Vision for PlanetLab
- Reveal the underlying network
- Extend into new wired and wireless environments

40 Goal: Extend

41 Why Extend PlanetLab? Problem: PlanetLab nodes are connected to the traditional wired internet. They are mostly connected to high-performance networks such as Abilene, DANTE, NRENs. These are not representative of the internet as a whole. PlanetLab does not provide access to emerging environments.

42 OneLab’s New Environments
WiMAX (Université Catholique de Louvain):
- install two nodes connected via a commercial WiMAX provider
- nodes on trucks (constrained mobility)
UMTS (Università di Napoli, Alcatel Italia):
- nodes on a UMTS micro-cell run by Alcatel Italia
Wireless ad hoc networks (France Telecom at Lannion):
- nodes in a Wi-Fi mesh network (like ORBIT)

43 OneLab’s New Environments
Emulated (Università di Pisa):
- for emerging wireless technologies
- based on dummynet
Multihomed (Universidad Carlos III de Madrid)

44 Progress on Extension
Added wireless capabilities to the kernel:
- will enable nodes to attach via WiMAX, UMTS, or Wi-Fi
Implementing Shim6 multihoming:
- nodes connected via IPv6 will be able to choose their paths
Incorporating Wi-Fi emulation into dummynet:
- will allow experimentation in scenarios where deployment is difficult (other wireless technologies to follow)
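For context, dummynet is driven through FreeBSD's ipfw; a classic configuration that shapes all traffic through an emulated lossy link looks like the sketch below. The parameters are illustrative, and the OneLab Wi-Fi extensions add channel models beyond these basic knobs:

```shell
# Send all traffic through pipe 1 (requires root on FreeBSD)
ipfw add pipe 1 ip from any to any
# Emulate a 1 Mbit/s link with 30 ms delay and 2% packet loss
ipfw pipe 1 config bw 1Mbit/s delay 30ms plr 0.02
```

This is a configuration fragment for a stock dummynet, shown only to indicate the emulation mechanism being extended.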

45 Goal: Deepen Expose the underlying network

46 Why Deepen PlanetLab?
Problem: PlanetLab provides limited facilities to make applications aware of the underlying network.
- PlanetLab consists of end-hosts
- Routing between nodes is controlled by the internet (this will change with GENI)
- Applications must currently make their own measurements

47 OneLab Monitoring Components
Passive monitoring (Intel Research Cambridge):
- track packets at the routers
- use CoMo boxes placed within DANTE
Active monitoring (Université Pierre et Marie Curie):
- provide a view of the route structure
- increase the scalability of widely distributed traceroute
- reduce traceroute deficiencies on load-balanced paths (Paris traceroute)
- BGP-guided probing

48 Progress on Deepening
CoMo is now OneLab-aware and has better scripting:
- CoMo allows one to write scripts to track one's own packets as they pass measurement boxes within the network
Deploying a distributed topology-tracing system:
- made fundamental improvements to traceroute to correct errors introduced by network load balancing (new tool: Paris traceroute)

49 Goal: Federate Before: a homogeneous system

50 Goal: Federate After: a heterogeneous set of systems

51 Goal: Federate PlanetLab PlanetLab

52 Federation

53 Why Federate PlanetLab?
Problem: Changes to PlanetLab can come only through the administration at Princeton. PlanetLab in the US is necessarily focused on US funding agencies' research priorities.
- What if we want to study a particular wireless technology, and this requires changes to the source code?
- What if we wish to change the cost structure for small and medium size enterprises (currently $25,000/yr.)?

54 OneLab and Federation
OneLab will create a PlanetLab Europe. It will federate with PlanetLab in the US, Japan, and elsewhere, and eventually with “Private PlanetLabs” as well.
The federated structure will allow:
- PlanetLab Europe to set policy in accordance with European research priorities,
- PlanetLab Europe to customize the platform, so long as a common interface is preserved.

55 How to federate?
A Private PlanetLab might have a rare resource, e.g., a node behind a wireless link. What are the right incentives to encourage the Private PlanetLab to share the resource and to discourage over-subscription by other users?
A Private PlanetLab might wish to customize the source code. What are the right abstractions (APIs) that will allow both inter-operability and flexibility?

56 Federation
(figure: PLC A (PlanetLab Central) with sites A1 and A2; PLC B (PlanetLab Europe) with sites B1 and B2)
We have two PlanetLab Centrals, each with sites attached. PLC A is responsible for the green sites and PLC B for the red sites. Without federation, the sites of one PLC can only see the sites and nodes that are connected to their own PLC.

57 Federation
(figure: PLC A (PlanetLab Central) with sites A1 and A2; PLC B (PlanetLab Europe) with sites B1 and B2)
With federation, all sites will be able to create slices on sites connected to the federated PLC. If we then also federate B with a third PLC C, B will be able to see the sites of A, B, and C, whereas C will only be able to see the sites of B and C, and A only those of A and B.

58 Deployment of the Federation: Before OneLab
(figure: the UPMC site, the INRIA site, OneLab partner sites, other European sites, and all other sites attached to PlanetLab Central; federation test at boot.one-lab.org)
All sites are connected to the PlanetLab Central at Princeton. There is no federation.

59 Deployment of the Federation: Next steps
(figure: PlanetLab Central and PlanetLab Europe, with OneLab partner sites, the UPMC and INRIA sites, other European sites, and all other sites)
- Federate PlanetLab Europe with PlanetLab Central: we have installed the future PlanetLab Europe
- Federate Private OneLab with PlanetLab Europe
- Move our nodes to PlanetLab Europe: we should have at least 20 nodes on PlanetLab Europe to be taken seriously

60 Deployment of the Federation: Final goal
(figure: PlanetLab Central and PlanetLab Europe federated, with the European sites attached to PlanetLab Europe)
Find a way to move the other European sites to PlanetLab Europe:
- All newly created European sites are connected to PlanetLab Europe?
- Force all European sites to be connected to PlanetLab Europe at some point?

61 The Path to Federation
- Joint Access: administrators in Paris log in to machines at Princeton
- Management Authority: PlanetLab machines in Europe boot from Paris rather than from Princeton
- Slice Authority: Paris can create slices across PlanetLab, becoming a second global slice authority alongside Princeton
- Peering: with an agreed interface, the Paris and Princeton code bases can diverge

62 Some Federation Issues
A Private PlanetLab might have a rare resource, e.g., a node behind a wireless link. What are the right incentives to encourage the Private PlanetLab to share the resource and to discourage over-subscription by other users?
A Private PlanetLab might wish to customize the source code. What are the right abstractions (APIs) that will allow both inter-operability and flexibility?

63 Progress on Federation
Jointly developed PlanetLab v4 with Princeton:
- allows two or more PLCs (PlanetLab Centrals)
- each PLC maintains its own node database and receives updates from other PLCs regarding their nodes
- users can create a slice across the whole system by requesting it through their own PLC
Running an embryonic PlanetLab Europe, peered with the Princeton PLC.

64 OneLab: Operation and research
- Research project
- OneLab Private: private experimental PlanetLab for the project members
- PlanetLab Europe: European PLC administered by OneLab; an embryonic production testbed for Europe, federated with PLCs in the USA and elsewhere

65 Outline PlanetLab OneLab PlanetLab Europe Practice

66 PlanetLab Europe
Run by UPMC. Federation with Princeton has been tested once and will be permanent soon.

67 Objectives
- Set up a functional PlanetLab Central in Europe to manage European sites
- Create a Private PlanetLab testbed to deploy and test our new components
- Create a federation between PlanetLab Europe and PlanetLab Central at Princeton

68 Resource allocation and provisioning
Problem: many PlanetLab nodes are down or congested.
Needed: incentives for infrastructure/resource contributions (provisioning).
Question: how to allocate resources in case of congestion?

69 Resource allocation and provisioning
Existing solutions:
- simple rules like those existing on PlanetLab
- complicated virtual markets
Proposed: simple rules that relate contributions to allocations.
- Panayotis Antoniadis (expert on game theory and incentive mechanisms)
- Joint work with the GridEcon IST project
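A minimal sketch of a "simple rule relating contributions to allocations": under congestion, give each site a share of capacity proportional to what it contributed. The site names and numbers below are made up for illustration:

```shell
# Fabricated contributions: site, nodes contributed
cat > /tmp/contrib.txt <<'EOF'
site_a 2
site_b 3
site_c 5
EOF
# Allocate 100 capacity units proportionally to contributions
awk -v cap=100 '
  { c[$1] = $2; total += $2 }
  END { for (s in c) printf "%s %d\n", s, cap * c[s] / total }
' /tmp/contrib.txt | sort
```

With these inputs, site_a gets 20 units, site_b 30, and site_c 50. Proportional share is only one candidate rule; the project's actual mechanism design is an open research question.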

70 Components: server side
- Database schema
- Web UI
- API (XML-RPC)
- Boot server
- Netflow
- Build, monitoring, support

71 Components: node side
- Boot CD
- Boot Manager
- PlanetLab kernel
- vnet
- vserver
- netflow

72 Entities in the DB
- Sites
- Persons
- Keys
- Nodes
- Slices
- Slice attributes
- Implementation details (addresses, PDU, …)

73-75 Joining PlanetLab Europe (screenshots)

76 PlanetLab Europe Site creation
How to join? Connect to www.planet-lab.eu (or www.planet-lab.org) and fill in the “site registration” form.

77 PlanetLab Europe Site creation
Warning: some fields must be unique across all federated PLCs (see the PrivateDeploymentHowto page): a node's hostname, a user's e-mail address, a site's login base, and a slice's name must all be unique!

78 PlanetLab Europe Node creation

79 Node Boot/Install
Actors: node, Boot Manager, PLC (MA), Boot Server.
1. Boots from BootCD (Linux loaded)
2. Hardware initialized
3. Read network config from floppy
4. Contact PLC (MA)
5. Send boot manager
6. Execute boot manager
7. Node key read into memory from floppy
8. Invoke Boot API
9. Verify node key, send current node state
10. State = “install”, run installer
11. Update node state via Boot API
12. Verify node key, change state to “boot”
13. Chain-boot node (no restart)
14. Node booted
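The sequence above is essentially a small state machine driven by PLC. A hypothetical shell sketch of the install-then-boot transition (the function names and the PLC interaction are stand-ins, not the real Boot API):

```shell
NODE_KEY="example-key"   # step 7: read from the floppy
node_state="install"     # step 9: state reported by PLC

verify_key() {           # steps 9/12: PLC checks the node key
  [ "$1" = "$NODE_KEY" ]
}

boot_manager() {
  verify_key "$NODE_KEY" || { echo "bad node key"; return 1; }
  if [ "$node_state" = "install" ]; then
    echo "running installer"        # step 10
    node_state="boot"               # step 12: PLC flips the state
  fi
  echo "chain-booting (state=$node_state)"   # step 13, no restart
}

boot_manager
```

A node whose state is already "boot" skips the installer and chain-boots directly.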

80 Chain of Responsibility
- Join Request: PI submits Consortium paperwork and requests to join
- PI Activated: PLC verifies PI, activates account, enables site (logged)
- User Activated: users create accounts with keys, PI activates accounts (logged)
- Slice Created: PI creates slice and assigns users to it (logged)
- Nodes Added to Slices: users add nodes to their slice (logged)
- Slice Traffic Logged: experiments generate traffic (logged by PlanetFlow)
- Traffic Logs Centrally Stored: PLC periodically pulls traffic logs from nodes
Network activity can thus be traced from a slice back to its responsible users and PI.

81 MyPLC design
- Database server: primary information store (nodes, users, slices)
- API server: database frontend, programmatic interface
- Web server: API frontend, administration
- Boot server: software distribution
- Node: PlanetLab kernel, Node Manager

82 Outline PlanetLab OneLab PlanetLab Europe Practice

83 New User
Register at https://www.planet-lab.eu:443/db/persons/register.php
- To log in to PlanetLab computers, first register on the PlanetLab Europe joining-users page (select your site, enter your e-mail address and status). The PI of your site will confirm your account.
- Create an SSH private/public key pair with the ssh-keygen program: ssh-keygen -t rsa. By default, a private key named id_rsa and a public key named id_rsa.pub are generated in the .ssh/ directory in your home directory.
- After registration you will get an e-mail from your PI with your slice name: <site_name>_<username>.
- Upload your public key to your user account.
- Now you can log in to your PlanetLab account to verify your details (view your account information by clicking the "View Account" link).
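The key-generation step can be run non-interactively to see exactly which files are produced (here into a scratch directory, with an empty passphrase for demonstration only; use a real passphrase for your actual key):

```shell
KEYDIR=$(mktemp -d)
# Generate an RSA key pair; -N "" sets an empty passphrase (demo only),
# -q suppresses the interactive output
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
ls "$KEYDIR"                      # id_rsa (private), id_rsa.pub (public)
head -c 8 "$KEYDIR/id_rsa.pub"    # the public key line starts with its type
```

The contents of id_rsa.pub is the single line you paste into the key-upload form; the private key never leaves your machine.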

84 First steps with PlanetLab Europe
Logging into PlanetLab machines:
ssh -l <slicename> <machinename>
Executing a command:
ssh -l <slicename> <machinename> <command>
Installing software on a remote machine using ssh (to install a program with yum on a PlanetLab machine you have to be root):
ssh -n -t -l <slice_name> <machine_name> "su -c 'command'"

85 First steps with PlanetLab Europe
The first time you connect to any remote machine using ssh you will get a warning; that is because new host keys are added to ~/.ssh/known_hosts on the first connection.
If you have a host key that is out of date, you can delete the offending line of the ~/.ssh/known_hosts file that contains the stale key.
Other SSH resources: official releases for numerous Linux-based platforms; an SSH client for Windows.
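Rather than editing known_hosts by hand, ssh-keygen -R removes the stale entry for a host. A self-contained demonstration on a scratch file (the hostname is fictitious, and a freshly generated key stands in for the stale one):

```shell
TMP=$(mktemp -d)
# Make a real host key so the known_hosts line is well-formed
ssh-keygen -t rsa -N "" -f "$TMP/hostkey" -q
echo "planetlab1.example.org $(cut -d' ' -f1,2 "$TMP/hostkey.pub")" > "$TMP/known_hosts"
# Remove the (now "stale") entry for that host; a .old backup is kept
ssh-keygen -R planetlab1.example.org -f "$TMP/known_hosts"
grep -c "planetlab1" "$TMP/known_hosts" || echo "entry removed"
```

For your real file, drop the -f option and ssh-keygen operates on ~/.ssh/known_hosts.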

86 Copying files to/from PlanetLab machines
Using scp to copy files from/to a PlanetLab machine:
scp <filename1> <slicename>@<machinename>:<filename2>
filename1 and filename2 can each be a file or a directory.
Using rsync to copy a directory tree to/from a PlanetLab machine: rsync is a utility that synchronizes two directories to the same content. The rsync example copies the full content of the directory ~/New_trace into the remote directory ~/PL/New_trace on the remote machine planet1.cs.huji.ac.il.

87 Installing packages
Since the PlanetLab installation is minimal, applications like man, make, gzip, tar, etc. are not installed.
Installing packages using yum:
curl <bootstrap-script URL> | bash
yum -y install man gzip less gcc rpm-build
Installing packages using apt-get: you can install any rpms using apt-get (freshrpms.net).

88 Compiling your software on PlanetLab
It is not recommended to compile your software on a PlanetLab machine, since the machines are already heavily loaded.
- Non-recommended (but possible) option: using make on the node
- A better option: install a local PlanetLab node (install a Fedora Core 2 machine)

89 OneLab future(s)
- Support for SAC Projects & others
- Federation
- Monitoring
- Wireless
- Emulation
- Operation & Management
- Virtualization
- Sensors
- Optical
- Content
- IPR, Legal, Economics, …
- Benchmarking
- Archiving

90 From PlanetLab To OneLab
See also: the COST ARCADIA activity; the COST/NSF Workshop, April, Berlin.

91 Evolution for PlanetLab Slices
(figure: maturity vs. time, from Foundational Research through Simulation and Research Prototypes and Small Scale Testbeds to the Deployed Future Internet; the chasm between testbeds and deployment is a major barrier to realizing the Future Internet)
(speaker note: this slide can communicate that CISE has been funding research and planning grants and we are ready to undertake building of the global facility)

92 How to virtualize wireless?
We understand how to virtualize an end-host: time-share a CPU.
We understand how to virtualize a wired link: MPLS for, e.g., VPNs; GENI will be doing this.
It is less certain how to virtualize a wireless link:
- there is interference
- one must make power and channel choices
- what if the node is poorly connected?

93 Secure, scalable measurements?
Security:
- packet traces are highly sensitive
- how can an application track only its own packets?
- how can network providers set their own access policies for a common infrastructure?
Scalability:
- can active measurements be mutualized among applications?
- how can information be shared among nodes to reduce measurement redundancy?

