
1 Supercharged Planetlab Platform
Jon Turner, Applied Research Lab, Computer Science & Engineering, Washington University
www.arl.wustl.edu
GENI Experimenters' Workshop, 6/2010

2 Shameless Plug
This talk is just a teaser.
To really learn how to use the SPPs, come to our tutorial at GEC-8 (Thursday, 7/22)
»more detailed presentations
»live demonstrations
»supervised hands-on session
If you can't do that
»read the online tutorial at wiki.arl.wustl.edu/spp
»get an account at spp_plc.arl.wustl.edu
»start playing

3 SPP Nodes
SPP is a high-performance overlay hosting platform, designed to be largely compatible with PlanetLab.
How it's different from PlanetLab
»multiple processors, including an NP blade for slice fastpaths
»multiple 1 GbE interfaces
»support for advance reservation of interface bandwidth and NP resources
[Diagram: chassis switch linking the Control Processor (CP), General Purpose Processing Engines (GPE), Network Processing Engine (NPE), and Line Card, with 10x1 GbE to an external switch and net FPGA]

4 SPP Deployment in Internet2
[Map: SPP nodes deployed at Internet2 sites]
Two more nodes (Houston and Atlanta) later this year.

5 Washington DC Installation

6 Details
[Diagram: I2 optical connections linking the SALT, KANS, and WASH SPP nodes; ProtoGENI and I2 Internet Service reached via an I2 router; SPP_PLC at spp_plc.arl.wustl.edu]

7 Hosting Platform Details
[Diagram: chassis switch linking CP, external switch/net FPGA (10x1 GbE), GPE, NPE, and Line Card. The General Purpose Processing Engine runs PLOS and VMs; the Line Cards apply filters; the Network Processing Engine implements the parse, lookup, header format, and queue stages.]

8 Key Control Software Components
System Resource Manager (SRM)
»runs on the Control Processor
»retrieves slice definitions from SPP-PLC
»manages all system-level resources and reservations
Resource Management Proxy (RMP)
»runs on GPEs (in the root vServer)
»provides an API for user slices to access resources
Slice Configuration tool (scfg)
»command-line interface to the RMP
Substrate Control Daemon (SCD)
»runs on the NPE management processor
»supports NPE resource configuration and statistics reporting

9 Application Framework
Fastpath/slowpath
»fastpath mapped onto the NPE
»control daemon in a vServer on the GPE
Configurable elements
»code option determines how packets are processed by the parse and header format stages
»fastpath interfaces map to physical interfaces with provisioned bandwidth
»TCAM filters
»queues with configurable length and bandwidth
The control daemon can configure the fastpath through the RMP
»or users can configure it manually with scfg
[Diagram: fastpath pipeline of parse, lookup (with filters), header format, and queue manager between input and output interfaces; a control interface connects to the control daemon in a vServer on the GPE, which also provides a remote login interface and handles exception packets and in-band control]

10 Working with SPPs
Define a new slice using SPP-PLC
»just like PlanetLab
Log in to the SPPs in your slice to install application code.
Reserve the resources needed for your experiment
»includes interface bandwidth on external ports and NPE fastpath resources
To run an experiment during the reserved period
»"claim" the reserved resources
»set up slowpath endpoints
»configure the fastpath (if applicable)
»set up real-time monitoring
»run the application and start traffic generators
A condensed shell sketch of this workflow follows.
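
As a rough illustration, here is a minimal shell sketch of that workflow, assembled from the scfg and sliced invocations shown later in this deck (slides 26-27); the reservation file, addresses, and port numbers are placeholders.

#!/bin/bash
# 1. Reserve resources in advance; res_file.xml describes the interface
#    bandwidth being reserved (see slide 26 for the script that builds it).
scfg --cmd make_resrv --xfile res_file.xml

# 2. At the start of the reserved period, claim what was reserved.
scfg --cmd claim_resources

# 3. Set up a slowpath endpoint (placeholder address/port; proto 17 = UDP).
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123

# 4. Start the monitoring daemon so SPPmon can connect (placeholder address).
sliced --ip 64.57.23.182 &

# 5. Run your application and start traffic generators.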

11 Web Site and SPP_PLC
http://wiki.arl.wustl.edu/index.php/Internet_Scale_Overlay_Hosting
http://wiki.arl.wustl.edu/index.php/The_SPP_Tutorial
spp_plc.arl.wustl.edu

12 Creating a Slice
[Diagram: SPP_PLC delivers the slice definition to the SRM on the Control Processor; chassis switch links CP, external switch/net FPGA, GPE, NPE, and Line Card]

13 Preparing a Slice
Logging into the slice; SFTP connection for downloading code (example below).
Requires Network Address Translation
»the datapath detects the new connection, and the LC control processor adds filters
[Diagram: chassis switch linking CP, external switch/net FPGA, GPE (with SLM), NPE, and Line Card]
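
For illustration only: logging in and copying code up works as it does on PlanetLab, i.e., ssh/sftp as the slice user. The slice name below is a placeholder; the node hostname format follows slide 31.

# Log in to an SPP node as your slice (placeholder slice name).
ssh my_slice@sppkans1.arl.wustl.edu

# Copy application code into the slice's vServer.
sftp my_slice@sppkans1.arl.wustl.edu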

14 Configuring a Slowpath Endpoint
●Request an endpoint with the desired port number on a specific interface through the Resource Manager Proxy (RMP); see the example below
●RMP relays the request to the System Resource Manager (SRM)
●SRM configures LC filters for the interface
●Arriving packets are directed to the slice, which is listening on a socket
[Diagram: chassis switch linking CP (SRM), GPE (RMP), NPE, LC, and external switch/net FPGA]
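
Concretely, a slice requests an endpoint with scfg (or the equivalent C/C++ API call). This example is lifted from the setup script on slide 27; the bandwidth, address, and port values are placeholders.

# Bind a UDP (proto 17) slowpath endpoint on the given interface address.
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123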

15 Setting Up a Fast Path
●Request a fastpath through the RMP
●SRM allocates the fastpath
●Specify logical interfaces and interface bandwidths
●Specify the number of filters, queues, binding of queues to interfaces, and queue lengths and bandwidths
●Configure the fastpath filters (sketch below)
[Diagram: chassis switch linking CP (SRM), GPE (RMP), NPE (parse, lookup, header format, queues), LC, and external switch/net FPGA]
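
Only the command names for fastpath setup appear in this deck (slide 17); a hedged sketch of the sequence follows, with flags deliberately elided since they are not shown here. Consult the online tutorial at wiki.arl.wustl.edu/spp for the actual options.

# Command names from slide 17; options elided because this deck does not show them.
scfg --cmd setup_fp_tunnel ...   # bind a fastpath logical interface to a tunnel
ip_fpc ...                       # configure filters/queues on an IPv4 fastpath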

16 Displaying Real-Time Data
The fastpath maintains traffic counters and queue lengths.
To display traffic data
»configure an external TCP port
»run sliced within your vServer, using the configured port:
sliced --ip 64.57.23.194 --port 3552 &
»on a remote machine, run SPPmon.jar and use the provided GUI to set up monitoring displays
SPPmon configures sliced, which polls the NPE running the fastpath; the SCD reads and returns counter values to sliced.
Can also display data written to a file within your vServer.
[Diagram: SPPmon on a remote machine talks to sliced on the GPE, which queries the SCD on the NPE; CP and Line Card also shown]
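
Putting the two halves together: the sliced line is taken from this slide, while launching the jar with java -jar is an assumption about how SPPmon is started (the deck does not show it).

# On the SPP, inside your vServer: start the monitoring daemon on the
# configured external TCP port.
sliced --ip 64.57.23.194 --port 3552 &

# On your remote machine: launch the SPPmon GUI and point it at that
# address and port (java -jar launch is assumed).
java -jar SPPmon.jar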

17 Command Line Tools
scfg – general slice configuration tool (example below)
»scfg --cmd make_resrv (cancel_resrv, get_resrvs, ...)
»scfg --cmd claim_resources
»scfg --info get_ifaces (...)
»scfg --cmd setup_sp_endpoint (setup_fp_tunnel, ...)
»same features (and more) available through a C/C++ API
sliced – monitors traffic and displays it remotely
ip_fpd – fastpath daemon for use with the IPv4 fastpath
»handles exception packets (e.g., ICMP ping)
»can be extended to add features (e.g., a bandwidth reservation option)
ip_fpc – fastpath configuration tool for the IPv4 fastpath
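
For example, a typical reservation lifecycle with the commands above; the make_resrv and claim_resources lines are taken verbatim from slides 26-27, while the flags (if any) for get_resrvs and cancel_resrv are not shown in this deck.

scfg --cmd make_resrv --xfile res_file.xml   # reserve resources in advance (slide 26)
scfg --cmd get_resrvs                        # list reservations (flags, if any, not shown here)
scfg --cmd claim_resources                   # claim at the start of the reserved period
scfg --cmd cancel_resrv ...                  # or cancel; flags not shown in this deck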

18 Forest Overlay Network
An overlay for real-time distributed applications
»large online virtual worlds
»distributed cyber-physical systems
Large distributed sessions
»endpoints issue periodic status reports
»and subscribe to dynamically changing sets of reports
»requires real-time, non-stop data delivery, even as the communication pattern changes
Per-session provisioned channels (comtrees)
»unicast data delivery with route learning
»dynamic multicast subscriptions using a multicast core
[Diagram: overlay routers, clients, and servers joined by access connections; a comtree spans the overlay]

19 Unicast Addressing and Forwarding
Every comtree has its own topology and routes.
Two-level unicast addresses
»zip code and endpoint number
Nodes with the same zip code form a subtree
»all nodes in "foreign" zips are reached through the same branch
Unicast routing (see the toy sketch below)
»table entry for each foreign zip code and local endpoint
»if there is no route table entry, broadcast and set the route-request flag
»the first router with a route responds
[Diagram: example comtree with zip codes 1 through 5 and endpoint address 3.1]
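
A toy bash sketch of that two-level lookup, illustrative only: this is not Forest router code, and the link names and table contents are made up.

#!/bin/bash
# Toy illustration of Forest's two-level unicast lookup.
# Addresses are zip.endpoint; this router is in zip 3.
myzip=3
declare -A zipRoute=( [1]="link-a" [2]="link-a" )      # routes to foreign zips
declare -A epRoute=( [3.1]="link-b" [3.2]="link-c" )   # routes to local endpoints

forward() {
  local dst=$1 zip=${1%%.*}
  if [[ $zip != "$myzip" ]]; then
    # foreign zip: one route entry per zip code
    echo "${zipRoute[$zip]:-broadcast, route-request flag set}"
  else
    # local zip: one route entry per endpoint
    echo "${epRoute[$dst]:-broadcast, route-request flag set}"
  fi
}

forward 1.7   # -> link-a (matched on foreign zip 1)
forward 3.2   # -> link-c (matched on local endpoint 3.2)
forward 4.1   # -> broadcast, route-request flag set (no entry for zip 4)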

20 Multicast Routing
Flat addresses
»on a per-comtree basis
Hosts subscribe to multicast addresses to receive packets.
Each comtree has a "core" subtree
»multicast packets go to all core routers
»subscriptions propagate packets outside the core
Subscription processing (sketched below)
»propagate requests towards the first core router
»stop if an intermediate router is already subscribed
»can subscribe/unsubscribe to many multicasts at once
Core size can be configured to suit the application.
[Diagram: core subtree with subscriptions propagating towards the core]
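
A toy sketch of that subscription walk, again illustrative only; the router names and topology are made up.

#!/bin/bash
# Each router remembers which multicasts it already forwards; a subscribe
# request walks toward the core and stops at the first subscribed router.
declare -A parent=( [r4]=r2 [r5]=r2 [r2]=r1 )   # r1 is a core router
declare -A subscribed=()                         # key: "router:multicast"

subscribe() {
  local r=$1 m=$2
  while :; do
    if [[ -n ${subscribed[$r:$m]} ]]; then
      echo "stop at $r (already subscribed to $m)"; return
    fi
    subscribed[$r:$m]=1
    echo "$r subscribes to multicast $m"
    [[ -z ${parent[$r]} ]] && { echo "reached core at $r"; return; }
    r=${parent[$r]}
  done
}

subscribe r4 17   # propagates r4 -> r2 -> r1 (core)
subscribe r5 17   # stops at r2, which is already subscribed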

21 Forest Packet Format
Encapsulated in an IP/UDP packet
»the IP (addr, port#) pair identifies the logical interface to the Forest router
Types include
»user data packets
»multicast subscribe/unsubscribe
»route reply
Flags include
»unicast route request
[Header layout: ver, length, type, flags, comtree, src adr, dst adr, header err check, payload (≤1400 bytes), payload err check]

22 Basic Demo Setup
[Diagram: WASH, KANS, and SALT SPP nodes with internal 10.1.*.* links and public 64.57.23.* addresses; Forest routers 1.100, 2.100, and 3.100; PlanetLab nodes hosting endpoints 1.1-1.3, 2.1-2.3, and 3.1-3.3; a laptop controls the demo]

23 Simple Unicast Demo
Uses one host at each router.
Single comtree with root at KANS.
Host 1.1 sends to 2.1 and 3.1
»packet to 3.1 is flagged with route-request and "flooded"
»router 2.100 responds with a route-reply
Host 2.1 sends to 1.1 and 3.1
»already has routes to both, so no routing updates needed
Host 3.1 sends to 1.1 and 2.1
»packet to 1.1 is flagged with route-request and "flooded"
»router 2.100 responds with a route-reply
[Diagram: comtree spanning subtrees 1.*, 2.*, 3.*]

24 [Demo screenshots: a script launches the Forest routers and hosts; router logs show 2.100 receiving a flagged packet from 1.100 to 3.1, forwarding it to 3.1, and sending a route-reply; new routes are installed from 1.100 (at SALT) to 3.0 (at WASH) and from 3.100 (at SALT) to 1.0 (at WASH); the UserData chart displays data from the stats files on the GPEs]

25 Setting up & Running the Demo
Prepare Forest router configuration files
»config info for Forest links, comtrees, routes, and statistics
Prepare/save the SPPmon config, specifying the charts.
Reserve SPP resources
»bandwidth on four external interfaces (one for sliced)
Start the session
»claim the reserved resources
»set up communication endpoints for the router's logical interfaces and for sliced to report statistics
»start sliced on the SPP and SPPmon on the laptop
Start Forest routers & hosts, then observe the traffic
»done remotely from the laptop, using a shell script (slides 26-28)

26 Reservation Script
On the SPP, execute: reservation 03151000 03152200
(the arguments are the reservation start and end times, mmddhhmm, GMT)

#!/bin/bash
# copy the reservation to a file; the elided XML body reserves
# interface bandwidth (4 such entries, one per interface)
cat >res_file.xml <<foobar
...
foobar
# invoke scfg on the reservation file
scfg --cmd make_resrv --xfile res_file.xml

27 Setup Script
On the SPP, execute: setup

#!/bin/bash
# claim reserved resources
scfg --cmd claim_resources
# configure interfaces, binding port numbers
# (first two are interfaces to the other SPPs)
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.1.1 --proto 17 --port 30123
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 10.1.3.1 --proto 17 --port 30123
# "public" interface
scfg --cmd setup_sp_endpoint --bw 10000 --ipaddr 64.57.23.186 --proto 17 --port 30123
# interface for traffic monitoring
scfg --cmd setup_sp_endpoint --bw 2000 --ipaddr 64.57.23.182 --proto 6 --port 3551
# run monitoring daemon
cat stats
sliced --ip 64.57.23.182 &

28 Run Demo
On the laptop, run the script fdemo1:

#!/bin/sh
tlim=50       # time limit for hosts and routers (sec)
dir=fdemo1    # directory in which code is executed
...
# public ip addresses used by forest routers
r1ip=64.57.23.218
...
# names and addresses of planetlab nodes used as forest hosts
h11host=planetlab6.flux.utah.edu
h11ip=155.98.35.7
...
ssh ${sppSlice}@${salt} fRscript ${dir} 1.100 ${tlim} &
...
sleep 2
ssh ${plabSlice}@${h11host} fHscript ${dir} ${h11ip} ${r1ip} 1.1 ${rflg} ${minp} ${tlim} &
...

29 Basic Multicast Demo
Uses four hosts per router
»one source, three receivers
A comtree centered at each Forest router
»each source sends to a multicast group on each comtree
Phase 1 – uses comtree 1
»each receiver subscribes to multicast 1, then 2 and 3, then unsubscribes, with a 3-second delay between changes
»receivers are all offset from each other
Phases 2 and 3 are similar
»use comtrees 2 and 3, and so a different topology
[Diagram: comtree spanning subtrees 1.*, 2.*, 3.*]

30 [Demo screenshots: a script launches the Forest routers and hosts; all hosts subscribe in turn to three multicasts on comtree 1; charts show SALT-to-KANS and SALT-to-WASH traffic]

31 Logical Topology and Addresses
[Diagram: logical topology of the three SPP nodes with internal 10.1.*.* link addresses and public 64.57.23.* addresses]
sppsalt1.arl.wustl.edu
sppkans1.arl.wustl.edu
sppwash1.arl.wustl.edu

