1 Challenges in Moving (slowly) Towards Production Mode
Jon MacLaren
GHPN-RG at OGF20, Manchester, UK, 9 May 2007
An NSF seed-funded project

2 EnLIGHTened Introduction
- Network research, driven by concrete application projects, all of which critically require progress in network technologies and in the tools that utilize them.
- EnLIGHTened testbed: 10 Gbps optical networks running over NLR and the Louisiana Optical Network Initiative (LONI), connected via four all-photonic Calient switches, all using GMPLS control-plane technologies.
- Global alliance of partners.
- Will develop, test, and disseminate advanced software and underlying technologies to:
  – provide generic applications with the ability to be aware of their network, Grid environment, and capabilities, and to make dynamic, adaptive, and optimized use (monitor & abstract, request & control) of the networks connecting various high-end resources;
  – provide vertical integration from the application down to the optical control plane, including extending GMPLS.
- Will examine how to distribute the network intelligence among the network control plane, the management plane, and the Grid middleware.

3 EnLIGHTened testbed connectivity diagram, with partners
[Map of the testbed: the EnLIGHTened wave (Cisco/NLR), the Cisco/UltraLight wave, the LONI wave, and the San Diego CAVE wave, spanning NLR points of presence across the US, with links to Asia, Canada, and Europe, plus VCL @ NCSU.]
Members: MCNC GCNS, LSU CCT, NCSU, RENCI
Official Partners: AT&T Research, SURA, NRL, Cisco Systems, Calient Networks, IBM
NSF Project Partners: OptIPuter, UltraLight, DRAGON, Cheetah
International Partners: Phosphorus (EC), G-lambda (Japan), GLIF

4 [Architecture diagram showing: applications and workflow engines above an application abstraction layer (API); policy functions ("translate app request to policy", "policy for SLA monitoring"); a co-scheduler, resource manager, and resource allocation; resource monitoring, discovery, and performance; edge routers; and a policy feedback loop tying monitoring back to the abstraction layer.]

5 EnLIGHTened to extend to PHOSPHORUS Testbed

6 HARC: Highly Available Resource Co-allocator
- Extensible, open-source co-allocation system.
- Can already reserve:
  – time on supercomputers (advance reservation), and
  – dedicated paths on GMPLS-based networks with simple topologies.
- Uses Paxos Commit to reserve multiple resources atomically, while providing a highly available service (see the sketch after this slide).
- Used to coordinate bookings across the EnLIGHTened and G-lambda testbeds in the largest demonstration of its kind to date (more later).
- Used for setting up the network for Thomas Sterling's HPC class, which goes out live in HD (more later).
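To make the atomic-reservation idea concrete, here is a minimal sketch of the transaction pattern HARC relies on: every resource manager is asked to tentatively hold its piece of the booking, and the whole reservation commits only if all of them say yes. This is not HARC's actual code or API, and it collapses Paxos Commit to a plain two-phase decision in a single process, whereas the real protocol replicates the votes across several acceptors so that no single coordinator failure can block the outcome. All names (resource_t, co_allocate, the slot string) are illustrative.

    #include <stdbool.h>
    #include <stdio.h>

    /* One resource manager (e.g., a supercomputer scheduler or an NRM).
     * prepare() tentatively holds the reservation; commit()/abort()
     * finalize or release it.  All names here are hypothetical. */
    typedef struct {
        const char *name;
        bool (*prepare)(const char *slot);  /* vote: can you hold this slot? */
        void (*commit)(const char *slot);
        void (*abort)(const char *slot);
    } resource_t;

    /* Two-phase atomic co-allocation.  Paxos Commit uses the same
     * commit rule, but records the votes with a replicated set of
     * acceptors so the decision survives coordinator failure. */
    static bool co_allocate(resource_t *rs, int n, const char *slot)
    {
        int prepared = 0;
        for (; prepared < n; prepared++)
            if (!rs[prepared].prepare(slot))
                break;                       /* one "no" vote aborts all */

        if (prepared == n) {
            for (int i = 0; i < n; i++)
                rs[i].commit(slot);
            return true;                     /* all resources booked */
        }
        for (int i = 0; i < prepared; i++)
            rs[i].abort(slot);               /* release tentative holds */
        return false;
    }

    static bool hold_ok(const char *slot) { printf("prepare %s: yes\n", slot); return true; }
    static void do_commit(const char *slot) { printf("commit  %s\n", slot); }
    static void do_abort(const char *slot)  { printf("abort   %s\n", slot); }

    int main(void)
    {
        resource_t rs[] = {
            { "supercomputer (advance reservation)", hold_ok, do_commit, do_abort },
            { "lightpath (GMPLS network)",           hold_ok, do_commit, do_abort },
        };
        bool ok = co_allocate(rs, 2, "slot-2007-05-09");
        printf("co-allocation %s\n", ok ? "succeeded" : "failed");
        return 0;
    }

The availability property is the reason for Paxos Commit rather than classic two-phase commit: with 2F+1 acceptors recording the votes, the co-allocation can still complete even if F of them fail.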

7 Network Components

8 NRM View of Network

9 Scheduling a link
- Scheduling is only one part of the puzzle.
- Input is a request to make a connection from one endpoint to another, e.g. RA1 to BT2.
- The NRM "decides" the path that this connection will use; currently it does this by looking up the path in a table.
- It sends the Explicit Route Object (ERO) as part of the TL1 command, so the middleware is deciding the path, not GMPLS.
- XML request snippet (markup lost in this transcript; only the values RA1, BT2, and 10240 survive) – see the reconstruction after this slide.
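The slide's XML snippet lost its tags in transcription, leaving only the values: source endpoint RA1, destination endpoint BT2, and a bandwidth of 10240 (presumably Mbps, i.e. a 10 Gbps path). A plausible reconstruction is shown below; the element names are hypothetical, not the actual NRM schema:

    <reservationRequest>
      <source>RA1</source>
      <destination>BT2</destination>
      <bandwidth>10240</bandwidth>  <!-- assumed to be Mbps -->
    </reservationRequest>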

10 Resource Map
[Map of the combined G-lambda/EnLIGHTened testbed: the Japan North and Japan South domains (sites including KMF, FUK, KHN, AKB, KAN, TKB, and OSA, the Santaka machine, and resources UR1–UR3 and UO1–UO4 behind switches X1/X2) linked to the US domain (RA1 at MCNC; BT1, BT2, BT3, the Pelican cluster, and a viz machine at LSU; CH1 at SL; VC1 at NCSU; LA1 at Caltech), with link capacities of 2, 4, and 5 Gbps marked. Credit: Tomohiro Kudoh]

11 EL-GL Middleware Interoperability
[Interoperability diagram. Credit: Tomohiro Kudoh]
GL: G-lambda
EL: EnLIGHTened Computing
CRM: Compute Resource Manager
HARC: Highly-Available Resource Co-allocator
GNS-WSI: Grid Network Service – Web Services Interface
NRM: Network Resource Manager

12 RNDS Display of G-lambda Reservations (credit: Tomohiro Kudoh)

13 RNDS Display of EnLIGHTened Reservations (credit: Tomohiro Kudoh)

14 G-Lambda and EnLIGHTened GMPLS E-NNI Demonstrations
- Collaborative effort between NTT, KDDI Research, and EnLIGHTened Computing.
  – Goal: to investigate the potential for interdomain provisioning.
- GLIF 2006 and SC06: demonstrated single-vendor interoperation between Japan North (KDDI Research) and EnLIGHTened.
  – Automated, simultaneous, in-advance reservation of network bandwidth between the US and Japan, and of computing resources in the US and Japan.
  – World's first inter-domain coordination of resource managers for in-advance reservation.
  – The resource managers have different interfaces and are independently developed.
- December 2006: tested three-domain, multi-vendor provisioning between Japan South (NTT) and EnLIGHTened, with Japan North as the transit domain.
- GL's GNS-WSI sits between the Grid Resource Scheduler (GRS) and the Network Resource Manager (NRM).
- EL's Highly-Available Resource Co-allocator (HARC) uses Paxos Commit to reserve resources.

15 [Demonstration architecture diagram: in Japan, the KDDI NRM and NTT NRM, plus CRMs fronting compute clusters; in the US, the EL NRM and CRMs fronting further clusters. An MPI application and a visualization application each request network bandwidth and computers, reserved in advance ("from xx:xx to yy:yy").]

16 HD Class
- Spring 2007 HPC class; Thomas Sterling, LSU, instructor.
- Students at LSU (2 sites), LA Tech, University of Arkansas, and Masaryk University.
- Simulcast as HD videoconferencing, Access Grid, QuickTime streaming (NCast), WebEx (graphics only), and iChat (text only, for inbound messages).
- Broadcast streams:
  – 1:N HD video broadcast from LSU (each stream 1.5 Gbps);
  – N:1 HD video broadcast to LSU;
  – N:N audio distribution.
- Uses EnLIGHTened middleware to build and tear down the network twice weekly.
- System developed as a collaboration between CCT/LSU and Masaryk University (Brno, Czech Republic).

17 HD Video over IP Transmission System
- Packetization specified in RFC 4175 ("RTP Payload Format for Uncompressed Video"):
  – encapsulation: payload/RTP/UDP/IP;
  – augments the common RTP headers with payload headers (e.g., an extended sequence number, needed because the 16-bit RTP counter wraps around fast – every ~0.5 s for HD-SDI); see the layout sketch after this slide.
- Reasonable to use jumbo frames (best > 8500 B):
  – decreases packetization size overhead;
  – decreases host load by reducing the number of packets per second.
- Linux-based implementation, derived from UltraGrid by Perkins & Gharai:
  – extended to support full-HD 1080i;
  – support for software display, including color-space down-sampling and field de-interlacing (assembly-optimized for AMD);
  – other enhancements.
- Used with DVS Centaurus HD capture cards:
  – problems with latency, since the card doesn't support DMA and requires buffering at least 4 fields for reliable operation;
  – quite expensive;
  – there are other cards, but they are not supported in Linux :(
- End-to-end (camera-to-display) latency: 175±5 ms.
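For context on those payload headers: RFC 4175 prepends a 16-bit extended sequence number (combined with the RTP sequence number this gives an effective 32-bit packet counter, which is what defeats the 0.5 s wrap-around) and a 6-byte header for each scan-line segment carried in the packet. Below is a sketch of the layout in C, for illustration only; real implementations such as UltraGrid pack and byte-swap these fields explicitly rather than mapping structs straight onto the wire.

    #include <stdint.h>
    #include <stdio.h>

    /* RFC 4175 payload header, which follows the 12-byte RTP header.
     * All fields are big-endian on the wire. */
    struct rfc4175_payload_hdr {
        uint16_t ext_seq_no;  /* high 16 bits of a 32-bit sequence number
                                 (the low 16 bits are the RTP seq. number) */
    };

    /* One of these per scan-line segment carried in the packet. */
    struct rfc4175_line_hdr {
        uint16_t length;      /* bytes of pixel data in this segment */
        uint16_t line_no;     /* bit 15: F (field, for interlaced video);
                                 bits 14..0: scan-line number */
        uint16_t offset;      /* bit 15: C (another line header follows);
                                 bits 14..0: first pixel of the segment */
    };

    int main(void)
    {
        /* A receiver orders packets by (ext_seq_no << 16) | rtp_seq and
         * places pixels using (line_no, offset, length). */
        printf("payload hdr: %zu B, line hdr: %zu B\n",
               sizeof(struct rfc4175_payload_hdr),
               sizeof(struct rfc4175_line_hdr));
        return 0;
    }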

18 HDTV Video Distribution
- Based on UDP packet reflectors:
  – designed as a user-empowered solution;
  – relatively scalable with respect to the number of users.
- Simple design for HDTV video distribution (sketched after this slide):
  – read from a network socket, write to the other network sockets, in a loop;
  – optimizations for 1.5 Gbps streams: reduced per-packet overhead (fewer system calls per packet);
  – can multiply the 1.5 Gbps stream for 4 users (limited by the network-card performance);
  – latency increase as low as 13 ms.
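A minimal version of that read-one/write-N loop might look like the sketch below. This is illustrative only: the port number and client addresses are made up, the real reflector manages clients dynamically, and the per-packet system-call reductions the slide mentions (e.g. batching with recvmmsg/sendmmsg on Linux) are omitted here, where each packet costs one recvfrom plus one sendto per client.

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    int main(void)
    {
        /* Receive the HD stream on a hypothetical example port. */
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        struct sockaddr_in in_addr;
        memset(&in_addr, 0, sizeof in_addr);
        in_addr.sin_family = AF_INET;
        in_addr.sin_port = htons(5004);
        in_addr.sin_addr.s_addr = htonl(INADDR_ANY);
        if (bind(s, (struct sockaddr *)&in_addr, sizeof in_addr) < 0) {
            perror("bind");
            return 1;
        }

        /* Hypothetical, static client list. */
        const char *clients[] = { "10.0.0.2", "10.0.0.3" };
        enum { NCLIENTS = 2 };
        struct sockaddr_in out[NCLIENTS];
        for (int i = 0; i < NCLIENTS; i++) {
            memset(&out[i], 0, sizeof out[i]);
            out[i].sin_family = AF_INET;
            out[i].sin_port = htons(5004);
            inet_pton(AF_INET, clients[i], &out[i].sin_addr);
        }

        /* Core loop: one read, N writes.  The buffer is sized for the
         * jumbo-frame packets (> 8500 B) recommended in the talk. */
        char buf[9000];
        for (;;) {
            ssize_t n = recvfrom(s, buf, sizeof buf, 0, NULL, NULL);
            if (n <= 0)
                continue;
            for (int i = 0; i < NCLIENTS; i++)
                sendto(s, buf, (size_t)n, 0,
                       (struct sockaddr *)&out[i], sizeof out[i]);
        }
    }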

19 HDTV Video Class Advantages
- Low network latencies. Measurements based on ICMP ping averages (round-trip) with 8500 B packets:
  – LSU – StarLight: 30.631 ms
  – StarLight – Masaryk University: 115.481 ms
  – LSU – Masaryk University: 145.720 ms
  (so the one-way LSU–Masaryk delay is roughly 73 ms, comfortably inside the 175 ms camera-to-display budget quoted earlier).
- Multi-way interaction.
- Quality far exceeds Access Grid: "you can see the students' facial expressions".

20 HDTV Video Distribution

21 Configuration Changes...

22 The EnLIGHTened Team
Yufeng Xin, Steve Thorpe, Bonnie Hurst, Joel Dunn, Gigi Karmous-Edwards, Mark Johnson, John Moore, Carla Hunt, Lina Battestilli, Andrew Mabe, Ed Seidel, Gabrielle Allen, Seung Jong Park, Jon MacLaren, Andrei Hutanu, Lonnie Leger, Dan Katz, Savera Tanwir, Harry Perros, Mladen Vouk, Javad Boroumand, Russ Gyurek, Wayne Clark, Kevin McGrattan, Peter Tompsu, Olivier Jerphagnon, John Bowers, Rick Schlichting, John Strand, Matti Hiltunen, Steven Hunter, Dan Reed, Alan Blatecky, Chris Heermann, Yang Xia, Xun Su

