Office of Science U.S. Department of Energy
ESCC Meeting, July 21-23, 2004
Network Research Program Update
Thomas D. Ndousse, Program Manager
Mathematical, Informational, and Computational Sciences (MICS) Division

Office of Science U.S. Department of Energy
Program Goals / What's New
New SciDAC and MICS network research projects:
1. Ultra-Science Network Testbed – base funding
2. ESnet MPLS Testbed – base funding
3. Application pilot projects (fusion energy, biology, physics)
4. GMPLS control plane
5. GridFTP Lite (Generalized File Transfer Protocol)
6. Transport protocols for switched dedicated links
7. Cyber security: IDS and group security
8. Data grid wide-area network monitoring for LHC
Gap: network-enabled storage systems
Leadership-class national supercomputer
Budget reduction in FY-04 and FY-05
SC Network PI meeting in late September 2004

Office of Science U.S. Department of Energy
Revised Program Focus
Previous focus:
- R&D on fundamental networking issues
- Single investigators and small groups of investigators
- Limited emphasis on technology transfer and integration
- Limited emphasis on network, middleware, and application integration
New focus:
- Applied research, engineering, and testing
- Experimental networking using the UltraNet and MPLS testbeds
- Integrated application, middleware, and network prototypes
- Leadership-class supercomputing:
  - Impact on network research
  - Impact on research testbeds
  - Impact on inter-agency network coordination activities

Office of Science U.S. Department of Energy
Network Research Program Elements
- R&D, E: Research, Development, and Engineering
- ANRT: Advanced Network Research Testbeds
- ECPI: Early Career Principal Investigators
- SBIR: Small Business Innovation Research

Office of Science U.S. Department of Energy
FY-03/04 and FY-04/05 Network Research Program Budget

Year      FY-03/04   FY-04/05
SciDAC    $2.0M      $2.0M
MICS      $4.5M      $2.5M
Total     $6.5M      $4.5M

Office of Science U.S. Department of Energy
Implementation of Office of Science Networking Recommendations – I (Very High-Speed Data Transfers)
Data, data, data, data everywhere! Many science areas, such as high-energy physics, computational biology, climate modeling, and astrophysics, predict a need for multi-Gbit/s data transfer capabilities in the next two years.
Program activities:
- Scalable TCP protocol enhancements for shared networks
- Scalable UDP for shared networks and dedicated circuits
- Alternative TCP/UDP transport protocols
- Bandwidth-on-demand technologies
- GridFTP Lite
- Ultra high-speed network components
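
To make the transport challenge concrete, here is a minimal sketch (added for illustration; the 5 Gbit/s target and 80 ms round-trip time are assumed example values, not figures from the slides) of the buffer sizing a single TCP stream needs on a long shared path, using standard socket options:

```python
# Back-of-envelope check: a single TCP stream can only fill a path if its
# window covers the bandwidth-delay product (BDP). Example numbers are
# assumptions: a 5 Gbit/s target over an ~80 ms round-trip continental path.
import socket

target_rate_bps = 5e9          # 5 Gbit/s, assumed target
rtt_s = 0.080                  # 80 ms round-trip time, assumed

bdp_bytes = target_rate_bps * rtt_s / 8
print(f"Required window: {bdp_bytes / 1e6:.0f} MB")   # ~50 MB

# A sender would then request buffers of at least the BDP; whether the OS
# grants them depends on system-wide limits (e.g. net.core.wmem_max on Linux).
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, int(bdp_bytes))
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, int(bdp_bytes))
```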

Office of Science U.S. Department of Energy
Implementation of Office of Science Networking Recommendations – II (Diverse SC Network Requirements)
Problem: Many science areas, such as high-energy physics, computational biology, climate modeling, and astrophysics, predict a need for multi-Gbit/s data transfer capabilities in the next two years.
Program vision (layered architecture):
- High-end science applications
- High-performance middleware
- Logical network layer: TCP, TCP variants, UDP variants, others
- Optical layer: packet-switched, circuit-switched, and hybrid-switched links
- Control and signaling plane
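
As a toy illustration of the layered vision (added here; the mapping is an assumed simplification, not part of the program design), the logical network layer would choose a transport to match whatever link service the optical layer provides:

```python
# Simplified illustration of the layered vision: middleware asks the logical
# network layer for a transfer, and a transport is chosen to match the kind
# of link the optical layer provides. The mapping is an assumed example.
LINK_TO_TRANSPORT = {
    "packet-switched":  "TCP variant tuned for shared, lossy paths",
    "circuit-switched": "rate-based UDP sized to the provisioned circuit",
    "hybrid-switched":  "TCP on the shared segment, paced UDP on the circuit",
}

def choose_transport(link_service: str) -> str:
    # The control/signaling plane would report what the optical layer set up.
    return LINK_TO_TRANSPORT.get(link_service, "default TCP")

for service in LINK_TO_TRANSPORT:
    print(f"{service:>16}: {choose_transport(service)}")
```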

Office of Science U.S. Department of Energy
Implementation of Office of Science Networking Recommendations – III
Production network (ESnet):
- Connects all DOE sites
- 7x24 operation and highly reliable
- Advanced Internet capability
- Predominantly best-effort service
Advanced research network (Ultra-Science Network, 20 Gbps):
- Experimental optical inter-networking
- On-demand bandwidth / DWDM circuits
- Ultra high-speed protocol development and testing
- GMPLS
High-impact science network (ESnet QoS/MPLS testbed, 5 sites):
- Connects a few high-impact science sites
- Ultra high-speed IP network technologies
- Reliable and secure services
- QoS/MPLS for on-demand bandwidth

Office of Science U.S. Department of Energy
Impact of the MPLS and Ultra-Science Network Testbeds
Category A sites – sites with local fiber arrangements:
1. FNAL (OC-12/OC) – Tier 1, CMS
2. ANL (OC-12/OC)
3. ORNL (OC-12/OC) – Leadership Computing
4. PNNL (OC-12/OC) – EMSL Computing
5. NERSC (OC-48/OC) – Flagship Computing
6. LBL (OC-48/OC)
7. SLAC (OC-12/OC) – BaBar data source
Category B sites – sites with local fiber arrangements (T3 to OC-12):
1. BNL – Tier 1, ATLAS
2. JLab
3. GA
4. Princeton
5. MIT
UltraNet:
1. Use UltraNet to link sites with local fiber connectivity
2. Develop dynamic provisioning technologies to manage DWDM circuits
3. Develop and test advanced transport protocols for high-speed data transfers over DWDM links
QoS/MPLS:
1. Use MPLS to establish LSPs linking sites with high-impact applications
2. Use MPLS to provide guaranteed end-to-end QoS for high-impact applications
3. Link LSPs with dynamically established GMPLS circuits

Office of Science U.S. Department of Energy
Advanced Research Network Testbeds (QoS + MPLS)
Technical activities:
- Deploy site QoS technologies at selected DOE sites
- Integrate QoS with local grid infrastructure
- Deploy MPLS in the ESnet core network
- Integrate MPLS with GMPLS
- Integrate on-demand bandwidth technologies with applications
Target science applications:
- High-energy physics (CMS and ATLAS) – high-speed data transfer
- Fusion energy – remote control of scientific instruments
- Nuclear physics – remote collaborative visualization
Goal: to develop advanced network technologies that provide guaranteed, on-demand, end-to-end bandwidth to selected high-impact science applications.
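
For flavor, the sketch below shows the kind of information an on-demand bandwidth request between two sites might carry. It is purely illustrative: the field names, the submit function, and the example sites and rates are hypothetical and do not correspond to an actual ESnet interface.

```python
# Hypothetical illustration of a bandwidth-on-demand request between two
# sites. Field names and the submit() function are invented for this sketch;
# they do not correspond to a real provisioning API.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class CircuitRequest:
    src_site: str          # e.g. "FNAL"
    dst_site: str          # e.g. "CERN"
    bandwidth_mbps: int    # guaranteed rate for the LSP or circuit
    start: datetime
    duration: timedelta
    application: str       # used for prioritizing high-impact science

def submit(request: CircuitRequest) -> str:
    """Placeholder for handing the request to a provisioning service that
    would set up an MPLS LSP (or a GMPLS-signaled circuit) along the path."""
    print(f"Requesting {request.bandwidth_mbps} Mb/s "
          f"{request.src_site} -> {request.dst_site} for {request.application}")
    return "reservation-0001"   # a reservation handle, hypothetical

# Example: a CMS transfer wanting 2 Gb/s for six hours starting now.
submit(CircuitRequest("FNAL", "CERN", 2000, datetime.now(), timedelta(hours=6), "CMS"))
```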

Office of Science U.S. Department of Energy
Initial MPLS Deployment in ESnet
(Network map: sites include GA, BNL, JLab, PNNL, NERSC, Caltech, SLAC, ORNL, and FNAL, with connections to StarLight and CERN. Site technology: QoS. Core technologies: MPLS, with GMPLS interconnection.)

Office of Science U.S. Department of Energy
Ultra-Science Network Testbed: Topology
Upgrade: 20 Gbps backbone
(Topology map showing DOE national labs, DOE university partners, CERN, ESnet, 10 Gbps and 20 Gbps links, and UltraNet links under discussion.)
Major nodes:
- StarLight / FNAL
- SOX / ORNL
- Seattle / PNNL
- Sunnyvale / SLAC
- Sunnyvale / Caltech

Office of Science U.S. Department of Energy
Ultra-Science Network Testbed: Activities
Dynamic provisioning:
- Development of data circuit-switched technologies
- IP control plane based on GMPLS
- Integration of QoS, MPLS, and GMPLS
- Inter-domain control plane signaling
- Bandwidth-on-demand technologies
Ultra high-speed data transfer protocols:
- High-speed transport protocols for dedicated channels
- High-speed data transfer protocols for dedicated channels
- Layer data multicasting
Ultra high-speed cyber security:
- Ultra high-speed IDS
- Ultra high-speed firewalls and alternatives
- Control plane security
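
Transport protocols for dedicated channels typically replace TCP congestion control with pacing at the provisioned circuit rate, since there is no competing traffic to back off from. The sketch below illustrates that basic pacing loop; the rate, destination address, and chunk size are assumed example values, and a real protocol would add sequencing, loss recovery, and flow control.

```python
# Minimal sketch of rate-paced UDP sending, the basic idea behind transport
# protocols for dedicated circuits (no congestion control, just pacing at the
# provisioned rate). All constants are assumed example values.
import socket
import time

RATE_BPS = 1e9                 # provisioned circuit rate: 1 Gbit/s (assumed)
CHUNK = 8192                   # payload bytes per datagram
DEST = ("198.51.100.10", 5001) # example destination (documentation address)

def send_paced(data: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    interval = CHUNK * 8 / RATE_BPS          # seconds between datagrams
    next_send = time.monotonic()
    for off in range(0, len(data), CHUNK):
        sock.sendto(data[off:off + CHUNK], DEST)
        next_send += interval
        delay = next_send - time.monotonic()
        if delay > 0:                        # sleep only if ahead of schedule
            time.sleep(delay)

send_paced(bytes(10 * 1024 * 1024))          # pace a 10 MB test buffer onto the circuit
```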

Office of Science U.S. Department of Energy
UltraNet Funded Projects and Laboratory Initiatives
UltraNet/GMPLS institutions:
- FNAL – fiber to StarLight/UltraNet
- ORNL – fiber to Atlanta and StarLight/UltraNet
- SLAC – fiber to Sunnyvale/UltraNet (under discussion)
- PNNL – fiber connection to Seattle/UltraNet
- Caltech – DWDM link to Sunnyvale/UltraNet
UltraNet QoS/MPLS:
- Fusion energy: GA, NERSC, Princeton
- ATLAS project: BNL, CERN, U. Michigan
- CMS project: FNAL, CERN, UCSD
Funded projects – application development:
- FNAL – explore very high-speed transfer of LHC data on UltraNet
- PNNL – remote visualization of computational biology on UltraNet
- ORNL – astrophysics real-time data visualization on UltraNet and CHEETAH
- GA – wide-area network QoS using MPLS
- BNL – exploring QoS/MPLS for LHC data transfers

Office of Science U.S. Department of Energy
Inter-Agency Collaboration
- CHEETAH (NSF): dynamic provisioning – control plane interoperability; application – astrophysics (TSI)
- DRAGON (NSF): dynamic provisioning – control plane interoperability; all-optical network technology
- OMNInet (NSF): dynamic provisioning – control plane interoperability; all-optical network technology
- UltraNet (DOE): dynamic provisioning – control plane interoperability; hybrid circuit/packet-switched network
- HOPI (Internet2): collaborations
Collaboration issues:
- Control plane architecture and interoperability
- Optical service definitions and taxonomy
- Inter-domain circuit exchange services
- GMPLS and MPLS (ESnet and Internet2) integration
- Testing of circuit-based transport protocols
- Integration of network-intensive applications
- Integration with Grid applications

Office of Science U.S. Department of Energy
UltraNet Operations and Management
Engineering team:
1. UltraNet engineering
2. ESnet engineering representatives
3. Application developer representatives
Research team (awards pending):
1. Network research PIs
2. Application prototyping PIs
3. Other research networks
Management team:
1. UltraNet engineering
2. ESnet engineering representative
3. ESCC representative
Management responsibilities*:
1. Prioritize experiments on UltraNet
2. Schedule testing
3. Develop technology transfer strategies
* Needs to be integrated into the Office of Science networking governance model articulated in the roadmap workshop

Office of Science U.S. Department of Energy
Network Technologies for Leadership-Class Supercomputing
- Leadership supercomputer being built at ORNL
- A national resource: access from universities, national labs, and industry is a major challenge
- Impact of the leadership-class supercomputer on Office of Science networking
- Network technologies for the leadership-class supercomputer
- Inter-agency networking coordination issues

Office of Science U.S. Department of Energy
Computing and Communications: the "impedance" mismatch between computation and communication
Rule of thumb: the bandwidth must be adequate to transfer a petabyte per day (~200 Gbps) – NOT on the evolutionary path of backbone capacity, much less of achievable application throughput.
(Chart comparing supercomputer peak performance (Cray 1, Cray Y-MP, Intel Paragon, ASCI Blue Mountain, ASCI White, Earth Simulator), backbone performance (T3 and SONET rates from 0.15 Gbps through 40 Gbps; Ethernet from 10 Mbps to 1 GigE), and achievable end-to-end application performance, with projections toward 80-100 Gbps.)
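
As a sanity check on the rule of thumb (added here, not from the slide), moving one petabyte in a day needs roughly 93 Gbit/s of sustained goodput; provisioning at about 50% effective utilization, an assumed figure, brings the requirement to the ~200 Gbps quoted above.

```python
# Back-of-envelope check on the petabyte/day rule of thumb. The 50% effective
# utilization is an assumed figure used to illustrate why the quoted number is
# roughly double the raw line rate.
PETABYTE_BITS = 1e15 * 8          # 1 PB expressed in bits
SECONDS_PER_DAY = 86_400

raw_gbps = PETABYTE_BITS / SECONDS_PER_DAY / 1e9
print(f"Raw sustained rate:        {raw_gbps:.0f} Gbit/s")               # ~93 Gbit/s

utilization = 0.5                 # assumed effective utilization of the path
print(f"Provisioned at 50% util.:  {raw_gbps / utilization:.0f} Gbit/s") # ~185 Gbit/s
```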

Office of Science U.S. Department of Energy Q&A