
Networks for HENP and ICFA SCIC - Harvey B. Newman, California Institute of Technology - APAN High Energy Physics Workshop


1 Networks for HENP and ICFA SCIC
Harvey B. Newman, California Institute of Technology
APAN High Energy Physics Workshop, January 21, 2003

2 Next Generation Networks for Experiments: Goals and Needs
Large data samples explored and analyzed by thousands of globally dispersed scientists, in hundreds of teams.
- Providing rapid access to event samples, subsets and analyzed physics results from massive data stores: from Petabytes by 2002, ~100 Petabytes by 2007, to ~1 Exabyte by ~2012.
- Providing analyzed results with rapid turnaround, by coordinating and managing the large but LIMITED computing, data handling and NETWORK resources effectively.
- Enabling rapid access to the data and the collaboration, across an ensemble of networks of varying capability.
- Advanced integrated applications, such as Data Grids, rely on seamless operation of our LANs and WANs, with reliable, monitored, quantifiable high performance.

3 Four LHC Experiments: The Petabyte to Exabyte Challenge
ATLAS, CMS, ALICE, LHCb: Higgs + new particles; Quark-Gluon Plasma; CP violation.
- Data stored: ~40 Petabytes/year and up; CPU: 0.30 Petaflops and up.
- 0.1 to 1 Exabyte (1 EB = 10^18 bytes) for the LHC experiments (2007 to ~2012?).

4 LHC Data Grid Hierarchy
[Diagram: tiered architecture from the online system down to physicists' workstations]
- Tier 0 (+1) at CERN: 700k SI95, ~1 PB disk, tape robot; ~100-400 MBytes/sec from the online system; physics data cache at ~PByte/sec.
- Tier 1 centers (FNAL, IN2P3, INFN, RAL): ~2.5 Gbps links to CERN.
- Tier 2 centers: ~2.5 Gbps links to Tier 1.
- Tier 3 (institutes, ~0.25 TIPS) and Tier 4 (workstations): 0.1 to 10 Gbps.
- CERN/outside resource ratio ~1:2; Tier0 : (sum of Tier1s) : (sum of Tier2s) ~1:1:1.
- Tens of Petabytes by 2007-8; an Exabyte within ~5 years later.

5 ICFA and Global Networks for HENP
- National and international networks, with sufficient (rapidly increasing) capacity and capability, are essential for:
  - The daily conduct of collaborative work in both experiment and theory
  - Detector development & construction on a global scale; data analysis involving physicists from all world regions
  - The formation of worldwide collaborations
  - The conception, design and implementation of next generation facilities as "global networks"
- "Collaborations on this scale would never have been attempted, if they could not rely on excellent networks"

6 ICFA and International Networking
- ICFA Statement on Communications in Int'l HEP Collaborations of October 17, 1996. See http://www.fnal.gov/directorate/icfa/icfa_communicaes.html
- "ICFA urges that all countries and institutions wishing to participate even more effectively and fully in international HEP Collaborations should:
  - Review their operating methods to ensure they are fully adapted to remote participation
  - Strive to provide the necessary communications facilities and adequate international bandwidth"

7 ICFA Network Task Force: 1998 Bandwidth Requirements Projection (Mbps)
100-1000x bandwidth increase foreseen for 1998-2005.
See the ICFA-NTF requirements report: http://l3www.cern.ch/~newman/icfareq98.html

8 ICFA Standing Committee on Interregional Connectivity (SCIC)
- Created by ICFA in July 1998 in Vancouver, following the ICFA-NTF.
- CHARGE: make recommendations to ICFA concerning the connectivity between the Americas, Asia and Europe (and network requirements of HENP). As part of the process of developing these recommendations, the committee should:
  - Monitor traffic
  - Keep track of technology developments
  - Periodically review forecasts of future bandwidth needs, and
  - Provide early warning of potential problems
- Create subcommittees when necessary to meet the charge.
- The chair of the committee should report to ICFA once per year, at its joint meeting with laboratory directors (Feb. 2003).
- Representatives: major labs, ECFA, ACFA, NA users, S. America.

9 ICFA-SCIC Core Membership
- Representatives from major HEP laboratories: W. Von Reuden (CERN), Volker Guelzow (DESY), Vicky White (FNAL), Yukio Karita (KEK), Richard Mount (SLAC)
- User representatives: Richard Hughes-Jones (UK), Harvey Newman (USA), Dean Karlen (Canada)
- For Russia: Slava Ilyin (MSU)
- ECFA representatives: Denis Linglin (IN2P3, Lyon), Federico Ruggieri (INFN Frascati)
- ACFA representatives: Rongsheng Xu (IHEP Beijing), H. Park, D. Son (Kyungpook Nat'l University)
- For South America: Sergio F. Novaes (University of Sao Paulo)

10 SCIC Sub-Committees
Web page: http://cern.ch/ICFA-SCIC/
- Monitoring: Les Cottrell (http://www.slac.stanford.edu/xorg/icfa/scic-netmon), with Richard Hughes-Jones (Manchester), Sergio Novaes (Sao Paulo), Sergei Berezhnev (RUHEP), Fukuko Yuasa (KEK), Daniel Davids (CERN), Sylvain Ravot (Caltech), Shawn McKee (Michigan)
- Advanced Technologies: Richard Hughes-Jones, with Vladimir Korenkov (JINR, Dubna), Olivier Martin (CERN), Harvey Newman
- The Digital Divide: Alberto Santoro (Rio, Brazil), with Slava Ilyin, Yukio Karita, David O. Williams; also Dongchul Son (Korea), Hafeez Hoorani (Pakistan), Sunanda Banerjee (India), Vicky White (FNAL)
- Key Requirements: Harvey Newman; also Charlie Young (SLAC)

11 Transatlantic Net WG (HN, L. Price): Bandwidth Requirements [*]
[*] BW requirements increasing faster than Moore's Law. See http://gate.hep.anl.gov/lprice/TAN

12 History - One Large Research Site
- Current traffic ~400 Mbps; ESnet limitation.
- Projections: 0.5 to 24 Tbps by ~2012.
- Much of the traffic: SLAC to/from IN2P3, RAL and INFN, via ESnet + France, and Abilene + CERN.

13 Tier0-Tier1 Link Requirements Estimate: for Hoffmann Report 2001
1) Tier1 -> Tier0 data flow for analysis: 0.5 - 1.0 Gbps
2) Tier2 -> Tier0 data flow for analysis: 0.2 - 0.5 Gbps
3) Interactive collaborative sessions (30 peak): 0.1 - 0.3 Gbps
4) Remote interactive sessions (30 flows peak): 0.1 - 0.2 Gbps
5) Individual (Tier3 or Tier4) data transfers (limit to 10 flows of 5 Mbytes/sec each): 0.8 Gbps
TOTAL per Tier0-Tier1 link: 1.7 - 2.8 Gbps
NOTE:
- Adopted by the LHC experiments; given in the upcoming Hoffmann Steering Committee report: "1.5 - 3 Gbps per experiment"
- Corresponds to ~10 Gbps baseline BW installed on the US-CERN link
- The Hoffmann panel also discussed the effects of higher bandwidths, for example all-optical 10 Gbps Ethernet across WANs
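The per-link total on slide 13 is just the sum of the five flow-range estimates; a minimal sketch checking the arithmetic (labels paraphrased from the slide, the 0.8 Gbps figure taken as given):

```python
# Sum the per-flow bandwidth ranges from slide 13 (values in Gbps).
flows = {
    "Tier1 -> Tier0 analysis":            (0.5, 1.0),
    "Tier2 -> Tier0 analysis":            (0.2, 0.5),
    "Interactive collaborative sessions": (0.1, 0.3),
    "Remote interactive sessions":        (0.1, 0.2),
    "Tier3/Tier4 individual transfers":   (0.8, 0.8),  # capped at 10 flows, as on the slide
}
low  = sum(lo for lo, hi in flows.values())
high = sum(hi for lo, hi in flows.values())
print(f"Total per Tier0-Tier1 link: {low:.1f} - {high:.1f} Gbps")
```

This reproduces the slide's 1.7 - 2.8 Gbps total.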

14 Tier0-Tier1 BW Requirements Estimate: for Hoffmann Report 2001
- Does not include the more recent ATLAS data estimates:
  - 270 Hz at 10^33 instead of 100 Hz
  - 400 Hz at 10^34 instead of 100 Hz
  - 2 MB/event instead of 1 MB/event
- Does not allow fast download to Tier3+4 of "small" object collections. Example: downloading 10^7 events of AODs (10^4 bytes each) is ~100 GBytes; at 5 Mbytes/sec per person (above) that's 6 hours!
- This is still a rough, bottom-up, static, and hence conservative model:
  - A dynamic distributed DB or "Grid" system with caching, co-scheduling, and pre-emptive data movement may well require greater bandwidth
  - Does not include "virtual data" operations: derived data copies; data-description overheads
  - Further MONARC computing model studies are needed
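The "6 hours" back-of-envelope above can be checked directly (event counts and the 5 MB/s per-person cap are the slide's figures):

```python
# Slide 14's example: download 10^7 AOD events of 10^4 bytes each
# at the assumed per-person cap of 5 Mbytes/sec.
events      = 10**7      # number of events
event_size  = 10**4      # bytes per AOD event
rate        = 5e6        # bytes/sec (5 Mbytes/sec)

total_bytes = events * event_size        # 1e11 bytes = 100 GB
hours       = total_bytes / rate / 3600
print(f"{total_bytes / 1e9:.0f} GB at 5 MB/s takes {hours:.1f} hours")
```

The exact figure is ~5.6 hours, which the slide rounds to 6.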

15 ICFA SCIC Meetings [*] and Topics
- Focus on the Digital Divide this year: identification of problem areas; work on ways to improve
- Network status and upgrade plans in each country; performance (throughput) evolution in each country, and transatlantic
- Performance monitoring world-overview (Les Cottrell, IEPM project)
- Specific technical topics (examples): bulk transfer, new protocols; collaborative systems, VoIP
- Preparation of reports to ICFA (lab directors' meetings)
  - Last report: World Network Status and Outlook - Feb. 2002
  - Next report: Digital Divide, + monitoring, advanced technologies; requirements evolution - Feb. 2003
[*] Seven meetings in 2002; at KEK on December 13.

16 Network Progress in 2002 and Issues for Major Experiments
- Backbones & major links advancing rapidly to the 10 Gbps range
  - "Gbps" end-to-end throughput data flows have been tested; will be in production soon (in 12 to 18 months)
  - Transition to multi-wavelengths in 1-3 years in the "most favored" regions
- Network advances are changing the view of the net's roles
  - Likely to have a profound impact on the experiments' computing models, and bandwidth requirements
  - A more dynamic view: GByte to TByte data transactions; dynamic path provisioning
- Net R&D driven by advanced integrated applications, such as Data Grids, that rely on seamless LAN and WAN operation, with reliable, quantifiable (monitored), high performance
- All of the above will further open the Digital Divide chasm. We need to take action.

17 ICFA SCIC: R&E Backbone and International Link Progress
- GEANT pan-European backbone (http://www.dante.net/geant): now interconnects >31 countries; many trunks at 2.5 and 10 Gbps
- UK: SuperJANET core at 10 Gbps; 2.5 Gbps NY-London, with 622 Mbps to ESnet and Abilene
- France (IN2P3): 2.5 Gbps RENATER backbone from October 2002; Lyon-CERN link upgraded to 1 Gbps Ethernet; proposal for dark fiber to CERN by end 2003
- SuperSINET (Japan): 10 Gbps IP and 10 Gbps wavelength core; Tokyo-NY links: 2 x 2.5 Gbps started; peering with ESnet by Feb.
- CA*net4 (Canada): interconnecting customer-owned dark fiber nets across Canada at 10 Gbps, started July 2002; "lambda-grids" by ~2004-5
- GWIN (Germany): 2.5 Gbps core; connection to the US at 2 x 2.5 Gbps; support for the SILK project: satellite links to FSU republics
- Russia: 155 Mbps links to Moscow (typically 30-45 Mbps for science); Moscow-Starlight link to 155 Mbps (US NSF + Russia support); Moscow-GEANT and Moscow-Stockholm links at 155 Mbps

18 R&E Backbone and Int'l Link Progress
- Abilene (Internet2): upgrade from 2.5 to 10 Gbps in 2002; encourage high throughput use for targeted applications; FAST
- ESnet: upgrade to 10 Gbps "as soon as possible"
- US-CERN: to 622 Mbps in August; move to STARLIGHT; 2.5 Gbps research triangle from 8/02 (STARLIGHT-CERN-NL); to 10 Gbps in 2003 [10 Gbps SNV-Starlight link on loan from Level(3)]
- SLAC + IN2P3 (BaBar): typically ~400 Mbps throughput on the US-CERN and Renater links; 600 Mbps throughput is the BaBar target for early 2003 (with the ESnet upgrade)
- FNAL: ESnet link upgraded to 622 Mbps; plans for dark fiber to STARLIGHT proceeding
- NY-Amsterdam donation from Tyco, September 2002, arranged by IEEAF: 622 Mbps + 10 Gbps research wavelength
- US National Light Rail proceeding; startup expected this year

19

20 - >200 primary participants: all 50 states, D.C. and Puerto Rico
- 75 partner corporations and non-profits
- 23 state research and education nets
- 15 "GigaPoPs" support 70% of members
- 2.5 -> 10 Gbps backbone

21 2003: OC192 and OC48 Links Coming Into Service; Need to Consider Links to US HENP Labs

22 National R&E Network Example - Germany: DFN Transatlantic Connectivity 2002
- 2 x OC48: NY-Hamburg and NY-Frankfurt (STM-4 and STM-16 links)
- Direct peering to Abilene (US) and CANARIE (Canada)
- UCAID said to be adding another 2 OC48s, in a proposed Global Terabit Research Network (GTRN)
- Virtual SILK Highway project (from 11/01): NATO ($2.5M) and partners ($1.1M)
  - Satellite links to the South Caucasus and Central Asia (8 countries)
  - In 2001-2 (pre-SILK) BW was 64-512 kbps; proposed VSAT to get 10-50x BW for the same cost
  - See www.silkproject.org
  - Partners: CISCO, DESY, GEANT, UNDP, US State Dept., World Bank, UC London, Univ. Groningen

23 National Research Networks in Japan
- SuperSINET
  - Started operation January 4, 2002
  - Support for 5 important areas: HEP, genetics, nano-technology, space/astronomy, Grids
  - Provides 10 wavelengths: 10 Gbps IP connection; direct intersite GbE links; 9 universities connected
- January 2003: two transpacific 2.5 Gbps wavelengths (to NY); Japan-US-CERN Grid testbed soon
[Network map: IP and WDM paths, IP routers and OXCs linking Tokyo, Osaka and Nagoya hubs with KEK, NAO, NII (Hitotsubashi and Chiba), ISAS, U-Tokyo, Tohoku U, IMS, NIG, NIFS, Nagoya U, Kyoto U, ICR Kyoto-U and Osaka U]

24 SuperSINET Updated Map: October 2002

25 APAN Links in Southeast Asia January 15, 2003

26 National Light Rail Footprint
- NLR buildout started November 2002; initially 4 x 10 Gb wavelengths; to 40 x 10 Gb waves in future
- NREN backbones reached 2.5-10 Gbps in 2002 in Europe, Japan and the US; the US is now in transition to an optical, dark-fiber, multi-wavelength R&E network
[Map: fiber routes with 15808 terminal, regen or OADM sites at SEA, POR, SAC, SVL, FRE, LAX, SDG, PHO, OLG, OGD, DEN, DAL, KAN, CHI, STR, NAS, ATL, RAL, WAL, PIT, CLE, WDC, NYC, BOS]

27 Progress: Max. Sustained TCP Throughput on Transatlantic and US Links
- 8-9/01: 105 Mbps with 30 streams SLAC-IN2P3; 102 Mbps in 1 stream CIT-CERN
- 11/5/01: 125 Mbps in one stream (modified kernel): CIT-CERN
- 1/09/02: 190 Mbps for one stream shared on 2 x 155 Mbps links
- 3/11/02: 120 Mbps disk-to-disk with one stream on a 155 Mbps link (Chicago-CERN)
- 5/20/02: 450-600 Mbps SLAC-Manchester on OC12 with ~100 streams
- 6/1/02: 290 Mbps Chicago-CERN, one stream on OC12 (modified kernel)
- 9/02: 850, 1350, 1900 Mbps Chicago-CERN with 1, 2, 3 GbE streams on an OC48 link
- 11-12/02 (FAST): 940 Mbps in 1 stream SNV-CERN; 9.4 Gbps in 10 flows SNV-Chicago
Also see http://www-iepm.slac.stanford.edu/monitoring/bulk/ and the Internet2 E2E Initiative: http://www.internet2.edu/e2e

28 FAST (Caltech): A Scalable, "Fair" Protocol for Next-Generation Networks: from 0.1 to 100 Gbps
URL: netlab.caltech.edu/FAST - C. Jin, D. Wei, S. Low, FAST team & partners
- SC2002 11/02 experiment: Sunnyvale, Baltimore (3000 km), Chicago (1000 km), Geneva (7000 km)
- Highlights of FAST TCP (standard packet size):
  - 940 Mbps single flow/GE card: 9.4 petabit-m/sec, 1.9 times the Internet2 Land Speed Record (Sunnyvale-Geneva)
  - 9.4 Gbps with 10 flows: 37.0 petabit-m/sec, 6.9 times the LSR
  - 22 TB in 6 hours, in 10 flows
- Implementation: sender-side (only) mods; delay (RTT) based; stabilized Vegas
- Next: 10 GbE; 1 GB/sec disk to disk
[Diagram: the Internet as a distributed feedback system, TCP sources and AQM links coupled through delays R_f(s), R_b'(s) and prices p; plot of I2 LSR history (29.3.00 multiple flows, 9.4.02 1 flow, 22.8.02 IPv6) against the SC2002 1-, 2- and 10-flow points on Baltimore-Geneva and Baltimore-Sunnyvale]

29 HENP Major Links: Bandwidth Roadmap (Scenario) in Gbps
Continuing the trend: ~1000 times bandwidth growth per decade. We are rapidly learning to use and share multi-Gbps networks.
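The "~1000x per decade" trend on slide 29 corresponds to roughly a doubling every year, since 2^10 = 1024; a quick sketch of the implied compound annual growth:

```python
# Annual growth factor implied by ~1000x bandwidth growth per decade.
decade_factor = 1000
annual = decade_factor ** (1 / 10)   # 10th root of 1000
print(f"Implied annual growth factor: {annual:.3f} (~2x per year)")
```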

30 HENP Lambda Grids: Fibers for Physics
- Problem: extract "small" data subsets of 1 to 100 Terabytes from 1 to 1000 Petabyte data stores.
- Survivability of the HENP global Grid system, with hundreds of such transactions per day (circa 2007), requires that each transaction be completed in a relatively short time.
- Example: take 800 secs to complete the transaction. Then:
    Transaction Size (TB)    Net Throughput (Gbps)
    1                        10
    10                       100
    100                      1000 (capacity of fiber today)
- Summary: providing switching of 10 Gbps wavelengths within ~3-5 years, and Terabit switching within 5-8 years, would enable "Petascale Grids with Terabyte transactions", as required to fully realize the discovery potential of major HENP programs, as well as other data-intensive fields.
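The throughput column in slide 30's table follows directly from the 800-second transaction budget; a minimal sketch of the conversion:

```python
# Throughput needed to move N terabytes in a fixed transaction window,
# reproducing slide 30's table (800 seconds per transaction).
def gbps_needed(terabytes, seconds=800):
    bits = terabytes * 1e12 * 8      # TB -> bits (decimal terabytes)
    return bits / seconds / 1e9      # bits/sec -> Gbps

for tb in (1, 10, 100):
    print(f"{tb:4d} TB in 800 s -> {gbps_needed(tb):.0f} Gbps")
```

This reproduces the 10 / 100 / 1000 Gbps column of the table.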

31 IEPM: PingER Deployment
- Measurements from 34 monitors in 14 countries; over 790 remote hosts; 3600 monitor-remote site pairs
- Recently added 23 sites in 17 countries, due to the ICTP collaboration
- Reports on RTT, loss, reachability, jitter, reorders, duplicates, ...
- Measurements go back to Jan-95
- 79 countries monitored: containing >80% of the world population and 99% of online users of the Internet; mainly A&R sites

32 History - Loss Quality (Cottrell)
- Fewer sites have very poor to dreadful performance
- More have good performance (< 1% loss)

33 History - Throughput Quality Improvements from US
Bandwidth of TCP < MSS / (RTT * sqrt(Loss)) [1]
[1] Macroscopic Behavior of the TCP Congestion Avoidance Algorithm, Mathis, Semke, Mahdavi, Ott, Computer Communication Review 27(3), July 1997
- 80% annual improvement: a factor of ~100 per 8 years
- Progress: but the Digital Divide is maintained
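The Mathis et al. bound quoted on slide 33 can be evaluated numerically; a minimal sketch, with illustrative parameter values (the MSS, RTT and loss rate below are assumptions, not figures from the talk):

```python
from math import sqrt

# Mathis et al. bound: TCP throughput < MSS / (RTT * sqrt(loss)).
def mathis_bound_mbps(mss_bytes, rtt_s, loss):
    return mss_bytes * 8 / (rtt_s * sqrt(loss)) / 1e6

# Illustrative: 1460-byte MSS, 150 ms transatlantic RTT, 0.1% loss.
print(f"{mathis_bound_mbps(1460, 0.150, 0.001):.2f} Mbps")
```

This makes the slide's point concrete: at transatlantic RTTs, even modest loss caps a single standard TCP flow at a few Mbps, which is why loss quality drives the throughput history shown here.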

34 NREN Core Network Size (Mbps-km): http://www.terena.nl/compendium/2002
[Chart, logarithmic scale from 100 to 100M Mbps-km, ranking countries from Lagging through In Transition and Leading to Advanced; countries shown include Ro, Ukr, It, Pl, Gr, Ir, Hu, Cz, Es, Nl, Fi, Ch]

35 Work on the Digital Divide: Several Perspectives
- Identify & help solve technical problems: national, regional, last 10/1/0.1 km
- Inter-regional proposals (example: Brazil): US NSF proposal (10/2002); possible EU LIS proposal
- Work on policies and/or pricing: pk, in, br, cn, SE Europe, ...
  - E.g. RoEduNet (2-6 to 34 Mbps); pricing not so different from the US-CERN price in 2002 for a few Gbps
  - Find ways to work with vendors, NRENs, and/or governments
- Use model cases: installation of new advanced fiber infrastructures; convince neighboring countries
  - Poland (to 5k km fiber); Slovakia; Ireland
- Exploit one-off solutions: e.g. extend the SILK project (DESY/FSU satellite links) to a SE European site
- Work with other organizations: Terena, Internet2, AMPATH, IEEAF, UN, etc. to help with technical and/or political solutions

36 Digital Divide Committee

37 Gigabit Ethernet Backbone; 100 Mbps Link to GEANT

38 Romania: 155 Mbps to GEANT and Bucharest; inter-city links of 2-6 Mbps, to 34 Mbps in 2003; annual cost > 1 MEuro.

39 Digital Divide WG Activities
- Questionnaire distributed to the HENP lab directors and the major collaboration managements
- Plan on a project to build a HENP world network map, updated and maintained on a web site, backed by a database:
  - Systematize and track needs and status
  - Information: link bandwidths, utilization, quality, pricing, local infrastructure, last mile problems, vendors, etc.
  - Identify urgent cases; focus on opportunities to help
- First ICFA SCIC workshop: focus on the Digital Divide
  - Target date February 2004 in Rio de Janeiro (LISHEP); organization meeting July 2003
  - Plan a statement at the WSIS, Geneva (December 2003)
  - Install and leave behind a good network
  - Then 1 (to 2) workshops per year, at sites that need help

40 We Must Close the Digital Divide
Goal: to make scientists from all world regions full partners in the process of search and discovery.
What ICFA and the HENP community can do:
- Help identify and highlight specific needs (to work on): policy problems; last mile problems; etc.
- Spread the message: ICFA SCIC is there to help; coordinate with AMPATH, IEEAF, APAN, Terena, Internet2, etc.
- Encourage joint programs [such as in DESY's SILK project; Japanese links to SE Asia and China; AMPATH to So. America]: NSF & LIS proposals from the US and EU to South America
- Make direct contacts, arrange discussions with gov't officials; ICFA SCIC is prepared to participate
- Help start, or get support for, workshops on networks (& Grids): discuss & create opportunities; encourage, help form funded programs
- Help form regional support & training groups (requires funding)

41 Groningen Carrier Hotel: March 2002 “Cultivate and promote practical solutions to delivering scalable, universally available and equitable access to suitable bandwidth and necessary network resources in support of research and education collaborations.” http://www.ieeaf.org

42 NY-AMS: 9/02; CA-Tokyo: by ~1/03 (research)

43 Global Quilt Initiative - GMRE Initiative - 001: Global Medical Research Exchange Initiative, Bio-Medicine and Health Sciences
[Map: layer 1, 2 and 3 spoke & hub sites, including CA, MD, NL, Barcelona, Greece, St. Petersburg, Kazakhstan, Uzbekistan, Chennai, Navi Mumbai, CN, SG, Perth, Ghana, Buenos Aires/Sao Paulo]
Proposal: a Global Research and Education Network for Physics.

44 Networks, Grids and HENP
- The current generation of 2.5-10 Gbps network backbones arrived in the last 15 months in the US, Europe and Japan
  - Major transoceanic links also at 2.5 - 10 Gbps in 2003
  - Capability increased ~4 times, i.e. 2-3 times Moore's Law
- Reliable high end-to-end performance of network applications (large file transfers; Grids) is required. Achieving this requires:
  - End-to-end monitoring; a coherent approach
  - Getting high performance (TCP) toolkits into users' hands
- Digital Divide: network improvements are especially needed in SE Europe, So. America, SE Asia, and Africa; key examples: India, Pakistan, China; Brazil; Romania
- Removing regional and last mile bottlenecks and compromises in network quality is now on the critical path, in all world regions
- Work in concert with APAN, Internet2, Terena, AMPATH; DataTAG, the Grid projects and the Global Grid Forum

