Stanford University, SLAC, NIIT, the Digital Divide & Bandwidth Challenge Prepared by Les Cottrell, SLAC for the NIIT, February 22, 2006.

Presentation transcript:

Stanford University, SLAC, NIIT, the Digital Divide & Bandwidth Challenge Prepared by Les Cottrell, SLAC for the NIIT, February 22, 2006

Stanford University Location

Some facts
- Founded in the 1890s by Governor Leland Stanford and his wife Jane, in memory of their son Leland Stanford Jr. (there is an apocryphal story of the foundation)
- Movies invented at Stanford
- 1600 freshman entrants/year (12% acceptance), 7:1 student:faculty ratio, students from 53 countries
- 169K living Stanford alumni

Some alumni
- Sports: Tiger Woods, John McEnroe
- Astronaut Sally Ride
- Vint Cerf, "father of the Internet"
- Industry: Hewlett & Packard, Steve Ballmer (CEO, Microsoft), Scott McNealy (Sun) ...
- Ex-presidents: Ehud Barak (Israel), Alejandro Toledo (Peru)
- US politics: Condoleezza Rice, George Shultz, President Hoover

Some Startups
- Founded Silicon Valley (turned orchards into companies):
  - Started by providing land and encouragement (investment) for companies founded by Stanford alumni, such as HP & Varian
  - More recently: Sun (Stanford University Network), Cisco, Yahoo, Google

Excellence
- 17 Nobel prizewinners
- Stanford Hospital
- Stanford Linear Accelerator Center (SLAC), my home:
  - National lab operated by Stanford University, funded by the US Department of Energy
  - Roughly 1400 staff, plus contractors & outside users => ~3000, with ~2000 on site at a given time
  - Fundamental research in: experimental particle physics, theoretical physics, accelerator research, astrophysics, synchrotron light research
  - Has faculty to pursue the above research and awards degrees; 3 Nobel prizewinners

Work with NIIT
Co-supervision of students, building research capacity, publishing etc., for example:
- Quantify the Digital Divide: develop a measurement infrastructure to provide information on the extent of the Digital Divide:
  - Within Pakistan, and between Pakistan & other regions
  - Improve understanding, provide planning information and expectations, identify needs
  - Provide and deploy tools in Pakistan
- MAGGIE-NS collaboration projects:
  - TULIP - Faran
  - Network weather forecasting - Fawad, Fareena
  - Anomaly detection, diagnosis and alerting - Fawad, Adnan, Muhammad Ali
  - PingER management - Waqar
  - MTBF/MTTR of networks - not assigned
  - Federating network monitoring infrastructures (Smokeping, PingER, AMP, MonALISA, OWAMP ...) - Asma, Abdullah
  - Digital Divide - Aziz, Akbar, Rabail

Quantifying the Digital Divide: A scientific overview of the connectivity of South Asian and African Countries. Les Cottrell SLAC, Aziz Rehmatullah NIIT, Jerrod Williams SLAC, Arshad Ali NIIT. Presented at the CHEP06 Meeting, Mumbai, India, February 2006.

Introduction
- The PingER project started in 1995 to measure network performance for the US, European and Japanese HEP community
- Extended this century to measure the Digital Divide for the academic & research community
- Last year added monitoring sites in S. Africa, Pakistan & India
- Will report on network performance to these regions from the US and Europe: trends and comparisons
- Plus early results within and between these regions

Why does it matter? Scientists cannot collaborate as equal partners unless they have connectivity to share data, results, ideas etc. Distance education needs good communication for access to libraries, journals, educational materials, video, access to other teachers and researchers.

PingER coverage
- ~120 countries (99% of the world's connected population), 35 monitoring sites in 14 countries
- New monitoring sites in Cape Town, Rawalpindi and Bangalore
- Monitors 25 African countries, containing 83% of the African population

Minimum RTT from the US
- Indicates the best possible RTT, i.e. no queuing
- >600 ms probably indicates a geo-stationary satellite link (see the estimate below)
- Only a few places are still using satellite, mainly in Africa
- Between developed regions min-RTT is dominated by distance, so little improvement is possible
(Charts cover Jan 2000 to Dec 2003)
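To see why a minimum RTT above roughly 600 ms points at a geo-stationary satellite hop, here is a back-of-the-envelope estimate. This is a sketch using standard physical constants (orbital altitude, speed of light), not numbers from the talk:

```python
# Rough propagation-delay floor for a ping over one geostationary satellite hop.
C_KM_PER_S = 299_792        # speed of light in vacuum, km/s
GEO_ALTITUDE_KM = 35_786    # geostationary orbit altitude

# The echo request travels ground -> satellite -> ground, and the reply
# comes back the same way, so the signal covers roughly 4x the altitude.
rtt_s = 4 * GEO_ALTITUDE_KM / C_KM_PER_S
print(f"Propagation-only RTT ~ {rtt_s * 1000:.0f} ms")   # ~477 ms

# Slant ranges to the satellite, ground-segment routing and queuing push the
# observed minimum RTT above ~600 ms, which is how PingER flags likely
# satellite-connected sites.
```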

World throughput seen from the US
- Derived throughput ~ MSS / (RTT * sqrt(loss)), the Mathis formula (see the sketch below)
- Years behind Europe: 6 yrs: Russia, Latin America; 7 yrs: Mid-East, SE Asia; 10 yrs: South Asia; 11 yrs: Central Asia; 12 yrs: Africa
- South Asia, Central Asia and Africa are in danger of falling even farther behind
- Many sites in the Digital Divide have less connectivity than a residence in the US or Europe
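The Mathis relation on this slide can be turned into a quick throughput estimator. A minimal sketch follows; the MSS, RTT and loss values are illustrative, not measurements from the talk:

```python
from math import sqrt

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss: float) -> float:
    """TCP throughput bound ~ MSS / (RTT * sqrt(loss)), returned in Mbit/s."""
    return (mss_bytes * 8) / (rtt_s * sqrt(loss)) / 1e6

# Illustrative path: 1460-byte MSS, 200 ms RTT, 1% packet loss.
print(f"{mathis_throughput_mbps(1460, 0.200, 0.01):.2f} Mbit/s")   # ~0.58 Mbit/s

# Cutting loss to 0.1% on the same path improves the bound by sqrt(10) ~ 3.2x,
# which is why loss dominates the derived-throughput picture for long-RTT
# paths to developing regions.
print(f"{mathis_throughput_mbps(1460, 0.200, 0.001):.2f} Mbit/s")  # ~1.85 Mbit/s
```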

S. Asia & Africa seen from the US
- The data is very noisy but there are noticeable trends
- India may be holding its own
- Africa & Pakistan are falling behind

Compare to a US residence: sites in many countries have less bandwidth than a US residence.

India to India
- Monitoring host in Bangalore since Oct '05
  - Too early to tell much; also need more sites, have some good contacts
- 3 remote hosts (need to increase):
  - R&E sites in Mumbai & Hyderabad
  - Government site in AP
- Lots of difference between sites; the government site sees heavy congestion

PERN: Network Architecture
[Network diagram: core ATM/routers in Karachi, Islamabad and Lahore connecting groups of 12, 22 and 23 universities; university customers attach via access routers and 2x2 Mbps LAN-switch links over DXX/optical fiber; international links of 2-4 Mbps; core trunks in the 33-65 Mbps range]
- HEC will invest $4M in the backbone
- 3 to 9 Points-of-Presence (core nodes)
- $2.4M from HEC to public universities for last-mile costs
- Possible dark fiber initiative

Pakistan to Pakistan
- 3 monitoring sites in Islamabad/Rawalpindi: NIIT via NTC, NIIT via Micronet, and NTC (the PERN supplier)
  - All monitor 7 universities in Islamabad, Lahore, Karachi and Peshawar
- Careful: many university sites have proxies in the US & Europe
- Minimum RTTs: best is NTC at 6 ms, NIIT/NTC 10 ms (an extra 4 ms for the last mile), NIIT/Micronet 60 ms (slower links, different routes)
- Queuing = Avg(RTT) - Min(RTT) (see the sketch below)
  - NIIT/NTC heavily congested, with large queuing delays
  - Better when students are on holiday
  - NIIT/Micronet & NTC OK
- Outages show fragility
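The queuing estimate on this slide is just the average RTT minus the minimum RTT over a measurement window. A minimal sketch of that calculation over a set of ping samples; the RTT values are made up for illustration:

```python
def queuing_delay_ms(rtt_samples_ms: list[float]) -> float:
    """Estimate queuing delay as Avg(RTT) - Min(RTT), as on the slide."""
    avg_rtt = sum(rtt_samples_ms) / len(rtt_samples_ms)
    return avg_rtt - min(rtt_samples_ms)

# Illustrative ping RTTs (ms) from a congested path:
samples = [10.2, 180.5, 240.1, 95.3, 310.7, 12.4, 205.9]
print(f"min RTT  = {min(samples):.1f} ms")
print(f"queuing ~ {queuing_delay_ms(samples):.1f} ms")
```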

Pakistan Network Fragility
[Charts for NIIT/Micronet, NIIT/NTC and NTC: NIIT/NTC is heavily congested, the other sites are OK; annotations mark an NIIT outage and remote host outages]

Pakistan international fragility
- The infrastructure appears fragile
- Losses to QEA & NIIT are 3-8% averaged over a month
- A fiber cut off Karachi caused a 12-day outage in Jun-Jul '05, with huge losses of confidence and business
- Another fiber outage, this time of 3 hours: a power cable was dug up by excavators of the Karachi Water & Sewage Board
- Typically once a month losses go to 20%
(Chart: RTT in ms and loss in %, Feb '05 to Jul '05)

Many systemic factors: electricity, import duties, skills (M. Jensen)

Average cost: $11/kbps/month

Routing in Africa
- Seen from ZA, only Botswana & Zimbabwe are direct
- Most routes go via Europe or the USA
- This wastes costly international bandwidth

Loss within Africa

Satellites vs Terrestrial
- Terrestrial links via SAT3 & SEAMEW (Mediterranean)
- Terrestrial is not available to all within countries; EASSy will help
- PingER min-RTT measurements from the S. African TENET monitoring station

Between Regions
- Red ellipses show within-region measurements; India/Pakistan shown with green ellipses
- Blue = min(RTT), red = min-avg RTT
- ZA heavy congestion: Botswana, Argentina, Madagascar, Ghana, BF
- India better off than Pakistan

Overall
- Sorted by median throughput
- Within-region performance is better (blue ellipses)
- Europe, N. America, E. Asia, Russia generally good
- M. East, Oceania, S.E. Asia, L. America acceptable
- Africa, C. Asia, S. Asia poor

Examples
- India got Internet connectivity in 1989, China in 1994
  - India has 34 Mbits/s backbones, one possibly 622 Mbits/s
  - China is deploying multiple 10 Gbits/s links
- Brazil and India had similar international connectivity in 2001; now Brazil is at multi-Gbits/s
- Pakistan's PERN backbone is 50 Mbits/s, and end sites are ~1 Mbits/s
- Growth in # Internet users ( ): 420% Brazil, 393% China, 5000% Pakistan, 900% India; demand is outstripping growth

Conclusions
- S. Asia and Africa are ~10 years behind and falling further behind, creating a Digital Divide within a Digital Divide
- India appears better off than Africa or Pakistan
- Last-mile problems, and network fragility
- Decreasing use of satellites, though still needed for many remote countries in Africa and C. Asia
  - The EASSy project will bring fibre to E. Africa
- Growth in # users ( % Africa, 5000% Pakistan); networks are not keeping up
- Need more sites in developing regions and a longer time period of measurements

More information
- Thanks to: Harvey Newman & ICFA for encouragement & support, Anil Srivastava (World Bank) & N. Subramanian (Bangalore) for India, NIIT, NTC and PERN for the Pakistan monitoring sites, FNAL for PingER management support, Duncan Martin & TENET (ZA)
- Future: work with VSNL & ERnet for India, Julio Ibarra & Eriko Porto for L. America, NIIT & NTC for Pakistan
- Also see:
  - ICFA/SCIC Monitoring report
  - Paper on Africa & S. Asia
  - PingER project: www-iepm.slac.stanford.edu/pinger/

SC|05 Bandwidth Challenge ESCC Meeting 9th February ‘06 Yee-Ting Li Stanford Linear Accelerator Center

LHC Network Requirements
[Diagram of the LHC tiered computing model: CERN/outside resource ratio ~1:2, Tier0/(sum of Tier1)/(sum of Tier2) ~1:1:1; the experiment's online system feeds the CERN Tier 0+1 center (PBs of disk, tape robot, ~PByte/sec physics data cache); Tier 1 centers (FNAL, IN2P3, INFN, RAL) connect at ~10 Gbps, Tier 2 centers at 1-10 Gbps, down to Tier 3/4 institute workstations; tens of petabytes of data, an exabyte ~5-7 years later]

Overview
- Bandwidth Challenge:
  - 'The Bandwidth Challenge highlights the best and brightest in new techniques for creating and utilizing vast rivers of data that can be carried across advanced networks.'
  - Transfer as much data as possible using real applications over a 2-hour window
- We did... Distributed TeraByte Particle Physics Data Sample Analysis:
  - 'Demonstrated high speed transfers of particle physics data between host labs and collaborating institutes in the USA and worldwide. Using state of the art WAN infrastructure and Grid Web Services based on the LHC Tiered Architecture, they showed real-time particle event analysis requiring transfers of Terabyte-scale datasets.'

Overview
- In detail, during the bandwidth challenge (2 hours):
  - 131 Gbps measured by the SCInet BWC team on 17 of our waves (15-minute average)
  - 95.37 TB of data transferred (3.8 DVDs per second; see the quick check below)
  - 90-150 Gbps sustained (peak 150.7 Gbps)
- On the day of the challenge:
  - Transferred ~475 TB 'practising' (waves were shared, still tuning applications and hardware)
  - Peak one-way USN utilisation observed on a single link was 9.1 Gbps (Caltech) and 8.4 Gbps (SLAC)
- Also wrote to StorCloud:
  - SLAC: wrote 3.2 TB in 1649 files during the BWC
  - Caltech: 6 GB/sec with 20 nodes
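As a quick consistency check, the 95.37 TB moved in the 2-hour window corresponds to an average of roughly 106 Gbps, which sits inside the quoted 90-150 Gbps range. A minimal sketch, assuming decimal terabytes (1 TB = 10^12 bytes):

```python
# Average rate implied by the headline BWC numbers.
tb_transferred = 95.37          # TB moved during the 2-hour challenge window
window_s = 2 * 3600             # challenge window in seconds

avg_gbps = tb_transferred * 1e12 * 8 / window_s / 1e9
print(f"Average rate ~ {avg_gbps:.0f} Gbps")   # ~106 Gbps

# The 131 Gbps figure is a 15-minute average measured by SCInet, so it
# naturally sits above the full 2-hour average.
```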

Networking Overview
- We had 22 x 10 Gbits/s waves to the Caltech and SLAC/FNAL booths. Of these:
  - 15 waves went to the Caltech booth (from Florida (1), Korea/GLORIAD (1), Brazil (1 x 2.5 Gbits/s), Caltech (2), LA (2), UCSD (1), CERN (2), U Michigan (3), FNAL (2))
  - 7 x 10 Gbits/s waves went to the SLAC/FNAL booth (2 from SLAC, 1 from the UK, and 4 from FNAL)
- The waves were provided by Abilene, CANARIE, Cisco (5), ESnet (3), GLORIAD (1), HOPI (1), Michigan Light Rail (MiLR), National Lambda Rail (NLR), TeraGrid (3) and UltraScienceNet (4).

Network Overview

Hardware (SLAC only)
- At SLAC:
  - 14 x 1.8 GHz Sun v20z (dual Opteron)
  - 2 x Sun 3500 disk trays (2 TB of storage)
  - 12 x Chelsio T110 10 Gb NICs (LR)
  - 2 x Neterion/S2io Xframe I NICs (SR)
  - Dedicated Cisco 6509 with 4 x 4x10GE blades
- At SC|05:
  - 14 x 2.6 GHz Sun v20z (dual Opteron)
  - 10 QLogic HBAs for StorCloud access
  - 50 TB of storage at SC|05 provided by 3PAR (shared with Caltech)
  - 12 x Neterion/S2io Xframe I NICs (SR)
  - 2 x Chelsio T110 NICs (LR)
  - Shared Cisco 6509 with 6 x 4x10GE blades

Hardware at SC|05

Software
- bbcp ('BaBar file copy') (usage sketch below):
  - Uses ssh for authentication
  - Multiple-stream capable
  - Features 'rate synchronisation' to reduce byte retransmissions
  - Sustained over 9 Gbps on a single session
- XrootD:
  - Library for transparent file access (standard Unix file functions)
  - Designed primarily for LAN access (transaction-based protocol)
  - Managed over 35 Gbit/sec (in two directions) on 2 x 10 Gbps waves
  - Transferred 18 TBytes in 257,913 files
- dCache:
  - 20 Gbps of production and test cluster traffic
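For illustration, here is a minimal sketch of driving a multi-stream bbcp copy from Python. The host name and file paths are hypothetical, and the -s (stream count) and -w (window size) flags should be checked against the local bbcp man page rather than taken as definitive:

```python
# Hedged sketch: launch a multi-stream bbcp transfer. bbcp authenticates over
# ssh, so working ssh keys or an agent are assumed to be set up already.
import subprocess

cmd = [
    "bbcp",
    "-s", "8",       # assumed flag: number of parallel TCP streams
    "-w", "2M",      # assumed flag: per-stream TCP window size
    "/data/run123/events.root",             # hypothetical source file
    "remote.example.org:/data/run123/",     # hypothetical destination
]
subprocess.run(cmd, check=True)
```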

Last year (SC|04) BWC Aggregate Bandwidth

Cumulative Data Transferred
[Chart: cumulative data transferred, with the Bandwidth Challenge period marked]

Component Traffic

Bandwidth Contributions
[Charts of traffic in to and out from the booth, broken down by link: SLAC-ESnet, FermiLab-HOPI, SLAC-ESnet-USN, FNAL-UltraLight, and UKLight (SLAC-FermiLab-UK)]

SLAC Cluster Contributions
[Charts of traffic in to and out from the booth over ESnet routed and ESnet SDN layer 2 via USN, with the Bandwidth Challenge period marked]

SLAC/FNAL Booth Aggregate
[Chart: aggregate Mbps across the waves]

Problems…
- Managerial/PR:
  - Initial request for loan hardware took place 6 months in advance!
  - Lots and lots of paperwork to keep account of all the loan equipment
- Logistical:
  - Set up and tore down a pseudo-production network and servers in the space of a week!
  - Testing could not begin until the waves were alight; most waves were lit the day before the challenge!
  - Shipping so much hardware is not cheap!
  - Setting up monitoring

Problems…
- Tried to configure hardware and software prior to the show
- Hardware:
  - NICs: we had 3 bad Chelsios (bad memory); Xframe IIs did not work in UKLight's Boston machines
  - Hard disks: 3 dead 10K disks (had to ship in spares)
  - 1 x 4-port 10 Gb blade DOA
  - MTU mismatch between domains
  - Router blade died during stress testing the day before the BWC!
  - Cables! Cables! Cables!
- Software:
  - Used golden disks for duplication (still takes 30 minutes per disk to replicate!)
  - Linux kernels: the kernel version initially used showed severe performance problems compared to the version we switched to
  - (New) router firmware caused crashes under heavy load; unfortunately, this was only discovered just before the BWC, and we had to manually restart the affected ports during the BWC

Problems
- Most transfers were from memory to memory (ramdisk etc.)
  - Local caching of (small) files in memory
- Reading and writing to disk will be the next bottleneck to overcome

Conclusion
- Previewed the IT challenges of the next generation of data-intensive science applications (high energy physics, astronomy etc.):
  - Petabyte-scale datasets
  - Tens of national and transoceanic links at 10 Gbps (and up)
  - 100+ Gbps aggregate data transport sustained for hours; we reached a petabyte/day transport rate for real physics data
- Learned to gauge the difficulty of the global networks and transport systems required for the LHC mission:
  - Set up, shook down and successfully ran the systems in < 1 week
  - Understood and optimized the configurations of various components (network interfaces, routers/switches, OS, TCP kernels, applications) for high performance over the wide area network

Conclusion
- Products from this exercise:
  - An optimized Linux kernel (NFSv4 + FAST and other TCP stacks) for data transport, after 7 full kernel-build cycles in 4 days
  - A newly optimized application-level copy program, bbcp, that matches the performance of iperf under some conditions
  - Extensions of XrootD, an optimized low-latency file access application for clusters, across the wide area
  - Understanding of the limits of 10 Gbps-capable systems under stress
  - How to effectively utilize 10GE- and 1GE-connected systems to drive 10-gigabit wavelengths in both directions
  - Use of production and test clusters at FNAL reaching more than 20 Gbps of network throughput
- Significant efforts remain from the perspective of high-energy physics:
  - Management, integration and optimization of network resources
  - End-to-end capabilities able to utilize these network resources, including applications and IO devices (disk and storage systems)

Press and PR
- 11/8/05 - 'Brit Boffins aim to Beat LAN speed record', vnunet.com
- SC|05 Bandwidth Challenge, SLAC Interaction Point
- 'Top Researchers, Projects in High Performance Computing Honored at SC/05...', Business Wire (press release), San Francisco, CA, USA
- 11/18/05 - Official Winner Announcement
- 11/18/05 - SC|05 Bandwidth Challenge Slide Presentation
- 11/23/05 - Bandwidth Challenge Results, Slashdot
- 12/6/05 - Caltech press release
- 12/6/05 - 'Neterion Enables High Energy Physics Team to Beat World Record Speed at SC05 Conference', CCN Matthews News Distribution Experts
- 'High energy physics team captures network prize at SC|05', from SLAC and EurekAlert!
- 12/7/05 - 'High Energy Physics Team Smashes Network Record', Science Grid This Week
- 'Congratulations to our Research Partners for a New Bandwidth Record at SuperComputing 2005', Neterion