SLAC Site Report
Les Cottrell, Gary Buhrmaster, Richard Mount, SLAC
For the ICFA/SCIC meeting, August 22, 2005
Slides: www.slac.stanford.edu/grp/scs/net/talk05/icfa-slac-aug05.ppt

Slide 2: SLAC Funding
Increasingly multi-program:
- Increasing focus on photon sources: SPEAR3, the Linac Coherent Light Source (LCLS), and the Ultrafast Science Center
- Increased funding from BES; the linac is increasingly funded by BES (all of it by 2009)
- HEP funding roughly stable; BaBar stops taking data in 2008
- Also NASA projects (GLAST and the Large Synoptic Survey Telescope, LSST)
- Joint Stanford / DoE / NSF funded projects: KIPAC, the Ultrafast center, the Guest House

Slide 3: SLAC Organization
- Photon Science
- Particle & Particle Astrophysics
- LCLS Construction (~$379 million)
- Operations (COO)
  - Computing/networking included here
- Computing as a utility to all of SLAC

Slide 4: Requires New Business Practices
- More project-oriented: with multiple projects comes a need for more accountability
- No longer dominated by HEP
- Harder to "hide" projects such as PingER that have no source of funding for operations

Slide 5: SLAC External Network Traffic
SLAC is one of the top users of ESnet and one of the top users of Internet2 (Fermilab doesn't do badly either).
- The majority of our science traffic is international
- Connectivity to both ESnet and CENIC (via Stanford)

Slide 6: [Chart: top ESnet site-to-site flows, in Terabytes/Month]
Flows shown include: Fermilab (US) → WestGrid (CA), SLAC (US) → INFN CNAF (IT), SLAC (US) → RAL (UK), Fermilab (US) → MIT (US), SLAC (US) → IN2P3 (FR), IN2P3 (FR) → Fermilab (US), SLAC (US) → Karlsruhe (DE), Fermilab (US) → Johns Hopkins (US), LIGO (US) → Caltech (US), LLNL (US) → NCAR (US), Fermilab (US) → SDSC (US), Fermilab (US) → Karlsruhe (DE), LBNL (US) → U. Wisconsin (US), Fermilab (US) → U. Texas Austin (US), BNL (US) → LLNL (US), Fermilab (US) → UC Davis (US), Qwest (US) → ESnet (US), Fermilab (US) → U. Toronto (CA), BNL (US) → LLNL (US), CERN (CH) → BNL (US), NERSC (US) → LBNL (US), DOE/GTN (US) → JLab (US), U. Toronto (CA) → Fermilab (US), NERSC (US) → LBNL (US), CERN (CH) → Fermilab (US).
Flow categories: DOE Lab–International R&E, Lab–U.S. R&E (domestic), Lab–Lab (domestic), Lab–Commercial (domestic).

Slide 7: ESnet BAMAN Connection
- SLAC participated in the BAMAN (Bay Area MAN) "christening" activity on June 24th, 2005
  - Moved physics data from SLAC to NERSC at ~8 Gb/s
- SLAC and ESnet personnel are working on the "commissioning" activities for the production traffic cutover
  - In the interim, the connection will use 1 Gb/s links

Slide 8: SLAC 10 Gb/s Plans
- The upgrade of the border and core site equipment is being ordered RSN (real soon now)
  - Cisco 6500s with SUP720s; router functionality (NetFlow, MPLS, etc.)
  - Will connect to ESnet and CENIC (via Stanford) at 10 Gb/s (when Stanford gets its 10 Gb/s upgrade)
- Power installation has been requested, but currently has no completion date
  - We had planned for new power and had it partially installed, but the electrical arc flash accident in October 2004 suspended most electrical "hot work", and the ESnet BAMAN equipment has used the previously installed outlets

Slide 9: Network Research Activities
- IEPM for >= 10 Gbit/s hybrid networks
  - Forecasting for middleware/scheduling, problem detection and troubleshooting; develop and evaluate new measurement tools (see the sketch below)
  - Passive monitoring for high-speed links
  - Provide network monitoring infrastructure to support critical HEP experiments
- Next-generation transport evaluation:
  - User-space transport (UDT), new TCP stacks, RDMA/DDP
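As an illustration of the kind of forecasting used for problem detection, here is a minimal sketch (not the actual IEPM code, whose methods are not described on the slide): an exponentially weighted moving average over periodic throughput measurements, flagging a measurement that drops well below the forecast. The alpha and drop_threshold values and the sample numbers are illustrative assumptions.

```python
# Minimal sketch (not IEPM code): forecast periodic throughput measurements
# with an exponentially weighted moving average (EWMA) and flag large drops,
# the sort of step-change detection used for network problem detection.

def ewma_forecast(samples, alpha=0.2):
    """Return the EWMA forecast after folding in each sample in order."""
    forecast = samples[0]
    for x in samples[1:]:
        forecast = alpha * x + (1 - alpha) * forecast
    return forecast

def detect_drop(history, latest, alpha=0.2, drop_threshold=0.5):
    """Flag the latest measurement if it falls below half of the forecast."""
    forecast = ewma_forecast(history, alpha)
    return latest < drop_threshold * forecast, forecast

if __name__ == "__main__":
    # Hourly achievable-throughput measurements in Mbit/s (made-up numbers).
    history = [920, 940, 910, 930, 925, 935]
    latest = 410  # a sudden drop, e.g. traffic rerouted onto a slower path
    dropped, forecast = detect_drop(history, latest)
    print(f"forecast ~{forecast:.0f} Mbit/s, latest {latest} Mbit/s, alarm={dropped}")
```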

Slide 10: Network Research Activities (continued)
- Datagrid Wide-area network Monitoring Infrastructure (DWMI)
- PingER and the Digital Divide (see the sketch below)
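For context, the measurements PingER aggregates are conventional ICMP pings from monitoring hosts to remote sites, from which round-trip time and packet loss are derived. Here is a minimal illustrative probe in Python (not the PingER code itself; the host list and ping options are assumptions):

```python
# Minimal PingER-style probe sketch (illustrative only, not the PingER code):
# send a burst of ICMP echo requests to each remote host with the system
# "ping" utility and report packet loss and median round-trip time.
import re
import statistics
import subprocess

def probe(host, count=10, timeout_s=30):
    """Ping a host; return (loss_percent, median_rtt_ms), or (100.0, None) on timeout."""
    try:
        out = subprocess.run(
            ["ping", "-c", str(count), host],
            capture_output=True, text=True, timeout=timeout_s,
        ).stdout
    except subprocess.TimeoutExpired:
        return 100.0, None
    rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]
    loss = 100.0 * (count - len(rtts)) / count
    return loss, (statistics.median(rtts) if rtts else None)

if __name__ == "__main__":
    # Hypothetical remote sites; the real PingER host list is much larger.
    for host in ["www.cern.ch", "www.in2p3.fr"]:
        loss, rtt = probe(host)
        print(f"{host}: loss={loss:.0f}% median_rtt={rtt} ms")
```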

Slide 11: Network Research Activities (continued)
High-speed testbed involvement:
- UltraLight: SLAC systems are currently at Sunnyvale Level(3)
  - Originally the UltraLight equipment was to be located at SLAC, but the connection to USN has changed the plans
- USN (UltraScience Net), reached via the UltraLight project; not directly connected at this time
- ESnet Science Data Network (SDN): provisioned, guaranteed-bandwidth circuits to support large, high-speed science data flows
- SC05

Slide 12: Future Production Network Requirements
- BaBar: the detector runs until December 2008; luminosity will continue to increase until the end of the run; analysis will continue after 2008
- GLAST: launch in 2006 (low data rate)
- LCLS: first science in 2009
- LSST: first science in 2012 (~0.5 GB/s; see the back-of-envelope calculation below)
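To put the LSST figure in network terms, a quick back-of-envelope calculation (my own arithmetic, not from the slide; the 10% protocol-overhead factor is an assumption): a sustained ~0.5 GB/s is roughly 4 Gbit/s on the wire and over a petabyte per month, which by itself motivates 10 Gb/s class connectivity.

```python
# Back-of-envelope arithmetic for the LSST rate quoted on the slide
# (~0.5 GB/s sustained); the overhead factor is an illustrative assumption.
rate_gbytes_per_s = 0.5
seconds_per_month = 86_400 * 30

wire_gbits_per_s = rate_gbytes_per_s * 8                       # ~4 Gbit/s of payload
monthly_pbytes = rate_gbytes_per_s * seconds_per_month / 1e6   # ~1.3 PB/month
with_overhead = wire_gbits_per_s * 1.1                         # assume ~10% protocol overhead

print(f"payload: {wire_gbits_per_s:.1f} Gbit/s")
print(f"volume:  {monthly_pbytes:.2f} PB/month")
print(f"needed:  ~{with_overhead:.1f} Gbit/s of sustained capacity")
```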

Slide 13: Futures
- Ultrafast center: modeling and analysis
- Huge-memory systems for data analysis
  - The PetaCache project
- The broader US HEP program (aka the LHC)
  - Contributes to the orientation of SLAC Scientific Computing R&D
- Continued network research activities
  - Network research vs. research network activities

Slide 14: Futures (continued)
Possibility of moving some or all of the site computing infrastructure offsite:
- Power and cooling challenges onsite
  - We have a 1 MW substation outside the building for expansion, but no cables into the building
  - We have an 8-inch water cooling pipe, but we are near cooling capacity
- A building retrofit will be disruptive
  - The computing center was built for water-cooled mainframes, not air-cooled rack-mounted equipment
- If SLAC moves forward, we will require multiple lambdas from the site to the colocation facility