Storage and Data Movement at FNAL
D. Petravick, CHEP 2003

FNAL Overall Goals
- Provide a competent, one-copy permanent store for all of the Lab's experiments.
- Provide scalable and performant data flows:
  – tape and disk,
  – local area and wide area.
- Provide standard interfaces allowing interoperation with other sites.
- Collaborate! (DESY, > 5 years; Globus, LBL, JLAB, CERN)

Overview
- Central storage systems capacious and scalable enough to be the hub of a data-intensive system:
  – Linux as the hardware platform,
  – permanent and temporary semantics.
- Competent local and Grid interfaces: ENCP, dCap, FTP(s), and SRM (Storage Resource Manager); a sketch of the local access paths follows below.
- Investigating the GLUE schema and monitoring.
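As a concrete illustration of the two local interfaces named above, the sketch below wraps the encp (Enstore) and dccp (dCache dCap) copy clients. It assumes those clients are installed; the door host, port, and PNFS paths are placeholders, not real endpoints.

    # Minimal sketch of the local access paths: encp for Enstore tape,
    # dccp for a dCache dCap door. Hosts and paths are illustrative only.
    import subprocess

    def encp_fetch(pnfs_path: str, local_path: str) -> None:
        """Stage a file from Enstore tape with the encp copy utility."""
        subprocess.run(["encp", pnfs_path, local_path], check=True)

    def dccp_fetch(dcap_url: str, local_path: str) -> None:
        """Read a file through a dCache dCap door with dccp."""
        subprocess.run(["dccp", dcap_url, local_path], check=True)

    if __name__ == "__main__":
        encp_fetch("/pnfs/fnal.gov/usr/expt/raw.dat", "/scratch/raw.dat")
        dccp_fetch("dcap://door.fnal.gov:22125/pnfs/fnal.gov/usr/expt/raw.dat",
                   "/scratch/raw-cached.dat")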

Run II Work Plan
- SAM:
  – GridFTP integration with SAM (see the transfer sketch below),
  – dCap integration,
  – SRM integration with SAM.
- SRM integration with the legacy CDF AC++ framework.
- Restore rates for the CDF experiment.
- Provide hardware for D0 wide-area transfers.
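The GridFTP integration amounts to wide-area copies of the following kind. This is a hedged sketch using the standard globus-url-copy client; the endpoint and file names are hypothetical.

    # Illustrative wide-area GridFTP transfer, e.g. pulling a D0 file from
    # an FNAL door to local disk. Requires a valid Grid proxy; the endpoint
    # and paths here are made up.
    import subprocess

    def gridftp_copy(src_url: str, dst_url: str) -> None:
        """Copy between URLs with the globus-url-copy client."""
        subprocess.run(["globus-url-copy", src_url, dst_url], check=True)

    gridftp_copy("gsiftp://fndca.fnal.gov/pnfs/fnal.gov/usr/d0/run/file.dat",
                 "file:///scratch/file.dat")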

US CMS Work Plan
- Use dCache as the vehicle for local Grid file access.
- Evaluate storage elements for Tier 2 centers (dCache, DRM, NeST, EDG WP5, dFarm).
- Follow, and help formulate, LCG requirements.
- Detailed work to meet the CMS Data Challenge on CMS deadlines.
- Monitor and improve the network:
  – PingER / IEPM (co-project),
  – FAST and other high-performance TCP stacks (co-project).

Job-Aware, Replica-Aware Middleware
[Diagram: a framework plus its job sits above the permanent store and a set of replica stores, which it reaches through four interfaces: an FTP interface, an SRM interface, a direct file access interface, and a monitoring interface. A sketch of this interface model follows below.]
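To make the diagram concrete, the sketch below renders the storage-element model as an abstract Python interface: one permanent store exposed through FTP, SRM, direct file access, and monitoring doors. The method names and signatures are assumptions chosen for illustration, not an actual FNAL API.

    # Schematic storage-element model matching the slide's diagram.
    # All method names and signatures are hypothetical.
    from abc import ABC, abstractmethod

    class StorageElement(ABC):
        """Permanent store plus replica stores, behind four protocol doors."""

        @abstractmethod
        def ftp_retrieve(self, path: str, local_path: str) -> None:
            """FTP interface: whole-file transfer."""

        @abstractmethod
        def srm_prepare_to_get(self, surl: str) -> str:
            """SRM interface: negotiate access, return a transfer URL."""

        @abstractmethod
        def open(self, path: str, mode: str = "rb"):
            """Direct file access interface (e.g. dCap-style posix reads)."""

        @abstractmethod
        def status(self) -> dict:
            """Monitoring interface: report load, capacity, queue depth."""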

Lattice Gauge Work Plan
- Support the Fermilab lattice computation facility.
- Use GridFTP and SRM to integrate with the JLAB (cluster) and BNL (QCDOC) facilities; a sketch of such an SRM copy follows below.
- Understand the relationship of the storage systems to QIO (the community I/O package).
- Investigate the utility of caching data at FNAL.
- Investigate symmetries with earth-science-type systems.
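An inter-site SRM copy of lattice data might look like the following. This is a sketch assuming the srmcp client is available; the site URL and dataset name are invented for illustration.

    # Illustrative SRM copy, e.g. fetching a gauge configuration from an
    # FNAL SRM endpoint to local disk. Endpoint and paths are hypothetical.
    import subprocess

    def srm_copy(src_surl: str, dst_url: str) -> None:
        """Site-to-site or site-to-local copy via the srmcp client."""
        subprocess.run(["srmcp", src_surl, dst_url], check=True)

    srm_copy("srm://fndca.fnal.gov:8443/pnfs/fnal.gov/usr/lqcd/cfg_0001",
             "file:///lqcd/scratch/cfg_0001")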

Other Work Elements
- Work on the GridFTP V1.1 specification:
  – including work on a few scaling issues for "sea of Linux boxes" type systems.
- Work on the SRM protocol and its future versions:
  – V2.1 (FNAL is a co-author).
- Look at "object-based storage" in conjunction with US CMS:
  – the proposed investigation is ROOT integration.
- Understand the requirements of analysis systems.

Other Work Elements (continued)
- Support Grid protocols for dFarm:
  – make it a "storage element",
  – another innovative package.
- Experiment with Grid authentication (see the proxy sketch below).
- Prepare for special routing on the WAN, e.g. lambda networking.
- Support the experimental community at FNAL:
  – Auger: GridFTP in production,
  – MINOS: Kerberized, weak FTP.
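The Grid authentication experiments rest on the usual GSI proxy workflow. A minimal sketch, assuming the Globus grid-proxy-init client is installed:

    # Create a short-lived GSI proxy before any GridFTP or SRM transfer.
    # The 12-hour lifetime is an arbitrary illustrative choice.
    import subprocess

    subprocess.run(["grid-proxy-init", "-valid", "12:00"], check=True)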

Advanced Monitoring
- Animations were crucial for debugging the Run II LAN-based data systems.
- Looking for analogous tools for Grid-based work.
- Following items such as GLUE.

Advanced Network Integration
- FNAL proposes to have two paths off site.
- The path to StarLight will allow R&D on advanced network concepts (e.g. lambda networking).
- The central data movement systems will be an early user.

Summary
- Very substantial and successful support of the FNAL program.
- Very proactive work on the fabric side.
- Data in movement are interesting.
- Goal: routine, performant flows on LAN and WAN, with interoperation.