INFN TIER1 Status – Federico Ruggieri, INFN-CNAF – GDB Meeting, 10 February 2004

Infrastructures
– 1250 kVA power generator with a 5,000 l oil tank, to protect against power cuts.
– 800 kVA Uninterruptible Power Supply, with batteries lasting 10 minutes at nominal power.
– 570 kW cooling system for the computing room.
– GARR Giga-PoP with multiple 2.5 Gbps backbone lines and 1 Gbps Wide Area Network access.
– Unattended computing room with:
  – racks with remotely controlled (TCP/IP) power switches (see the sketch below);
  – remote console control with analog and digital (TCP/IP) KVM systems.
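Remotely controlled power switches are what make an unattended computing room practical. As a purely hypothetical illustration (the slides do not name the PDU model; the protocol, port and commands below are invented, and real rack PDUs typically expose SNMP, telnet or a web interface), a minimal Python client for a PDU accepting plain-text commands over TCP/IP might look like:

    import socket

    def set_outlet(pdu_host: str, outlet: int, on: bool, port: int = 2300) -> str:
        """Send a hypothetical ON/OFF command for one outlet and return the reply."""
        cmd = f"{'ON' if on else 'OFF'} {outlet}\n"
        with socket.create_connection((pdu_host, port), timeout=5) as s:
            s.sendall(cmd.encode("ascii"))
            return s.recv(128).decode("ascii").strip()

    # Example (hypothetical host): power-cycle outlet 7 of the PDU in rack 12.
    # print(set_outlet("pdu-rack12.example", 7, on=False))
    # print(set_outlet("pdu-rack12.example", 7, on=True))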

Personnel
Staff:
– Pietro Matteuzzi: CNAF Computing Services responsible.
– Luca Dell’Agnello: LCG contact; Linux systems, file systems, security, networks.
– Andrea Chierici: LCFG, system manager & Grid site manager.
– Alessandro Italiano: system manager & Grid site manager.
– Pier Paolo Ricci: CASTOR, storage, robotics, Grid (RLS).
– Stefano Zani: storage & networks.
– Donato De Girolamo: system management, alarm monitoring & security.
– Barbara Martelli: SW development, DB & Grid (RLS).
– Felice Rosso: system management, monitoring & LCFG.
– Daniele Bonaccorsi: SW support for experiments (CMS).
– Giuseppe Lo Re: SW support for experiments (ALICE) / CASTOR.
– Guido Negri: SW support for experiments (ATLAS).
– Massimo Cinque: general services.

Current HW resources
Farm (total ≈ 608 KSI2K, plus 48 KSI2K for CDF and 30 KSI2K for LHCb):
– 14 dual-processor 800 MHz nodes: 10K SPECint2000
– 55 dual-processor … MHz nodes: 55K SPECint2000
– … dual-processor 1400 MHz nodes: 100K SPECint2000
– … dual-processor … MHz nodes: 44K SPECint2000
– … dual-processor … MHz nodes: 15K SPECint2000
– … dual-processor 2400 MHz nodes: 384K SPECint2000
Disks (67 TB in total):
– NAS FC (Procom): 17 TB raw, RAID5
– NAS ATA: 2.5 TB raw, RAID5
– SCSI disks: 2 TB raw, RAID5
– FC disks: 10.5 TB raw, RAID5
– FC/ATA disks: 28 TB raw, RAID5
Tapes (115 TB native / 230 TB compressed in total):
– Robot STK L180: … TB / 8 TB (compressed)
– 4 LTO drives: 15 TB / 30 TB (compressed)
– Robot STK L5500 with 6 LTO-2 drives: 100 TB / 200 TB (compressed)
Network (672 FE UTP, 48 GE UTP and 132 GE FO ports in total):
– 14 switches (1/2U): 48 FE UTP + 2 GE FO each
– 2 switches (1.5U): 24 GE UTP + 4 GE FO each
– 1 core switch: 32 GE FO
– 1 core switch: 64 GE FO
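A quick consistency check of the totals above, as a sketch using only the per-row figures that survived in the transcript (the 608 KSI2K farm total is simply the sum of the listed rows):

    # Farm capacity: sum of the per-row SPECint2000 figures (in K).
    farm_ksi2k = [10, 55, 100, 44, 15, 384]
    print(sum(farm_ksi2k), "KSI2K")          # -> 608

    # Network ports: 14 edge switches (48 FE UTP + 2 GE FO each),
    # 2 switches (24 GE UTP + 4 GE FO each), core switches with
    # 32 and 64 GE FO ports.
    fe_utp = 14 * 48
    ge_utp = 2 * 24
    ge_fo = 14 * 2 + 2 * 4 + 32 + 64
    print(fe_utp, "FE UTP /", ge_utp, "GE UTP /", ge_fo, "GE FO")  # 672 / 48 / 132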

New resources from May ’04
– … 1U dual-processor 3 GHz servers: > 700 KSI2K.
– 150 TB of FC/SATA disk (3 IBM FAStT900 systems).
– 800 LTO-2 tapes of 200 GB each (native).
– New core switch: > 128 Gbit ports + 10 Gbit.

CPU Configuration
– 50% of the resources statically assigned to the experiments.
– 50% in common PBS queues scheduled with MAUI (see the fair-share sketch below).
– Almost all resources are accessible via local PBS and, through the CE, via the Grid.
– Currently configured with LCG-1 in production plus an LCG-2 test instance; LCG-1 will be dropped as soon as the final LCG-2 version is available.
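To make the shared half of the farm concrete: MAUI ranks queued jobs by a fair-share priority that favours experiments currently below their usage target. A minimal Python sketch of the idea (not the site's actual MAUI configuration; the targets, weight and usage figures are invented for the example):

    # Hypothetical fair-share targets for the shared 50% of the farm.
    FS_TARGET = {"alice": 0.25, "atlas": 0.25, "cms": 0.25, "lhcb": 0.25}
    FS_WEIGHT = 100.0  # scheduler knob: how strongly recent history matters

    def priority(exp: str, recent_usage: dict, queue_time_s: float) -> float:
        """Rank a queued job: under-served experiments and older jobs first."""
        fs_term = FS_WEIGHT * (FS_TARGET[exp] - recent_usage.get(exp, 0.0))
        return fs_term + queue_time_s / 3600.0  # mild aging term

    usage = {"alice": 0.40, "atlas": 0.10, "cms": 0.30, "lhcb": 0.20}
    jobs = [("alice", 7200.0), ("atlas", 600.0), ("lhcb", 3600.0)]
    for exp, waited in sorted(jobs, key=lambda j: -priority(j[0], usage, j[1])):
        print(f"run next: {exp} (waited {waited / 3600:.1f} h)")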

Storage Configuration
– Mass storage via CASTOR with LTO drives and staging disks (2 TB, now being migrated to 20 TB), plus an SRM interface.
– Disk pools with dCache on request (CDF and CMS), using the second disk of the WNs.
– RLS with an Oracle back-end (a conceptual sketch follows below).
– NFS access to home directories and experiment SW.
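Conceptually, the RLS is a catalogue mapping logical file names (LFNs) to one or more physical replicas; the production service used an Oracle back-end. A toy in-memory stand-in (all class names, LFNs and endpoints below are invented for illustration) shows the interface:

    from collections import defaultdict

    class ToyReplicaCatalog:
        """In-memory stand-in for the Oracle-backed RLS catalogue."""
        def __init__(self):
            self._replicas = defaultdict(set)  # LFN -> set of PFNs

        def register(self, lfn: str, pfn: str) -> None:
            self._replicas[lfn].add(pfn)

        def lookup(self, lfn: str) -> set:
            return self._replicas[lfn]

    cat = ToyReplicaCatalog()
    cat.register("lfn:/grid/cms/run123/hits.root",
                 "castor://castor.cnaf.infn.it/cms/run123/hits.root")
    cat.register("lfn:/grid/cms/run123/hits.root",
                 "dcache://pool1.cnaf.infn.it/cms/run123/hits.root")
    print(cat.lookup("lfn:/grid/cms/run123/hits.root"))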

Network
– 1 Gb/s access for production traffic and general Internet usage.
– 1 Gb/s experimental link to CERN (DataTAG).
– 1 Gb/s experimental link to FZK (GARR–GEANT–DFN tests under way).
A rough throughput estimate for these links is sketched below.
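For scale, a back-of-the-envelope estimate (not from the slides) of what a fully used 1 Gb/s link can move, ignoring protocol overhead:

    # Time to move one dataset at full wire speed on a 1 Gb/s link.
    link_gbps = 1.0
    dataset_tb = 1.0
    seconds = dataset_tb * 8e12 / (link_gbps * 1e9)
    print(f"{dataset_tb} TB over {link_gbps} Gb/s: {seconds / 3600:.1f} h")  # ~2.2 h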

Conclusions
– The INFN Tier1 is active and already heavily used by the experiments.
– LCG-2 is installed.
– The Tier1 services are integrated with the production Grid (LCG, Grid-IT).
– A large amount of new resources will become available to the experiments in May.