IN2P3 Status Report - HTASC, 14 March 2003 - Fabio HERNANDEZ et al. (CC-IN2P3), François ETIENNE

Outline
- User community
- Update on computing services
- Update on storage services
- Network status
- Grid status

IN2P3 current context
- 18 labs
- 1 Computer Centre
- 2500 users
- 40 experiments
- CC-IN2P3 to CERN connection: 155 Mb/s in 2000, 1 Gb/s in 2003
- CC-IN2P3 to SLAC connection: 30 Mb/s in 2000, upgraded in steps through 2003

RENATER current context
- Deployed: October 2002
- More grid-shaped than star-shaped
- Most links = 2.4 Gbps
- Still 2 main nodes: Paris, Lyon

User community
- Experiments: LHC (ATLAS, CMS, ALICE, LHCb), BaBar (SLAC), D0 (FNAL), PHENIX (Brookhaven), astrophysics (17 experiments: EROS, SuperNovae, Auger, Virgo, ...)
- 2500 users from different countries
- Tier A for BaBar
- 20% of the CPU power was consumed by non-French users in 2002
- Starting to provide services to biologists at a local/regional level (4 teams and ~3% of CPU over the last 6 months, EDG WP10, Heaven cluster)
- User community steadily growing

Experiments CPU requests for 2003 (CPU expressed in UI; 1 UI ~ 5 SI-95)
- Requesting experiments: Aleph, Alice, AMS, Antares, Archeops, Atlas, Auger, BaBar, CLAS, CMB, CMS, D0, Delphi, Edelweiss, EROS, EUSO, GLAST, H1, HESS, INDRA, biology (several teams), LHCb, NA.., NA.., Nemo, NGS-Opera, PHENIX, Planck-S, Siren, SNovae, STAR, Tesla, Thémis, Virgo, WA..
- Total for the experiments above: ~300 Mh SI-95 of CPU (converted to UI hours in the sketch below)
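The requests are quoted in UI units with the stated conversion 1 UI ~ 5 SI-95. A minimal sketch of that conversion applied to the quoted total; the ~60 million UI hours printed here is derived from the two figures above, not a number taken from the slide.

```python
# Unit conversion stated on the slide: 1 UI unit ~ 5 SI-95 units.
SI95_PER_UI = 5.0

def si95_hours_to_ui_hours(si95_hours: float) -> float:
    """Convert CPU time expressed in SI-95 hours into UI hours."""
    return si95_hours / SI95_PER_UI

total_si95_mh = 300.0  # ~300 million SI-95 hours requested in total (quoted above)
print(f"~{si95_hours_to_ui_hours(total_si95_mh):.0f} million UI hours")  # ~60 million UI hours
```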

Computing Services
- Supported platforms: Linux, SunOS, AIX
- Dropped support for HP-UX
- Currently migrating to RedHat Linux 7.2 and SunOS 5.8; waiting for the remaining users and EDG to drop support for RedHat 6.2
- More CPU power added over the last six months:
  - 72 dual-processor Intel Pentium 1.4 GHz, 2 GB RAM, 120 GB disk (November)
  - 192 dual-processor Intel Pentium 2.4 GHz, 2 GB RAM (February)
- Today's computing capacity (batch + interactive): Linux 920 CPUs, SunOS 62 CPUs, AIX 70 CPUs; total > 1000 CPUs (quick check below)
- Worker node storage capacity is used for temporary data (reset after job execution)
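A quick arithmetic check of the capacity figures quoted above, using only the per-platform counts from this slide.

```python
# Per-platform CPU counts quoted on the slide (March 2003).
cpus = {"Linux": 920, "SunOS": 62, "AIX": 70}

total = sum(cpus.values())
print(f"Total batch + interactive CPUs: {total}")  # 1052, i.e. > 1000
```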

Storage Services
- Extensive use of AFS for user and group files
- HPSS and a staging system for physics data
- Mix of several platforms/protocols: SunOS, AIX, Tru64; SCSI, FibreChannel; AFS, NFS, RFIO
- Shared disk capacity (IBM, Hitachi, Sun): ~50 TB
- AFS: user home directories, code, programs and some experimental data
- Xtage: temporary disk system for data held on tape

Storage Services (cont.)
- Mass storage (HPSS): 250 TB now, 500 TB expected in December 2003
  - Installed capacity on tape: 700 TB
  - Up to 8.8 TB/day
  - Originally purchased for BaBar but now used by most experiments
  - BaBar Objectivity: 130 TB with 25 TB of disk cache; others: 120 TB with 4.4 TB
  - STK 9840 (20 GB tapes, fast mount) and STK 9940 (200 GB tapes, slower mount, higher I/O)
  - Accessed by RFIO, mainly rfcp; supports files larger than 2 GB
  - Direct HPSS access from the network through bbFTP (usage sketched below)
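HPSS data is reached through RFIO, mainly with rfcp, and from outside through bbFTP. Below is a hedged sketch of how a job might stage a file out of HPSS with rfcp, assuming the usual "rfcp <source> <destination>" calling convention; the HPSS path is a hypothetical placeholder, not an actual CC-IN2P3 namespace.

```python
# Hedged sketch: stage a file from HPSS to local scratch with rfcp (RFIO copy).
# Assumes rfcp is in PATH and accepts "rfcp <source> <destination>";
# the HPSS path below is purely illustrative.
import subprocess
import sys

def stage_from_hpss(hpss_path: str, local_path: str) -> None:
    """Copy one file out of HPSS using the RFIO command-line client."""
    result = subprocess.run(["rfcp", hpss_path, local_path])
    if result.returncode != 0:
        sys.exit(f"rfcp failed with exit code {result.returncode}")

if __name__ == "__main__":
    # Hypothetical experiment file; real namespaces are defined per experiment.
    stage_from_hpss("/hpss/in2p3.fr/group/babar/run2002/events.root",
                    "/scratch/events.root")
```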

Storage Services (cont.)
- Semi-permanent storage
  - Suited for small files (which degrade HPSS performance)
  - Access with NFS or the RFIO API
  - Back-up possible for experiments for which CC-IN2P3 is the « base site » (Auger, Antares)
  - Working on transparent RFIO access
- Back-up and archive: TSM (Tivoli Storage Manager)
  - For home directories, critical experimental data, HPSS metadata, Oracle data
  - TSM allows data archival (Elliot)
  - For back-up of external data (e.g. IN2P3 administrative data, data from biology labs, etc.)

Storage Services (cont.)
- Disks
  - AFS: 4 TB
  - HPSS: 4.4 TB
  - Objectivity: 25 TB
  - Oracle: 0.4 TB
  - Xstage: 1.2 TB
  - Semi-permanent: 1.9 TB
  - TSM: 0.3 TB
  - Local: 10 TB (all disk pools summed below)
- Tapes
  - 1 STK robot: 6 silos
  - 12 STK 9940B drives, 200 GB/tape (7 HPSS, 3 TSM, 2 others)
  - 35 STK 9840 drives, 20 GB/tape (28 HPSS, 4 TSM, 3 others)
  - 8 IBM 3490 drives, 0.8 GB/tape (service will stop by end of 2003)
  - 1 DLT robot: 400 slots, 6 DLT 7000 drives
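The per-service disk allocations can be cross-checked against the "~50 TB of shared disk" figure on the earlier storage slide; a small sketch of that sum, with the values copied from the list above.

```python
# Disk allocations (TB) as listed on this slide.
disk_tb = {
    "AFS": 4.0, "HPSS": 4.4, "Objectivity": 25.0, "Oracle": 0.4,
    "Xstage": 1.2, "Semi-permanent": 1.9, "TSM": 0.3, "Local": 10.0,
}

total = sum(disk_tb.values())
print(f"Total allocated disk: {total:.1f} TB")  # 47.2 TB, consistent with "~50 TB"
```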

Network
- International connectivity through:
  - RENATER + GEANT to the US (600 Mbps via ESnet and Abilene in New York) and to Europe
  - CERN to the US as an alternate (600 Mbps)
- BaBar is using both links to the US for transferring data between SLAC and Lyon
  - Specific software developed for "filling the pipe" (bbFTP), extensively used by BaBar and D0, amongst others (see the sketch below)
- Dedicated 1 Gb/s link between Lyon and CERN since January 2003
- LAN is a mixture of Fast Ethernet and Gigabit Ethernet; ubiquitous wireless service
- Connectivity to the other IN2P3 laboratories across the country via RENATER-3 (the French academic and research network, 2.4 Gbps links); all labs have a private connection to RENATER POPs
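bbFTP "fills the pipe" by opening several parallel TCP streams per transfer. Below is a hedged sketch of driving it from a script, assuming the commonly documented flags -p (number of parallel streams), -u (remote user) and -e (transfer command); the host name and paths are placeholders, not the actual SLAC or Lyon endpoints.

```python
# Hedged sketch: invoke bbftp with multiple parallel TCP streams.
# Flag meanings assumed from common bbFTP usage (-p streams, -u user,
# -e transfer command); host and paths are placeholders.
import subprocess

def bbftp_put(local_path: str, remote_path: str, host: str,
              user: str, streams: int = 10) -> int:
    """Push one file to a remote bbftpd server over several parallel streams."""
    cmd = [
        "bbftp",
        "-u", user,
        "-p", str(streams),                     # parallel TCP streams ("filling the pipe")
        "-e", f"put {local_path} {remote_path}",
        host,
    ]
    return subprocess.run(cmd).returncode

if __name__ == "__main__":
    # Placeholder endpoint: not a real SLAC or CC-IN2P3 host name.
    rc = bbftp_put("/scratch/events.root", "/data/events.root",
                   host="bbftp.example.org", user="babar")
    print("bbftp exit code:", rc)
```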

Grid-related activities
- Fully involved in the DataGRID project and partly in DataTAG (INRIA)
- One of the 5 major testbed sites
- Currently all of the "conventional" production environment is accessible through the grid interface
  - Jobs submitted to the grid are managed by BQS, the home-grown batch management system (a submission sketch follows this list)
  - Grid jobs can use the same pool of resources as normal jobs (~1000 CPUs)
  - Access to mass storage (HPSS) from remote sites enabled through bbFTP
- Benefits:
  - Tests of DataGRID software in a production environment
  - Scalability tests can be performed
  - Users access exactly the same working environment and data whatever interface they choose to access our facility
  - Operational issues detected early
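Grid jobs arrive through the DataGRID interface and are then executed under BQS like locally submitted work. Below is a hedged sketch of what an EDG-era submission could look like, assuming the EDG dg-job-submit command and standard JDL attributes (Executable, Arguments, InputSandbox, OutputSandbox); the script and file names are illustrative, not an actual CC-IN2P3 workflow.

```python
# Hedged sketch: write a minimal EDG JDL file and submit it with dg-job-submit.
# Attribute names follow standard EDG JDL; all file names are placeholders.
import subprocess
from pathlib import Path

JDL = """\
Executable    = "analysis.sh";
Arguments     = "run2002.cfg";
StdOutput     = "analysis.out";
StdError      = "analysis.err";
InputSandbox  = {"analysis.sh", "run2002.cfg"};
OutputSandbox = {"analysis.out", "analysis.err"};
"""

def submit(jdl_text: str, jdl_path: str = "analysis.jdl") -> int:
    """Write the JDL to disk and hand it to the EDG submission command."""
    Path(jdl_path).write_text(jdl_text)
    # The broker then dispatches the job; at CC-IN2P3 it runs under BQS
    # alongside conventionally submitted jobs.
    return subprocess.run(["dg-job-submit", jdl_path]).returncode

if __name__ == "__main__":
    print("dg-job-submit exit code:", submit(JDL))
```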

Grid-related activities (cont.)
- Disadvantages:
  - Local resources needed for integration with the production environment (AFS, BQS, ...); more work needed to achieve a seamless integration between the local and grid worlds
  - Users want us to provide a grid service: how do we provide a service around a "moving target" software project?
- Some experiments are already using the grid interface for "semi-production"; others have expressed interest in using it as soon as it gets more stable
- Starting in March 2003, the resource broker and associated services for the DataGRID Applications and Development testbeds will be hosted and operated in Lyon

Grid-related activities (cont.)
- Involved in several other grid projects at regional and national levels
- Cooperation agreement signed with IBM to work on grid technology:
  - Exchange of experience
  - Grid technology evaluation
  - Experiments with this technology in a production environment
  - Exploration of technologies for storage virtualization
  - ...

CNRS - IN2P3
- Coordination of:
  - WP6 Integration Testbed
  - WP7 Networking
  - WP10 Bioinformatics
- Laboratories involved:
  - IPSL: Earth observation (Paris)
  - BBE: Bioinformatics (Lyon)
  - CREATIS: Imaging and signal processing (Lyon)
  - RESAM: High-speed networking (Lyon)
  - LIP: Parallel computing (Lyon)
  - IBCP: Bioinformatics (Lyon)
  - UREC: Networking (Paris, Grenoble)
  - LIMOS: Bioinformatics (Clermont-Ferrand)
  - LBP: Bioinformatics (Clermont-Ferrand)
  - LPC: IN2P3 (Clermont-Ferrand)
  - LAL: IN2P3 (Paris)
  - Subatech: IN2P3 (Nantes)
  - LLR-X: IN2P3 (Paris)
  - ISN: IN2P3 (Grenoble)
  - CC-IN2P3: IN2P3 (Lyon)
  - LPNHE: IN2P3 (Paris)
  - CPPM: IN2P3 (Marseille)
  - LAPP: IN2P3 (Annecy)