CASTOR: CERN’s data management system

Data and storage management workshop, 17/3/2003
Ben Couturier, Jean-Damien Durand, CERN IT-ADC

Introduction
- CERN Advanced STORage Manager
  - Hierarchical Storage Manager used to store user and physics files
  - Manages the secondary and tertiary storage
- History
  - Development started in 1999, based on SHIFT, CERN's tape and disk management system since the beginning of the 1990s (SHIFT was awarded the 21st Century Achievement Award by Computerworld in 2001)
  - In production since the beginning of 2001
  - Currently holds more than 9 million files and 2000 TB of data

Main Characteristics (1)
- CASTOR namespace
  - All files belong to the “/castor” hierarchy
  - Access rights are standard UNIX rights
- POSIX interface
  - Files are accessible through a standard POSIX interface; all calls are rfio_xxx (e.g. rfio_open, rfio_close, ...); a minimal usage sketch follows this slide
- RFIO protocol
  - All remote file access is done using the Remote File IO (RFIO) protocol, developed at CERN
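The slide above describes RFIO's POSIX-style calls. Below is a minimal, hedged sketch of what reading a CASTOR file through that API typically looks like. It assumes the CASTOR/RFIO client library is installed and that its header is available as rfio_api.h (the exact header name can vary between releases); the /castor path is made up for the example, and error handling is kept to the bare minimum.

    /* Minimal sketch: read a CASTOR file through the RFIO POSIX-style API.
     * Assumes the RFIO client library is installed; the header name and the
     * /castor path below are assumptions made for this example. */
    #include <stdio.h>
    #include <fcntl.h>
    #include "rfio_api.h"

    int main(void) {
        char buf[65536];
        int n;

        /* rfio_open mirrors open(2); the path lives in the CASTOR namespace */
        int fd = rfio_open("/castor/cern.ch/user/b/bob/example.dat", O_RDONLY, 0);
        if (fd < 0) {
            rfio_perror("rfio_open");
            return 1;
        }
        /* rfio_read mirrors read(2); here the data is simply streamed to stdout */
        while ((n = rfio_read(fd, buf, sizeof(buf))) > 0)
            fwrite(buf, 1, n, stdout);

        rfio_close(fd);
        return 0;
    }

Because the rfio_xxx calls mirror their POSIX counterparts call-for-call, client code that already uses open/read/close needs little more than a renamed call to reach files on remote disk servers.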

Main Characteristics (2)
- Modularity
  - The CASTOR components have well-defined roles and interfaces; a component can be changed without affecting the whole system
- Highly distributed system
  - CERN uses a very distributed configuration with many disk servers and tape servers; CASTOR can also run in a more limited environment
- Scalability
  - The number of disk servers, tape servers, name servers, etc. is not limited
  - An RDBMS (Oracle, MySQL) is used to improve the scalability of some critical components

Main Characteristics (3)
- Tape drive sharing
  - A large number of drives can be shared between users or dedicated to specific users/experiments
  - Drives can be shared with other applications (with TSM, for example)
- High-performance tape mover
  - Uses threads and circular buffers
  - Overlaps device and network I/O (a toy illustration follows this slide)
- Grid interfaces
  - A GridFTP daemon interfaced with CASTOR is currently in test
  - An SRM interface (v1.0) for CASTOR has been developed
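The "threads and circular buffers" point is the classic producer/consumer pattern: one thread keeps the network (or disk) side busy while another keeps the tape device streaming, so neither waits for the other. The sketch below only illustrates that pattern and is not the CASTOR RTCOPY code: stdin stands in for the incoming data stream, stdout for the tape device, and NBUF/BUFSZ are arbitrary values chosen for the example.

    /* Illustrative producer/consumer ring buffer overlapping "network" reads
     * with "device" writes.  NOT the CASTOR RTCOPY implementation: stdin and
     * stdout stand in for the data stream and the tape device. */
    #include <pthread.h>
    #include <unistd.h>

    #define NBUF  4              /* slots in the circular buffer (assumption) */
    #define BUFSZ (256 * 1024)   /* size of each slot (assumption) */

    static char slots[NBUF][BUFSZ];
    static int  lens[NBUF];
    static int  head = 0, tail = 0, count = 0;
    static pthread_mutex_t mtx       = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
    static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

    /* producer: fill free slots from the incoming stream */
    static void *stream_reader(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&mtx);
            while (count == NBUF)                 /* wait for a free slot */
                pthread_cond_wait(&not_full, &mtx);
            pthread_mutex_unlock(&mtx);

            int n = read(0, slots[head], BUFSZ);  /* stand-in for the network read */

            pthread_mutex_lock(&mtx);
            lens[head] = n;
            head = (head + 1) % NBUF;
            count++;
            pthread_cond_signal(&not_empty);
            pthread_mutex_unlock(&mtx);
            if (n <= 0)                           /* EOF or error: stop producing */
                break;
        }
        return NULL;
    }

    /* consumer: drain full slots to the device */
    static void *device_writer(void *arg) {
        (void)arg;
        for (;;) {
            pthread_mutex_lock(&mtx);
            while (count == 0)                    /* wait for a full slot */
                pthread_cond_wait(&not_empty, &mtx);
            int n = lens[tail];
            pthread_mutex_unlock(&mtx);

            if (n > 0)
                write(1, slots[tail], n);         /* stand-in for the tape write */

            pthread_mutex_lock(&mtx);
            tail = (tail + 1) % NBUF;
            count--;
            pthread_cond_signal(&not_full);
            pthread_mutex_unlock(&mtx);
            if (n <= 0)                           /* terminal slot: stop consuming */
                break;
        }
        return NULL;
    }

    int main(void) {
        pthread_t reader, writer;
        pthread_create(&reader, NULL, stream_reader, NULL);
        pthread_create(&writer, NULL, device_writer, NULL);
        pthread_join(reader, NULL);
        pthread_join(writer, NULL);
        return 0;
    }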

Hardware Compatibility
- CASTOR runs on: Linux, Solaris, AIX, HP-UX, Digital UNIX, IRIX
  - The clients and some of the servers run on Windows NT/2K
- Supported drives
  - DLT/SDLT, LTO, IBM 3590, STK 9840, STK 9940A/B (and old drives already supported by SHIFT)
- Libraries
  - SCSI libraries, ADIC Scalar, IBM 3494, IBM 3584, Odetics, Sony DMS24, STK Powderhorn

CASTOR Components
- Central servers
  - Name Server (an example query against it follows this slide)
  - Volume Manager
  - Volume and Drive Queue Manager (manages the volume and drive queues per device group)
  - UPV (authorization daemon)
- “Disk” subsystem
  - RFIO (Disk Mover)
  - Stager (Disk Pool Manager and Hierarchical Resource Manager)
- “Tape” subsystem
  - RTCOPY daemon (Tape Mover)
  - Tpdaemon (PVR)
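As an example of how a client talks to one of these central servers, the sketch below asks the name server for the metadata of a file in the /castor namespace. Treat it as a sketch only: the header name (Cns_api.h), the Cns_filestat field names and the /castor path are assumptions about the CASTOR client API and may differ between versions.

    /* Hedged sketch: query the CASTOR name server for file metadata.
     * Header name, struct fields and the path are assumptions for this example. */
    #include <stdio.h>
    #include "Cns_api.h"

    int main(void) {
        struct Cns_filestat st;
        const char *path = "/castor/cern.ch/user/b/bob/example.dat";  /* hypothetical */

        if (Cns_stat(path, &st) < 0) {        /* ask the name server for metadata */
            fprintf(stderr, "Cns_stat failed for %s\n", path);
            return 1;
        }
        printf("%s: fileid=%llu size=%llu bytes\n", path,
               (unsigned long long)st.fileid,
               (unsigned long long)st.filesize);
        return 0;
    }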

CASTOR Architecture
[Architecture diagram showing the RFIO client, the central servers (CUPV, VDQM server, name server, volume manager, MSGD), the stager, RTCPD (tape mover), the tpdaemon (PVR), RFIOD (disk mover) and the disk pool.]

CASTOR Setup at CERN
- Disk servers
  - ~140 disk servers
  - ~70 TB of staging pools
  - ~40 stagers
- Tape drives and servers

    Model     Drives  Servers
    9940B        21       20
    9940A        28       10
    9840         15        5
    3590          4        2
    DLT7000       6        -
    LTO           3        -
    SDLT          1        -

  ("-": server count not given on the slide)
- Libraries
  - 2 sets of 5 Powderhorn silos (2 x 27500 cartridges)
  - 1 Timberwolf (1 x 600 cartridges)
  - 1 L700 (1 x 600 cartridges)

Evolution of Data in CASTOR
[Chart: evolution of the amount of data stored in CASTOR over time.]

Tape Mounts per group
[Chart: number of tape mounts broken down by user group.]

Tape Mounts per drive type
[Chart: number of tape mounts broken down by drive type.]

ALICE Data Challenge
- Migration rate of 300 MB/s sustained for a week (see the estimate below)
- Using 18 STK T9940B drives
- ~20 disk servers managed by 1 stager
- A separate name server was used for the data challenge
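For scale, a rough back-of-the-envelope estimate, assuming the 300 MB/s is the aggregate rate across all 18 drives (the slide does not state this explicitly):

\[
300~\mathrm{MB/s} \times 7 \times 86\,400~\mathrm{s} \approx 1.8 \times 10^{8}~\mathrm{MB} \approx 180~\mathrm{TB},
\qquad
\frac{300~\mathrm{MB/s}}{18~\text{drives}} \approx 17~\mathrm{MB/s}~\text{per drive}.
\]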
