CASTOR Project Status, CERN IT-PDP/DM, February 2000


1 CASTOR Project Status, CERN IT-PDP/DM, February 2000

2 CASTOR project status/CHEP2000, Agenda: • CASTOR objectives • CASTOR components • Current status • Early tests • Possible enhancements • Conclusion

3 CASTOR • CASTOR stands for “CERN Advanced Storage Manager” • Evolution of SHIFT • Short-term goal: handle NA48 data (25 MB/s) and COMPASS data (35 MB/s) in a fully distributed environment • Long-term goal: prototype for the software to be used to handle LHC data • Development started in January 1999 • CASTOR is being put in production at CERN • See: http://wwwinfo.cern.ch/pdp/castor

4 CASTOR objectives • CASTOR is a disk pool manager coupled with a backend store which provides indirect access to tapes and HSM functionality • Major objectives: high performance, good scalability, easy to clone and deploy, and high modularity, so that components can easily be replaced and commercial products integrated • Focused on HEP requirements • Available on most Unix systems and Windows/NT

5 CASTOR components • Client applications use the stager and RFIO • The backend store consists of: RFIOD (disk mover), name server, volume manager, volume and drive queue manager (VDQM), RTCOPY daemon + RTCPD (tape mover), and Tpdaemon (PVR) • Main characteristics of the servers: distributed, critical servers are replicated, and they use the CASTOR Database (Cdb) or commercial databases such as Raima and Oracle

6 CASTOR layout [diagram: stager, RFIOD (disk mover), Tpdaemon (PVR), MSGD, disk pool, TMS, name server, volume manager, RTCOPY, VDQM server, RTCPD (tape mover)]

7 Basic Hierarchical Storage Manager (HSM) • Automatic tape volume allocation • Explicit migration/recall by user • Automatic migration by disk pool manager
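The slides do not spell out how automatic migration decides what to move; a common policy for a disk pool manager is watermark-driven selection. The sketch below is illustrative only (the function name, thresholds, and data layout are assumptions, not CASTOR's actual interface): once pool occupancy crosses a high watermark, least-recently-used files are picked for migration to tape until occupancy would fall back below a low watermark.

```python
# Minimal sketch of watermark-driven automatic migration for a disk pool.
# All names and thresholds are illustrative, not CASTOR's real interface.

HIGH_WATERMARK = 0.90   # start migrating when the pool is 90% full
LOW_WATERMARK = 0.75    # stop once occupancy would drop to 75%

def select_migration_candidates(files, capacity):
    """Pick least-recently-used files to copy to tape until the pool
    would fall below the low watermark.  `files` is a list of
    (name, size, last_access_ts) tuples; `capacity` is the pool size."""
    used = sum(size for _, size, _ in files)
    if used / capacity < HIGH_WATERMARK:
        return []                       # nothing to do yet
    target = LOW_WATERMARK * capacity
    candidates = []
    for name, size, _ in sorted(files, key=lambda f: f[2]):  # oldest first
        if used <= target:
            break
        candidates.append(name)
        used -= size
    return candidates

pool = [("run1.raw", 40, 100), ("run2.raw", 30, 200), ("run3.raw", 25, 300)]
print(select_migration_candidates(pool, capacity=100))  # -> ['run1.raw']
```

A real HSM would also weight candidates by size and class of service, and would free the disk copy only after the tape copy is safe; this covers just the selection step.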

8 Current status • Development complete • New stager with Cdb in production for DELPHI • Mover and HSM being extensively tested

9 Early tests • RTCOPY • Name server • ALICE Data Challenge

10 Hardware configuration for RTCOPY tests (1) [diagram: SUN E450 with SCSI disks (striped FS, ~30 MB/s), Linux PCs, STK Redwood, IBM 3590E, and STK 9840 drives]

11 RTCOPY test results (1) [chart]

12 Hardware configuration for RTCOPY tests (2) [diagram: SUN E450 with SCSI disks (striped FS, ~30 MB/s) and Linux PCs with EIDE disks (~14 MB/s), connected over Gigabit and 100BaseT Ethernet to STK Redwood and STK 9840 drives]

13 RTCOPY test results (2) • A short (1/2-hour) scalability test was run in a distributed environment: 5 disk servers, 3 tape servers, 9 drives • 120 GB transferred • 70 MB/s aggregate (mount time overhead included) • 90 MB/s aggregate (mount time overhead excluded) • This exceeds the COMPASS requirements and is just below the ATLAS/CMS requirements
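As a quick sanity check on the quoted figures (decimal megabytes assumed), 120 GB moved in the half-hour window works out to roughly the quoted 70 MB/s aggregate, and inverting the 90 MB/s figure implies around 22 minutes of pure transfer time once mounts are excluded:

```python
# Back-of-the-envelope check of the quoted aggregate rates (decimal MB).
volume_mb = 120 * 1000          # 120 GB transferred
wall_clock_s = 30 * 60          # the half-hour test window, mounts included

rate_incl_mounts = volume_mb / wall_clock_s
print(f"{rate_incl_mounts:.0f} MB/s including mount overhead")  # ~67 MB/s, quoted as ~70

# Inverting the quoted 90 MB/s gives the implied pure-transfer time:
transfer_s = volume_mb / 90
print(f"{transfer_s / 60:.0f} min of actual data movement")     # ~22 min
```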

14 Name server test results (1) [chart]

15 Name server test results (2) [chart]

16 ALICE Data Challenge [diagram: 10 × PowerPC 604 200 MHz 32 MB, 7 × PowerPC 604 200 MHz 32 MB, HP Kayak, 3COM Fast Ethernet switch, Gigabit switch, Smart Switch Router, 12 × Redwood drives, 4 × Linux tape servers, 12 × Linux disk servers]

17 Possible enhancements • RFIO client / name server interface • 64-bit support in RFIO (collaboration with IN2P3) • GUI and Web interface to monitor and administer CASTOR • Enhanced HSM functionality: transparent migration, intelligent disk space allocation, classes of service, automatic migration between media types, quotas, undelete and repack functions, import/export

18 Conclusion • 2 man-years of design and development • Easy deployment thanks to modularity and backward compatibility with SHIFT • Performance limited only by the hardware configuration • See: http://wwwinfo.cern.ch/pdp/castor

