The Mass Storage System at JLAB - Today and Tomorrow
Andy Kowalski

Jefferson Lab Farm and Mass Storage Systems 2000
[Diagram: data from the Hall A/C DAQ and CLAS DAQ flows to the work file servers, cache file servers, DST/cache file servers, tape servers, DB server, and the batch and interactive farm over FCAL (100 MByte), 100 Mbit and 1000 Mbit Ethernet, and SCSI links.]

Farm
- Total Batch Farm SPECint
  - Dual Processor Linux Systems
  - 4 - Dual Processor Solaris Systems
- 5 Interactive Farm Systems
  - 1 - Quad Processor Linux System
  - 2 - Dual Processor Linux Systems
  - 2 - Quad Processor Solaris Systems

Disk Storage
- MetaStor (Work) File Servers - Qty.
  - MB Memory
  - 100 Mbit Ethernet
  - Ultra Wide SCSI RAID Systems, RAID-5
  - Disk Space - 4 TB (FS3/FS4 - 1 TB, FS5/FS6 - 3 TB)
  - Serving NFS
  - User Managed Disk Space

Disk Storage cont.
- Linux (Cache) File Servers - Qty. 9
  - Dual 650/700 MHz Systems
  - 512 MB ECC Memory
  - Gigabit Ethernet
  - Mylex PCI RAID controller, RAID-0
  - Disk Space - 5.8 TB (Qty. 4 with 400 GB, Qty. 5 with 876 GB)
  - Linux (Red Hat 6.2) pre11-va1.1smp
  - Serving NFS
  - Automatically Managed Disk Space

Tape Storage
- StorageTek PowderHorn
  - tape mounts per hour
  - Holds 6000 tapes
- Redwood Tape Drives - Qty. 8
  - 50 GB tape capacity
  - 10 MB/second
- 9840 Tape Drives - Qty.
  - GB tape capacity
  - 10 MB/second

Tape Storage cont.
- Open Storage Manager (OSM)
  - Computer Associates dropped support January 2000
  - Not distributed
  - Two installations - mss1 and mss2
- TapeServer Front-End
  - Written in Java
  - Makes two OSM servers act as one
  - Stages data to/from disk from/to tape
  - A user interface
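
The slide does not say how the TapeServer front end decides which OSM installation serves a request, so the Java sketch below only illustrates the general idea of making two servers act as one: route each stage request to whichever back end currently looks least busy. The OsmServer and StageRequest classes, the queue-length policy, and every name here are hypothetical, not the JLAB code.

```java
import java.util.Arrays;
import java.util.Comparator;
import java.util.List;

// Illustrative sketch only: a front end that makes several tape servers
// appear as one by routing each request to the least-loaded back end.
// OsmServer and StageRequest are hypothetical stand-ins, not JLAB code.
public class TapeServerFrontEnd {

    static class StageRequest {
        final String stubPath;                       // e.g. a /mss/... file stub
        StageRequest(String stubPath) { this.stubPath = stubPath; }
    }

    static class OsmServer {
        final String name;
        private int queued;                          // pretend queue depth
        OsmServer(String name) { this.name = name; }
        int queueLength() { return queued; }
        void submit(StageRequest r) {
            queued++;
            System.out.println(name + " staging " + r.stubPath);
        }
    }

    private final List<OsmServer> backends;

    TapeServerFrontEnd(List<OsmServer> backends) { this.backends = backends; }

    // Pick the back end with the shortest queue and hand the request to it.
    void stage(StageRequest request) {
        OsmServer target = backends.stream()
                .min(Comparator.comparingInt(OsmServer::queueLength))
                .orElseThrow(() -> new IllegalStateException("no back ends"));
        target.submit(request);
    }

    public static void main(String[] args) {
        TapeServerFrontEnd fe = new TapeServerFrontEnd(
                Arrays.asList(new OsmServer("mss1"), new OsmServer("mss2")));
        fe.stage(new StageRequest("/mss/halla/run1.data"));
        fe.stage(new StageRequest("/mss/clas/run2.data"));
    }
}
```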

Jefferson Lab Farm and Mass Storage Systems 2001
[Diagram: same layout as the 2000 diagram - Hall A/C DAQ and CLAS DAQ feeding the work, cache, and DST/cache file servers, tape servers, DB server, and the batch and interactive farm over FCAL (100 MByte), 100 Mbit and 1000 Mbit Ethernet, and SCSI links.]

Projects
- JASMine
  - Replacement for OSM
  - Distributed Data Movers and Cache Managers
  - Scalable to the needs of the experiments
  - Smart scheduling
  - Off-site cache or FTP servers for data exporting

Projects cont.
- File Exports
  - JASMine Cache Software
  - GRID Aware - PPDG
  - FTP
  - BBFTP
  - WEB
  - Tape

JASMine Logical Storage Organization
- Store: a logical entity made up of libraries, servers, data movers, and a database.
- Storage Group: an object that belongs to a store and is itself a collection of storage groups or volume sets.
- Volume Set: an object that belongs to a storage group and is itself a collection of volumes.
- Volume: a unit of storage media.
- Bitfile: the copy of a file that has been copied into a store.

[Diagram: an example Store containing Storage Groups 1-5; the storage groups hold Volume Sets 1-5, the volume sets hold volumes, and the volumes hold bitfiles.]
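
Taken together, the definitions and the diagram above describe a straightforward containment hierarchy. The Java classes below are a minimal sketch of that hierarchy using simple nested collections; the names and fields are illustrative only and not the actual JASMine data model.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the logical organization: a Store holds storage groups,
// which hold volume sets, which hold volumes, which hold bitfiles.
// Class and field names are illustrative, not the JASMine schema.
public class LogicalStore {

    static class Bitfile {
        final String originalPath;                  // the file this copy was made from
        Bitfile(String originalPath) { this.originalPath = originalPath; }
    }

    static class Volume {                           // a unit of storage media (one tape)
        final String label;
        final List<Bitfile> bitfiles = new ArrayList<>();
        Volume(String label) { this.label = label; }
    }

    static class VolumeSet {                        // belongs to a storage group
        final List<Volume> volumes = new ArrayList<>();
    }

    static class StorageGroup {                     // may hold sub-groups or volume sets
        final List<StorageGroup> subGroups = new ArrayList<>();
        final List<VolumeSet> volumeSets = new ArrayList<>();
    }

    // The store itself is the top-level collection of storage groups.
    final List<StorageGroup> storageGroups = new ArrayList<>();

    public static void main(String[] args) {
        LogicalStore store = new LogicalStore();
        StorageGroup group = new StorageGroup();
        VolumeSet set = new VolumeSet();
        Volume tape = new Volume("JL0001");
        tape.bitfiles.add(new Bitfile("/mss/halla/raw/run1.data"));
        set.volumes.add(tape);
        group.volumeSets.add(set);
        store.storageGroups.add(group);
        System.out.println("storage groups in store: " + store.storageGroups.size());
    }
}
```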

JASMine Physical Storage Organization
- Store: a logical entity made up of libraries, servers, data movers, and a database.
- Library: a set of volumes and drives.
- Drive: a media reader or writer.
- Volume: a unit of storage media.
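
A comparable sketch of the physical view, again with illustrative names rather than the actual JASMine classes, simply groups the drives and volumes under a library owned by the store:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the physical organization: a store owns libraries,
// and each library is a set of drives (readers/writers) plus the volumes
// (media) housed in it. Not the actual JASMine classes.
public class PhysicalStore {

    static class Drive {                            // a media reader or writer
        final String name;
        Drive(String name) { this.name = name; }
    }

    static class Volume {                           // a unit of storage media
        final String label;
        Volume(String label) { this.label = label; }
    }

    static class Library {                          // a set of volumes and drives
        final List<Drive> drives = new ArrayList<>();
        final List<Volume> volumes = new ArrayList<>();
    }

    final List<Library> libraries = new ArrayList<>();
}
```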

JASMine Services
- Request Manager: handles user requests and queries.
- Scheduler: prioritizes user requests for tape access (sketched below):
  priority = share / (0.01 + (num_a * ACTIVE_WEIGHT) + (num_c * COMPLETED_WEIGHT))
- Log Manager: writes out log and error files and databases; sends out notices for failures.
- Library Manager: mounts and dismounts tapes as well as other library-related tasks.
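
Only the priority formula itself comes from the slide; the weight constants and example numbers below are assumptions chosen for illustration. The sketch shows how the fair-share rule lowers the priority of users who already have many active or recently completed requests.

```java
// Sketch of the scheduler's fair-share rule quoted on the slide:
//   priority = share / (0.01 + num_a * ACTIVE_WEIGHT + num_c * COMPLETED_WEIGHT)
// The weight constants below are hypothetical; the real JASMine values are not given.
public class SchedulerPriority {

    static final double ACTIVE_WEIGHT = 1.0;        // assumed weight per active request
    static final double COMPLETED_WEIGHT = 0.25;    // assumed weight per completed request

    /**
     * @param share        the user's or group's configured fair share
     * @param numActive    requests the user currently has active (num_a)
     * @param numCompleted requests recently completed for the user (num_c)
     */
    static double priority(double share, int numActive, int numCompleted) {
        return share / (0.01 + numActive * ACTIVE_WEIGHT
                             + numCompleted * COMPLETED_WEIGHT);
    }

    public static void main(String[] args) {
        // A user with no recent work gets a much higher priority than one
        // who already has several active and completed requests.
        System.out.println(priority(1.0, 0, 0));    // 100.0
        System.out.println(priority(1.0, 4, 8));    // ~0.166
    }
}
```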

JASMine Services 2
- Data Mover
  - Dispatcher: keeps track of available local resources and starts requests the local system can work on (see the sketch below).
  - Cache Manager: manages a disk or disks for pre-staging data to and from tape; sends and receives data to and from clients.
- Volume Manager: manages tapes for availability.
- Drive Manager: manages tape drives for usage.
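
As a rough illustration of the Dispatcher and Cache Manager roles described above, the sketch below starts queued tape requests only while the local node reports an idle drive and free cache space. The DriveManager and CacheManager interfaces and all other names are hypothetical, not the JASMine API.

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Rough sketch of the Data Mover dispatch loop: watch local resources and
// start any queued request this node can serve. All names are hypothetical.
public class Dispatcher {

    static class TapeRequest {
        final String bitfile;
        TapeRequest(String bitfile) { this.bitfile = bitfile; }
    }

    interface DriveManager { boolean hasIdleDrive(); }

    interface CacheManager {
        boolean hasFreeSpaceFor(TapeRequest r);
        void stage(TapeRequest r);
    }

    private final Queue<TapeRequest> queue = new ArrayDeque<>();
    private final DriveManager drives;
    private final CacheManager cache;

    Dispatcher(DriveManager drives, CacheManager cache) {
        this.drives = drives;
        this.cache = cache;
    }

    void enqueue(TapeRequest r) { queue.add(r); }

    // One pass of the dispatch loop: start requests only while this data
    // mover has an idle drive and cache space to stage the next file into.
    void dispatchOnce() {
        while (!queue.isEmpty()
                && drives.hasIdleDrive()
                && cache.hasFreeSpaceFor(queue.peek())) {
            cache.stage(queue.poll());
        }
    }

    public static void main(String[] args) {
        Dispatcher d = new Dispatcher(() -> true, new CacheManager() {
            public boolean hasFreeSpaceFor(TapeRequest r) { return true; }
            public void stage(TapeRequest r) { System.out.println("staging " + r.bitfile); }
        });
        d.enqueue(new TapeRequest("/mss/halla/run1.data"));
        d.dispatchOnce();
    }
}
```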

[Component diagram: Client, Request Manager, Scheduler, Log Manager, Library Manager, Volume Manager, Database, and the Data Mover with its Dispatcher, Cache Manager, Drive Manager, drive, and disk.]

What to Expect in 2001
- Work File Servers
  - Additional 5 TB
  - Linux File Servers??
- Cache File Servers
  - Additional 5-10 TB

2001 cont.
- Tape Storage
  - JASMine Installed
  - 5-10 Data Movers
  - 10 Additional Tape Drives (9840 or 9940)

The End