Tape logging - SAM perspective. Doug Benjamin (for the CDF Offline data handling group)

Presentation transcript:

Tape logging - SAM perspective. Doug Benjamin (for the CDF Offline data handling group)

[Diagram: SAM data-logging flow. Components: the Data Logger and its disks, the Data Stager (SAM FSS; CDF-specific, on fcdfsg1), Enstore, the SAM database server with the offline database, and the SAM data-logging process (running on the same machine). Labeled flows: raw data, control data, and file metadata, including metadata for files written to tape.]

Suggested configuration within the data-logging machine: one RAID disk array subsystem split into partitions A, B, ..., X. Write to one partition while reading from another, rotating through the partitions (a sketch of this rotation follows below).
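As an illustration of the rotation, here is a minimal sketch. The mount points (/logger/partA, ...) and the stager spool directory are hypothetical, the real data logger and SAM FSS are not modeled, and in a real test the write and drain steps would run concurrently rather than in sequence as shown here.

#!/usr/bin/env python
"""Minimal sketch of the rotating-partition scheme described above.

Assumptions (not from the slides): the partitions are mounted at
hypothetical paths /logger/partA, /logger/partB, ..., and "reading"
simply means copying finished files toward a hypothetical stager spool.
"""
import itertools
import os
import shutil

PARTITIONS = ["/logger/partA", "/logger/partB", "/logger/partC"]  # hypothetical mounts
STAGER_SPOOL = "/stager/spool"                                    # hypothetical target

def write_file(partition, name, size_mb=1024):
    """Write one dummy raw-data file of size_mb MB into the given partition."""
    path = os.path.join(partition, name)
    chunk = b"\0" * (1024 * 1024)
    with open(path, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
    return path

def drain(partition):
    """Read every file in the partition and hand it to the stager spool."""
    for name in os.listdir(partition):
        src = os.path.join(partition, name)
        shutil.copy(src, os.path.join(STAGER_SPOOL, name))
        os.remove(src)

def run(n_cycles=6):
    """Rotate: write into partition i while draining the previous one."""
    cycle = itertools.cycle(range(len(PARTITIONS)))
    prev = None
    for i, idx in zip(range(n_cycles), cycle):
        write_file(PARTITIONS[idx], "raw_%04d.dat" % i)
        if prev is not None:
            drain(PARTITIONS[prev])
        prev = idx

if __name__ == "__main__":
    run()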

Available hardware and tests to do (in FCC)

Hardware:
- One AtaBeast disk subsystem with 2 RAID controllers (~14 TB)
- One dual-processor front-end head node (fcdfdata155: 2 Xeon, 3.2 GHz); we can ask for another head node

Tests (including SAM metadata declaration, etc.):
- Write to one partition and read from another, and measure the rates; go through all partitions in this manner
- Write 1 GB files to the null movers to measure the write rate into the tape system
- Write to tape (a 1-2 day test) to measure the sustained rate
(A sketch of the rate measurement follows below.)

We need people to do the tests and write the code (offline DH will help, but we are overloaded).
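As a starting point for the write-rate part of these tests, here is a minimal sketch. It assumes the partition mount point and number of files are given on the command line and that 1 GB test files are used as proposed above; declaring the files to SAM and routing them to the null movers are separate steps and are not modeled here.

#!/usr/bin/env python
"""Minimal sketch of the write-rate measurement for one partition.

Assumption (not from the slides): the rate is simply file size divided
by wall-clock write time, with an fsync to force the data to disk.
"""
import os
import sys
import time

FILE_SIZE_MB = 1024           # 1 GB test files, as proposed in the slide
CHUNK = b"\0" * (1024 * 1024)

def write_test_file(path):
    """Write one 1 GB file and return the achieved rate in MB/s."""
    start = time.time()
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(CHUNK)
        f.flush()
        os.fsync(f.fileno())  # make sure the data is really on disk
    elapsed = time.time() - start
    return FILE_SIZE_MB / elapsed

def main():
    if len(sys.argv) != 3:
        sys.exit("usage: write_rate.py <partition-mount-point> <n-files>")
    partition, n_files = sys.argv[1], int(sys.argv[2])
    rates = []
    for i in range(n_files):
        path = os.path.join(partition, "ratetest_%04d.dat" % i)
        rates.append(write_test_file(path))
    print("mean write rate: %.1f MB/s over %d files" % (sum(rates) / len(rates), n_files))

if __name__ == "__main__":
    main()

The read-from-another-partition half of the test could reuse the drain idea from the earlier sketch, timed the same way.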