DESY Site Report (Hamburg / Zeuthen)
HEPiX/HEPNT, Fermilab, 2002-10-23
Knut Woller



Slide 1: Overview
I will focus on ongoing activities and projects:
- Storage and data management
  - dCache
  - ExaStore
- User Registry Project
- Windows Migration Project
- Mail Consolidation

Slide 2: Storage and Data Management
New requirements and challenges:
- Need to decrease storage costs
- An increasing number of clients burdens the HSM
- Distributed clients create awkward data paths, and distributed NFS does not scale
- The “traveling scientist” requires mobility
- Users are increasingly unable or unwilling to judge the features or cost of a specific store. They just want to use it.

Slide 3: About dCache
- Distributed cache between clients and the HSM
- Collaborative development at DESY & FNAL
- In production use at DESY and FNAL
- More labs are looking into it
- DESY currently runs a read pool of about 30 TB on IDE RAID servers
- All major DESY groups use it by now
- For us, it is the method to access HSM data

Slide 4: dCache Features
- Allows the use of cheap tape media by largely reducing the number of mounts
- Coordinates site-wide data staging and reduces data management manpower
- Supports several HSMs (OSM, Enstore, Eurogate)
- Can be used transparently by applications through the C API (ROOT supports dCache); see the sketch after this slide
- Scales well to thousands of clients and hundreds of pool servers
- Can be used in Grids (bbFTP, GridFTP)
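The C API bullet above is the natural place for a small illustration. The following is a minimal sketch of a dcap client, assuming libdcap and its POSIX-like calls dc_open/dc_read/dc_close; the door host, port, and pnfs path are placeholder values, not DESY's actual configuration.

    /* Minimal sketch of reading a file through the dCache C API (dcap).
     * Assumes libdcap and its POSIX-like calls dc_open/dc_read/dc_close;
     * host, port, and path below are placeholders, not a real DESY setup.
     * Build roughly as: cc read_via_dcap.c -ldcap
     */
    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <dcap.h>

    int main(void)
    {
        /* dcap URL: dcap://<door-host>:<port>/<pnfs-path> (example values only) */
        const char *url =
            "dcap://dcache-door.example.org:22125/pnfs/example.org/data/run01.dat";
        char buf[8192];
        ssize_t n;

        int fd = dc_open(url, O_RDONLY);   /* open via a dCache door instead of NFS */
        if (fd < 0) {
            fprintf(stderr, "dc_open failed\n");
            return 1;
        }

        /* read sequentially; pool selection and staging from tape happen behind dcap */
        while ((n = dc_read(fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t)n, stdout);

        dc_close(fd);
        return 0;
    }

ROOT users would typically go through ROOT's built-in dCache support rather than calling dcap directly; the point of the library is that applications keep a POSIX-style view of data that may still sit on tape.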

Slide 5: dCache Development
- The DESY / FNAL project is well advanced
- Presentations have been made at recent HEPiX and CHEP conferences
- Project information is on
- We plan to set up a central read disk pool of 100+ TB when we migrate to large, cheap tapes (STK 9940B) in a few months

Slide 6: ExaStore
- Since 1999, major user groups have demanded a “Large Central File Store” at DESY
- Features: multi-terabyte, high performance, single filesystem view, random access
- AFS will not scale to this size
- dCache does not fit the requirements
- Commercial NAS solutions do not scale well
- EXANET came along in 2000 with a product proposal that suits our needs

Slide 7: About ExaStore
- Seen from the outside, the ExaStore is a highly scalable, high-performance NAS (or a huge virtual disk)
- Internally, it is built from disk and CPU servers and independent RAID arrays. ExaStore’s spice is:
  - the use of commodity components
  - their cluster file system
  - their redundant server mesh
- ExaStore scales in (at least) two dimensions:
  - in capacity, by adding disks
  - in performance, by adding nodes and/or uplinks

Slide 8: Why ExaStore at DESY?
- Because the current jungle of cross-mounted NFS disks is an administrative nightmare
- Because NFS data management at DESY today is handled decentrally in the user groups; IT wants to fill this gap to make better use of resources
- Because scaling the current system of distributed NFS servers reduces stability and manageability
- Because current NAS solutions are limited to 12-18 TB per box and a fixed number of uplinks and server nodes
- Because we do not think it would be wise to invent our own SAN/NAS solution

Slide 9: ExaStore Experiences
- First test system at DESY since April, in beta test since June (4 nodes, 1.5 TB)
- No crashes in four months
- Performance is not yet where we want it to be, but well on the road
- We want to acquire a production system with 8 nodes and 12 TB (management approval pending)

Slide 10: User Registry Project
- The DESY user registry is old, limited, and inflexible
- The number of user groups is increasing
- Each new complex software system today comes with a proprietary registry (e.g. mail server, calendar server, Oracle, SAP, …)
- Interfaces to the HR database, phone book, etc. are required
- We need a site-wide metadirectory toolbox
- Groups have a large demand for delegation of rights

Slide 11: Project Approach
- The design phase started in January
- We have a clear functional description now
- We looked into commercial (Tivoli, CA, …) and open source (Ganymede) tools, none of which seems to fit our needs
- We are gathering troops to start coding
- Platform accounts (Unix, Windows, Kerberos) should be manageable in Q2/2003
- Platform adaptors will take some time (a hypothetical sketch follows this slide)
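To make the adaptor idea concrete, here is a purely hypothetical C sketch of how a central registry record could be pushed through per-platform adaptors; the record fields, adaptor names, and provisioning steps are illustrative assumptions, not the actual DESY design.

    /* Hypothetical sketch of the registry "platform adaptor" idea:
     * one central account record, one adaptor per target system.
     * All field and adaptor names are illustrative, not the real DESY schema.
     */
    #include <stdio.h>

    typedef struct {
        const char *login;      /* site-wide login name */
        const char *full_name;
        unsigned    uid;        /* numeric id for Unix-like targets */
        const char *group;      /* primary group / division */
    } account;

    /* Each platform adaptor knows how to provision an account on its system. */
    typedef struct {
        const char *name;
        int (*provision)(const account *acc);   /* 0 on success */
    } platform_adaptor;

    static int provision_unix(const account *acc)
    {
        /* a real adaptor would create the account in passwd/NIS/AFS here */
        printf("unix: create %s (uid %u, group %s)\n", acc->login, acc->uid, acc->group);
        return 0;
    }

    static int provision_windows(const account *acc)
    {
        /* a real adaptor would create the NT4/AD account here */
        printf("windows: create %s (%s)\n", acc->login, acc->full_name);
        return 0;
    }

    int main(void)
    {
        account acc = { "jdoe", "Jane Doe", 12345, "it" };
        platform_adaptor adaptors[] = {
            { "unix",    provision_unix    },
            { "windows", provision_windows },
        };

        /* the central registry pushes one record through every adaptor */
        for (size_t i = 0; i < sizeof adaptors / sizeof adaptors[0]; ++i)
            adaptors[i].provision(&acc);

        return 0;
    }

The intended benefit of such a toolbox is that supporting a new proprietary registry only means adding another adaptor, while the central record format and the delegation rules stay in one place.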

Slide 12: Windows Migration Project
- The DESY Windows domain is still NT4
- We have started rolling out W2K and WXP clients in the DESYNT domain (mostly notebooks)
- Basic software support (netinstall) for WXP desktops in DESYNT will be available this year
- Domain servers are NT4, newer ones W2K
- .NET Server looks promising, but is not in production use yet
- Where possible, we are skipping W2K clients

Slide 13: W2K Migration Status
- A new project team has been formed within IT
- We are finalizing the site-wide AD design
- New hardware has been / is being acquired
- Home directory storage is under reconsideration
- We plan to have a working domain in Q1/2003
- Migration start is foreseen for Q2/2003
- DESYNT will stay alive for control systems

Slide 14: Mail Consolidation
- We are still in the sad state of supporting sendmail, Exchange, and PMDF
- We experience load and capacity problems on all three systems
- User ‘requirements’ (real or not) have limited us in the past years
- Next step will be mail routing consolidation to get rid of PMDF
- We want to end up with one mail router and one mail server solution, both yet unnamed

Slide 15: In General …
- … we have been able to increase our IT staff with bright, young colleagues (IT is back to the 1999 staffing level)
- … we are starting to see synergy effects from treating Windows and Unix systems in one group (e.g. Samba, hardware standards)
- … we have been able to start a few major efforts and projects
- … we are striving for more coherence between Hamburg and Zeuthen
- … much of our effort is still required to clean up our legacy from the past (technologically and socially)
- … I think we have a few very well working and scalable solutions, e.g. in mass storage (dCache), Linux support, and printing

Slide 16: That’s It
Thank you for your attention