CASPUR Site Report Andrei Maslennikov Sector Leader - Systems Catania, April 2002

Contents:
- Update on central computers
- Storage news
- Tape news
- AFS
- CASPUR and HEP
- Projects for year 2002

Central computers: update

IBM SMP:
- Doubled capacity: new SP3 frame operational as of February
- Used the occasion, and installed the new frame with AIX 5.1
- Currently running with two control workstations:
  - Batch system needs to be recompiled (SGEEE)
  - Some applications that work on the previous AIX release stopped working on 5.1
- Hope to be able to solve both problems and merge the frames on Switch-2 before the end of May
- Will then also migrate all stand-alone machines to AIX 5.1

Compaq SMP:
- Upgraded half of the park to Tru64 5.1
- By the end of May: 5.1 on all machines

Sun SMP:
- Upgraded all machines to Solaris 8
- Network install finally works, and an AFS build is available

Storage news

Gigabit storage network
- Two NAS systems (NFS and AFS) are now on the private Gigabit Ethernet network
- Switching device: NPI Keystone, soon to be replaced with an Extreme
- Only service, number-cruncher and front-end nodes may access NFS and AFS on Gigabit
- AFS servers: disabled the multihomed function to avoid timeouts on clients, and use static routes instead

Storage Laboratory
- Set it up to see whether Linux-based file services can match the performance of our current solution (NFS on a NetApp F760, AFS on Sun 220Rs), especially for large files
- An excellent occasion to investigate other issues, compare different methods of file serving, see where the bottlenecks are, etc.
- Will also try new media: tapes and disk systems
- A stand-alone report on these activities will be presented tomorrow
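For reference, the multihomed workaround boils down to telling the AFS file server which interfaces to register and pinning the Gigabit path by hand. A minimal sketch for an OpenAFS-style server follows; the file names are the standard AFS server ones, while all addresses and the interface name are invented for illustration:

    # /usr/afs/local/NetInfo -- interface the file server should advertise
    192.168.100.10

    # /usr/afs/local/NetRestrict -- interfaces to hide from clients
    10.0.0.10

    # pin the client-facing Gigabit subnet to the right interface (Linux)
    route add -net 192.168.100.0 netmask 255.255.255.0 dev eth1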

Tape news

Tape Drives and Robotics
- Gave up our part in the STK 9740 library; it now fully belongs to Babar-Italy
- All tape-related systems are using FC LTOs; everything is very stable
- All tape service machines see and share all tape drives and cartridges on the SAN

Tape Dispatcher: now downloadable
- The CASPUR Tape Dispatcher provides a tape locking and mounting service within an organization
- Allows all tape resources to be shared between all services that claim tape access
- Performs minimal bookkeeping, automatically assigns empty tapes from the pool
- Handles multiple mount requests on all libraries, supports a drive session
- Site-independent
- Documented on the web and downloadable now

CASPUR Staging
- Staging 1: in production since 1998, very stable. Features multiple I/O streams, a fast tape/file database (MySQL), and a fully recoverable tape format
- A new feature is being added: Multimover support (Staging 2)
- Like the Tape Dispatcher, the Staging System will also be made downloadable
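To give an idea of the bookkeeping involved, a tape/file catalogue of this kind fits in a couple of small MySQL tables. The schema below is a hypothetical sketch for illustration, not the actual Staging database:

    -- hypothetical stager catalogue (names invented, not CASPUR's schema)
    CREATE TABLE tapes (
        vid     CHAR(6) PRIMARY KEY,                   -- volume id
        pool    VARCHAR(16),                           -- tape pool
        status  ENUM('free','filling','full','locked'),
        library VARCHAR(16)                            -- hosting robot
    );

    CREATE TABLE files (
        path       VARCHAR(255),                       -- staged file name
        vid        CHAR(6),                            -- tape holding the copy
        fseq       INT,                                -- file position on tape
        size_bytes BIGINT,
        staged_at  DATETIME
    );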

CASPUR: principal resources in 2002 (diagram: compute nodes connected to the storage systems via Gigabit IP and an FC SAN)
- IBM: 134 CPUs (375 MHz)
- Sun: 22 CPUs (336 MHz)
- Compaq: 48 CPUs
- FC RAID systems: 4 TB
- AFS: 1.2 TB on 3x Sun 220R
- NFS: 1.2 TB on a NetApp F760
- FC tape systems: 30 / 60 TB
- Stager: Sun 440

AFS

OpenAFS: solid on Linux
- Deployed many servers, not a single glitch
- Current version in use: 1.2.3
- Software-wise, we are ready to migrate to OpenAFS
- Performance on Linux has yet to be checked (will do it within our Storage Lab)

Maintenance (the IBM product is being withdrawn!)
- OpenAFS on Linux: satisfactory
- IBM AFS on other architectures: we are covered until the end of 2002
- It looks like there is a possibility to purchase maintenance for 2003 from IBM
- Will in any case study other independent maintenance providers
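As for the pending performance check, a first-order throughput probe from a Linux client needs nothing beyond dd; the cell path below is illustrative only:

    # rough AFS write/read throughput from a client (path hypothetical)
    dd if=/dev/zero of=/afs/caspur.it/scratch/afs-test bs=1024k count=1024
    fs flushvolume /afs/caspur.it/scratch    # drop cached data before reading back
    dd if=/afs/caspur.it/scratch/afs-test of=/dev/null bs=1024k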

CASPUR and HEP

Relations with INFN:
- Full-scale AFS system support (maintenance and hotline)
- ASIS mirroring to 18 INFN sections
- Linux tree maintenance, incl. bootable CDs at the latest patch level
- Babar cluster at CASPUR (contract ends in November 2002)
- WAN Storage Laboratory (CASPUR/CNAF): just started

CASPUR/CERN collaboration
- Contract renewed; three main lines of joint activity have stood out since that date:
  1. Storage evaluation (joint Storage Lab test program; later, AFS monitoring)
  2. ASIS: CASPUR will take care of the architectures that CERN no longer supports: AIX, Tru64, IRIX, HP-UX. A set of reference machines is already up and running at CASPUR, complete with the compilation machinery
  3. Security: event logging and visualization (a common format of DB tables has been adopted)

Projects for year 2002

Control and Monitoring, phase 2
- Sensor library for all architectures
- Server tuning, scalability checks
- General clean-up, integration with a new event database
- Autoconf on all architectures

Security
- Consultable event log
- IDS system (host-based)
- Network topology rearrangement (no incoming connectivity for personal workplaces)

Storage evaluation
- Vast program of tests in the framework of the Storage Laboratory
- Technology tracking

Authentication
- Kerberos 5
- Cross-cell authentication
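On the cross-cell item, the Kerberos 5 side of such a setup is a standard cross-realm trust: matching krbtgt principals created in both realms, plus a capaths hint for the clients. A hypothetical krb5.conf fragment (realm names invented for illustration):

    # create matching keys in both KDCs first:
    #   krbtgt/CERN.CH@CASPUR.IT  and  krbtgt/CASPUR.IT@CERN.CH
    [capaths]
        CASPUR.IT = {
            CERN.CH = .
        }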