CASPUR Site Report
Andrei Maslennikov, Lead - Systems
Edinburgh, May 2004

Contents
- Update on central computers
- Storage news
- Network highlights
- Projects 2004

Central computers

IBM SMP:
- 3 frames with 80 POWER-4 CPUs at 1.1 GHz and 144 GB of RAM
- 1 legacy frame with 64 POWER-3 CPUs at 375 MHz and 64 GB of RAM
- AIX 5.2 ML2+, AFS and SGEEE on all nodes (a job-submission sketch follows this slide)
- Very stable, all CPUs are heavily used
- Under lease until 2006; will probably be upgrading to POWER-5 in 2005

HP SMP:
- ES45 nodes at 1.25 GHz, 64 GB of RAM and 1.2 TB of local FC disk
- 6 legacy ES40 nodes used for the BIOGRID project
- Tru64 5.1a++ on all nodes, AFS + SGEEE on 5 standalone nodes
- TruCluster on 9 nodes (AFS via a translator: a powerful Solaris 9 gateway, memcache, modified SSH)
- Requires a lot of attention, but very fast and fully used (mainly computational chemistry applications)
- Arriving: a 32-CPU EV7 node

Itanium-2 SMP:
- 1 single-CPU, 5 biprocessor and 1 quad node (900 MHz, 1 GHz and 1.5 GHz)
- RH AS 3 on one node; all others run the CERN CEL3/AS3 build for ia64, AFS, SGEEE

NEC SX-6i:
- Single CPU, 4 GB RAM, 8 GFLOPS
- Speedup of up to 10x over POWER4 for some applications; currently considering an SMP purchase

Reference: several biprocessor Intel/32-bit and AMD/64-bit machines
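
Not part of the original slide: a minimal sketch of how a batch job could be handed to the SGEEE scheduler mentioned above, assuming the standard Grid Engine qsub client is on the PATH. The job name, command and temporary-file handling are placeholders, not CASPUR's actual setup.

    import subprocess
    import tempfile

    # Placeholder Grid Engine job script; the job name and the command are illustrative.
    JOB_SCRIPT = """#!/bin/sh
    #$ -N demo_job
    #$ -cwd
    echo "running on $(hostname)"
    """

    def submit_job() -> str:
        """Write a throwaway job script and hand it to Grid Engine via qsub."""
        with tempfile.NamedTemporaryFile("w", suffix=".sh", delete=False) as f:
            f.write(JOB_SCRIPT)
            path = f.name
        # qsub answers with a line such as: Your job NNN ("demo_job") has been submitted
        result = subprocess.run(["qsub", path], check=True, capture_output=True, text=True)
        return result.stdout.strip()

    if __name__ == "__main__":
        print(submit_job())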

Storage update

AFS: 4 cells on site and 6 outside
- OpenAFS on Linux
- Main servers: SuperMicro 2x2.8 GHz; as of June 2004: 6 TB (Infortrend SATA/FC)
- Vice partitions on SGI XFS; only one XFS-related problem in 1.5 years
- Standalone backup server on GigE, 84 GB/hour with 2 LTO2 drives
- 3 cells have been running a Heimdal KDC for the past 6 months
- AFS-aware SSH 3.8p1 binary builds (GSSAPI, K5 or AFS-password login + token); a token sketch follows this slide
- Linux / WXP Heimdal single sign-on and AFS home directory in one of the cells; administration via ssh, but AD has just been tested successfully (with help from INFN-Lecce)
- Will soon be migrating INFN's national cell to MIT K5 (cross-realm and Windows issues)
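
A hedged illustration of the K5 login + AFS token step referred to above, assuming standard Heimdal/MIT kinit and OpenAFS aklog/tokens binaries on the client. The realm, cell and principal are placeholders, not CASPUR's actual names.

    import subprocess

    REALM = "EXAMPLE.IT"   # placeholder Kerberos realm
    CELL = "example.it"    # placeholder AFS cell name

    def k5_login_with_afs_token(principal: str) -> None:
        """Obtain a Kerberos 5 TGT, then derive an AFS token from it."""
        # kinit prompts for the password and stores the TGT in the default credential cache
        subprocess.run(["kinit", f"{principal}@{REALM}"], check=True)
        # aklog (OpenAFS) converts the K5 ticket into an AFS token for the given cell
        subprocess.run(["aklog", "-c", CELL], check=True)
        # list the tokens now held, as a sanity check
        subprocess.run(["tokens"], check=True)

    if __name__ == "__main__":
        k5_login_with_afs_token("someuser")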

Storage update - 2

NFS (Mountain View Data):
- In production for 1.5 years, very stable (runs off XFS, no crashes so far)
- 2 SuperMicro 2x2.8 GHz servers; as of June 2004: 8 TB (Infortrend SATA/FC)
- … TB under staging (5 TB archived)

Digital Library services on GFS:
- Science Server and Web of Science web services; heavy load
- Scientific magazines, 2.5 million articles in full-text PDF, searchable DB
- Needed for load balancing: a shared filestore with locking (see the locking sketch after this slide)
- On Sistina GFS for 6 months; 3 SuperMicro 2-way servers, 16 TB (Infortrend SATA/FC)
- EXT3 copy of everything (tape backup is too slow for this number of files)
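
To sketch why the load-balanced Digital Library nodes need a shared filestore with working locks: on a cluster filesystem such as GFS an ordinary POSIX advisory lock is honoured by every server, so concurrent writers on different nodes do not corrupt shared files. The lock-file path below is hypothetical.

    import fcntl
    import os

    # Hypothetical lock file on the shared GFS volume; real paths are not named in the slide.
    SHARED_LOCK = "/gfs/library/index.lock"

    def update_shared_index(payload: bytes) -> None:
        """Serialize writers across the load-balanced web nodes with an advisory lock."""
        fd = os.open(SHARED_LOCK, os.O_CREAT | os.O_RDWR)
        try:
            # Blocks until no other node holds the lock; a cluster filesystem propagates
            # the lock cluster-wide, which plain shared block storage would not do.
            fcntl.flock(fd, fcntl.LOCK_EX)
            os.write(fd, payload)
            os.fsync(fd)
        finally:
            fcntl.flock(fd, fcntl.LOCK_UN)
            os.close(fd)

    if __name__ == "__main__":
        update_shared_index(b"new catalogue entry\n")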

Network highlights

Numerous networks under control of the Clavister FW:
- Internal workplaces, training class, visitors' room: only outgoing connectivity
- Internal and external DMZs, lab networks, internal DNS: quite complex
- Private NAS GigE network outside the FW
- The FW is far from saturation

Internet Exchange Point - NAMEX:
- About 20 big customers (Telecom, Tiscali, Albacom, mobile operators, industry)
- Traffic: around 1 Gbit/s

F-Root name server:
- Second in Europe after Madrid, first (and still the only one) in Italy

IPv6:
- Active member of the 6NET project
- CASPUR's web site can be reached over IPv6 (a small reachability check follows this slide)
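
A small reachability check in the spirit of the IPv6 claim above: it asks the resolver for an AAAA record and attempts a TCP connection over IPv6. The host name is assumed from the slide; current reachability is of course not guaranteed.

    import socket

    HOST = "www.caspur.it"   # assumed host name, taken from the slide's claim

    def reachable_over_ipv6(host: str, port: int = 80) -> bool:
        """Return True if the host has an AAAA record and accepts a TCP connection over v6."""
        try:
            candidates = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        except socket.gaierror:
            return False
        for family, socktype, proto, _canonname, sockaddr in candidates:
            try:
                with socket.socket(family, socktype, proto) as s:
                    s.settimeout(5)
                    s.connect(sockaddr)
                    return True
            except OSError:
                continue
        return False

    if __name__ == "__main__":
        print(reachable_over_ipv6(HOST))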

CASPUR: principal resources in 2004

- IBM: 150 CPUs (… MHz)
- Itanium2: 15 CPUs (… GHz)
- HP: 60 CPUs (667-1200 MHz)
- NEC SX-6i
- FC SAN: FC tape systems, 60/120 TB; FC RAID systems, 32 TB
- Private NAS GigE: AFS 6 TB, NFS 8 TB, AFS backup and data movers, Digital Library 16 TB
- Internal infrastructure: internal GigEs, TSM backup
- Internet connectivity

Some activities in 2004

Technology tracking (in collaboration with CERN and other centers) - 1 FTE
- New storage devices
- New software solutions in the field of storage
- Excellent relationships with vendors; more than 600 KUSD worth of hardware tested so far

Staging IIa - 1 FTE (funded by CSP/Turin)
- New version of the Tape Dispatcher coming out (general clean-up, virtual tape library support)
- Remote FC tapes / libraries will be supported

Data replication over WAN (in collaboration with ENEA and GARR) - 0.5 FTE
- Several centers with identical data, inside and outside an RDBMS
- Each center has to be fully autonomous but should be able to forward any new data to all the other centers (see the sketch after this slide)
- Bidirectional DB and plain-data exchanges, with eventual mediation at the head organization
- Data mirroring with a non-disruptive release scheme

University La Sapienza - student accounts
- Provide an account (space, personal web page, mail, etc.) for each of the students
- In progress: active discussions with the Interdepartmental Computing Authority (CITICORD)
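
A minimal sketch of the "forward new data to all other centers" idea for the plain-data part, assuming files are pushed with rsync. The peer endpoints and the staging directory are placeholders, and the RDBMS side of the exchange is not modelled here.

    import subprocess

    # Hypothetical peer endpoints and staging directory; the real centers are not named.
    PEERS = [
        "rsync://center-a.example.org/replica/",
        "rsync://center-b.example.org/replica/",
    ]
    LOCAL_NEW_DATA = "/data/outgoing/"

    def forward_new_data() -> None:
        """Push locally produced files to every peer center; each center stays autonomous."""
        for peer in PEERS:
            # rsync transfers only what a peer is missing, so repeated runs stay cheap
            subprocess.run(["rsync", "-a", "--partial", LOCAL_NEW_DATA, peer], check=True)

    if __name__ == "__main__":
        forward_new_data()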