Liverpool HEP – Site Report May 2007 John Bland, Robert Fay.

Staff Status
Two members of staff left in the past year: Michael George (August 2006) and Andy Washbrook (December 2006).
Replaced by two full time HEP system administrators: John Bland and Robert Fay (December 2006).
One full time Grid administrator: Paul Trepka.
One part time hardware technician: Dave Muskett.

Current Hardware
Desktops
~100 desktops: upgraded to Scientific Linux 4.3 and Windows XP.
Minimum spec of 2GHz x86, 1GB RAM and a TFT monitor.
48 new machines; the rest upgraded to an equivalent spec.
Laptops
~30 laptops: mixed architectures, specs and OSes.
Batch Farm
Scientific Linux 3.0.4, with a software repository (0.7TB) and storage (1.3TB).
40 dual 800MHz P3s with 1GB RAM.
Split 30 batch / 10 interactive.
Using Torque/PBS.
Used for general analysis jobs.
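
For reference, a minimal Python sketch of how an analysis job might be submitted to a Torque/PBS farm like this one; the queue name, resource request and analysis script are hypothetical, not the actual Liverpool configuration.

    import subprocess

    # Hypothetical PBS job script: queue name, resource request and the
    # analysis command are illustrative only.
    job_script = """#PBS -N analysis_job
    #PBS -q short
    #PBS -l nodes=1,walltime=04:00:00
    cd $PBS_O_WORKDIR
    ./run_analysis.sh
    """

    # qsub reads the job script from stdin and prints the assigned job ID.
    result = subprocess.run(["qsub"], input=job_script,
                            capture_output=True, text=True)
    print("Submitted job:", result.stdout.strip())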

Current Hardware – continued
Matrix
10 node cluster, each node a dual 2.40GHz Xeon with 1GB RAM.
6TB RAID array.
Used for CDF batch analysis and data storage.
HEP Servers
User file store and bulk storage via NFS (Samba front end for Windows).
Web (Apache), mail (Sendmail) and database (MySQL) servers.
User authentication via NIS (plus Samba for Windows).
Quad Xeon 2.40GHz shell server and ssh server.
Core servers have a failover spare.
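
As an illustration of how the core services above might be checked for availability, a minimal Python sketch that tests TCP connectivity to the web, mail, database and NFS ports; the hostnames are placeholders, not the real Liverpool machines.

    import socket

    # Hypothetical hostnames; the real server names are not listed in this report.
    services = {
        "web (Apache)":      ("webserver.example.ac.uk", 80),
        "mail (Sendmail)":   ("mailserver.example.ac.uk", 25),
        "database (MySQL)":  ("dbserver.example.ac.uk", 3306),
        "file store (NFS)":  ("fileserver.example.ac.uk", 2049),
    }

    for name, (host, port) in services.items():
        try:
            # A successful TCP connect is treated as "service up".
            socket.create_connection((host, port), timeout=5).close()
            print(name, "OK")
        except OSError as err:
            print(name, "DOWN:", err)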

Current Hardware – continued
MAP2 Cluster
960 node (Dell PowerEdge 650) cluster.
280 nodes shared with other departments.
Each node has a 3GHz P4, 1GB RAM and 120GB local storage.
12 racks (480 nodes) dedicated to LCG jobs.
5 racks (200 nodes) used for local batch processing.
Front end machines for ATLAS, T2K and Cockcroft.
13 dedicated Grid servers for CE, SE, UI etc.
Each rack has two 24 port gigabit switches.
All racks connected into VLANs via a Force10 managed switch.

Storage
RAID
All file stores use at least RAID5; new servers are starting to use RAID6.
All RAID arrays use 3ware 7xxx/9xxx controllers on Scientific Linux 4.3.
Arrays monitored with 3ware 3DM2 software (see the status-check sketch after this slide).
File stores
New user and critical software store: RAID6 + hot spare, 2.25TB.
~3.5TB of general purpose hepstores for bulk storage.
1.4TB + 0.7TB batchstore + batchsoft for the batch farm cluster.
1.4TB hepdata for backups and scratch space.
2.8TB RAID5 for the LCG storage element.
New 10TB RAID5 for the LCG SE (2.6 kernel) with 16x750GB SATA II.
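
The status-check sketch mentioned above: a minimal Python example that polls a 3ware controller with the tw_cli command-line tool and flags any unit that does not look healthy. The controller ID and the exact unit-status strings are assumptions; `tw_cli show` on the host lists the controllers actually present.

    import subprocess

    # Controller ID (/c0) is an assumption for this sketch.
    output = subprocess.run(["tw_cli", "/c0", "show"],
                            capture_output=True, text=True).stdout

    for line in output.splitlines():
        # Unit status lines start with "u0", "u1", ...; flag anything not
        # reported as OK or VERIFYING.
        if line.startswith("u") and not ("OK" in line or "VERIFYING" in line):
            print("RAID unit needs attention:", line)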

Network Topology
(Diagram: core Force10 gigabit switch linking the WAN, via a switch and NAT, to the LCG servers, MAP2, offices and servers over 1Gb links.)

Proposed Network Upgrades: Topology – Future
(Diagram: core Force10 gigabit switch linking the WAN, via a firewall, to the LCG servers, MAP2, offices and servers, with a 3Gb VLAN trunk and 1Gb links.)

Network Upgrades
Recently upgraded the core Force10 E600 managed switch to increase throughput and capacity.
Now have 450 gigabit ports (240 at line rate).
Used as the central departmental switch, using VLANs.
Increasing bandwidth to the WAN to 2-3Gbit/s using link aggregation.
Possible increase of the departmental backbone to 2Gbit/s.
Adding a departmental firewall/gateway.
Network intrusion monitoring with Snort.
Most office PCs and laptops are on an internal private network.
Wireless
Wireless is currently provided by the Computer Services Department.
HEP wireless is in planning.
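
To put the bandwidth upgrades above in context, a minimal Python sketch of sampling per-interface throughput from /proc/net/dev on a Linux gateway; the interface name and the 10 second sampling window are arbitrary examples, not the actual setup.

    import time

    def rx_tx_bytes(interface):
        """Return (rx_bytes, tx_bytes) for one interface from /proc/net/dev."""
        with open("/proc/net/dev") as f:
            for line in f:
                if line.strip().startswith(interface + ":"):
                    fields = line.split(":")[1].split()
                    return int(fields[0]), int(fields[8])
        raise ValueError("interface not found: " + interface)

    iface = "eth0"  # hypothetical uplink interface name
    rx1, tx1 = rx_tx_bytes(iface)
    time.sleep(10)
    rx2, tx2 = rx_tx_bytes(iface)
    print("in : %.1f Mbit/s" % ((rx2 - rx1) * 8 / 10 / 1e6))
    print("out: %.1f Mbit/s" % ((tx2 - tx1) * 8 / 10 / 1e6))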

Security & Monitoring
Security
Logwatch (looking to develop filters to reduce noise).
University firewall + local firewall + network monitoring (Snort).
Secure server room with swipe card access.
Monitoring
Core network traffic usage monitored with ntop/MRTG (all traffic to be monitored after the network upgrade).
sysstat used on core servers for recording system statistics.
Rolling out system monitoring on all servers and worker nodes using SNMP, MRTG (simple graphing) and Nagios.
Hardware temperature monitors on the water cooled racks, to be supplemented by software monitoring on the nodes via SNMP.
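
For the node monitoring via SNMP mentioned above, a minimal Python sketch that polls a value with net-snmp's snmpget; the hostname and community string are placeholders, and the OID shown is the standard UCD-SNMP 1-minute load average (temperature OIDs would be vendor specific).

    import subprocess

    # Placeholder host and community string; the OID is UCD-SNMP-MIB laLoad.1.
    host = "node001.example.ac.uk"
    oid = ".1.3.6.1.4.1.2021.10.1.3.1"

    result = subprocess.run(
        ["snmpget", "-v2c", "-c", "public", "-Oqv", host, oid],
        capture_output=True, text=True)
    print(host, "1-min load:", result.stdout.strip())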

Printing
We have three group printers:
Monochrome laser in a shared area.
Colour LED.
Colour ink/phaser.
Accessible from Linux using CUPS with automatic queue browsing.
Accessible from Windows using Samba/CUPS, with automatic driver installs.
Large format posters are printed through the university print queues.
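
A minimal Python sketch of printing from Linux through CUPS with the standard lp and lpstat commands; the queue name and file are illustrative only.

    import subprocess

    # List the print queues CUPS has picked up via queue browsing.
    print(subprocess.run(["lpstat", "-p"], capture_output=True, text=True).stdout)

    # Send a file to one of them; queue name and file are illustrative.
    subprocess.run(["lp", "-d", "hep_colour_laser", "poster.ps"], check=True)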