HEP Computing Status, Sheffield University
Matt Robinson, Paul Hodgson, Andrew Beresford

Interactive Cluster
- 30 self-built Linux boxes: AMD Athlon XP CPUs, 256/512 MB RAM
- OS: Scientific Linux; megabit network
- NIS for authentication; NFS-mounted /home etc.
- System install using kickstart + post-install scripts (a post-install sketch follows below)
- Separate backup machine
- 15 laptops, mostly dual boot
- Some Macs and one Windows box
- 3 disk servers mounted as /data1, /data2 etc. (a few TB)
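The NIS/NFS setup above is the kind of thing the kickstart post-install scripts would take care of. Below is a minimal sketch of such a step in Python, not the actual Sheffield script; the NIS domain and NFS server name are invented placeholders.

```python
#!/usr/bin/env python
"""Hedged sketch of a kickstart post-install step: register the NIS domain and
the NFS mounts described on the slide. 'hepserv' and 'hep.shef' are placeholders."""

NIS_DOMAIN = "hep.shef"   # assumed NIS domain name
NFS_SERVER = "hepserv"    # assumed home/data file server

FSTAB_LINES = [
    "%s:/home   /home   nfs  rw,hard,intr  0 0" % NFS_SERVER,
    "%s:/data1  /data1  nfs  rw,hard,intr  0 0" % NFS_SERVER,
    "%s:/data2  /data2  nfs  rw,hard,intr  0 0" % NFS_SERVER,
]

def append_once(path, line):
    """Append a line to a config file unless it is already there (needs root)."""
    with open(path) as f:
        if line in f.read():
            return
    with open(path, "a") as f:
        f.write(line + "\n")

if __name__ == "__main__":
    for line in FSTAB_LINES:
        append_once("/etc/fstab", line)
    append_once("/etc/sysconfig/network", "NISDOMAIN=%s" % NIS_DOMAIN)
```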

Batch Cluster
- 100-CPU farm, Athlon XP 2400/2800
- OS: Scientific Linux 3.0.3
- NFS-mounted /home and /data
- OpenPBS batch system for job submission (a submission sketch follows below)
- Gigabit backbone with 100 Mbit to worker nodes
- Disk server provides 1.3 TB as /data (RAID 5)
- Entire cluster assembled in-house from OEM components for less than 50k
- Hard part was finding an air-conditioned room with sufficient power
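To make the OpenPBS job-submission point concrete, here is a small Python sketch that writes a throw-away PBS script and hands it to qsub. The queue name and the example command are assumptions, not details from the slide.

```python
#!/usr/bin/env python
"""Sketch of submitting a job to an OpenPBS/Torque system like the one above.
Assumes 'qsub' is on the PATH; the queue name 'batch' is an assumption."""
import os
import subprocess
import tempfile

def submit(command, queue="batch", walltime="12:00:00"):
    """Write a minimal PBS script and submit it, returning the job id."""
    script = "\n".join([
        "#!/bin/sh",
        "#PBS -q %s" % queue,                     # target queue (assumed name)
        "#PBS -l nodes=1,walltime=%s" % walltime, # one node, wall-clock limit
        "#PBS -j oe",                             # merge stdout and stderr
        "cd $PBS_O_WORKDIR",
        command,
        "",
    ])
    fd, path = tempfile.mkstemp(suffix=".pbs")
    with os.fdopen(fd, "w") as f:
        f.write(script)
    try:
        return subprocess.check_output(["qsub", path]).decode().strip()
    finally:
        os.remove(path)

if __name__ == "__main__":
    # Hypothetical payload; replace with a real executable.
    print(submit("./run_simulation --events 1000"))
```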

Cluster Usage

Software
- PAW, CERNLIB etc.
- Geant4
- ROOT
- ATLAS software
- FLUKA
- ANSYS, LS-DYNA

Comments - Issues
- Security has been tightened up in the last year
- Strict firewall policy; limited machine exemptions
- Blocking scripts prevent ssh access after 3 authentication failures within 1 hour (sketched below)
- Cheap disks allow construction of large disk arrays
- Very happy with SL3 for desktop machines
- FC3 used for laptops (2.6 kernel)
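The blocking scripts mentioned above are site-specific; the Python sketch below shows one plausible shape for such a script (scan the sshd log, deny repeat offenders via /etc/hosts.deny). The log path is the Red Hat/SL default; the log-format match and the omission of timestamp parsing are simplifications.

```python
#!/usr/bin/env python
"""Sketch of an ssh-blocking script in the spirit of the slide: deny ssh to
hosts with 3+ failed logins. Not the actual Sheffield script."""
import collections
import re

SECURE_LOG = "/var/log/secure"   # sshd log on RH/SL systems
THRESHOLD = 3                    # failures before blocking (slide's number)

def failure_counts(path=SECURE_LOG):
    """Count failed-password attempts per source IP in the sshd log."""
    pattern = re.compile(r"Failed password .* from (\d+\.\d+\.\d+\.\d+)")
    counts = collections.Counter()
    for line in open(path):
        m = pattern.search(line)
        if m:
            # A full script would also parse the syslog timestamp here to
            # enforce the 1-hour window described on the slide.
            counts[m.group(1)] += 1
    return counts

if __name__ == "__main__":
    with open("/etc/hosts.deny", "a") as deny:   # needs root
        for ip, n in failure_counts().items():
            if n >= THRESHOLD:
                deny.write("sshd: %s\n" % ip)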

The Sheffield LCG Cluster

Division of Hardware
- 162 x AMD Opteron 250 (2.4 GHz)
- 4 GB RAM per box (2 GB per CPU)
- 72 GB U320 10k RPM local SCSI disk
- Currently running 32-bit SL 3.0.3 for maximum compatibility with the grid
- ~2.5 TB storage for experiments
- Middleware:
- Probably the most purple cluster in the grid

Looking Sinister

Status

Usage so far
We can take quite a bit more.

Monitoring
Ganglia, with a modified web frontend to present queue information (see the sketch below).
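The actual frontend modification is not shown on the slide; as an illustration of feeding queue numbers into Ganglia, here is a hedged Python sketch that counts running and queued PBS jobs and publishes them with the standard gmetric tool. It assumes qstat and gmetric are on the PATH, and the metric names are made up.

```python
#!/usr/bin/env python
"""Sketch of publishing PBS queue counts to Ganglia via gmetric.
Not the Sheffield code; metric names 'pbs_running'/'pbs_queued' are assumptions."""
import subprocess

def job_counts():
    """Count running (R) and queued (Q) jobs from plain 'qstat' output."""
    running = queued = 0
    out = subprocess.check_output(["qstat"]).decode()
    for line in out.splitlines():
        fields = line.split()
        # Default qstat columns end with: ... <state> <queue>
        if len(fields) >= 5 and fields[-2] in ("R", "Q"):
            if fields[-2] == "R":
                running += 1
            else:
                queued += 1
    return running, queued

if __name__ == "__main__":
    running, queued = job_counts()
    for name, value in (("pbs_running", running), ("pbs_queued", queued)):
        subprocess.call(["gmetric", "--name", name,
                         "--value", str(value), "--type", "uint32"])
```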

Installation
- Service nodes connected to both the VPN and the Internet
- PXE installation via the VPN allows complete control of dhcpd and named (see the dhcpd sketch below)
- RedHat kickstart + post-install script
- ssh servers not exposed
- R-GMA always the hardest part
- Stumbled over routing rules
- WN install takes about 30 minutes; can do up to 40 simultaneously
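Since the install service drives dhcpd directly for PXE, a natural helper is something that generates the per-node host stanzas. The Python sketch below is only illustrative: the MAC addresses, IPs, and TFTP server address are invented.

```python
#!/usr/bin/env python
"""Hedged sketch: generate dhcpd.conf host entries for PXE-installing worker
nodes. All node names, MACs, and addresses here are hypothetical."""

WORKERS = {                       # hypothetical node -> (MAC, IP) map
    "wn001": ("00:11:22:33:44:01", "10.0.1.1"),
    "wn002": ("00:11:22:33:44:02", "10.0.1.2"),
}

STANZA = """host %(name)s {
  hardware ethernet %(mac)s;
  fixed-address %(ip)s;
  next-server 10.0.0.1;           # assumed install/TFTP server
  filename "pxelinux.0";
}
"""

def render():
    """Return the host stanzas, one per worker node, sorted by name."""
    return "\n".join(STANZA % {"name": n, "mac": m, "ip": i}
                     for n, (m, i) in sorted(WORKERS.items()))

if __name__ == "__main__":
    print(render())   # in practice this would be written into dhcpd.conf
```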

Future plans
- Keep up with middleware updates
- Increase available storage as required, in ~3-4 TB steps
- Fix things that break
- Try not to mess anything up by screwing around
- Look toward operating with a 64-bit OS

Matt Robinson: