White Rose Grid Infrastructure Overview

White Rose Grid Infrastructure Overview
Chris Cartledge, Deputy Director, Corporate Information and Computing Services, The University of Sheffield
C.Cartledge@sheffield.ac.uk, +44 114 222 3008

Contents
  History
  Web site
  Current computation capabilities
  Planned machines
  Usage
  YHMAN
  Grid capabilities
  Training
  Contacts
  FEC, Futures

White Rose Grid History
  2001: SRIF opportunity, joint procurement, led by Leeds (Peter Dew, Joanna Schmidt)
  Three clusters: Sun SPARC systems, Solaris
    Leeds, Maxima: 6800 (20 processors), 4 * V880 (8 processors each)
    Sheffield, Titania: 10 (later 11) * V880 (8 processors each)
    York, Pascali: 6800 (20 processors); Fimbrata: V880
  One cluster: 2.2/2.4 GHz Intel Xeon, Myrinet
    Leeds, Snowdon: 292 CPUs, Linux

White Rose Grid History (continued)
  Joint working to enable use across sites, but heterogeneous: a range of systems
  Each system primarily meets local needs, with up to 25% for users from the other sites
  Key services in common:
    Sun Grid Engine to control work in the clusters (illustrative submission sketch below)
    Globus to link the clusters
    Registration
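
Sun Grid Engine 6 can be driven from the command line (qsub, qstat) or programmatically through the DRMAA C binding it ships with. Purely as an illustrative sketch, not something shown in the presentation, a minimal submission via libdrmaa might look like this; the job script path is hypothetical.

    #include <stdio.h>
    #include "drmaa.h"   /* DRMAA C binding shipped with SGE 6 */

    int main(void)
    {
        char err[DRMAA_ERROR_STRING_BUFFER];
        char jobid[DRMAA_JOBNAME_BUFFER];
        drmaa_job_template_t *jt = NULL;

        /* Connect to the local SGE cell. */
        if (drmaa_init(NULL, err, sizeof(err)) != DRMAA_ERRNO_SUCCESS) {
            fprintf(stderr, "drmaa_init failed: %s\n", err);
            return 1;
        }

        drmaa_allocate_job_template(&jt, err, sizeof(err));

        /* Hypothetical batch script on the cluster's shared file system. */
        drmaa_set_attribute(jt, DRMAA_REMOTE_COMMAND, "/home/user/myjob.sh",
                            err, sizeof(err));

        if (drmaa_run_job(jobid, sizeof(jobid), jt, err, sizeof(err))
                == DRMAA_ERRNO_SUCCESS)
            printf("Submitted SGE job %s\n", jobid);
        else
            fprintf(stderr, "submission failed: %s\n", err);

        drmaa_delete_job_template(jt, err, sizeof(err));
        drmaa_exit(err, sizeof(err));
        return 0;
    }

Built with something like cc submit.c -ldrmaa on a cluster head node, this queues the script much as an equivalent qsub call would.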

WRG Web Site
  There is a shared web site: http://www.wrgrid.org.uk/
  Linked to/from the local sites
  Covers other related projects and resources:
    e-Science Centre of Excellence
    Leeds: SAN and specialist graphics equipment
    Sheffield: ppGrid node
    York: UKLight work

Current Facilities: Leeds
  Everest, supplied by Sun/Streamline
  Dual-core Opteron: power and space efficient
  404 CPU cores, 920GB memory
  64-bit Linux (SuSE 9.3) OS
  Low-latency Myrinet interconnect
  7 * 8-way nodes (4 chips with 2 cores each), 32GB
  64 * 4-way nodes (2 chips with 2 cores each), 8GB

Leeds (continued)
  SGE, Globus/GSI
  Intel, GNU and PGI compilers
  Shared-memory and Myrinet MPI (see the example below)
  NAG, FFTW, BLAS, LAPACK and other libraries
  32- and 64-bit software versions
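
To make the MPI entry above concrete, a minimal C example of the usual kind; it is a generic sketch rather than anything from the slides, and the exact compiler wrapper and launcher names depend on which of the shared-memory or Myrinet MPI builds is selected.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        /* Start MPI; on a cluster like Everest this would normally be
           launched by mpirun from inside an SGE parallel-environment job. */
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }

Compiled with something like mpicc hello.c -o hello and started with mpirun -np 8 ./hello, the processes communicate over shared memory within a node or over Myrinet between nodes, depending on how the job is placed.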

Maxima and Snowdon Transitions
  Maxima transition:
    Maintenance to June 2006, expensive
    Need to move all home directories to the SAN
    Users can still use it, but "at risk"
  Snowdon transition:
    Maintenance until June 2007
    Home directories already on the SAN
    Users encouraged to move

Sheffield
  Iceberg, supplied by Sun Microsystems/Streamline
  160 * 2.4GHz AMD Opteron (PC technology) processors
  64-bit Scientific Linux (Red Hat based)
  20 * 4-way nodes, 16GB, fast Myrinet, for parallel/large jobs
  40 * 2-way nodes, 4GB, for high throughput
  GNU and Portland Group compilers, NAG
  Sun Grid Engine (6), MPI, OpenMP (sketch below), Globus
  Abaqus, Ansys, Fluent, Maple, Matlab
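
Since OpenMP appears in the software list above alongside MPI, a minimal shared-memory C sketch follows; it is illustrative rather than taken from the slides, and would be compiled with whichever OpenMP flag the chosen compiler uses, for example -mp for the Portland Group compilers or -fopenmp for GNU.

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        const long N = 100000000L;
        double sum = 0.0;

        /* Each thread accumulates a private partial sum; the reduction
           clause combines them at the end of the loop. */
        #pragma omp parallel for reduction(+:sum)
        for (long i = 1; i <= N; i++)
            sum += 1.0 / (double)i;

        printf("harmonic sum = %f using up to %d threads\n",
               sum, omp_get_max_threads());
        return 0;
    }

On the 4-way Iceberg nodes this would typically run with OMP_NUM_THREADS set to match the number of slots requested from Sun Grid Engine.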

Also at Sheffield
  GridPP (Particle Physics Grid)
  160 * 2.4GHz AMD Opteron: 80 * 2-way nodes, 4GB
  32-bit Scientific Linux, ppGrid stack
  2nd most productive; very successful!

Popular!
  Sheffield utilisation is high
  Lots of users: 827 (White Rose: 37)
  Utilisation since installation: 40%
  Last 3 months: 80% (White Rose: 26%)

York
  £205k from SRIF 3:
    £100k computing systems
    £50k storage system
    remainder for ancillary equipment and contingency
  Shortlist agreed(?), for June
  Compute: possibly 80-100 cores, Opteron
  Storage: possibly 10TB

Other Resources: YHMAN
  Leased fibre, 2Gb/s performance
  Wide-area MetroLAN
  UKLight
  Archiving
  Disaster recovery

Grid Resources
  Queuing: Sun Grid Engine (6)
  Globus Toolkit: 2.4 is installed and working; issue with GSI-SSH on the 64-bit OS (ancient GTK); Globus 4 being looked at
  Storage Resource Broker: being worked on

Training
  Available across the White Rose universities
  Sheffield: RTP, 4 units, 5 credits each:
    High Performance and Grid Computing
    Programming and Application Development for Computational Grids
    Techniques for High Performance Computing including Distributed Computing
    Grid Computing and Application Development

Contacts
  Leeds: Joanna Schmidt, j.g.schmidt@leeds.ac.uk, +44 (0)113 34 35375
  Sheffield: Michael Griffiths or Peter Tillotson, m.griffiths@sheffield.ac.uk / p.tillotson@sheffield.ac.uk, +44 (0)114 2221126 / +44 (0)114 2223039
  York: Aaron Turner, aaron@cs.york.ac.uk, +44 (0)190 4567708

Futures
  FEC (full economic costing) will have an impact:
    Can we maintain 25% use from the other sites?
    How can we fund continuing Grid work?
  Different funding models are a challenge:
    Leeds: departmental shares
    Sheffield: unmetered service
    York: based in Computer Science
  Relationship opportunities: NGS, WUN, the region, suppliers?

Achievements
  The White Rose Grid is not just hardware, but services
  People(!): familiar with working on the Grid
  Experience of working as a virtual organisation
  Intellectual property in training
  Success: research, engaging with industry, solving user problems