White Rose Grid Infrastructure Overview
Chris Cartledge, Deputy Director, Corporate Information and Computing Services, The University of Sheffield
22/03/06 © The University of Sheffield / Department of Marketing and Communications

Contents
- History
- Web site
- Current computation capabilities
- Planned machines
- Usage
- YHMAN
- Grid capabilities
- Contacts
- Training
- FEC, futures
White Rose Grid History
- 2001: SRIF opportunity, joint procurement; Leeds led (Peter Dew, Joanna Schmidt)
- 3 clusters of Sun SPARC systems running Solaris:
  - Leeds, Maxima: 6800 (20 processors), 4 * V880 (8 processors each)
  - Sheffield, Titania: 10 (later 11) * V880 (8 processors each)
  - York, Pascali: 6800 (20 processors); Fimbrata: V880
- 1 cluster of 2.2/2.4 GHz Intel Xeons with Myrinet:
  - Leeds, Snowdon: 292 CPUs, Linux
White Rose Grid History (continued)
- Joint working to enable use across sites, but heterogeneous: a range of systems
- Each system primarily meets local needs, with up to 25% for users from the other sites
- Key common services:
  - Sun Grid Engine to control work in the clusters
  - Globus to link the clusters
  - Registration
WRG Web Site
- There is a shared web site, linked to/from the local sites
- Covers other related projects and resources:
  - e-Science Centre of Excellence
  - Leeds: SAN and specialist graphics equipment
  - Sheffield: ppGrid node
  - York: UKLight work
Current Facilities: Leeds
- Everest: supplied by Sun/Streamline
- Dual-core Opterons: power and space efficient
- 404 CPU cores, 920 GB memory
- 64-bit Linux (SuSE 9.3) OS
- Low-latency Myrinet interconnect
- 7 * 8-way nodes (4 chips with 2 cores each), 32 GB
- 64 * 4-way nodes (2 chips with 2 cores each), 8 GB
Leeds (continued)
- SGE, Globus/GSI
- Intel, GNU, and PGI compilers
- Shared-memory and Myrinet MPI
- NAG, FFTW, BLAS, LAPACK, etc. libraries
- 32- and 64-bit software versions
Maxima Transition
- Maintenance to June 2006; expensive
- Need to move all home directories to the SAN
- Users can still use it, but "at risk"

Snowdon Transition
- Maintenance until June 2007
- Home directories already on the SAN
- Users encouraged to move
Sheffield
- Iceberg: supplied by Sun Microsystems/Streamline
- 160 * 2.4 GHz AMD Opteron (PC technology) processors
- 64-bit Scientific Linux (Red Hat based)
- 20 * 4-way nodes, 16 GB, fast Myrinet for parallel/large jobs
- 40 * 2-way nodes, 4 GB, for high throughput
- GNU and Portland Group compilers, NAG
- Sun Grid Engine (6), MPI, OpenMP, Globus
- Abaqus, Ansys, Fluent, Maple, Matlab
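Work on clusters such as Iceberg is submitted through Sun Grid Engine. A minimal batch script might look like the following sketch; the job name and runtime limit are illustrative, not Iceberg's actual defaults:

```shell
#!/bin/sh
# Minimal Sun Grid Engine batch script (illustrative names and limits).
# The "#$" lines are SGE directives: read by qsub, ignored by the shell.
#$ -N wrg_hello          # job name
#$ -cwd                  # run in the submission directory
#$ -l h_rt=00:10:00      # hard wall-clock limit of 10 minutes
echo "Job running on $(hostname)"
```

Such a script would be submitted with `qsub wrg_hello.sh`; `qstat` shows the queue state and `qdel` removes a job.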
Also at Sheffield
- GridPP (Particle Physics Grid) node
- 160 * 2.4 GHz AMD Opteron: 80 * 2-way nodes, 4 GB
- 32-bit Scientific Linux
- ppGrid stack
- 2nd most productive; very successful!
Popular!
- Lots of users at Sheffield: 827 (White Rose: 37)
- Utilisation is high:
  - Since installation: 40%
  - Last 3 months: 80% (White Rose share: 26%)
York
- £205k from SRIF 3:
  - £100k computing systems
  - £50k storage system
  - Remainder for ancillary equipment and contingency
- Shortlist agreed(?) for June
- Compute: possibly Opteron
- Storage: possibly 10 TB
Other Resources: YHMAN
- Leased fibre, 2 Gb/s
- Performance
- Wide-area MetroLAN
- UKLight
- Archiving
- Disaster recovery
Grid Resources
- Queuing: Sun Grid Engine (6)
- Globus Toolkit 2.4 is installed and working
  - One issue: GSI-SSH on the 64-bit OS (ancient GTK)
- Globus 4 being looked at
- Storage Resource Broker being worked on
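Access to remote clusters through Globus/GSI follows the usual Globus Toolkit 2 client workflow. A hedged sketch is below; the gatekeeper hostname is a placeholder, and the commands assume a GT2 client install plus a valid grid certificate:

```shell
#!/bin/sh
# Sketch of the GT2 client workflow for reaching a grid cluster.
# "gatekeeper.example.ac.uk" is a placeholder, not a real WRG host.
if command -v grid-proxy-init >/dev/null 2>&1; then
    grid-proxy-init                                        # create a short-lived GSI proxy
    globus-job-run gatekeeper.example.ac.uk /bin/hostname  # run one command via the gatekeeper
    gsissh gatekeeper.example.ac.uk                        # interactive GSI-SSH login
else
    echo "Globus client tools not installed"
fi
```

The proxy created by `grid-proxy-init` is what lets the single registration work across all three sites without per-cluster passwords.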
Training
- Available across the White Rose universities
- Sheffield RTP: 4 units, 5 credits each
  - High Performance and Grid Computing
  - Programming and Application Development for Computational Grids
  - Techniques for High Performance Computing including Distributed Computing
  - Grid Computing and Application Development
Contacts
- Leeds: Joanna Schmidt, +44 (0)
- Sheffield: Michael Griffiths or Peter Tillotson, +44 (0) , +44 (0)
- York: Aaron Turner, (0) 190
Futures
- FEC will have an impact:
  - Can we maintain 25% use from the other sites?
  - How can we fund continuing grid work?
- Different funding models are a challenge:
  - Leeds: departmental shares
  - Sheffield: unmetered service
  - York: based in Computer Science
- Relationship opportunities: NGS, WUN, the region, suppliers?
Achievements
- White Rose Grid: not hardware, but services
- People(!): familiar with working on the Grid
- Experience of working as a virtual organisation
- Intellectual property in training
- Success:
  - Research
  - Engaging with industry
  - Solving user problems