Presentation transcript:

Founded in 2010: UCL, Southampton, Oxford and Bristol.

Key objectives of the Consortium:
- Prove the concept of shared, regional e-infrastructure services.
- Focus on providing a sustainable, effective research platform service for the Consortium members.
- Drive Consortium collaboration.
- Drive industry engagement.
- Explore all areas of e-infrastructure (Research Data, Scientific Software, …).

Governance:
- Strategic/Policy: Executive Board, User Group.
- Centre/Project/Operations: Project Board, Operations Group.

The UK Government decided there was a need for regional research infrastructure to link into national facilities:
- National HPC
- Northern 8
- Strathclyde/Glasgow (West)
- Midlands (Leicester, Loughborough)

EPSRC Regional HPC call, December 2011: Oxford, UCL, Southampton, Bristol (+ STFC RAL).
Funding: £2.82 million capital; £701K recurrent (one year only).

Centre for Innovation, two facilities:
1. General-purpose Intel-based HPC cluster (IRIDIS): £1.7 million, 12,000 cores in year two. Based at and operated by Southampton.
2. GPGPU cluster (EMERALD): £1.1 million, based on 372 NVIDIA M2090 GPUs. Largest in the UK, 2nd largest in Europe. Based at the Harwell campus and operated by STFC RAL on behalf of the Consortium.

To support multi-disciplinary research, with a centre of gravity in engineering and physical sciences, reaching out to other disciplines.
To encourage and enable industrial usage and collaboration.

Two HPC systems create a unique regional facility.

System 1 – hosted at the University of Southampton:
- System provided by OCF and IBM, based on the IBM iDataPlex platform.
- Upgrade to the existing system to create a 12,000-core Intel x86 Westmere system.
- 113 TFLOP peak performance (a rough check of this figure is sketched below).
- High-speed InfiniBand interconnect.
- High-speed GPFS parallel file system.
- Managed by Moab/Torque.
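The 113 TFLOP peak is consistent with simple FLOPS arithmetic. As a rough check, assuming 4 double-precision FLOPs per cycle per core (SSE on Westmere) and a clock of about 2.35 GHz (the exact processor part is not given on the slide, so the clock rate is an assumption):

\[
R_{\text{peak}} \approx 12{,}000\ \text{cores} \times 2.35 \times 10^{9}\ \tfrac{\text{cycles}}{\text{s}} \times 4\ \tfrac{\text{FLOP}}{\text{cycle}} \approx 1.13 \times 10^{14}\ \tfrac{\text{FLOP}}{\text{s}} \approx 113\ \text{TFLOP/s}.
\]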

System 2 – hosted by STFC at the Rutherford Appleton Laboratory:
- Largest in the UK, second largest GP-GPU system in Europe.
- Built by HP, integrating Panasas storage and Gnodal networking.
- Based on NVIDIA Tesla M2090 GPU cards: 372 M2090 GP-GPUs in total.
- 114 TFLOP performance measured for the Top500 run.
- 3 login nodes.
- 84 HP compute nodes (a mix of 3-GPU and 8-GPU nodes).
- 135 TB Panasas ActiveStor 11 storage.
- Dual interconnects across the entire cluster: 10GbE (Gnodal) and QDR InfiniBand.
- Managed via the LSF scheduler.
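For scale, 372 M2090 cards at NVIDIA's nominal ~665 GFLOPS double-precision peak per card give a theoretical peak of roughly 247 TFLOPS, so the measured 114 TFLOP figure corresponds to roughly 46% of peak. To give a concrete sense of the kind of workload a GPGPU cluster like EMERALD runs, the sketch below is a minimal single-GPU CUDA vector addition. It is illustrative only, not taken from the EMERALD software stack, and every name in it is hypothetical; on a system like this, such a program would normally be submitted as a batch job through the LSF scheduler rather than run interactively.

// Minimal single-GPU CUDA sketch: vector addition, illustrative only.
// All names here are hypothetical; this is not part of the EMERALD software stack.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Each thread adds one element of a and b and writes the result to c.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;                    // one million elements
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}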

Usage by processor hours (4,000-core partition): earth sciences and earth systems modelling, climate modelling, biomaterials, catalysis, renewable energy, earth materials, astronomy and astrophysics, atmospheric physics, chemistry, biochemistry, zoology, oncology, human genetics, neuroscience and neuroimaging, structural biology.

Usage by CPU hours: molecular dynamics, chemistry, CFD (aeronautics and mathematics), software engineering (optimisation), biological signalling, computational statistics, chemistry, biochemistry, zoology, neuroscience and neuroimaging, maths, finance, statistics.
Industrial: pharmaceuticals, aerospace.