DiRAC-3 – The future
Jeremy Yates, STFC DiRAC HPC Facility

Three Services and a data heart
DiRAC-2 has successfully managed 5 services at four sites for 3 years.
The Science Case and Technical Case have been reviewed:
– a 40x uplift in computing and data systems is needed
– 10x from hardware, 4x from better code and workflows
Code and workflows will need to be developed and optimised to run on the new systems and architectures (see the sketch after this list):
– very few users write efficient code
– investment and methodology are in place to do this (for now)
User centricity requires:
– Cloud: the user sees a single "service"
– data federation and data services
– adoption of AAAI
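The 40x uplift factors as roughly 10x from hardware multiplied by 4x from software improvements. As a hedged illustration of the kind of per-node optimisation the "better code and workflows" figure refers to (a minimal toy kernel, not DiRAC code; the array size and speed-up are illustrative only), the Python sketch below replaces an interpreted element-wise loop with a single vectorised NumPy call:

# Minimal sketch (not DiRAC code): the kind of single-node optimisation that
# contributes to the "4x from better code and workflows" figure.
import time
import numpy as np

n = 1_000_000                      # illustrative problem size
a = np.random.rand(n)
b = np.random.rand(n)

def axpy_loop(a, b, alpha=2.0):
    """Naive interpreted loop: one Python-level operation per element."""
    out = np.empty_like(a)
    for i in range(len(a)):
        out[i] = alpha * a[i] + b[i]
    return out

def axpy_vectorised(a, b, alpha=2.0):
    """Same arithmetic expressed as one vectorised NumPy expression."""
    return alpha * a + b

t0 = time.perf_counter(); axpy_loop(a, b);       t1 = time.perf_counter()
t2 = time.perf_counter(); axpy_vectorised(a, b); t3 = time.perf_counter()
print(f"loop: {t1 - t0:.3f}s  vectorised: {t3 - t2:.3f}s  "
      f"speed-up ~{(t1 - t0) / (t3 - t2):.0f}x")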

DiRAC-3: Illustrative Structural Diagram
Science communities served: Lattice QCD, Astrophysics, Cosmology, Nuclear Physics, Particle Physics, Particle Astrophysics, Planetary Science, Solar Physics.
The three services:
– Extreme Scaling: maximal computational effort applied to a problem of fixed size
– Memory Intensive: larger memory footprint as the problem size grows with increasing machine power
– Data Intensive: tight compute/storage coupling to facilitate confrontation of complex simulations with Petascale data sets
Data heart: data management, internet, analytics, disaster recovery, data handling, archiving.
Supporting activities: many-core coding, data analytics, programming, code optimization, fine tuning, parallel management, multi-threading.
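The Extreme Scaling and Memory Intensive descriptions correspond to the standard strong-scaling and weak-scaling regimes. As a hedged illustration (these are the textbook Amdahl and Gustafson models, not formulas taken from the slide, and the serial fraction 0.05 is an arbitrary example value), a short Python sketch contrasting the two:

# Hedged sketch: textbook strong-scaling (Amdahl) vs weak-scaling (Gustafson)
# models, illustrating the Extreme Scaling / Memory Intensive distinction.

def amdahl_speedup(p: int, s: float) -> float:
    """Strong scaling: fixed problem size, p processors, serial fraction s."""
    return 1.0 / (s + (1.0 - s) / p)

def gustafson_speedup(p: int, s: float) -> float:
    """Weak scaling: problem size grows with p, serial fraction s."""
    return p - s * (p - 1)

for p in (1, 16, 256, 4096):
    print(f"p={p:5d}  strong={amdahl_speedup(p, 0.05):8.1f}  "
          f"weak={gustafson_speedup(p, 0.05):8.1f}")

With a 5% serial fraction the strong-scaling speed-up saturates near 20x however many processors are added, while the weak-scaling speed-up keeps growing with the machine, which is why a fixed-size problem calls for an Extreme Scaling system and a growing problem calls for a Memory Intensive one.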

DiRAC-3 illustrative system figures:
– Extreme Scaling: 10 Pflop/s, many-core, 100 GB/s 3D memory with 16 GB cache per card, 100 Gb/s InfiniBand standard IO, 10 PB parallel file system, 10–20 PB data storage
– Memory Intensive / Data Intensive: 1–3 Pflop/s with ≥256 TB RAM and 150 GB/s all-to-all InfiniBand IO; 1.25 Pflop/s x86 with >8 GB/core, 24 TB SMP, 14 PB secondary disk, 2 PB very fast (SSD?) parallel file system, >250 GB/s all-to-all InfiniBand
– Data management: 100 PB tape, 1 PB buffer, analytics (TBD), internet, disaster recovery, virtualization (to the desktop), connection to the outside world
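As a rough sanity check on these figures (back-of-envelope arithmetic derived from the numbers above, not stated on the slide), the sketch below works out how long it would take to stream 10 PB over a 100 Gb/s link versus a 250 GB/s fabric, and the bytes of IO available per floating-point operation on a 1–3 Pflop/s machine with 150 GB/s of IO:

# Back-of-envelope arithmetic on the illustrative DiRAC-3 figures above
# (derived numbers, not taken from the slide).

PB = 1e15           # bytes
GB = 1e9            # bytes
Gb = 1e9 / 8        # gigabit expressed in bytes

def hours_to_stream(volume_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Time in hours to move a data volume at a sustained bandwidth."""
    return volume_bytes / bandwidth_bytes_per_s / 3600.0

print(f"10 PB over 100 Gb/s : {hours_to_stream(10 * PB, 100 * Gb):.0f} h")
print(f"10 PB over 250 GB/s : {hours_to_stream(10 * PB, 250 * GB):.0f} h")

# Bytes of IO available per floating-point operation at 150 GB/s sustained IO.
io_bw = 150 * GB
for pflops in (1e15, 3e15):
    print(f"{pflops / 1e15:.0f} Pflop/s: {io_bw / pflops:.1e} bytes of IO per flop")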

Where do we fit in?
DiRAC is a sibling facility to UK-T0, joined by the need to confront models with data.
– Physically, this is how we should be joined up
– We need to establish relationships with LSST, SKA, Euclid, …
– AAAI and Cloud mean that, although we may be operationally distinct, from a user perspective we can look like just another resource/service

Workshop: Algorithms to Architecture
– An NeI Project Directors Group workshop on Data Intensive Computing in the Physical Sciences (HPC is very data intensive these days…)
– RCUK will fund this
– Jan/Feb next year
– Essentially for the ARCHER, DiRAC, UK-T0 and JASMIN communities
– A real drill-down into our problem sets
– Knowledge and skills sharing

Thanks for inviting me. It's been great so far.