A Brief Introduction to NERSC Resources and Allocations


A Brief Introduction to NERSC Resources and Allocations
Richard Gerber & Scott French, NERSC User Services Group
UC Berkeley Research IT Reading Group, February 12, 2015

NERSC is the Production HPC & Data Facility for DOE Office of Science Research
The Office of Science is the largest funder of physical science research in the U.S. Its research areas include:
- Materials, Chemistry, Geophysics
- Bio Energy, Environment
- Computing
- Particle Physics, Astrophysics
- Nuclear Physics
- Fusion Energy, Plasma Physics

NERSC Mission and Strategic Areas
NERSC Mission: To accelerate scientific discovery at the DOE Office of Science (SC) through high performance computing and data analysis.
Strategic Objectives: Meet the ever-growing computing and data needs of our users by:
- Providing usable exascale computing and storage systems
- Transitioning DOE SC codes to execute effectively on manycore architectures
- Influencing the computer industry to ensure that future systems meet the mission needs of DOE SC
This enables our users to tackle some of the most computationally challenging scientific problems today.

NERSC: Science First
Focus on science:
- A world-class resource to support world-class science
- 17 journal covers in 2013; 1,500 refereed journal publications per year
- Supports Nobel Prize-winning projects: Chemistry 2013, Physics 2011, Peace 2007
Large, diverse user community:
- 5,000 users, 700 projects
- From 48 states; 65% from universities
- Many large international collaborations
Science-driven systems and services:
- Designed to support science
- Optimized for scientific productivity at cutting-edge scale

Some Characteristics of NERSC
- Large, state-of-the-art computing and data systems
- Consulting, system administration, and 24x7 operations support
- Well-maintained software environment with prebuilt, optimized applications and libraries
- Designed for massive parallelism, but supports jobs at all scales (see the sketch below)
- Easy sharing of data and codes
- Easy-to-use account management
- Large permanent archival data storage
- Ongoing technology refreshes
- World-class cybersecurity
- Open science environment
- Web-based science and data gateways
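None of these slides include code, but the "massive parallelism" point is easy to make concrete. Below is a minimal, hypothetical sketch of the MPI-style decomposition typical of large NERSC jobs, written against the mpi4py binding; the toy problem (summing the integers below N) and every name in it are illustrative assumptions, not material from the presentation.

```python
# Hypothetical sketch: split one large problem across many MPI ranks,
# then combine the partial results with a reduction.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID, 0 .. size-1
size = comm.Get_size()   # total number of MPI processes

# Each rank sums a strided slice of 0 .. N-1.
N = 100_000_000
local_sum = sum(range(rank, N, size))

# Combine the partial sums on rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print(f"sum over {size} ranks: {total}")
```

The same script runs unchanged at 2 ranks or 10,000, which is the "supports all scales" point: on a laptop it launches with something like `mpiexec -n 4 python demo.py`, while on the Cray systems described below it would be submitted through the batch system.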

NERSC Demographics
- 5,000 users from 48 U.S. states
- 642 international users

DOE View of Workload
[Chart: NERSC 2013 allocations by DOE office]
- ASCR: Advanced Scientific Computing Research
- BER: Biological & Environmental Research
- BES: Basic Energy Sciences
- FES: Fusion Energy Sciences
- HEP: High Energy Physics
- NP: Nuclear Physics

Science View of Workload
[Chart: NERSC 2013 allocations by science area]

NERSC Supports Jobs of All Kinds and Sizes
[Chart spanning two dimensions of the workload: high-throughput jobs (statistics, systematics, analysis, UQ) and capability jobs (larger physical systems, higher fidelity)]
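To make the high-throughput end of this chart concrete, here is a small, hypothetical Python sketch of the pattern: many tiny independent cases (a parameter sweep of the sort used for statistics, systematics, or UQ) farmed out across the cores of a node. The task body and parameter grid are placeholders, not anything from the presentation; the opposite end of the chart is a single tightly coupled job like the MPI sketch above, scaled up rather than out.

```python
# Hypothetical sketch of a high-throughput workload: a sweep of
# independent cases, each trivially small on its own.
from concurrent.futures import ProcessPoolExecutor

def run_case(params):
    """Stand-in for one independent simulation or analysis task."""
    x, y = params
    return x * x + y * y   # placeholder for the real science

if __name__ == "__main__":
    grid = [(x, y) for x in range(100) for y in range(100)]
    # Workers default to the number of cores on the node.
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(run_case, grid))
    print(f"completed {len(results)} independent cases")
```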

NERSC Systems Today
Compute systems:
- Edison: Cray XC30, 2.57 PF, 357 TB RAM; 5,576 nodes, 134K cores; 7.6 PB local scratch at 163 GB/s (16 x FDR IB)
- Hopper: Cray XE6, 1.3 PF, 212 TB RAM; 6,384 nodes, 150K cores; 2.2 PB local scratch at 70 GB/s (16 x QDR IB)
- Production clusters (14 x QDR): Carver, PDSF, JGI, KBASE, HEP
- Vis & analytics, data transfer nodes, advanced architecture testbeds, science gateways
Shared storage (Ethernet & IB fabric):
- Global scratch: 3.6 PB, 80 GB/s (5 x SFA12KE)
- /project: 5 PB, 50 GB/s (DDN9900 & NexSAN)
- /home: 250 TB, 5 GB/s (NetApp 5460)
- HPSS archive: 50 PB stored, 240 PB capacity, 12 GB/s; 20 years of community data
Network and operations:
- WAN: 2 x 10 Gb plus 1 x 100 Gb, with software-defined networking
- Science-friendly security, production monitoring, power efficiency

NERSC Allocations
In essence, an allocation is a grant: not of money, but of access to:
- NERSC compute resources (a specific quantity of CPU time)
- NERSC storage resources (live and archival)
- Extensive user support (consulting, 24x7 operations, etc.)
Eligibility:
- All work done at NERSC must be within the DOE Office of Science mission
- Many projects are directly funded by the main Office of Science program areas (ASCR, BES, BER, FES, HEP, NP)
- Others are chosen because they are compatible with and of interest to the program area missions

NERSC Allocations (Cont’d)
Different allocation types are available:
- DOE Production: awards made directly by DOE program managers (the way most NERSC users get time); mid-sized to large allocations; yearly
- ALCC (ASCR Leadership Computing Challenge): large allocations; yearly
- DDR (Director’s Discretionary Reserve): strategic allocation awards
- Startup: awarded at NERSC’s discretion; small (up to 50k hours; see the worked example below); rolling
If you are moving up from an institutional cluster environment, a startup allocation is likely the right way to go.

Allocation type       % of time
DOE Production        80%
ALCC                  10%
DDR
Startup / Education   N/A
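For a sense of what the 50k-hour startup cap buys, here is a back-of-the-envelope sketch. It assumes time is charged in plain core-hours; NERSC's actual charging applied machine-dependent factors, so treat the numbers as illustrative only.

```python
# Illustrative arithmetic only: wall-time budget for a 50k core-hour
# allocation at a few job sizes (24 cores is one Edison-class node).
allocation_core_hours = 50_000

for cores in (24, 240, 2_400):
    wall_hours = allocation_core_hours / cores
    print(f"{cores:>5} cores -> about {wall_hours:6.1f} hours of wall time")

# Prints roughly:
#    24 cores -> about 2083.3 hours of wall time
#   240 cores -> about  208.3 hours of wall time
#  2400 cores -> about   20.8 hours of wall time
```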

Startup Allocations
- Apply any time of year; decisions are typically made within three weeks
- You must be able to demonstrate how your project fits within the specific Office of Science mission areas
A startup allocation is not the end goal; instead, it's an opportunity to:
- Get your codes up and running on NERSC resources while also being scientifically productive
- Perform detailed tuning and performance characterization to support a new DOE Production, ALCC, etc. application (see the timing sketch below)
Covered in detail on the web: http://www.nersc.gov/users/accounts/allocations/first-allocation/
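The "detailed tuning and performance characterization" item usually starts with simple, repeatable timing. The sketch below is a hypothetical first step using mpi4py and NumPy: time a representative kernel and report the slowest rank, since that is what sets the job's wall time. The kernel, array size, and names are assumptions for illustration, not NERSC tooling.

```python
# Hypothetical timing harness for performance characterization.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def kernel():
    """Stand-in for one iteration of the real application."""
    a = np.random.rand(2_000_000)
    return np.sort(a)[-1]

comm.Barrier()                  # start all ranks together
t0 = MPI.Wtime()
kernel()
elapsed = MPI.Wtime() - t0

# The max over ranks is the number to watch while scaling up.
slowest = comm.reduce(elapsed, op=MPI.MAX, root=0)
if rank == 0:
    print(f"{comm.Get_size()} ranks: slowest rank took {slowest:.3f} s")
```

Repeating the run at several rank counts yields the kind of scaling data a DOE Production or ALCC request typically needs.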

Thank you.