Template
- This is a template to help, not constrain, you. Modify it as appropriate.
- Move bullet points to additional slides as needed; don't cram onto a single slide in small font.
- Feel free to add content, but stay within your time limit.

Present and Future Computing Requirements for [enter your project]
Presenter
Affiliation
NERSC ASCR Requirements for 2017
January 15, 2014
LBNL

1. Project Description
- List of PI(s)/institution(s)
- Summarize your project(s) and its scientific objectives through 2017
- Our present focus is …
- By 2017 we expect to …

2. Computational Strategies
- We approach this problem computationally at a high level by …
- The codes we use are …
- These codes are characterized by these algorithms: …
- Our biggest computational challenges are …
- Our parallel scaling is limited by …
- We expect our computational approach and/or codes to change (or not) by 2017 in this way: …

3. Current HPC Usage (see slide notes)
- Machines currently used (NERSC or elsewhere)
- Hours used (list different facilities)
- Typical parallel concurrency and run time, number of runs per year
- Data read/written per run
- Memory used per (node | core | globally)
- Necessary software, services, or infrastructure
- Data resources used (/scratch, HPSS, NERSC Global File System, etc.) and amount of data stored

4. HPC Requirements for 2017
(Key point: directly link NERSC requirements to science goals.)
- Compute hours needed (in units of Hopper hours; an illustrative estimate follows this slide)
- Changes to parallel concurrency, run time, number of runs per year
- Changes to data read/written
- Changes to memory needed per (core | node | globally)
- Changes to necessary software, services, or infrastructure
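As a concrete illustration of the compute-hours estimate (the run count, concurrency, and wall-clock time below are hypothetical, not taken from the template), multiply the runs per year by the cores per run and the wall-clock hours per run, then convert to Hopper core-hours if the measurements come from a different machine:

\[
\text{hours} \approx N_{\text{runs}} \times N_{\text{cores/run}} \times t_{\text{wall}}
= 100 \times 4096 \times 12\,\text{h} \approx 4.9\ \text{million core-hours}
\]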

5. Strategies for New Architectures (1 of 2)
- Does your software have CUDA/OpenCL code or accelerator directives? If yes, are they used; if not, are there plans for this?
- Does your software run in production now on Titan using the GPUs?
- Does your software have OpenMP directives now? If yes, are they used; if not, are there plans for this? (A minimal directive sketch follows this slide.)
- Does your software run in production now on Mira or Sequoia using threading?
- Is porting to, and optimizing for, the Intel MIC architecture underway or planned?
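To make the OpenMP question concrete, here is a minimal sketch of a loop-level OpenMP directive in C; the array size, the update it performs, and the program itself are illustrative placeholders, not part of the template. An accelerator port would instead use CUDA/OpenCL kernels or OpenACC/OpenMP target directives.

```c
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    double *field = malloc(N * sizeof *field);
    if (!field) return 1;
    for (int i = 0; i < N; i++)
        field[i] = (double)i;

    /* One work-sharing directive spreads the loop over the threads
       available on the node (controlled by OMP_NUM_THREADS). */
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < N; i++)
        field[i] = 0.5 * (field[i] + 1.0);   /* placeholder update */

    printf("max threads: %d, field[0] = %g\n", omp_get_max_threads(), field[0]);
    free(field);
    return 0;
}
```

Build with an OpenMP flag (for example, cc -fopenmp example.c).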

5. Strategies for New Architectures (2 of 2)
- Have there been, or are there now, other funded groups or researchers engaged to help with these activities?
- If you answered "no" to the questions above, please explain your strategy for transitioning your software to energy-efficient, manycore architectures.
- What role should NERSC play in the transition to these architectures?
- What role should DOE and ASCR play in the transition to these architectures?
- Other needs, considerations, or comments on the transition to manycore:

5. Special I/O Needs
- Does your code use checkpoint/restart capability now? (A minimal application-level sketch follows this slide.)
- Do you foresee that a burst buffer architecture would provide significant benefit to you or to users of your code?
- Scenarios for possible burst buffer use are in NERSC8-use-case-v1.2a.pdf.
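For the checkpoint/restart question, the sketch below shows a minimal application-level approach in C: periodically write the step counter and state array to a file, and reload them at startup if the file exists. The file name, state size, step count, and checkpoint interval are illustrative assumptions; production codes more commonly checkpoint through parallel I/O libraries such as MPI-IO or HDF5, which is also where a burst buffer would typically be exploited.

```c
#include <stdio.h>
#include <stdlib.h>

#define N          1000000
#define CKPT_FILE  "state.ckpt"     /* hypothetical checkpoint file name */

/* Dump the next step counter and the full state array to disk. */
static int write_checkpoint(const double *state, long next_step)
{
    FILE *f = fopen(CKPT_FILE, "wb");
    if (!f) return -1;
    fwrite(&next_step, sizeof next_step, 1, f);
    fwrite(state, sizeof *state, N, f);
    fclose(f);
    return 0;
}

/* Load a previous checkpoint if one exists; otherwise start from step 0. */
static long read_checkpoint(double *state)
{
    long next_step = 0;
    FILE *f = fopen(CKPT_FILE, "rb");
    if (!f) return 0;
    if (fread(&next_step, sizeof next_step, 1, f) != 1 ||
        fread(state, sizeof *state, N, f) != (size_t)N)
        next_step = 0;
    fclose(f);
    return next_step;
}

int main(void)
{
    const long total_steps = 1000, ckpt_interval = 100;
    double *state = calloc(N, sizeof *state);
    if (!state) return 1;

    long step = read_checkpoint(state);        /* resume if possible */
    for (; step < total_steps; step++) {
        state[step % N] += 1.0;                /* placeholder computation */
        if ((step + 1) % ckpt_interval == 0)
            write_checkpoint(state, step + 1); /* periodic checkpoint */
    }
    free(state);
    return 0;
}
```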

6. Summary
- What new science results might be afforded by improvements in NERSC computing hardware, software, and services?
- Recommendations on NERSC architecture, system configuration, and the associated service requirements needed for your science
- NERSC generally refreshes systems to provide, on average, a 2X performance increase every year. What significant scientific progress could you achieve over the next 5 years with access to 32X your current NERSC allocation?
- What "expanded HPC resources" are important for your project?
- General discussion