NICS Update
Bruce Loftis, 16 December 2009

National Institute for Computational Sciences
A University of Tennessee and ORNL partnership
- NICS is the 2nd NSF Track 2 center, building on the strengths of UT and ORNL
- NICS operates the first academic petascale supercomputer in the world
- Staff of 25

NICS Timeline
- September 2007: Track-2B award announced for the University of Tennessee
- June 2008: Cray XT3 in production (7K cores, 40 TF)
- August 2008: Cray XT4 in production (18K cores, 166 TF)
- February 2009: Cray XT5 in production (65K cores, 600 TF, 2.3 GHz quad-core Barcelona)
- November 2009 (actual: October 5, 2009): Cray XT5 Istanbul upgrade in production (~100,000 cores, 2.6 GHz, 1 PF)

Delivered over 305M hours to researchers!
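As a rough consistency check on the 1 PF figure (my arithmetic, not a slide figure): peak performance is cores times clock times FLOPs per cycle, and these Opterons retire 4 double-precision FLOPs per core per cycle:

\[
P_{\text{peak}} \approx 100{,}000\ \text{cores} \times 2.6\ \text{GHz} \times 4\ \tfrac{\text{FLOPs}}{\text{cycle}} \approx 1.04\ \text{PF}
\]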

Upgrade to Istanbul Processors (September 2009)

|                  | Barcelona      | Istanbul             | Impact                    |
|------------------|----------------|----------------------|---------------------------|
| Clock speed      | 2.3 GHz        | 2.6 GHz              |                           |
| Core count       | 8 per node     | 12 per node          | 70% more FLOPS            |
| Memory bandwidth | 18.1 GB/s/node | >21.2 GB/s/node      | 17% more memory bandwidth |
| Level 3 cache    | 4 MB/node      | 12 MB/node           | 3x more cache             |
| Memory upgrade   | mixed          | 16 GB / compute node |                           |
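The impact figures follow directly from the hardware ratios (my arithmetic): more cores and a faster clock combine multiplicatively for peak FLOPs,

\[
\frac{12}{8} \times \frac{2.6}{2.3} \approx 1.5 \times 1.13 \approx 1.70,
\]

i.e. roughly 70% more peak FLOPs per node, while $21.2 / 18.1 \approx 1.17$ gives the 17% bandwidth gain.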

An International, Dedicated High-End Computing Project to Revolutionize Climate Modeling

Partners: COLA, NICS, JAMSTEC, U of Tokyo, ECMWF
- COLA: Center for Ocean-Land-Atmosphere Studies, Calverton, MD, USA
- ECMWF: European Centre for Medium-Range Weather Forecasts, Reading, England
- JAMSTEC: Japan Agency for Marine-Earth Science and Technology, Tokyo, Japan
- UTokyo: University of Tokyo, Japan
- NICS: National Institute for Computational Sciences, University of Tennessee, Knoxville, TN, USA

NICAM: the only global atmospheric model capable of resolving clouds
[Figure: side-by-side satellite observation and model simulation]

User Survey: Big Results
- Requirements for computing capability and archival storage will continue to grow: 15 respondents expect to store as much as 1 PByte annually in the next 5 years
- Significant interest in exploring GPGPUs to increase computing capability
- Concern about visualization and analysis of output data: moving large datasets home or elsewhere is a challenge, which makes remote visualization appealing (see the sketch below)
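To put the data-movement concern in perspective, here is a back-of-envelope transfer-time estimate; a minimal sketch in which the link rates and the 80% achieved-throughput factor are illustrative assumptions, not survey figures:

```python
# Rough estimate of wall-clock time to move a dataset off-site.
# Link rates and the 80% efficiency factor are assumptions for
# illustration, not measured NICS or TeraGrid numbers.

PETABYTE = 1e15  # bytes

def transfer_days(nbytes: float, link_gbps: float, efficiency: float = 0.8) -> float:
    """Days to move `nbytes` over a link of `link_gbps` gigabits per second,
    assuming only `efficiency` of the line rate is actually achieved."""
    seconds = nbytes * 8 / (link_gbps * 1e9 * efficiency)
    return seconds / 86_400

for gbps in (1, 10, 100):
    print(f"1 PB over {gbps:>3} Gb/s: {transfer_days(PETABYTE, gbps):6.1f} days")
# ~116 days at 1 Gb/s, ~12 days at 10 Gb/s -- hence the appeal of
# analyzing and visualizing the data where it already lives.
```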

Keeneland: An NSF-Funded Partnership to Enable Large-Scale Computational Science on Heterogeneous Architectures
- Track-2D: Experimental HPC System of Innovative Design, a large GPU cluster
  - Initial-delivery Fermi system: Spring 2010
  - Full-scale system: Spring 2012
- Partners: Georgia Tech, NICS, U of Tennessee, Oak Ridge, NVIDIA, HP
- Software tools and application development
- TeraGrid Resource Provider
- Operations and user support at NICS
- Education, outreach, and training for scientists, students, and industry

Nautilus: Remote Data Analysis and Visualization (Sean Ahern, PI)
- Partnership between UT, ORNL, LBNL, U of Wisconsin, and NCSA
- Purpose: provide TeraGrid and XD users with remote and shared-memory resources for exploring, analyzing, and visualizing large-scale data
- Central hardware: Nautilus, a shared-memory SGI UltraViolet system with:
  - 1024 Intel Nehalem EX cores
  - 4 TB of shared memory
  - 16 GPUs
  - ~1 PB filesystem
  - TeraGrid and Kraken connectivity
- Provides a wide range of software tools for data analysis, visualization, and workflow management
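For scale (my arithmetic, comparing against the XT5 node figures above): every core on Nautilus can address the full 4 TB, and even divided evenly that is several times the per-core memory of the compute nodes,

\[
\frac{4\ \text{TB}}{1024\ \text{cores}} \approx 4\ \tfrac{\text{GB}}{\text{core}}
\quad\text{vs.}\quad
\frac{16\ \text{GB}}{12\ \text{cores}} \approx 1.3\ \tfrac{\text{GB}}{\text{core}}\ \text{on the upgraded XT5},
\]

which is what makes a shared-memory machine attractive for datasets that do not decompose neatly across distributed nodes.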

Timeline for Deployment
[Figure: deployment timeline; not captured in the transcript]

NICS Education, Outreach, and Training
- "Introduction to Petascale Computing" workshop for the 2009 Richard Tapia Celebration of Diversity in Computing
- Co-sponsored and organized a workshop for the NIMBioS Center at UT-Knoxville
- Outreach to the student chapter of the Society of Women Engineers at UT-Knoxville
- Two million CPU hours committed to the Institute for Computing in the Humanities, Arts and Social Sciences
- Participated in student and teacher programs at TG '09 and SC09
- Virtual School for Computational Science and Engineering: hosted two summer schools during summer 2009; hosting three week-long schools in 2010
- Co-sponsored hex-core Cray XT5 workshops in December 2009 (ORNL) and February 2010 (UC-Berkeley)
- TeraGrid/Blue Waters technical workshop, Austin, March 2010
- TeraGrid introductory training via ReadyTalk, quarterly, online
- Introductory workshops for UTK users and undergraduate classes

Thank You!