NCCS Hardware
Presented by Jim Rogers, Director of Operations, National Center for Computational Sciences

NCCS resources: October 2007 summary

7 systems, connected by network routers (UltraScience, 10 GigE, 1 GigE, and control networks).

Supercomputers (51,110 CPUs, 68 TB memory, 336 TFlops, … TB total shared disk):
- Cray XT4 "Jaguar": 23,662 cores at 2.6 GHz, 45 TB memory, 900 TB disk
- Cray X1E "Phoenix": 1,024 processors at 0.5 GHz, 2 TB memory, 44 TB disk
- Track II: quad-core 2.3 GHz processors, 18,048 GB memory
- IBM Blue Gene/P: 2,048 quad-core 850 MHz processors, 4,096 GB memory
- IBM Linux NSTG: 56 processors at 3 GHz, 76 GB memory, 4.5 TB disk
- Visualization cluster: 128 processors at 2.2 GHz, 128 GB memory, 9 TB disk

Storage:
- 5 PB data storage
- 5 PB backup storage
- IBM HPSS (5 TB): many storage devices supported

Scientific visualization lab: 35-megapixel Powerwall

Test systems: 96-processor Cray XT3

Evaluation platforms: 144-processor Cray XD1 with FPGAs, SRC MAPstation, ClearSpeed, Blue Gene (at ANL)

Hardware roadmap

As it looks to the future, the National Center for Computational Sciences expects to lead the accelerating field of high-performance computing. Upgrades will boost Jaguar's performance to 250 teraflops by the end of 2007, followed by the installation of two separate petascale systems in 2009.

Jaguar system specifications

- 54 TF: 5,212 dual-core 2.6 GHz Opteron compute processors; 82 single-core 2.4 GHz Opteron SIO processors; 4 GB memory per socket, 20 TB total; SeaStar interconnect; 120 TB disk, 14 GB/s disk bandwidth
- 100 TF: 11,508 dual-core 2.6 GHz Opteron compute processors; 116 dual-core 2.6 GHz Opteron SIO processors; 4 GB memory per socket, 45 TB total; SeaStar 2 interconnect; 900 TB disk
- 250 TF: 7,816 quad-core Opteron compute processors; 124 dual-core Opteron SIO processors; 8 GB memory per socket, 63 TB total; SeaStar 2 interconnect; 900 TB disk, 55 GB/s total disk bandwidth
- 1,000 TF: 24,000 multi-core Opteron compute processors; 544 quad-core Opteron SIO processors; 32 GB memory per socket, 768 TB total; Gemini interconnect; 5-15 PB disk, 240 GB/s disk bandwidth
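A quick way to sanity-check these milestones is the usual peak-performance arithmetic: sockets × cores per socket × clock × floating-point operations per core per cycle. The short C sketch below applies it to the two dual-core configurations; the 2-flops-per-cycle-per-core figure for these Opterons is an assumption on my part, not something stated in the table.

```c
/* Back-of-the-envelope peak-performance check for the dual-core Jaguar
 * configurations. Assumed (not stated in the table): 2 floating-point
 * operations per core per cycle for these Opterons. */
#include <stdio.h>

/* peak TFLOPS = sockets * cores/socket * GHz * flops/cycle / 1000 */
static double peak_tflops(int sockets, int cores_per_socket,
                          double ghz, double flops_per_cycle)
{
    return sockets * cores_per_socket * ghz * flops_per_cycle / 1000.0;
}

int main(void)
{
    printf("5,212 dual-core sockets : %5.1f TF\n",
           peak_tflops(5212, 2, 2.6, 2.0));
    printf("11,508 dual-core sockets: %5.1f TF\n",
           peak_tflops(11508, 2, 2.6, 2.0));
    return 0;
}
```

These work out to roughly 54 TF and 120 TF, matching the 54 TF milestone here and the 120 TF quoted for the current XT4 two slides below; the table's 100 TF column label appears to be a program milestone rather than a theoretical peak.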

Phoenix – Cray X1E

- Ultra-high bandwidth
- Globally addressable memory
- Addresses large-scale problems that cannot be done on any other computer

Cray X1E: 1,024 vector processors, 18.5 TF
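Globally addressable memory means any processor can read or write data that lives in another processor's memory directly, without the remote side posting a matching receive. As a rough illustration of the programming model (not the exact library that shipped on the X1E), here is a minimal one-sided put written against the OpenSHMEM API; the array name and sizes are arbitrary choices for the example.

```c
/* Minimal one-sided "put" into globally addressable memory, OpenSHMEM style.
 * Illustrative sketch only; the X1E-era Cray SHMEM API differed in details. */
#include <stdio.h>
#include <shmem.h>

#define MAX_PES 4096
long ranks[MAX_PES];   /* symmetric: allocated at the same address on every PE */

int main(void)
{
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();
    long my  = me;

    /* One-sided write: each PE deposits its rank into PE 0's copy of the
     * array without PE 0 doing anything to receive it. */
    shmem_long_put(&ranks[me], &my, 1, 0);

    shmem_barrier_all();   /* make all remote writes visible */

    if (me == 0)
        for (int i = 0; i < npes; i++)
            printf("slot %d holds rank %ld\n", i, ranks[i]);

    shmem_finalize();
    return 0;
}
```

On hardware with true global addressing, remote references like this map onto the network without intermediate message matching, which is what makes fine-grained, irregular access patterns practical at scale.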

Today: Jaguar – Cray XT4

- 120 TF Cray XT4
- 2.6 GHz dual-core AMD Opteron processors
- 11,508 compute nodes
- 900 TB disk
- 124 cabinets
- Currently partitioned as 96 cabinets of Catamount and 32 cabinets of Compute Node Linux
- November 2007: all cabinets running Compute Node Linux

Jaguar – Cray XT4 scalable architecture

- Designed to scale to tens of thousands of processors
- Measured MPI bandwidth of 1.8 GB/s
- 3-D torus topology
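Figures such as the 1.8 GB/s measured MPI bandwidth typically come from a ping-pong test between two nodes: bounce a large message back and forth, time the loop, and divide bytes moved by elapsed time. Below is a minimal sketch of that measurement; the 4 MiB message size and 100 repetitions are arbitrary choices, not parameters from the slide.

```c
/* Minimal MPI ping-pong bandwidth sketch. Run with at least two ranks;
 * only ranks 0 and 1 participate in the exchange. */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    const int nbytes = 4 * 1024 * 1024;   /* 4 MiB message (arbitrary) */
    const int reps   = 100;

    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    char *buf = calloc(nbytes, 1);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < reps; i++) {
        if (rank == 0) {
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        /* Two messages per iteration; since the link carries traffic in one
         * direction at a time, this equals the one-way bandwidth. */
        double gbs = (2.0 * reps * nbytes) / (t1 - t0) / 1e9;
        printf("ping-pong bandwidth: %.2f GB/s\n", gbs);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}
```

Reported numbers usually sweep a range of message sizes and quote the plateau, but the structure of the measurement is the same.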

Phoenix – Cray X1E

[Chart: Phoenix utilization by discipline (%), October through August, broken out by Astro, Atomic, Biology, Chemistry, Climate, Fusion, Industry, Materials, and Other.]

Jaguar – Cray XT4

[Chart: Jaguar utilization by discipline (%), October through August, broken out by Astro, Biology, Chemistry, Climate, Combustion, Fusion, Industry, Materials, Nuclear, and Other.]

Contact

Jim Rogers
Director of Operations
National Center for Computational Sciences
(865)