Cray Environmental Industry Solutions. Per Nyberg, Earth Sciences Business Manager. CAS2K3, Annecy, Sept 2003.

SLIDE 2 Sept 2003 CAS2K3 Topics: Update on Cray Inc.; Cray Earth Sciences Focus; Cray X1 Application Performance.

SLIDE 3 Sept 2003 CAS2K3 In the news…

SLIDE 4 Sept 2003 CAS2K3 Recent Key Accomplishments: Cray X1 Introduction; Department of Energy Office of Science X1 Evaluation; Department of Energy ASCI Sandia Red Storm Contract; DARPA Contract/Cascade Project; Financial Foundation; Organizational Growth.

SLIDE 5 Sept 2003 CAS2K3 Cray X1 Introduction: Five early-production systems shipped; first customer ship was on schedule in December 2002. More than 260 nodes delivered; more than 13 Tflops. Production ramp-up in 2003.

SLIDE 6 Sept 2003 CAS2K3 DOE Office of Science X1 Evaluation

SLIDE 7 Sept 2003 CAS2K3 ORNL Evaluation

SLIDE 8 Sept 2003 CAS2K3 Sandia Red Storm ASCI Project $90+M contract with Sandia National Labs. Design and development of a massively parallel, high-bandwidth system. Key system characteristics –Massively parallel system – 10,000 AMD 2 GHz processors –High-bandwidth mesh-based custom interconnect –High-performance I/O subsystem –Fault tolerant Full system delivery in 2004

SLIDE 9 Sept 2003 CAS2K3 DARPA – Cascade Project Advanced Research Program –Goal of a “trans-petaflops system” –Robust, easier to program, more broadly applicable Phase I –Started in June 2002 for one year –Five total vendors –University partners Phase II –Cray proposal selected –Three total vendors –$49.9M three-year contract Phase III planned for 2006

SLIDE 10 Sept 2003 CAS2K3 Financial Highlights: 2002 Q1 through 2003 Q2 – six consecutive profitable quarters. Increasing revenue growth – 2003 revenue guidance: ~$220 million; X1 sales are expected to be ~$140 million. Significant R&D funding. Public offering in February 2003 raised $48M.
($ in thousands)             Dec 31, 2001   Dec 31, 2002   Mar 31, 2003
Total Debt                      $15,712         $4,537         $3,546
Cash and Cash Equivalents       $12,377        $23,916        $61,309
Working Capital                ($5,724)        $27,351        $82,412
Total Assets                   $127,087       $145,245       $201,653
Shareholders Equity             $39,750        $83,561       $138,162

SLIDE 11 Sept 2003 CAS2K3

SLIDE 12 Sept 2003 CAS2K3 Cray Earth Sciences Focus Key market for Cray technologies. –Market that pushes the frontiers of supercomputing. –Cray has a unique offering. Rapidly growing and developing a team of environmental applications analysts. –>15 PhD-level applications analysts worldwide. –6 hired within the last year. –Focused on continuously supporting key weather, climate and ocean applications. ORNL collaboration –Focus on high-end climate science. –Porting and optimization of NCAR CCSM to the X1.

SLIDE 13 Sept 2003 CAS2K3 Cray X1 Design Approach Cray X1 was designed from the ground up as an MPP architecture with vector processors: Highly scalable, shared-memory MPP. Scalable operating system with single system image. Heterogeneous Storage Area Network. (Diagram: a data-centric Storage Area Network connecting the Cray X1, RAID servers, and a tape archive.)
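As an aside on what a "shared memory MPP" means for the programmer (illustrative only, not taken from the slides): on globally addressable machines of this class, one process can write directly into another process's memory with a one-sided put, in the style of Cray SHMEM and the later OpenSHMEM standard. The sketch below uses standard OpenSHMEM calls; the array name and sizes are assumptions made for the example.

```c
/* Illustrative sketch only (not from the slides): a one-sided put in the
 * OpenSHMEM style, which descends from the Cray SHMEM model used on
 * globally addressable MPPs. Build with an OpenSHMEM wrapper, e.g. oshcc. */
#include <shmem.h>
#include <stdio.h>

#define N 1024

static double remote_buf[N];   /* symmetric: remotely accessible on every PE */

int main(void) {
    shmem_init();
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    double src[N];
    for (int i = 0; i < N; i++)
        src[i] = (double)me;

    /* Write src directly into the memory of the next PE in a ring.
     * No matching receive is posted on the target: one-sided communication. */
    shmem_double_put(remote_buf, src, N, (me + 1) % npes);
    shmem_barrier_all();

    printf("PE %d: remote_buf[0] = %.0f (written by PE %d)\n",
           me, remote_buf[0], (me - 1 + npes) % npes);

    shmem_finalize();
    return 0;
}
```

The point of the sketch is simply that remote memory is addressed directly, rather than through message matching as on a commodity cluster.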

SLIDE 14 Sept 2003 CAS2K3 Cray X1 Features Fast Single Processor: –12.8 Gflops MSP Memory Bandwidth: –Single processor: 24 GB/s STREAM TRIAD –Scaled: 1171 GB/s (15 Nodes/1 Chassis) STREAM TRIAD Network Performance: –A 128-node system has a typical latency of ~1 µs and a bisection bandwidth of ~820 GB/s. Scalable Architecture – not clustered: –2-D Torus network Scalable operating system: –Single Operating System to 512 MSPs (>6 Tflops) ADIC StorNext SAN and HSM.
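The 24 GB/s single-processor and 1171 GB/s scaled figures are STREAM TRIAD results. As a reminder of what that benchmark measures, here is a minimal C sketch of the TRIAD kernel; it is not the official STREAM code, and the array size and timing harness are assumptions made for illustration. Bandwidth is reported as bytes moved (three 8-byte elements per iteration) divided by loop time.

```c
/* Minimal sketch of the STREAM TRIAD kernel: a[i] = b[i] + q*c[i].
 * Not the official STREAM benchmark; size and timing are illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 20000000UL          /* 20M doubles per array, ~160 MB each */

int main(void) {
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    const double q = 3.0;

    for (size_t i = 0; i < N; i++) { b[i] = 1.0; c[i] = 2.0; }

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < N; i++)       /* the TRIAD loop itself */
        a[i] = b[i] + q * c[i];
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs  = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    double bytes = 3.0 * N * sizeof(double);   /* read b, read c, write a */
    printf("TRIAD bandwidth: %.2f GB/s\n", bytes / secs / 1e9);

    free(a); free(b); free(c);
    return 0;
}
```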

SLIDE 15 Sept 2003 CAS2K3 Cray X1 LC Chassis: 16 nodes; >800 Gflops. (Diagram of one chassis: each Cray X1 node delivers 51.2 GFLOPS. Local node memory: peak BW = 16 slices x 11.4 GB/s/slice = 182.4 GB/s; capacity = 16, 32 (late 2003) or 64 (late 2004) GB; 512 banks. Inter-node network: two ports per M-chip, 1.6 GB/s peak both directions per port, 2-D torus. Two I/O channel pairs per node = 4 x 1.2 GB/s = 4.8 GB/s.)
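The chassis-level numbers on this slide follow directly from the per-slice and per-node figures; the short sketch below (values copied from the slide, not measured) reproduces that arithmetic.

```c
/* Reproduces the chassis-level arithmetic from the slide's per-slice and
 * per-node figures. Values are taken from the slide, not measured. */
#include <stdio.h>

int main(void) {
    const int    slices_per_node   = 16;
    const double gbps_per_slice    = 11.4;   /* GB/s per memory slice */
    const int    nodes_per_chassis = 16;
    const double gflops_per_node   = 51.2;   /* i.e. 4 MSPs at 12.8 Gflops each */
    const int    io_channels       = 4;      /* two channel pairs per node */
    const double gbps_per_channel  = 1.2;

    printf("Local node memory peak BW: %.1f GB/s\n",
           slices_per_node * gbps_per_slice);        /* 182.4 GB/s */
    printf("Chassis peak compute:      %.1f Gflops\n",
           nodes_per_chassis * gflops_per_node);     /* 819.2 Gflops, i.e. >800 */
    printf("Node I/O bandwidth:        %.1f GB/s\n",
           io_channels * gbps_per_channel);          /* 4.8 GB/s */
    return 0;
}
```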

SLIDE 16 Sept 2003 CAS2K3 128 Nodes – 512 CPUs – 6.4 TFLOPS

SLIDE 17 Sept 2003 CAS2K3 Cray SAN Direction: Heterogeneous client nodes. Fail-over metadata servers. A true native heterogeneous SAN based on ADIC StorNext.

SLIDE 18 Sept 2003 CAS2K3 X1 STREAM Results

SLIDE 19 Sept 2003 CAS2K3 MM5 Ver t3a Benchmark Internal Data – Work-in-Progress

SLIDE 20 Sept 2003 CAS2K3 IFS T511L60 Performance

SLIDE 21 Sept 2003 CAS2K3 POP x1 Benchmark

SLIDE 22 Sept 2003 CAS2K3 Cray Product Line Roadmap (diagram): The Cray T90, Cray T3E-1350 and Cray SV1ex lead into the Cray X1, followed by the Cray X1e (2004) and Black Widow, on a path toward sustained Pflops; shared technologies and insights connect this product line with the ASCI Red Storm project and the DARPA Pflops program (Cascade project). Cray X1 – the first in a series of extreme performance systems from Cray. Extreme Performance, Highly Differentiated Supercomputers.

SLIDE 23 Sept 2003 CAS2K3 Summary Scientific High-Performance Computing is Cray’s sole focus. Cray has a unique offering. Cray has a strong product roadmap and is committed to our mission. Cray is executing successfully on both financial and operational fronts. Cray is well suited as a partner to the environmental community.

Thank You for Your Attention.