Appro Xtreme-X Supercomputers

Presentation transcript:

Appro Xtreme-X Supercomputers
Appro International, Inc.

Company Overview :: Corporate Snapshot
- Leading developer of high-performance servers, clusters, and supercomputers
- Established in 1991; headquartered in Milpitas, CA
- Sales and service office in Houston, TX; hardware manufacturing in Asia
- Global presence via strategic and channel partners
- Profitable, with 72% CAGR over the past 3 years
- Deployed the second-largest supercomputer in Japan
- Six top-ranked computing systems listed in the Top500
- Delivering balanced architectures for scalable performance
- Target markets: financial services, government/defense, manufacturing, oil & gas

Strategic Partnership :: Appro & NEC Join Forces in the HPC Market
- NEC has a strong presence in the EMEA HPC market, with over 20 years of experience
- This is a breakthrough for Appro's entry into the EMEA HPC market
- Provides sustainable competitive advantages, enabling both companies to participate in this growing market segment
- Appro and NEC look forward to working together to offer powerful, flexible, and reliable solutions to EMEA HPC markets
- Formal press announcement will go out on Tuesday, 9/16/08

HPC Experience :: Past Performance History
- 2006.9: NOAA Cluster - 1,424 cores, 2.8 TB system memory, 15 TFlops
- 2006.11: LLNL Atlas Cluster - 9,216 cores, 18.4 TB system memory, 44 TFlops
- 2007.6: LLNL Minos Cluster - 6,912 cores, 13.8 TB system memory, 33 TFlops
- 2008.2: D. E. Shaw Research Cluster - 4,608 cores, 9.2 TB system memory, 49 TFlops

HPC Experience :: Past Performance History
- 2008.4: TLCC Cluster (LLNL, LANL, SNL) - 48,384 cores, 426 TFlops
- 2008.6: Tsukuba University Cluster - 10,784 cores, quad-rail IB, 95 TFlops
- 2008.7: Renault F1 CFD Cluster - 4,000 cores, dual-rail IB, 38 TFlops
- 2008.8: LLNL Hera Cluster - 13,824 cores, 120 TFlops

HPC Challenges :: Changes in the Industry
- Petascale deployments (4,000+ node deployments)
- Balanced systems (CPU/memory/network)
- Scalability (software & network)
- Reliability (real RAS: network, node, software)
- Facilities (space, power & cooling)
- Integrated exotics (GPU clusters); solutions still being evaluated

Petascale Deployments :: Based on a Scalable Multi-Tier Architecture
- InfiniBand for computing
- 10GbE for operation
- GbE for management
[Diagram: compute nodes attach to the InfiniBand network over 4X IB and to the operation (10GbE) and management (GbE) networks over 2x GbE per node; I/O server groups bridge the IB fabric to parallel file system servers, which reach storage controllers over FC or GbE; the global file system is served over GbE or 10GbE, behind an external firewall/router (GbE).]
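The point of the multi-tier split is that each traffic class rides its own fabric, so compute traffic never competes with provisioning or out-of-band control traffic. As a minimal illustrative sketch (not Appro's software; the tier/fabric pairing comes from the slide, the traffic-class names are my own assumption):

# Minimal sketch of the multi-tier network split described above.
NETWORK_TIERS = {
    "computing":  "InfiniBand 4X",   # MPI and parallel file system I/O
    "operation":  "10GbE",           # provisioning, monitoring, login
    "management": "GbE",             # out-of-band control (e.g., IPMI)
}

# Illustrative mapping of traffic classes to tiers (my assumption).
TRAFFIC_TO_TIER = {
    "mpi": "computing",
    "parallel_io": "computing",
    "provisioning": "operation",
    "monitoring": "operation",
    "ipmi": "management",
}

def fabric_for(traffic: str) -> str:
    """Return the fabric a given traffic class should ride."""
    return NETWORK_TIERS[TRAFFIC_TO_TIER[traffic]]

print(fabric_for("mpi"))           # InfiniBand 4X
print(fabric_for("provisioning"))  # 10GbE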

Petascale Deployments :: Scalable Cluster Management Software
- 3D torus network topology support
- Stateless operation
- Job scheduling
- Dual-rail networks
- BIOS synchronization
- ACE middleware hooks
- Instant SW provisioning
- Virtual cluster manager
- Failover & recovery
- IB subnet manager
- Standard Linux OS support
- Remote lights-out management

"Appro Cluster Engine™ software turns a cluster of servers into a functional, usable, reliable, and available computing system." - Jim Ballew, CTO, Appro
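The slide does not show ACE's actual interfaces, but a minimal sketch can illustrate what "stateless operation" plus "failover & recovery" buy you: a node's role and OS image live in the manager's database rather than on local disk, so a spare server can assume a failed node's role just by net-booting the same image. All names below are hypothetical:

# Illustrative sketch only -- not ACE's real API.
class ImageServer:
    def netboot(self, node_id: str, image: str) -> None:
        print(f"{node_id}: network boot from image '{image}'")

class ClusterManager:
    def __init__(self, image_server: ImageServer):
        self.image_server = image_server
        self.roles: dict[str, str] = {}   # node_id -> role

    def provision(self, node_id: str, role: str) -> None:
        # Instant provisioning: no per-node install, just a netboot.
        self.roles[node_id] = role
        self.image_server.netboot(node_id, image=role)

    def failover(self, failed: str, spare: str) -> None:
        # Recovery: the spare assumes the failed node's role.
        role = self.roles.pop(failed)
        self.provision(spare, role)

mgr = ClusterManager(ImageServer())
mgr.provision("node001", "compute")
mgr.failover("node001", "spare042")   # spare042 boots the compute image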

Petascale Deployments :: Innovative Cooling and Density Needed
- Up to 30% improvement in density with greater cooling efficiency
- Delivers cold air directly to the equipment for optimum cooling efficiency
- Returns comfortable-temperature air to the room for return to the chillers
- Back-to-back rack configuration saves floor space in the datacenter and encloses the cold aisles inside the racks
- FRU and maintenance are done from the front side of the rack cabinet
[Diagram: top view of the back-to-back rack configuration]

Petascale Deployments :: Path to PetaFLOP Computing
Appro Xtreme-X Supercomputer - Modular Scalable Performance

Number of Racks:       1       2       8       48       96       192
Number of Processors:  128     256     1,024   5,952    11,904   23,808
Number of Cores:       512     1,024   4,096   23,808   47,616   95,232
Peak Performance:      6TF/s   12TF/s  49TF/s  279TF/s  558TF/s  1.1PF/s
Memory Capacity:       1.5TB   3TB     12TB    72TB     143TB    286TB

Memory BW ratio: 0.68 GB/s per GF/s
Memory capacity ratio: 0.26 GB per GF/s
IO fabric interconnect: dual-rail QDR InfiniBand
IO BW ratio: 0.17 GB/s per GF/s
Usable node-to-node bandwidth: 6.4 GB/s
Node-to-node latency: <2 µs

Performance numbers are based on 2.93 GHz Intel Nehalem processors and include only compute nodes.
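The columns are internally consistent and can be cross-checked. A rough sketch, assuming (my assumption, not stated on the slide) 4 double-precision flops per cycle per Nehalem core:

# Cross-check of the table above.
GHZ, FLOPS_PER_CYCLE = 2.93, 4
gf_per_core = GHZ * FLOPS_PER_CYCLE              # 11.72 GF/s per core

for racks, cores, mem_tb in [(1, 512, 1.5), (8, 4096, 12), (192, 95232, 286)]:
    peak_gf = cores * gf_per_core
    ratio = mem_tb * 1024 / peak_gf              # GB per GF/s
    print(f"{racks:3d} racks: {peak_gf/1000:6.1f} TF/s, {ratio:.2f} GB per GF/s")

# Output:
#   1 racks:    6.0 TF/s, 0.26 GB per GF/s
#   8 racks:   48.0 TF/s, 0.26 GB per GF/s   (table rounds to 49 TF/s)
# 192 racks: 1116.1 TF/s, 0.26 GB per GF/s   (~1.1 PF/s)

The 0.68 GB/s-per-GF/s memory bandwidth ratio also checks out: a 2-socket, 8-core node peaks at roughly 93.8 GF/s, and 0.68 x 93.8 is about 64 GB/s, i.e. two Nehalem sockets at roughly 32 GB/s each.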

Xtreme-X Supercomputer :: Possible Path to PetaFLOP GPU Computing
GPU Computing Cluster - Solution still being evaluated

Number of Racks:       3       5       10      18     34
Number of Blades:      64      128     256     512    1,024
Number of GPUs:        32      64      128     256    512
Peak GPU Performance:  128TF   256TF   512TF   1PF    2PF
Peak CPU Performance:  6TF     12TF    24TF    48TF   96TF
Max Memory Capacity:   1.6TB   3.2TB   6.4TB   13TB   26TB

Bandwidth to GPU: 6.4 GB/s
Node memory bandwidth: 32 GB/s
Max IO bandwidth (2 QDR x4 IB): 6.4 GB/s
Node-to-node latency: 2 µs
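The per-unit ratios fall out of the table. A quick sketch, assuming (my reading, not stated on the slide) that each "GPU" entry is a roughly 4 TF quad-GPU unit shared by two blades, which is what makes the totals line up:

# Ratios implied by the GPU table above. Assumption (mine): each "GPU"
# row entry is one ~4 TF unit serving two blades; 1 PF = 1000 TF.
configs = [
    (3, 64, 32, 128, 6),          # racks, blades, GPU units, GPU TF, CPU TF
    (34, 1024, 512, 2000, 96),
]
for racks, blades, gpus, gpu_tf, cpu_tf in configs:
    print(f"{racks:2d} racks: {gpu_tf / gpus:.0f} TF per GPU unit, "
          f"{blades // gpus} blades per unit, "
          f"GPU:CPU peak ratio ~{gpu_tf / cpu_tf:.0f}:1")

# Output:
#  3 racks: 4 TF per GPU unit, 2 blades per unit, GPU:CPU peak ratio ~21:1
# 34 racks: 4 TF per GPU unit, 2 blades per unit, GPU:CPU peak ratio ~21:1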

Appro Xtreme-X Supercomputers
Thank you. Questions?
Appro International, Inc.