Maximizing The Compute Power With Mellanox InfiniBand Connectivity
Gilad Shainer, Wolfram Technology Conference 2006


Mellanox Technologies - Maximizing The Compute Power 2
A global leader in semiconductor solutions for server and storage connectivity
Leading provider of low-latency and high-bandwidth InfiniBand solutions
Converges clustering, communications, management and storage onto a single link with Quality of Service
(Diagram: Mellanox InfiniBand adapters connecting servers and storage)

Mellanox Technologies - Maximizing The Compute Power 3
Industry Megatrend
Proprietary Systems: $1M+, expensive to scale, not flexible
Clusters: one-tenth the cost, affordable to scale, very flexible
InfiniBand Clusters: up to 40X more performance, 1/10 the latency, scalable to 10,000s of connections, flexible and easy to manage
(Diagram: Mellanox InfiniBand connecting commodity servers and off-the-shelf storage for high performance computing)

Mellanox Technologies - Maximizing The Compute Power 4
Increased Demand for Compute Power
Automotive
 $500K per vehicle crash test (GM)
 Design cycles reduced from 4 years to 18 months
Digital Content Creation
 $90M to produce “The Incredibles”
 $630M gross income
Oil and Gas Exploration
 Up to $4B for offshore wells
Weather Forecasting
 $1M per mile to evacuate the coast

Mellanox Technologies - Maximizing The Compute Power 5
Mellanox InfiniBand Performance
(Chart: Bavarian Car-To-Car Model, 1.1 M elements, cycles; lower is better)
InfiniBand price/performance advantage increases with cluster size
Gigabit Ethernet becomes ineffective with cluster size

Mellanox Technologies - Maximizing The Compute Power 6
Building a High Performance Solution
Best price/performance servers
 PCI Express servers
 Low-power multi-core CPUs
Best price/performance storage
 High performance file system
Mellanox InfiniBand
 Highest bandwidth, low latency, low CPU overhead
 Parallel direct access from compute nodes to storage
(Diagram: HP c-Class BladeSystem with 20Gb/s InfiniBand)

Mellanox Technologies - Maximizing The Compute Power 7
Top500 – Industry-wide Clustering Trends
List of the 500 most powerful computers
 Published twice a year
InfiniBand deployments increased 33% from Nov 2005 to Jun 2006
 The only growing high-speed interconnect solution
Three high-ranked industry-standard clusters in the Top10 use InfiniBand
 #4 NASA, 10K Itanium 2 CPUs, 51.8 TFlops (2004)
 #6 Sandia National Laboratories, 4500 nodes, 9K CPUs, 38.2 TFlops (2005)
 #7 Tokyo Institute of Technology, 1300 nodes, 10K CPUs, 38.1 TFlops (2006)

Mellanox Technologies - Maximizing The Compute Power 8
Mellanox Increases The Compute Power
Reduces the high CPU processing overhead
 Full transport offload, Remote Direct Memory Access (RDMA)
Allows performance-hungry applications to fully utilize CPU resources
 Overlapping I/O communication with CPU computation cycles
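The overlap the slide describes can be sketched in plain Python, with a background thread standing in for the NIC's offloaded transfer engine. This is an illustration of the principle only: the function and parameter names are invented for the sketch, not Mellanox or InfiniBand verbs APIs.

```python
import threading


def rdma_transfer(buffer, done):
    """Stand-in for an offloaded RDMA transfer: the 'NIC' (here, a
    background thread) moves the data while the CPU is free."""
    _ = bytes(buffer)         # simulated data movement
    done.set()                # completion notification

def compute(n):
    """CPU-bound work that runs while the transfer is in flight."""
    return sum(i * i for i in range(n))

def overlapped_step(buffer, n):
    done = threading.Event()
    t = threading.Thread(target=rdma_transfer, args=(buffer, done))
    t.start()                 # post the transfer (non-blocking, like RDMA)
    result = compute(n)       # overlap computation with communication
    done.wait()               # check completion before reusing the buffer
    t.join()
    return result
```

The key pattern is post-compute-poll: the transfer is posted, useful work runs, and completion is checked only when the buffer is needed again, so no CPU cycles are burned copying data.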

Mellanox Technologies - Maximizing The Compute Power 9
Mellanox Increases The Compute Power
Dramatically increases overall compute cluster efficiency
 Eliminating the memory bandwidth bottleneck (zero-copy)
Ensures I/O resources for multi-core systems
 20Gb/s node to node (40Gb/s, 2007) …

Mellanox Technologies - Maximizing The Compute Power 10
A Unified InfiniBand Fabric
World-class performance
Simplified management
Ultimate scalability
Optimal total cost of ownership

Mellanox Technologies - Maximizing The Compute Power 11
Mellanox Cluster Center
Neptune cluster
 32 nodes
 Dual-core AMD Opteron CPUs
Helios cluster
 32 nodes
 Dual-core Intel Woodcrest CPUs
Utilizing “Fat Tree” network architecture (CBB)
 Non-blocking switch topology
 Non-blocking bandwidth
 InfiniBand 20Gb/s
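The non-blocking (constant bisectional bandwidth) property of the fat tree comes down to port arithmetic: each leaf switch dedicates half its ports to hosts and half to uplinks, so injected traffic can never exceed uplink capacity. A minimal sketch of that arithmetic, assuming 24-port switches — the switch radix and function names are assumptions for illustration, not figures from the deck:

```python
import math

def two_level_fat_tree(switch_ports, nodes):
    # Non-blocking (CBB) rule: half the ports of each leaf switch face
    # the hosts, half face the spine, so down-capacity == up-capacity.
    half = switch_ports // 2
    leaves = math.ceil(nodes / half)                   # leaf switches needed
    spines = math.ceil(leaves * half / switch_ports)   # spines to terminate all uplinks
    return {
        "hosts_per_leaf": half,
        "uplinks_per_leaf": half,
        "leaf_switches": leaves,
        "spine_switches": spines,
    }

# A 32-node cluster like Neptune or Helios on 24-port switches:
layout = two_level_fat_tree(24, 32)
```

Under these assumptions, 32 nodes fit under 3 leaf switches with 2 spines, and every host keeps its full 20Gb/s into the fabric because uplink bandwidth matches host bandwidth at every leaf.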

Mellanox Technologies - Maximizing The Compute Power 12
Air Pollution Simulation – The Problem
Technological innovations increase our need for energy and materials, but also increase air pollution
Worldwide air pollution is responsible for a large number of deaths and cases of respiratory disease
 The World Health Organization estimates 4.6 million people die each year from causes directly attributable to air pollution
Minimizing and managing the production of pollutants is critical to our environment
Complex modeling and simulation requires a compute-intensive solution

Mellanox Technologies - Maximizing The Compute Power 13
Air Pollution Simulation – The Solution
Personal supercomputing for scientific research, modeling and simulations
High-speed, low-latency, low-CPU-overhead InfiniBand interconnect
High-performance, low-power multi-core CPUs
Powerful Wolfram gridMathematica supercomputing environment for developing solutions
Ease of use of the Windows Compute Cluster Server operating system

Mellanox Technologies - Maximizing The Compute Power 14
Air Pollution Simulation – Performance
Maximum utilization, efficiency and scalability
(Chart: runtime in seconds)

Mellanox Technologies - Maximizing The Compute Power 15
Summary
Wolfram gridMathematica with the performance of Mellanox InfiniBand
Powerful environment for rapidly developing solutions for computationally challenging problems

Q&A
Gilad Shainer
“A leading supplier of semiconductor-based, high performance interconnect products”