DIRECT IMMERSION COOLED IMMERS HPC CLUSTERS: BUILDING EXPERIENCE

DIRECT IMMERSION COOLED IMMERS HPC CLUSTERS: BUILDING EXPERIENCE
Leo Klyuev, General Manager, Immers Inc.
IMMERS direct immersion cooling systems, co-designed by the STORUS Group and the Ailamazyan Program Systems Institute of the Russian Academy of Sciences

Topics
- Technology Overview
- Project History
- Product Line
- Key Features
- MEPhI Projects

Technology Overview: Participants
- The Ailamazyan Program Systems Institute of the Russian Academy of Sciences, established in 1983, led the implementation of the SKIF and SKIF-GRID state programs.
- The Storus Group, founded in 2001, specializes in storage systems and high-performance computing.

Technology Overview: Participants
IMMERS is part of the Storus Group. The company's primary focus is developing, supplying, and maintaining HPC systems for educational, academic, and research institutions, as well as for data centers serving industrial, engineering, and oil-and-gas companies.

Technology Overview
- Server racks: immersion cooling of the computational nodes (motherboards, hard drives, and power supplies).
- Heat exchange unit: absorbs heat and transfers it outside (water heating, heat rejection).
- Control unit: ensures optimum operation of the system, as sketched below.
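
To illustrate the control unit's role, here is a minimal sketch of a coolant-temperature feedback loop in Python. The setpoint, deadband, sensor, and actuator below are hypothetical illustrations, not part of the IMMERS control firmware:

    import random, time

    TARGET_C, DEADBAND_C = 45.0, 2.0        # hypothetical coolant setpoint

    def read_coolant_temp_c():
        # Stand-in for a real sensor read (e.g. over Modbus or IPMI); simulated here.
        return 45.0 + random.uniform(-5.0, 5.0)

    def set_pump_duty(duty):
        # Stand-in for the pump-speed actuator; just logs the commanded duty cycle.
        print(f"pump duty -> {duty:.2f}")

    duty = 0.5
    for _ in range(10):                     # a real controller would loop forever
        temp = read_coolant_temp_c()
        if temp > TARGET_C + DEADBAND_C:    # coolant too warm: pump harder
            duty = min(1.0, duty + 0.05)
        elif temp < TARGET_C - DEADBAND_C:  # too cool: save pump energy
            duty = max(0.2, duty - 0.05)
        set_pump_duty(duty)
        time.sleep(0.1)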

Project History
- March 2011: first experiments
- September 2011: IMMERS-01 (first release) ready; start of endurance testing
- April 2012: IMMERS-02 (second release) ready
- September 2012: successful completion of the first delivery
- July 2013: IMMERS-03 (third release) ready
- January 2014: IMMERS 660 deliveries begin
- May 2014: IMMERS 880 available
- April 2015: IMMERS 6 R4 available
- October 2016: IMMERS 8 R6 available

Product Line
IMMERS 6 R6: up to 24 Intel/AMD CPUs. Well suited for building HPC clusters with 100 TFLOPS performance.

Product Line
IMMERS 8 R6: up to 78 Intel/AMD CPUs. Well suited for building computing clusters with 70 TFLOPS peak performance.

IMMERS 6 R6 / 8 R6 Computational Nodes
- Homogeneous nodes: motherboards with up to 4 Intel/AMD CPUs.
- Heterogeneous nodes: 2x Intel Xeon E5 CPUs plus 4x NVIDIA, Intel Xeon Phi, or AMD Radeon cards.
- Depending on the application, computational node types can be combined.

Key Features. IMMERS makes it possible to:
- Eliminate specific requirements for temperature, humidity, and air purity in data centers and server rooms.
- Significantly simplify the cooling system and increase its reliability.
- Provide comfortable working conditions in data centers thanks to lower noise and higher permissible air temperature.
- Substantially reduce floor space by eliminating hot/cold aisles and inter-row air conditioning.

Key Features
- The computing cluster operates at an industry-record PUE of 1.037.
- The noise level of an operating cluster does not exceed 48 dB, so the cluster can be installed in a research lab close to workplaces.
- Installation of the equipment at the customer's premises takes 1-2 working days.
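
For context, PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power, so a PUE of 1.037 means only 3.7% of extra power goes to cooling and distribution. A minimal sketch of the arithmetic (the 100 kW IT load is an assumed figure for illustration, not a quoted IMMERS number):

    # PUE = total facility power / IT equipment power
    it_load_kw = 100.0                      # hypothetical IT load
    pue = 1.037                             # figure from the slide above

    total_kw = it_load_kw * pue             # total facility draw
    overhead_kw = total_kw - it_load_kw     # power spent on cooling/distribution

    print(f"Total draw: {total_kw:.1f} kW")         # 103.7 kW
    print(f"Cooling overhead: {overhead_kw:.1f} kW")  # 3.7 kW
    # For comparison, an air-cooled room at PUE ~1.5 would spend ~50 kW on the same load.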

MEPhI Projects
HPC cluster "Cherenkov", installed in the main data center:
- 20 nodes, each with 2x Xeon E5-2630 v3
- 64 GB DDR4 per node
- InfiniBand FDR network
- 50 TB external storage
- Full on-site installation and tuning for peak performance
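
A back-of-the-envelope estimate of this cluster's theoretical CPU peak, a sketch assuming the E5-2630 v3's published 8 cores at 2.4 GHz base clock and 16 double-precision FLOPs per cycle with AVX2 FMA (actual sustained AVX clocks are lower):

    # Theoretical peak, DP = nodes * CPUs/node * cores * clock * FLOPs/cycle
    nodes         = 20
    cpus_per_node = 2
    cores_per_cpu = 8        # Xeon E5-2630 v3 (Haswell-EP)
    clock_ghz     = 2.4      # base clock; AVX frequencies run lower in practice
    flops_per_cyc = 16       # 2 x 256-bit FMA units, double precision

    peak_gflops = nodes * cpus_per_node * cores_per_cpu * clock_ghz * flops_per_cyc
    print(f"Peak: {peak_gflops / 1000:.1f} TFLOPS")   # ~12.3 TFLOPS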

MEPhI Projects
HPC cluster installed at educational college 1511:
- Three nodes, each with 2x Xeon E5-2630 v4
- 128 GB DDR4 per node
- 800 GB SSD per node
- InfiniBand FDR network
- Performance tuning and dedicated parallel-programming master classes

Immers Services
- System design and development, including computing and communication units, power system, and cooling system
- Software development
- Installation and commissioning of equipment at the customer's premises
- Testing, configuration, and software upgrades
- Maintenance service
- Staff training

Got questions? Just ask. If you have any questions, please contact IMMERS by phone or email.
Tel/Fax: +7 (495) 631-56-43
Email: l.klyuev@immers.ru