HIGH PERFORMANCE COMPUTING ENVIRONMENT
The High Performance Computing environment consists of high-end systems used for running complex, compute-intensive research applications. It comprises two such machines:
- LIBRA (GPU) cluster
- VIRGO supercluster

IBM's VIRGO SUPERCLUSTER

[VIRGO architecture schematic: 298 IBM System x iDataPlex dx360 M4 servers, FDR10 InfiniBand inter-processor communication network, Gbps Ethernet xCAT management network, redundant SAN switches (SAN Switch 1 and 2), PFS-1, PFS-2, and NFS storage nodes.]

System Configuration
- 292 compute nodes
- 2 master nodes
- 4 storage nodes
- Total compute power of 97 TFlops
- IBM System x iDataPlex dx360 M4 servers, highly optimized for HPC
- Each node populated with 2 x Intel Xeon E5-2670 (8-core, 2.6 GHz) processors
- 64 GB RAM per node, as 8 x 8 GB 1600 MHz DIMMs connected in a fully balanced mode
- Low-power mezzanine adapter for FDR10 InfiniBand inter-processor communication
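The quoted 97 TFlops is consistent with the theoretical double-precision peak of the compute partition. As a rough cross-check (an illustration only, assuming 8 cores per socket and 8 double-precision FLOPs per core per cycle, as for Sandy Bridge AVX):

$$R_{\mathrm{peak}} \approx \underbrace{292}_{\text{nodes}} \times \underbrace{2}_{\text{sockets/node}} \times \underbrace{8}_{\text{cores/socket}} \times \underbrace{2.6\ \text{GHz}}_{\text{clock}} \times \underbrace{8}_{\text{FLOPs/cycle}} \approx 97.2\ \text{TFlops}$$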

Storage Configuration
- 2 storage subsystems for PFS and 1 storage subsystem for NAS
- Fine-tuned redundant SAN switches
- IBM DS3500 series storage systems
- Total PFS capacity: 160 TB
- Total NAS capacity: 50 TB

Virgo Highlights
- Ranked 4th in India
- Ranked 224th in the world
- Fastest among Indian academic institutions
- 1st in India, 5th in the world
- Rmax of TF, Rpeak of TF
- Efficiency of 932 MFlops/Watt

Software Stack
Application software (commercial and open source):
- NAMD 2.9
- GROMACS 4.0.7, 4.5.5
- ANSYS 11, 12
- ABAQUS 6.10, 6.11
- Fluent 6.3
- OpenFOAM
- COMSOL 4.3
- Mathematica 8, 9
- MATLAB 2012a, 2012b
- LAMMPS (parallel)
- Gaussian 09, revision A.02
- Allinea tools
System software:
- Novell SuSE Linux Enterprise Server with SP2
- Extreme Cluster Administration Toolkit (xCAT)
- IBM General Parallel File System (GPFS)
- IBM LoadLeveler
- Intel Cluster Studio XE
- Intel VTune
- IBM Tivoli Storage Manager

Text file editors: Gvim/vi, Emacs, gedit
Compilers: GNU compilers, Intel compilers, javac, Python 2.7/2.7.3, CMake, Perl
Parallel computing frameworks: OpenMPI, MPICH, Intel MPI
Scientific libraries: FFTW, HDF5, MKL, GNU Scientific Library, BLAS, LAPACK, LAPACK++
Open source software: OpenFOAM, Scilab
Interpreters and runtime environments: Java, Python 2.7/2.7.3, NumPy
Visualization software: gnuplot 4.0
Debuggers: GNU gdb, Intel idb
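The compilers and MPI frameworks above are the building blocks for parallel jobs on Virgo. Below is a minimal MPI hello-world sketch, not taken from the slides; the compiler wrapper name (mpicc) and launch command (mpirun) are assumptions about the local setup and may differ (for example, mpiicc with Intel MPI, or launching through the IBM LoadLeveler batch system listed in the software stack).

    /* hello_mpi.c - minimal MPI sketch (illustrative, not from the slides) */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size, name_len;
        char node_name[MPI_MAX_PROCESSOR_NAME];

        MPI_Init(&argc, &argv);                   /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);     /* rank of this process       */
        MPI_Comm_size(MPI_COMM_WORLD, &size);     /* total number of processes  */
        MPI_Get_processor_name(node_name, &name_len);

        printf("Rank %d of %d running on %s\n", rank, size, node_name);

        MPI_Finalize();                           /* shut down the MPI runtime  */
        return 0;
    }

A typical build-and-run sequence (commands assumed, adjust to the installed MPI stack and site batch policies): mpicc hello_mpi.c -o hello_mpi, then mpirun -np 32 ./hello_mpi.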

Libra Cluster
- 1 head node: HP ProLiant DL380 G7 server with two six-core Intel Xeon 5670-series processors, 24 GB RAM, and a 146 GB SAS hard disk.
- 8 compute nodes: HP ProLiant SL390s servers, each with two six-core Intel Xeon X5675 processors, 3 Tesla M2070 GPU cards, and a 146 GB SAS hard disk.

GPU Schematic

GPU Hardware
The GPU cluster consists of one head node, 8 compute nodes, and storage, and achieves a performance of 6 TFLOPS. Each compute node contains 3 Tesla GPUs and Intel Xeon processors providing 12 CPU cores, and the cluster has 10 TB of shared storage.