HPC Roadshow: Overview of HPC systems and software available within the LinkSCEEM project.

Overview
- Available HPC systems: BA, Cy-Tera, Euclid
- Software environment: 300+ software modules installed

HPC System at Bibliotheca Alexandrina
- SUN cluster with a peak performance of 12 TFlops
- 130 eight-core compute nodes: 2 quad-core sockets per node, each an Intel Quad-Core Xeon E5440 @ 2.83 GHz
- 8 GB memory per node; 1.05 TB total memory
- 36 TB shared scratch (Lustre)
- Node-to-node interconnect: Ethernet, plus a 4x SDR InfiniBand network for MPI
- 4x SDR InfiniBand network for I/O to the global Lustre filesystems
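A quick cross-check of the 12 TFlops figure (my own arithmetic, assuming 4 double-precision FLOPs per cycle per core for this Xeon generation, which the slide does not state):

$$130\ \text{nodes} \times 8\ \text{cores} \times 2.83\,\text{GHz} \times 4\ \text{FLOPs/cycle} \approx 11.8\,\text{TFlops} \approx 12\,\text{TFlops}$$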

HPC System at Bibliotheca Alexandrina

Cy-Tera HPC System at CyI
- Cy-Tera is the first large cluster of a Cypriot national HPC facility
- Cy-Tera Strategic Infrastructure Project: a new research unit hosting an HPC infrastructure, funded by the RPF
- The Cy-Tera project (ΝΕΑ ΥΠΟΔΟΜΗ/ΣΤΡΑΤΗ/0308/31) is co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation
- LinkSCEEM uses the Cy-Tera HPC system at The Cyprus Institute

Cy-Tera HPC System at CyI
Hybrid CPU/GPU Linux cluster
Computational power:
- 98 compute nodes, each with 2 x 6-core Westmere CPUs; each compute node = 128 GFlops
- 18 GPU nodes, each with 2 x 6-core Westmere CPUs + 2 x NVIDIA M2070 GPUs; each GPU node = 1 TFlop
- Theoretical Peak Performance (TPP) = 30.5 TFlops
- 48 GB memory per node
MPI messaging & storage access: 40 Gbps QDR InfiniBand
Storage: 360 TB raw disk
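The quoted TPP is consistent with counting each CPU node at 128 GFlops and each GPU node at about 1 TFlop (my reading of the slide, not an official breakdown):

$$98 \times 0.128\,\text{TFlops} + 18 \times 1\,\text{TFlop} \approx 12.5 + 18 = 30.5\,\text{TFlops}$$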

Cy-Tera HPC System at CyI

Cy-Tera HPC System at CyI
Cy-Tera software:
- RHEL 6.1 x86_64 operating system
- SLURM resource management
- Intel Compiler Suite (optimised for Intel architectures)
- PGI Compiler Suite (including OpenACC for accelerators); see the sketch below
- CUDA + OpenCL
- Optimised math libraries
- Other software required by users can also be made available; licences have to be purchased by the users
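To illustrate what the PGI/OpenACC tool chain on the GPU nodes is aimed at, here is a minimal sketch of an OpenACC-offloaded loop in C. The file name, problem size, and compile line are illustrative assumptions, not commands documented for Cy-Tera:

```c
/* saxpy.c - minimal OpenACC offload sketch (illustrative, not site-specific).
 * A typical PGI build line would look like:
 *   pgcc -acc -Minfo=accel saxpy.c -o saxpy
 */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const int n = 1 << 20;                 /* 1M elements (arbitrary size) */
    float *x = malloc(n * sizeof(float));
    float *y = malloc(n * sizeof(float));
    for (int i = 0; i < n; i++) { x[i] = 1.0f; y[i] = 2.0f; }

    /* Offload the loop to the accelerator; data clauses copy arrays in/out. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; i++)
        y[i] = 2.0f * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);           /* expect 4.0 */
    free(x);
    free(y);
    return 0;
}
```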

Cy-Tera HPC System at CyI
Power:
- Total power ~60 kW
- Power-reducing design: efficient CPUs, more efficient GPUs, rear-door water-cooled heat exchangers
- Cy-Tera is 4x more power efficient than the other CyI systems

Euclid HPC System at CyI
Hybrid CPU/GPU Linux cluster; training cluster of the LinkSCEEM project
Computational power:
- 6 eight-core compute nodes + 2 NVIDIA Tesla T10 processors
- Theoretical Peak Performance (TPP) ~0.75 TFlop/s
- 16 GB memory per node
MPI messaging & storage access: InfiniBand network

Software
- Automated, reproducible build processes using EasyBuild
- Multiple compilers/versions maintained
- 300+ software packages installed; 1000+ software packages can be made available

Available Software/Tools
Compilers: gcc, intel, pgi, lf64
MPI: OpenMPI, IntelMPI, MVAPICH2, MPICH2
Numerical libraries:
- CUDA: parallel computing architecture for graphics processing units
- ATLAS, OpenBLAS: basic linear algebra routines
- GSL: C/C++ library with a wide variety of mathematical routines
- LAPACK: software library of numerical linear algebra routines
- FFTW: library for computing discrete Fourier transforms
GPU programming: CUDA, OpenMP 4.0, OpenCL, OpenACC
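As a minimal illustration of using the MPI stacks listed above, the following C program should build with any of them; the compiler wrapper and launch command in the comment are the conventional ones, shown as assumptions rather than site-specific instructions:

```c
/* mpi_hello.c - minimal MPI example.
 * Typical (illustrative) workflow:
 *   mpicc mpi_hello.c -o mpi_hello
 *   mpirun -np 4 ./mpi_hello
 */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();
    return 0;
}
```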

Available Software/Tools
Molecular dynamics: Gromacs, NAMD, NWChem, CP2K, Quantum ESPRESSO
Computational fluid dynamics: OpenFOAM, ParFlow
Bioinformatics: BWA, DendroPy, GATK, SAMtools
Weather modelling: ESMF, WPS, WRF
Data processing: HDF5, NCL, NetCDF
Performance and optimization: Scalasca

LinkSCEEM User Support
Primary user support is provided via an online helpdesk system. The helpdesk service can be contacted via the email address hpc-support@linksceem.eu; all user queries relating to LinkSCEEM resources should be directed to this address.

Thank you
Acknowledgement: funding for LinkSCEEM-2 is provided by DG-INFSO.
Grant Agreement Number: 261600
Call (part) identifier: FP7-INFRASTRUCTURES-2010-2
Topic: INFRA-2010-1.2.3: Virtual Research Communities
Funding Scheme: Combination of CP & CSA