Overview of HPC systems and software available within CaSToRC

Overview of HPC systems and software available within CaSToRC

Overview
- Available HPC Systems: BA, Cy-Tera
- Available Visualization Facilities
- Software Environments

HPC System at Bibliotheca Alexandrina
- SUN cluster with a peak performance of 12 Tflops
- 130 eight-core compute nodes: 2 quad-core Intel Xeon E5440 sockets per node @ 2.83 GHz
- 8 GB memory per node (1.05 TB total)
- 36 TB shared scratch (Lustre)
- Node-node interconnect: Ethernet, plus a 4x SDR InfiniBand network for MPI
- 4x SDR InfiniBand network for I/O to the global Lustre filesystems
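The quoted 12 Tflops peak can be reproduced from the node count and clock speed. As a sketch, assuming the usual 4 double-precision flops per cycle per core for SSE-era Xeons of this generation:

```python
# Theoretical peak of the BA SUN cluster (sketch).
nodes = 130
cores_per_node = 8          # 2 quad-core sockets per node
clock_hz = 2.83e9           # Intel Xeon E5440
flops_per_cycle = 4         # 2 adds + 2 multiplies per cycle (SSE, assumed)

peak_flops = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak_flops / 1e12:.1f} Tflops")  # ~11.8, rounded to 12 Tflops on the slide
```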


Cy-Tera HPC System at CyI
LinkSCEEM uses the Cy-Tera HPC system at The Cyprus Institute. Cy-Tera is the first large cluster of a Cypriot National HPC Facility.
Cy-Tera Strategic Infrastructure Project:
- A new research unit to host an HPC infrastructure
- RPF-funded project
- The project Cy-Tera (ΝΕΑ ΥΠΟΔΟΜΗ/ΣΤΡΑΤΗ/0308/31) is co-financed by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation

Cy-Tera HPC System at CyI
Hybrid CPU/GPU Linux cluster:
- 98 compute nodes, each with 2 six-core Westmere CPUs (128 GFlops per node)
- 18 GPU nodes, each with 2 six-core Westmere CPUs + 2 NVIDIA M2070 GPUs (~1 Tflop per node)
- Theoretical Peak Performance (TPP) = 30.5 Tflops
- 48 GB DDR3 RAM per node
- 40 Gbps QDR InfiniBand for MPI messaging and storage access
- Storage: 360 TB raw disk
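The 30.5 Tflops TPP figure is just the sum over the two node types. A quick sanity check, taking each GPU node's quoted ~1 Tflop as the whole-node figure:

```python
# Cy-Tera theoretical peak performance (sketch from the slide's figures).
cpu_nodes = 98
cpu_node_flops = 128e9      # 2 x six-core Westmere per node
gpu_nodes = 18
gpu_node_flops = 1e12       # quoted ~1 Tflop per GPU node (incl. 2 x M2070)

tpp = cpu_nodes * cpu_node_flops + gpu_nodes * gpu_node_flops
print(f"{tpp / 1e12:.1f} Tflops")  # ~30.5
```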


Cy-Tera HPC System at CyI: Software
- RHEL 5.6 x86_64 operating system
- TORQUE + MOAB resource management
- Intel Compiler Suite (optimised for Intel architectures)
- PGI Compiler Suite (including OpenACC for accelerators)
- CUDA + OpenCL
- Optimised math libraries
- Other software required by users can also be made available; licences will have to be purchased by users

Cy-Tera HPC System at CyI: Power
- Total power ~60 kW
- Power-reducing design: more efficient CPUs; GPUs, which are far more efficient in Flops/Watt; rear-door water-cooled heat exchangers
- For comparison: Planck + Euclid deliver 4 Tflops for 30 kW, while Cy-Tera delivers ~30 Tflops for 60 kW, making Cy-Tera roughly 4x more power efficient
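The ~4x efficiency claim follows directly from the Flops/Watt ratios of the two figures quoted above:

```python
# Power efficiency comparison (sketch from the slide's numbers).
planck_euclid = 4e12 / 30e3    # 4 Tflops at 30 kW  -> Flops/Watt
cy_tera = 30.5e12 / 60e3       # ~30.5 Tflops at 60 kW -> Flops/Watt

print(f"{cy_tera / planck_euclid:.1f}x")  # ~3.8x, i.e. roughly 4x
```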

Visualization at Bibliotheca Alexandrina
A visualization facility based on CAVE technology (Computer Aided Virtual Environment, FLEX system). Stereoscopic projection combined with high-resolution 3D computer graphics creates the illusion that the user is immersed in a virtual environment, giving better perception, and hence analysis, of the visualized data. Projection: Digital Projection HIGHlite 8000Dsx system (4 projectors).

Visualization at The Cyprus Institute
- Polynomial Texture Map (PTM) dome
- Stereoscopic projection system
- 3D TV

Available Software/Tools
Compilers and build tools: gcc, Intel, PGI, lf64; cmake (build system)
MPI: Open MPI, Intel MPI, MVAPICH2, MPICH2
Libraries:
- CUDA: parallel computing platform for GPU processing
- ATLAS, GotoBLAS: basic linear algebra routines
- GSL: C/C++ library with a wide variety of mathematical routines
- LAPACK: software library of numerical linear algebra routines
- FFTW: library for computing discrete Fourier transforms
- NetCDF: creation, access, and sharing of array-oriented scientific data
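As an illustration of what the DFT libraries in this list are for: FFTW itself is a C library, but an analogous check can be sketched with NumPy's FFT (assuming NumPy is available), recovering the frequency of a pure tone:

```python
import numpy as np

# Forward DFT of a 5 Hz tone sampled at 64 Hz over 1 second; the
# spectrum should peak at the 5 Hz bin -- the kind of transform
# FFTW computes (with the same bin layout).
n, fs, f = 64, 64.0, 5.0
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * f * t)

spectrum = np.fft.rfft(signal)
peak_bin = int(np.argmax(np.abs(spectrum)))
print(peak_bin)  # 5
```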

Available Software/Tools (continued)
- METIS: graph partitioning
- ParMETIS: MPI-based parallel library that extends METIS
- Gromacs: molecular dynamics package
- Scalasca: performance optimization tool for parallel programs
- ParaView: multi-platform data analysis and visualization application
- VMD: molecular modelling and visualization program
- OpenFOAM: computational fluid dynamics software package
- MEDICI: content management system for data sharing, mostly used for cultural heritage purposes (images, 3D image viewers, PTM viewers)
- UNICORE (Uniform Interface to Computing Resources): makes distributed computing and data resources available in a seamless and secure way across intranets and the internet

LinkSCEEM User Support
Primary user support is provided via an online helpdesk system. The helpdesk service can be reached at hpc-support@linksceem.eu; all user queries relating to LinkSCEEM computing resources should be directed to this address.

Thank you www.cyi.ac.cy CaSToRC