The Evolution of the Italian HPC Infrastructure
Carlo Cavazzoni, CINECA – Supercomputing Application & Innovation
31 March 2015

Name: Fermi
Architecture: BlueGene/Q (10 racks)
Processor type: IBM PowerA2, 1.6 GHz
Computing nodes: 10,240 (each node: 16 cores and 16 GB of RAM)
Computing cores: 163,840
RAM: 1 GByte/core (163 TByte in total)
Internal network: 5D torus
Disk space: 2 PByte of scratch space
Peak performance: 2 PFlop/s
Power consumption: 820 kWatt
No. 7 in the Top500 list (June 2012)
Available through national and PRACE Tier-0 calls
FERMI: high-end system, only for extremely scalable applications
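As a sanity check on the aggregate figures above, here is a minimal Python sketch that derives the core and memory totals from the per-node specification; it assumes the standard BlueGene/Q packaging of 1,024 nodes per rack, which is not stated on the slide.

```python
# Rough check of the FERMI aggregate figures from the per-node specs.
# Assumption (not on the slide): a BlueGene/Q rack holds 1,024 compute nodes.

RACKS = 10
NODES_PER_RACK = 1024          # assumed BG/Q packaging
CORES_PER_NODE = 16
RAM_PER_NODE_GB = 16

nodes = RACKS * NODES_PER_RACK               # 10,240 nodes
cores = nodes * CORES_PER_NODE               # 163,840 cores
ram_tib = nodes * RAM_PER_NODE_GB / 1024     # ~160 TiB (~163 TB decimal), i.e. 1 GB/core

print(f"nodes = {nodes}, cores = {cores}, total RAM ~ {ram_tib:.0f} TiB")
```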

GALILEO
Name: Galileo
Model: IBM NeXtScale
Architecture: IBM NeXtScale
Processor type: Intel Xeon, 2.4 GHz
Computing nodes: 516 (each node: 16 cores, 128 GB of RAM)
Computing cores: 8,256
RAM: 66 TByte
Internal network: Infiniband 4x QDR switches (40 Gb/s)
Accelerators: 768 Intel Xeon Phi 7120P (2 per node on 384 nodes) + 80 NVIDIA K80
Peak performance: 1.2 PFlop/s
Available through national and PRACE Tier-1 calls
X86-based system for production runs of medium-scalability applications
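The aggregate figures on this slide follow directly from the per-node numbers; a short sketch of the arithmetic, using only values taken from the slide itself:

```python
# Derive GALILEO aggregate figures from the per-node specification on the slide.

NODES = 516
CORES_PER_NODE = 16
RAM_PER_NODE_GB = 128
PHI_NODES = 384                  # nodes equipped with 2 Intel Xeon Phi 7120P each

cores = NODES * CORES_PER_NODE               # 8,256 cores
ram_tb = NODES * RAM_PER_NODE_GB / 1000      # ~66 TB
phi_cards = PHI_NODES * 2                    # 768 accelerator cards

print(f"cores = {cores}, RAM ~ {ram_tb:.0f} TB, Xeon Phi cards = {phi_cards}")
```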

PICO
Name: Pico
Model: IBM NeXtScale
Architecture: Linux Infiniband cluster
Processor type: Intel Xeon (Ivy Bridge)
Computing nodes: 66+ (each node: 20 cores, 128 GB of RAM)
RAM: 6.4 GB/core
Plus: 2 visualization nodes, 2 big-memory nodes, 4 BigInsight nodes
Purpose: storage and processing of large volumes of data
Storage:
50 TByte of SSD
5 PByte on-line repository (on the same fabric as the cluster)
16 PByte of tape
Services:
Hadoop & PBS
OpenStack cloud
NGS pipelines
Workflows (weather/sea forecast)
Analytics
High-throughput workloads

Infrastructure Evolution

[Diagram: the current "HPC island" infrastructure. Two HPC engines (FERMI and an x86 cluster), each with its own workspace, sit behind a front-end cluster providing database, web, archive/FTP, repository and tape services; external data sources include laboratories, PRACE, EUDAT, the Human Brain Project and other data sources.]

[Diagram: the data-centric infrastructure. A core data-processing system (Pico, 7 PByte) with visualization, big-memory, database, data-mover/processing and web/archive/FTP service nodes is coupled to a core data store (5 PByte repository plus 12 PByte of tape). Around it sit cloud services, scale-out data processing, FERMI and the Tier-1 system; external data sources (laboratories, PRACE, EUDAT, Human Brain Project, others) feed the infrastructure, which hosts SaaS, analytics and parallel applications.]

Next Tier-0 system (late 2015)
Fermi, at present our Tier-0 system, is reaching the end of its normal life. It will be replaced by another system of comparable standing, to fulfil our commitments at the Italian and European level (order of magnitude: 50 PFlops or 50 M€). Since the BG/Q architecture is no longer in IBM's development plans, the replacement technology has not yet been identified.

Computing infrastructure roadmap
Today: Tier-0: Fermi; Tier-1: Galileo; BigData: Pico
1Q 2016: Tier-0: new system (HPC Top10); BigData: Galileo/Pico
2018: combined Tier-0/BigData system: 50 PFlops, 50 PByte

How to get HPC resources
Peer-reviewed projects: you submit a project proposal that is reviewed; if it is approved you obtain the requested resources free of charge.
National: ISCRA
Europe: PRACE
No selection: some institutions sign special-purpose R&D agreements with CINECA to gain access to the HPC resources.

Peer-reviewed selection
ISCRA (Italian SuperComputing Resource Allocation): open to Italian researchers.
PRACE (Partnership for Advanced Computing in Europe): open to European researchers.

PICO
Tape: 12 PB, to be expanded to 16 PB; the new hardware (10 drives) should guarantee 2.5 GB/s of throughput.
Disks: 5 PB of distributed storage (GPFS) to be used across different platforms; servers for tiering and data migration.
Compute: ~70 NeXtScale nodes, 20 cores each, Intel Xeon E5 v2 "Ivy Bridge", 128 GB of memory per node; 4 BigInsight nodes with 40 TB of SSD disk.
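A quick sketch of the tape-throughput arithmetic; the per-drive rate is an assumption inferred from the aggregate figure on the slide, not stated on it.

```python
# Back-of-the-envelope check of the tape subsystem throughput.
# Assumption: the 2.5 GB/s aggregate figure implies ~250 MB/s per drive,
# in line with tape drives of that period.

DRIVES = 10
PER_DRIVE_MB_S = 250             # assumed sustained rate per drive

aggregate_gb_s = DRIVES * PER_DRIVE_MB_S / 1000
print(f"aggregate throughput ~ {aggregate_gb_s:.1f} GB/s")   # ~2.5 GB/s
```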

"BigData": software configuration
New services will be defined on this system, taking advantage of its peculiarities:
Low parallelism (fewer cores than the other systems, but more cores per node)
Memory intensive (more memory per core and per node)
I/O intensive (SSD disks available)
DB based (a lot of storage space)
New application environments: bioinformatics, data analysis, engineering, quantum chemistry
General services: remote visualisation, web access to HPC, HPC cloud
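To make the "memory intensive" profile concrete, here is a minimal Python sizing sketch against the per-node memory of a Pico-like node (20 cores, 128 GB of RAM, i.e. 6.4 GB/core, as on the PICO slide); the dataset size and headroom factor are hypothetical example values, not CINECA figures.

```python
import math

# Illustrative sizing sketch for a memory-intensive job on a Pico-like node
# (20 cores and 128 GB of RAM per node, as on the PICO slide).
# The dataset size and headroom factor below are made-up example values.

CORES_PER_NODE = 20
RAM_PER_NODE_GB = 128

def nodes_needed(dataset_gb: float, headroom: float = 1.5) -> int:
    """Number of nodes whose aggregate RAM holds the dataset plus headroom."""
    return math.ceil(dataset_gb * headroom / RAM_PER_NODE_GB)

if __name__ == "__main__":
    dataset_gb = 600                       # hypothetical in-memory dataset
    n = nodes_needed(dataset_gb)
    print(f"{dataset_gb} GB dataset -> {n} nodes "
          f"({n * CORES_PER_NODE} cores, {n * RAM_PER_NODE_GB} GB RAM)")
```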