ITALIAN NATIONAL AGENCY FOR NEW TECHNOLOGY, ENERGY AND THE ENVIRONMENT: Infrastructure & Competences.

Presentation transcript:

ITALIAN NATIONAL AGENCY FOR NEW TECHNOLOGY, ENERGY AND THE ENVIRONMENT
ENEA ICT Infrastructure & Competences
Confindustria Monza Brianza meeting ( )

Contents  Infrastruttura ENEA-ICT & GRID/CLOUD  Approccio Metodologico “Virtual Lab” Silvio Migliori ENEA-Unità Tecnica Sviluppo sistemi per l'informatica e l'ICT (UTICT) L'infrastruttura ICT dei laboratori virtuali ENEA a supporto della ricerca industria & territorio

ENEA-GRID Story
- Born in 1998 with 6 Italian geographic clusters
- Main aims:
  - Give ENEA researchers a unified environment to use heterogeneous computers and application software in an easy way
  - Build a homogeneous ICT infrastructure, but with distributed and delegated control
  - Integrate ENEA-GRID with national and international GRIDs
[Slide diagram: grid layer model (Fabric, Connectivity, Resource, Collective, Application, User) linking computers, sensor nets, data archives, software catalogs, colleagues & 3D resources]

THE CONCEPT OF ENEA GRID INFRASTRUCTURE
[Slide diagram by Mark Ellisman: the Cell Centered Data Base "CCDB" linking imaging instruments, computational resources, multi-scale databases, the network, advanced computer graphics, data analysis and data acquisition]

Code | Application field
FLUENT, OpenFoam | CFD
ANSYS, ABAQUS, NASTRAN | Structures
FLUKA, ERANOS | Nuclear
MCNP, MCNPX | Neutronics
MpCCI | Fluid-structure interaction
USER codes | Any type and platform
[Slide diagram: web access to softwares and instruments]

ENEA-GRID concept
- Potentiality:
  - Run any software model on any kind of computer architecture
  - Only one installation of each software package serves all geographically distributed computers
  - All authorized users can access the software from their local site
  - Any application can share data through the common geographical file system
  - All sensors can easily share data using the geographical file system
[Slide diagram: ENEA-GRID (rules; software models 1, 2, 3, ...) spanning computers, software catalogs and sensor nets at Portici, Roma-1, Roma-2, Bologna, Brindisi, Trisaia and EU sites]
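To make the single-installation, shared-data idea above concrete: since the deck later names OpenAFS as the geographically distributed file system, here is a minimal sketch (the /afs path and file name are hypothetical) of how the same data could be read with the identical call from any ENEA-GRID site.

# Minimal sketch: reading shared data from a geographically distributed,
# OpenAFS-style file system. The path below is hypothetical; in an AFS cell
# every site sees the same namespace, so no site-specific logic is needed.
from pathlib import Path

SHARED_DATA = Path("/afs/example-cell/project/sensors/run_001.dat")  # hypothetical path

def read_shared_measurements(path: Path = SHARED_DATA) -> list[str]:
    """Return the measurement lines; the same call works from any site."""
    if not path.exists():
        raise FileNotFoundError(f"Shared volume not mounted or path missing: {path}")
    return path.read_text().splitlines()

if __name__ == "__main__":
    try:
        print(f"{len(read_shared_measurements())} records read from the shared volume")
    except FileNotFoundError as err:
        print(err)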

ENEA-GRID Computational & 3D Centers (more than 4000 cores)
Sites: Casaccia, Frascati, S. Teresa, Saluggia, Ispra, Bologna, Portici, Trisaia, Brindisi, Manfredonia
New: 1000-core AMD system, 12 cores/socket
[Map legend: marker size indicates the number of CPUs/cores per site]

CRESCO in the Top500 list, June 2008
Italian systems (06/2008):
Rank | Site
90 | CINECA
125 | ENEA
131 | ENI
135 | CILEA
156 | CINECA
226 | Telecom
[Slide note: Jack Dongarra, 2002]

CRESCO HPC INFRASTRUCTURE (Portici) - CRESCO peak: 28 TFlops
Section 1 (Large memory): 42 SMP nodes IBM x3850-M2 with 4 Xeon Quad-Core Tigerton E7330 (2.4 GHz/1066 MHz/6 MB L2), 32/64 GByte RAM - 672 Intel Tigerton cores in total
Section 2 (MPP): 256 blade nodes IBM HS21 with 2 Xeon Quad-Core Clovertown E5345 (2.33 GHz/1333 MHz/8 MB L2), 16 GB RAM - 2048 Intel Clovertown cores in total; new: 28 nodes (224 cores) Nehalem 2.4 GHz
Section 3 (Special): 4 blade nodes IBM QS21 with 2 Cell BE processors at 3.2 GHz each; 6 nodes IBM x3755, 8-core AMD 8222, with FPGA VIRTEX5; 4 nodes IBM x3755, 8-core AMD 8222, with NVIDIA Quadro FX 4500 X2; 4 Windows nodes, 8 cores, 16 GB RAM
Storage: 4 GPFS server nodes IBM 3650 (IB/FC); high-speed storage at 2 GByte/s, 160 TByte IBM/DDN 9550
Backup: 3 backup server nodes IBM 3650 (FC/IB); 300 TByte IBM Tape Library TS3500 with 4 drives
Interconnections: InfiniBand 4X DDR; double 1 Gbit interconnection plus a 1 Gbit management network; 4x10 Gbit; GARR (WAN) link
Services: 35 nodes for services (front-end, installations, AFS); new: 10 nodes for tools (FARO, Jobrama, Amaca)
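As a rough cross-check of the 28 TFlops peak quoted above, the compute sections can be added up from the core counts and clock rates on the slide; the figure of 4 double-precision floating-point operations per cycle per core is an assumption of this sketch, not a value stated in the deck.

# Rough cross-check of the quoted CRESCO peak performance.
# Assumption (not on the slide): 4 double-precision flops per cycle per core.
FLOPS_PER_CYCLE = 4

sections = {
    "Section 1 (Tigerton, 2.40 GHz)":        (672,  2.40e9),
    "Section 2 (Clovertown, 2.33 GHz)":      (2048, 2.33e9),
    "Section 2 add-on (Nehalem, 2.40 GHz)":  (224,  2.40e9),
}

total = 0.0
for name, (cores, clock_hz) in sections.items():
    peak = cores * clock_hz * FLOPS_PER_CYCLE
    total += peak
    print(f"{name}: {peak / 1e12:.2f} TFlops")

print(f"Total: {total / 1e12:.1f} TFlops (slide quotes ~28 TFlops)")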

CRESCO HPC hall

Computational capability of ENEA-GRID & CRESCO: a heterogeneous system with more than 4000 CPUs/cores
Code type | Resource
Serial | Section 1 & ENEA-GRID
Strongly coupled parallel | Section 2
Embarrassingly parallel | Sections 1, 2 & ENEA-GRID
Systems with several parallel codes | Sections 1, 2, Section 1+2
Special (FPGA, graphic boards, CELL) | Section 3
Windows (serial and parallel) | Section 3
Non-Linux | ENEA-GRID
Large "Data Base" data | Sections 1, 2, 3 and GPFS
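The mapping in the table above can be read as a simple lookup from job type to suggested resource; the helper below is hypothetical (it is not part of ENEA-GRID), it only encodes the categories and section names taken from the slide.

# Hypothetical helper encoding the job-type -> resource mapping shown above.
RESOURCE_MAP = {
    "serial": "Section 1 & ENEA-GRID",
    "strongly coupled parallel": "Section 2",
    "embarrassingly parallel": "Sections 1, 2 & ENEA-GRID",
    "several parallel codes": "Sections 1, 2, Section 1+2",
    "special (fpga, graphic boards, cell)": "Section 3",
    "windows (serial and parallel)": "Section 3",
    "non-linux": "ENEA-GRID",
    "large database": "Sections 1, 2, 3 and GPFS",
}

def suggest_resource(job_type: str) -> str:
    """Return the CRESCO/ENEA-GRID resource suggested for a given job type."""
    return RESOURCE_MAP.get(job_type.lower(), "unknown job type")

print(suggest_resource("serial"))  # -> Section 1 & ENEA-GRID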

Some CRESCO HPC speed-up results
[Slide plots: speed-up vs. number of processors for Fluent (commercial CFD/combustion code), OpenFoam (open source), a user code and CPMD]
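The speed-up plotted on this slide is conventionally defined as S(p) = T(1)/T(p); the sketch below shows how such curves are computed from measured wall-clock times, with parallel efficiency E(p) = S(p)/p. The timing values in the example are made up for illustration, not CRESCO measurements.

# Speed-up S(p) = T(1) / T(p) and parallel efficiency E(p) = S(p) / p,
# computed from wall-clock times. The times below are illustrative only.
def speedup(times_by_procs: dict[int, float]) -> dict[int, tuple[float, float]]:
    t1 = times_by_procs[1]
    return {p: (t1 / t, (t1 / t) / p) for p, t in sorted(times_by_procs.items())}

example_times = {1: 1000.0, 8: 140.0, 32: 40.0, 128: 13.0}  # seconds, illustrative
for p, (s, e) in speedup(example_times).items():
    print(f"{p:4d} procs: speed-up {s:6.1f}, efficiency {e:5.2f}")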

ENEA GRID and experimental facilities
[Slide diagram: access via WEB, ICA and SSH to ENEA GRID CPUs and databases (DB_1, DB_2, DB_3); DNA sequencing system (ABI Prism 3700) at Trisaia; 300 keV electron microscope at Brindisi (Sept. 2004); controlled nuclear fusion: FTU, Frascati Tokamak Upgrade, with video acquisition]

New ENEA-GRID feature under development: high-resolution microscope images
[Conceptual structure: logical access through AFS across sites (Site 1, Site 2, ...); MySQL structure; high-definition images from the microscope; CRESCO NX servers with graphic boards; new ENEA-GRID Java/web technology to deliver high-resolution images and 3D scenes over low-bandwidth internet links]
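One common way to serve high-resolution images over low-bandwidth links, mentioned here only as an illustration and not necessarily the ENEA implementation, is multi-resolution tiling: the client requests only the tiles visible at the current zoom level, so traffic scales with the viewport rather than the full image.

# Illustration of multi-resolution tiling (not necessarily the ENEA design):
# only the tiles covering the viewport at the requested zoom level are fetched.
import math

TILE = 256  # pixels per tile edge, a common choice

def tiles_for_view(img_w: int, img_h: int, zoom: int, view: tuple[int, int, int, int]):
    """Return (zoom, col, row) indices covering viewport (x, y, w, h) at level `zoom`."""
    scale = 2 ** zoom                       # level 0 = full resolution, higher = coarser
    level_w, level_h = img_w // scale, img_h // scale
    x, y, w, h = view
    cols = range(max(0, x // TILE), min(math.ceil((x + w) / TILE), math.ceil(level_w / TILE)))
    rows = range(max(0, y // TILE), min(math.ceil((y + h) / TILE), math.ceil(level_h / TILE)))
    return [(zoom, c, r) for r in rows for c in cols]

# A 40000 x 40000 TEM image viewed through a 1024 x 768 window at zoom level 4:
print(len(tiles_for_view(40000, 40000, 4, (0, 0, 1024, 768))))  # only a handful of tiles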

ENEA-GRID & CRESCO access: FARO
[Slide diagram: graphics terminals and front-ends for all operating systems connect through FARO]

ENEA-GRID & CRESCO access: FARO for Industry (Virtual LAB)
[Slide diagram: graphics terminals and front-ends for all operating systems reach the software through FARO for Virtual LAB (gateway)]

Conceptual structure from a remote user's point of view
ENEA GRID SOFTWARE INFRASTRUCTURE: AFS data integration (> 4 TByte), file server, streaming server, Citrix XenApp, Windows and Linux clusters, web server, NX + FARO, web conferencing, classroom, local GRID cluster
Main services for remote application users (NX + FARO) and remote operation users (special driver for TEM): view of TEM images, programs & data, supercomputing, 3D display, e-learning tools, remote operation, local data

SISM Course, 9-10 October 2007

A new ENEA-GRID & Virtual Lab
[Slide diagram: new materials & TEM (Li); new collaborations between ENEA, universities and research institutes, and enterprises]

Cloud Computing in ENEA-GRID
Phase one (consolidated experience): provision of services and software applications; Virtual Laboratories (web access to documentation and software specific to thematic areas); geographically distributed file system (OpenAFS)
Phase two (new experience, work in progress): virtualization

CMAST gathers all the CRESCO computational activities: ENEA researchers collaborate with industries and universities on several scientific activities, sharing codes and ideas
Activities: hydrogen storage and production; photovoltaics; metallic membranes; organic-inorganic adhesion; liquid and amorphous metallic and semiconductor systems; carbon- and silicon-based materials; tungsten and its alloys; quantum dots; materials under pressure; etc.
Codes: CPMD, CP2K, Quantum ESPRESSO, LAMMPS, GROMACS, home-made codes for intermetallics and semiconductors

OpenNebula and ENEA-GRID
Access integrated into the FARO portal; list of the available VMs
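As a minimal sketch of how the "list of available VMs" could be obtained programmatically, the snippet below calls the standard OpenNebula command-line client (onevm); it assumes the OpenNebula client tools are installed and authentication is already configured, and it does not reproduce how FARO presents the list in its portal.

# Minimal sketch: query the OpenNebula VM list via the standard `onevm` CLI.
# Assumes the OpenNebula client tools are installed and credentials are set up;
# the FARO portal integration itself is not shown here.
import subprocess

def list_vms() -> str:
    result = subprocess.run(
        ["onevm", "list"],          # standard OpenNebula command-line client
        capture_output=True, text=True, check=True,
    )
    return result.stdout

if __name__ == "__main__":
    try:
        print(list_vms())
    except (FileNotFoundError, subprocess.CalledProcessError) as err:
        print(f"OpenNebula CLI not available or call failed: {err}")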

To collaborate with ENEA ICT

ITALIAN NATIONAL AGENCY FOR NEW TECHNOLOGY, ENERGY AND THE ENVIRONMENT