THE NAPLES GROUP: RESOURCES
- SCoPE datacenter of more than 3,000 CPU cores and 300 TB of storage, with InfiniBand and an MPI library, supporting the Fast Simulation activity and the "Electron Cloud Effects in SuperB" study in collaboration with the Frascati group and Dr. Theo Demma (an MPI usage sketch follows below)
- ATLAS Tier2: about … CPUs supporting the superbvo.org VO for FastSim
- A farm of 12 PowerEdge R510 servers with a dedicated 10 Gbit/s network for R&D activity
- A new NVIDIA PCI expander with 4 Tesla GPGPUs for R&D
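
Purely as a hedged illustration of how the SCoPE MPI installation could serve an event-parallel FastSim-style job (assuming the mpi4py Python bindings are available on the cluster; the event counts and the simulate_events() stand-in are hypothetical, not SuperB code), each rank processes its own slice of events and rank 0 aggregates the partial results:

    # Illustrative event-parallel skeleton; simulate_events() is a
    # placeholder for real per-event physics, not SuperB code.
    from mpi4py import MPI

    def simulate_events(first_event, count):
        # Stand-in "physics": a deterministic per-event pseudo-result.
        return sum((first_event + i) % 97 for i in range(count))

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    TOTAL_EVENTS = 100000
    per_rank = TOTAL_EVENTS // size   # remainder ignored for brevity
    local_result = simulate_events(rank * per_rank, per_rank)

    # Gather the partial results on rank 0 and combine them.
    results = comm.gather(local_result, root=0)
    if rank == 0:
        print("ranks:", size, "combined result:", sum(results))

Launched with something like mpirun -np 16 python fastsim_sketch.py, the event slices run concurrently across the InfiniBand-connected nodes.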

THE NAPLES GROUP: ACTIVITY
- Development of the multilevel monitoring system covering all the main aspects: data, computing, storage, network, and middleware (a probe sketch follows below)
- Software distribution framework for SuperB: CVMFS, in testing
- Study of new computing topologies using GPGPUs
- Design and promotion of the new distributed computing facility in southern Italy (PON RECAS 2011: INFN, UniNA, and UniBA)
- R&D in storage and data management: new cluster setup and new network topology
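
As a minimal sketch of the kind of per-node probe such a multilevel monitoring system needs (the metric names and JSON record layout are assumptions for illustration, not the actual Naples implementation), the computing and storage levels can be sampled with nothing beyond the Python standard library:

    # Hypothetical per-node probe covering the computing and storage
    # levels; a real system would add data, network and middleware checks.
    import json, os, shutil, socket, time

    def collect(storage_path="/"):
        disk = shutil.disk_usage(storage_path)   # storage level
        load1, load5, load15 = os.getloadavg()   # computing level (Unix)
        return {
            "host": socket.gethostname(),
            "time": int(time.time()),
            "computing": {"load1": load1, "load5": load5, "load15": load15},
            "storage": {"path": storage_path,
                        "used_fraction": round(disk.used / disk.total, 3)},
        }

    if __name__ == "__main__":
        # Printed here for illustration only.
        print(json.dumps(collect(), indent=2))

In a real deployment each record would be shipped over the network to a central collector rather than printed locally.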

THE RECAS PROJECT

Characteristics of the «call for projects»
- The P.O.N. (National Operational Programme) is funded by the EU
- Funding is strictly limited to the 4 southern regions
- Projects must be submitted by Aug. 11th
- Funding request: min 15 M€, max 45 M€
- Only for «large» infrastructure, including grid
- About 9.5% is earmarked for educational purposes

Characteristics of our project
- Amount requested: 44.5 M€
- Participants:
  – INFN (NA, BA, CT, CS)
  – UniNA
  – UniBA
- Shares: INFN 22.5 M€, UniNA 11 M€, UniBA 11 M€ (together matching the 44.5 M€ requested)

Key persons
- L. Merola (NA – head, empowering project)
- R. Bellotti (BA – head, educational project)
- G. Russo (NA – relationship with the funding agency; proposal submission and follow-up)
- One administrative officer for each partner