Overview of grid activities in France in relation to FKPPL
FKPPL Workshop, Thursday February 26th, 2009
Dominique Boutigny

FKPPL VO

Just one slide; see Soonwook's talk. Setting up the FKPPL VO was the first practical action carried out within the framework of FKPPL. The idea: set up a grid environment allowing the students attending the Seoul e-Science and Grid School to practice on a real, full-scale system. It has been a great success: the VO was set up very quickly, with good coordination between KISTI and CC-IN2P3 (decision at the end of July, first job in October). And the VO is actually used: 5000 jobs in 5 months and more than 9000 kSI2k.hours.
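As a rough scale check (my arithmetic, not a number from the slides), 9000 kSI2k.hours over five months of wall-clock time (about 3650 hours) corresponds to a sustained load of

\[ \frac{9000\ \mathrm{kSI2k{\cdot}hours}}{3650\ \mathrm{hours}} \approx 2.5\ \mathrm{kSI2k}, \]

i.e. roughly the equivalent of one or two contemporary CPU cores running continuously (assuming 1-2 kSI2k per core).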

LCG Resources at CC-IN2P3

Roughly equivalent to 305 servers (with 1 TB disks), or 34 racks.

Disk storage deployment

(Plot: disk storage deployed at CC-IN2P3; the annotation indicates growth by a factor of 6.7.)

Availability: average 92%.

Reliability: average 95%.

Strengthening the system

Considerable effort has been invested in 2008/2009 to strengthen the LCG computing infrastructure. Every piece is important: a single flaky component will ruin the whole system. We are starting to collect the fruits of this effort: ATLAS has run 3000 jobs in parallel. The weak point is clearly the SRM/dCache system, which handles data storage.

CC-IN2P3 is committed to collaborating with KISTI in order to develop the ALICE Tier-2. LHC CPU consumption in 2008: more than 17 M kSI2k.hours.

Work on interoperability: JSAGA (from Sylvain Reynaud)

Motivations for using several grid infrastructures:
- increasing the number of computing resources available to the user
- the need for resources with specific constraints: a super-computer, confidentiality, small overhead, interactivity
- the availability, on a given grid, of the data and of the software

JSAGA (from Sylvain Reynaud)

(Architecture diagram: a single job description enters JSAGA, which hides both infrastructure heterogeneity (e.g. EGEE, OSG, DEISA) and middleware heterogeneity (e.g. gLite, Globus, Unicore). The gLite plug-ins translate the description into JDL for submission to a WMS and LCG-CEs; the Globus plug-ins translate it into RSL for WS-GRAM. The engine also handles delegation, resource selection, and staging of input data via SRM and GridFTP, including across firewalls.)
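To make this concrete, here is a minimal sketch of job submission through the SAGA Java API that JSAGA implements. The endpoint URL is a hypothetical placeholder, and the exact scheme understood by the gLite plug-in is an assumption, not taken from the slides.

import org.ogf.saga.job.Job;
import org.ogf.saga.job.JobDescription;
import org.ogf.saga.job.JobFactory;
import org.ogf.saga.job.JobService;
import org.ogf.saga.url.URL;
import org.ogf.saga.url.URLFactory;

public class SubmitSketch {
    public static void main(String[] args) throws Exception {
        // Describe the job once, independently of the target middleware.
        JobDescription desc = JobFactory.createJobDescription();
        desc.setAttribute(JobDescription.EXECUTABLE, "/bin/hostname");
        desc.setAttribute(JobDescription.OUTPUT, "out.txt");

        // The URL scheme selects the plug-in; the engine then translates the
        // description into JDL (gLite) or RSL (Globus) as in the diagram above.
        // "wms://wms.example.org:7443/..." is a hypothetical endpoint.
        URL endpoint = URLFactory.createURL(
                "wms://wms.example.org:7443/glite_wms_wmproxy_server");
        JobService service = JobFactory.createJobService(endpoint);

        Job job = service.createJob(desc);
        job.run();       // submit through the selected plug-in
        job.waitFor();   // block until the job reaches a final state
        System.out.println("Final state: " + job.getState());
    }
}

The same description, pointed at a Globus gatekeeper URL instead, would go through the Globus plug-ins without any change to the application code.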

JSAGA (from Sylvain Reynaud)

The stack, from top to bottom: applications; ready-to-use software adapted to the targeted scientific field (job collections); the SAGA API, which hides the heterogeneity between grid infrastructures and between middlewares; and the JSAGA core engine plus its plug-ins, with as many interfaces as there are ways to implement each functionality, and as many interfaces as there are technologies in use. Each layer is aimed at a different role: end user, application developer, plug-ins developer.
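The "as many interfaces as used technologies" point can be illustrated with a deliberately simplified sketch; the interface and class names below are hypothetical, for illustration only, and are not JSAGA's actual adaptor API.

// Hypothetical illustration of the core-engine/plug-in split.
public interface JobAdaptor {
    String scheme();                          // e.g. "wms" for gLite, "gatekeeper" for Globus
    String translate(String sagaDescription); // SAGA attributes -> JDL or RSL
    String submit(String nativeDescription);  // returns a middleware-specific job id
}

// One adaptor per middleware; the generic engine picks the adaptor whose
// scheme matches the endpoint URL, so applications never see JDL or RSL.
class GliteJobAdaptor implements JobAdaptor {
    public String scheme() { return "wms"; }
    public String translate(String d) { return "/* JDL built from SAGA attributes */"; }
    public String submit(String jdl) { return "https://wms.example.org/job/42"; }
}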

A JSAGA application: JUX (Pascal Calvat and Sylvain Reynaud)

JUX is a file explorer designed to be independent of:
- the operating system: tested on Windows, Scientific Linux, Ubuntu, Mac
- the data management protocol: tested with gsiftp, srb, irods, http, https, sftp, zip, (srm)
- the security mechanism: tested with GSI, VOMS, login/password, X509, SSH

JUX is written entirely in Java, on top of JSAGA. For instance, it is possible to interactively handle files stored in SRM/dCache from my own laptop and to move them to another data storage system managed by a different grid middleware.
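As an example of the cross-protocol operations JUX builds on, here is a minimal sketch using the SAGA Java file API; both URLs are hypothetical placeholders, and the security context is assumed to come from the default SAGA session.

import org.ogf.saga.file.File;
import org.ogf.saga.file.FileFactory;
import org.ogf.saga.namespace.Flags;
import org.ogf.saga.url.URL;
import org.ogf.saga.url.URLFactory;

public class CopySketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical source on an SRM/dCache instance...
        URL src = URLFactory.createURL("srm://srm.example.org/pnfs/example/data.root");
        // ...and a hypothetical destination behind a GridFTP door.
        URL dst = URLFactory.createURL("gsiftp://gridftp.example.org/data/data.root");

        File file = FileFactory.createFile(src);     // default session supplies credentials
        file.copy(dst, Flags.OVERWRITE.getValue());  // protocol translation happens in the plug-ins
        file.close();
    }
}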

Accessing parallel computers through the EGEE middleware

Some applications need to run on parallel computers: molecular dynamics within WISDOM, lattice QCD, some astroparticle applications, etc. The EGEE middleware has mainly been designed to dispatch jobs to farms of serial computers; using parallel computers would require the ability to characterize the parallel nodes within the Information System. This is very relevant in the framework of FKPPL; a sketch of how such a job could be described follows.
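At the API level, the standard SAGA job description already carries the SPMD attributes a parallel job needs, as the sketch below shows; whether a given site can honour them is exactly the Information System issue above. The executable path is hypothetical, and the JDL mapping in the comment is an assumption about middleware-specific behaviour.

import org.ogf.saga.job.JobDescription;
import org.ogf.saga.job.JobFactory;

public class MpiDescriptionSketch {
    public static void main(String[] args) throws Exception {
        JobDescription desc = JobFactory.createJobDescription();
        desc.setAttribute(JobDescription.EXECUTABLE, "/opt/app/lqcd_solver"); // hypothetical path
        desc.setAttribute(JobDescription.SPMDVARIATION, "MPI");    // parallel flavour
        desc.setAttribute(JobDescription.NUMBEROFPROCESSES, "16"); // total MPI ranks
        desc.setAttribute(JobDescription.PROCESSESPERHOST, "8");   // ranks per node
        // A gLite back-end could map this to JDL along the lines of
        //   JobType = "MPICH"; NodeNumber = 16;
        // but the mapping is middleware-specific (an assumption, not from the talk).
        System.out.println("Described a 16-process MPI job");
    }
}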

Parallel computers at CC-IN2P3

CC-IN2P3 operates a small parallel farm: 232 CPU cores connected via Gigabit Ethernet. This farm will be upgraded this year to ~1000 CPU cores with a low-latency network (probably InfiniBand). Due to modern CPU design constraints (many cores per chip), using parallelism will become unavoidable, even in HEP applications. I consider gaining expertise in this area crucial for CC-IN2P3, and KISTI has this expertise!

Analysis interactive platform

This year CC-IN2P3 will build a powerful interactive analysis platform for fast event filtering (typically reading and processing AOD at 1 kHz for 50 users in parallel) and ROOT analysis. The architecture will be based on PROOF, probably together with xrootd, but other storage systems will be considered. A prototype will be set up in the coming weeks with existing hardware in order to validate the architecture; we will then build a full-scale system for LHC startup.
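As a rough sizing check (the ~100 kB AOD event size is my assumption, not a number from the slides), 50 users each reading events at 1 kHz implies an aggregate read bandwidth of about

\[ 50 \times 1000\ \mathrm{events/s} \times 100\ \mathrm{kB/event} \approx 5\ \mathrm{GB/s}, \]

which is what makes the RAM/SSD/HDD balance discussed on the next slide so important.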

Analysis interactive platform (continued)

ALICE is very enthusiastic about getting such a system at CC-IN2P3, which will complement the CERN Analysis Facility, so we will easily get user applications to test the system. We are also in contact with René Brun and the PROOF development team in order to set up something really powerful, with the right balance between RAM, SSD, and HDD. This is something that I propose to consider within the framework of FKPPL.