INFN/IGI contributions Federated Clouds Task Force F2F meeting November 24, 2011, Amsterdam.



Deployment

[Slide diagram: a CREAM CE and the WNoDeS NameServer sit in front of the Batch Server Manager, which dispatches standard jobs to non-WNoDeS WNs and WNoDeS jobs to WNoDeS-enabled WNs running virtual machines]

– Scenario 1: one shared CREAM-CE; we will move to one dedicated CREAM-CE by the end of November
– Torque/MAUI batch scheduler
– One Name Server (a WNoDeS-dedicated service)
– Two physical servers, able to host up to 24 concurrent virtual nodes
– At the moment we have one single image with:
  – Scientific Linux SL release 5.7 (Boron) as operating system
  – 1 core per machine
  – 1.5 GB of memory
– We can have different images, one per VO; at present each image is shared among all the VOs (i.e., DTEAM, enmr.org). Customized images need to be agreed upon together with the site.
– Test status:
  – Functional test => OK, both with local account submission and Grid submission
  – Stress test => OK, a few submissions of ~2000 jobs in one shot, all of them completed fine
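The ~2000-job stress test above can be reproduced with a simple loop; the sketch below generates one JDL per job. The CE endpoint and JDL fields come from these slides, while the loop and file layout are assumptions; the actual submission line is commented out since it requires the glite tools and a valid VOMS proxy.

```shell
# Sketch of the stress test from the slide: prepare ~2000 JDLs in one shot.
# CE endpoint and JDL fields are from the slides; the loop itself is assumed.
CE="cremino.cnaf.infn.it:8443/cream-pbs-cloudtf"
NJOBS=2000
mkdir -p stress_test
for i in $(seq 1 "$NJOBS"); do
  cat > "stress_test/job_${i}.jdl" <<EOF
Executable = "run_job.sh";
StdOutput = "report_out_${i}.txt";
StdError = "report_err_${i}.txt";
InputSandbox = {"run_job.sh"};
OutputSandbox = {"report_out_${i}.txt", "report_err_${i}.txt"};
EOF
  # glite-ce-job-submit -a -r "$CE" "stress_test/job_${i}.jdl"
done
echo "Generated ${NJOBS} JDL files"
```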

Scenario 1

The provided testbed supports the provisioning of VMs by using the Grid.

Prerequisites:
– Users MUST belong to a given VO (e.g., DTEAM)
– Users MUST have an X.509 digital certificate

How to submit a user's job directly to the CE (e.g., cremino.cnaf.infn.it):
– edit the run_job.jdl file:

    Executable = "run_job.sh";
    Arguments = "arguments";
    StdOutput = "report_out.txt";
    StdError = "report_err.txt";
    InputSandbox = {"run_job.sh"};
    OutputSandbox = {"report_out.txt", "report_err.txt"};
    OutputSandboxBaseDestURI = "gsiftp://gftp2.ba.infn.it/lustre/cms/test_grid/";

– run the commands:

    > voms-proxy-init --voms dteam
    > glite-ce-job-submit -a -r cremino.cnaf.infn.it:8443/cream-pbs-cloudtf run_job.jdl
    > glite-ce-job-status

Scenario 1

The provided testbed supports the provisioning of VMs by using the Grid.

Prerequisites:
– Users MUST belong to a given VO (e.g., DTEAM)
– Users MUST have an X.509 digital certificate

How to submit a user's job to the CE (e.g., cremino.cnaf.infn.it):
– edit the run_job.sh file:

    #!/bin/bash
    hostname
    date
    sleep 20
    printenv

– edit the run_job.jdl file:

    Executable = "run_job.sh";
    #Arguments = "blast_pesole_09_2011 5"; ## Please put your VO as Argument
    StdOutput = "report_out.txt";
    StdError = "report_err.txt";
    InputSandbox = {"run_job.sh"};
    OutputSandbox = {"report_out.txt", "report_err.txt"};
    requirements = other.GlueCEPolicyMaxWallClockTime > 100 && RegExp(".*cremino.*cloudtf", other.GlueCEUniqueID);
    Rank = (other.GlueCEStateWaitingJobs == 0 ? other.GlueCEStateFreeCPUs : -other.GlueCEStateWaitingJobs);

– run the commands:

    > voms-proxy-init --voms dteam
    > glite-ce-job-submit -a -r cremino.cnaf.infn.it:8443/cream-pbs-cloudtf run_job.jdl
    > glite-ce-job-status
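The Rank expression in the JDL above prefers CEs with idle capacity: if no jobs are waiting, it ranks by free CPUs; otherwise it penalizes the CE by its queue length. Its logic can be sketched as a small shell function (illustrative only; the real evaluation is done by the middleware on the Glue attributes):

```shell
# Mimics: Rank = (WaitingJobs == 0 ? FreeCPUs : -WaitingJobs)
# Arguments: <free CPUs> <waiting jobs>. Illustrative sketch only.
rank() {
  local free="$1" waiting="$2"
  if [ "$waiting" -eq 0 ]; then
    echo "$free"            # empty queue: rank by free CPUs
  else
    echo "$(( -waiting ))"  # busy queue: more waiting jobs, lower rank
  fi
}
rank 8 0   # → 8
rank 8 3   # → -3
```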

Scenario 1

The testbed will support the OCCI interface, integrated in a Web-based portal, by mid-December:
– to retrieve information about a VM by using the VM's UUID
– to cancel a VM by using the VM's UUID

Prerequisites:
– Users MUST belong to a given VO (e.g., DTEAM)
– Users MUST have an X.509 digital certificate previously installed in the Web portal

How to interact with the Web-based portal:
– A user, after selecting the VO he/she belongs to, is authenticated via the VOMS service
– The user can then select a compute resource
– The user can then specify VM characteristics, such as CPU, RAM, and OS
– The user can then submit his/her request to instantiate the VM, which can be accessed via passwordless SSH
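Once the portal has instantiated the VM, the passwordless SSH access mentioned above can be verified non-interactively; a hedged sketch (the hostname and user below are placeholders, not values from the slides):

```shell
# Check whether a freshly instantiated VM accepts passwordless SSH.
# BatchMode=yes forbids password prompts, so the command fails fast
# instead of hanging. Host and user are illustrative placeholders.
check_vm_ssh() {
  local host="$1"
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "cloudadm@${host}" true 2>/dev/null; then
    echo "passwordless SSH to ${host}: OK"
  else
    echo "passwordless SSH to ${host}: not available"
  fi
}
check_vm_ssh vm.example.org
```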

Scenario 2

The provided testbed supports a shared NFS filesystem with 2 TB of available storage space.
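Before relying on the 2 TB shared area, a job can check the free space on the mount point; a minimal sketch (the path below is a placeholder, since the testbed's actual NFS mount point is not given in the slides):

```shell
# Report the available space (in KB) on a given mount point.
# /tmp is used as a stand-in for the shared NFS mount, which is
# not named in the slides.
nfs_avail_kb() {
  df -Pk "$1" | awk 'NR==2 {print $4}'
}
echo "Available space: $(nfs_avail_kb /tmp) KB"
```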

Scenario 3

The provided testbed publishes the GlueHostApplicationSoftwareRunTimeEnvironment attribute to advertise VM information, using egee-bdii.cnaf.infn.it as BDII.

For example, a user belonging to the VO DTEAM can get information about the Tags configured on the queue cremino.cnaf.infn.it:8443/cream-pbs-cloudtf as follows:

    $ lcg-info --vo dteam --list-ce --query 'CE=cremino.cnaf.infn.it:8443/cream-pbs-cloudtf' --attrs Tag
    - CE: cremino.cnaf.infn.it:8443/cream-pbs-cloudtf
      - Tag
        CNAF
        GLITE-3_0_0
        GLITE-3_1_0
        GLITE-3_2_0
        SF00MeanPerCPU_951
        SI00MeanPerCPU_1039
        VO-dteam-ScientificLinuxSLrelease5.7

This could be used in the future to let the user know which OS images are available on that queue.
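The Tag values returned by lcg-info mix middleware, benchmark, and VO-specific entries; filtering the VO-* entries is a simple way to pick out the OS image tags a VO can expect on the queue. A sketch using the sample output above:

```shell
# Tag values copied from the lcg-info output on the slide;
# VO-* entries identify VO-specific OS image tags.
TAGS='CNAF
GLITE-3_0_0
GLITE-3_1_0
GLITE-3_2_0
SF00MeanPerCPU_951
SI00MeanPerCPU_1039
VO-dteam-ScientificLinuxSLrelease5.7'
printf '%s\n' "$TAGS" | grep '^VO-'
# → VO-dteam-ScientificLinuxSLrelease5.7
```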

Scenario 4

The provided testbed supports accounting at the batch system level (i.e., PBS) and integration with the DGAS accounting system used by the Italian Grid infrastructure:
– Information is available via the HLRmon tool
– A given user can ONLY access his/her own data
– Aggregated data is sent to CESGA

Scenario 5

WNoDeS has an internal monitoring system for hypervisors, in relation with the supported batch system:
– The provided testbed uses the CREAM probe to integrate WNoDeS monitoring into Nagios, as shown at the Web page bin/status.cgi?navbarsearch=1&host=cremino.cnaf.infn.it
– It is accessible by users belonging to the VO dteam

In addition, it uses the mechanism described in Scenario 3.
