Presentation of the results khiat abdelhamid


Objectives Installation of Grid Services as a Grid Site Administrator; study of the Grid Monitoring System.

Plan Pre-requisites; study of grid computing services; installation of Grid Services as a Grid Site Administrator (UI, CE, WN); study of the Grid Monitoring System (CEMon, gstat, RTM); future objectives.

Grid services studied Workload Management System (WMS) – The purpose of the WMS is to accept user jobs, assign them to the most appropriate resources, record their status, and retrieve their output.

Grid services studied Computing Element (CE) is the service representing a computing resource. Its main functionality is job management (job submission, job control, etc.).

Grid services studied Worker Node (WN) is a set of clients required to run jobs sent by the Computing Element via the Local Resource Management System

Grid services studied Job Description Language (JDL) describes jobs to be submitted, specifying, for example, which executable to run and its parameters, the input and output files, and any requirements on the CE and the Worker Node.
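As an illustration, a minimal JDL file covering these fields might look like the following (a sketch; the executable and the Requirements expression are hypothetical, not taken from the tests below):

```
[
Type = "Job";
Executable = "/bin/hostname";
StdOutput = "std.out";
StdError = "std.err";
OutputSandbox = {"std.out", "std.err"};
# Example of a requirement on the CE: ask for a queue allowing > 60 min of CPU time
Requirements = other.GlueCEPolicyMaxCPUTime > 60;
]
```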

Grid services studied The gLite User Interface (UI) is a suite of clients and APIs that users and applications can use to access the gLite services.

Installation of Grid Services Before installation, the administrator must have knowledge of the Linux system and a minimum knowledge of networking. For this installation we used four 64-bit virtual machines running SL5.

Installation of Grid Services User interface Repositories used for this installation:
cd /etc/yum.repos.d/
wget \
  it.cnaf.infn.it/mrepo/repos/sl5/x86_64/dag.repo \
  ca.repo \
  it.cnaf.infn.it/mrepo/repos/sl5/x86_64/glite-ui.repo

Installation of Grid Services User interface Installation of the packages (noafs = without the Andrew File System):
yum groupinstall ig_UI_noafs
Install the CAs and GILDA utils:
yum install -y lcg-CA ntp
Configuration:
vi /opt/glite/yaim/examples/siteinfo/ig-site-info.def

Installation of Grid Services
# Hostname of the top level BDII
# INFN-GRID: Default BDII top for Italy (if you do not have your own)
BDII_HOST=egee-bdii.cnaf.infn.it
# Hostname of the PX
PX_HOST=myproxy.ct.infn.it
# Hostname of the RGMA server
MON_HOST=my-mon.$MY_DOMAIN
# Hostname of the RB
RB_HOST=my-rb.$MY_DOMAIN
# Space-separated list of VOs supported by your site
VOS="dteam ops eumed gilda infngrid"
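A quick sanity check on such a .def file can be scripted before running YAIM. The sketch below parses simple KEY=VALUE lines and verifies that the variables used above are present (the inline sample is hypothetical; real files contain many more variables):

```python
import re

# Variables the UI configuration above relies on
REQUIRED = ["BDII_HOST", "PX_HOST", "MON_HOST", "RB_HOST", "VOS"]

def parse_def(text):
    """Parse simple KEY=VALUE lines, ignoring comments and blank lines."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        m = re.match(r'([A-Za-z_][A-Za-z0-9_]*)=(.*)', line)
        if m:
            values[m.group(1)] = m.group(2).strip('"')
    return values

sample = """
# Hostname of the top level BDII
BDII_HOST=egee-bdii.cnaf.infn.it
PX_HOST=myproxy.ct.infn.it
MON_HOST=my-mon.example.org
RB_HOST=my-rb.example.org
VOS="dteam ops eumed gilda infngrid"
"""

conf = parse_def(sample)
missing = [k for k in REQUIRED if k not in conf]
print("missing:", missing)            # → missing: []
print("VOs:", conf["VOS"].split())
```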

Installation of Grid Services
/opt/glite/yaim/bin/ig_yaim -c \
  -s /opt/glite/yaim/examples/siteinfo/ig-site-info.def \
  -n ig_UI_noafs
Result:
INFO: Configuration Complete. [ OK ]
INFO: YAIM terminated succesfully.

Test Creation of an account on the UI server; installation of the certificate (usercert.pem, userkey.pem); generation of a VOMS proxy; submission of the first job.

Test
vi test.jdl
Type = "Job";
JobType = "Normal";
Executable = "/bin/sh";
Arguments = "test.sh";
StdOutput = "std.output";
StdError = "std.error";
Environment = {"PATH=$PATH:/usr/local/bin", "LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib"};
InputSandbox = {"test.sh"};
OutputSandbox = {"std.output", "std.error"};

Test
vi test.sh
#!/bin/sh
wdtofind="9dd4e461268c8034f5c8564e155c67a6"
alphabet="! \" # \$ % & ' ( ) * + , - . / : ; A B C D E F G H I J K L M N O P Q R S T U V W X Y Z [ \\ ] ^ _ \` a b c d e f g h i j k l m n o p q r s t u v w x y z { | } ~"
set -f   # disable globbing so characters such as * are not expanded to file names
for w1 in $alphabet
do
  # awk takes the last field, so both "hash" and "(stdin)= hash" output formats work
  pwdcrypt=`echo -n "$w1" | openssl md5 | awk '{print $NF}'`
  if [ "$pwdcrypt" = "$wdtofind" ]
  then
    echo "word found:$w1"
    exit
  fi
done
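For comparison, the same single-character brute force can be sketched in Python using only the standard library (hashlib stands in for the call to openssl md5):

```python
import hashlib
import string

# MD5 digest we are trying to invert (taken from test.sh above)
wdtofind = "9dd4e461268c8034f5c8564e155c67a6"

def find_word(target):
    # Try every printable ASCII character, like the $alphabet loop in test.sh
    for ch in string.printable:
        if hashlib.md5(ch.encode()).hexdigest() == target:
            return ch
    return None

print("word found:" + find_word(wdtofind))  # → word found:x
```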

Test
glite-wms-job-submit -a -e 01.pd.infn.it:7443/glite_wms_wmproxy_server test.jdl
glite-wms-job-status 01.pd.infn.it:9000/qpm6s9DZpDK7uvlgnQNXyw
glite-wms-job-output 01.pd.infn.it:9000/qpm6s9DZpDK7uvlgnQNXyw
cd /tmp/jobOutput/testgrid_qpm6s9DZpDK7uvlgnQNXyw/
vi std.output
word found:x

Installation of Grid Services Computing Element Repository settings for x86_64:
wget \
  cream_torque.repo

Installation of Grid Services Computing Element Installation of the packages:
yum install ig_CREAM_torque
yum install -y lcg-CA ntp
Install the host certificate. Configuration:
vi /opt/glite/yaim/examples/siteinfo/ig-site-info.def

Installation of Grid Services Computing Element Configuration of the users:
vi /opt/glite/yaim/examples/ig-users.conf
4401:gilda001:4400:gilda:gilda::
4402:gilda002:4400:gilda:gilda::
4403:gilda003:4400:gilda:gilda::
4404:gilda004:4400:gilda:gilda::
4405:gilda005:4400:gilda:gilda::
4406:gilda006:4400:gilda:gilda::
4407:gilda007:4400:gilda:gilda::
4408:gilda008:4400:gilda:gilda::
[..]
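To the best of my understanding, each line of users.conf follows the YAIM pool-account convention UID:LOGIN:GID:GROUP:VO:FLAG:. A small parser sketch (field names are my reading of that convention):

```python
def parse_users_conf(lines):
    """Parse YAIM users.conf lines of the form UID:LOGIN:GID:GROUP:VO:FLAG:."""
    accounts = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        uid, login, gid, group, vo, flag = line.split(":")[:6]
        accounts.append({"uid": int(uid), "login": login,
                         "gid": int(gid), "group": group,
                         "vo": vo, "flag": flag})
    return accounts

sample = [
    "4401:gilda001:4400:gilda:gilda::",
    "4402:gilda002:4400:gilda:gilda::",
]
for acc in parse_users_conf(sample):
    print(acc["login"], acc["uid"], acc["vo"])
```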

Installation of Grid Services Computing Element Configuration of the groups:
vi /opt/glite/yaim/examples/ig-groups.conf
"/eumed/ROLE=SoftwareManager":::sgm:
"/eumed"::::
"/gilda/ROLE=SoftwareManager":::sgm:
"/gilda/grelc/das/*":gilda:::
"/gilda"::::

Installation of Grid Services
vi /opt/glite/yaim/examples/siteinfo/services/glite-creamce
# CE-monitor host (by default CE-monitor is installed on the same machine as
# the cream-CE)
CEMON_HOST=${CE_HOST}
#
# CREAM database user
CREAM_DB_USER=creamdbuser
CREAM_DB_PASSWORD=KualaLumpur
#
# Machine hosting the BLAH blparser.
# The batch system logs must be accessible on this machine.
BLPARSER_HOST=${CE_HOST}
vi /opt/glite/yaim/examples/wn-list.conf
vm02.ct.infn.it

Installation of Grid Services Computing Element Configure site /opt/glite/yaim/bin/ig_yaim -c -s /opt/glite/yaim/examples/siteinfo/ig-site-info.def -n ig_CREAM_torque

Installation of Grid Services Worker nodes Repository settings for x86_64:
wget \
  wn_torque.repo

Installation of Grid Services Worker nodes Installation of the packages:
yum groupinstall ig_WN_torque_noafs
yum install -y lcg-CA ntp
Copy the ig-site-info.def from the CE to the WN.
vi /opt/glite/yaim/examples/wn-list.conf
vm02.ct.infn.it

Installation of Grid Services worker nodes Configure site /opt/glite/yaim/bin/ig_yaim -c -s /opt/glite/yaim/examples/siteinfo/ig-site-info.def -n ig_WN_torque_noafs

Study of Grid Monitoring System CEMon The CEMon-BDII server hosted on the BDII acquires LDIF-formatted information from several Computing Elements (via CEMon).

Study of Grid Monitoring System CEMon-BDII Aggregator Components CEMon Consumer This process waits for connections from CEMons running on Computing Elements. It then saves the data sent to a file in the CEMon consumer's own data space. The file name is the FQDN reported by the CEMon consumer.

Study of Grid Monitoring System CEMon-BDII Aggregator Components Translator script A translator script runs periodically via cron. It scans the CEMon raw data directory for CEMon data files (LDIF) from active registered Computing Elements that are less than N minutes old (currently 10 minutes), and converts the LDIF data to a format acceptable to the BDII.
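The age filter applied by the translator script can be sketched as follows (a standalone illustration; the 10-minute window comes from the description above, the directory layout is assumed):

```python
import os
import time

MAX_AGE_SECONDS = 10 * 60  # "less than N minutes old (currently 10 minutes)"

def fresh_ldif_files(raw_dir, now=None):
    """Return files in raw_dir modified within the last MAX_AGE_SECONDS."""
    now = time.time() if now is None else now
    fresh = []
    for name in os.listdir(raw_dir):
        path = os.path.join(raw_dir, name)
        if os.path.isfile(path) and now - os.path.getmtime(path) < MAX_AGE_SECONDS:
            fresh.append(path)
    return sorted(fresh)
```

With this filter, a CE whose CEMon stops publishing simply drops out of the aggregated view once its file ages past the window.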

Study of Grid Monitoring System LDAP Queries on BDII Server The information is primarily accessed via LDAP queries against the LDAP servers.
Request:
ldapsearch -x -LLL -H ldap://sibilla.cnaf.infn.it:2170 -b mds-vo-name=INFN-CNAF,o=grid 'objectClass=GlueSite'
Result:
dn: GlueSiteUniqueID=INFN-CNAF,Mds-Vo-name=INFN-CNAF,o=grid
GlueSiteSecurityContact:
GlueSiteSponsor: none
objectClass: GlueTop
objectClass: GlueSite
objectClass: GlueKey
objectClass: GlueSchemaVersion
GlueSiteSysAdminContact:
GlueSiteDescription: predeployment site
GlueSiteUniqueID: INFN-CNAF
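The flat attribute: value lines of such a result are easy to post-process with a few lines of Python (a sketch; the inline sample is a trimmed copy of the GlueSite record above):

```python
def parse_ldif_entry(text):
    """Parse one LDIF entry into a dict of attribute -> list of values."""
    entry = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or ":" not in line:
            continue
        attr, _, value = line.partition(":")
        entry.setdefault(attr, []).append(value.strip())
    return entry

sample = """\
dn: GlueSiteUniqueID=INFN-CNAF,Mds-Vo-name=INFN-CNAF,o=grid
objectClass: GlueTop
objectClass: GlueSite
GlueSiteDescription: predeployment site
GlueSiteUniqueID: INFN-CNAF
"""

entry = parse_ldif_entry(sample)
print(entry["GlueSiteUniqueID"])  # → ['INFN-CNAF']
```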

Study of Grid Monitoring System gstat gstat provides a way to visualize a grid infrastructure from an operational perspective, based on information found in the grid information system (BDII).

Study of Grid Monitoring System gstat Repository settings for x86_64:
wget \
  BDII.repo

Study of Grid Monitoring System gstat Installation:
rpm -Uvh SA1/centos5/$(uname -i)/sa1-release-2-1.el5.noarch.rpm
yum install -y MySQL-shared-compat MySQL-client-community MySQL-server-community
yum install -y gstat-web ca_policy_igtf-classic
yum install -y nagios nagios-plugins nagios-plugins-nrpe nagios-devel

Study of Grid Monitoring System gstat Create the GStat database and grant access to the gstat user:
mysql
mysql> CREATE DATABASE gstat;
mysql> GRANT ALL PRIVILEGES ON gstat.* TO IDENTIFIED BY 'GLOBAL_PASSWORD';
Edit the /etc/gstat/gstat.ini file and ensure that DATABASE_PASSWORD and the other database parameters match the values used to create the GStat database:
vi /etc/gstat/gstat.ini
DATABASE_PASSWORD:GLOBAL_PASSWORD...
Create a password for MySQL:
mysqladmin password GLOBAL_PASSWORD
Set up the database tables:
/usr/share/gstat/manage syncdb

Study of Grid Monitoring System Create a configuration file containing the details of the reference BDII:
vi /etc/gstat/ref-bdii.conf
BDII_HOST=liknayan.pscigrid.gov.ph
BDII_PORT=2170
BDII_BIND=mds-vo-name=PH-ASTI-LIKNAYAN,o=grid
GSTAT_PROD=TRUE
Create a site-info.def file for use with YAIM. Use your own DN for NAGIOS_ADMIN_DNS:
vim /opt/glite/yaim/bin/site-info.def
INSTALL_ROOT=/opt
SITE_NAME=PSciGrid
NAGIOS_HTTPD_ENABLE_CONFIG=true
NAGIOS_NAGIOS_ENABLE_CONFIG=true
NAGIOS_CGI_ENABLE_CONFIG=true
VOS="euasia dteam ops"
NAGIOS_ADMIN_DNS="/C=TW/O=AP/OU=GRID/CN=Rey Vincent Babilonia "
VO_EUASIA_VOMS_SERVERS='vomss://voms.grid.sinica.edu.tw:8443/voms/euasia?/euasia'
VO_DTEAM_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/dteam?/dteam'
VO_OPS_VOMS_SERVERS='vomss://voms.cern.ch:8443/voms/ops?/ops'

Study of Grid Monitoring System gstat Create the initial Nagios configuration files and populate the database:
gstat-update configure-nagios
Run YAIM:
./yaim -c -s site-info.def -n GSTAT

Study of Grid Monitoring System Usage Open your Web browser and go to

Study of Grid Monitoring System RTM I have only just begun with RTM. The Real Time Monitor (RTM) is a high-level monitoring tool that aggregates information on grid jobs and presents it in a convenient graphical form, displaying the job distribution on a map of the Earth.

Future objectives Learn more about grid computing; learn more about monitoring; learn Liferay; develop a monitoring tool using Liferay.

Thank you