
1 SEE-GRID-SCI Hands-On Session: Computing Element (CE) and Site BDII Installation and Configuration
Dusan Vudragovic, Institute of Physics, Serbia, dusan@scl.rs
www.see-grid-sci.eu
The SEE-GRID-SCI initiative is co-funded by the European Commission under the FP7 Research Infrastructures contract no. 211338.
Regional SEE-GRID-SCI Training for Site Administrators, Institute of Physics Belgrade, March 5-6, 2009

2 Overview
A CE is a grid batch queue with a "grid gate" front-end: job requests arrive at the gatekeeper on the grid gate node, which interacts with the information system, the logging and bookkeeping (L&B) service, and authentication & authorization (A&A); a local resource management system (Condor / PBS / LSF master) then dispatches jobs to a homogeneous set of worker nodes (WNs), while a local information system publishes the site's status.
[Slide diagram: gatekeeper, local information system, logging, and A&A on the grid gate node; the LRMS master feeding the WNs.]

3 OS installation & configuration
 The newest Scientific Linux 4 series (currently 4.7) should be installed, not SL5.
 Only the 32-bit distribution is supported by lcg-CE and BDII_site so far.
 We have chosen to install all base packages from the 5 SL4.7 CDs and then remove the unnecessary ones.
 Packages unlikely to be used (e.g. openoffice.org) should be removed to speed up future software updates.
 Remove all LAM and OpenMPI packages; we will be using MPICH (see the command sketch below).
 Remove the java-1.4.2-sun-compat package!
 A virtual environment is a possible solution.
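
One way to do the cleanup, as a minimal sketch (the patterns assume stock SL4.7 package names; verify with rpm -qa before removing):

    # Remove bulky packages that grid nodes will not use
    rpm -qa | grep -i '^openoffice' | xargs -r rpm -e
    # Remove competing MPI stacks; MPICH will be used instead
    rpm -qa | egrep -i '^(lam-|openmpi)' | xargs -r rpm -e
    # Remove the conflicting Java compatibility package
    rpm -e java-1.4.2-sun-compat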

4 Further OS tuning
Adjust the services/daemons started at boot time:
 it is recommended to change the default runlevel to 3 in /etc/inittab
 disable yum auto-update, since automatic updates may bring trouble when new gLite updates appear
 if you install MPI_CE, it is suggested to disable SELinux by replacing "SELINUX=enforcing" with "SELINUX=disabled" in /etc/selinux/config
Configure the NTP service (a sample configuration is sketched below):
 an example configuration file /etc/ntp.conf can be found at http://glite.phy.bg.ac.yu/GLITE-3/ntp.conf
 touch /etc/ntp.drift /etc/ntp.drift.TEMP
 chown ntp.ntp /etc/ntp.drift /etc/ntp.drift.TEMP
 chkconfig ntpd on
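
If the example file is not at hand, a minimal /etc/ntp.conf along these lines should do; the pool servers are placeholders, prefer NTP servers close to your site:

    # Upstream time sources (placeholders; use nearby servers)
    server 0.pool.ntp.org
    server 1.pool.ntp.org
    # Drift file created above
    driftfile /etc/ntp.drift
    # Do not serve or accept control traffic by default
    restrict default nomodify notrap noquery
    restrict 127.0.0.1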

5 Repository adjustment
The DAG repo should be enabled by changing "enabled=0" to "enabled=1" in /etc/yum.repos.d/dag.repo.
The base SL repos must be PROTECTED, so that DAG packages cannot replace them: add the line "protect=1" to /etc/yum.repos.d/sl.repo and /etc/yum.repos.d/sl-errata.repo.
The following new files must be created in /etc/yum.repos.d:
 lcg-ca.repo (Certification Authority packages)
 glite.repo (all gLite packages)
 jpackage5.0.repo (Java packages)
 the contents of these files follow...
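
As a sketch of the expected shape of such a file (the baseurl below is an assumption for illustration; take the real repository URLs from the training material or from http://rpm.scl.rs/yum.conf/):

    # /etc/yum.repos.d/lcg-ca.repo -- illustrative only; verify the baseurl
    [lcg-ca]
    name=LCG Certification Authorities
    baseurl=http://linuxsoft.cern.ch/LCG-CAs/current
    enabled=1
    gpgcheck=0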

6 Repository adjustment
A local repository at SCL has been available since November 2008. Configuration files for the majority of the repos can be found at http://rpm.scl.rs/yum.conf/

7 File system import/export
Application software filesystem:
 – All WNs must have a shared application software filesystem where the VO SGMs (software grid managers) will install VO-specific software.
 – If it is located on the CE itself, the following (or a similar) line must be appended to /etc/exports:
   /opt/exp_soft 147.91.12.0/255.255.255.0(rw,sync,no_root_squash)
 – If you want to mount the application software filesystem from another node (usually the SE), append this line to /etc/fstab:
   se.csk.kg.ac.yu:/opt/exp_soft /opt/exp_soft nfs hard,intr,nodev,nosuid,tcp,timeo=15 0 0
 – Do not forget to create the /opt/exp_soft directory!
Shared /home filesystem:
 – To provide proper MPI support, the entire /home must be shared among the WNs.
 – The procedure is the same as for the application software filesystem.
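
Applying the export on the CE, as a short sketch (the network below is the example one from the slide; adjust it to your site's WN subnet):

    # Create the shared directory and export it to the WN subnet
    mkdir -p /opt/exp_soft
    echo '/opt/exp_soft 147.91.12.0/255.255.255.0(rw,sync,no_root_squash)' >> /etc/exports
    # Re-export and make sure NFS runs now and on boot
    exportfs -ra
    chkconfig nfs on
    service nfs restart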

8 gLite software installation
A valid host certificate must be present in /etc/grid-security.
gLite software binaries, libraries, and other files are organized using a meta-package paradigm. To install the necessary packages for an lcg-CE/BDII node with MPI support, the following meta-packages must be installed:
 glite-BDII
 lcg-CE
 glite-TORQUE_server
 glite-TORQUE_utils
 glite-MPI_utils
Due to a temporary packaging inconsistency in glite-MPI_utils (described in the link), the YUM command line must be:
 yum install lcg-CE glite-BDII glite-TORQUE_server glite-TORQUE_utils glite-MPI_utils torque-2.1.9-4cri.slc4 maui-client-3.2.6p19_20.snap.1182974819-4.slc4 maui-server-3.2.6p19_20.snap.1182974819-4.slc4 maui-3.2.6p19_20.snap.1182974819-4.slc4 torque-server-2.1.9-4cri.slc4 torque-client-2.1.9-4cri.slc4
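
Before running yum, it is worth checking that the host certificate and key are in place with the usual permissions (standard gLite file names and modes):

    ls -l /etc/grid-security/hostcert.pem /etc/grid-security/hostkey.pem
    # The certificate should be world-readable, the key readable by root only
    chmod 644 /etc/grid-security/hostcert.pem
    chmod 400 /etc/grid-security/hostkey.pem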

9 SSH configuration
SSH must allow host-based authentication between the CE and the WNs, as well as among the WNs themselves. This is especially important if the grid site supports MPI.
A helper script shipped with gLite can be found at /opt/edg/sbin/edg-pbs-knownhosts; its configuration can be adjusted in /opt/edg/etc/edg-pbs-knownhosts.conf.
Put all relevant FQDNs into /etc/ssh/shosts.equiv. This is the standard procedure for host-based SSH.
The identical procedure applies to all WNs.
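
A minimal sketch of the server-side SSH settings involved (site policies differ, and YAIM also adjusts SSH configuration, so treat this as orientation rather than a recipe):

    # /etc/ssh/sshd_config -- lines relevant to host-based authentication
    HostbasedAuthentication yes
    IgnoreRhosts yes

    # Keep the PBS-derived known hosts fresh, e.g. periodically via cron:
    # /opt/edg/sbin/edg-pbs-knownhosts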

10 gLite configuration
All grid services must be configured properly using the YAIM tool. Official info is available at https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400
Templates for the input YAIM files can be taken from https://viewvc.scl.rs/viewvc/yaim/trunk/?root=seegrid
Since YAIM is mainly a set of bash scripts, bash-like syntax must be used in the input files.
The required input files are:
 – site-info.def
 – users.conf
 – wn-list.conf
 – groups.conf
 – the vo.d directory, with one file per VO
YAIM configuration files must not be readable by ordinary users!
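
A fragment of site-info.def as a sketch; the variable names are the standard YAIM ones, but the values are invented placeholders:

    # site-info.def -- illustrative fragment (bash syntax)
    CE_HOST=ce.example.org              # placeholder CE hostname
    BDII_HOST=$CE_HOST                  # site BDII runs on the CE here
    SITE_NAME=EXAMPLE-SITE              # placeholder site name
    USERS_CONF=/opt/glite/yaim/etc/users.conf
    GROUPS_CONF=/opt/glite/yaim/etc/groups.conf
    WN_LIST=/opt/glite/yaim/etc/wn-list.conf
    VOS="seegrid"
    QUEUES="seegrid"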

11 gLite configuration
site-info.def
 the main configuration input source
 contains the proper paths to all other configuration files
users.conf
 defines the UNIX pool users for each Virtual Organization
 a helpful script is available at http://glite.phy.bg.ac.yu/GLITE-3/generate-pool-accounts-AEGIS-v4
 example: ./generate-pool-accounts-AEGIS-v4 seegrid 20000 seegrid 2000 200 10 10 >> users.conf
groups.conf
 defines groups per VO; the template can be used as is
wn-list.conf
 a simple list of the FQDNs of the available Worker Nodes
vo.d/
 a directory containing one file per supported VO
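
For orientation, users.conf lines follow the YAIM format UID:LOGIN:GID:GROUP:VO:FLAG (check the YAIM guide for the exact field list); the sample values below are invented:

    # UID:LOGIN:GID:GROUP:VO:FLAG -- illustrative lines only
    20001:seegrid001:2000:seegrid:seegrid:
    20002:seegrid002:2000:seegrid:seegrid:
    20101:sgmseegrid01:2010,2000:sgmseegrid,seegrid:seegrid:sgm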

12 gLite configuration
Following http://wiki.egee-see.org/index.php/SEE-GRID_MPI_Admin_Guide, /opt/globus/setup/globus/pbs.in should be replaced with http://cyclops.phy.bg.ac.yu/mpi/pbs.in before the YAIM invocation, in order to force the WNs to use local scratch instead of the shared /home for single-CPU jobs.
The YAIM invocation command for the lcg-CE/BDII_site combination with MPI support has to be:
 /opt/glite/yaim/bin/yaim -c -s /path/to/site-info.def -n MPI_CE -n lcg-CE -n TORQUE_server -n TORQUE_utils -n BDII_site
Note that MPI_CE has to come first on the line.
If YAIM returns an error anywhere in the procedure, check the data in site-info.def and the other input files and restart YAIM.
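
The pbs.in replacement can be done along these lines (back up the original first):

    # Back up the stock jobmanager template, then fetch the MPI-aware one
    cp /opt/globus/setup/globus/pbs.in /opt/globus/setup/globus/pbs.in.orig
    wget -O /opt/globus/setup/globus/pbs.in http://cyclops.phy.bg.ac.yu/mpi/pbs.in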

13 MAUI post-configuration steps
Verify that /var/spool/maui/maui.cfg contains the following line:
 ADMIN3 edginfo rgma edguser tomcat
Reserve a node for SAM test jobs in MAUI:
 QOSCFG[qossam] MAXPROC=1 PRIORITY=100000
 GROUPCFG[prdseegrid] QDEF=qossam PRIORITY=100000
 GROUPCFG[sgmseegrid] QDEF=qossam PRIORITY=100000
 SRCFG[samreservation] TASKCOUNT=1 RESOURCES=PROCS:1
 SRCFG[samreservation] PERIOD=INFINITY
 SRCFG[samreservation] GROUPLIST=prdseegrid,sgmseegrid
 SRCFG[samreservation] HOSTLIST=rti18.etf.bg.ac.yu
 SRCFG[samreservation] QOSLIST=qossam
If maui.cfg is modified, restart MAUI: /etc/init.d/maui restart
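
After the restart, MAUI's client tools can confirm that the standing reservation and QoS settings were picked up:

    # The 'samreservation' standing reservation should be listed
    showres
    # Inspect QoS and group settings as MAUI parsed them
    diagnose -q
    diagnose -g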

14 VO support
SEEGRID VO
 install the latest seegrid RPM available at http://www.irb.hr/users/vvidic/seegrid/
AEGIS VO
 put http://voms.phy.bg.ac.yu/voms.phy.bg.ac.yu.119 into /etc/grid-security/vomsdir
 ...
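
The AEGIS VO step amounts to fetching the VOMS server certificate into the vomsdir, e.g.:

    # Download the VOMS server certificate into the vomsdir
    wget -P /etc/grid-security/vomsdir http://voms.phy.bg.ac.yu/voms.phy.bg.ac.yu.119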

15 Testing the configured system
Verify the local batch system:
 qmgr -c "print server"
Test whether the site properly publishes information, using the GSTAT tool at http://egee017.cnaf.infn.it/gstat/seegrid/
GSTAT has a delay of ~15 minutes, but up-to-date info can be obtained using a simple LDAP client:
 ldapsearch -x -H ldap://<CE hostname>:2170 -b mds-vo-name=<SITE_NAME>,o=grid
Useful CE and SE info:
 lcg-infosites --vo seegrid ce
 lcg-infosites --vo seegrid se
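
A few additional quick checks of the batch layer (standard Torque client commands; the pool user and queue names reuse the earlier examples and are assumptions for your site):

    # Worker nodes known to the Torque server and their states
    pbsnodes -a
    # Configured queues and their limits
    qstat -q
    # Trivial test job submitted as a pool user to the assumed 'seegrid' queue
    su - seegrid001 -c 'echo hostname | qsub -q seegrid'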

16 Helpful links
http://wiki.egee-see.org/index.php/SG_GLITE-3_Guide
http://wiki.egee-see.org/index.php/SL4_WN_glite-3.1
http://wiki.egee-see.org/index.php/SEE-GRID_MPI_Admin_Guide
https://twiki.cern.ch/twiki/bin/view/EGEE/GLite31JPackage
https://twiki.cern.ch/twiki/bin/view/LCG/YaimGuide400

