1 Batch Scheduling at CERN (LSF), HEPiX Spring Meeting 2005, Tim Bell, IT/FIO Fabric Services

2 End User Clusters
[Cluster diagram: users reach the 44 interactive servers ("lxplus", e.g. lxplus001) through DNS-like load balancing and submit jobs via the LSF CLI or a Grid UI to the 950 batch servers ("lxbatch", e.g. lxbatch001, lxb0001); jobs access the 400 disk servers (e.g. disk001, lxfsrk123) and the 80 tape servers (e.g. tape001, tpsrv001) over rfio.]

3 Hardware/software of the clusters
- Typically we make two acquisitions per year.
- Interactive login:
  - 44 lxplus: dual 2.8 GHz, 2 GB memory, 80 GB disk, SLC3
- Batch worker nodes (/pool is the working space):
  - 85 lxbatch: dual 800 MHz, 512 MB memory, 8 GB /pool, Red Hat 7
  - 86 lxbatch: dual 1 GHz, 1 GB memory, 8 GB /pool, SLC3
  - 538 lxbatch: dual 2.4 GHz, 1 GB memory, 40 GB /pool, SLC3
  - 226 lxbatch: dual 2.8 GHz, 2 GB memory, 40 GB /pool, SLC3
- On order: 225 lxbatch, dual 2.8 GHz, 2 GB memory, 120 GB disk, SLC3

4 Current CERN LSF queues
View queue information: bqueues [options] [queue-name]

% bqueues
QUEUE_NAME   PRIO STATUS       MAX  JL/U JL/P JL/H NJOBS  PEND   RUN  SUSP
grid_dteam    20  Open:Active   -    -    -    -      0     0     0     0
grid_cms      20  Open:Active   -    -    -    -      0     0     0     0
grid_atlas    20  Open:Active   -    -    -    -    226    21   205     0
grid_lhcb     20  Open:Active   -    -    -    -      0     0     0     0
grid_alice    20  Open:Active   -    -    -    -      0     0     0     0
lcgtest       20  Open:Active   -    -    -    -      0     0     0     0
system_test   10  Open:Active   -    -    -    -      0     0     0     0
8nm            7  Open:Active   -   120   -    -   3487  3472    15     0
1nh            6  Open:Active   -   120   -    -   2218  2104   114     0
8nh            5  Open:Active   -   100   -    3   5330  5004   326     0
cmsprs         5  Open:Active  60    75   -    3     98    86    12     0
1nd            4  Open:Active   -    -    -    2   6894  6230   664     0
2nd            4  Open:Active   -    -    -    2   3212  2686   526     0
1nw            3  Open:Active   -    -    -    1   4623  4257   366     0
prod100        2  Open:Active   -   100   -    2    146   120    26     0
cmsdc04        2  Open:Active   -   600   -    2      0     0     0     0
prod200        1  Open:Active   -   250   -    2   1611  1561    50     0
prod400        1  Open:Active   -  1000   -    2   5773  5301   472     0
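
A hedged follow-up example (the user name is a placeholder): the long format shows a queue's limits, eligible users and scheduling parameters, and -u restricts the listing to the queues a given user may submit to.

% bqueues -l 8nh
% bqueues -u username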

5 Queues are based on CPU-time requirements
- 8nm: 8 normalised minutes
- 1nh: 1 normalised hour
- 8nh: 8 normalised hours
- 1nd: 1 normalised day
- 1nw: 1 normalised week
- The normalisation changes every few years as machines get faster. We recently converted to kilo-SpecInt2000 (KSI2K) units, so that one CPU hour on a 2.8 GHz farm PC is almost exactly one normalised hour (these nodes are rated at 1.037 KSI2K).
- Additional low-priority production queues for reserved users with high job concurrency allow 1nw.
- Grid queues use mapped team accounts and allow 1nw of CPU time; further refinements will be investigated.
- All queues can have resource or user-group restrictions.
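
A minimal submission sketch (the job script name is illustrative): pick the queue from the expected CPU need. On a 2.8 GHz node rated at 1.037 KSI2K, the 8nh limit of 8 normalised hours corresponds to roughly 8 / 1.037, i.e. about 7.7 wall-clock CPU hours.

% bsub -q 8nh ./run_analysis.sh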

6 [Chart slide: no text transcribed]

7 [Chart slide: no text transcribed]

8 Queue issues
- The queue definitions are essentially empirical, matched to users' turnaround expectations: many short jobs per day and a few long jobs overnight.
- Production queues are for low-priority work where turnaround is not an issue; the three queues allow increasing numbers of concurrent jobs at decreasing priority.
- There is a grid queue for each VO, but without any CPU-time granularity.
- There is a local queue, funded by an experiment, for fast turnaround of analysis jobs.
- Many experiments define subgroups of privileged users that have higher priority within the group and can run more concurrent jobs.
- We allow up to 3 shorter jobs to be scheduled per worker node, but stop at 2 if the CPU load exceeds 90% (see the configuration sketch after this list).
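
A hedged configuration sketch, not the actual CERN setup: in LSF the per-host job slot limit (MXJ) and scheduling load thresholds are set in the Host section of lsb.hosts, so something along the following lines would give each node 3 slots while blocking further dispatch once CPU utilisation (the ut index, used here as an assumed threshold column) passes 90%.

Begin Host
HOST_NAME     MXJ    ut      # ut = CPU utilisation scheduling threshold
default       3      0.9
End Host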

9 Resource sharing
- CERN experiments and major user groups apply annually for a part of the centrally funded shared resources, and can fund extra capacity that gives them a guaranteed share. Any unused shares are available to all.
- LSF schedules jobs first on a per-user-group basis, to deliver the defined share of CPU time to each group over a rolling 12-hour period (a sketch of such a definition follows this slide).
- We have currently defined (arbitrarily) 1800 shares, to match our 1800 KSI2K of capacity.
- There is an allocatable capacity reserve of 300 shares (which is in practice used) that we draw on to satisfy short-term requests.
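
A hedged sketch of how such group shares could be expressed, assuming the host-partition fairshare section of lsb.hosts; the partition and host-group names and the share values are taken from the bhpart output in slide 11, and the list is truncated.

Begin HostPartition
HPART_NAME  = SHARE
HOSTS       = g_share/
USER_SHARES = [u_CMS, 400] [u_prod, 289] [u_LHCB, 280] [u_COMPASS, 200] [others, 20]
End HostPartition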

10 Scheduling policies
- LSF supports hierarchies of user groups, and each level of the hierarchy can have its own scheduling policy:
- Fairshare is used at the highest group level and by most experiments among their users. Jobs are scheduled to deliver each group's share averaged over a certain period, currently 12+ hours. Queue priorities are ignored and more long jobs may be started (a hierarchical fairshare sketch follows this slide).
- FCFS (First Come First Served) is the default policy at CERN. Shorter queues are scheduled first; within a group and queue a single user can block others.
- Preemptive/preemptable scheduling is used at CERN only on an engineering cluster (for parallel jobs).
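
A hedged illustration of fairshare within an experiment, assuming the UserGroup section of lsb.users with its optional USER_SHARES column; the group and member names here are made up, and [default, 1] gives every other member one share.

Begin UserGroup
GROUP_NAME    GROUP_MEMBER                  USER_SHARES
u_example     (prod_account user1 user2)    ([prod_account, 10] [default, 1])
End UserGroup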

11 Current CERN shares per user group

[lxplus058.cern.ch] > 75 ~ > bhpart SHARE
HOST_PARTITION_NAME: SHARE
HOSTS: g_share/

SHARE_INFO_FOR: SHARE/
USER/GROUP   SHARES  PRIORITY  STARTED  RESERVED    CPU_TIME   RUN_TIME
u_prod          289    95.740        0         0        95.6          0
u_z2            100    33.333        0         0         0.0          0
zp_prod          90     1.978        5         0    109780.2      31637
u_vp             20     0.586        2         0     19698.6     109414
u_vo              1     0.333        0         0         0.0          0
harp_dydak        2     0.261        1         0       200.8       8353
u_slap           75     0.161       11         0    766725.5    1442519
u_za             16     0.138       10         0    191694.7     236536
u_DELPHI          2     0.129        1         0     33828.5      14929
u_wf              3     0.077        0         0    183887.4          0
u_LHCB          280     0.070      263         0   1268841.6   15357023
u_HARP            9     0.068       11         0    138163.6     353289
u_c3              1     0.050        2         0     20586.3      35210
zp_burst         40     0.048       77         0   1401348.9    1681989
u_xu              2     0.043       10         0     26846.6      42004
u_CMS           400     0.035      408         0  18033716.0   34445474
u_vl             24     0.024      132         0   2220209.0     908173
u_NA48            6     0.024       34         0    681000.8      85772
u_coll           75     0.023      426         0   6378126.0    4021268
others           20     0.016       23         0   1847487.4    4105005
u_yt             16     0.016      177         0   2208468.0     167261
u_l3c             5     0.016       65         0    568145.3      53442
u_vk              1     0.010       20         0    182311.5      29401
u_OPAL            2     0.009        9         0    124016.9     880953
u_COMPASS       200     0.008      502         0  42877972.0   82582955
u_zp             90     0.008      216         0   5662955.5   51679400
u_z6              2     0.003       10         0    967997.3    1925963

12 Resource Requirement String
- Most LSF commands accept a resource requirement string (an example submission follows this slide).
- It describes the resources required by a job.
- It is used to map the job onto execution hosts that satisfy the resource request.
- Queue names imply a CPU time (and real time) limit and are enough for most users.
- The real-time limit is hardwired per queue at a multiple (3 to 4) of the CPU-time limit.
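
A hedged example (the queue choice and job script name are illustrative): select SLC3 hosts with at least 500 MB of free memory, and reserve that memory for the job via the rusage section.

% bsub -q 8nh -R "select[type==SLC3 && mem>500] rusage[mem=500]" ./myjob.sh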

13 Users can request specific static resources

Index    Measures                                    Units
type     host type (SLC3 or LINUX7)                  string
model    host model (SEIL_2400 etc.)                 string
hname    hostname (lxb0001 etc.)                     string
cpuf     CPU factor (0.852 etc.)                     relative
server   host can run remote jobs                    boolean
rexpri   execution priority                          nice(2) argument
ncpus    number of processors                        integer
ndisks   number of local disks                       integer
maxmem   maximum RAM memory available to users       megabytes
maxswp   maximum available swap space                megabytes
maxtmp   maximum available space in /tmp directory   megabytes
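
These static attributes can be inspected for a given node with the long listing of lshosts (hostname taken from the cluster overview slide):

% lshosts -l lxb0001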

14 Or dynamic resources

Index   Measures           Units                Averaged over   Update interval
status  host status        string               -               15 sec
r15s    run queue length   processes            -               15 sec
r1m     run queue length   processes            1 min           15 sec
r15m    run queue length   processes            15 min          15 sec
ut      CPU utilization    %                    1 min           15 sec
pg      paging activity    pgin+pgout per sec   1 min           15 sec
ls      logins             users                -               30 sec
it      idle time          min                  -               30 sec

15 Other dynamic resources

Index   Measures                        Units    Averaged over   Update interval
swp     available swap space            MB       -               15 sec
mem     available memory                MB       -               15 sec
tmp     available space in /tmp         MB       -               120 sec
io      disk i/o to local disks         kB/sec   1 min           15 sec
lftm    seconds until next shutdown     sec      -               15 sec
pool    available space on /pool disk   MB       -               15 sec
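
The current values of these dynamic indices, including site-defined ones such as pool, can be displayed per host with the long listing of lsload (hostname taken from the cluster overview slide):

% lsload -l lxb0001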

16 Typical batch job requirements
- Beyond the queue name, the most commonly used resource requests are the following (an example submission follows this slide):
- "cpuf > factor", where current values range from 0.2 to 1.0. This avoids slower machines with long real execution times.
- "mem > MB", requesting free memory at job start. This avoids jobs running on hosts where they would be killed by our monitors or would swap excessively.
- "pool > MB", requesting current-working-directory space at job start. We kill jobs that use more than 75% of a node's work space.
- The current grid UI mapping for CPU factor and pool space is unclear.
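
A hedged example combining these requests (the queue choice and job script name are illustrative): ask for a reasonably fast node with at least 500 MB of free memory and 2 GB of free /pool space.

% bsub -q 1nd -R "cpuf>0.5 && mem>500 && pool>2000" ./myjob.sh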

17 Further information
- User-level information is on the web at http://cern.ch/batch, which has links to live status, accounting data and basic user guides.
- Live monitoring information, which starts at the cluster level but can descend to individual nodes, is at http://cern.ch/lemon-status (internal access only).

