ATLAS BEIJING T2/T3 Status WuWenjing 2013.3 IHEP/Beijing.


1 ATLAS BEIJING T2/T3 Status WuWenjing 2013.3 IHEP/Beijing

2 Tier2 Resources/Usage BEIJING_LCG2 ATLAS CPU: 496 cores Disk (DPM): 320 TB (66% used)  ATLASDATA 269 TB/273 TB  ATLASGROUP 0 TB/0 TB  ATLASPROD 0 TB/7 TB  ATLASSCRATCH 6 TB/17 TB  ATLASLOCALGROUP 6 TB/8 TB. ATLASLOCALGROUP can be used to download Tier3 data.

3 Tier2 Site Jobs from 2012.7–2013.3 Production: 142K (95%) Analysis: 217K (87%)

4 Tier2 Remote Data Transfer from 2012.7–2013.3 Downloading: 25–120 TB/month Uploading: 40–180 TB/month

5 Tier2 Data Exchange (Local & Remote) from 2012.7–2013.3 Remote read & write: 50–300 TB/month Local read & write: 75–320 TB/month

6 Tier2 Local Data Processing from 2012.7–2013.3 Read (local & remote): 100–500 TB/month Write (local & remote): 20–140 TB/month

7 Tier3 Resources/Usage ATLAS Tier3 CPU (PBS): Disk:
 Public (Lustre): /publicfs/atlas 32 TB (76% used); command to check (from lxslc): lfs df -h -p publicfs.atlaspool /publicfs
 Temporary space: /besfs2/atlas 30 TB
 User home (PANFS): /home/atlas 1.1 TB (no per-user quota)
Software distribution: /afs/ (write permission for the atlas group)

8 IHEP Tier3: Login Nodes ATLAS-specific (6 nodes):
 atlasui0[1-2] (x86_64 RedHat 5.5, scratch disk 225 GB)
 atlasui0[3-6] (x86_64 SLC 5.8, scratch disk /scratch 776 GB)
IHEP general: (10 nodes)
atlasui0[1-2] are on the public network and directly accessible from outside IHEP; atlasui0[3-6] are on the private network and only directly accessible from the IHEP campus network. One can log in to these machines from atlasui0[1-2] or the IHEP general login machines.

9 IHEP Login Nodes (cont.) atlasui01 and atlasui06 are reserved for downloading data and ntuples from the grid; for debugging code or running analysis, please use atlasui02-05. You may of course also use atlasui01 or 06, but ongoing downloads may slow them down.

10 CERN Login: Kerberos Authentication For users who have CERN accounts, Kerberos authentication avoids interactive password prompts; it is enabled on all atlas login nodes. How to use Kerberos:
Initiate the ticket:
[wuwj@atlasui01 tmp]$ kinit afs_user@CERN.CH
Password for afs_user@CERN.CH:
Verify your ticket:
[wuwj@atlasui01 tmp]$ klist
Ticket cache: FILE:/tmp/krb5cc_60008
Default principal: afs_user@CERN.CH
Valid starting Expires Service principal
05/23/12 12:01:51 05/24/12 13:01:51 krbtgt/CERN.CH@CERN.CH
renew until 05/28/12 12:01:51
Kerberos 4 ticket cache: /tmp/tkt60008
klist: You have no tickets cached
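With a valid ticket, SSH connections to CERN hosts can then use GSSAPI instead of a password. A sketch of the session (lxplus.cern.ch as the target host is an assumption; it requires a client configured for the CERN.CH realm and a CERN account):

```shell
kinit afs_user@CERN.CH            # obtain a Kerberos ticket (asks for the password once)
klist                             # confirm the ticket is cached
ssh -K afs_user@lxplus.cern.ch    # -K enables GSSAPI authentication and credential forwarding
```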

11 CERN Login (Cont.) Use case: passwordless svn checkout:
svn co svn+ssh:// sicsAnalysis/TopPhys/TopRootCoreRelease/tags/TopRootCoreRelease-00-01-09 TopRootCoreRelease

12 IHEP Cluster PBS (304 cores, 2 shared queues):
atlaslque: long queue; one user can run up to 100 jobs, with no running-time limit per job.
atlassque: short queue; one user can run up to 50 jobs, with a maximum running time of 3 hours per job.
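The queue choice above can be sketched as a small helper; the pick_queue function itself is hypothetical, but the queue names and limits are from this slide:

```shell
# Choose the IHEP ATLAS PBS queue from an expected walltime in hours.
# Queue names and limits are from the slide; this helper is illustrative only.
pick_queue() {
    hours=$1
    if [ "$hours" -le 3 ]; then
        # atlassque: short queue, 50 jobs per user, 3 h cap per job
        echo atlassque
    else
        # atlaslque: long queue, 100 jobs per user, no running-time limit
        echo atlaslque
    fi
}

pick_queue 2    # -> atlassque
pick_queue 10   # -> atlaslque
```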

13 IHEP Storage Public storage space: /publicfs/atlas Capacity: 32 TB Usage check (run this command from the login nodes):
 lfs df -h -p publicfs.atlaspool /publicfs
/publicfs/atlas/codesbackup/user_name is backed up weekly; users may keep self-written source code and similar files there long-term.
/home/atlas (decommissioned)

14 IHEP Storage (Cont.) Shared file system (Lustre):
/workfs: for important personal files. Each user has 5 GB and at most 50,000 files; the computing centre provides backups. Readable and writable from the login nodes; compute nodes can only read, not write, this directory.
/scratchfs: for temporary files. Each user has 500 GB; files are kept for 2 weeks. Readable and writable at any time from all login and compute nodes.
Shared file system (AFS):
/afs/: for personal files. Each user has 500 MB; the computing centre provides backups.

15 IHEP Cluster & Storage How to redirect job outputs to scratchfs:
qsub -o /scratchfs/atlas/username/logfilepath -e /scratchfs/atlas/username/errorlogfilepath -q MyQueue jobscript
where MyQueue is atlaslque or atlassque.
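As a sketch, the full command line above can be assembled like this (the username alice, the log file names, and jobscript.sh are hypothetical placeholders; the command is only echoed, not submitted):

```shell
# Build (but do not submit) a qsub command that redirects stdout/stderr
# to /scratchfs. "alice", the log names, and jobscript.sh are placeholders.
user=alice
queue=atlassque            # or atlaslque for long jobs
logdir=/scratchfs/atlas/$user
echo qsub -o "$logdir/job.out" -e "$logdir/job.err" -q "$queue" jobscript.sh
# prints: qsub -o /scratchfs/atlas/alice/job.out -e /scratchfs/atlas/alice/job.err -q atlassque jobscript.sh
```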

16 Software Release NFS: Athena (old) /opt/exp_soft/atlas CVMFS: Athena (new) /cvmfs/ AFS (user-required software) /afs/

17 Resources ATLAS Computing twiki: DaTRI: BEIJING_LCG2 ATLAS Site Monitoring:

18 The End
