
1 T2 for users
Artem Trunov and the EKP team, EKP – Uni Karlsruhe

2 CMS T2 (from Computing TDR)
User-visible services required at each Tier-2 centre include:
- Medium- or long-term storage of required data samples. For analysis work these will be mostly AOD, with some fraction of RECO; RAW data may be required for calibration and detector studies.
- Transfer, buffering and short-term caching of relevant samples from Tier-1s, and transfer of produced data to Tier-1s for storage.
- Provision and management of temporary local working space for the results of analysis.
- Support for remote batch job submission.
- Support for interactive bug finding, e.g. fault finding for crashing jobs.
- Optimised access to CMS central database servers, possibly via replicas or proxies, for obtaining conditions and calibration data.
- Mechanisms for prioritisation of resource access between competing remote and local users, in accordance with both CMS and local policies.

To support the above user-level services, Tier-2s must provide the following system-level services:
- Accessibility via the workload management services described in Section 4.8, and access to the data management services described in Section 4.4.
- Quotas, queuing and prioritisation mechanisms for CPU, storage and data transfer resources, for groups and individual users.
- Provision of the required software installation to replicate the CMS 'offline environment' for running jobs.
- Provision of software, servers and local databases required for the operation of the CMS workload and data management services.

Additional services may include:
- Job and task tracking, including provenance bookkeeping for groups and individual users.
- Group and personal CVS and file catalogues.
- Support for local batch job submission.

3 German T2s
Not all of this comes as a package; the individual points need to be agreed on. To facilitate users' work, and in accordance with the CMS C-TDR, we propose to:
- Provide all D-CMS users a means to log in to a T2 site
- Provide users an opportunity to debug their jobs, eventually including following jobs on WNs
- Provide access to (local or global) home and group space for log files, code, builds, etc.
- Provide direct access to (local or global) group storage for custom skims, ntuples, etc.

4 Backup slides
The following slides give possible implementation details.

5 Logins for users
Gsissh for logins
- The ideal model is gsissh access to a general interactive login cluster. Interactive machines would be used for building, debugging, running a grid UI, etc. A user's DN is mapped to a unique local account (preferably not a generic one like cms001), and jobs arriving via the LCG/gLite CE are mapped to the same account.
- The minimal model is access to the VO BOX, where gsissh is already provided for CMS admins.

Simplifying user management
- A local passwordless account is created for every CMS user who receives a German Grid certificate (filtering on the DN could be applied, if desired). At the same time, the grid map file on the VO BOX or interactive cluster is updated to permit gsissh logins.
- When a user's certificate expires or is revoked, the account (or gsissh access) is automatically disabled and later automatically removed.

Users' home and workgroup dirs
- Thomas Kress had an interesting idea: global user home dirs and group dirs on AFS, hosted at one centre, for example at DESY. This simplifies local user management for admins, since local accounts need no home directory. Users would klog to the AFS cell with their AFS password.
- AFS also provides fine-grained access control and caching.

Options for debugging
- A special grid/local queue with one or a few nodes where users can log in and debug jobs.
- Alternatively, users could be given access to all worker nodes to debug their jobs.
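The automated account lifecycle sketched above (create an account and a grid-map entry when a certificate is issued, disable it when the certificate expires or is revoked) could look roughly like this. This is a minimal illustration, not the site's actual tooling: the DN layout, the German Grid filter prefix, and the account-naming scheme are all assumptions, and a real deployment would drive useradd/usermod and rewrite the grid-mapfile on disk rather than returning an in-memory map.

```python
# Sketch: keep gsissh grid-map entries in sync with the set of currently
# valid user certificates. All names and DN formats here are hypothetical.

def account_name(dn):
    """Derive a unique, non-generic local account from a certificate DN,
    e.g. "/O=GermanGrid/OU=EKP/CN=Jane Doe" -> "janedoe"."""
    cn = dn.rsplit("/CN=", 1)[-1]
    return cn.lower().replace(" ", "")[:16]

def sync_grid_mapfile(valid_dns, current_map, required_prefix="/O=GermanGrid"):
    """Return the new DN -> account mapping: create entries for newly
    certified users, keep existing ones, and drop entries whose
    certificates have expired or been revoked (absent from valid_dns)."""
    new_map = {}
    for dn in valid_dns:
        if not dn.startswith(required_prefix):  # optional DN filtering
            continue
        new_map[dn] = current_map.get(dn, account_name(dn))
    return new_map
```

Because revoked or expired DNs simply disappear from the returned map, regenerating the grid-mapfile from it automatically disables those logins, matching the lifecycle described above.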

6 Storage for users
User-produced data (custom skims, ntuples) should go to storage space on an SE where it is available for user management, job access and transfers. Local POSIX access via /pnfs is highly desirable.

Quotas and disk space management
- User quotas are not enforced, only group quotas.
- Group storage directories are group-writable, with the setgid bit set, so every group dir is writable by any member.
- A group manager is responsible for maintaining the disk space: talking to users who take too much space, removing old data, negotiating new space quotas with admins, etc.

Archiving to tape
- By default, user data is not archived to tape, i.e. it is not placed in tape pools (where tape is available). When necessary, the group manager can physically copy the data to the tape pool for archiving; the path will most likely change.
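The group-writable storage layout described above could be set up as in the following sketch. The helper and path are hypothetical; it assumes a POSIX-mounted filesystem (e.g. a /pnfs-style mount) and uses the setgid bit so files created inside the directory inherit its group.

```python
import os
import stat

def make_group_dir(path):
    """Create a group-managed storage directory: rwx for owner and group,
    setgid so new files inherit the directory's group, no world access."""
    os.makedirs(path, exist_ok=True)
    os.chmod(path, stat.S_IRWXU | stat.S_IRWXG | stat.S_ISGID)
    return os.stat(path).st_mode
```

With this layout, enforcement stays at the group level: the group manager watches the directory's total size against the group quota and cleans up or archives old data as needed, as described above.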

