Artem Trunov and EKP team, EKP – Uni Karlsruhe

Presentation transcript:

T2 for users
Artem Trunov and EKP team, EKP – Uni Karlsruhe

CMS T2 (from Computing TDR)

User-visible services required at each Tier-2 centre include:
- Medium- or long-term storage of required data samples. For analysis work, these will be mostly AOD, with some fraction of RECO. RAW data may be required for calibration and detector studies.
- Transfer, buffering and short-term caching of relevant samples from Tier-1's, and transfer of produced data to Tier-1's for storage.
- Provision and management of temporary local working space for the results of analysis.
- Support for remote batch job submission.
- Support for interactive bug finding, e.g. fault finding for crashing jobs.
- Optimised access to CMS central database servers, possibly via replicas or proxies, for obtaining conditions and calibration data.
- Mechanisms for prioritisation of resource access between competing remote and local users, in accordance with both CMS and local policies.

To support the above user-level services, Tier-2s must provide the following system-level services:
- Accessibility via the workload management services described in Section 4.8 and access to the data management services described in Section 4.4.
- Quotas, queuing and prioritisation mechanisms for CPU, storage and data transfer resources, for groups and individual users.
- Provision of the required software installation to replicate the CMS ‘offline environment’ for running jobs.
- Provision of software, servers and local databases required for the operation of the CMS workload and data management services.

Additional services may include:
- Job and task tracking including provenance bookkeeping for groups and individual users.
- Group and personal CVS and file catalogues.
- Support for local batch job submission.

German T2s

Not all of this comes as a package; the individual points need to be agreed on. To facilitate users' work, and in accordance with the CMS C-TDR, we propose to:
- Provide all D-CMS users a means to log in to a T2 site (a DN-to-account lookup sketch follows this list)
- Provide users an opportunity to debug their jobs, eventually including following jobs on the WNs
- Provide access to (local or global) home and group space for log files, code, builds etc.
- Provide direct access to (local or global) group storage for custom skims, ntuples etc.
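As a hedged illustration of the DN-to-local-account mapping that such grid-based logins rely on (described in the backup slides), the following Python sketch looks up the account mapped to a certificate DN in a grid-mapfile. The file location and the example DN are assumptions for illustration only; a real site may use a different mapping service.

```python
# Minimal sketch: look up the local account mapped to a certificate DN
# in a grid-mapfile. Path and example DN are illustrative assumptions.

import shlex

GRIDMAP = "/etc/grid-security/grid-mapfile"  # assumed location

def local_account_for(dn, gridmap_path=GRIDMAP):
    """Return the local account mapped to 'dn', or None if not mapped."""
    with open(gridmap_path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # Each entry looks like: "DN with spaces" account
            parts = shlex.split(line)
            if len(parts) >= 2 and parts[0] == dn:
                return parts[-1]
    return None

if __name__ == "__main__":
    example_dn = "/O=GermanGrid/OU=EKP/CN=Some User"  # hypothetical DN
    print(local_account_for(example_dn))
```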

Backup slides. They provide possible implementation details.

Logins for users

Gsissh for logins (simplifying user management)
- The ideal model is to provide gsissh access to a general interactive login cluster. Interactive machines would be used for building, debugging, the grid UI, etc. A user's DN is mapped to a unique local account (preferably not a generic one like cms001), and jobs coming via the LCG/gLite CE are mapped to the same account.
- The minimal model is access to the VO BOX, where gsissh is already provided for CMS admins.

Simplifying user management
- A local passwordless account is created for every CMS user who receives a German Grid certificate (certain filtering could be applied on the DN, if desired). At the same time, the grid-map file on the VO BOX or interactive cluster is updated to permit gsissh logins.
- When a user's certificate expires or is revoked, the account (or gsissh access) is automatically disabled and later automatically removed (see the sketch after this slide).

User's home and workgroup dirs
- Thomas Kress had an interesting idea: global user home dirs and group dirs on AFS, hosted at one centre, for example at DESY. This simplifies local user management for admins, since local accounts have no home directory. Users would need to klog to the AFS cell with their AFS password. AFS also provides fine-grained access control and caching.

Options for debugging
- A special grid/local queue with one or a few nodes where users can log in and debug their jobs.
- Alternatively, give users access to all worker nodes to debug their jobs.
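A hedged sketch of the automatic enable/disable cycle described above, assuming the list of registered users (DN, local account, certificate expiry) is available from the registration database. The names, paths and dates below are purely illustrative; marking the account as expired via usermod is only one possible way to disable it.

```python
# Sketch: keep the grid-mapfile in sync with registered users and
# disable local accounts once the certificate has expired or been revoked.

import subprocess
from datetime import datetime, timezone

GRIDMAP = "/etc/grid-security/grid-mapfile"   # assumed location

registered_users = [
    # (certificate DN, local account, certificate expiry) - hypothetical
    ("/O=GermanGrid/OU=EKP/CN=Some User", "ekpuser1",
     datetime(2026, 1, 1, tzinfo=timezone.utc)),
]

def sync_accounts(users, gridmap_path=GRIDMAP, now=None):
    """Rewrite the grid-mapfile with valid users; disable expired accounts."""
    now = now or datetime.now(timezone.utc)
    valid_entries = []
    for dn, account, expiry in users:
        if expiry > now:
            valid_entries.append(f'"{dn}" {account}')
        else:
            # Certificate expired or revoked: mark the account as expired
            # (one possible disabling mechanism); actual removal would
            # happen later in a separate clean-up step.
            subprocess.run(["usermod", "--expiredate", "1970-01-02", account],
                           check=True)
    # Dropping the entry from the grid-mapfile withdraws gsissh access
    # and the batch job mapping at the same time.
    with open(gridmap_path, "w") as f:
        f.write("\n".join(valid_entries) + "\n")

if __name__ == "__main__":
    sync_accounts(registered_users)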

Storage for users

- User-produced data (custom skims, ntuples) should go to storage space on an SE where it is available for user management, job access and transfers. Local POSIX access via /pnfs is highly desirable.

Quotas and disk space management
- User quotas are not enforced, only group quotas. Group storage has the group-write (g+w) and group-inheritance bits set such that every group dir is writable by any member (see the sketch after this slide).
- There is a group manager who is responsible for maintaining the disk space: talking to users who take too much space, removing old data, negotiating new space quotas with admins, etc.

Archiving to tape
- By default, user data is not archived to tape, i.e. it is not placed in tape pools (where tape is available). When necessary, the group manager can physically copy the data to the tape pool for archiving; the path is most likely changed in the process.
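A hedged sketch of the group-space conventions above: it creates a group directory writable by all group members (reading the slide's permission bits as group-write plus the set-group-ID bit) and gives the group manager a per-user usage summary. The path and the warning threshold are illustrative assumptions, not site defaults.

```python
# Sketch: set up a group-writable directory and report usage per owner
# so the group manager can chase users who take too much space.

import os
import pwd
from collections import defaultdict

GROUP_DIR = "/storage/cms/group/higgs"   # hypothetical group area
SOFT_LIMIT_GB = 500                      # hypothetical warning level

def create_group_dir(path):
    os.makedirs(path, exist_ok=True)
    # rwxrwxr-x plus set-group-ID: new files inherit the group and
    # every member of the group can write into the directory.
    os.chmod(path, 0o2775)

def usage_by_owner(path):
    """Walk the group area and sum file sizes per owning user."""
    usage = defaultdict(int)
    for root, _dirs, files in os.walk(path):
        for name in files:
            st = os.lstat(os.path.join(root, name))
            try:
                owner = pwd.getpwuid(st.st_uid).pw_name
            except KeyError:
                owner = str(st.st_uid)
            usage[owner] += st.st_size
    return usage

if __name__ == "__main__":
    create_group_dir(GROUP_DIR)
    for owner, nbytes in sorted(usage_by_owner(GROUP_DIR).items()):
        gb = nbytes / 1e9
        note = "  <-- please clean up" if gb > SOFT_LIMIT_GB else ""
        print(f"{owner:15s} {gb:8.1f} GB{note}")
```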