1 Integration of Physics Computing on GRID
S. Hou, T.L. Hsieh, P.K. Teng
Academia Sinica
HPC Asia 2009, 04 March 2009

2 Introduction
Integration of computing via the GRID raises issues of:
- integrity of system privacy
- CA access on a common OS
- common services for larger user pools
High-Performance Computing for High-Energy Physics:
- late binding from the local cluster toward the GRID
PacCAF, the Pacific CDF Analysis Farm:
- a GRID distributed-computing model
The Common Analysis Farm for physics and e-Science:
- uses the GRID CA and the Parrot service

3 High-Performance Computing (HPC) for High-Energy Physics (HEP)
1. HEP users run serial, I/O-intensive computing jobs
2. Code is written in F77, C, and C++, and is flexible on OS and hardware
3. Explosive CPU and I/O usage: submit jobs to the GRID connecting the clusters
The CDF experiment users are surfing the wave of the GRID; they are our discussion case for integration.

4 HEP computing: the evolution
The user access and computing model evolves with hardware and network technology.
Hardware architecture:
1. In the old days of DEC and VM: terminal login, mainframe computing
2. Workstations (Sun, DEC, HP, SGI, ...): xterm login, local cluster computing
3. Linux Pentium clusters: xterm and web access
4. GRID and integration of clusters: xterm and web access via CA
Network access:
1. telnet, ftp (transparent): many users, site-manager control
2. ssh, scp (encrypted): few users, site-manager control
3. Kerberos authentication: hundreds of users, DB control
4. CA trusted: thousands of users, VO control
Computing for HEP users has evolved from mainframes and Unix workstations to the GRID.

5 CDF computing patterns
Computing for the Tevatron CDF experiment is an example of a wide variety of user patterns.
Users @ home institutes:
1. Desktops plus small clusters, with limited system support
2. Fortran/C/C++ coding, PAW/ROOT graphical analysis
3. Unix-based serial programming; interactive/batch jobs with limited system support for CPU and storage
4. Network to the center for data/DB
Experiment site:
1. Data acquisition at the petabyte scale, needing Computer Center support
2. Data production, mass storage, a customized computing model, DB management of the data
3. Distribution of data and DB, accessed by hundreds of users
Users are flexible and willing to learn. Users are never satisfied.

6 Distributed computing for CDF
1. Computing was originally on the on-site mainframe
2. It evolved into Intel-based Linux PC clusters: the Central Analysis Farm (CAF)
3. Distributed computing via Kerberos login: de-centralized CAFs in Asia, Europe, and the US
4. A late-binding solution for the GRID: a Condor-G based Glide CAF, our model for integration

7 The CAF portal for distributed computing
CAF, the CDF Analysis Farm, was launched in 2002 on dedicated Linux clusters running the FBSNG batch system.
It is a portal through which the user accesses everything:
Xterm login -> software + data handling + batch + monitoring
[Diagram: CAF headnode with Submitter, Monitor, Condor, HTTPd and CAFmon, connected to SAM data handling, the Enstore tape library, and the Pentium workers]

8 Use of Condor in CAF (local batch)
CAF moved to Condor in 2004, departing from traditional dedicated resources.
- Schedd: manages the user jobs
- Negotiator: assigns nodes to jobs
- Collector: gathers information from the other daemons
- Startd: manages jobs on a worker node (WN)
[Diagram: the CAF Submitter hands user jobs to the Schedd; the Negotiator and Collector match them to Startds on the worker nodes, where CafExe wraps the user job]
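To make the daemon roles concrete, here is a minimal sketch of a plain Condor submission as a user would run it outside the CAF wrapper; the file names and the vanilla universe are illustrative assumptions, not the CAF configuration itself.

$ cat > myjob.sub <<'EOF'
# minimal Condor submit description: one vanilla-universe job
universe   = vanilla
executable = run.sh        # the user's run script
output     = job.out
error      = job.err
log        = job.log
queue 1
EOF
$ condor_submit myjob.sub   # hands the job to the Schedd
$ condor_q                  # the Schedd lists it until a Startd runs it

The CAF client hides these steps behind its own submit command, but the daemons doing the work are exactly the ones listed above.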

9 What users do on CAF
The user desktop is installed with the CAF client commands.
1. The user prepares a working directory
2. Prepares a run script (the steps to be executed on a WN)
3. Archives the full directory with the "tar" command; this tarball is sent to the WN
$ CAF_submit tarball
The worker nodes execute the user run script:
1. I/O is done with a Kerberos ticket; output files may be copied elsewhere by "scp"
2. When a WN section finishes, a tar of the working scratch area is sent back to the submit node and an e-mail notification is sent to the user
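A typical submission session might look like the following sketch; the directory and script names are illustrative, and the exact CAF_submit options are site-specific and omitted here.

$ mkdir mywork && cd mywork
$ vi run.sh                            # the steps to be executed on the WN
$ chmod +x run.sh
$ cd .. && tar czf mywork.tar mywork   # archive the full working directory
$ CAF_submit mywork.tar                # CAF client command; options vary by site
# the job runs on a WN; the scratch area is tarred back and an e-mail is sent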

10 Interactive monitoring by the user
[Diagram: the CAF Monitor reaches the user jobs on the WN through the CafExe wrapper and a write pipe, via Condor CoD, with outgoing connections only]
The CAF client on the user desktop offers command-line tools that reach the WN via Condor CoD:
$ list the jobs
$ pstree of the processes in a section
$ list the WN working directory
$ tail files on the WN
$ debug a process
One cannot predict how users will mess up; interactive monitoring is everyone's demand!

11 Web monitoring of all jobs
A CAFmon account on the CAF headnode fetches the job status into an HTTP web display served by the CAF HTTPd.

12 Interactive web monitoring
The CAF HTTPd displays each running section and its processes, which is very powerful for debugging system and software problems.

13 Integration via GRID: connecting private clusters
1. Do not grow or merge the clusters
2. Create a dispatch center
3. Add gatekeepers at the affiliated GRID sites; with CA authentication, jobs are sent over the GRID
The late-binding solution: Condor-G on Globus (a sketch follows below).
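For reference, a Condor-G job is described much like a local Condor job but uses the grid universe and names a gatekeeper. The submit file below is only a sketch; the gatekeeper host is a placeholder, not one of the actual PacCAF endpoints.

$ cat > gridjob.sub <<'EOF'
# Condor-G submit description: route the job through a Globus GT2 gatekeeper
universe      = grid
grid_resource = gt2 gatekeeper.example.org/jobmanager-condor
executable    = run.sh
output        = job.out
log           = job.log
queue 1
EOF
$ grid-proxy-init            # obtain a CA-signed proxy certificate
$ condor_submit gridjob.sub  # Condor-G forwards the job over the GRID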

14 CAF integration via Condor Glide-in
We can no longer demand dedicated resources; computing moves to the integration and sharing of GRID sites.
The CAF hunts for CPU by gliding in to the GRID pools.
[Diagram: the Glidekeeper submits Glidein pilots through Globus into the batch queues of the GRID pools; the glideins join the Collector, and user jobs from the Submitter/Schedd run inside them, fetching their tarballs from the HTTPd]

15 The Pacific CAF for CDF: PacCAF
Peak performance: 1k jobs
Joint GRID sites:
- IPAS_OSG, IPAS_CS, TW-LCG2, NCHC
- JP-TsuKUBA
- KR-KISTI-HEP

16 Integration of computing facilities
Integration is feasible thanks to the following advantages:
1. A common OS: Linux on Intel-based PC clusters
2. GNU-based compilers: g77, gcc
3. The GRID connection, which provides
- hardware integrity
- common service software and development
- security via CA access
- user groups via VO management

17 Migrating to GRID
Physics computing at Academia Sinica has been upgraded to Condor and GRID services and merged into two large clusters for serial/parallel tasks.
[Diagram: OSG and LCG gatekeepers in front of the IPAS Condor and HEP Condor schedulers, with IPAS-login and HEP-login user interfaces; a 400-CPU-node cluster with 100 TBytes of storage, a 600-CPU-node blade system, and Gbit connectivity to abroad]

18 Common Analysis Facility
CAF becomes the Common Analysis Facility: integration at Academia Sinica.
Common platform using 1. the Linux OS and 2. the GCC language tools
- Nuclear/HEP demands custom software and data handling
- Complex/Bio-physics demands parallel computing
[Diagram: Glide-CAF headnode (Submitter, Monitor, HTTPd, Glidekeeper), dedicated file servers (Cdf-disk, Bio-disk), dedicated user interfaces (cdf-soft, bio-soft), and separate Serial and Parallel Condor pools]

19 How to build an integrated CAF
1. CAF center: build the headnode
2. Joint cluster: build a GRID gatekeeper (optionally an HTTP proxy and GCB)
3. User interface: build the Globus client, install /usr/local/bin/caf_submit, distribute the software (see the sketch below)
[Diagram: Glide-CAF headnode (Submitter, Monitor, HTTPd, Glidekeeper), the GRID site with its Globus gatekeeper, GCB broker and HTTP proxy, and the user interface with My-soft]
The load is on the system managers for switching to GRID tools.
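From the user-interface host, the plumbing can be exercised with standard Globus client tools before wiring up caf_submit. This is only a sketch; the gatekeeper hostname is a placeholder, and the jobmanager name depends on the site's batch system.

$ grid-proxy-init                      # CA-authenticated proxy from the user certificate
$ globus-job-run gatekeeper.example.org/jobmanager-fork /bin/hostname
# a hostname coming back confirms the gatekeeper accepts the CA credential
$ which caf_submit                     # the CAF client installed as /usr/local/bin/caf_submit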

20 Users surfing the integrated CAF
Users:
- replace your accustomed batch submit command with $ caf_submit
- enjoy (or be scared by) the surges of running jobs
- watch the web monitoring: kill broken jobs, clear network clogging
Buckle up for sudden surges of CPU and network load.

21 Prototype CAF service
CAF for physics computing (two Condor batches):
1. Serial computing for HEP, Nuclear, and experiment users; the demand is intensive network and data I/O
- 2 gatekeepers (for OSG and LCG), OSG to NCHC
- 1U workers with 2/4/8/16 cores, 400 CPUs in total
- 3 user-interface flavors
- 200 TB of storage on 11 servers
2. Parallel computing for nano-science and complex systems; the demand is a local, fast backplane network to the CPU nodes
- 7 blade crates, 640 CPUs
- 3 user-interface flavors
CAF for integration is a mature technology. We are seeking expansion through collaboration with the NCHC GRID, AS CITI, and CC, nation-wide in Taiwan and abroad.

22 Web-based job submission
To make it even easier for users, an NCHC web submission interface is being developed:
1. Use web access as the user interface
2. Provide templates of GRID jobs
Upload/download and click your way onto the GRID.

23 Scaling issues
All systems have a limit.
1. Network:
- a single spinning hard disk sustains ~40 MB/sec
- a gigabit port (~120 MB/sec) supports about 2 such streaming jobs
- the effective file-system load is a few tens of jobs
2. System:
- Condor tolerates a few thousand sections
- HTTP tolerates a few thousand queries
Polite user pattern:
- submit slowly, a few hundred jobs at a time
- watch the return network and file-server load
Prevent the CPUs from waiting for file transfers.

24 Summary
1. To meet the demand for HPC, we built a distributed CAF for data processing of the CDF experiment
2. Late binding on the GRID integrates the Pacific CAFs into one service
3. A prototype Common Analysis Farm has been constructed for physics computing, serving both I/O-intensive serial jobs and CPU-intensive parallel computing

