
1 YuChul Yang (ycyang@mail.knu.ac.kr), October 21, 2005, The Korean Physical Society
The Current Status of CDF Grid
양유철*, 한대희, 공대정, 김지은, 서준석, 장성현, 조기현, 오영도, MIAN Shabeer, AHMAD KHAN Adil, 김동희 (Kyungpook National University); 김수봉, 김현수, 문창성, 이영장, 전은주, 정지은, 주경광 (Seoul National University); 유인태, 이재승, 조일성, 최동혁 (Sungkyunkwan University)

2 Introduction to CDF Computing
 Developed in 2001-2002 to respond to the experiment's greatly increased need for computational and data-handling resources for Run II
 One of the first large-scale cluster approaches to user computing for general analysis
 Greatly increased the CPU power and data available to physicists
 CDF Grid via CAF, DCAF, SAM and JIM
☞ DCAF (Decentralized Analysis Farm)
☞ SAM (Sequential Access via Metadata) – data handling system
☞ JIM (Job Information Management) – resource broker

3 Outline
CAF (Central Analysis Farm): a large central computing resource at Fermilab, based on Linux cluster farms with a simple job-management scheme.
DCAF (Decentralized CDF Analysis Farm): we extended the CAF model, including its command-line interface and GUI, to manage and work with remote resources.
Grid: we are now adapting and converting our workflow to the Grid.

4 Environment on CAF
 All basic CDF software pre-installed on CAF
 Authentication via Kerberos
☞ Jobs run under mapped accounts, with the actual user authenticated through a special principal
☞ Remote users' IDs for database and data-handling access are passed on through lookup of the actual user via the special principal
 The user's analysis environment comes over in a tarball - no need to pre-register or to submit only certain kinds of jobs
 The job returns results to the user via secure ftp/rcp, controlled by the user's script and principal
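The tarball mechanism above can be sketched in a few lines of Python. This is an illustrative sketch only: the directory layout and file names are hypothetical, not CAF's actual submission format.

```python
import os
import tarfile
import tempfile

def pack_analysis_env(src_dir, out_path):
    """Bundle a user's analysis directory into a gzipped tarball,
    the way a CAF-style submission ships the user environment to
    the farm (hypothetical helper, not the real CAF tool)."""
    with tarfile.open(out_path, "w:gz") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return out_path

# Build a throwaway analysis directory and pack it.
work = tempfile.mkdtemp()
ana = os.path.join(work, "my_analysis")
os.makedirs(ana)
with open(os.path.join(ana, "run.sh"), "w") as f:
    f.write("#!/bin/sh\necho analysis job\n")

tarball = pack_analysis_env(ana, os.path.join(work, "job.tgz"))
print(tarfile.is_tarfile(tarball))  # True
```

On the farm side, the worker would unpack this tarball into the job's scratch area before invoking the user's script.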

5 In 2005, 50% of analysis farm capacity is outside of FNAL: distributed clusters in Korea, Taiwan, Japan, Italy, Germany, Spain, the UK, the USA and Canada

6 Current DCAF Approach
 Cluster technology (CAF = Central Analysis Farm) extended to remote sites (DCAF = Decentralized CDF Analysis Farm)
 Multiple batch systems supported: converting from FBSNG to Condor on all DCAFs
 SAM data handling system required for offsite DCAFs
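For readers unfamiliar with Condor, a minimal submit description file illustrates the batch system the DCAFs were converging on. The executable and file names here are placeholders, not CDF's actual job wrapper:

```
# Hypothetical vanilla-universe job, as a user might queue it
# on a Condor-managed DCAF worker pool.
universe   = vanilla
executable = run_analysis.sh
arguments  = dataset_list.txt
output     = job.out
error      = job.err
log        = job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue 1
```

Submitting with `condor_submit` queues one copy of the job; Condor matches it to a worker, transfers the input files, and returns the output and log files on exit.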

7 Current CDF Dedicated Resources
http://www-cdf.fnal.gov/internal/fastnavigator/fastnavigator.html (2005/Oct/17)

8 Detail of KorCAF Resources

TYPE            | HOST                        | CPU                | RAM | HDD   | NO
head node       | cluster46.knu.ac.kr         | AMD MP2000 x 2     | 2G  | 80G   | 1
sam station     | cluster67.knu.ac.kr         | Pentium 4 2.4G     | 1G  | 80G   | 1
submission node | cluster52.knu.ac.kr         | Pentium 4 2.4G     | 1G  | 80G   | 1
worker nodes    | cluster39~cluster73 (21)    | AMD MP2000 x 2     | 2G  | 80G   | 4
                | cluster102~cluster114 (13)  | AMD MP2200 x 2     | 1G  | 80G   | 2
                | cluster122~cluster130 (9)   | AMD MP2800 x 2     | 2G  | 80G   | 11
                |                             | AMD MP2800 x 2     | 2G  | 250G  | 2
                |                             | Pentium 4 2.4G     | 1G  | 80G   | 15
                |                             | Xeon 3.0G x 2      | 2G  | 80G   | 9
Total           |                             | 75 CPUs (173.9GHz) | 73G | 4020G | 46

9 Storage Upgrade Status

Machine             | RAM | HDD   | NO
Current             |     | 0.6TB |
Opteron dual (2005) | 2G  | 4TB   | 1
Xeon dual (2005)    | 1G  | 1TB   | 1
Total               |     | 5.6TB | 2

Now converting to the Condor batch system
 cdfsoft
 Installed products: 4.11.1, 4.11.2, 4.8.4, 4.9.1, 4.9.1hpt3, 5.2.0, 5.3.0, 5.3.1, 5.3.3, 5.3.3_nt, 5.3.4, development
 Installed binary products: 4.11.2, 5.3.1, 5.3.3, 5.3.3_nt, 5.3.4

10 CAF GUI & Monitoring System
[Screenshots: job submission GUI with farm selection, process type, user script, I/O file location, and data access; submit status monitor]
http://cluster46.knu.ac.kr/condorcaf

11 Functionality for the User

Feature                          | Status
Self-contained user interface    | Yes
Runs arbitrary user code         | Yes
Automatic identity management    | Yes
Network delivery of results      | Yes
Input and output data handling   | Yes
Batch system priority management | Yes
Automatic choice of farm         | Not yet (Grid)
Negotiation of resources         | Not yet (Grid)
Runs on arbitrary grid resources | Not yet (Grid)

12 Luminosity and Data Volume
Expectations are for continued high-volume growth as luminosity and the data-logging rate continue to improve:
 Luminosity on target to reach the goal of 2.5x the present rate
 Data-logging rate will increase to 25-40 MB/s in 2005
 Rate will further increase to 60 MB/s in FY 2006
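The logging rates above translate directly into yearly data volume. A back-of-the-envelope sketch, assuming the detector logs data for roughly one third of the wall-clock year (the duty factor is our assumption, not a figure from the slide):

```python
def yearly_volume_pb(rate_mb_per_s, duty_factor=1 / 3):
    """Data volume in petabytes for one year of logging at the
    given rate (MB/s), scaled by an assumed live-time fraction."""
    seconds_per_year = 365 * 24 * 3600
    mb = rate_mb_per_s * seconds_per_year * duty_factor
    return mb / 1e9  # MB -> PB (10^9 MB per PB)

# 60 MB/s, the FY2006 target rate, at a 1/3 duty factor:
print(round(yearly_volume_pb(60), 2))  # 0.63 (PB/year)
```

Even under this rough assumption, the FY2006 rate adds well over half a petabyte of new data per year, which is why the tape-volume requirements in the next slide keep growing.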

13 Total Computing Requirements

          |  Input Conditions            |  Resulting Requirements
Fiscal    | Int L | Evts   | Peak rate   | Ana | Reco | Disk | Tape I/O | Tape Vol
Year      | fb-1  | x 10^9 | MB/s | Hz   | THz | THz  | PB   | GB/s     | PB
Actual
2003      | 0.3   | 0.6    | 20   | 80   | 1.5 | 0.5  | 0.2  |          | 0.4
2004      | 0.7   | 1.1    | 20   | 80   | 4.0 | 0.7  | 0.3  | 0.5      | 1.0
Estimated
2005      | 1.2   | 2.4    | 35   | 220  | 7.2 | 1.0  | 0.7  | 0.9      | 2.0
2006      | 2.7   | 4.7    | 60   | 360  | 16  | 1.4  | 1.2  | 1.9      | 3.3
2007      | 4.4   | 7.1    | 60   | 360  | 26  | 2.8  | 1.8  | 3.0      | 4.9

Analysis CPU, disk, and tape needs scale with the number of events. The FNAL portion of analysis CPU is assumed to be roughly 50% beyond 2005.

14 Movement to Grid
 It is the worldwide trend for HEP experiments
 Need to take advantage of global innovations and resources
 CDF still has a lot of data to be analyzed
 Cannot continue to expand dedicated resources, so: use the Grid

15 Activities for CDF Grid
 Testing various approaches to using Grid resources (Grid3/OSG and LCG)
 Adapt the CAF infrastructure to run on top of the Grid using Condor glide-ins
 Use direct submission via the CAF interface to OSG and LCG
 Use SAMGrid/JIM sandboxing as an alternate way to deliver experiment and user software
 Combine DCAFs with Grid resources

16 Conclusions
 CDF has successfully deployed a global computing environment (DCAFs) for user analysis.
 A large portion (50%) of the experiment's total CPU resources is now provided offsite through a combination of DCAFs and other clusters.
 KorCAF (the DCAF in Korea) has switched to the Condor batch system.
 Active work is in progress to build bridges to true Grid methods and protocols, providing a path to the future.

17 Backup

18 Abstract
CDF is a large-scale collaborative experiment in particle physics currently taking data at the Fermilab Tevatron. As a running experiment, it generates a large amount of physics data that require processing for user analysis. The collaboration has developed techniques for such analysis and the related simulations based on distributed clusters at several locations throughout the world. We describe the evolution of CDF's global computing approach, which exceeded 5 THz of aggregate CPU capability during the past year, and its plans for putting increasing amounts of user analysis and simulation onto the grid.

19 CDF Data Analysis Flow
CDF Level-3 Trigger → Tape Storage → Production Farm → Central Analysis Farm

