
1 The following is a collection of slides from a few recent talks on computing for ATLAS in Canada, plus a few new ones. I might refer to all of them, or I might not, depending on time and the scope Les wants covered.

2 M.C. Vetterli; SFU/TRIUMF — The Canadian Model
 Establish a large computing centre at TRIUMF that will be on the LCG and will participate in the common tasks associated with Tier-1 and Tier-2 centres.
 Canadian groups will use existing CFI facilities (or what they will become) to do physics analysis. They will access data and the LCG through TRIUMF.
 The jobs are smaller at this level and can be more easily integrated into shared facilities. We can also be independent of LCG middleware.
In this model, the TRIUMF centre acts as the hub of the Canadian computing network, and as an LCG node.

3 The ATLAS-Canada Computing Model
[Diagram] The TRIUMF gateway (cpu/storage, experts) sits between CERN and the ATLAS Grid (USA, Germany, France, UK, Italy, …) on one side and, over CA*Net4, the Canadian Grid of CFI-funded university sites (UVic, SFU, UofA, UofT, Carleton, UdeM) on the other. TRIUMF provides MC data, ESD’, calibration, algorithms, MC production, AOD, DPD, technical expertise, and access to both the ATLAS Grid and the CDN Grid; the Canadian sites receive ESD and access to RAW & ESD.

4 M.C. Vetterli; SFU/TRIUMF — What Will We Need at TRIUMF?
 Total computing power needed: 1.8 MSI2k (≈ 250 dual 10 GHz machines, or 5000 × 1 GHz CPUs)
 Total storage required: 340 TB of disk and 1.2 PB of tape
 We assume that the network will be 10 GbitE for both the LAN and WAN.
 These numbers have been supported by an expert advisory committee.

5 M.C. Vetterli; SFU/TRIUMF — Acquisition Profile

Year         2005-06  2006-07  2007-08  2008-09  2009-10
Fraction       10%      33%      67%     100%     133%
Disk (TB)      34       113      226      340      453
Tape (TB)      120      400      800     1200     1600
CPU (MSI2k)    0.18     0.6      1.2      1.8      2.4
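The acquisition profile is just the full-capacity targets from the previous slide (340 TB disk, 1.2 PB tape, 1.8 MSI2k) scaled by a yearly ramp-up fraction. A minimal sketch of that arithmetic (the names and structure here are illustrative, not from the talk; the published figures are lightly rounded):

```python
# Full-capacity (100%) targets for the TRIUMF centre, from the slides.
TARGETS = {"disk_tb": 340, "tape_tb": 1200, "cpu_msi2k": 1.8}

# Ramp-up fraction per fiscal year, from the acquisition-profile table.
FRACTIONS = {"2005-06": 0.10, "2006-07": 0.33, "2007-08": 0.67,
             "2008-09": 1.00, "2009-10": 1.33}

def profile(resource):
    """Yearly capacity for one resource, scaled by the ramp-up fraction."""
    return {year: TARGETS[resource] * f for year, f in FRACTIONS.items()}

print(profile("disk_tb")["2008-09"])  # -> 340.0 (full capacity)
```

Checking a few entries against the table: 10% of 340 TB gives the 34 TB of disk in 2005-06, and 10% of 1.8 MSI2k gives the 0.18 MSI2k of CPU.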

6 M.C. Vetterli; SFU/TRIUMF — The TRIUMF Centre - II
 8 NEW people to run the centre are included in the budget: 4 for system support, 4 for software/user support.
 Also one dedicated technician.
 Personnel in the university centres will be mostly for system support.
 More software support will be available from ATLAS postdocs.

7 M.C. Vetterli; SFU/TRIUMF — Status of Funding
 The TRIUMF centre will be funded through the next TRIUMF 5-year plan, which starts Apr. 1, 2005. A decision on this is expected around the end of this year.
 University centres are funded through the Canada Foundation for Innovation and the provincial governments. These centres exist and should continue to be funded as shared facilities.
 Ask CFI for funds for a second large centre? This is driven by new requirements for T1 centres; we have just started discussing it.

8 M.C. Vetterli; SFU/TRIUMF — The TRIUMF Prototype Centre
 Hardware:
   - 5 dual 2.8 GHz Xeon nodes
   - 6 white boxes (2 CE, LCFGng, UI, LCG-GIIS, spare)
   - 1 SE (770 GB usable disk space)
 Functionality:
   - LCG core node (CE #1)
   - Gateway to Grid-Canada & WestGrid (CE #2)
   - Canadian regional centre: coordinates & pre-certifies Canadian LCG centres; primary contact with LCG
 Middleware:
   - Grid inter-operability: integrate non-LCG sites; there is a lot of interest in this (UK, US)
Rod Walker (SFU research associate) has been invaluable!

9 M.C. Vetterli; SFU/TRIUMF — The Other Canadian Sites
 Victoria: Grid-Canada Production Grid (PG-1); grid inter-operability (Dan Vanderster et al.)
 SFU/WestGrid: non-LCG test site (incorporated into LCG through TRIUMF)
 Alberta: Grid-Canada Production Grid (PG-1); LCG node; coordination of DC2 for Canada (Bryan Caron)
 Toronto: LCG node; ATLAS software mirror
 Montreal: LCG node
 Carleton: LCG node

10 M.C. Vetterli; SFU/TRIUMF — Canadian DC2 Computing Resources
Note: 1 kSI2k ≈ one 2.8 GHz Xeon
 400 × 2.8 GHz CPUs
 23 TB of disk
 50 TB of tape
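Assuming the slide's rule of thumb that one 2.8 GHz Xeon delivers about 1 kSI2k, the DC2 CPU pool can be converted to the MSI2k units used elsewhere in the talk. A small sketch of that bookkeeping (function and constant names are my own):

```python
# Rule of thumb from the slide: one 2.8 GHz Xeon ~ 1 kSI2k.
KSI2K_PER_XEON_28 = 1.0

def msi2k(n_cpus, ksi2k_per_cpu=KSI2K_PER_XEON_28):
    """Total capacity in MSI2k for n_cpus identical processors."""
    return n_cpus * ksi2k_per_cpu / 1000.0

print(msi2k(400))  # -> 0.4  (the 400-CPU DC2 pool in MSI2k)
```

By this measure the Canadian DC2 resources (0.4 MSI2k) are a little over a fifth of the 1.8 MSI2k planned for the full TRIUMF centre.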

11 M.C. Vetterli; SFU/TRIUMF — Federated Grids for ATLAS DC2
[Diagram] Grid-Canada PG-1 and WestGrid are federated into LCG through two gateways, LCG/Grid-Can and LCG/WestGrid, hosted at SFU/TRIUMF. This is in addition to the LCG resources in Canada.

12 M.C. Vetterli; SFU/TRIUMF — Linking HEPGrid to LCG
[Diagram] Components: Grid-Canada (GC) resources 1…n with a GC negotiator/scheduler; WestGrid (WG) at UBC/TRIUMF; TRIUMF cpu & storage with its own negotiator/scheduler and RB/scheduler; the LCG BDII/RB/scheduler.
Publishing resources to LCG:
1) Each GC resource publishes a class ad to the GC collector.
2) The GC CE aggregates this info and publishes it to TRIUMF as a single resource.
3) The same is done for WG.
4) TRIUMF aggregates GC & WG and publishes this to LCG (via MDS) as one resource.
5) TRIUMF also publishes its own resources separately.
Job flow:
1) The LCG RB decides where to send the job (GC/WG or the TRIUMF farm).
2) The job goes to the TRIUMF farm, or TRIUMF decides to send the job to GC or WG.
3) The Condor-G job manager at TRIUMF builds a submission script for the TRIUMF Grid.
4) The TRIUMF negotiator matches the job to GC or WG.
5) The job is submitted to the proper resource.
6) The process is repeated on GC if necessary.
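The two-level matchmaking above can be sketched in a few lines: member sites publish "class ads", a gateway aggregates them into a single ad, and a negotiator matches a job against the aggregated ads. This is plain Python standing in for Condor's ClassAd machinery; the field names and the single free-CPU attribute are illustrative assumptions, not the real schema.

```python
# Sketch (not real Condor code) of gateway-level class-ad aggregation
# and matchmaking, as described on the HEPGrid/LCG linking slide.
def aggregate(name, ads):
    """Collapse several member-resource ads into one gateway-level ad."""
    return {"name": name,
            "free_cpus": sum(ad["free_cpus"] for ad in ads)}

def match(job, ads):
    """Return the name of the first ad with enough free CPUs, else None."""
    for ad in ads:
        if ad["free_cpus"] >= job["cpus"]:
            return ad["name"]
    return None

# Two GC resources appear to LCG as one aggregated resource; same for WG.
gc = aggregate("Grid-Canada", [{"free_cpus": 12}, {"free_cpus": 30}])
wg = aggregate("WestGrid", [{"free_cpus": 8}])

print(match({"cpus": 40}, [wg, gc]))  # -> Grid-Canada
```

The design point the slide is making is exactly this collapsing step: LCG only ever negotiates against the aggregated ad, so non-LCG sites can be added behind the gateway without LCG middleware running on them.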

