
1 UK Computing. Glenn Patrick, Rutherford Appleton Laboratory. 12th November 2003, LHCb Software Week.

2 RAL Tier-1

3 RAL Tier-1
2002 - 312 CPUs: 4 racks holding 156 dual 1.4 GHz Pentium III systems.
March 2003 - extra 160 CPUs: 80 dual-processor 2.66 GHz Pentium 4 Xeon systems.
Dec 2003 - extra 400-500 CPUs: 200-250 dual systems.
Operating system now RedHat 7.3. Batch system is PBS.
CSF legacy equipment: 250 CPUs (450 MHz - 1 GHz).
Total ~1000 CPUs.
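
The dual-node counts reproduce the quoted CPU figures. As a quick sanity check (my arithmetic, not from the slide; it takes the December 2003 delivery at its upper 250-system estimate and treats the 250 legacy CSF CPUs as separate):

# Sanity check of the Tier-1 CPU counts quoted above (Python).
cpus_2002    = 156 * 2   # 156 dual Pentium III systems -> 312 CPUs
cpus_mar2003 = 80 * 2    # 80 dual Xeon systems -> 160 CPUs
cpus_dec2003 = 250 * 2   # upper estimate: 250 dual systems -> 500 CPUs
print(cpus_2002 + cpus_mar2003 + cpus_dec2003)  # 972, i.e. "Total ~1000 CPUs"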

4 RAL Tier-1 Mass Storage
2002 - 40 TB: 26 disk servers, each with 2 x 0.8 TB filesystems.
March 2003 - extra 40 TB: 11 new disk servers, each with 2 x 1.8 TB filesystems.
Dec 2003 - extra ~70 TB.
Disk cluster total ~150 TB.
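
The totals follow directly from the server counts; a minimal Python check of the raw capacities (ignoring any RAID or filesystem overhead, which the slide does not specify):

# Raw disk capacity implied by the server counts above, in TB.
tb_2002    = 26 * 2 * 0.8   # 41.6 TB -> quoted as 40 TB
tb_mar2003 = 11 * 2 * 1.8   # 39.6 TB -> quoted as "extra 40 TB"
tb_dec2003 = 70             # "extra ~70 TB"
print(round(tb_2002 + tb_mar2003 + tb_dec2003, 1))  # ~151 TB, i.e. "total ~150 TB"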

5 RAL DataStore
STK 9310 (Powderhorn) tape robot.
June 2003 - upgraded with 8 x STK 9940B drives.
Transfer speed 30 MB/s per drive. Tape capacity 200 GB.
5,500 slots = 1 PB potential capacity.
Current capacity limited by number of tapes.
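
A quick Python check of the capacity and bandwidth figures (raw numbers only; usable capacity and real throughput would be lower):

# Potential capacity and aggregate drive bandwidth of the DataStore.
slots, tape_gb = 5500, 200
drives, mb_per_s = 8, 30
print(slots * tape_gb / 1e6)  # 1.1 PB potential capacity ("= 1 PB")
print(drives * mb_per_s)      # 240 MB/s aggregate transfer rate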

6 [Diagram: RAL DataStore layout. STK 9310 "Powder Horn" robot with eight 9940B drives, connected through two fibre-channel switches to an RS6000 server with 1.2 TB of staging disk on the Gbit network.]

7 GRID at RAL
LCG: currently 5 worker nodes in testbed.
15.8.2003 - LCG on-line.
10.9.2003 - Upgraded to LCG1-1_0_0.
22.9.2003 - Upgraded to LCG1-1_0_1.
Amount of future hardware deployed in LCG depends on experiments and GridPP.
EDG:
EDG 2.1 - deployed on development testbed.
EDG 2.0 - on main production testbed.
EDG 1.4 - gatekeeper into main production farm.

8 [Chart: Tier-1 farm usage by experiment - LHCb, CMS, BaBar, ATLAS.]

9 2003 Data Challenge vs 2004 (same sites?)

Site        2003 SPECint2k*hours   % Share   2004 SPECint2k*hours   2004 Storage (GB)
Bristol     5,520,705              0.9%      37,019,600             183
Cambridge   12,881,646             2.1%      86,379,067             428
Imperial    78,516,698             12.8%     526,500,978            2,606
Oxford      7,360,940              1.2%      49,359,467             244
RAL         54,593,642             8.9%      366,082,711            1,812
ScotGrid    47,232,701             7.7%      316,723,244            1,568
TOTAL       206,106,332            33.6%     1,382,065,067          6,841

Factor of 6.7. Still hope for ~10% share from the 3 largest centres?
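
The headline figures in the table are internally consistent. The short Python check below recomputes the 2003-to-2004 scaling factor and the implied size of the full 2004 challenge; the latter is my inference from the 33.6% UK share, not a number on the slide:

# 2003 -> 2004 scaling and implied full-DC04 size (SPECint2k*hours).
uk_2003, uk_2004, uk_share = 206_106_332, 1_382_065_067, 0.336
print(round(uk_2004 / uk_2003, 1))         # 6.7  ("Factor of 6.7")
print(round(uk_2004 / uk_share / 1e9, 1))  # ~4.1e9 SI2k*hours for the whole challenge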

10 Tier-1 Resources for LHCb
Requested for DC04 (April - June), from Marco's numbers (assuming the same share as DC03):
CPU requirement - 366M SI2k*hours.
6 TB of disk for a "permanent copy" of all DSTs, to be used for analysis (may reduce to 1 TB if pre-selection is used).
Existing disk servers (3.2 TB) used to store MC production from RAL and other UK sites before transfer to tape/CERN.
Mass storage of 7 TB to store SIM+DIGI data from all UK sites.
Actual resources will depend on competition from other experiments.

11 CPU Requirements (KSI2K)
[Chart: projected CPU requirements in KSI2K, showing LHCb and LHCb x3 curves.]
LHCb needs ~20% of the farm for 3 months.
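
For context (my arithmetic, not from the slide, assuming the April-June window is roughly 91 days of continuous running), the 366M SI2k*hours request corresponds to the following sustained rate:

# Sustained CPU rate implied by the DC04 request at RAL.
si2k_hours = 366e6    # requested for April - June
window_h = 91 * 24    # ~3-month window, assumed fully usable
print(round(si2k_hours / window_h / 1e3))  # ~168 KSI2K sustained
# Broadly consistent with "~20% of farm for 3 months" on a farm of
# order 1000 recent CPUs.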

12 UK Tier-2 Centres
NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield.
SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD.
ScotGrid: Durham, Edinburgh, Glasgow.
LondonGrid: Brunel, Imperial, QMUL, RHUL, UCL.

13 Existing Hardware (April 2003)

Tier-2      Number of CPUs   Total CPU (KSI2000)   Total Disk (TB)   Total Tape (TB)
London      677              335                   26                8
NorthGrid   815              263                   43                28
SouthGrid   436              229                   32                8
ScotGrid    259              126                   18                20
Total       2187             953                   118               64

Estimated Hardware (Sept 2004)

Tier-2      Number of CPUs   Total CPU (KSI2000)   Total Disk (TB)   Total Tape (TB)
London      2454             1996                  99                20
NorthGrid   2718             2801                  209               332
SouthGrid   918              930                   67                8
ScotGrid    368              318                   79                0
Total       6458             6045                  455               360

14 Liverpool
New MAP2 facility now installed: 940 3 GHz/1 GB/128 GB P4 Dell nodes (~1.1M SPECint2k).
Scali Manage installed, RH9.
20-CPU CDF facility now installed: Fermi RH Linux and 5.9 TB disk.
MAP memory upgrade (270 nodes).
EDG 2.0 being installed.
10% of DC04 would take 17 days on all processors.
Initial LHCb scheduling proposal: 50% of farm for ~1 month.
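
A rough Python cross-check of the "17 days" figure, under my assumption that "10% of DC04" means one tenth of the full 2004 challenge implied by the 33.6% UK share on the earlier slide:

# Rough check of the "10% of DC04 in 17 days" estimate for MAP2.
full_dc04 = 1_382_065_067 / 0.336  # ~4.1e9 SI2k*hours (inferred, see above)
work = 0.10 * full_dc04            # 10% of the challenge
farm_si2k = 1.1e6                  # "~1.1M SPECint2k"
print(round(work / farm_si2k / 24))  # ~16 days, close to the quoted 17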

15 ScotGrid
ScotGRID processing nodes at Glasgow (128 CPUs):
59 IBM X Series 330 dual 1 GHz Pentium III with 2 GB memory.
2 IBM X Series 340 dual 1 GHz Pentium III with 2 GB memory.
3 IBM X Series 340 dual 1 GHz Pentium III with 2 GB memory.
1 TB disk, LTO/Ultrium tape library, Cisco ethernet switches.
ScotGRID storage at Edinburgh (5 TB): IBM X Series 370 PIII Xeon, 70 x 73.4 GB IBM FC hot-swap HDD.
Phase 1 complete. Phase 2 now commissioning:
Upgrade database server in Edinburgh.
16-20 TB disk storage in Edinburgh.
5 TB disk storage in Glasgow (relocated from Edinburgh).
Edge servers for Edinburgh.
New kit for Glasgow - CDF and eDIKT.
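
The Glasgow CPU count follows directly from the node list above (a simple Python check):

# Glasgow processing nodes: all dual-CPU systems.
nodes = 59 + 2 + 3   # X Series 330 plus the two batches of X Series 340
print(nodes * 2)     # 128 CPUs, as quoted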

16 Imperial
Viking at London e-Science Centre: upgrade to ~500 CPU cluster (33% HEP + bioinformatics + ...).
Ready to join LCG1.
Ulrik - "Factor of 5 seems realistic" running across ~3 months.
Note: other potential resources coming online in London Tier-2...
Royal Holloway - ~100 CPUs.
UCL - ~100 CPUs.
Brunel (BITLab) - 64 dual Xeon nodes + 128 more nodes. Timescale? LHCb use?

17 Manpower
Currently, little dedicated manpower; rely on effort shared with other tasks.
Gennady has been the main technical link for the Tier-1.
Needed: easy installation/maintenance of production & analysis software and tools.

