
Slide 1/14: Planning Resources for LHCb Computing Infrastructure
John Harvey, LHCb Software Week, July 5-7, 2000 (Panel 3: list of deliverables)

Slide 2/14: Outline
- Basic questions on the LHCb Computing Model
- Technical requirements
- Plan for tests and the 2003 prototype
- Milestones
- Costing and manpower
- Guidelines for the MoU
- Calendar of summer activities
- Summary

Slide 3/14: Computing Model
[Figure: data-flow diagram of the LHCb computing model. The CERN Tier 0 performs data recording, calibration, reconstruction and reprocessing, feeding a central data store (real data ~80 TB/yr, simulated data ~120 TB/yr). Tier 1 centres (UK, France, Italy, Spain, possibly a CERN Tier 1, others open) run simulation production and analysis with their own central data stores. Institutes such as Oxford, Barcelona, Roma I and Marseilles perform selected user analyses on local data stores, receiving AOD/TAG data at 8-12 TB/yr.]
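
The data volumes labelled in the figure give a rough feel for how the stores grow. The minimal Python sketch below does that arithmetic; the running period is an assumed placeholder, and only the per-year rates come from the figure.

```python
# Rough arithmetic on the data volumes labelled in the figure above.
# The running period (YEARS) is an assumed placeholder; the per-year
# rates are the figure's own labels.

REAL_TB_PER_YEAR = 80          # real data written to the central store
SIM_TB_PER_YEAR = 120          # simulated data written to the central store
AOD_TAG_TB_PER_YEAR = (8, 12)  # AOD/TAG shipped to each local data store

YEARS = 3  # assumed, for illustration only

central_store_tb = YEARS * (REAL_TB_PER_YEAR + SIM_TB_PER_YEAR)
local_low, local_high = (YEARS * rate for rate in AOD_TAG_TB_PER_YEAR)

print(f"central data store after {YEARS} years: ~{central_store_tb} TB")
print(f"each local data store after {YEARS} years: ~{local_low}-{local_high} TB")
```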

Slide 4/14: Some basic questions
- Which are the LHCb Tier 1 centres?
  - UK, France, Italy, ...?
  - We are being asked this question now.
- How will countries without a Tier 1 centre access LHCb datasets and do analysis?
- For each centre we need to ascertain the policy for permitting remote access to resources:
  - Is it a collaboration-wide resource?
  - Is it restricted by nationality?
  - How will use of the resources be allocated and costed?

Slide 5/14: Technical Requirements
- Tier 0: real-data production and user analysis at CERN
  - 120,000 SI95 CPU, 500 TB/yr, 400 TB (export), 120 TB (import)
- Tier 1: simulation production and user analysis
  - 500,000 SI95 CPU, 330 TB/yr, 400 TB (export), 120 TB (import)
  - Shared between the Tier 1 centres
- Requirements need to be revised:
  - New estimates of the simulated events needed, and their evolution with time
  - Number of physicists doing analysis at each centre
  - Comparison with estimates from other experiments to look for discrepancies (e.g. in analysis)
- Sharing between Tier 0 and Tier 1 needs to be checked (see the sketch below)
  - 1/3 (CERN), 2/3 (outside) guideline
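
As a quick illustration of why the sharing needs checking, here is a minimal sketch comparing the CPU figures above with the 1/3 : 2/3 guideline. The derived percentages are illustrative arithmetic only, and the sketch counts only the Tier 0 as CERN capacity, since the existence of a CERN Tier 1 is still an open question.

```python
# Minimal check of the Tier 0 / Tier 1 sharing guideline using the CPU
# figures quoted on this slide. Derived percentages are illustrative only;
# only the Tier 0 is counted as CERN capacity (a CERN Tier 1 is still open).

tier0_cpu_si95 = 120_000  # CERN Tier 0: real-data production and analysis
tier1_cpu_si95 = 500_000  # all Tier 1 centres combined: simulation and analysis

total_cpu = tier0_cpu_si95 + tier1_cpu_si95
cern_fraction = tier0_cpu_si95 / total_cpu

print(f"CERN share of total CPU capacity: {cern_fraction:.0%}")       # ~19%
print(f"1/3 (CERN) / 2/3 (outside) guideline: {1/3:.0%} / {2/3:.0%}")  # 33% / 67%
```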

Slide 6/14: Tests and the 2003 Prototype
- Adaptation of the LHCb production software to use grid middleware (i.e. Globus) has started
- Production tests of the new data processing software: performance and quality
- Production tests of the computing infrastructure
  - Performance of hardware, scalability
  - Tests of the grid infrastructure
- Construction of a large-scale prototype facility at CERN
  - Adequate complexity (number of physical CPUs, switches, disks, etc.)
  - Suitable architecture: scope covers both online and offline
  - Tests: scalability under different workloads, farm controls, data recording
  - Shared participation and exploitation by all 4 experiments
  - Planned for 2003, leaving sufficient time to gain experience
  - Scale limited by ATLAS/CMS needs

Slide 7/14: 2003 Prototype
[Figure: the 2003 prototype centred on the CERN Tier 0 and CERN Tier 1, with candidate participating Tier 1 centres: UK, Italy (x2), France, USA (x2), and possibly others.]
- The CERN Tier 0 is shared by all 4 experiments even in 2005 – large!
- Which are the Tier 1 centres that will participate in the prototype?
- The Tier 1s decide how big they will be in the prototype

Slide 8/14: Computing Milestones
- 2H2001 – Computing MoU
  - Assignment of responsibilities (cost and manpower)
- 1H2002 – Data Challenge 1 (functionality test)
  - Test the software framework machinery (database etc.)
  - Test the basic grid infrastructure (>2 Tier 1 centres)
  - Scale of test ~10^6 events in ~1 month – modest capacity
- 2H2002 – Computing TDR
- 1H2003 – Data Challenge 2 (stress tests using the prototype)
  - Stress tests of data processing: ~10^7 events in 1 month (see the capacity sketch below)
  - Production (simulation) and chaotic (analysis) workloads
  - Repeat the grid tests with all Tier 1 centres if possible
  - Include tests designed to gain experience with the online farm environment: data recording, controls
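
To give a feel for what the Data Challenge scales imply, here is a hedged back-of-envelope sketch. The per-event cost in SI95 seconds and the scheduling efficiency are assumed placeholders; only the event counts and the one-month window come from the milestones above.

```python
# Back-of-envelope capacity estimate for the Data Challenge scales above.
# Only the event counts and the one-month window come from the slide; the
# per-event cost and the efficiency are assumed placeholders.

SECONDS_PER_MONTH = 30 * 24 * 3600

def capacity_needed(events, si95_sec_per_event, efficiency=0.8):
    """SI95 capacity needed to process `events` within one month."""
    return events * si95_sec_per_event / (SECONDS_PER_MONTH * efficiency)

for name, events in [("DC1 (functionality, ~10^6 events)", 1e6),
                     ("DC2 (stress test,   ~10^7 events)", 1e7)]:
    cap = capacity_needed(events, si95_sec_per_event=1000)  # assumed cost
    print(f"{name}: ~{cap:,.0f} SI95 for one month")
```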

Slide 9/14: Planning
- Need to establish the participation of Tier 1 centres
  - In grid tests
  - In 2003 prototype tests
  - Coordination between Tier 1s and experiments (EU project etc.)
- Plan the Data Challenges
  - Tasks, manpower, schedule and milestones
  - Define technical details
  - Match to physics milestones – ongoing simulation work
- Plan the resources needed at each centre between now and 2005
  - To participate in tests
  - To participate in simulation production

Slide 10/14: Costing and Manpower
- We need to identify our requirements at each centre and work with the local experts to cost our needs.
- Each centre should apply its own costing model to estimate the resources required to service LHCb needs as a function of time. This includes:
  - Unit costs of CPU, managed storage, network bandwidth, ...
  - Ongoing maintenance costs
  - Policy for replacement and upgrade
- The staffing levels required for LHCb in the Tier 0 and Tier 1s should be addressed jointly by LHCb and the centres' experts to determine:
  - Dedicated LHCb support staff
  - Cost of services, broken down into in-house and outsourced
- We can then compare these figures with our own estimates of the cost – ultimately we have responsibility for ensuring that the LHCb computing cost estimate is correct.
- We need to estimate the evolution of the cost from now to 2005, i.e. including the cost of the prototype (a toy costing sketch follows this slide).
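
Purely as an illustration of the kind of costing model described above, the sketch below combines unit costs, ongoing maintenance and a replacement policy. Every unit cost and parameter in it (chf_per_si95, chf_per_tb, chf_per_mbps, maintenance_frac, replacement_years) is a made-up placeholder, and the wide-area bandwidth is assumed; a real centre would plug in its own numbers.

```python
# Toy costing model reflecting the items listed on this slide: unit costs of
# CPU, managed storage and network bandwidth, ongoing maintenance, and a
# replacement/upgrade policy. All unit costs and parameters are made-up
# placeholders; a real centre would substitute its own costing model.

def yearly_cost(cpu_si95, disk_tb, wan_mbps,
                chf_per_si95=2.0, chf_per_tb=5_000.0, chf_per_mbps=100.0,
                maintenance_frac=0.10, replacement_years=3):
    capital = (cpu_si95 * chf_per_si95
               + disk_tb * chf_per_tb
               + wan_mbps * chf_per_mbps)
    maintenance = maintenance_frac * capital    # ongoing maintenance
    replacement = capital / replacement_years   # straight-line replacement policy
    return capital, maintenance + replacement

# Example with the Tier 0 CPU and storage figures from slide 5 and an assumed
# wide-area bandwidth of 622 Mbps.
capital, recurrent = yearly_cost(cpu_si95=120_000, disk_tb=500, wan_mbps=622)
print(f"initial capital: ~{capital:,.0f} CHF")
print(f"recurrent cost:  ~{recurrent:,.0f} CHF/yr")
```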

Slide 11/14: Guidelines for Writing the Computing MoU
- Assignment of responsibility (cost and manpower) for the computing infrastructure
- Provision of computing resources
  - Breakdown by centres providing production capacity
  - Remote access policies of the Tier 1 centres
  - Maintenance and operation costs
- Manpower for computing centres
  - Part of each centre's strategic planning, i.e. not for the LHCb MoU
- Manpower for CORE computing components
  - CORE software components – institutional agreements
  - Software production tasks – librarian, quality, planning, documentation, ...
  - Data production tasks – operation, bookkeeping
- Manpower for detector-specific computing
  - Follows institutional responsibility for the detector
- Timescale – write during 2001, sign when ready

Slide 12/14: Construction of the Prototype
- We need to understand how the construction of the prototype will be funded and how the costs will be shared.
- This need not require an MoU if an informal agreement can be reached.
- We are expected to describe how we intend to use the prototype and to commit the manpower that will provide the applications and perform the tests.

Slide 13/14: Calendar of Summer Activities
- 17 July – Review Panel 3
  - Location of Tier 1 centres
  - LHCb technical requirements, to be sent to the centres for costing
  - Participation of Tier 1 centres in the prototype
- 22 August – Review Panel 3
- 20 September – Review Panel 3
- 25-29 September – LHCb collaboration meeting
- 3 October – Review Steering: report almost final
- 9 October – Review Panel 3
- 10 October – finalise the review report and distribute it
- 24 October – RRB meeting: discussion of the report

Slide 14/14: Summary
- We need to refine the model:
  - Where are the Tier 1s?
  - How will physicists at each institute access data and do analysis?
- We need to revise the technical requirements:
  - New estimates of simulation needs
  - Checks against the requirements of other experiments
- We need to define the tests, the data challenges (MDCs) and the use of the 2003 prototype
- We need to make cost and manpower estimates
  - Working with the various centres
- Next year we will need to write a Computing MoU defining the assignment of responsibility for costs and manpower
  - It will be signed only when all details are fixed

