1 LHC Computing – the 3rd Decade
Jamie Shiers, LHC OPN meeting, October 2010

2 The 3rd Decade of LHC Computing
We are now entering the 3rd decade of LHC Computing, marked by the successful use of the Worldwide LHC Computing Grid for extended data taking since the restart of the LHC at the end of March 2010.
– See the July 2010 Economist article
What are the issues and challenges of this decade, in particular from a network viewpoint?

3 Evolution
We heard at the Data Access and Management Jamboree in Amsterdam that modifications to the experiments' data models are likely.
In a nutshell: exploit the success of the network; more dynamic caching; more data / network oriented…
This might well be accompanied by increased network demands, at least for some Tier2s.
(But you've heard this before…)
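
To make the "more dynamic caching" point concrete, here is a minimal sketch of the idea, assuming a pull-on-demand model: data is fetched over the wide-area network on first access and cached locally, rather than relying only on statically pre-placed replicas. All names and paths are hypothetical and this is not any experiment's actual framework.

# Hypothetical sketch of pull-on-demand caching at a Tier2.
import shutil
from pathlib import Path

CACHE_DIR = Path("/scratch/dataset-cache")      # hypothetical local cache area

def fetch_over_network(dataset: str) -> Path:
    """Placeholder for the real wide-area transfer tool (name assumed)."""
    raise NotImplementedError("plug in the experiment's transfer mechanism")

def open_dataset(dataset: str) -> Path:
    """Return a local path for `dataset`, pulling it over the network once."""
    local_copy = CACHE_DIR / dataset
    if not local_copy.exists():                 # cache miss: go to the network
        CACHE_DIR.mkdir(parents=True, exist_ok=True)
        shutil.copy(fetch_over_network(dataset), local_copy)
    return local_copy                           # later accesses hit the cache

The only point of the sketch is the shift in responsibility: data placement becomes a consequence of access patterns, which is exactly what increases the reliance on the network.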

4 The First Decade
Started at CHEP 1992, Annecy, France, where significant focus was on the challenges of the SSC and LHC – plus increasing focus on "industry standards" versus HEP-specific solutions.
Led to several years of R&D – object-oriented analysis and design, object-oriented languages and databases – and production use towards the end of the decade.
Co-existed with wide-scale LEP exploitation and a revolution in the IT world: the Internet explosion, commodity PCs, the Web.
It ended with the elaboration of possible models for LHC Computing – the "MONARC proposal".

5 The MONARC model
The MONARC project tried to define a set of viable models for LHC computing.
It proposed a hierarchical model with a small number of regional centres at the national level plus a larger number of local centres – Universities or Institutes.
This model – consisting of a Tier0, roughly 10 Tier1s and some 100 Tier2s – is the basis of today's production environment.
N.B. MONARC foresaw optional airfreight as an alternative to costly and low-bandwidth networking (622 Mbit/s or less…) – a rough illustration of why follows below.
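
As a back-of-envelope illustration (my own arithmetic; only the 622 Mbit/s link speed and the 200 TByte centre size come from these slides), the following sketch shows why airfreight was taken seriously at MONARC-era bandwidths:

# Illustrative only: how long would it take to copy a Tier1-sized dataset
# over a MONARC-era 622 Mbit/s link, assuming perfect 100% utilisation?
LINK_MBPS = 622                # link speed from the MONARC figures
DATASET_TB = 200               # e.g. the FNAL/BNL-scale store in the model

bytes_per_second = LINK_MBPS * 1e6 / 8          # ~78 MB/s
seconds = DATASET_TB * 1e12 / bytes_per_second
print(f"~{seconds / 86400:.0f} days at full utilisation")   # ~30 days
# Ignoring protocol overhead and competing traffic, a full copy takes about
# a month, which is why shipping media looked competitive at the time.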

6 The MONARC model vs today
[Diagram from the original slide: the MONARC hierarchy, with desktops feeding University centres (n×10^6 MIPS, 100 TByte, robot), regional centres such as FNAL/BNL (4×10^6 MIPS, 200 TByte, robot) and CERN at the top (6×10^7 MIPS, 2000 TByte, robot), connected by 622 Mbit/s links.]

7 Enter the Grid
Around the turn of the millennium, cracks were beginning to appear in the solutions proposed by the various R&D projects – and adopted at the 100 TB–1 PB scale by experiments from several labs across the world.
– Major data and software migrations were necessary
At the same time, Ian Foster et al. were evangelizing a new model for distributed computing.
The HEP bit: CERN was the lead partner in a series of EU-funded projects, and (W)LCG was born.

8 The Second Decade
Several generations of grid R&D and deployment projects: in Europe, EDG followed by EGEE I, II and III (EUR 100M of investment from the EU), plus partner projects in other areas of the world.
The 1st half of the decade included "data challenges" run by the experiments, testing components of their computing models and specific services.
The 2nd half: a series of "service challenges" that contributed to the ramp-up of the global service so that it was ready well before planned data taking.
– Strong focus on network issues – but also on the end-to-end service: usability of the global system by the experiments
Moving targets: computing models, experiment frameworks and the underlying middleware and services all developed concurrently…

9 The Story So Far…
1990s: R&D, pilot strategies used in production
2000s: data & service challenges, production deployment & hardening
▶ 2010s: service – production exploitation, plus changes to reflect experience from production, plus evolution in the IT world

10 Service
IMHO it is inevitable that the models will become more network-centric.
This means that any concerns with the current situation risk being aggravated – and should be resolved as soon as possible.
A particular concern – for a long while – has been the response to, and resolution of, network problems.

11 Requirements
N.B. these are general Tx-Ty needs – not limited to those connections based on the OPN.
We need the involvement of network experts early on in problem handling.
It is understood that these problems are often complex and involve multiple parties – it is precisely for this reason that "the network experts" should become involved early (knowledge and contacts).
We need ownership of these problems – someone, or a team, who will follow up until the issue is resolved and perform the appropriate post-mortem.

12 Summary
There are other talks in this meeting where monitoring, Tier2 support and other key issues will be addressed.
I did not see anything explicit regarding the above key service issues, which is why I chose to focus on them.
There has been a clear increase in reliance on the network over the past two decades of LHC computing.
To be successful in the 3rd, these chronic service issues need to be resolved. Period.

