LHC-Computing Grid LCG


1 LHC-Computing Grid LCG
"Eventually, users will be unaware they are using any computer but the one on their desk, because it will have the capabilities to reach out across the (inter-)national network and obtain whatever computational resources are necessary." (Larry Smarr and Charles Catlett, 1992)
Hans F Hoffmann, CERN, November 2002, SPC

2 High Energy Physics is leading the way in data intensive science
4 large detectors for the Large Hadron Collider (LHC): ATLAS, CMS, LHCb, ALICE
Storage: raw recording rate 0.1 – 1 GByte/sec, accumulating data at PetaBytes/year (~20 million CDs each year), 10 PetaBytes of disk
Processing: ~150,000 of today's fastest PCs
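These figures are roughly self-consistent. As an illustration only (the seconds of data taking per year and the CD capacity are assumptions, not from the slide), the short sketch below turns the quoted raw rate into a yearly volume and a CD count:

# Rough consistency check of the slide's storage figures.
# Assumptions (not from the slide): ~1e7 s of data taking per year,
# 650 MB per CD, decimal prefixes throughout.
PB = 1e15                     # bytes per petabyte
seconds_per_year = 1e7        # typical accelerator "live" time, assumed
cd_bytes = 650e6              # capacity of one CD-ROM, assumed

for rate_gb_per_s in (0.1, 1.0):          # slide quotes 0.1 - 1 GByte/sec raw
    yearly_bytes = rate_gb_per_s * 1e9 * seconds_per_year
    print(f"{rate_gb_per_s:>4} GB/s -> {yearly_bytes / PB:4.1f} PB/year "
          f"~ {yearly_bytes / cd_bytes / 1e6:4.1f} million CDs")
# -> 0.1 GB/s gives ~1 PB/year; 1 GB/s gives ~10 PB/year (~15 million CDs),
#    the order of magnitude quoted on the slide.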

3 Grids
Diagram: grid middleware linking mobile access, supercomputers and PC clusters, desktops, data storage, sensors and experiments, and visualisation over the Internet and networks (Hoffmann, Reinefeld, Putzer).

4 LCG Building a Grid
The virtual LHC Computing Centre: collaborating computer centres serving the experiments' virtual organisations (ALICE VO, and CMS or ATLAS or LHCb VOs).
What are the essential points of the Grid? The Grid hides the complexity of the structure from the user and from the application, and it enables a single distributed facility to be run as a service for several experiments, even several sciences.

5 Virtual Computing Centre
LCG Virtual Computing Centre
The user sees the image of a single cluster and does not need to know
- where the data is
- where the processing capacity is
- how things are interconnected
- the details of the different hardware
and is not concerned by the conflicting policies of the equipment owners and managers.
It should be as simple to use as a shared service like LXBATCH at CERN, and as reliable and predictable as LXBATCH.
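To make the single-cluster picture concrete, here is a toy sketch of the idea (illustrative only; the site names and the brokering policy are assumptions, not the actual LCG middleware): the user describes what the job needs, and a broker decides where it runs.

# Toy illustration of the "virtual computing centre" idea: the user submits a
# job description; a broker (hypothetical, not a real LCG component) chooses a
# site that holds the data and has free capacity.
from dataclasses import dataclass

@dataclass
class Site:
    name: str
    free_cpus: int
    datasets: set

SITES = [
    Site("cern-tier0", free_cpus=40, datasets={"raw-2002"}),
    Site("regional-A", free_cpus=300, datasets={"raw-2002", "sim-2002"}),
    Site("regional-B", free_cpus=10, datasets={"sim-2002"}),
]

def submit(dataset: str, cpus_needed: int) -> str:
    """Return the chosen site; the user never specifies where the job runs."""
    candidates = [s for s in SITES
                  if dataset in s.datasets and s.free_cpus >= cpus_needed]
    if not candidates:
        raise RuntimeError("no suitable site available")
    best = max(candidates, key=lambda s: s.free_cpus)   # simplest possible policy
    best.free_cpus -= cpus_needed
    return best.name

print(submit("raw-2002", cpus_needed=50))   # -> regional-A (CERN too busy)

Any real broker would also weigh queue lengths, transfer costs and site policies; the point of the sketch is only that none of this appears in the user's submission.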

6 What is the LHC Computing Grid Project?
LCG What is the LHC Computing Grid Project?
Mission of Phase 1 – prepare the computing environment for the analysis of LHC data:
- applications software: environment, tools, frameworks, common developments
- deploy and coordinate a global grid service encompassing the LHC Regional Centres
- coordination of significant efforts in middleware support, with emphasis on robustness, monitoring, error recovery (resilience)
- strategy and policy for resource allocation; authentication, authorisation, accounting, security
- monitoring, modelling & simulation of Grid operations; tools for optimising distributed systems

7 CERN will provide the data reconstruction & recording service (Tier 0)
LCG: CERN provides Tier 0, but only a small part of the analysis capacity.
Current planning for capacity at CERN + principal Regional Centres:
- 2002: 650 kSI2000, <1% of the capacity required in 2008
- 2005: 6,600 kSI2000, <10% of the 2008 capacity
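Read literally (an illustrative calculation, not on the slide), those percentages bound the 2008 requirement:

# Illustrative reading of the capacity figures quoted above.
capacity_2002 = 650      # kSI2000, stated to be <1% of the 2008 requirement
capacity_2005 = 6_600    # kSI2000, stated to be <10% of the 2008 requirement

print("2008 requirement implied by the 2002 figure: >", capacity_2002 / 0.01, "kSI2000")
print("2008 requirement implied by the 2005 figure: >", capacity_2005 / 0.10, "kSI2000")
# Both bounds point at a 2008 requirement of order 65,000-70,000 kSI2000,
# i.e. roughly a factor 100 growth between 2002 and 2008.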

8 Centres taking part in the LCG-1
Around the world, around the clock.

9 Project structure
LCG project structure (diagram): POB; SC2, which provides requirements to the PEB; and, under the PEB, the Architects Forum (design decisions and implementation strategy for physics applications) and the Grid Deployment Board (coordination, standards and management policies for operating the LCG Grid Service).

10 LCG: SC2 (+ PEB) Roles
SC2 – Software & Computing Committee; PEB – Project Execution Board.
SC2 brings together the four experiments and the Tier 1 Regional Centres. It identifies common solutions and sets requirements for the project, and may use RTAGs – Requirements and Technical Assessment Groups (limited scope, two-month lifetime with an intermediate report, one member per experiment plus experts).
PEB manages the implementation: organising projects and work packages, coordinating between the Regional Centres, collaborating with Grid projects, organising grid services.
SC2 approves the work plan and monitors progress.

11 SC2 Monitors Progress of the Project
LCG SC2 Monitors Progress of the Project
Requirements for several key work packages have been successfully defined, and the PEB has turned these into work plans: Data Persistency, Software Support, Mass Storage.
Other work plans are in preparation: Grid use cases, Mathematical Libraries.
Key requirements work is finishing in October: Detector Simulation, and the 'Blueprint for LHC experiments software architecture'.
This will trigger important further activity in application development.

12 Project Execution Board
LCG Project Execution Board
Two bodies set up to coordinate and take decisions:
Architects Forum – the software architect from each experiment plus the applications area manager; makes common design decisions and agreements between experiments in the applications area; supported by a weekly applications area meeting open to all participants.
Grid Deployment Board – representatives from the experiments and from each country with an active Regional Centre taking part in the LCG Grid Service; prepares the agreements, takes the decisions, and defines the standards and policies needed to set up and manage the LCG Global Grid Service.

13 LCG Recruitment status - snapshot

14 LCG Materials at CERN

15 LCG Level 1 Milestones
Level 1 milestones on a quarterly 2002-2005 timeline (applications and grid tracks):
- Hybrid Event Store available for general users
- Full Persistency Framework
- Distributed production using grid services
- Distributed end-user interactive analysis
- First Global Grid Service (LCG-1) available
- LCG-1 reliability and performance targets
- "50% prototype" (LCG-3) available
- LHC Global Grid TDR

16 ATLAS DC1 Phase 1 : July-August 2002
LCG ATLAS DC1 Phase 1: July-August 2002
3200 CPUs (110 kSI95), 71,000 CPU-days
39 institutes in 18 countries: Australia, Austria, Canada, CERN, Czech Republic, France, Germany, Israel, Italy, Japan, Nordic countries, Russia, Spain, Taiwan, UK, USA
Grid tools used at 11 sites
5×10^7 events generated, 1×10^7 events simulated, 3×10^7 single particles, 30 TBytes of files
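A rough cross-check of these numbers (illustrative only; the 62-day window for July-August is an assumption, and the per-event figure lumps all DC1 workloads together):

# Rough cross-check of the DC1 Phase 1 figures quoted above.
# Assumption (not on the slide): July + August 2002 ~ 62 wall-clock days.
cpus = 3200
cpu_days = 71_000
wall_days = 62

mean_occupancy = cpu_days / (cpus * wall_days)
print(f"average farm occupancy ~ {mean_occupancy:.0%}")          # ~36%

# Very rough: all 71,000 CPU-days divided by the 5e7 generated events,
# ignoring the split between generation, simulation and single particles.
events_generated = 5e7
print(f"~{cpu_days * 86_400 / events_generated:.0f} CPU-seconds per generated event")  # ~123 s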

17 ATLAS DC2 : October 2003 - March 2004
LCG ATLAS DC2: use Geant4; perform large-scale physics analysis; use LCG common software; make wide use of Grid middleware; further test the computing model; ~same amount of data as for DC1.
ATLAS DC3 (end 2004 – begin 2005): 5 times more data than for DC2.
ATLAS DC4 (end 2005 – begin 2006): 2 times more data than for DC3.

18 LCG: 6 million events, ~20 sites

19 State of play
LCG State of play
We are still solving basic reliability and functionality problems. This is worrying, as we still have a long way to go to get to a solid service: at end 2002, a solid service in mid-2003 looks (surprisingly) ambitious. HEP needs to limit divergence in developments; complexity adds cost.
We have not yet addressed system-level issues: how to manage and maintain the Grid as a system providing a high-quality, reliable service. Current developments offer few tools and little treatment of problem determination, error recovery, fault tolerance, etc.
Some of the advanced functionality we will need is only being thought about now: comprehensive data management, SLAs, reservation schemes, interactive use.
Many, many initiatives are under way and more are coming. How do we manage the complexity of all this?
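To illustrate the kind of error recovery and fault tolerance the slide says is largely missing, here is a generic sketch (not taken from any LCG or EDG component; the failure model and site names are invented) of retrying transient failures and falling back between sites:

# Generic sketch of the error-recovery behaviour discussed above: retry
# transient failures with backoff, then fall back to another site.
# Purely illustrative; not any particular grid middleware.
import random, time

class TransientError(Exception):
    pass

def run_at_site(site: str) -> str:
    """Stand-in for a real job submission; fails randomly to simulate the grid."""
    if random.random() < 0.5:
        raise TransientError(f"{site}: gatekeeper timeout")
    return f"job completed at {site}"

def submit_with_recovery(sites, retries_per_site=3, backoff_s=0.1):
    for site in sites:                           # fall back site by site
        for attempt in range(1, retries_per_site + 1):
            try:
                return run_at_site(site)
            except TransientError as err:
                print(f"attempt {attempt} failed: {err}")
                time.sleep(backoff_s * attempt)  # simple linear backoff
    raise RuntimeError("all sites exhausted")

print(submit_with_recovery(["cern", "regional-A", "regional-B"]))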

20 Establishing Priorities
LCG Establishing Priorities
We need to create a basic infrastructure that works well. LHC needs a systems architecture and high-quality middleware – reliable and fault tolerant – plus tools for systems administration. Focus on mainline physics requirements and robust data handling, with simple end-user tools that deal with the complexity.
We need to look at the overall picture of what we are trying to do and focus resources on the key priority developments.
KISS: we must simplify and make the simple things work well. It is easy to expand scope, much harder to contract it!

21 Summary on LCG
LCG Summary on LCG
- basic middleware partnership: science (computer scientists and software engineers), industry and scientific research, international collaboration
- global grid infrastructure for science, built on a foundation of core nodes serving existing national and global collaborations
- advanced middleware research programme: complementary projects with a focus on the emerging requirements of the LHC data analysis community

22 What else is going on: the international context, with the examples of Europe and the US (not mentioned here: UK e-Science, DE: FZK, NorduGrid, ...)

23 HEP related Grid projects
GriPhyN, PPDG, iVDGL (note the complementarity of the project partners' countries of origin).
Through links between sister projects, there is the potential for a truly global scientific applications grid.

24 EU DataGrid
Project duration: 3 years; 21 partners.
Deliverables: middleware tried out with Particle Physics, Earth Observation and Biomedical applications.
First review successfully passed; production-quality middleware requested by the experiments.
Layers: Applications – Distributed Fabrics – "Transparent" Grid Middleware.

25 The benefits of building Grids: an example from Astronomy
Crab Nebula viewed at four different wavelengths: X-ray, optical, infrared, radio.

26 The benefits of building Grids: an example from Earth observation
From global weather information to precision agriculture and emergency response.

27 Managing Large Scale Data
Hierarchical storage and indexing; highly distributed sources.
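As a generic illustration of hierarchical storage with an index (the catalogue layout, tier names and paths are assumptions, not a specific system), a lookup can return the replica on the fastest tier that holds a copy:

# Generic sketch of hierarchical storage with a replica index: look a logical
# file up in a catalogue and read it from the fastest tier that has a copy.
# Names, tiers and paths are invented for illustration only.
TIER_ORDER = ["disk-cache", "tape", "remote-site"]   # fastest first

# logical file name -> {tier: physical location}
catalogue = {
    "run1234/raw.root": {"tape": "/hsm/raw/1234", "remote-site": "gsiftp://t1-a/raw/1234"},
    "run1234/esd.root": {"disk-cache": "/pool/esd/1234", "tape": "/hsm/esd/1234"},
}

def locate(lfn: str) -> str:
    """Return the physical location on the fastest tier holding a replica."""
    replicas = catalogue.get(lfn)
    if not replicas:
        raise FileNotFoundError(lfn)
    for tier in TIER_ORDER:
        if tier in replicas:
            return f"{tier}: {replicas[tier]}"
    raise RuntimeError("replica recorded on an unknown tier")

print(locate("run1234/esd.root"))  # -> disk-cache: /pool/esd/1234
print(locate("run1234/raw.root"))  # -> tape: /hsm/raw/1234 (no disk copy)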

28 Berkeley workshop (12-13 Nov 2002)
EU perspectives on research on Grids – Kyriakos Baxevanidis, European Commission, DG INFSO

29 EU level effort complements National efforts
Foster cohesion, interoperability, cross-fertilization of knowledge, economies of scale, critical mass; increase value; multiply impact. (Diagram: the EU level – the IST Programme as the "flagship" – complementing the national level.)

30 Overview of EU level RTD-policy on Grids
- Committed to develop and deploy Grids
- Strong support for synergetic and integrated approaches
- Strong support for international co-operation (the EU with the other regions)

31 Major Infrastructure deployments (on-going)
1. DATAGRID, CROSSGRID, DATATAG
- Infrastructure across 17 European states; cross-Atlantic link (2.5 Gbit/s)
- Application requirements: computing > 20 TFlop/s, downloads > 0.5 PBytes, network speeds at 10 Gbit/s
- Collaborations of more than 2000 scientists
2. EUROGRID, GRIP
- Infrastructure across 6 European states
- Industrial design and simulations (as well as scientific applications); Globus - Unicore interface
- Application requirements: real-time, resource brokerage, portals, coupled applications
- Links to national Grid infrastructures
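For a sense of scale (an illustrative calculation, not on the slide), moving 0.5 PBytes over links of this speed takes days even at the full line rate:

# Illustrative transfer-time calculation for the download figure quoted above,
# assuming the full line rate is usable (no protocol or sharing overheads).
data_bytes = 0.5e15                      # 0.5 PBytes
for gbit_s in (2.5, 10.0):               # the link speeds quoted on the slide
    seconds = data_bytes * 8 / (gbit_s * 1e9)
    print(f"{gbit_s:>4} Gbit/s -> {seconds / 86_400:.1f} days")
# -> ~18.5 days at 2.5 Gbit/s, ~4.6 days at 10 Gbit/s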

32 Grids in FP6: some important priorities
- From current prototypes to production-level systems (industrial quality); promote commercial uptake
- Research on new concepts (Semantic Grids, Grids and the Web, Mobile Grids, Ambient Intelligence Spaces)
- Strengthen middleware development in Europe (academia-industry collaboration, skills)
- Involve new user/application communities
- Funding targeted more at large-scale efforts (tens of millions of euros); mobilize national and private funds
- Close ties with national efforts (national Grid centres?)
- Strengthen international cooperation and standards

33 Synergetic work: at the core of the activity
(Diagram: GRIDLAB, GRIA, EGSO, DATATAG, CROSSGRID, DATAGRID, GRIP, EUROGRID, DAMIEN and AVO, linked through GRIDSTART.)
Synergy with new Grid projects, EU national and international efforts, GGF.

34 EGEE: EU 6th Framework Programme (Enabling Grids for E-science and industry in Europe)
EU and EU member states have made a major investment in Grid technology, with several good prototype results.
Next step: leverage current and planned national programmes, work closely with relevant industrial Grid developers and NRNs, build on existing middleware and expertise, and create a general European Grid production-quality infrastructure.
This can be achieved for a minimum of €100m over 4 years, on top of the national and regional initiatives. (Diagram: applications – EGEE – network.)

35 Blue Ribbon Panel on Cyberinfrastructure: Presentation to MAGIC
Paul Messina, November 6, 2002

36 Cyberinfrastructure: the Middle Layer
- Applications in science and engineering research and education
- Cyberinfrastructure: hardware, software, personnel, services, institutions
- Base technology: computation, storage, communication

37 Some roles of cyberinfrastructure
- Processing, storage, connectivity: performance, sharing, integration, etc.
- Make it easy to develop and deploy new applications: tools, services, application commonality
- Interoperability and extensibility enable future collaboration across disciplines
- Best practices, models, expertise
- The greatest need is software and experienced people

38 From Prime Minister Tony Blair’s Speech to the Royal Society (23 May 2002)
What is particularly impressive is the way that scientists are now undaunted by important complex phenomena. Pulling together the massive power available from modern computers, the engineering capability to design and build enormously complex automated instruments to collect new data, with the weight of scientific understanding developed over the centuries, the frontiers of science have moved into a detailed understanding of complex phenomena ranging from the genome to our global climate. Predictive climate modelling covers the period to the end of this century and beyond, with our own Hadley Centre playing the leading role internationally.
The emerging field of e-science should transform this kind of work. It's significant that the UK is the first country to develop a national e-science Grid, which intends to make access to computing power, scientific data repositories and experimental facilities as easy as the Web makes access to information.
One of the pilot e-science projects is to develop a digital mammographic archive, together with an intelligent medical decision support system for breast cancer diagnosis and treatment. An individual hospital will not have supercomputing facilities, but through the Grid it could buy the time it needs. So the surgeon in the operating room will be able to pull up a high-resolution mammogram to identify exactly where the tumour can be found.

39 Futures: The Computing Continuum
Petabyte archives, smart objects, national petascale systems, ubiquitous sensor/actuator networks, collaboratories, responsive environments, terabit networks, laboratory terascale systems, contextual awareness, a ubiquitous infosphere. Building up, building out: science, policy and education.

40 Key Points About the Proposed Initiative
- There is grass-roots vision and demand from broad S&E research communities. Many needs will not be met by the commercial world.
- Scope is broad, systemic, strategic. A lot more than supercomputing: extreme science, not flops.
- Potential to relax constraints of distance, time, and disciplinary boundaries.
- New methods: computation, visualization, collaboration, intelligent instruments, data mining, etc.
- Opportunity to leverage significantly prior NSF and other government investments.
- Potential large opportunity cost for not acting soon.
- The initiative is intrinsically international: cooperation and competition. Can't assume the US is in the lead.

41 Components of CI-enabled science & engineering
A broad, systemic, strategic conceptualization (diagram): high-performance computing for modeling, simulation, data processing/mining; humans; instruments for observation and characterization; individual and global connectivity; group interfaces and visualization; facilities for activation, manipulation and construction; collaboration services; the physical world; and knowledge management institutions for collection building and curation of data, information, literature, and digital objects.

42 Coordination (synergy) Matrix
A matrix crossing three activities – research in technologies, systems, and applications; development or acquisition; operations in support of end users – with three layers: applications of information technology to science and engineering research, cyberinfrastructure in support of applications, and core technologies incorporated into cyberinfrastructure.

43 Bottom-line Recommendations
- NSF leadership for the Nation of an INITIATIVE to revolutionize science and engineering research, capitalizing on new computing and communications opportunities.
- 21st-century cyberinfrastructure includes supercomputing, massive storage, networking, software, collaboration, visualization, and human resources.
- Current centers (NCSA, SDSC, PSC) and other programs are a key resource for the INITIATIVE.
- Budget estimate: incremental $1,020 M/year (continuing).

44 Basic Middleware Partnership
LCG Basic Middleware Partnership
(Diagram of the partnership structure: System Design Group; Modelling; Project management; Software Process; Standards; Integration, test, certification; Product Delivery; Middleware I and II; System Tools; End User Tools; Design, Development, Production.)

45 Basic Middleware Partnership
LCG Basic Middleware Partnership
- industrial management, development process – software engineering focus
- few development centres – clear division of responsibilities, including key technology owners
- close co-operation between US, European, Asian(?) teams on design and implementation
- strong preference for a single project – one management, one review process, multiple funding sources
- an international partnership of computer science and software engineering, science and industry
(Diagram as on the previous slide: System Design Group; Modelling; Project management; Software Process; Standards; Integration, test, certification; Product Delivery; Middleware I and II; System Tools; End User Tools; Design, Development, Production.)

46 The LHC Grid – the foundation of a Global Science Grid
LCG The LHC Grid – the foundation of a Global Science Grid
LHC has a real need and a reasonable scale, and has mature global collaborations of scientists:
- establish the basic middleware for a global science grid
- deploy a core grid infrastructure in Europe, America, Asia
- learn how to provide a "production quality" service during LHC preparation (data challenges)
- exploit the LHC core infrastructure as the foundation for a general science grid

47 Development of a Global Grid Infrastructure for Science
LCG Development of a Global Grid Infrastructure for Science (layered diagram):
- science: LHC applications, bio applications and other science applications, each with its own science application adaptation and application-specific middleware; computer science providing solutions for the exploitation of grids by scientists
- research: middleware research
- engineering: hardening/reworking of the basic middleware prototyped by current projects; advanced middleware requirements defined by current projects
- infrastructure: a core grid infrastructure – a few core centres (including the LCG Tier 1s) that operate the information service, catalogues, coordination, an operations centre, a call centre, user support and training; convergence with other core infrastructures (DTF, UK e-Science grid, ...); other grid nodes for physics, biology, medicine, ...

48 Longer-term Requirements
LCG Longer-term Requirements
The emphasis so far has been on the basic infrastructure, as a solid base for constructing the LHC analysis environment. Building on this foundation we need a programme of collaborative/complementary research projects on advanced middleware:
- advanced collaborative environments (dynamic workspaces for scientific analysis communities)
- resource optimisation – computation, storage, network; data placement, clustering, migration strategies (a toy data-placement sketch follows below)
- grid-enabled object management
- autonomic management and operation of grid resources
Target 2009: a computing environment for LHC analysis teams, efficiently and effortlessly exploiting global scientific computing resources – the scientist is not aware that she is using a grid.
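As a toy illustration of the data-placement part of resource optimisation (the cost model, numbers and site names are invented; this is not an LCG algorithm), a replica can be placed where the expected transfer savings most exceed the storage cost:

# Toy data-placement heuristic for the "resource optimisation" bullet above:
# place a replica at the site where expected transfer savings most exceed the
# storage cost. All numbers and sites are made up for illustration.
accesses_per_month = {"cern": 20, "regional-A": 120, "regional-B": 15}
dataset_tb = 5.0
transfer_cost_per_tb = 1.0      # arbitrary cost units per TB moved over the WAN
storage_cost_per_tb = 3.0       # arbitrary cost units per TB-month of disk

def placement_benefit(site: str) -> float:
    """Monthly saving from a local replica minus the cost of storing it."""
    saved_transfers = accesses_per_month[site] * dataset_tb * transfer_cost_per_tb
    storage = dataset_tb * storage_cost_per_tb
    return saved_transfers - storage

best = max(accesses_per_month, key=placement_benefit)
for site in accesses_per_month:
    print(f"{site:>10}: net benefit {placement_benefit(site):8.1f}")
print("place replica at:", best)   # -> regional-A, the heaviest consumer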

49 CERN users to be connected by a computer Grid
(World map of CERN users connected by the Grid.) Europe: 4603 users; elsewhere: 208 institutes, 1632 users.

50 Prototyping the Global Information Society
A "bright" world: offer easy, affordable participation in e-science, publications, results and e-education to all interested countries.

