Slide 1: The LHC Computing Project - Common Solutions for the LHC
ACAT 2002. Presented by Matthias Kasemann, FNAL and CERN

Slide 2: Outline
- The LCG Project: goal and organization
- Common solutions:
  - Why common solutions
  - How to ...
  - The Run 2 common projects
- The LCG Project: status of planning
  - Results of the LCG workshop in March 2002
  - Planning in the Applications Area
For the LCG Grid see Les Robertson (Thursday), "The LHC Computing Grid Project - Creating a Global Virtual Computing Center for Particle Physics".

Slide 3: From Raw Data to Physics: what happens during analysis
[Diagram: the chain from basic physics (e.g. e+ e- -> Z0 -> f fbar) through fragmentation and decay, interaction with detector material and detector response (simulation / Monte Carlo) to raw data; reconstruction converts raw data to physics quantities, applying calibration and alignment, pattern recognition and particle identification; physics analysis then yields the results. Typical per-event data volumes shrink along the chain: 250 kB-1 MB (raw), 100 kB, 25 kB, 5 kB, 500 B.]

Slide 4: HEP analysis chain: common to LHC experiments
[Diagram: generate events; simulate events using a simulation geometry built from the detector description; reconstruct events from raw data to ESD and AOD using a reconstruction geometry plus detector alignment, detector calibration and reconstruction parameters; analyze events to produce physics results. Illustrated with the ATLAS detector.]

Slide 5: Developing Software for LHC experiments
- Challenges in big collaborations:
  - Long and careful planning process
  - More formal procedures required to commit resources
  - Long lifetime: need flexible solutions which allow for change
    - Any state of the experiment lasts longer than a typical Ph.D. or postdoc term
    - Need for professional IT participation and support
  - New development, maintenance and support model required
- Challenges in smaller collaborations:
  - Limited resources
  - Adapt and implement available solutions ("b-b-s")

Slide 6: CMS - CCS schedule (V33): the bottom line
- Milestones about a year away: delays of ~9 months
- Milestones a few years away: delays of ~15 months, running up against the LHC start

Slide 7: CMS - CCS Software Baseline: L2 milestones (CCS baseline software for the TDRs)
- DDD (detector description) ready for OSCAR, ORCA, IGUANA:
  - Data model defined, with persistent and transient representations
  - Demonstrably as correct as the existing CMS description
- Switch from Geant3 to Geant4: date not decided (my estimate); e.g. it needs the new persistency
- Software infrastructure deployed and working
- User analysis components:
  - Framework with coherent user interface
  - Event display / interactive visualisation
  - Tools for browsing / manipulating data sets
  - Data presentation: histograms, numerical, ...
- Framework for processing CMS data:
  - Working for simulation, reconstruction, analysis
  - Supporting persistency and data management
  - Strongly dependent on LCG success

Slide 8: The LHC Computing Grid Project (LCG)
[Organisation chart: the Project Overview Board and the Software and Computing Committee (SC2) oversee the Project Execution Board (PEB); the SC2 charters RTAGs to define requirements, and the PEB defines the work plan carried out in work packages (WPs).]
- Work areas: Applications Support & Coordination, Computing Systems, Grid Technology, Grid Deployment
- Common solutions: experiments and regional centres agree on the requirements for common projects
- LCG was approved in fall 2001, with resources contributed from some member states; the first workshop was held in March 2002

Slide 9: LCG - Fundamental goal
- The experiments have to get the best, most reliable and accurate physics results from the data provided by their detectors
- Their computing projects are fundamental to the achievement of this goal
- The LCG project at CERN was set up to help them all in this task
- Corollary: the success of LCG is fundamental to the success of LHC computing

Slide 10: Fulfilling LCG Project Goals
- Prepare and deploy the LHC computing environment:
  - Applications: provide the common components, tools and infrastructure for the physics application software
  - Computing system: fabric, grid, global analysis system
  - Deployment: foster collaboration and coherence
  - Not just another grid technology project
- Validate the software by participating in data challenges using the progressively more complex grid prototype
  - Phase 1: 50% model production grid in 2004
- Produce a TDR for the full system to be built in Phase 2
  - Software performance impacts the size and cost of the production facility
  - Analysis models impact the exploitation of the production grid
- Maintain opportunities for reuse of deliverables outside the LHC experimental programme

Slide 11: Applications Activity Areas
- Application software infrastructure:
  - Physics software development environment, standard libraries, development tools
- Common frameworks for simulation and analysis:
  - Development and integration of toolkits and components
- Support for physics applications:
  - Development and support of common software tools and frameworks
- Adaptation of physics applications to the Grid environment
- Object persistency and data management tools:
  - Event data, metadata, conditions data, analysis objects, ...

Slide 12: Goals for the Applications Area
- Many software production teams:
  - LHC experiments
  - CERN IT groups, ROOT team, ...
  - HEP software collaborations: CLHEP, Geant4, ...
  - External software: Python, Qt, XML, ...
- Strive to work together to develop and use software in common
- This will involve identifying and packaging existing HEP software for reuse as well as developing new components
- Each unit has its own approach to design and to supporting development
  - Sharing in the development and deployment of software will be greatly facilitated if units follow a common approach
- Recognise that there will be start-up costs associated with adapting to new common products and development tools

Slide 13: Why common and when?
- Why not:
  - Experiments have independent detectors and analysis tools to verify physics results
  - Competition for the best physics results
  - Coordination of common software development is a significant overhead
- Why common solutions:
  - Need mature, engineered software
  - Resources are scarce, in particular manpower
  - Effort: common projects are a good way to become more efficient
  - Lessons need to be learnt from past experience
- For LHC experiments: everything that is not experiment-specific is a potential candidate for a common project

Slide 14: FNAL: CDF/D0/CD - Run 2 Joint Project Organization
[Organisation chart: the R2JOP Steering Committee sits under the Directorate and the D0 and CDF collaborations, working with task coordinators, the Run II Committee, the Run II Computing Project Office and an external review committee. Joint project areas: basic infrastructure (Fermilab class library, configuration management, support databases, simulation), mass storage and data access (storage management, serial media working group, MSS hardware, data access), reconstruction systems (reconstruction farm hardware, networking hardware, production management, reconstruction input pipeline) and physics analysis support (physics analysis hardware, physics analysis software, visualization).]
- 15 joint projects were defined, 4 years before the start of data taking

Slide 15: Perceptions of Common Projects
- Experiments:
  - While they may be very enthusiastic about the long-term advantages...
  - ...they have to deliver on short-term milestones
  - Devoting resources to both will be difficult
  - They already experience an outflux of effort into common projects
  - Hosting projects in experiments is an excellent way of integrating effort
    - For the initial phase and prototyping
- Technology groups:
  - Great motivation to use expertise to produce useful solutions
  - Need the involvement of the experiments

Slide 16: Common solutions - how to do it?
- Requirements are set by the experiments in the SC2, via Requirements Technical Assessment Groups (RTAGs)
- Planning and implementation are done by the LCG together with the experiments
- Progress and adherence are monitored by the SC2
- Frequent releases and testing
- Guaranteed lifetime maintenance and support
Issues:
- "How will the applications area cooperate with the other areas?"
- "It is not feasible to have a single LCG architect covering all areas."
- Mechanisms are needed to bring coherence to the project

Slide 17: Workflow around the organisation chart
[Diagram: the SC2 gives an RTAG its mandate and receives its requirements after roughly 2 months; the PEB turns the prioritised requirements into a project plan and an updated workplan, with workplan feedback from the SC2, then produces releases (roughly every 4 months) and status reports that the SC2 reviews.]
- SC2: sets the requirements, approves the workplan, reviews the status
- PEB: develops the workplan, manages LCG resources, tracks progress

Slide 18: Issues related to partitioning the work
- "How do you go from present to future without dismantling existing projects?"
- "We have to be careful that we don't partition the software into chunks that are too small and lose the coherence of the overall software."
- We are not starting afresh; we have a good knowledge of what the broad categories are going to be
- Experiment architectures help to ensure coherence

Slide 19: Coherent Architecture
- Applications common projects must follow a coherent overall architecture
- The software needs to be broken down into manageable pieces, i.e. down to the component level
- Component-based, but not a bag of disjoint components:
  - Components designed for interoperability through clean interfaces
  - This does not preclude a common implementation foundation, such as ROOT, for different components
    - The 'contract' in the architecture is to respect the interfaces
    - No hidden communication among components
  - The starting point is existing products, not a clean slate
(A minimal interface sketch follows.)
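To make the interface 'contract' concrete, here is a minimal C++ sketch of a component seen only through a clean abstract interface, with a ROOT-based class as one possible implementation foundation. The names (IPersistencySvc, RootPersistencySvc) are invented for illustration and are not actual LCG products.

```cpp
// Hypothetical component contract: clients see only the abstract
// interface; a ROOT-based implementation is one possible foundation.
#include <string>

class IPersistencySvc {                     // the architectural 'contract'
public:
    virtual ~IPersistencySvc() = default;
    virtual bool write(const std::string& key, const void* obj) = 0;
    virtual bool read(const std::string& key, void* obj) = 0;
};

class RootPersistencySvc : public IPersistencySvc {  // one implementation
public:
    bool write(const std::string& key, const void* obj) override {
        // ... delegate to ROOT-IO; invisible to clients of the interface
        return true;
    }
    bool read(const std::string& key, void* obj) override { return true; }
};

// A component receives its collaborators only through interfaces:
// no hidden communication, and implementations can be swapped.
void reconstruct(IPersistencySvc& store) {
    int eventData = 42;                     // placeholder payload
    store.write("/event/0001", &eventData);
}
```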

Slide 20: Approach to making the workplan
- "Develop a global workplan from which the RTAGs can be derived"
- Considerations for the workplan:
  - Experiment need and priority
  - Is it suitable for a common project?
  - Is it a key component of the architecture, e.g. the object dictionary?
  - Timing: when will the conditions be right to initiate a common project?
    - Do established solutions exist in the experiments?
    - Are they open to review, or are they entrenched?
  - Availability of resources and allocation of effort
    - Is there existing effort which would be better spent doing something else?
  - Availability and maturity of associated third-party software
    - E.g. grid software
- Pragmatism and seizing opportunity: a workplan derived from a grand design does not fit the reality of this project

Slide 21: RTAG: 'blueprint' of the LCG application architecture
- Mandate: define the architectural 'blueprint' for LCG applications:
  - Define the main architectural domains ('collaborating frameworks') of the LHC experiments and identify their principal components. (For example: simulation is such an architectural domain; detector description is a component which figures in several domains.)
  - Define the architectural relationships between these 'frameworks' and components, including Grid aspects; identify the main requirements for their intercommunication; and suggest possible first implementations. (The focus here is on the architecture of how major 'domains' fit together, not the detailed architecture within a domain.)
  - Identify the high-level milestones for each domain and provide a first estimate of the effort needed. (Here the architecture within a domain could be considered.)
  - Derive a set of requirements for the LCG
- Timescale: started in June 2002; draft report in July, final report in August 2002

Slide 22: RTAG status
- Eight Requirements Technical Assessment Groups (RTAGs) have been identified and started:
  - In the application software area:
    - Data persistency: finished
    - Software support process and tools: finished
    - Mathematical libraries: finished
    - Detector geometry and materials description: started
    - 'Blueprint' architecture of applications: started
    - Monte Carlo event generators: started
  - In the compute fabric area:
    - Mass storage requirements: finished
  - In the Grid technology and deployment area:
    - Grid technology use cases: finished
    - Regional centre category and services definition: finished

Slide 23: Software Process RTAG
- Mandate:
  - Define a process for managing LCG software. Specific tasks include: establish a structure for organizing software, for managing versions, and for coherent subsets for distribution
  - Identify external software packages to be supported
  - Identify recommended tools for use within the project, including configuration and release management
  - Estimate the resources (person power) needed to run an LCG support activity
- Guidance:
  - Procedures and tools will be specified
  - They will be used within the project
  - They can be packaged and supported for general use
  - They will evolve with time
  - The RTAG does not make any recommendations on how experiment-internal software should be developed and managed. However, if an experiment-specific program becomes an LCG product, it should adhere to the development practices proposed by this RTAG

Slide 24: Process RTAG - Recommendations (1)
- All LCG projects must adopt the same set of tools, standards and procedures. The tools must be centrally installed, maintained and supported.
- Adopt commonly used open-source or commercial software where available. Try to avoid "do it yourself" solutions in areas where we don't have core competency.
- Concerning commercial software: avoid commercial software that has to be installed on individuals' machines, as this will cause the well-known problems of license agreements and management in our widely distributed environment. Commercial solutions for web portals or other centrally managed services would be fine.

Slide 25: Process RTAG - Recommendations (2)
- 'Release early, release often' implies:
  - A major release 2-3 times per year
  - A development release every 2-3 weeks
  - Automated nightly builds, regression tests, benchmarks
- Test and quality assurance
- Support of external software:
  - Installation and build-up of local expertise
- Effort is needed to fill the support roles:
  - Librarian
  - Release manager
  - Toolsmith
  - Quality assurance
  - Technical writer

Slide 26: Persistency RTAG
- Mandate:
  - Write the product specification for the persistency framework for physics applications at the LHC
  - Construct a component breakdown for the management of all types of LHC data
  - Identify the responsibilities of experiment frameworks, existing products (such as ROOT) and yet-to-be-developed products
  - Develop requirements/use cases to specify (at least) the metadata/navigation component(s)
  - Estimate the resources (manpower) needed to prototype missing components
- Guidance:
  - The RTAG may decide to address all types of data, or may postpone some topics to other RTAGs once the components have been identified
  - The RTAG should develop a detailed description at least for event data management
  - Issues of schema evolution, dictionary construction and storage, and object and data models should be addressed

Slide 27: Persistency - Near-term recommendations
- Develop a common object streaming layer and associated persistence infrastructure (see the sketch below):
  - A common object streaming layer based on ROOT-IO, and several related components to support it,
  - including a (currently lightweight) relational database layer
  - Dictionary services are included in the near-term project specification
    - Dictionary services may have additional clients
- This is the first step towards a complete data management environment, one with enormous potential for commonality among the experiments
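As a flavour of what 'an object streaming layer based on ROOT-IO' means, here is a minimal round trip using the plain ROOT API with a class ROOT already has a dictionary for (TVector3); an experiment class becomes streamable the same way once a dictionary has been generated for it (e.g. with rootcint). This only illustrates the underlying mechanism and is not the LCG persistency framework itself.

```cpp
// Minimal ROOT-IO round trip: write an object to a file, read it back.
// A common streaming layer would do this generically for experiment
// classes, using dictionary services to describe their layout.
#include "TFile.h"
#include "TVector3.h"

int main() {
    // Write: ROOT streams the object into a platform-independent format.
    {
        TFile out("demo.root", "RECREATE");
        TVector3 v(1.0, 2.0, 3.0);
        out.WriteObject(&v, "momentum");   // uses TVector3's dictionary
    }
    // Read back: the dictionary lets ROOT reconstruct the object.
    TFile in("demo.root");
    TVector3* back = nullptr;
    in.GetObject("momentum", back);
    return (back && back->Z() == 3.0) ? 0 : 1;
}
```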

Slide 28: RTAG: math library review
- Mandate: review the current situation with math libraries and make recommendations:
  - Review the current usage of the various math libraries in the experiments (including but not limited to NAG C++, GSL, CLHEP, ROOT)
  - Identify and recommend which ones should be adopted and which could be discontinued
  - Suggest possible improvements to the existing ones
  - Estimate the resources needed for this activity
- Guidance: the result of the RTAG should make it possible to establish a clear programme of work to streamline the state of the math libraries and find the maximum commonality between experiments, taking into account cost, maintenance and the projected evolution of experiment needs

Slide 29: Math Library - Recommendations
- Establish a support group:
  - to provide advice and information about the use of existing libraries,
  - to assure their continued availability,
  - to identify where new functionality is needed, and
  - to develop that functionality themselves or by coordinating with other HEP-specific library developers
  - The goal would be close contact with the experiments, providing expertise on mathematical methods and aiming at common solutions
- The experiments should maintain a database of the mathematical libraries used in their software and, within each library, the individual modules used
- A detailed study should determine whether any functionality needed by the experiments and available in the NAG library is not covered equally well by a free library such as GSL (see the example below)
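On the NAG-versus-GSL question, GSL covers much of the same numerical ground. As a small illustration (not taken from the RTAG itself), here is adaptive numerical integration with the standard GSL routine gsl_integration_qags; the integrand is arbitrary.

```cpp
// Integrate exp(-x^2) over [0, 1] with GSL's adaptive QAGS routine.
#include <cmath>
#include <cstdio>
#include <gsl/gsl_integration.h>

static double gaussian(double x, void* /*params*/) {
    return std::exp(-x * x);
}

int main() {
    gsl_integration_workspace* w = gsl_integration_workspace_alloc(1000);
    gsl_function f;
    f.function = &gaussian;
    f.params = nullptr;

    double result = 0.0, abserr = 0.0;
    // absolute tolerance 0, relative tolerance 1e-9, workspace limit 1000
    gsl_integration_qags(&f, 0.0, 1.0, 0.0, 1e-9, 1000, w, &result, &abserr);
    std::printf("integral = %.12f +/- %.1e\n", result, abserr);

    gsl_integration_workspace_free(w);
    return 0;
}
```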

Slide 30: RTAG: Detector Geometry and Materials Description
- Write the product specification for detector geometry and materials description services:
  - Specify the scope, e.g. services to define, provide transient access to, and store the geometry and materials descriptions required by simulation, reconstruction, analysis, online and event display applications, with the various descriptions using the same information source (a geometry sketch follows)
  - Identify requirements, including end-user needs such as ease and naturalness of use of the description tools, readability, and robustness against errors, e.g. provision for named constants and derived quantities
  - Explore commonality of persistence requirements with conditions data management
    - Interaction of the detector description with a conditions DB; in that context, versioning and 'configuration management' of the detector description, coherence issues, ...
  - Identify where experiments have differing requirements and examine how to address them within common tools
  - Address migration from current tools
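To make 'a single information source feeding several applications' concrete, here is a minimal Geant4-style geometry fragment; in a common detector description service this volume hierarchy would be generated from the shared description source rather than hand-coded. The dimensions and material values below are invented for illustration.

```cpp
// Minimal Geant4-style geometry sketch: a world volume containing a
// box-shaped 'tracker'. In a common detector description service these
// numbers would come from the shared description source, not from code.
#include "G4Box.hh"
#include "G4LogicalVolume.hh"
#include "G4Material.hh"
#include "G4PVPlacement.hh"
#include "G4SystemOfUnits.hh"

G4VPhysicalVolume* buildGeometry() {
    // Material (normally taken from the common materials description)
    auto* argon = new G4Material("ArgonGas", 18., 39.95 * g / mole,
                                 1.782e-3 * g / cm3);

    // World volume: half-lengths of 5 m in each direction
    auto* worldBox = new G4Box("World", 5. * m, 5. * m, 5. * m);
    auto* worldLog = new G4LogicalVolume(worldBox, argon, "World");
    auto* worldPhys = new G4PVPlacement(nullptr, G4ThreeVector(),
                                        worldLog, "World",
                                        nullptr, false, 0);

    // A 'tracker' box placed at the centre of the world
    auto* trackerBox = new G4Box("Tracker", 1. * m, 1. * m, 2. * m);
    auto* trackerLog = new G4LogicalVolume(trackerBox, argon, "Tracker");
    new G4PVPlacement(nullptr, G4ThreeVector(), trackerLog, "Tracker",
                      worldLog, false, 0);
    return worldPhys;
}
```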

Slide 31: RTAG: Monte Carlo Event Generators
- Mandate: to best explore the common solutions needed and how to engage the HEP community external to the LCG, it is proposed to study:
  - How to maintain a common code repository for generator code and related tools such as PDFLIB
  - The development or adaptation of generator-related tools (e.g. HepMC) for LHC needs
  - How to provide support for the tuning, evaluation and maintenance of the generators
  - The integration of the Monte Carlo generators into the experimental software frameworks
  - The structure of possible forums to facilitate interaction with the distributed external groups who provide the Monte Carlo generators

Slide 32: Possible Organisation of Activities
[Diagram: an activity area, led by an architect and project leaders, contains several projects, each broken down into work packages; overall management, coordination, architecture, integration and support sit above the activity areas.]
- Example activity area: physics data management
  - Possible projects: hybrid event store, conditions DB, ...
- Work packages: the component breakdown and work plan lead to work package definitions, with roughly 1-3 FTEs per work package

Slide 33: Global Workplan - 1st priority level
1. Establish process and infrastructure
   - Nicely covered by the software process RTAG
2. Address core areas essential to building a coherent architecture
   - Object dictionary: an essential piece
   - Persistency: strategic
   - Interactive frameworks: also driven by assigning personnel optimally
3. Address priority common-project opportunities
   - Driven by a combination of experiment need, appropriateness to a common project, and 'the right moment' (existing but not entrenched solutions in some experiments):
     - Detector description and geometry model
   - Driven by need and available manpower:
     - Simulation tools

Slide 34: Global Workplan - 2nd priority level
- Build outward from the core top-priority components:
  - Conditions database
  - Statistical analysis
  - Framework services, class libraries
- Address common-project areas of less immediate priority:
  - Math libraries
  - Physics packages (scope?)
- Extend and elaborate the support infrastructure:
  - Software testing and distribution

Slide 35: Global Workplan - 3rd priority level
- By this point the core components have been addressed, the architecture and component breakdown laid out, and work begun; grid products have had another year to develop and mature. Now explicitly address the integration of physics applications into the grid applications layer:
  - Distributed production systems: an end-to-end grid application/framework for production
  - Distributed analysis interfaces: a grid-aware analysis environment and grid-enabled tools
- Some common software components are now available; build on them:
  - Lightweight persistency, based on the persistency framework
  - Release of the LCG benchmarking suite

Slide 36: Global Workplan - 4th priority level
- Longer-term items waiting for their moment:
  - 'Hard' ones, perhaps made easier by a growing common software architecture:
    - Event processing framework
  - The evolution of how we write software:
    - OO language usage
  - Longer-term needs and capabilities emerging from R&D (more speculative):
    - Advanced grid tools, online notebooks, ...

Slide 37: Candidate RTAGs (1)
- Simulation tools: non-physics activity
- Detector description, model: description tools, geometry model
- Conditions database: if necessary after the existing RTAG
- Data dictionary: key need for a common service
- Interactive frameworks: what do we want, have, need
- Statistical analysis: tools, interfaces, integration
- Visualization: tools, interfaces, integration
- Physics packages: important area but scope unclear
- Framework services: if a common framework is too optimistic...
- C++ class libraries: standard foundation libraries

Slide 38: Candidate RTAGs (2)
- Event processing framework: hard, long term
- Distributed analysis: application layer over the grid
- Distributed production: application layer over the grid
- Small-scale persistency: simple persistency tools
- Software testing: may be covered by the process RTAG
- Software distribution: from a central 'Program Library' to convenient broad distribution
- OO language usage: C++, Java (..?) roles in the future
- Benchmarking suite: comprehensive suite for LCG software
- Online notebooks: long term; low priority

Slide 39: Common Solutions: Conclusions
- Common solutions for LHC software are required for success:
  - Common solutions are agreed upon by the experiments
  - The requirements are set by the experiments
  - The development is done jointly by the LCG project and the LHC experiments
  - All LCG software is centrally supported and maintained
- What makes us believe we will succeed? What is key to success?
  - The process in the LCG organization
  - The collaboration between the players
  - Common technology
  - Central resources, jointly steerable by the experiments and management
  - The participants have prototyping experience!

Slide 40: Backup & Additional Slides

Slide 41: Post-RTAG Participation of Architects - Draft Proposal (1)
- Monthly open meeting (an expanded weekly meeting):
  - Accumulated issues to be taken up with the architects
  - Architects in attendance; coordinators invited
- Information goes out beforehand, so the architects are 'primed'
- The meeting is informational and decision-making (for the easier decisions):
  - An issue is either:
    - Resolved (the easy ones), or
    - Flagged for addressing in the 'architects committee'

Slide 42: Post-RTAG Participation of Architects - Draft Proposal (2)
- Architects committee:
  - Members: the experiment architects plus the applications manager (chair)
  - Invited: computing coordinators, LCG project manager and CTO
  - Others invited at the discretion of the members, e.g. the leader of the project at issue
- Meets shortly after the open meeting (also bi-weekly?)
- Decides the difficult issues:
  - Most of the time the committee will converge on a decision
  - If not, try harder
  - If still not, the applications manager takes the decision
    - Such decisions can be accepted or challenged
- Challenged decisions go to the full PEB, then if necessary to the SC2:
  - The PEB's role is to raise issues to be taken up by the SC2
  - We all abide happily by an SC2 decision
- Committee meetings also cover general current issues and exchanges of views
- Committee decisions and actions are documented in public minutes

Slide 43: Distributed Character of Components (1)
- Persistency framework:
  - Naming based on logical filenames
  - Replica catalog and management
  - Cost estimators; policy modules
- Conditions database:
  - Inherently distributed (but configurable for local use)
- Interactive frameworks:
  - Grid-aware environment; 'transparent' access to grid-enabled tools and services
- Statistical analysis, visualization:
  - Integral parts of the distributed analysis environment
- Framework services:
  - Grid-aware message and error reporting, error handling, grid-related framework services

Slide 44: Distributed Character of Components (2)
- Event processing framework:
  - Cf. framework services, persistency framework, interactive frameworks
- Distributed analysis
- Distributed production
- Software distribution:
  - Should use the grid
- OO language usage:
  - Distributed computing considerations
- Online notebook:
  - A grid-aware tool

Slide 45: RTAG?: Simulation tools
- Geant4 is establishing a HEP physics requirements body within the collaboration, accepted by the SC2 as a mechanism for addressing Geant4 physics performance issues
- However, there are important simulation needs to which LCG resources could be applied in the near term
- By the design of the LCG, this requires the SC2 to deliver requirements to the PEB
- John Apostolakis has recently assembled the Geant4 requests and requirements from the LHC collaborations
- Proposal: use these requirements as the groundwork for a quick one-month RTAG to guide near-term simulation activity in the project, leaving the physics performance requirements to the separate process within Geant4

Slide 46: RTAG?: Simulation tools (2)
- Some possible activity areas in simulation, from the Geant4 requests/requirements received from the experiments, which would be input to the RTAG:
  - Error propagation tool for reconstruction ('GEANE')
  - Assembly and documentation of standard physics lists
  - Python interface
  - Documentation, tutorials, communication
  - Geant4 CVS server access issues
- The RTAG could also address FLUKA support:
  - Requested by ALICE as an immediate priority
  - Strong interest expressed by other experiments as well

Slide 47: RTAG?: Detector geometry & materials description and modeling services
- Write the product specification for detector geometry and materials description and modeling services:
  - Specify the scope, e.g. services to define, provide transient access to, and store the geometry and materials descriptions required by simulation, reconstruction, analysis, online and event display applications, with the various descriptions using the same information source
  - Identify requirements, including end-user needs such as ease and naturalness of use of the description tools, readability, and robustness against errors, e.g. provision for named constants and derived quantities
  - Explore commonality of persistence requirements with conditions data management
  - Identify where experiments have differing requirements and examine how to address them within common tools
  - Address migration from current tools

Slide 48: RTAG?: Conditions database
- Will depend on the outcome of the persistency RTAG
- Refine the requirements and product specification of a conditions database serving the needs of the LHC experiments, using the existing requirements and products as a reference point; give due consideration to effective distributed/remote usage
- Identify the extent to which the persistency framework (hybrid store) can be directly used at the lower levels of a conditions database implementation
- Identify the component(s) and interfaces atop a common persistency foundation that complete the conditions database (one possible interface is sketched below)
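A hypothetical sketch of what 'components and interfaces atop a common persistency foundation' could look like for a conditions database: payloads keyed by folder, interval of validity and version tag. All names here are invented for illustration and are not an actual LCG API.

```cpp
// Hypothetical conditions database interface: look up calibration or
// alignment data valid at a given time, with versioning by tag.
#include <cstdint>
#include <optional>
#include <string>
#include <vector>

using TimeStamp = std::uint64_t;           // e.g. event time or run number

struct IntervalOfValidity {
    TimeStamp since;
    TimeStamp until;                       // payload valid for [since, until)
};

struct ConditionsPayload {
    IntervalOfValidity iov;
    std::vector<double> values;            // e.g. calibration constants
};

class IConditionsDB {
public:
    virtual ~IConditionsDB() = default;
    // Fetch the payload of 'folder' (e.g. "/ecal/calibration") valid at
    // 'when', within a named version ('tag'); empty if nothing is valid.
    virtual std::optional<ConditionsPayload>
    get(const std::string& folder, TimeStamp when,
        const std::string& tag = "production") const = 0;
    // Store a new payload; overlapping intervals create a new version.
    virtual void put(const std::string& folder,
                     const ConditionsPayload& data,
                     const std::string& tag = "production") = 0;
};
```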

Slide 49: RTAG?: Data dictionary service
- Can the experiments converge on common data definition and dictionary tools in the near term?
- Even if the answer is no, it should be possible to establish a standard dictionary service (a generic API) through which common tools can interact, while leaving the experiments free to decide how their class models are defined and implemented (a sketch follows)
- Develop a product specification for a generic high-level data dictionary service able to accommodate distinct data definition and dictionary tools and present a common, generic interface to the dictionary
- Review the current data definition and dictionary approaches and seek to expand commonality among the experiments; write the product specifications for common components (even if N < 4)
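A hypothetical sketch of such a generic dictionary (reflection) API: common tools see classes and fields through this interface regardless of which data definition tool produced the dictionary. The names are invented for illustration.

```cpp
// Hypothetical generic dictionary service: class metadata behind a
// common interface, independent of the tool that generated it.
#include <cstddef>
#include <string>
#include <vector>

struct FieldInfo {
    std::string name;                      // e.g. "fEnergy"
    std::string typeName;                  // e.g. "double"
    std::size_t offset;                    // byte offset within the object
};

class IClassInfo {
public:
    virtual ~IClassInfo() = default;
    virtual std::string name() const = 0;               // e.g. "CaloCluster"
    virtual std::size_t size() const = 0;               // sizeof the class
    virtual std::vector<FieldInfo> fields() const = 0;  // data members
    virtual void* construct() const = 0;                // default-construct
    virtual void destroy(void* obj) const = 0;
};

class IDictionary {
public:
    virtual ~IDictionary() = default;
    // Look up class metadata by name; nullptr if the class is unknown.
    virtual const IClassInfo* find(const std::string& className) const = 0;
};

// A persistency service built on IDictionary can stream any registered
// class generically, without compile-time knowledge of its layout.
```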

Slide 50: RTAG?: Interactive frameworks
- Frameworks providing interactivity for various environments, including physics analysis and event processing control (simulation and reconstruction), are critical. They serve end users directly and must match end-user requirements extremely well. They can be a powerful and flexible 'glue' in a modular environment, providing interconnectivity between widely distinct components and making the 'whole' offered by such an environment much greater than the sum of its parts.
- Develop the requirements for an interactive framework common across the various application environments
- Relate the requirements to existing tools and approaches (e.g. ROOT/CINT, Python-based tools)
- Write a product specification, with specific recommendations on the tools and technologies to employ
- Address both command-line and GUI interactivity

Slide 51: RTAG?: Statistical analysis interfaces & tools
- Address the requirements on analysis tools:
  - What data analysis services and tools are required
  - What is and is not provided by existing tools
- Address which existing tools should be supported and what further development is needed:
  - Including long-term maintenance issues
- Address the role of abstract interfaces to statistical analysis services (sketched below):
  - Are they to be used?
  - If so, what tools should be interfaced to a common abstract interface to meet LHC needs (and how, when, etc.)
- Address requirements and approaches to persistency and data interchange
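A hypothetical sketch of the abstract-interface approach the slide raises (in the spirit of AIDA-style abstract interfaces): user code fills a one-dimensional histogram without depending on which tool implements it. The names are invented for illustration, not an actual LCG API.

```cpp
// Hypothetical abstract statistical-analysis interface: analysis code
// is written against IHistogram1D, not against a concrete tool.
#include <memory>
#include <string>

class IHistogram1D {
public:
    virtual ~IHistogram1D() = default;
    virtual void fill(double x, double weight = 1.0) = 0;
    virtual double mean() const = 0;
    virtual double rms() const = 0;
    virtual int entries() const = 0;
};

class IHistogramFactory {
public:
    virtual ~IHistogramFactory() = default;
    virtual std::unique_ptr<IHistogram1D>
    create1D(const std::string& title, int bins, double low, double high) = 0;
};

// Analysis code written against the interface works unchanged whether
// the factory is backed by ROOT or by any other histogramming tool.
void analyze(IHistogramFactory& hf) {
    auto h = hf.create1D("invariant mass", 100, 0.0, 10.0);
    h->fill(3.1);
    h->fill(3.7, 0.5);                     // weighted entry
}
```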

Slide 52: RTAG?: Detector and event visualization
- Examine the range of tools available and identify those which should be developed as common components within the LCG applications architecture
- Address requirements, recommendations and needed/desired implementations in areas such as:
  - Existing and planned standard interfaces and their applicability
  - GUI integration
  - Interactivity requirements (picking)
  - The interface for visualizing objects (e.g. a Draw() method)
  - Use of standard 3D graphics libraries
- Very dependent on other RTAG outcomes

Slide 53: RTAG?: Physics packages
- Needs and requirements in event generators and their interfaces and persistency, particle property services, ...
- The scope of the LCG in this area needs to be made clearer before a well-defined candidate RTAG can be developed

Slide 54: RTAG?: Framework services
- While converging on a common event processing framework among the LHC experiments may be impractical, at least in the near term, this does not preclude adopting common approaches and tools for framework services:
  - Examples: message handling and error reporting; execution monitoring and state management; exception handling and recovery; job state persistence and recording of history information; dynamic component loading; interface definition, versioning, etc.
- Seek to identify framework services and tools which can be developed in common, possibly starting from existing products (one such service is sketched below)
- Develop requirements on their functionality and interfaces
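A hypothetical sketch of one framework service the slide names, a common message and error reporting interface: framework components report through it, and each experiment plugs in its own backend. The names are invented for illustration, not an actual LCG product.

```cpp
// Hypothetical common message/error reporting service for a framework.
#include <iostream>
#include <string>

enum class Severity { Debug, Info, Warning, Error, Fatal };

class IMessageSvc {
public:
    virtual ~IMessageSvc() = default;
    virtual void report(Severity level, const std::string& source,
                        const std::string& message) = 0;
};

// One possible backend: plain stream output. A grid-aware backend could
// instead forward messages to a central monitoring service.
class StreamMessageSvc : public IMessageSvc {
public:
    void report(Severity level, const std::string& source,
                const std::string& message) override {
        static const char* names[] = {"DEBUG", "INFO", "WARNING",
                                      "ERROR", "FATAL"};
        std::clog << source << " [" << names[static_cast<int>(level)]
                  << "] " << message << '\n';
    }
};

// Usage inside a framework component:
//   msgSvc.report(Severity::Warning, "TrackFitter", "fit did not converge");
```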

Slide 55: RTAG?: C++ class libraries
- Address needs and requirements in standard C++ class libraries, with recommendations on specific tools
- Provide recommendations on the application and evolution of community libraries such as ROOT, CLHEP, HepUtilities, ...
- Survey third-party libraries and recommend which should be adopted and what should be used from them
- Merge with the framework services candidate RTAG?

Slide 56: RTAG?: Event processing framework
- There is no consensus to pursue a common event processing framework in the near term. There is perhaps more agreement that this should be pursued in the long term (but no consensus on a likely candidate for a common framework in the long term)
- This looks at best to be a long-term RTAG
- Two experiments already use a common event processing framework kernel (Gaudi)
- There are many difficult issues in growing N past 2, whether with Gaudi, AliRoot, COBRA or something else!

Slide 57: RTAG?: Interfaces to distributed analysis
- Develop requirements on end-user interfaces to distributed analysis, layered over grid middleware services, and write a product specification:
  - Grid portals, but not only; e.g. PROOF and JAS fall into this category
  - A grid portal for analysis is presumably an evolution of tools like these
- Focus on the analysis interface; address the distinct requirements of production separately:
  - The production interface should probably be addressed first, as it is simpler and will probably have components usable as parts of the analysis interface

Slide 58: RTAG?: Distributed production systems
- Distributed production systems will have much common ground at the grid middleware level. How much can be done in common at the higher level of end-to-end distributed production applications layered over the grid middleware?
  - Recognizing that the grid projects are active at this level too, and coordination is needed
- Survey existing and planned production components and end-to-end systems at the application level (AliEn, MOP, etc.) and identify tools and approaches to develop in common
- Write product specifications for common components, and/or explicitly identify specific tools to be adapted and developed as common components
- Include the end-user (production operations) interface:
  - A grid portal for production

Slide 59: RTAG?: Small-scale persistency & databases
- If not covered by the existing persistency RTAG, and if there is agreement that this is needed...
- Write the product specification for a simple, self-contained, low-overhead object persistency service for small-scale persistency in C++ applications (sketched below):
  - Marshal objects to a byte stream which may be stored in a file, in an RDBMS record, etc.
  - In implementation, very likely a simplified derivative of the object streamer of the hybrid store
  - For small-scale persistence applications, e.g. saving state or configuration information
- Examine the utility of and requirements on a simple, standard, easily installed and managed database service complementing the persistency service for small-scale applications:
  - MySQL, PostgreSQL etc. are casually adopted for simple applications with increasing frequency. Is it possible and worthwhile to converge on a common database service?
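A minimal sketch of the marshaling idea, hand-rolled in standard C++ for illustration only; a real service would very likely derive from the hybrid store's object streamer and would also handle endianness, schema versioning and nested objects. The JobConfig struct is invented.

```cpp
// Marshal an object to a byte stream that can go to a file or an RDBMS
// blob, and back. Illustration only: no endianness or version handling.
#include <cstring>
#include <string>
#include <vector>

struct JobConfig {                         // example 'saved state'
    int maxEvents;
    double bFieldTesla;
    std::string geometryTag;
};

std::vector<char> marshal(const JobConfig& c) {
    std::vector<char> buf(sizeof c.maxEvents + sizeof c.bFieldTesla +
                          c.geometryTag.size());
    char* p = buf.data();
    std::memcpy(p, &c.maxEvents, sizeof c.maxEvents);
    p += sizeof c.maxEvents;
    std::memcpy(p, &c.bFieldTesla, sizeof c.bFieldTesla);
    p += sizeof c.bFieldTesla;
    std::memcpy(p, c.geometryTag.data(), c.geometryTag.size());
    return buf;                            // store in a file or DB record
}

JobConfig unmarshal(const std::vector<char>& buf) {
    JobConfig c;
    const char* p = buf.data();
    std::memcpy(&c.maxEvents, p, sizeof c.maxEvents);
    p += sizeof c.maxEvents;
    std::memcpy(&c.bFieldTesla, p, sizeof c.bFieldTesla);
    p += sizeof c.bFieldTesla;
    c.geometryTag.assign(p, buf.size() - (p - buf.data()));
    return c;
}
```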

Slide 60: RTAG?: Software testing tools & services
- How much commonality can be achieved in the infrastructure and tools used?
  - Memory checking, unit tests, regression tests, validation tests, performance tests
- A large part of this has been covered by the process RTAG

Slide 61: RTAG?: Software distribution
- May or may not be adequately addressed in the process RTAG
- Requirements for a central distribution point at CERN:
  - A 'CERN LHC Program Library Office'
- Requirements on software distribution taking all tiers into account
- Survey and recommend on the various approaches, their utility and complementarity:
  - Tarballs (DAR)
  - RPMs and other standard open-software tools
  - The role of AFS, asis
  - Higher-level automated distribution tools (pacman)

Slide 62: RTAG?: Evolution of OO language usage
- Long-term evolution of C++
- A role for other language(s), e.g. Java?
  - Near-, medium- and (to the extent possible) long-term application of other languages among the LHC experiments
  - Implications for tools and support requirements
- Identify any requirements arising:
  - Applications and services to be developed in common
  - Third-party tools to be integrated and supported
  - Compilers and other infrastructure to be supported
  - Libraries required

Slide 63: RTAG?: LCG benchmarking suite
- Below the threshold for an RTAG?
  - Every LCG application should come with a benchmarking suite and should be made available and readily usable as part of a comprehensive benchmarking suite
- Develop requirements for a comprehensive benchmarking suite of LCG applications for use in performance evaluation, testing, platform validation, performance measurement, etc.:
  - Tools which should be represented
  - Tests which should be included
  - Packaging and distribution requirements

Slide 64: RTAG?: Online notebooks and other remote control / collaborative tools
- Identify near-term and long-term needs and requirements common across the experiments
- Survey existing and planned tools and approaches
- Develop recommendations for common development/adaptation and support of tools for the LHC

