ROOT Application Area Internal Review September 2006

Progress
Acknowledge the work done and the progress achieved on the ROOT/SEAL merger
Good release schedule
Good documentation
–Need to integrate documentation for some new packages (e.g. Minuit2, RooFit)
Very responsive to support requests

Packaging
Compartmentalization of run-time and link-time dependencies is there
–Do not link against the GUI when it is not needed
Split ROOT into packages that can be built and released separately
–Example: CORE, I/O, MATH, GRAPHICS, GEOMETRY/EVENT DISPLAY, PROOF
–Set up a standardized package-by-package build procedure
–Ensures rapid deployment and easier/faster uptake of e.g. maintenance, bug fixes, new features
Will surely improve 'TTU'
–Ability to build minimal applications
–Split 'implementation libraries' from 'dictionaries' where possible

CINT/Reflex migration
Still very much needed; cannot be left half done
Appreciate the honest presentation of the delay and the good explanation of the work involved
Careful (if slow) process
The planning now looks to be on a more solid basis than at the time of the last review
–But very ambitious, with few experts. Good luck!
Ensuring continued binary compatibility should ease the experiments' uptake of Reflex

More Core
pyROOT: several projects use it, but it has a single experiment-funded developer
–Long-term maintenance issue, e.g. for the migration to Reflex
Plugin manager convergence
–Not much has happened, but it is still important/relevant
–Packaging, dependency management, future maintenance
–Ensure experiment buy-in by setting the schedule and design in the AF, as recommended during the last review

The Technical Choices (iii) (from last year's review)
Plugin management: substantially different approaches, Factory (SEAL) vs. Interpreter (ROOT)
–Carefully evaluate the impact on existing experiment schemes, e.g. the Gaudi component model
–Especially where visible to end users

Math
Much progress; impressed with the new fitting package enhancements, SMatrix, MVA, SPlot, ...
The future plans as presented look reasonable

Miscellaneous
Please keep names explicit/evocative
–TRandom2 → TRandomTausworthe
–TRandom3 → TRandomMersenneTwister
Identify and reduce duplication; declare deprecated code as such
–Remaining SEAL code base, legacy ROOT classes
–This of course needs close collaboration with experiments/users: coordinate through the AF
Make full use of the current C++ standard
–E.g. TString vs. std::string

I/O
Caching I/O, tree merging/splitting
–Encourage further optimization work in this direction
Symptoms of communication problems?
–CMS: SMatrix persistence
–ATLAS: virtual inheritance in the EDM
–Ensure communication at all levels
Beware of duplication of functionality between the ROOT and POOL projects
–ROOT-RDBC/TFileSQL vs. CORAL/POOL-Ora?
Requested functionality:
–Thread-safety vs. active use of multi-threading
–Reduction of the use of global state
–Review of object lifetime policies
–ATLAS request for an RTAG on schema evolution
Stability is and must remain paramount

PROOF
PROOF is being used by ALICE, deployed on a reasonably large testbed
Currently the only parallel interactive analysis environment
–It is important that its development continues, but in direct collaboration with the LHC experiment(s)
Convert PROOF into a 'product'
–Predictable release schedule
–Documentation and instructions on how to deploy it
–Detailed description of the architecture and required services, so that the important security aspects can be understood by security experts as well as site administrators
Actively seek explicit endorsement from additional experiments
–The concept is interesting; however, it would be desirable to implement a prototype that meshes well with e.g. Gaudi
–The product sits well at the level of a multicore CPU or a local cluster, but it is not clear how useful/efficient it is for very large data sets; in that respect it is complementary to the Grid
Work on PROOF has other potential spin-offs (asynchronous de/compression, xrootd, ...) which can be leveraged in a Grid environment
–But be careful not to introduce complexity in order to satisfy requirements unique to PROOF

GUI/Graphics
Many improvements, mostly asked for by users
–But which users? The LHC experiments?
Good for ROOT in general, especially as an analysis package/presenter
–But this should not interfere with ROOT's role as the core for I/O and the foundation of the experiment software