December 10, 1999: MONARC Plenary Meeting, Harvey Newman (CIT)

MONARC Plenary, December 9: Agenda
- Introductions (HN, LP) 15'
  - Status of actual CMS ORCA databases and relationship to MONARC work: HN
- Working Group reports (by Chairs or designees) 40'
- Simulation reports: recent progress (AN, LP, IL) 30'
- Discussion 15'
- Regional Centre progress: France, Italy, UK, US, Russia, Hungary; others 45'
- Tier2 Centre concept and GriPhyN: HN 10'
- Discussion of Phase 3 30'
- Steering Group 30'

To Solve: the HENP "Data Problem"
While the proposed future computing and data handling facilities are large by present-day standards, they will not support free access, transport or reconstruction for more than a minute portion of the data.
- Need effective global strategies to handle and prioritise requests (based on both policies and marginal utility; a toy prioritisation sketch follows below)
- Strategies must be studied and prototyped, to ensure viability: acceptable turnaround times; efficient resource utilization
- Problem to be explored in Phase 3: how to use limited resources to
  - Meet the demands of hundreds of users who need "transparent" (or adequate) access to local and remote data, in disk caches and tape stores
  - Prioritise hundreds to thousands of requests from local and remote communities
  - Ensure that the system is dimensioned "optimally"
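
As an illustration of what "prioritise requests based on policies and marginal utility" could mean in practice, here is a minimal sketch in Python. The scoring function, policy weights and request parameters are illustrative assumptions, not part of MONARC.

```python
import heapq

def priority(policy_weight, expected_benefit, cost_gb):
    # Toy "marginal utility per unit cost": a policy weight assigned to the
    # requesting community, times the estimated physics benefit, divided by
    # the resources (here, data volume) the request would tie up.
    return policy_weight * expected_benefit / max(cost_gb, 1.0)

# heapq is a min-heap, so priorities are negated to pop the most
# valuable request first.
pending = []
heapq.heappush(pending, (-priority(2.0, 10.0, 500.0), "remote user, 500 GB AOD scan"))
heapq.heappush(pending, (-priority(1.0, 50.0, 50.0), "local user, 50 GB TAG query"))

while pending:
    p, desc = heapq.heappop(pending)
    print(f"serve next: {desc} (priority {-p:.2f})")
```

A real system would recompute such scores as resources are consumed and policies change; the point is only that policy and marginal utility can be folded into a single ordering of requests.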

Phase 3 Letter of Intent (1/2)
- Short: N pages
  - May refer to MONARC internal notes to document progress
- Suggested format: similar to the PEP extension
  - Introduction: deliverables are realistic technical options and the associated resource requirements for LHC Computing, to be presented to the experiments and CERN in support of Computing Model development for the Computing TDRs
  - Brief status; existing notes
  - Motivations for a common project --> Justification (1)
  - Goals and scope of the extension --> Justification (2)
  - Schedule: preliminary estimate is 12 months from completion of Phase 2, which will occur with the submission of the final Phase 1+2 report. The final report will contain a proposal for the Phase 3 milestones and detailed schedule
    - Phase 3A: decision on which prototypes to build or exploit
      - MONARC/Experiments/Regional Centres working meeting
    - Phase 3B: specification of resources and prototype configurations
      - Setup of simulation and prototype environment
    - Phase 3C: operation of prototypes and of the simulation; analysis of results
    - Phase 3D: feedback; strategy optimization

Phase 3 Letter of Intent (2/2)
- Equipment needs (scale specified further in Phase 3A)
  - MONARC Sun E450 server upgrade
    - TB RAID array, GB memory upgrade
    - To act as a client to the system in CERN/IT, for distributed system studies
  - Access to a substantial system in the CERN/IT infrastructure consisting of a Linux farm and a Sun-based data server over Gigabit Ethernet
  - Access to a multi-terabyte robotic tape store
  - Non-blocking access to WAN links to some of the main potential RCs (e.g. 10 Mbps reserved to Japan; some tens of Mbps to the US)
  - Temporary use of a large volume of tape media
- Relationship to other projects and groups
  - Work in collaboration with the CERN/IT groups involved in databases and large-scale data and processing services
  - Our role is to seek common elements that may be used effectively in the experiments' Computing Models
  - Computational Grid projects in the US; cooperate in upcoming EU Grid proposals
  - US and other nationally funded efforts with R&D components
- Submitted to Hans Hoffmann for information on our intention to continue
  - Copy to Manuel Delfino

Phase 3 LoI Status
- MONARC has met its milestones up until now
  - Progress Report
  - Talks in Marseilles: general + simulation
  - Testbed notes: 99/4, 99/6, Youhei's note --> MONARC number
  - Architecture group notes: 99/1-3
  - Simulation: appendix of the Progress Report
  - Short papers (titles) for CHEP 2000 by January 15

MONARC Phase 3: Justification (1)
General: TIMELINESS and USEFUL IMPACT
- Facilitate the efficient planning and design of mutually compatible site and network architectures and services
  - Among the experiments, the CERN Centre and the Regional Centres
- Provide modelling consultancy and service to the experiments and Centres
- Provide a core of advanced R&D activities, aimed at LHC computing system optimisation and production prototyping
- Take advantage of work on distributed data-intensive computing for HENP this year in other "next generation" projects [*]
  - For example, in the US: the "Particle Physics Data Grid" (PPDG) of DoE/NGI, plus the joint "GriPhyN" proposal on Computational Data Grids by ATLAS/CMS/LIGO/SDSS. Note EU plans as well.

[*] See H. Newman,

MONARC Phase 3 Justification (2A)
More realistic Computing Model development (LHCb and ALICE notes)
- Confrontation of models with realistic prototypes
- At every stage: assess use cases based on actual simulation, reconstruction and physics analyses
  - Participate in the setup of the prototypes
  - We will further validate and develop the MONARC simulation system using the results of these use cases (positive feedback)
    - Continue to review key inputs to the model:
      - CPU times at various phases
      - Data rate to storage
      - Tape storage: speed and I/O
- Employ MONARC simulation and testbeds to study Computing Model variations, and suggest strategy improvements (a toy example of how these inputs combine follows below)
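
To show how the key model inputs listed above (CPU time, data rate to storage, tape speed and I/O) combine into a turnaround estimate, here is a deliberately crude back-of-the-envelope sketch. It is not the MONARC simulation itself; every number and parameter name is an illustrative assumption.

```python
def job_turnaround(n_events, cpu_s_per_event, n_cpus,
                   event_size_mb, tape_read_mb_s, tape_mount_s,
                   output_rate_mb_s):
    """Serial estimate of a reconstruction pass: stage input from tape,
    process on a farm, write output to storage. All inputs are assumptions."""
    stage_in = tape_mount_s + n_events * event_size_mb / tape_read_mb_s
    compute = n_events * cpu_s_per_event / n_cpus
    write_out = n_events * event_size_mb / output_rate_mb_s
    return stage_in + compute + write_out

# Example: 1M events, 10 CPU-s/event on 200 CPUs, 1 MB/event,
# one 10 MB/s tape drive with a 120 s mount, 20 MB/s to output storage.
t = job_turnaround(1_000_000, 10.0, 200, 1.0, 10.0, 120.0, 20.0)
print(f"estimated turnaround: {t / 3600:.1f} hours")
```

Changing any one input (for instance the tape read rate) shifts the estimate directly, which is why these quantities are tracked as explicit model parameters.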

MONARC Phase 3 Justification (2B)
- Technology studies
  - Data model
  - Data structures
    - Reclustering, restructuring; transport operations
    - Replication
    - Caching, migration (HMSM), etc.
  - Network
    - QoS mechanisms: identify which are important
  - Distributed system resource management and query estimators
    - Queue management and load balancing (a toy estimator is sketched below)
- Development of MONARC simulation visualization tools for interactive Computing Model analysis (forward reference)
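
As one concrete reading of "query estimators, queue management and load balancing", here is a minimal sketch: each site advertises its queue backlog, CPU capacity and network rate, and a new job goes to the site with the smallest estimated completion time. The site names, numbers and cost model are hypothetical, not MONARC's.

```python
# Hypothetical site parameters: queued work (CPU-seconds), CPU count,
# and achievable network rate into the site (MB/s).
sites = {
    "CERN":    {"queued_cpu_s": 8.0e5, "cpus": 400, "net_mb_s": 40.0},
    "Tier1-A": {"queued_cpu_s": 1.0e5, "cpus": 100, "net_mb_s": 10.0},
    "Tier1-B": {"queued_cpu_s": 3.0e5, "cpus": 200, "net_mb_s": 5.0},
}

def estimate_completion(site, job_cpu_s, input_mb_remote):
    """Wait behind the current queue + own execution time + time to pull
    any input that is not already resident at the site."""
    wait = site["queued_cpu_s"] / site["cpus"]
    run = job_cpu_s / site["cpus"]
    transfer = input_mb_remote / site["net_mb_s"]
    return wait + run + transfer

def choose_site(job_cpu_s, input_location, input_mb):
    # Pick the site minimizing the estimate; the input transfer cost is
    # zero if the data already sit at that site.
    return min(
        sites,
        key=lambda s: estimate_completion(
            sites[s], job_cpu_s, 0.0 if s == input_location else input_mb
        ),
    )

print(choose_site(job_cpu_s=5.0e4, input_location="Tier1-A", input_mb=2.0e4))
```

Even a simple estimator like this makes the trade-off between queue depth and data locality explicit, which is the kind of strategy question the testbeds and simulation are meant to explore.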

MONARC Phase 3: Justification (3)
Meet near-term milestones for LHC Computing
- For example, CMS data handling milestones: ORCA4, March 2000, ~1 million-event fully-simulated data sample(s)
  - Simulation of data access patterns, and of the mechanisms used to build and/or replicate compact object collections
  - Integration of database and mass storage use (including a caching/migration strategy for limited disk space; see the sketch below)
  - Other milestones will be detailed, and/or brought forward, to meet the actual needs for HLT studies and the TDRs for the Trigger, DAQ, Software and Computing, and Physics
- ATLAS Geant4 studies
- Event production and analysis must be spread amongst regional centres and candidates
  - Learn about RC configurations, operations and network bandwidth by modeling real systems and the analyses actually run on them
  - Feed back information from real operations into the simulations
  - Use progressively more realistic models to develop future strategies
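
As an illustration of a caching/migration strategy for limited disk space, here is a minimal sketch of a least-recently-used disk pool in front of a tape store. It is one possible policy shown for illustration, not the strategy adopted by CMS or MONARC; all names and sizes are assumptions.

```python
from collections import OrderedDict

class DiskCache:
    """Toy disk pool in front of a tape store: least-recently-used object
    collections are migrated back to tape when space runs short (sizes in GB)."""

    def __init__(self, capacity_gb):
        self.capacity = capacity_gb
        self.used = 0.0
        self.resident = OrderedDict()           # collection name -> size

    def access(self, name, size_gb):
        if name in self.resident:
            self.resident.move_to_end(name)     # cache hit: refresh LRU order
            return "hit"
        while self.used + size_gb > self.capacity and self.resident:
            victim, vsize = self.resident.popitem(last=False)
            self.used -= vsize                  # migrate the LRU victim to tape
            print(f"  migrate {victim} ({vsize:.0f} GB) to tape")
        self.resident[name] = size_gb           # stage in from tape
        self.used += size_gb
        return "staged from tape"

pool = DiskCache(capacity_gb=1000)
for coll, size in [("AOD-A", 400), ("AOD-B", 400), ("TAG", 50),
                   ("AOD-C", 400), ("AOD-A", 400)]:
    print(coll, "->", pool.access(coll, size))
```

Access-pattern studies of the kind planned for ORCA4 would then compare such policies (LRU, size-aware, collection-aware) against the recorded usage of compact object collections.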

MONARC: Computing Model Constraints Drive Strategies
- Latencies and queuing delays
  - Resource allocations and/or advance reservations
  - Time to swap in/out disk space
  - Tape handling delays: get a drive, find a volume, mount a volume, locate the file, read or write
  - Interaction with local batch and device queues
  - Serial operations: tape/disk, cross-network, disk-disk and/or disk-tape after network transfer
- Networks
  - Useable fraction of bandwidth (congestion, overheads): 30-60% (?); fraction for event-data transfers: 15-30% (?) (a worked example follows below)
  - Nonlinear throughput degradation on loaded or poorly configured network paths
- Inter-facility policies
  - Resources available to remote users
  - Access to some resources in quasi-real time
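
To make the bandwidth fractions above concrete, a small worked example, computed with assumed values (the 622 Mbps link speed and the 1 TB dataset size are illustrative, not figures from the talk):

```python
# Effective transfer time for event data when only a fraction of the
# nominal WAN bandwidth is actually available to it.
link_mbps = 622.0                       # assumed nominal link speed
dataset_gb = 1000.0                     # assumed: 1 TB of event data to replicate

for label, fraction in [("30% of the link", 0.30), ("15% of the link", 0.15)]:
    rate_mb_s = link_mbps * fraction / 8.0                 # Mbps -> MB/s
    hours = dataset_gb * 1024.0 / rate_mb_s / 3600.0
    print(f"{label}: {rate_mb_s:.1f} MB/s, about {hours:.0f} h for {dataset_gb:.0f} GB")
```

At 30% of the link this gives roughly 12 hours for the terabyte, and about twice that at 15%, which is why the usable fraction, rather than the nominal link speed, drives the replication strategy.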