HEP Experiment Integration within GriPhyN/PPDG/iVDGL
Rick Cavanaugh, University of Florida
DataTAG/WP4 Meeting, 23 May 2002


US-ATLAS Test Grid
Sites: Lawrence Berkeley National Laboratory, Brookhaven National Laboratory, Indiana University, Boston University, Argonne National Laboratory, U Michigan, University of Texas at Arlington, Oklahoma University
- Grid credentials (based on the Globus CA) distributed; in the process of updating to ESnet CA credentials
- Grid software: Globus 1.1.4/2.0, Condor 6.3 (moving towards full VDT 1.x)
- ATLAS core software distribution at 2 sites for developers (RH 6.2)
- ATLAS-related grid software: Pacman, Magda, Gridview, Grappa
- Testbed has been functional for ~1 year
- Accounts (individual user, group) created at all sites
- GRAT: Grid Application Toolkit for ATLAS grid applications (RH 7.2)
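The testbed items above (Globus CA credentials, Globus and Condor via the VDT) describe a standard Condor-G job-submission setup. As a rough illustration only, here is a minimal Python sketch of submitting a test job to a testbed gatekeeper with Condor-G; the gatekeeper host and jobmanager name are hypothetical placeholders, and the Globus-universe submit syntax shown is the generic Globus 2.x-era Condor-G style, not anything taken from the talk.

```python
# Sketch: submitting a test job to a testbed gatekeeper via Condor-G.
# The gatekeeper host and jobmanager name are hypothetical placeholders.
import subprocess
import tempfile

SUBMIT_TEMPLATE = """\
universe        = globus
globusscheduler = {gatekeeper}/jobmanager-{lrm}
executable      = {executable}
arguments       = {arguments}
output          = job.out
error           = job.err
log             = job.log
queue
"""

def submit_grid_job(gatekeeper, lrm, executable, arguments=""):
    """Write a Condor-G submit description and hand it to condor_submit."""
    submit_text = SUBMIT_TEMPLATE.format(
        gatekeeper=gatekeeper, lrm=lrm,
        executable=executable, arguments=arguments)
    with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
        f.write(submit_text)
        submit_file = f.name
    # Requires a valid grid proxy (e.g. from grid-proxy-init) and a running
    # Condor-G schedd on the submitting host.
    subprocess.run(["condor_submit", submit_file], check=True)

if __name__ == "__main__":
    # Hypothetical gatekeeper at one of the testbed sites.
    submit_grid_job("atlas-gk.example.edu", "pbs", "/bin/hostname")
```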

Near Term US-ATLAS Plan
- Develop a Condor+GDMP+Magda package (Magda: distributed data manager prototype); data production is waiting for cataloguing hooks using Magda
- Develop data analysis tools (to simplify the user experience); enhance the GRAPPA web portal
- Use the Virtual Data Toolkit (VDT) and test the GriPhyN Virtual Data Catalog (VDC)
- Participate in Data Challenge 1
- Automate the grid package production mechanism
- Deploy a hierarchical GIIS server
- Develop an MDS information provider for Pacman-deployed software
- Interoperate with the US-CMS Test Grid and EDG: run ATLAS apps on the US-CMS Test Grid (done!); run ATLAS apps from a US-ATLAS site on the EDG Testbed (done!)
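One of the plan items above is an MDS information provider for Pacman-deployed software; a GRIS information provider is essentially a program that prints LDIF to its standard output for the local MDS server to publish. The Python sketch below is a hedged illustration of that idea only: the install directory, object class and attribute names are hypothetical placeholders, not the schema the US-ATLAS testbed actually adopted.

```python
# Sketch of an MDS (GRIS) information provider that advertises
# Pacman-installed software by printing LDIF entries to stdout.
# The install directory, object class and attribute names are hypothetical.
import os
import sys

PACMAN_INSTALL_DIR = "/opt/pacman/installed"   # hypothetical location
MDS_SUFFIX = "Mds-Vo-name=local, o=grid"

def pacman_packages(install_dir):
    """Treat each subdirectory of the install area as one deployed package."""
    if not os.path.isdir(install_dir):
        return []
    return sorted(d for d in os.listdir(install_dir)
                  if os.path.isdir(os.path.join(install_dir, d)))

def emit_ldif(packages, out=sys.stdout):
    for name in packages:
        out.write(f"dn: softwarePackage={name}, {MDS_SUFFIX}\n")
        out.write("objectclass: PacmanSoftwarePackage\n")   # hypothetical class
        out.write(f"softwarePackage: {name}\n")
        out.write("\n")   # blank line separates LDIF entries

if __name__ == "__main__":
    emit_ldif(pacman_packages(PACMAN_INSTALL_DIR))
```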

US-CMS Test Grid "MOP"
[Diagram: MOP architecture. A master site running IMPALA/BOSS, mop_submitter, DAGMan and Condor-G dispatches jobs to batch queues at remote sites 1..N, with GDMP at each site for data replication.]
- Grid credentials (based on the Globus CA) distributed; in the process of updating to ESnet CA credentials
- Grid software: VDT 1.0 (Globus 2.0 beta, Condor-G, Condor, ClassAds 0.9, GDMP 3.0); Objectivity 6.1
- MOP: distributed CMS Monte carlO Production
- Testbed has been functional for ~1/2 year
- Decentralised account management
- DAR: Distribution After Release for CMS applications (RH 6.2)
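MOP drives production through mop_submitter, DAGMan and Condor-G, so a production assignment can be expressed as a DAGMan DAG. The Python sketch below shows how such a DAG file might be generated: one simulation node per remote site followed by a GDMP publish step. Only the JOB and PARENT/CHILD keywords are standard DAGMan syntax; the submit-file names and site list are hypothetical and do not come from the MOP code itself.

```python
# Sketch: laying out a production run as a Condor DAGMan DAG, with one
# simulation node per remote site followed by a GDMP publish step.
# The submit-file names and site names are hypothetical placeholders.

def write_production_dag(sites, dag_path="mop_production.dag"):
    lines = []
    for site in sites:
        # Each node would point at a Condor-G submit file targeting that site.
        lines.append(f"JOB simulate_{site} simulate_{site}.sub")
    lines.append("JOB publish gdmp_publish.sub")
    for site in sites:
        # The publish step runs only after every simulation node has finished.
        lines.append(f"PARENT simulate_{site} CHILD publish")
    with open(dag_path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return dag_path

if __name__ == "__main__":
    # The resulting DAG would then be run with: condor_submit_dag mop_production.dag
    print(write_production_dag(["site1", "site2", "siteN"]))
```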

Near Term US-CMS Plans
- Prototype Virtual Data Grid System (VDGS): based upon VDT (and the GriPhyN Virtual Data Catalog); first prototype by August; production prototype for November
- Grid-enabled Monte Carlo production: build upon the CMS and MOP experience (already quite mature); run live CMS production this Summer; integrate with the VDGS for November
- Grid-enabled analysis environment: based upon web services (XML, RPC, SOAP, etc.); integrate with VDT and the VDGS for November
- Interoperate with the US-ATLAS Test Grid and EDG: run CMS apps on the US-ATLAS Test Grid; run CMS apps from a US-CMS site on the EDG Testbed
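The grid-enabled analysis environment above is planned around web services (XML, RPC, SOAP). As a minimal sketch of that style of interface, assuming nothing about the actual CMS design, the following Python XML-RPC service exposes a toy dataset lookup using only the standard library; the service name, port and catalogue contents are invented for illustration.

```python
# Minimal sketch of a web-services-style analysis interface: an XML-RPC
# service that an analysis client could query for the files in a dataset.
# The port number and catalogue contents are hypothetical.
from xmlrpc.server import SimpleXMLRPCServer

FAKE_CATALOGUE = {
    "higgs_4mu_2002": ["lfn:higgs_4mu_0001.root", "lfn:higgs_4mu_0002.root"],
}

def list_files(dataset_name):
    """Return the logical file names registered for a dataset."""
    return FAKE_CATALOGUE.get(dataset_name, [])

if __name__ == "__main__":
    server = SimpleXMLRPCServer(("localhost", 8642), allow_none=True)
    server.register_function(list_files, "list_files")
    # A client would call:
    #   xmlrpc.client.ServerProxy("http://localhost:8642").list_files("higgs_4mu_2002")
    server.serve_forever()
```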

D0 SAM Deployment Map
[Map: D0 SAM deployment, distinguishing processing centers from analysis sites]
- Cluster data according to access patterns
- Cache data which is frequently accessed
- Organize requests to minimize tape mounts
- Estimate resources for file requests before they are submitted
- Make decisions concerning data delivery priority
- All sites are functional D0 centers that routinely send/receive data to/from FNAL; eventually anticipate one or more stations at each collaborating institution
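Several of the SAM bullets above are about request optimisation, in particular organising requests to minimise tape mounts. The sketch below illustrates that idea in Python by grouping a batch of file requests by tape volume so each volume is mounted once; the file-to-volume mapping is a stand-in for what SAM's catalogue would really supply, not SAM code.

```python
# Sketch of the request-optimisation idea: group a batch of file requests by
# the tape volume they live on, so each volume only needs to be mounted once.
# The file-to-volume catalogue here is a hypothetical stand-in.
from collections import defaultdict

def order_requests_by_volume(requests, file_to_volume):
    """Return requests grouped so all files on one tape come out together."""
    by_volume = defaultdict(list)
    for filename in requests:
        by_volume[file_to_volume[filename]].append(filename)
    ordered = []
    for volume in sorted(by_volume):        # one mount per volume
        ordered.extend(sorted(by_volume[volume]))
    return ordered

if __name__ == "__main__":
    catalogue = {"f1": "VOL01", "f2": "VOL02", "f3": "VOL01"}
    print(order_requests_by_volume(["f1", "f2", "f3"], catalogue))
    # -> ['f1', 'f3', 'f2']  (both VOL01 files served on a single mount)
```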

Commissioning of SAM for CDF
Goals:
- Support 5 groups that do data analysis
- Enable access to datasets of interest
- Production availability of the systems
- Limit impact on CDF enstore
[Diagram: CDF SAM deployment hardware at Fermilab, showing the fcdfsam SAM station (~1 TB cache plus permanent disk), CDFen dCache and STKen enstore storage (5 TB), fndaut (Sun) running the SAM name service, DB server, optimizer, logger, web server and monitoring, Oracle databases on fcdfora1 (prd) and fcdfora2 (dev, int), the on-site analysis station nglas09, and several remote analysis stations with local caches, linked through the CD switch, the CDF Offline and CDF portakamp 6509 switches and the border router over 100 Mb and 1 Gb connections.]
Status:
- Hardware and software infrastructure in place
- Translation of the CDF DFC ready to go into production
- Developed AC++ interfaces to SAM to retrieve and analyze files; automatic output to SAM not ready yet
- Enabled access to dCache
- Deploying to test sites to sort out configuration issues
- Test users are starting now to use SAM to do physics
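The status notes mention AC++ interfaces to SAM for retrieving and analysing files. The usual pattern is a loop that asks the local SAM station for the next file of a dataset, processes it, and releases it so the station can manage its cache. The Python sketch below is purely illustrative: the SamStation class and its methods are hypothetical stand-ins and do not reflect the real SAM or AC++ API.

```python
# Sketch of the analysis-side pattern: ask the SAM station for files from a
# dataset one at a time, process each, and release it.  SamStation and its
# methods are hypothetical stand-ins, not the real SAM or AC++ interface.
class SamStation:
    """Hypothetical minimal stand-in for a SAM station client."""
    def __init__(self, dataset):
        # In reality the station would resolve the dataset to cached/tape files.
        self._files = [f"/cache/{dataset}_{i}.root" for i in range(3)]

    def get_next_file(self):
        return self._files.pop(0) if self._files else None

    def release_file(self, path):
        pass   # the real station would update its cache bookkeeping here

def run_analysis(dataset):
    station = SamStation(dataset)
    while (path := station.get_next_file()) is not None:
        print(f"analysing {path}")       # the analysis module would open the file here
        station.release_file(path)

if __name__ == "__main__":
    run_analysis("cdf_test_dataset")
```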

Conclusion
- Other non-HEP experiments (LIGO, SDSS) not mentioned in this talk
- LHC experiments have short-term plans which are aggressive (Test Grids are still young and fault-prone; inter-experiment and inter-grid integration; distributed data analysis; distributed Monte Carlo data production) but realistic (uses existing software and tools for the most part; emphasis is on building prototypes and learning from them)
- FNAL experiments appear well integrated!
- Critical need to demonstrate the value of the grid!