PHENIX and the data grid
- >400 collaborators, active on 3 continents + Brazil
- 100s of TB of data per year
- Complex data with multiple disparate physics goals

Grid use that would help PHENIX
- Data management
  - Replica management to/from remote sites
  - Management of simulated data
  - Replica management within RCF
- Job management
  - Simulated event generation and analysis
  - Centralized analysis of summary data at remote sites

Replica management: export to remote sites
- Export of PHENIX data
  - Send data by network or "FedEx net" to Japan, France (IN2P3) and US collaborator sites
  - Network to Japan via APAN using bbftp (right?)
  - Network to France using bbftp (right?)
  - Network within the US using bbftp and globus-url-copy
  - Currently transfers are initiated & logged by hand
  - Much/most of the transfers use disks as buffers
- Goals
  - Automate data export and logging into the replica catalog (see sketch below)
  - Allow transfer of data from the most convenient site, rather than only from the central repository at RCF
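A minimal sketch of the automation goal on this slide, assuming a GridFTP endpoint at the remote site and a plain-text transfer log; the host name, paths, and log format are invented for illustration and are not the actual PHENIX setup:

    # Wrap globus-url-copy and record each transfer so a replica-catalog
    # loader could pick up the log later. All names below are hypothetical.
    import subprocess
    import time

    def export_file(local_path, remote_url, logfile="transfer_log.txt"):
        """Copy one file to a remote site and append a transfer record."""
        result = subprocess.run(["globus-url-copy",
                                 "file://" + local_path, remote_url])
        status = "OK" if result.returncode == 0 else "FAILED"
        with open(logfile, "a") as log:
            log.write("%s %s -> %s %s\n" % (
                time.strftime("%Y-%m-%d %H:%M:%S"),
                local_path, remote_url, status))
        return status == "OK"

    if __name__ == "__main__":
        # Hypothetical destination at a remote collaborator site.
        export_file("/data/phenix/dst/run02_file001.root",
                    "gsiftp://gridftp.example.org/phenix/run02_file001.root")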

Simulated data management
- Simulations are performed at CC-J (RIKEN/Wako), Vanderbilt, UNM, LLNL, USB
  - Will add other sites, including IN2P3, for Run 3
- Simulated hits data were imported to RCF
  - For detector response, reconstruction, analysis
  - Simulation projects managed by C. Maguire; actual simulation jobs run by an expert at each site
  - Data transfers initiated by scripts or by hand
- Goals
  - Automate importation and archiving of simulated data (see sketch below)
  - Ideally by merging with a centralized job submission utility
  - Export PHENIX software effectively to allow detector response and reconstruction at remote sites
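One small piece of the import automation could be a manifest built at the production site, so the RCF import step can verify what arrived; the directory layout and manifest format here are assumptions, not the PHENIX scheme:

    # Walk a directory of simulated hits files and write name, size and
    # MD5 checksum for each, to be checked after transfer to RCF.
    import hashlib
    import os

    def build_manifest(sim_dir, manifest_path="import_manifest.txt"):
        with open(manifest_path, "w") as out:
            for root, _dirs, files in os.walk(sim_dir):
                for name in sorted(files):
                    path = os.path.join(root, name)
                    md5 = hashlib.md5()
                    with open(path, "rb") as f:
                        for chunk in iter(lambda: f.read(1 << 20), b""):
                            md5.update(chunk)
                    out.write("%s %d %s\n" % (
                        path, os.path.getsize(path), md5.hexdigest()))

    if __name__ == "__main__":
        build_manifest("/simulations/run02/hits")  # hypothetical path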

Replica management within RCF
- VERY important short-term goal!
- PHENIX tools have been developed
  - Replica catalog, including DAQ/production/QA info
  - Lightweight POSTGRES version as well as Objy
  - Logical/physical filename translator (see sketch below)
- Goals
  - Use and optimize existing tools at RCF
  - Investigate merging with Globus middleware
    - Relation to GDMP?
    - Different from Magda: carries more file info (?)
  - Integrate into job management/submission
  - Can we collect statistics for optimization and scheduling?
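To show the kind of query a lightweight POSTGRES catalog and a logical/physical filename translator imply, here is a sketch of a lookup; the database name, table, and columns (replicas(lfn, site, pfn)) are assumptions for illustration, not the actual PHENIX schema:

    # Look up all physical replicas of a logical filename in a small
    # PostgreSQL catalog. Schema and connection details are hypothetical.
    import psycopg2

    def physical_names(lfn, dbname="replica_catalog", host="localhost"):
        conn = psycopg2.connect(dbname=dbname, host=host)
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT site, pfn FROM replicas WHERE lfn = %s",
                            (lfn,))
                return cur.fetchall()
        finally:
            conn.close()

    if __name__ == "__main__":
        for site, pfn in physical_names("run02/dst/file001.root"):
            print(site, pfn)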

Job management
- Currently use scripts and batch queues at each site
- Have two kinds of jobs we should manage better
  - Simulations
  - User analysis jobs

Requirements for simulation jobs
- Job specifications (see sketch below)
  - Conditions & particle types to simulate
  - Number of events
  - May need embedding into real events (multiplicity effects)
- I/O requirements
  - Input: database access for run number ranges and detector geometry
  - Output is the big requirement
    - Send files to RCF for further processing
    - Eventually can be reduced to DST volume for RCF import
- Job sequence requirements
  - Initially rather small; the only interaction is the random number seed
  - Eventually: hits generation -> response -> reconstruction
- Site selection criteria
  - CPU cycles! Also buffer disk space & access for an expert
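A minimal sketch of how the job specification above could be written down so a submission tool can act on it; the field names and the key=value output are assumptions, not an existing PHENIX interface:

    # Capture the specification fields listed on this slide in one object
    # and render them as key=value lines for a wrapper script.
    from dataclasses import dataclass

    @dataclass
    class SimJobSpec:
        conditions: str            # e.g. collision system and energy
        particle_types: list       # particles to simulate
        n_events: int
        random_seed: int           # initially the only inter-job interaction
        embed_into_real: bool = False   # multiplicity effects
        output_site: str = "RCF"        # where output files should be sent

        def as_card(self):
            return "\n".join("%s=%s" % (k, v) for k, v in vars(self).items())

    if __name__ == "__main__":
        spec = SimJobSpec(conditions="Au+Au 200 GeV",
                          particle_types=["pi0"],
                          n_events=10000, random_seed=12345)
        print(spec.as_card())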

Current user analysis approach

Requirements for analysis using grid