High Energy Physics at OSCER: A User Perspective
OU Supercomputing Symposium 2003
Joel Snow, Langston U.

Considerations
Why do we need OSCER?
What will we do with OSCER?
What we are now doing.
Difficulties encountered using OSCER.
Overcoming problems.
Summary.
See the poster presentation for more background and detail.

Computing Challenges of HEP
Huge data sets:
–The DØ experiment, running at the Fermilab Tevatron in Batavia, IL, USA, is producing a petabyte of raw data to be analyzed.
–The ATLAS experiment will run at CERN’s LHC in Geneva, Switzerland, and will produce a few petabytes of raw data per year to be analyzed.
–To understand the physics, large-scale Monte Carlo simulations must produce an equally large amount of data, which also has to be analyzed.

Collaboratory Nature of HEP
Huge collaborations provide the needed resources:
–DØ has 700 physicists from 18 countries.
–ATLAS has 2000 physicists from 34 countries.
Huge collaborations create challenges:
–Geographically dispersed across 6 continents.
–Collaborators need access to the data and to each other.

Why Do We Need OSCER?
The massive computational resources needed can only be met by the distributed resources of the collaborators.
OSCER is a fine and powerful local computational resource for the OU/LU DØ and ATLAS collaborators:
–Lots of CPU
–Good network connectivity
–Knowledgeable and helpful staff

What Will We Do With OSCER?
Incorporate OSCER into the emerging Grid computing framework of present and future HEP experiments.
Near-term plans are to use OSCER for Monte Carlo simulation production and analysis for DØ and ATLAS.
In the longer term, OSCER will be used for analysis of raw data.

How Will It Be Done?
For efficiency and economy, we will move, on a large scale, from a data-production-site-centric model of organization to a hierarchical, distributed model.
Accomplish this by:
–Developing and deploying Grid-enabled tools and systems.
–Organizing resources regionally.

Regional HEP Infrastructure
DØ Southern Analysis Region:
–Provide Grid-enabled software and computing resources to the DØ collaboration.
–Provide regional technical support and coordination.
ATLAS Tier 2 facility:
–A strong effort is in progress to site such a facility in this region.

Institutions Involved
Cinvestav, Mexico
Kansas State University
Langston University
Louisiana Tech University
Rice University
University of Arizona, Tucson, AZ
Universidade Estadual Paulista, Brazil
University of Kansas
University of Oklahoma
University of Texas at Arlington

DØ Remote Analysis Model (diagram)
A hierarchy of Desktop Analysis Stations (DAS), Institutional Analysis Centers (IAC), and Regional Analysis Centers (RAC) around the Central Analysis Center (CAC) at Fermilab, connected by normal and occasional interaction communication paths.

Grid Hierarchy for LHC Experiments

What We Are Now Doing
Produce simulation data for DØ using:
–Available Linux clusters at the institutions (not yet OSCER!).
–Collaborator-written code and standard HEP libraries and packages: DØ code releases, mc_runjob, McFarm.
Store data and metadata in Fermilab’s central data store over the network via SAM (Sequential data Access via Meta-data), a file-based data management and access layer between the storage management system and the data processing layers.
While this is a distributed computing model, it is not multi-tiered and does not use standard Grid tools.
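As a rough illustration of this pre-Grid workflow, the minimal Python sketch below models one production cycle: run an MC job locally, then hand each output file and its metadata to SAM for transport to Fermilab's central store. It is not DØ code; the run_mcfarm_job and sam_store_file helpers and the metadata fields are hypothetical stand-ins for the real mc_runjob/McFarm and SAM interfaces.

```python
# Minimal sketch of one DØ Monte Carlo production cycle.
# The helper names below are hypothetical; the real work is done by
# mc_runjob/McFarm and the SAM client at the production sites.
import subprocess
from pathlib import Path


def run_mcfarm_job(request_id: int, events: int, workdir: Path) -> list:
    """Hypothetical wrapper: run one MC job and return its output files."""
    workdir.mkdir(parents=True, exist_ok=True)
    # In reality McFarm drives the DØ code releases via mc_runjob; here we
    # only model the fact that a job turns a request into output files.
    subprocess.run(
        ["echo", f"simulate request {request_id} with {events} events"],
        check=True,
    )
    return sorted(workdir.glob("*.root"))


def sam_store_file(path: Path, metadata: dict) -> None:
    """Hypothetical stand-in for declaring metadata to SAM and shipping the
    file to Fermilab's central data store."""
    print(f"declare {path.name} to SAM with metadata {metadata}")
    print(f"transfer {path.name} to the central data store")


if __name__ == "__main__":
    outputs = run_mcfarm_job(request_id=12345, events=10000, workdir=Path("mcp_out"))
    for f in outputs:
        sam_store_file(f, {"data_tier": "simulated", "request_id": 12345})
```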

What We Are Now Doing Produce simulation data for ATLAS using: –Available Linux clusters at institutions (OSCER yes!) –Collaborator written code and standard HEP written libraries and packages. Software system not yet as complicated and massive as DØ’s. Job submission and data management done using standard Grid tools. –Globus, Condor-G Incomplete hierarchy, Tier 1 (BNL) and test sites. OUHEP assisted in the installation of Grid tools at OSCER to make this operational. OU Supercomputing Symposium 2003 Joel Snow, Langston U.
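For flavor, here is a minimal sketch of what a Condor-G submission to a Globus gatekeeper looks like, written as a small Python driver that writes a submit description file and calls condor_submit. The gatekeeper host, executable, and file names are placeholders rather than actual OSCER or ATLAS values, and the sketch assumes a working Condor-G/Globus installation with valid Grid credentials.

```python
# Sketch: submit an ATLAS simulation job through Condor-G to a Globus
# gatekeeper sitting in front of a remote batch system.
import subprocess
from pathlib import Path

# Placeholder submit description (host and file names are not real).
SUBMIT_DESCRIPTION = """\
universe        = globus
globusscheduler = gatekeeper.example.edu/jobmanager-pbs
executable      = run_atlas_sim.sh
output          = atlas_sim.out
error           = atlas_sim.err
log             = atlas_sim.log
queue
"""

if __name__ == "__main__":
    submit_file = Path("atlas_sim.sub")
    submit_file.write_text(SUBMIT_DESCRIPTION)
    # condor_submit hands the job to Condor-G, which forwards it via Globus
    # GRAM to the remote site's local batch system.
    subprocess.run(["condor_submit", str(submit_file)], check=True)
```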

OSCER Deployment Difficulties
DØ MC production is not yet deployed on OSCER:
–The high-level McFarm management and bookkeeping system was designed for a dedicated cluster environment: root access for installation, daemons running for operations and for external communications such as SAM, and reliance on specific inter-node communication protocols.
–This is not suited to OSCER’s environment and its purpose as a general-purpose cluster.

Overcoming Deployment Problems
Have SAM work with standard Grid tools:
–The SAMGrid software suite: Globus, the Job and Information Management (JIM) broker, SAM, and Condor/McFarm.
Have McFarm work on generic clusters:
–Remove root dependencies, encapsulate daemon usage, allow flexible inter-node communication.
–Improve the Grid interface.

Until Then…
Work is in progress to create a transitory McFarm environment within the confines of a PBS job on OSCER:
–Testing the inter-node communication schemes allowable on OSCER for use with McFarm.
Adapt McFarm in the PBS environment to transfer data to the SAM station at OUHEP for conveyance to Fermilab.
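The sketch below illustrates the transitory-McFarm idea under stated assumptions: a single PBS batch job sets up McFarm in scratch space on the nodes it is granted, runs production there, and ships the output toward the OUHEP SAM station. The resource requests, paths, the mcfarm_run.sh driver, and the transfer step are placeholders, not the actual OSCER or McFarm configuration.

```python
# Sketch: wrap a transitory McFarm environment inside a single PBS job.
import subprocess

# Placeholder PBS batch script; resource requests, paths, the mcfarm_run.sh
# driver, and the final transfer are illustrative only.
PBS_SCRIPT = """\
#PBS -N mcfarm_transitory
#PBS -l nodes=4:ppn=2
#PBS -l walltime=12:00:00
#PBS -j oe

# Set up a transitory McFarm environment in scratch space for this job only.
WORKDIR=/scratch/$USER/mcfarm_$PBS_JOBID
mkdir -p $WORKDIR && cd $WORKDIR

# $PBS_NODEFILE lists the nodes granted to the job; McFarm's inter-node
# communication must be confined to whatever OSCER allows among them.
cp $PBS_NODEFILE nodes.txt

# Run the production (placeholder for the real McFarm driver).
./mcfarm_run.sh --nodes nodes.txt --request 12345

# Hand the output to the SAM station at OUHEP for conveyance to Fermilab
# (placeholder host and path).
scp -r output/ samstation.example.edu:/sam/dropbox/
"""

if __name__ == "__main__":
    with open("mcfarm_transitory.pbs", "w") as f:
        f.write(PBS_SCRIPT)
    # qsub submits the batch script to OSCER's PBS scheduler.
    subprocess.run(["qsub", "mcfarm_transitory.pbs"], check=True)
```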

Summary
Large data sets and the need for massive computational resources make local resources like OSCER attractive to HEP.
Successful production systems must be adapted to work on large generic clusters through standard Grid tools.
OSCER staff and OU physicists are working to bring OSCER online for DØ Monte Carlo production.