DØSAR, a Regional Grid within DØ
Jae Yu, Univ. of Texas, Arlington
THEGrid Workshop, July 8 – 9, 2004, Univ. of Texas at Arlington

The Problem
High Energy Physics
– Total expected data size is over 5 PB (a 5,000-inch stack of 100 GB hard drives) for CDF and DØ
– Detectors are complicated → need many people to construct them and make them work
– Collaborations are large and scattered all over the world
– Need to allow software development at remote institutions
– Need optimized resource management, job scheduling, and monitoring tools
– Need efficient and transparent data delivery and sharing
Use the opportunity of having a large data set to further grid computing technology
– Improve computational capability for education
– Improve quality of life

DØ and CDF at the Fermilab Tevatron
The world's highest-energy proton–antiproton collider
– E_cm = 1.96 TeV (= 6.3×10⁻⁷ J/p → 13 MJ)
– Equivalent to the kinetic energy of a 20-ton truck at a speed of 80 mi/hr
[Map: the Tevatron ring near Chicago, with the counter-rotating p and p̄ beams and the CDF and DØ detector locations.]
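As a rough check of that comparison, assuming a 20-metric-ton truck at 80 mi/hr (about 36 m/s), and noting that the ~13 MJ figure presumably refers to the full circulating beam rather than a single proton:

$$E_K = \tfrac{1}{2} m v^{2} \approx \tfrac{1}{2}\,(2\times 10^{4}\,\mathrm{kg})\,(36\ \mathrm{m/s})^{2} \approx 1.3\times 10^{7}\ \mathrm{J} \approx 13\ \mathrm{MJ}$$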

DØ Collaboration
650 collaborators, 78 institutions, 18 countries

Centralized Deployment Models
Started with the lab-centric SAM infrastructure in place, then transitioned to a hierarchically distributed model →

DØ Remote Analysis Model (DØRAM)
[Diagram: the Central Analysis Center (CAC) at Fermilab feeds Regional Analysis Centers (RACs), which serve Institutional Analysis Centers (IACs) and Desktop Analysis Stations (DASs); normal and occasional interaction/communication paths are indicated.]

DØ Southern Analysis Region (DØSAR)
– One of the regional grids within the DØGrid consortium, coordinating activities to maximize computing and analysis resources, complementing the European efforts
– Members: UTA, OU, LTU, LU, SPRACE, Tata, KSU, KU, Rice, UMiss, CSF, UAZ
– MC farm clusters: a mixture of dedicated and multi-purpose, rack-mounted and desktop machines, with tens to hundreds of CPUs each

DØRAM Implementation
UTA is the first US DØRAC; DØSAR formed around UTA.
[Map: DØSAR sites (UTA, OU/LU, LTU, KU, KSU, Rice, UAZ, Ole Miss, Mexico/Brazil) and the European RACs (GridKa at Karlsruhe, with Aachen, Bonn, Mainz, Munich, Wuppertal).]

UTA – RAC (DPCC)
– 100 P4 Xeon 2.6 GHz CPUs = 260 GHz, 64 TB of disk space
– 84 P4 Xeon 2.4 GHz CPUs = 202 GHz, 7.5 TB of disk space
– Total CPU: 462 GHz
– Total disk: 73 TB
– Total memory: 168 GB
– Network bandwidth: 68 Gb/sec
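The quoted CPU total is simply the sum of the two node groups:

$$100 \times 2.6\ \mathrm{GHz} + 84 \times 2.4\ \mathrm{GHz} \approx 260\ \mathrm{GHz} + 202\ \mathrm{GHz} = 462\ \mathrm{GHz}$$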

The Tools
Sequential Access via Metadata (SAM)
– Data replication and cataloging system
Batch systems
– FBSNG: Fermilab's own batch system
– Condor: three of the DØSAR farms consist of desktop machines running under Condor
– PBS: most of the dedicated DØSAR farms use this manager
Grid framework: JIM (Job and Information Management)
– Provides the framework for grid operation → job submission, matchmaking, and scheduling
– Built upon Condor-G and Globus
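To illustrate how a job might be handed to this kind of Condor-G/Globus stack, here is a minimal sketch, not DØSAR's actual scripts, that writes a Condor-G style submit description and passes it to condor_submit. The gatekeeper contact, executable, and file names are hypothetical placeholders.

```python
import subprocess
import tempfile

# Hypothetical job parameters; the gatekeeper contact and executable
# are placeholders, not actual DOSAR site values.
GATEKEEPER = "gatekeeper.example.edu/jobmanager-pbs"
EXECUTABLE = "run_mc_job.sh"

submit_description = f"""
universe        = globus
globusscheduler = {GATEKEEPER}
executable      = {EXECUTABLE}
arguments       = --events 1000
output          = mc_job.out
error           = mc_job.err
log             = mc_job.log
queue
"""

def submit_job() -> None:
    # Write the description to a temporary file and call condor_submit on it.
    with tempfile.NamedTemporaryFile("w", suffix=".sub", delete=False) as f:
        f.write(submit_description)
        submit_file = f.name
    subprocess.run(["condor_submit", submit_file], check=True)

if __name__ == "__main__":
    submit_job()
```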

Operation of a SAM Station
[Diagram: the Station & Cache Manager coordinates File Stager(s), Project Managers, and the File Storage Server; file storage clients and producers/consumers exchange files through cache disk and temp disk, with data flowing to and from the MSS or other stations; data-flow and control paths are shown separately.]
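The station's role can be pictured with a toy cache-manager loop: a consumer asks for a file; if it is already on cache disk it is served directly, otherwise a stager first pulls it from the MSS (or another station) into the cache. This is only a schematic sketch of the data flow on the slide, not SAM code, and all paths and names are illustrative.

```python
import shutil
from pathlib import Path

CACHE_DISK = Path("/cache")   # illustrative locations, not real SAM paths
MSS_MOUNT = Path("/mss")

def stage_from_mss(filename: str) -> Path:
    """Pretend 'file stager': copy a file from mass storage into the cache."""
    src = MSS_MOUNT / filename
    dst = CACHE_DISK / filename
    shutil.copy(src, dst)
    return dst

def deliver_to_consumer(filename: str) -> Path:
    """Toy station/cache manager: serve from cache, staging in on a miss."""
    cached = CACHE_DISK / filename
    if cached.exists():
        return cached                   # cache hit: hand back the local copy
    return stage_from_mss(filename)     # cache miss: pull from MSS first
```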

Tevatron Grid Framework (JIM)
[Figure: JIM in operation at UTA and TTU.]

The Tools (cont'd)
Local task management
– DØSAR Monte Carlo Farm (McFarm) management → cloned to other institutions
Various monitoring software
– Ganglia: resource monitoring
– McFarmGraph: MC job status monitoring
– McPerM: farm performance monitoring
DØSAR Grid: requests are submitted on a local machine, transferred to a submission site, and executed at an execution site (see the sketch below)
– DØGrid uses the mcrun_job request script
– More adaptable to a generic cluster
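A schematic of that submit-and-forward flow, with hypothetical site names and a toy load policy; this only illustrates the routing described above, not the actual DØSAR middleware:

```python
from dataclasses import dataclass

@dataclass
class Request:
    job_id: str
    events: int

def site_load(site: str) -> int:
    """Stand-in for a real load query (e.g. from a monitoring service)."""
    return hash(site) % 10

def run_at_execution_site(site: str, req: Request) -> str:
    """Execution site: where the MC job actually runs."""
    return f"job {req.job_id} ({req.events} events) dispatched to {site}"

def forward_to_submission_site(req: Request) -> str:
    """Submission site: pick an execution site for the request (toy policy)."""
    execution_sites = ["uta-rac", "ou-farm", "ltu-farm"]   # hypothetical names
    chosen = min(execution_sites, key=site_load)            # least-loaded site
    return run_at_execution_site(chosen, req)

def submit_from_client(req: Request) -> str:
    """Client site: a request made on a local machine is forwarded onward."""
    return forward_to_submission_site(req)

print(submit_from_client(Request("mc-0001", 250000)))
```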

Ganglia Grid Resource Monitoring Operating since Apr. 2003
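Ganglia's gmond daemon publishes its cluster state as XML over TCP (port 8649 by default), so a resource-monitoring page can be fed by a small poller like the sketch below; the host name is a placeholder and the parsing is deliberately minimal.

```python
import socket
import xml.etree.ElementTree as ET

GMOND_HOST = "ganglia.example.edu"   # placeholder host
GMOND_PORT = 8649                    # gmond's default XML port

def fetch_gmond_xml(host: str = GMOND_HOST, port: int = GMOND_PORT) -> str:
    """Read the full XML dump gmond sends to a connecting client."""
    chunks = []
    with socket.create_connection((host, port), timeout=10) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def report_load(xml_text: str) -> None:
    """Print the one-minute load metric for each reporting host."""
    root = ET.fromstring(xml_text)
    for host in root.iter("HOST"):
        for metric in host.iter("METRIC"):
            if metric.get("NAME") == "load_one":
                print(host.get("NAME"), metric.get("VAL"))

if __name__ == "__main__":
    report_load(fetch_gmond_xml())
```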

Job Status Monitoring: McFarmGraph Operating since Sept. 2003

Farm Performance Monitor: McPerM
Designed, implemented, and improved by UTA students
Operating since Sept. 2003

DØSAR MC Delivery Statistics (as of May 10, 2004)
From "D0 Grid/Remote Computing", April 2004, Joel Snow, Langston University
Institution / Inception / N_MC (TMB) ×10⁶:
– LTU: 6/… / …
– LU: 7/… / …
– OU: 4/… / …
– Tata, India: 6/… / …
– Sao Paulo, Brazil: 4/… / …
– UTA-HEP: 1/… / …
– UTA–RAC: 12/… / …
– DØSAR total (as of 5/10/04): 18.9

DØSAR Computing & Human Resources
Institution / CPU (GHz) [future] / Storage (TB) / People:
– Cinvestav: 13 / 1.1 / 1F+?
– Langston: 22 / 1.3 / 1F+1GA
– LTU: 25+[12] / 1.0 / 1F+1PD+2GA
– KU: 12 / ?? / 1F+1PD
– KSU: 40 / 1.2 / 1F+2GA
– OU (OSCER): … / … (tape) / 4F+3PD+2GA
– Sao Paulo: 60+[120] / 4.5 / 2F+Many
– Tata Institute: 52 / 1.6 / 1F+1Sys
– UTA: … / … / …F+1sys+1.5PD+3GA
– Total: 943 [1075] / … (tape) / 14.5F+2sys+6.5PD+10GA

How does the current Tevatron MC grid work?
[Diagram: a client site submits to the global grid's submission sites, which route work through regional grids to execution sites (desktop and dedicated clusters), with SAM handling the data.]

Actual DØ Data Re-processing at UTA

Network Bandwidth Needs

Summary and Plans
Significant progress has been made in implementing grid computing technologies for the DØ experiment
– The DØSAR Grid has been operating since April 2004
– A large body of documentation and expertise has been accumulated
Moving toward data re-processing and analysis
– A first partial re-processing pass of 180 million events has been completed
– Each step brings a different level of complexity
Improved infrastructure is necessary, especially network bandwidth
– LEARN will boost the stature of Texas in the HEP grid computing world
– Started working with AMPATH and the Oklahoma, Louisiana, and Brazilian consortia (tentatively named the BOLT Network) → a Texan consortium is needed
UTA's experience with the DØSAR Grid will be an important asset for the expeditious implementation of THEGrid