ALICE-USA Grid-Deployment Plans (By the way, ALICE is an LHC Experiment, TOO!) Or (We Sometimes Feel Like an “AliEn” in our own Home…) Larry Pinsky—Computing Coordinator, ALICE-USA.

Presentation transcript:

ALICE-USA Grid-Deployment Plans (By the way, ALICE is an LHC Experiment, TOO!) Or (We Sometimes Feel Like an “AliEn” in our own Home…) Larry Pinsky—Computing Coordinator, ALICE-USA

ALICE-USA Institutions Already Official Members of ALICE (major computing sites marked with *):
1. Creighton University
2. Kent State University
3. Lawrence Berkeley National Laboratory *
4. Michigan State University
5. Oak Ridge National Laboratory
6. The Ohio State University
7. The Ohio Supercomputing Center *
8. Purdue University
9. University of California, Berkeley
10. University of California, Davis
11. University of California, Los Angeles
12. University of Houston *
13. University of Tennessee
14. University of Texas at Austin
15. Vanderbilt University
16. Wayne State University

ALICE Computing Needs (Table 2.6, as posted 25 Feb), broken down by T0, the sum of the T1s, the sum of the T2s, and the total: CPU (MSI2k) [peak], transient storage (PB), permanent storage (PB/year), bandwidth in (Gbps), and bandwidth out (Gbps).

ALICE-USA Target (by target year, as a percentage of the ALICE total):

ALICE-USA sum:
- CPU (MSI2k)
- Disk (PB)
- Permanent storage (PB/yr)
- Network (Gbps)

Each major US site (1/3 of the ALICE-USA sum):
- CPU
- Disk
- Permanent storage
- Network

Goal: one full external T1 with a full share of supporting T2 capabilities, net in the US [based on 6 external T1s]. Note: OSC is a member of ALICE and has made this commitment now…
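To make the share arithmetic behind this slide concrete, here is a minimal Python sketch of the stated rules (one full external T1 share out of 6 external T1s, each major US site supplying 1/3 of the ALICE-USA sum); the input total used below is a hypothetical placeholder, not a value from the slide, and the function and constant names are illustrative.

```python
# Sketch of the share arithmetic: ALICE-USA aims to provide the equivalent of
# one full external T1 (with its share of supporting T2 capability) out of
# 6 external T1s, split equally across the 3 major US sites.
# The input total is a hypothetical placeholder, not a number from the slide.
N_EXTERNAL_T1S = 6
N_MAJOR_US_SITES = 3

def us_shares(total_external_t1_requirement: float):
    """Return (ALICE-USA sum, per-major-US-site share) for one resource type."""
    alice_usa_sum = total_external_t1_requirement / N_EXTERNAL_T1S
    per_site = alice_usa_sum / N_MAJOR_US_SITES  # each major site supplies 1/3
    return alice_usa_sum, per_site

if __name__ == "__main__":
    # e.g. a hypothetical 12 MSI2k requirement summed over the external T1s
    usa_sum, per_site = us_shares(12.0)
    print(f"ALICE-USA sum: {usa_sum:.1f} MSI2k; each major US site: {per_site:.1f} MSI2k")
```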

ALICE-USA Commitments
- OSC is committed now to obtaining NSF funding to acquire this level of support.
- LBL (NERSC) & UH are DOE funded and committed to supplying these resources, contingent upon DOE’s approval of the ALICE-USA EMCAL project.
- All three institutions continue to support the Data Challenges…
- DOE is currently well into the decision process regarding budgeting the construction of the EMCAL by ALICE-USA for ALICE. Funding to support prototyping has been provided…

ALICE-USA Data Challenge Support
- Since 2002, ALICE-USA has provided significant support for the Data Challenges.
- Most recently (2004), ALICE-USA supplied ~14% (106 MSI2k-hours) of the total (755 MSI2k-hours) CPU, plus external storage capacity.
- For the upcoming Data Challenges, ALICE-USA intends to supply a similar fraction from existing commitments.

ALICE-USA Grid Middleware
- We will support ALICE’s needs with whatever middleware is consistent with them…
- …as well as with what is consistent with our local needs in the US…
- Our institutions are participating in OSG in the US, and some are members of PPDG.

Simplified view of the ALICE Grid with AliEn (diagram): the ALICE VO central services (Central Task Queue, File Catalogue, Configuration, Accounting, User authentication, Workload management) accept job submissions, while the AliEn site services (Computing Element, Storage Element with storage volume manager, Cluster Monitor, job monitoring, data transfer) run on top of the existing site components (local scheduler, disk and MSS), integrating each site into the ALICE VO.

ALICE VO interaction with various Grids (diagram): the user (Production Manager) submits jobs to the ALICE Task Queue and registers data in the ALICE File Catalogue; jobs are then dispatched through the corresponding UI/RB to LCG, ARC, native AliEn, and OSG sites, each of which runs an ALICE VO Box in front of its CE and worker nodes (WN).
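To illustrate the pull model sketched in these two diagrams, here is a minimal, hypothetical Python sketch of a central task queue from which per-site agents (standing in for the VO Boxes) pull matching jobs; it is not ALICE’s actual AliEn code, and all class, function, and site names are illustrative.

```python
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Job:
    job_id: int
    executable: str
    requirements: set = field(default_factory=set)  # e.g. {"linux-ia64"}

class CentralTaskQueue:
    """Stands in for the ALICE VO central Task Queue."""
    def __init__(self):
        self._queue = deque()
        self._next_id = 0

    def submit(self, executable, requirements=None):
        self._next_id += 1
        self._queue.append(Job(self._next_id, executable, set(requirements or [])))
        return self._next_id

    def pull(self, site_capabilities):
        """Hand out the first queued job whose requirements the pulling site satisfies."""
        for job in list(self._queue):
            if job.requirements <= site_capabilities:
                self._queue.remove(job)
                return job
        return None

class SiteAgent:
    """Stands in for a VO Box on an LCG/ARC/OSG/AliEn site."""
    def __init__(self, name, capabilities, task_queue):
        self.name = name
        self.capabilities = set(capabilities)
        self.task_queue = task_queue

    def poll_once(self):
        job = self.task_queue.pull(self.capabilities)
        if job is not None:
            # In reality this would be a submission to the local CE / batch system.
            print(f"{self.name}: running job {job.job_id} ({job.executable})")

if __name__ == "__main__":
    tq = CentralTaskQueue()
    tq.submit("aliroot-sim", requirements={"linux-ia64"})
    tq.submit("aliroot-reco")
    for site in (SiteAgent("OSG-site", {"linux-ia32"}, tq),
                 SiteAgent("LCG-site", {"linux-ia64"}, tq)):
        site.poll_once()
```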

Some Issues
- ALICE software will have to blend with many Grid infrastructures.
- ALICE will use resources that include many different platforms (e.g. AliEn, PROOF and AliRoot already run on a variety of platforms such as IA32, IA64, and G5).
- The detailed OS versions cannot be mandated on all resources that will need to be used.
- The ALICE File Catalogues, Task Queues and Production Manager will interface directly to the UI/RBs and local services.
- ALICE is evolving towards a “Cloud” model of distributed computing and away from a rigid “MONARC” model… T1s are distinguished from T2s by local mass storage (MS) capability, not by tasks…

Next Meeting at LBL: there will be a meeting at LBL next Friday, June 10, to discuss ALICE and OSG specifically…

A Joint Grid Project Between Physics Departments at Universities in Texas, Initiated by the High Energy (Particle) Physics Groups… To Harness Unused Local Computing Resources…

…In Support of HiPCAT
High Performance Computing Across Texas (HiPCAT) is a consortium of Texas institutions that use advanced computational technologies to enhance research, development, and educational activities. These advanced computational technologies include traditional high performance computing (HPC) systems and clusters, in addition to complementary advanced computing technologies including massive data storage systems and scientific visualization resources. The advent of computational grids -- based on high speed networks connecting computing resources and grid 'middleware' running on these resources to integrate them into 'grids' -- has enabled the coordinated, concurrent usage of multiple resources/systems and stimulated new methods of computing and collaboration. HiPCAT institutions support the development, deployment, and utilization of all of these advanced computing technologies to enable Texas researchers to address the most challenging computational problems.

…And TIGRE
The Texas Internet Grid for Research and Education (TIGRE) project goal is to build a computational grid that integrates computing systems, storage systems and databases, visualization laboratories and displays, and even instruments and sensors across Texas. TIGRE will enhance the computational capabilities for Texas researchers in academia, government, and industry by integrating massive computing power. Areas of research that will benefit in particular: biomedicine, energy and the environment, aerospace, materials science, agriculture, and information technology.

Setting Up THEGrid
- THEGrid has set up a Grid infrastructure using existing hardware in Physics Departments on campuses across Texas…
- Initially, a Grid3-like approach was taken using VDT (going to OSG soon…).
- Local unused resources were harnessed using Condor…

Using THEGrid
- Individual students and faculty at each participating campus can submit batch jobs!
- Jobs are submitted through a local portal on each campus…
- The middleware distributes the submitted jobs to one of the available locations throughout THEGrid…
- The output from each job is returned to the user… (see the sketch below)
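As an illustration of the batch-submission step described above, here is a minimal sketch of how a campus portal might hand a user’s job to its local Condor pool; the file names, executable, and helper function are assumptions made for the example, while the condor_submit command and the submit-file keywords shown are standard Condor usage.

```python
# Minimal, hypothetical sketch: a portal-side helper that writes a Condor submit
# description for a user's job and hands it to the local pool via condor_submit.
import subprocess
from pathlib import Path

def submit_batch_job(executable: str, arguments: str, workdir: Path) -> None:
    """Write a Condor submit description and submit it with condor_submit."""
    submit_text = f"""\
universe   = vanilla
executable = {executable}
arguments  = {arguments}
output     = job.out
error      = job.err
log        = job.log
should_transfer_files   = YES
when_to_transfer_output = ON_EXIT
queue
"""
    submit_file = workdir / "job.sub"
    submit_file.write_text(submit_text)
    # condor_submit reports the new cluster id; a portal would record it so the
    # user's output (job.out / job.err) can be returned when the job finishes.
    subprocess.run(["condor_submit", str(submit_file)], cwd=workdir, check=True)

if __name__ == "__main__":
    submit_batch_job("./analysis.sh", "run01.dat", Path("."))
```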

THEGrid: Texas Tech will run the OSG VOMS (Virtual Organization Membership Service) server.
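As a usage note, members of a VO registered on such a VOMS server would typically obtain a VOMS-extended proxy before submitting grid jobs; a minimal sketch follows, where the VO name "thegrid" is purely illustrative and voms-proxy-init is the standard VOMS client command.

```python
# Minimal sketch: request a VOMS-extended proxy before submitting grid jobs.
# The VO name "thegrid" is illustrative; substitute the VO registered on the server.
import subprocess

subprocess.run(["voms-proxy-init", "-voms", "thegrid"], check=True)
```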